
    ASHRAE’s new energy standard for data centers 

    By Bill Kosik, PE, CEM, LEED AP, BEMP; exp, Chicago

    ASHRAE Standard 90.4 is a flexible, performance-based energy standard that goes beyond current ASHRAE 90.1 methodology.

    Learning objectives:

    • Explain ASHRAE Standard 90.1.
    • Understand the fundamentals of ASHRAE Standard 90.4.
    • Explore how ASHRAE 90.4 will impact data center mechanical/electrical system design.

    The data center industry is fortunate to have many dedicated professionals volunteering their time to provide expertise and experience in the development of new guidelines, codes, and standards. ASHRAE, U.S. Green Building Council, and The Green Grid, among others, routinely call on these subject matter experts to participate in working committees with the purpose of advancing the technical underpinnings and long-term viability of the organizations’ missions. For the most part, the end goal of these working groups is to establish consistent, repeatable processes that will be applicable to a wide range of project sizes, types, and locations. For ASHRAE, this was certainly the case when it came time to address the future of the ASHRAE 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings vis-à-vis how it applies to data centers.

    ASHRAE Standard 90.1 and data centers

    ASHRAE 90.1 has become the de facto energy standard for U.S. states and cities as well as many countries around the world. Data centers are considered commercial buildings, so the use of ASHRAE 90.1 is compulsory to demonstrate minimum energy conformance in jurisdictions that require it. Specific to computer rooms, ASHRAE 90.1 has evolved over the past decade and a half, albeit in a nonlinear fashion. The 2001, 2004, and 2007 editions of ASHRAE 90.1 all have very similar language for computer rooms, except for humidity control, economizers, and how the baseline HVAC systems are to be developed. Not until the 2010 edition are there more in-depth requirements for computer rooms. For example, ASHRAE 90.1-2010 contains a new term, “sensible coefficient of performance” (SCOP), an energy benchmark used for computer and data processing room (CDPR) air conditioning units. SCOP is calculated by dividing the net sensible cooling capacity (in watts) by the input power (in watts). The definition of SCOP and the detail on how the units are to be tested come from the Air Conditioning, Heating, and Refrigeration Institute (AHRI) in conjunction with the American National Standards Institute (ANSI) and were published in AHRI/ANSI Standard 1360: Performance Rating of Computer and Data Processing Room Air Conditioners.

    The release of ASHRAE 90.1-2013 brought additional clarifications and requirements related to data centers, including information for sizing water economizers and a new alternative compliance path using power-usage effectiveness (PUE). As part of the PUE alternative compliance path, cooling, lighting, power distribution losses, and information technology (IT) equipment energy must be documented individually. But because the requirements related to IT equipment (ITE) in ASHRAE 90.1 were originally meant for server closets or computer rooms that consume only a fraction of the total building’s energy, there were still difficulties in demonstrating compliance. Yet there was no slowdown in technology growth; projects began to include full-sized data centers with an annual energy use greater than that of the building housing them. Even with all the revisions and additions to ASHRAE 90.1 relating to data centers, there were still instances where applying it for energy-use compliance proved difficult.

    Fortunately, as the data center community continued to evolve in terms of sophistication in designing and operating highly energy-efficient facilities, so did ASHRAE 90.1 with the release of the 2013 edition. But even before ASHRAE 90.1-2013 was released, the data center community was pushing for clearer criteria for energy-use compliance. It was crucial that these criteria would not stifle innovation, but at the same time provide logic and consistency on how to comply with ASHRAE 90.1. Many in the data center engineering community (including ASHRAE) knew something needed to change.

    ASHRAE Standard 90.4-2016

    Given the long history of ASHRAE 90.1 (dating back to 1976) and its demonstrated effectiveness in reducing energy use in buildings, several questions needed to be addressed before new criteria could be developed. What would be the best way to develop new language for data center facility energy use? Should it be an overlay to the existing standard? Should it be a stand-alone document? Should it be a stand-alone document and duplicate all the language in ASHRAE 90.1? How should the technical processes developed by The Green Grid and U.S. Green Building Council be folded into the standard? Would it be able to keep up with the fast-paced technology developments that are truly unique to data centers?

    Fast-forward a few years: in mid-2016, ASHRAE published ASHRAE 90.4-2016: Energy Standard for Data Centers. Coming in at just 68 pages, ASHRAE 90.4 doesn’t seem as detailed as other standards released by ASHRAE (ASHRAE 90.1 weighs in at just over 300 pages). But this is by design: instead of trying to weave data center-specific language into the existing standard, ASHRAE wisely chose to create a (mostly) stand-alone standard that applies only to data centers and contains references to ASHRAE 90.1. These references are mainly for building envelope, service-water heating, lighting, and other requirements. This approach avoids doubling up on future revisions, minimizes unintended redundancies, and keeps the focus of ASHRAE 90.4 exclusively on data center facilities. Also, updates to ASHRAE 90.1 will automatically carry into ASHRAE 90.4 for the referenced sections; likewise, updates to ASHRAE 90.4 will not affect the language in ASHRAE 90.1. Using ASHRAE 90.1 will not automatically require the use of ASHRAE 90.4. In fact, since many local jurisdictions operate on a 3-year cycle for updating their building codes, many are still using ASHRAE 90.1-2013 or an earlier edition. The normative reference in ASHRAE 90.4 is ASHRAE 90.1-2016; however, the final say on an administrative matter like this will always fall to the authority having jurisdiction (AHJ).

    Fundamentals of ASHRAE 90.4

    ASHRAE 90.4 gives the engineer a completely new method for determining compliance, introducing new terminology: the design and annualized mechanical load component (MLC) and electrical loss component (ELC). ASHRAE is careful to note that these values are not comparable to PUE and are to be used only in the context of ASHRAE 90.4. The standard includes compliance tables listing the maximum load components for each of the 19 ASHRAE climate zones. Assigning an energy efficiency target, either as a design or an annualized MLC, to a specific climate zone will certainly raise awareness of the inextricable link between climate and data center energy performance (see figures 1 and 2). Because strategies like elevated temperatures in the data center and different forms of economization are heavily dependent on the climate, an important goal is to increase the appreciation and understanding of these connections throughout the data center design community.

    Design mechanical-load component

    The MLC can be calculated in one of two ways to determine compliance. The first is a summation of the peak power of the mechanical components, in kilowatts, divided by the design load of the IT equipment, also in kilowatts. ASHRAE 90.4 has a table of climate zones with the respective design dry-bulb and wet-bulb temperatures to be used when determining the peak mechanical system load. The calculation procedure is shown below. Note that when comparing the calculated values of design MLC, the analysis must be done at both 100% and 50% ITE load; both values must be less than or equal to the values listed in Table 6.2.1 (design MLC) in ASHRAE 90.4.

    Design MLC=[cooling design power (kW)+pump design power (kW)+heat rejection design fan power (kW)+air handler unit design fan power (kW)]÷data center design ITE power (kW)
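    To make the arithmetic concrete, the design MLC check at both required ITE load points can be sketched as follows. All input values and the compliance limit below are hypothetical illustrations, not figures from Table 6.2.1.

```python
# Illustrative sketch of the ASHRAE 90.4 design MLC calculation.
# All numeric inputs are hypothetical examples, not data from the standard.

def design_mlc(cooling_kw, pump_kw, heat_rejection_fan_kw, ahu_fan_kw, ite_kw):
    """Design MLC = total design mechanical power / design ITE power (all in kW)."""
    return (cooling_kw + pump_kw + heat_rejection_fan_kw + ahu_fan_kw) / ite_kw

# The standard requires the check at both 100% and 50% ITE load.
mlc_100 = design_mlc(cooling_kw=220.0, pump_kw=30.0,
                     heat_rejection_fan_kw=25.0, ahu_fan_kw=45.0, ite_kw=1000.0)
mlc_50 = design_mlc(cooling_kw=120.0, pump_kw=18.0,
                    heat_rejection_fan_kw=15.0, ahu_fan_kw=27.0, ite_kw=500.0)

max_mlc = 0.35  # placeholder; the real limit comes from Table 6.2.1 for the climate zone
compliant = mlc_100 <= max_mlc and mlc_50 <= max_mlc
print(mlc_100, mlc_50, compliant)  # 0.32 0.36 False
```

    Note that in this hypothetical example the system passes at full load but fails at 50% load, which is exactly the situation the two-point check is designed to catch.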

    Annualized mechanical-load component

    The concepts underlying the annualized MLC path are similar to those of the design MLC, except that an hourly energy analysis is required.

    This energy analysis must be done using software specifically designed for calculating energy consumption in buildings and accepted by the rating authority. The software must be able to capture the dynamic characteristics of the data center, both inside and outside. The following are some of the requirements for the modeling software:

    • Test in accordance with ASHRAE Standard 140: Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs.
    • Able to evaluate energy-use status for 8,760 hours/year.
    • Account for hourly variations in IT load, which cascades down to electrical system efficiency, cooling system operation, and miscellaneous equipment power.
    • Include provisions for daily, weekly, monthly, and seasonal building-use schedules.
    • Use performance curves for cooling equipment, adjusting power use based on outdoor conditions as well as evaporator and condenser temperatures.
    • Calculate energy savings based on economization strategies for air- and water-based systems.
    • Produce hourly reports that compare the baseline HVAC system to a proposed system to determine compliance with the standard.
    • Calculate required HVAC equipment capacities and water- and airflow rates.

    Since ASHRAE 90.4 categorizes compliance metrics by climate zone, it is imperative that the techniques used in simulating the data center’s energy use are accurate for the specific location of the facility. As such, the simulation software must perform the analysis using climatic data including hourly atmospheric pressure, dry-bulb, wet-bulb, and dew point temperatures, relative humidity, and moisture content. This data is available from different sources, in the form of typical meteorological year (TMY2, TMY3) and EnergyPlus Weather (EPW) files that are used as an input to the main simulation program.

    This compulsory hourly energy-use simulation considers fluctuations in mechanical system energy consumption, particularly where the equipment is designed for some type of economizer mode, as well as energy reductions in vapor-compression equipment from reduced lift due to outdoor temperature and moisture levels. This approach seems to be the most representative way of determining the energy performance of the data center, and since it is based on already established means of determining building energy use (i.e., hourly energy-use simulation techniques), it also will be the most understandable. Again, when comparing the calculated values of annualized MLC, the analysis must be done at both 100% and 50% ITE load; both values must be less than or equal to the values listed in Table 6.2.1.2 (annualized MLC) in the ASHRAE standard. It also is important to note that both the design and annualized MLC values are tied to the ASHRAE climate zones. When energy use is calculated using simulation techniques, it becomes obvious that the energy used has a direct correlation to the climate zone, primarily due to the ability to extend economization strategies for longer periods of the year. If we compare calculated annualized MLC values for data centers with the MLC values in ASHRAE 90.4, the ASHRAE requirements are relatively flat when plotted across the climate zones. This means the calculated MLC values in this example have energy-use efficiencies in excess of the minimum required by the standard (see Figure 7).

    Annual MLC=[cooling energy (kWh)+pump energy (kWh)+heat rejection fan energy (kWh)+air handler unit fan energy (kWh)]÷annual ITE energy (kWh)
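    Under the annualized path, the same ratio is computed from 8,760 hourly values rather than peak powers. The sketch below uses synthetic, flat hourly profiles purely for illustration; in practice these profiles would come from an ASHRAE 140-tested simulation tool.

```python
# Illustrative sketch of the annualized MLC calculation. The hourly profiles
# here are synthetic stand-ins for simulation output, not real results.

HOURS = 8760

# Synthetic hourly power draws in kW (flat profiles for brevity; a real
# simulation would vary these with weather, economizer hours, and IT load).
cooling_kw = [110.0] * HOURS
pump_kw = [15.0] * HOURS
heat_rejection_fan_kw = [12.0] * HOURS
ahu_fan_kw = [23.0] * HOURS
ite_kw = [1000.0] * HOURS

# Summing kW over hourly intervals yields kWh for the year.
mech_kwh = sum(cooling_kw) + sum(pump_kw) + sum(heat_rejection_fan_kw) + sum(ahu_fan_kw)
ite_kwh = sum(ite_kw)

annual_mlc = mech_kwh / ite_kwh
print(round(annual_mlc, 3))  # 0.16
```

    The result would then be compared against the annualized MLC limit for the facility’s climate zone, again at both 100% and 50% ITE load.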

    Design electrical-loss component

    The ASHRAE 90.4 approach to calculating the ELC defines the electrical system efficiencies and losses. For the purposes of ASHRAE 90.4, the ELC consists of three segments of the electrical system architecture:

    1. Incoming electrical service segment
    2. Uninterruptible power supply (UPS) segment
    3. ITE distribution segment.

    The segment for electrical distribution for mechanical equipment is stipulated to have losses that do not exceed 2%, but is not included in the ELC calculations. All the values for equipment efficiency must be documented using the manufacturer’s data, which must be based on standardized testing using the design ITE load. The final submittal to the rating authority (the organization or agency that adopts or sanctions the results of the analysis) must consist of an electrical single-line diagram and plans showing areas served by electrical systems, all conditions and modes of operation used in determining the operating states of the electrical system, and the design ELC calculations demonstrating compliance. Tables 8.2.1.1 and 8.2.1.2 in ASHRAE 90.4 list the maximum ELC values for ITE loads less than 100 kW and greater than or equal to 100 kW, respectively. The tables show the maximum ELC for the three segments individually as well as the total.
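    As a rough illustration of how losses through the three segments accumulate, the sketch below expresses each segment’s loss as one minus its efficiency and combines the segments in series. The efficiency figures are hypothetical manufacturer-style numbers, and the actual maximum ELC values come from Tables 8.2.1.1 and 8.2.1.2.

```python
# Illustrative ELC sketch for the three electrical segments named in
# ASHRAE 90.4. Segment efficiencies below are hypothetical; real values
# must come from manufacturers' standardized test data at design ITE load.

def segment_loss(efficiency):
    """Express one segment's loss as a fraction of the power it carries."""
    return 1.0 - efficiency

incoming_service = segment_loss(0.995)  # incoming electrical service segment
ups = segment_loss(0.96)                # UPS segment
ite_distribution = segment_loss(0.985)  # ITE distribution segment

# The segments are in series, so the overall efficiency is the product
# of the segment efficiencies, and the total loss is one minus that product.
total_elc = 1.0 - (0.995 * 0.96 * 0.985)
print(round(total_elc, 4))  # roughly a 5.9% total loss
```

    Each segment’s loss, as well as the total, would then be checked against the applicable maximum in the standard’s tables.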

    The electrical distribution system’s efficiency impacts the data center’s overall energy efficiency in two ways: the lower the efficiency, the more incoming power is needed to serve the IT load. In addition, more air conditioning energy is required to cool the electrical energy dissipated as heat. ASHRAE 90.4, Section 6.2.1.2.1.1, is explicit on how this should be handled: “The system’s UPS and transformer cooling loads must also be included in [the MLC], evaluated at their corresponding part-load efficiencies.” The standard includes an approach on how to evaluate single-feed UPS systems (e.g., N, N+1, etc.) and active dual-feed UPS systems (2N, 2N+1, etc.). The single-feed systems must be evaluated at 100% and 50% ITE load. The dual active-feed systems must be evaluated at 50% and 25% ITE load, as these types of systems will not normally operate at a load greater than 50%.
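    A small numerical sketch shows this cascade: the heat dissipated by the UPS and transformers adds to the cooling load and therefore inflates the MLC numerator. All figures here, including the plant performance ratio, are hypothetical.

```python
# Illustrative sketch of how electrical losses cascade into the cooling
# load (per the ASHRAE 90.4 requirement that UPS and transformer heat be
# included in the MLC). All numbers are hypothetical.

ite_kw = 1000.0
ups_loss_kw = 40.0          # heat dissipated by the UPS at this part load
transformer_loss_kw = 10.0  # heat dissipated by distribution transformers

# Assume the cooling plant draws 0.25 kW of mechanical power per kW of
# heat removed (a hypothetical plant performance figure).
kw_cooling_per_kw_heat = 0.25

heat_load_kw = ite_kw + ups_loss_kw + transformer_loss_kw
mech_power_kw = heat_load_kw * kw_cooling_per_kw_heat

mlc = mech_power_kw / ite_kw
print(mlc)  # 0.2625, versus 0.25 if only the ITE heat were counted
```

    The 50-kW of electrical losses in this hypothetical case raises the MLC by about a point, which is why the standard requires evaluating the losses at the corresponding part-load efficiencies rather than ignoring them.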

    Addressing reliability of systems and equipment

    One of the distinctive design requirements of data centers is the high degree of reliability. One manifestation of this is the use of redundant mechanical equipment. Redundant equipment comes online when a failure occurs or when maintenance is required, without compromising the original level of redundancy. Different engineers use different approaches based on their clients’ needs. Some will design in extra cooling units, pumps, chillers, etc. and keep these pieces of equipment running all the time, cycling units on and off as necessary. Other designs might size equipment to handle more stringent design conditions, such as ASHRAE 0.4% climate data (dry-bulb temperatures corresponding to the 0.4% annual cumulative frequency of occurrence).

    And yet others will use variable-speed motors to vary water and airflow, delivering the required cooling based on a changing ITE load. Since these design approaches are quite different from one another, Table 6.2.1.2.1.2 in ASHRAE 90.4 provides methods for calculating MLC compliance under these scenarios.

    Performance-based approach

    ASHRAE 90.4 uses a performance-based approach rather than a prescriptive one to accommodate the rapid change in data center technology and to allow for innovation in developing energy-efficient cooling solutions. Some of the provisions seem especially to encourage innovative solutions, including:

    • Onsite renewables or recovered energy. The standard allows for a credit to the annual energy use if onsite renewable energy generation is used or waste heat is recovered for other uses. Data centers are ideal candidates for renewable energy generation, as the load can be constant through the course of the daytime and nighttime hours. Also, when water-cooled computers are used with high-discharge water temperatures, the water can be used for building heating, boiler-water preheating, snow melting, or other thermal uses.
    • Derivation of MLC values. The MLC values in the tables in ASHRAE 90.4 are considered generic to allow multiple systems to qualify for the path. The MLC values are based on systems and equipment currently available in the marketplace from multiple manufacturers. This is the benchmark for minimum compliance that must be met. But ideally, the project would go beyond the minimum and demonstrate even greater energy-reduction potential.
    • Design conditions. The annualized MLC values for air systems are based on a delta T (temperature rise of the supply air) of 20°F and a return-air temperature of 85°F. However, the proposed design is not bound to these values if the design temperatures agree with the performance characteristics of the coils, pumps, fan capacities, etc. This provision gives the engineer a lot of room to innovate and propose nontraditional designs, such as water cooling of the ITE.
    • Trade-off method. Sometimes mechanical and electrical systems have constraints that may disqualify them from meeting the MLC or ELC values on their own merit. The standard allows, for example, a less efficient mechanical system to be offset by a more efficient electrical system and vice versa. Another benefit of using this approach comes from the mechanical and electrical engineer having to collaborate by going through an iterative, synergistic design process.

    Publishing ASHRAE 90.4-2016 is a watershed moment: to date, there has not been a code-ready, technically robust approach to characterizing mechanical and electrical system designs for conformance to an energy standard. This is no small feat, considering that data center mechanical/electrical systems can take a wide variety of design approaches, especially as the industry continues to develop more efficient ITE requiring novel means of power and cooling. And since ASHRAE 90.4 is a separate document from ASHRAE 90.1, the process to augment and revise it as computer technology changes should be less difficult. While certainly not perfect, ASHRAE 90.4 is a major step along the path of ensuring energy efficiency in data centers.


    Bill Kosik is a senior mechanical engineer at exp in Chicago. Kosik is a member of the Consulting-Specifying Engineer editorial advisory board.

    View the original article and related content on Consulting Specifying Engineer


    10 aspects to consider for data center clients

    By Mark A. Kosin, Southland Industries

    There are 10 common aspects to consider in the analysis of mechanical, electrical, and plumbing systems.

    Of all the data center markets throughout North America, Northern Virginia (NoVa) has consistently been the most active due in large part to its history. In the early 1990s, the region played a crucial role in the development of the internet infrastructure, which naturally drew a high concentration of data center operators who could connect to many networks in one place.

    NoVa, and especially Loudoun County, Virginia, was made for data centers. With its abundant fiber, inexpensive and reliable power, rich water supply in an area that does not experience droughts, and attractive tax incentive programs, it’s ideal for many data center clients.

    There are more than 40 data centers located in Loudoun County, and the majority are in “Data Center Alley,” which boasts a high concentration of data centers and supports about half of the country’s internet traffic. With more than 4.5 million sq ft of data center space available and a projected 10 million sq ft by 2021, Ashburn, Virginia, data centers continue to lead the pack. As Ashburn becomes the site of some of the industry’s most progressive energy-saving initiatives and connectivity infrastructure developments, there’s no doubt that the region will continue to be a market to watch.

    Recently, an increase in competition has been driving technology and innovation throughout the NoVa data center colocation market. With such a competitive landscape, clients are looking at all aspects of their mechanical, electrical, and plumbing (MEP) designs to differentiate themselves from the competition. By looking holistically at clients’ priorities, the firm evaluates various factors during system comparisons, allowing each client to choose the right mechanical and electrical systems to achieve its overall goals and optimize success. There are 10 common aspects to consider in the analysis of MEP systems.

    1. First cost

    When businesses turn to a colocation provider, first cost becomes a primary motivation, and the fiscal benefits of such strategies are only increasing. A recent study explained that rising competition in the colocation sector is leading to price declines in leasing and creating an extremely client-friendly environment.

    2. Energy efficiency

    Because power consumption directly drives operating costs, energy efficiency is a big concern for many businesses. Choosing a data center that integrates the latest technologies and architecture can help minimize environmental impacts. Innovations like highly efficient cooling plants and leveraging medium voltage electrical distribution systems can help reduce the amount of energy needed to power the building, resulting in a lower Power Usage Effectiveness (PUE).
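    For readers unfamiliar with the metric, PUE is total facility energy divided by IT equipment energy, so it approaches 1.0 as cooling, distribution, and lighting overhead shrinks. The annual figures below are hypothetical.

```python
# PUE (power usage effectiveness): total facility energy divided by IT
# equipment energy. The annual energy figures below are hypothetical.

it_energy_kwh = 8_760_000          # e.g., a 1-MW IT load running all year
facility_energy_kwh = 11_400_000   # IT load plus cooling, losses, lighting

pue = facility_energy_kwh / it_energy_kwh
print(round(pue, 2))  # 1.3
```

    Lowering the overhead term in the numerator, for example through more efficient cooling plants or medium-voltage distribution, is what drives the PUE down.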

    3. Reliability

    Reliability is a must. If going offline for even a few minutes would have significant financial and business repercussions, then employing MEP solutions that have backup options available in case of a planned or unplanned outage is essential.

    4. Flexibility 

    Flexibility in scaling systems has been an attractive strategy, particularly for colocation providers. This means adapting to multiple clients’ phasing and making design provisions so that construction of a new phase can occur without downtime in active phases. Flexibility is a key component of meeting business objectives because it allows a client’s needs to be accommodated at any given time.

    5. Redundancy

    Providing continuous operations through all foreseeable circumstances, such as power outages and equipment failure, is necessary to ensure a data center’s reliability. Redundant systems that are concurrently maintainable provide peace of mind that the client’s infrastructure is protected.

    6. Maintainability

    Clients want systems that are easily maintainable to ensure their critical assets are running at full speed. System selections should be focused on operational excellence in order to protect customers’ critical power load and cooling resources.

    7. Speed to market

    Clients’ leases usually hinge on having timely inventory. Clients expect a fast-tracked, constructible design that is coordinated and installed in a timely manner. Through the integrated design-build model, long lead items can be pre-purchased in parallel with designs being completed and coordinated.

    8. Scalability

    Scalability and speed to market go hand in hand. It’s vital to understand that system infrastructure choices early in design can affect equipment lead times and installation durations for future phases. Also, in order to provide control and save operational costs during a period of accelerated MEP growth, systems need to be easily scalable to fast-track additional growth.

    9. Sustainability

    Customers benefit from solar power, reclaimed water-based cooling systems, waterless cooling technologies, and much more. Water is becoming a larger consideration with mechanical system selections. The enormous volume of water required to cool high-density server farms with mechanical systems is making water management a growing priority for data center operators. A 15-megawatt data center can use up to 360,000 gallons of water per day. Clients recognize that sustainability is not only good for the environment, but is also good for their bottom line.

    10. Design tolerances

    Since 2011, new temperature and humidity guidelines have prompted a rethinking of data center design. Service level agreements (SLAs) are being written with different limits, which has resulted in more and more innovation in MEP systems within mission critical facilities.


    Mark A. Kosin is vice president, business team leader for mid-Atlantic division at Southland Industries. This article originally appeared on Southland Industries blog. Southland Industries is a CFE Media content partner.

     

    View the original article and related content on Consulting Specifying Engineer
