Thursday, November 4, 2010

IES and LEED Energy Modeling

For a couple of years now IES has directed much of its development effort toward compliance with the Performance Rating Method of ASHRAE 90.1 (LEED energy modeling). What we saw during this time, however, was mostly a new look and more of those buttons. The most frustrating part was that IES had LEED written all over its product, able to demonstrate compliance with a large number of credits.

Wait a minute. The single largest point-getting credit in the LEED rating system, not to mention the largest consulting fee for any one credit (over 100K depending on the project), is EA Credit 1, Optimize Energy Performance, and the use of an ASHRAE-compliant energy model can get you the most points. Energy modeling, that's the business. And what is IES? It's energy modeling software. Wait, what?

Now, several years later, they've done it. Finally, thank you!

Why did we like IES in the first place? Because it's a visual, multi-purpose engineering and analysis tool that interoperates with BIM. So how, then, do you take a BIM model into IES for LEED energy modeling?

Everything has changed, and you'll have to wait until my next post to hear about it. ;)

Monday, September 20, 2010

Design Builder!

With the growing emphasis on energy efficiency and Integrated Design, we are seeing more and more tools emerging to aid in energy efficient design analysis. Design Builder isn't entirely new, but after last year's added gbxml functionality, Design Builder has become a very capable addition to the Integrated Design toolbox.

Much like IES and eQuest, Design Builder is a heavy-duty energy modeler that markets itself to both architects and engineers. Since it isn't the intent of this blog to compare or evaluate multiple applications side by side, I will only mention what I believe are the strengths of Design Builder, and where I think it can still improve.

Strengths:
  • Energy Plus: Design Builder is built on top of the Energy Plus simulation engine, arguably the most powerful and innovative energy modeler in the industry.
  • Ease of Use: The hierarchical structure between project/building/zone makes it easy to edit variables and assign data (such as material properties, space properties, etc.).
  • Templates: While I wouldn't say that the template database is entirely comprehensive, it is very good. Particularly useful are the "early design" defaults.
  • Daylighting and Energy: Placed sensors can calculate daylight levels and adjust the electric lighting accordingly. This functionality is somewhat of a breakthrough when compared to earlier methods using approximated daylight factors, or requiring daylight autonomy calculations to be manually fed between a daylighting analysis tool and the energy modeler.
  • Airflow and Energy: Advanced functionality in Energy Plus can calculate bulk airflow between zones and through building openings in detail, rather than with an approximated or scheduled method. At an early stage in design, scheduled air changes per hour (ACH) can be good enough, but for detailed design verification you want more sophisticated analysis capabilities.
  • CFD: A separate CFD module utilizes the same geometry as the energy model. I've not tested this yet so I will reserve comments until later.
  • Interoperability: Design Builder imports the gbXML file format! At times it takes a mess of imperfect zones and simplifies them very rationally. Other times, when you import a fairly clean model, a zone can simply go missing. As with many other analysis tools, the import process still needs development to be completely reliable.
  • Shading: Recent updates now allow Design Builder to import gbXML shading objects, such as overhangs, surrounding buildings, etc.
Finally, Design Builder is particularly useful when comparing simple building forms in a way that acknowledges the specific impacts of building form on relevant green design strategies, or Energy Conservation Measures (ECMs). I'm attaching a link to a research project done by Ryan Meyer, a graduate student in the Civil and Architectural Engineering Department at the University of Wyoming.

Form Energy Analysis

Keep up the good work!

Tuesday, December 22, 2009

Bentley Hevacomp and Revit Interoperability

In my previous post I mentioned interoperability issues with the gbXML import into Bentley Hevacomp. In this post I plan to show examples, specifically of taking a model from Revit into Hevacomp.

The example shown below comes from a university studio project where students had to take existing buildings and re-design them to perform better. Students working with non-residential buildings had to evaluate their schemes against ASHRAE 90.1 (to be honest, not all of the models came through cleanly) before modifying the designs for better performance.

The original office building modeled in Revit was fairly basic, which helped with a clean export from Revit into several programs, including Ecotect, IES, Green Building Studio, and Design Builder.

Gbxml to Ecotect

Gbxml to Hevacomp

The export from Revit to Hevacomp had some obvious problems, but because Hevacomp is ahead of the field in BIM and LEED (see previous post) I really wanted to make this connection work. After failing with the gbXML path I tried an IFC model exchange between Revit and Bentley Architecture, which came through cleanly but only got me halfway there.

Having translated the model from Revit to Bentley Architecture using IFC, I didn't have an energy model, but I had all of the building components, which could be used to export a gbXML from Bentley Architecture into Hevacomp. I found that this exchange worked out much better than the direct Revit-to-Hevacomp path, and now that I had the model in Hevacomp I could quickly evaluate the design against ASHRAE 90.1 standards and determine how many LEED energy points the design could earn.

Using the IFC exchange I was also able to cleanly take the Revit model into Archicad, where it could be used with Graphisoft EcoDesigner, but that really belongs in a separate post when I can get around to it.


With BIM models in hand, which we used to run energy, daylighting, and solar analysis (not shown), the students then modified their designs for better performance. For this particular project the strategies used included daylighting, stack ventilation, a sun space (removed from the energy model here), and solar electricity generation.

The energy model from the modified design didn't have any problems between Revit and other analysis programs (not shown) but absolutely failed when I attempted to import it into Hevacomp. In fact, the import into Hevacomp brought in the building roofs and nothing else.

As with the first model, I tried an IFC exchange from Revit to Bentley Architecture, where the model came in fairly cleanly.

However, when I then tried to export a gbXML of the new model from Bentley Architecture into Hevacomp, the top part of the model never came through.

This roofless model is much better than what we got from the direct Revit-to-Hevacomp exchange; however, it's still not acceptable for analysis. I attempted to troubleshoot what went wrong this second time but ultimately gave up without an answer. I was hoping for success but came away with a sad ending after a promising start to the story. Hopefully this description sheds some light on how important interoperability is and why I am critical of software that doesn't communicate well outside of its own family of tools.

During the next semester of class we will continue exploring multiple paths from BIM to analysis software, and will hopefully be able to settle on a consistent set of tools which work well together.

Monday, November 23, 2009

LEED Energy Model and Design Part 2: Bentley Reaches a Milestone

Bentley Hevacomp deserves a lot of credit for becoming the first program to hit the big three: importing the XML format, visualizing the model, and being able to easily run an ASHRAE 90.1 energy simulation.

What exactly do each of these mean?

Import XML: Buildings created by architects can quickly be imported into energy modeling programs. Traditionally, ¾ of the energy modeling budget is chewed up by re-creating the model in a new program, making it impossible within our current project timelines and billing structures for energy analysis to be used iteratively or comparatively. Both IFC and XML formats can be used to exchange rooms/zones for energy analysis purposes, and while many experts agree that the IFC format is richer and more versatile, the XML language has become the dominant export format used by the major BIM authoring tools. Importing XML is really the first step.

Visualize the Model: Several energy modeling programs have had the capability to import XML files for several years now, but this functionality is not being used. In particular, MEP programs such as HAP and Trane Trace do a fairly good job of importing XML files, but because their GUI (graphical user interface) doesn't allow users to visualize the model, the quality of imported files has not really been trusted. Being able to visualize the model is extremely important, but we really need visualization and ease of editing, the latter of which isn't yet found in Hevacomp or most other programs.

ASHRAE 90.1: Many future projects pursuing high performance standards will be doing so as a direct response to the shift in priorities toward energy found in LEED v3.0. One might think that there are obvious reasons other than LEED to pursue energy-efficient design, particularly now that it has become such an important political topic. The sad reality is that energy has been treated as little more than a buzzword in the building industry, with a majority of even LEED projects performing disproportionately poorly on energy. “Next time we’ll do it right,” were the famous last words of the pre-recession building boomers.

Clap clap clap! Bentley Hevacomp really deserves half of a medal for being the first to achieve all three of these benchmarks, with the other half delivered once they work through the hiccups and streamline their integrated design/analysis process.


So what works and what doesn’t?


Doesn’t work: First of all, when you import into Hevacomp from non-Bentley BIM authoring tools, the success of the import is not guaranteed (part of the larger BIM interoperability headache). Second, the way that Bentley automatically generates an energy model is quick and painless, but the process produces a model which is potentially too complex, and, correct me if I’m wrong, there isn’t much flexibility here.

Does work: Once you have a robust energy model, Hevacomp provides an incredible number of simulation and analysis types, focused on meeting the requirements of building codes. While originally based in England and designed for British and European markets, Hevacomp has changed considerably since it was acquired by Bentley in 2008. The most noticeable improvements to Hevacomp are the integration of Energy Plus, ASHRAE 90.1, and BIM interoperability through XML.

Where to Go?

Well, I’m not sure that the success of Hevacomp will gain much market share for Bentley BIM, though it would help. For Hevacomp to successfully charge ahead of its competitors it needs to focus on the development and/or adoption of international standards for XML and IFC file exchange requirements. Hevacomp is targeting a wider audience than Bentley users, but until information exchange is standardized, Hevacomp users will have difficulties importing from other BIM applications.

Monday, August 3, 2009

Ecotect within Revit

Isn't this useful: Ecotect solar analysis tools built into Revit. An early release of the Solar Radiation Technology plug-in for Revit will allow users to give feedback during the development of this feature. I've added my personal thoughts at the end of this post; the rest of you should head over to the Autodesk Labs site and start testing out the plug-in for yourselves.

Unwanted solar gain is a major contributor to unnecessary building energy consumption, particularly for office buildings in temperate or hot climates. When people talk about siting and building orientation, they tend to think about a building design independent of its context, where a scheme is plunked down onto the site and rotated until it magically finds the right orientation.

By altering the orientation, the only thing that you're doing is changing the amount of solar gain entering your spaces at different times of day and in different seasons. If you instead focus specifically on solar gain at a schematic level you can derive optimal building forms and have a better idea of where to place the glass and the solid elements of your design - a strategy much more effective than the drop and rotate method. Now with the Solar Radiation Technology plug-in you can instantly do this analysis on building masses in Revit Architecture and in Revit MEP.

By default the roofs and floors are included in the solar insolation analysis, but it's fairly easy to select these faces and either exclude them or include them as shading elements. By ignoring the roof elements the color scale will adjust to give a better visual understanding of which vertical surfaces are more suited for glazing.
To test interoperability I imported an object from Rhino and found that the analysis worked just as well on imported masses. It was actually much easier to import it from Rhino into Revit for the insolation analysis than it would be to take it directly from Rhino into Ecotect.

I do have one concern about this process: are the calculations made on the squares or on the corner points? With triangles it is easy to know the direction normal to the surface, but on out-of-plane squares it is impossible without further triangulation. Perhaps the values are determined for the corner points and averaged across the square, in which case contour lines would display more accurate results. In any case it would be interesting to learn the calculation methods being used.
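To make that concern concrete, here's a minimal sketch (plain Python, with an illustrative set of corner points I made up rather than anything taken from the plug-in) showing why an out-of-plane square has no single normal: the two triangles it splits into point in different directions, so any per-square value has to come from some averaging choice.

    import numpy as np

    def triangle_normal(a, b, c):
        # Unit normal of triangle a-b-c via the cross product (right-hand rule).
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    # Corners of a slightly warped "square": the third corner is lifted 0.2 units
    # out of the plane of the other three (illustrative values only).
    p = [np.array(v, dtype=float) for v in
         [(0, 0, 0), (1, 0, 0), (1, 1, 0.2), (0, 1, 0)]]

    n1 = triangle_normal(p[0], p[1], p[2])   # first half of the quad
    n2 = triangle_normal(p[0], p[2], p[3])   # second half of the quad

    angle = np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
    print("normal 1:", np.round(n1, 3))
    print("normal 2:", np.round(n2, 3))
    print("angle between the two normals: %.1f degrees" % angle)

The two normals differ by roughly 16 degrees even for this small warp, which is exactly why values interpolated from the corner points, or from a further triangulation, would be more trustworthy than a single value per square.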


My critique/wish list for the Solar Radiation Analysis plug-in:

  • I'd like to have the ability to extract the values for both the solar radiation and the area of the surface collecting the radiation. This would allow us to better compare options numerically, and also to determine the amount of energy that could be collected using PV systems (a rough sketch of that arithmetic follows this list).
  • I'd like the ability to include object surfaces (in addition to mass surfaces) as either shading or analysis surfaces. This functionality would allow me to optimize shading devices using the outer face of a window as the analysis surface. By default, all objects other than masses should stay out of this process so that calculation times stay small.
  • The ability to distinguish between direct and diffuse radiation. While they both contribute to solar gain, I tend to want diffuse radiation much more than direct radiation, simply because diffuse light is necessary for good daylighting.
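As a sketch of what that first wish would enable, here is the basic arithmetic (plain Python; every number below is an illustrative assumption of mine, not output from the plug-in):

    # Rough annual PV yield from an insolation result and the collecting area.
    annual_insolation_kwh_m2 = 1400.0   # assumed cumulative insolation on the surface
    surface_area_m2 = 250.0             # assumed area of the analyzed surface
    module_efficiency = 0.15            # assumed PV module efficiency
    performance_ratio = 0.75            # assumed system losses (inverter, wiring, heat)

    annual_yield_kwh = (annual_insolation_kwh_m2 * surface_area_m2
                        * module_efficiency * performance_ratio)
    print("Estimated PV yield: %.0f kWh/year" % annual_yield_kwh)

With both the radiation value and the surface area exposed by the plug-in, this kind of comparison between massing options would take seconds.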
Keep up the good work!

Monday, July 20, 2009

LEED Energy Model & Design: Part One, The Freeform Method

With the April launch of LEED v3.0 the US Green Building Council has made a huge shift in its priorities concerning energy consumption. EA Credit 1, Optimize Energy Performance, has always been the biggest potential point getter with 10 points available, yet it is often underutilized.
Recognizing that energy credits were being neglected in favor of other low-hanging fruit, the USGBC has raised the stakes in version 3.0 so that EA Credit 1 will now allow 19 possible points for NC and Schools, and up to 21 points for Core and Shell. In a similar move, the second largest point getter is now EA Credit 2, Onsite Renewables, with up to 7 points available. This priority shift toward energy is in line with many other countries, including those of the EU, which focus almost solely on energy issues because many of the other areas LEED addresses are already better accounted for in their local building codes (daylighting, site selection, public transportation, and many more).

The purpose of increasing the emphasis on energy is to ensure that energy issues become design issues. The current mainstream approach is to hold off on a LEED energy analysis until the beginning of Design Development, which keeps the energy analysis budget down. At this stage, if a scheme doesn't meet the performance prerequisites, the client and the mechanical engineer/energy modeler begin a process of small tweaks to glass types, lighting, HVAC systems, etc., to try to achieve the needed improvements. When you wait until this stage to evaluate options you're limited in your ability to make improvements, and you are really just buying a few extra percentage points through better materials or more efficient mechanical systems. So where is the architect in this process? Because of the blatant ineffectiveness of this approach everyone knew that something would have to change, and thankfully it will with LEED v3.0.

Finally we have an incentive to use energy analysis to shape our designs and influence our strategies, but how exactly is this done? How can we evaluate very simple schemes and extract information that shapes the development of our design? A current approach used in decision-making (rather than validation) is to run a simulation of a current scheme, modify specific properties, rerun the analysis, and see whether the overall energy decreases and by how much. This is somewhat effective but unnecessary, as it can mostly be done without modeling, and it isn't really used to inform the big moves at the beginning. If we set ambitious goals such as zero energy use or carbon emissions in line with the 2030 Challenge, we then have to rely a lot more on modeling; however, both of these are typically very difficult to achieve within mainstream construction and so they're rarely used as goals. The LEED targets are much more commercially attainable.

The ASHRAE approach used by LEED is a fairly good ranking method that sets a base value from which we can compare and evaluate our proposed design. The baseline is established by taking our scheme and simplifying it to a base building with strip windows (usually), ASHRAE-specified systems, space types, and construction types, then simulating energy use without shading, rotating the building in 90-degree increments, and taking the average of the four simulations. After a baseline is established you model the building as it is designed and compare the percent improvement over the baseline. A 10% improvement is the LEED prerequisite, and the points increase from there up to 19 points for a 48% improvement. This method is successful in certain climates, but let's say you have an office tower in a desert: using a baseline value of 40% exposed glazing is simply a bad design to begin with. In temperate regions, though, it is a decent method of evaluation that should be exploited for easy LEED points through informed early design decisions. In LEED 3.0 it's not only easier to meet the prerequisite, but once you go above a 14% improvement you can rack up points much faster, which should be an incentive for those developers who typically buy points to start buying better designs.
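For the numerically inclined, the comparison itself is trivial once the simulations are run. Here is a minimal sketch (plain Python, with made-up annual energy results rather than figures from any real project) of the baseline averaging and percent-improvement check described above:

    # Illustrative annual results for the baseline building rotated 0/90/180/270 degrees.
    baseline_runs = [1020.0, 980.0, 1005.0, 995.0]
    proposed = 820.0                      # proposed design as modeled

    baseline = sum(baseline_runs) / len(baseline_runs)
    improvement = (baseline - proposed) / baseline * 100.0

    print("Baseline (average of 4 rotations): %.0f" % baseline)
    print("Improvement over baseline: %.1f%%" % improvement)
    # Per the text above: 10% is the prerequisite, and points scale up to 19 at 48%.
    print("Meets prerequisite" if improvement >= 10.0 else "Below prerequisite")

The hard part, of course, is not this arithmetic but producing two trustworthy simulations without rebuilding the model from scratch.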

But again, how can we use this process to shape better buildings? This is where I'd like to introduce the Freeform Method, developed to reveal the current gaps in our software capabilities.

Working in a process where clients need to know how many points they can achieve with a given scheme, I've taken the analysis back one step to evaluate how many points are possible given their site, their program requirements, and a conceptual building form. The building form, or massing, is always the first design decision made and the last decision that the architect is willing to change. In masterplanning, the building footprints and heights are often set by planners before an architect even has a chance at the design, and after a client has approved a scheme, or even an image of a scheme, it's hard to go back and change the building form that was agreed upon.

By focusing on the form we can quickly evaluate the potential of a scheme by developing an ASHRAE baseline, simulating energy performance, and comparing a number of energy conservation measures. This provides the design team with the knowledge that, given their building form, the greatest potential for energy savings can be achieved by optimizing one or more strategies such as daylighting, solar performance, thermal envelope, ventilation, etc. The rules are then set and the architect can design the building with a clear set of priorities and guidelines for energy performance. Other tools are then used to optimize specific design strategies (see earlier post). Not shown in the process map are the loops for alternate building forms, but you can imagine that, starting from the same program requirements, multiple forms can be compared equally. You could also include varying levels of each strategy; for example, basic passive solar strategies are free but less effective than active strategies such as sun-tracking shades, electrochromic glazing, etc., where the solar performance is truly optimized but at an additional cost.

For this analysis process to be successful, the simulation criteria all need to be automated, the results must be trusted by engineers, and it must be done using simple forms generated or imported through a BIM exchange via .IFC or .XML. If any time is spent on remodeling, reassigning data, or reconfiguring the analysis models, the whole process is liable to fail due to resourcing and time constraints. Efforts in software development are crucial for the mainstream adoption of any improved design methods; when architects encounter new information and responsibilities that they don't fully understand, they often divert the responsibility elsewhere and continue their traditional way of working. But if the assessment of LEED energy points and potential were easily understandable at the beginning of design, better decisions could be made with confidence.

Currently there are no tools where such a method is automated, or even where BIM data can easily be reused for LEED energy analysis. However there are many new developments in the works and by the end of 2009 we should have at least three tools capable of BIM to LEED energy analysis. My next post on this topic will be an overview of the different tools and a timeline of when you can expect them to easily handle LEED energy modeling requirements. I'll also talk about which optimization features each program has to aid in the Freeform Method. Any comments on this method are appreciated, and don't just tell me, tell software developers what we need from their tools.

Wednesday, May 6, 2009

GBXML 2010

To save time and energy I'm attaching a link to the Bimology post on the same topic. Without re-hashing any details I plan to dwell on only a couple of things: shading devices and curtain walls.
When exporting to gbXML you now have five settings for how Revit will translate the model. You can export as either "simple" or "complex," which determines whether or not glazing will be broken up by mullions. If you choose "complex," Revit translates the window openings very accurately while still rationalizing curves. If you choose "simple," the panes all cluster together into one large window with an equivalent area. Both settings give you the choice to export with or without shading elements (4), and with "complex" you can also choose to export the mullions as shading elements (5).
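If you want to sanity-check what a given export setting actually produced, remember that gbXML is just XML and is easy to inspect. Here's a minimal sketch using Python's standard library (it assumes the usual gbXML namespace and the Surface/Opening element and attribute names, and the file names are hypothetical, so adjust everything to match your own export):

    import xml.etree.ElementTree as ET
    from collections import Counter

    NS = {"gb": "http://www.gbxml.org/schema"}   # assumed gbXML namespace

    def summarize(path):
        root = ET.parse(path).getroot()
        surfaces = root.findall(".//gb:Surface", NS)
        openings = root.findall(".//gb:Opening", NS)
        print(path)
        print("  surfaces by type:", Counter(s.get("surfaceType") for s in surfaces))
        print("  openings by type:", Counter(o.get("openingType") for o in openings))

    # Compare the "simple" and "complex" Revit exports side by side.
    summarize("troll_house_simple.xml")
    summarize("troll_house_complex.xml")

Running this on a "simple" and a "complex" export of the same model makes the difference in window handling, and the presence or absence of shading surfaces, immediately obvious.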

Troll House shown in Revit

Detail level:"simple with shading surfaces."

Detail level:"complex with shading surfaces and mullions."

When importing the gbXML into Ecotect you can clearly see the differences in the curtain walls between "simple" and "complex." The fifth option, which now allows us to export mullions as shading elements, gives us some new possibilities when designing exterior shading devices. But if you'll notice, one of the overhangs was ignored.

If we are using Ecotect we can still import this as geometry and incorporate it either in our Ecotect analysis or in an export to another program like Green Building Studio.

If you are using Ecotect there is one glitch, however: an entire curtain wall is read as a window with windows inside of it. When you look at what's supposed to happen in the Revit export, you see that the perimeter, the mullion gaps between panes, and any solid curtain panels should translate as a single, solid wall.

But in Ecotect it all comes in as glass, so you need to take a minute to reassign the parent wall to a material other than glass.

Now that was all fairly simple, but let's look at a more complicated building. On the right side below we have a glass curtain wall with horizontal shading elements made out of "mullion" objects. On the left we have a solid wall with the circular curtain wall elements cut out, or "embedded."

Revit actually does a fairly good job of rationalizing these shapes for the energy model.

When we view by "zone color" we can see that we have a problem with one floor. Ideally we would solve this issue by diagnosing our Revit model so we would never see it again. However, if our diagnosis did not fix the problem, or if we were working in a non-integrated team structure where we didn't have access to the source file, we could always clean this up quickly in Ecotect.

For buildings with repetitive floor plates you can always take one clean floor and copy it up. Or if you are doing design energy modeling and are only worried about relative values you can delete all but a typical level, make the floor and ceiling adiabatic, then study different form and facade options in your energy model.

Wednesday, April 29, 2009

Surface Subdivision 2010

The 2010 release of Revit Architecture has made huge improvements in form generation and gbXML exporting, and, what makes me happiest, has added new tools for surface subdivision.

First let me apologize that these buildings are less than spectacular; I had to rush to get this post out before my nemesis, Tzigo (who's actually a really nice guy, to his disadvantage), could do it and take the credit. But as you can see, these very strange shapes have surfaces that can be divided up and rationalized into various patterns which are easy to manipulate.

This level of control is extremely useful when you develop curtain systems in detail, but much earlier, in concept design, it also simplifies solar radiation analysis. When calculating incident solar radiation in Ecotect, the values are not consistent across non-planar surfaces. In urban settings the shadows from surrounding buildings will also cause inconsistent insolation values across the building surfaces. For these reasons it's important to be able to subdivide large surfaces before running analysis.

In previous versions of Revit the triangulation of complex shapes made Ecotect analysis very time-consuming and sometimes nearly impossible. When they weren't too small, the triangles would often be long and narrow, which can misrepresent results and display deceptive data. Now in Revit 2010 you can take any curving, warping, bent-out-of-shape surface and subdivide it exactly as you want for easy and efficient analysis. You can analyze a single building or easily run it for an entire neighborhood. For Revit users who've never done this before (I can't tell you how much easier it is), now is definitely the best time.

Thanks for that.

Sunday, April 26, 2009

Shading Devices 2010

Warning!!
I first reviewed Ecotect Analysis 2010 before I had begun using Revit Architecture 2010. My updated post on gbxml 2010 goes into much more detail about the translation of Revit shading devices. Be sure to read both posts.

The development of BIM energy modeling has often been plagued with shading device difficulties, but the tools are getting better all of the time.

When you export from Revit, Ecotect is able to import zones, as other analysis programs such as IES and Green Building Studio can, but Ecotect can also import building geometry. This ability allows you to separately import shading devices, testing out multiple options for optimal performance.

In the past I've used this method to bring shading devices and complex shapes from Revit to Ecotect to IES-VE; now we can also go from Ecotect into Green Building Studio.

In Ecotect Analysis 2010 the direct link to Green Building Studio enables users to create much more accurate models for energy analysis. This link also gives Green Building Studio users the opportunity to become involved in facade design, quickly relating geometry variations to energy consumption.

Thanks for that.

Sunday, April 19, 2009

Introducing Freeform Energy

Freeform Energy was started in the spring of 2009 by Jon Allen Gardzelewski with the intent of bringing sophisticated environmental design skills and services to the North American market. With international, cross-disciplinary experience, Freeform Energy is a model firm to fill the knowledge gap in the AEC industry's efforts to achieve energy-efficient design.

Located in Washington, D.C., Freeform Energy is ideally situated to support the priority shifts of the new administration—a change which is wholeheartedly welcomed in the green building industry. A high performance building culture requires a policy shift at the top levels and integrated teams making informed decisions on the ground. A careful understanding of decisions and their impacts on building performance is key, creating a more transparent and accountable design environment that replaces guesswork with information.

Freeform Energy is a great believer that through the use of technology and ingenuity we can realize clean energy solutions, cut out carbon, and improve the natural environment. Green is much more than the business of saving energy; we're creating a higher standard of living for both ourselves and our planet.

www.freeformenergy.com

Sunday, January 4, 2009

CFD Q&A with IES

Computational Fluid Dynamics is a complex area of analysis seeing increased use on innovative projects, as recognized in a recent edition of Architectural Record. I've done my own studies in the past but have had some lingering questions which Dr. Liam Harrison from IES has been kind enough to answer.

Q> I have some questions about the CFD analysis process and wonder if there is reference material anywhere.

At the moment there is only the Microflo manual, which can be found in the Help menu of the VE. We are currently putting together some tutorials, but they will have to wait for some slack periods in our project work to be completed.

Q> I can find wind data from the Energy Plus web page (TMY2, TMY3); are there better sources that you use?

We tend to use the Energy Plus data ourselves. Sometimes we use data taken on the site of a proposed development, but this is rare as they tend to put the anemometers on top of a building, so it is difficult to back out the localized wind effects caused by the buildings, and they rarely have a whole year's worth of data.

Q> From this wind data I can find the prevailing wind speed and direction which would be measured generally from 10 meters off of the ground. Is this right, 10 meters?

Yes, the standard height to measure wind speed and direction at a weather station is 10 m above ground level. At this height the effects of vegetation, buildings, and other obstructions on the wind speed and direction are limited. Also, the data is usually taken in open areas such as airports to further limit the effect of obstructions.

I have tried to find a statement on the energy plus web page that all the data is taken at 10m height but can't find one so perhaps you will want to check for yourself to make sure.

Q> If my analysis grid extends 10 m, then this weather station reading should indicate appropriate values to use in my study?

Microflo asks for the wind speed at 10 m height and the terrain type, and then produces a wind boundary profile on the inflow boundary of the CFD model according to the equation from the ASHRAE Handbook - Fundamentals [2005] (section 16.3, “Airflow around Buildings”) - see Appendix D in the Microflo manual.

Q> If my grid extends 100 m for a tall building, then should I use the wind profile power law (http://en.wikipedia.org/wiki/Wind_profile_power_law) to adjust for the height?

No, give Microflo the wind speed at 10 m and the terrain type. Microflo will then automatically vary the wind speed over the inflow boundary using the ASHRAE handbook equation.

Q> Using the wind profile power law I’ll assume a neutral stability condition and use: “α is approximately 1/7, or 0.143.” Right?

Microflo will use a different value depending on the terrain type. See Appendix D in the Microflo manual.

Q> For an urban site the value of α might not be 1/7, but I don’t want to try to figure out the log wind profile (http://en.wikipedia.org/wiki/Log_wind_profile) unless I have to. Do the IES analysis settings take into account if the site is rural or urban?

Yes Microflo can take terrain type into account - the user can specify "Country," "Suburban," and "City" in Settings>>CFD_Settings.

Q> Finally, I’ve reduced the full analysis grid size to keep the number of cells under 500,000. Are there rules of thumb you use for spacing above or around a site?

The academic studies on CFD for the built environment give the following recommendations:
The inlet, the lateral and the top boundary should be 5H away from the building, where H is the building height. For buildings with an extension in lateral direction much larger than the height, the blockage ratio should be below 3%. The outflow boundary should be positioned at least 15H behind the building to allow for flow development. For the same reason this outflow length should also be applied for an urban area with many buildings, where H is to be replaced by Hmax, the height of the tallest building. To prevent an artificial acceleration of the flow over the tallest building, the top of the computational domain should be also at least 5Hmax away from this building. For the blockage ratio the limit of 3% is recommended, although there are no results on whether it is better to include more of the surrounding buildings in the model and reduce the distance of the lateral boundaries from the built area.


In a Microflo model the recommendation above will be difficult to follow, as it will tend to produce very large mesh sizes if you resolve the flow around the buildings. You need to have a fine mesh in areas where the air velocity gradient is greatest, i.e. around the buildings. In areas where the velocity gradient is small you can have larger cells, i.e. near the boundaries of the computational domain. E.g. this can be achieved by:
  1. going into "Grid" mode at (1) in the image below
  2. clicking into a region in the mesh, at (3) in the image
  3. clicking "Edit mesh region" at (2)
  4. selecting a power law to vary the mesh and make it coarser at the boundaries and finer at the building

Even using this technique it will be difficult to resolve the flow near the buildings and have a domain as large as the recommendations, so you may have to be pragmatic and forgo some of the computational domain in order to have enough cells near the buildings. The only way to prove whether having a smaller domain and more resolution near the building is better than the reverse situation is to do grid dependence studies, where you change the grid to find the point where the solution changes insignificantly between meshes. In general, though, it is more important to resolve the flow near the buildings, although this can make the solution less stable and a converged solution harder to obtain (nobody said CFD was easy).

No, it's not easy, but it's doable, even for architects. Just be sure, if you are working in a program other than IES, that you know how to adjust the wind speed based on terrain type and the height of your grid.
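For anyone making that adjustment outside of Microflo, here is a minimal sketch of the wind profile power law mentioned above (plain Python; the terrain exponents are typical published values that I'm assuming here, not Microflo's internal settings, so check the ASHRAE handbook or your own tool's documentation):

    # Scale a 10 m weather-station wind speed to another height:
    #   u(z) = u_ref * (z / z_ref) ** alpha
    # Exponents below are common textbook values (assumed, not taken from Microflo).
    TERRAIN_ALPHA = {
        "country": 0.14,    # open terrain; close to the neutral 1/7 value
        "suburban": 0.22,
        "city": 0.33,
    }

    def wind_speed_at(z_m, u_ref_ms, terrain="country", z_ref_m=10.0):
        # Wind speed at height z_m given a reference speed measured at z_ref_m.
        return u_ref_ms * (z_m / z_ref_m) ** TERRAIN_ALPHA[terrain]

    # Example: a 4.5 m/s reading at 10 m, scaled to the top of a 100 m grid in a city.
    print(round(wind_speed_at(100.0, 4.5, "city"), 2), "m/s")

The point is simply that the 10 m weather-file value is not the value to apply at the top of a tall domain; whether your tool does this scaling for you, as Microflo does, is worth confirming before you trust the results.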

Friday, January 2, 2009

Design Analysis Workflow

The most appropriate workflow for early analysis is to begin designing within the BIM application. At this stage you can utilize the geometry for massing and orientation analysis, while also setting up thermal zones to begin the process of energy modeling. But perhaps most crucial to this process is the guarantee that your model always contains the latest information, so that any change to the design can instantly be evaluated.
Our first example (above) shows how a mass in Revit can be used to define the building elements. After manipulating the mass you hit "remake" and all of the building elements adjust to define the new shape. This process captures every benefit of Building Information Modeling while enabling us to reduce our building geometry to a basic form with which we can conduct multiple types of environmental analysis.

Incident solar radiation studies allow us to optimize our massing and orientation, and to prioritize which surfaces will need treatment as a project moves forward. Radiation levels are shown on the north facing surfaces (top) and on the southwest surfaces displayed with vector spikes (above).

If it is the case that 3D design information is consistently generated in non-BIM modeling platforms, we can still utilize this data so long as certain modeling standards are employed. In almost every 3D platform there is an opportunity to model with solids rather than surface geometry. When creating solids, or masses which can behave as solids, you set yourself up for four opportunities:
  1. Orientation and facade studies. Subdivide your model into surfaces, thoughtfully, then import it into Ecotect.
  2. Coordination. After linking the data into a BIM platform the solids can be sectioned and measured to produce your document set.
  3. Area schedules. Instantly.
  4. Energy modeling. Rooms/spaces/zones are used within the BIM platform to set up in-depth energy modeling. This can be conducted by the architect but preferably it is done by an experienced engineer.

The simple model shown above was used for facade analysis, plan and section views, area schedules, and an iterative energy model.

In our second example we took a simple tower shape and modified it (twisted it). For a full update after this modification there will be a few more steps than just hitting "remake," but within this process rests infinitely more potential than one where the model is only used to judge aesthetics. I should note that all of these opportunities were also available in the first example, but with an improved workflow.

Twisted shape in plan, section, elevation and axonometric.

Modeling solids, analyzing surfaces

The processes described above are those which I have had the most success with, but I am always happy to hear of other experiences.

Sunday, November 23, 2008

Understanding Virtual Environment

IES Virtual Environment is a comprehensive energy modeling and environmental analysis tool which can simulate just about everything happening to your building. Past use of Virtual Environment has been mostly outside of the architect's realm; however, IES has recently done much to integrate its tools into the architectural process with push-button toolkits.

The strength of the toolkit is that it can be embedded into architectural design programs (Revit & Sketchup so far) and with a modified workflow one is capable of achieving truly integrated and iterative analysis.

The toolkit performs a series of independent analyses, which makes it less comprehensive than the full Virtual Environment. The full suite contains an array of modules, each performing different tasks that are capable of feeding back into the main energy model. For example, if your project will utilize daylight harvesting, you would first run daylight simulations and then link these results back into the thermal simulation module, where you can see the energy impact from reduced cooling loads and electricity use. The process is similar for natural ventilation and solar shading analysis, where separate modules link their results to the main thermal simulation engine. Even HVAC systems design can feed into the central energy model for collaborative A/E BIM analysis in a central model.
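To give a feel for the kind of hand-off being described, here is a back-of-the-envelope sketch (plain Python; all of the numbers are illustrative assumptions of mine, and inside Virtual Environment this exchange happens between modules rather than by hand):

    # How a daylighting result feeds back into the energy model (rough illustration).
    lighting_power_w_m2 = 10.0     # assumed installed lighting power density
    floor_area_m2 = 1000.0         # assumed area of the daylit zone
    annual_hours_on = 2500.0       # assumed occupied hours when lighting is needed
    daylight_autonomy = 0.45       # assumed fraction of those hours met by daylight

    full_lighting_kwh = lighting_power_w_m2 * floor_area_m2 * annual_hours_on / 1000.0
    saved_kwh = full_lighting_kwh * daylight_autonomy

    print("Lighting energy without daylight harvesting: %.0f kWh/yr" % full_lighting_kwh)
    print("Electricity saved by dimming/switching: %.0f kWh/yr" % saved_kwh)
    # The saved lighting energy is also heat that never enters the space, so the
    # thermal simulation sees lower internal gains and, usually, a smaller cooling load.

That second-order cooling effect is exactly why the daylighting results need to flow back into the thermal module rather than being reported in isolation.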

It is not immediately apparent where Virtual Environment falls within the traditional A/E firm structure and project workflow. What is apparent is that the ultimate success of its use requires a process shift either in the way the architect designs and delivers, or in the roles assigned to the mechanical and energy engineers (in the US in particular). Either way, there must be the time, budget, and client expectations in place to ensure that this level of collaboration succeeds.