As a major contributor to global greenhouse gas (GHG) emissions, buildings and infrastructure must rapidly decarbonize before 2050 in order to meet global GHG reduction goals (IPCC 2018). The built environment is responsible for generating approximately 40% of energy-related global GHG emissions, and 11% is generated by the manufacturing of materials (IEA 2019). In buildings and construction, GHG emissions or ‘carbon impacts’ can be divided into two categories: operational carbon (attributed to operational energy consumption during the building’s lifetime) and embodied carbon (EC) (attributed to building materials, which includes impacts from material extraction, manufacturing and transportation, as well as building construction, maintenance, replacement of building components, demolition/deconstruction and disposal).
EC is calculated using life-cycle assessment (LCA), which is the process of calculating the environmental impacts of an object or process according to standardized calculation methods ISO 14040 (2006b) and ISO 14044 (2006c). In existing standards for LCA in the building industry, these environmental impact results are categorized by life-cycle stages. Figure 1 describes the standardized framework for life-cycle assessment of buildings according to European Standard (EN) 15978 (Sustainability of Construction Works—Assessment of Environmental Performance of Buildings—Calculation Method) (CEN 2011).
EC generally encompasses the global warming potential (GWP) results in life-cycle stages A1–A5, B1–B5 and C1–C4, as defined in Figure 1 (WorldGBC 2019). ‘Upfront carbon’ refers to the emissions produced in stages A1–A5, before the building is occupied (WorldGBC 2019). This upfront carbon usually comprises the majority of a material’s EC impact, and is deemed especially important due to the importance of reducing carbon emissions as quickly as possible (RICS 2017; WorldGBC 2019). EC emissions also occur in stage B1 due to emissions from materials unrelated to energy or water use; in stages B2–B5 due to material manufacturing and maintenance; in stages C1–C4 due to demolition and waste treatment; and as additional information in stage D due to materials that have reuse, recovery and recycling potential. Although EC is often assessed at the building level, the primary focus of this paper is EC assessment at the product level because it relates to procurement and specification of construction materials.
The construction materials market is developing its capacity to quantify the EC of construction products accurately. A common way to communicate the environmental impacts of construction products for business-to-business (B2B) communication is through an environmental product declaration (EPD). An EPD, referred to in ISO 14025 as a ‘type III environmental declaration,’ is a document that reports the LCA results of a particular product along with a summary of the analysis methodology, assumptions and data sources (ISO 2006a). Currently, EPDs are only available for a subset of the total number of construction products on the market, but they are a growing source of environmental data for the construction industry. Their proliferation has been driven by building certification schemes, such as the US Green Building Council’s LEED program, which has an optional credit that involves the use of EPDs (Andersen et al. 2019).1
A product category is ‘a defined group of products that fulfill the same function.’ A product category rule (PCR)
define[s] the criteria for a specific product category and sets out the requirements that must be met when preparing an EPD for products under this category. (Fet et al. 2009: 202)
Relevant standards that govern the methodology of EPDs include ISO 14025 (type III environmental declarations), ISO 21930 (core rules for environmental product declarations of construction products and services) and EN 15804 (core rules for the product category of construction products), building on the general LCA standards ISO 14040 and ISO 14044.
As outlined in EN 15804 (CEN 2019), EPDs are primarily intended for B2B communication. In the context of environmental performance assessment, a main purpose of construction product EPDs is to supply data for building-scale analysis: the construction product, along with the EPD that quantifies its impacts, is generally treated as a source of information for building-scale assessment rather than as an object of assessment in itself. While the methodology presented here can be applied to inform building-scale assessment, its primary focus is the use of EPDs for comparing products, so this background purpose of EPDs in relation to building-scale assessment is worth keeping in view. Because construction products are ultimately used in buildings, any product-to-product comparison using EPDs must meet preconditions of equivalent functionality and equivalent contribution to overall building impacts in order to be fair and reasonable. This is further addressed in section 1.1.2 below.
EPDs serve as readily available, third-party-verified sources of environmental information (Gelowitz & McArthur 2018) that are useful for communicating objective environmental data, improving corporate image and transparency, and understanding the impacts of production processes (Ibáñez-Forés et al. 2016). Having access to this information allows building designers and owners to speak with more confidence about the sustainability of their buildings. For this reason, they are recognized as a feasible and increasingly popular means of communicating environmental performance claims (Andersen et al. 2019).
In the early stages of design, teams use LCA to assess the embodied impacts of different design alternatives, e.g. which material to use for the structural system, how to configure the building, and which facade enclosure option to select. These decisions can influence EC as well as the impacts of operating the building over its lifetime. For these types of decisions, a whole-building LCA (WBLCA) approach is necessary, and comparisons at the building scale enable the assessment of functionally equivalent solutions. For EPDs to be used in the early design stage, they must rely on aligned upstream data sets so that they can be integrated into a consistent WBLCA framework; this ensures that the assessment of different material types and components is consistent. This role of construction product data, as an input into early-design-stage WBLCA, is critical and has informed the standards that govern EPD reporting.
While a reliance on industry-averages and broadly representative upstream data sets is appropriate for the types of analysis and decisions described above during the early design stage, later-stage analysis and decisions can be informed by more specific data. During the specification and procurement stages of a project, decisions turn from choosing between various material types to choosing between different manufacturers and products within a given material type. Thus, manufacturer- and product-specific data could be much more informative for these types of decisions.
Ng and To (2015: 1) argue that ‘prudent selection of “low carbon” construction materials is […] imperative’ due to the significant contribution of construction materials manufacturing to global carbon emissions and how widely EC can range between different product options. EPDs are well-positioned to help architecture, engineering and construction (AEC) professionals select low-carbon materials in construction specification and procurement, since EPDs already contain EC data and are becoming more standardized and common in the marketplace. Burke et al. (2018: 9) note that the use of EPDs is a
first step in the right direction because they facilitate comparison of environmental impacts of building products.
However, in order for EPDs to be used effectively in low-carbon construction procurement, the barriers described below in section 1.1.3 must be addressed.
EN 15804 (CEN 2019) describes the conditions for comparisons, including, ‘that comparisons between construction products are carried out in the context of their application.’ That is, since the construction product is viewed not as an object of assessment in itself, but rather as a data source to inform building-scale analysis,
in principle the comparison of products on the basis of their EPD is defined by the contribution they make to the environmental performance of the building. (CEN 2019)
Further, 'comparisons are possible at the sub-building level, e.g. for assembled systems, components, products for one or more life cycle stages. In such cases the principle that the basis for comparison of the assessment is the entire building, shall be maintained' (CEN 2019) by ensuring that the comparison assumes equivalency in terms of function, performance, and included and excluded amounts and processes, and accounts consistently for any operational impacts and 'the elementary flows related to material inherent properties' (CEN 2019).
A related application of product-specific data is that of incorporation into a later-stage WBLCA analysis to account for specific product choices. The industry-average for a given product type (e.g. rebar) that is useful for early-design-stage decisions represents a range of possible values that differ due to manufacturer, facility, geographic location and other variables. To gain a more accurate picture of the EC of a building project, a later-stage WBLCA would ideally incorporate data that are specific to the particular products used in the project to determine more precisely where the product and, by extension, the building falls on the range of possible values. As described below, any product-specific impact value similarly represents a range of possible values, and uncertainty assessment at the product scale (which is the focus of the methodology presented here) could be incorporated into the WBLCA Monte Carlo simulation. Such a simulation would use random sampling to combine and propagate the EC uncertainties of all the different products in the building to produce an overall probability distribution of uncertainty for the WBLCA. Thus, the use of product-specific data can aid the accuracy of later-stage (including as-built) WBLCA, and the uncertainty associated with the product-specific LCA results can aid in the overall uncertainty assessment for the WBLCA results.
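The propagation step described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the product names, quantities, GWPs and uncertainty ranges are all hypothetical, and a uniform distribution within each product's ±Z bounds is assumed purely for simplicity.

```python
import random

# Hypothetical product inventory: declared GWP (kg CO2e per unit), quantity
# of units, and +/- uncertainty as a fraction of the declared value.
# All numbers are illustrative, not taken from any real EPD.
products = [
    {"name": "rebar",    "gwp": 1.2, "qty": 40000, "z": 0.35},
    {"name": "concrete", "gwp": 300, "qty": 900,   "z": 0.20},
    {"name": "glazing",  "gwp": 55,  "qty": 2000,  "z": 0.40},
]

def sample_building_gwp(rng):
    """Draw one plausible whole-building embodied-carbon total by sampling
    each product's GWP uniformly within its +/-Z bounds."""
    total = 0.0
    for p in products:
        factor = rng.uniform(1 - p["z"], 1 + p["z"])
        total += p["gwp"] * factor * p["qty"]
    return total

rng = random.Random(42)  # fixed seed for reproducibility
samples = sorted(sample_building_gwp(rng) for _ in range(10000))

mean = sum(samples) / len(samples)
p20 = samples[int(0.20 * len(samples))]
p80 = samples[int(0.80 * len(samples))]
print(f"mean ~ {mean:,.0f} kg CO2e; 20th-80th percentile ~ {p20:,.0f} to {p80:,.0f}")
```

The resulting distribution of sampled totals is the overall probability distribution of uncertainty for the WBLCA that the text describes; a real implementation would use each product's assessed uncertainty and an appropriate distribution rather than these placeholder values.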
There are many reasons why EPDs are not ideally suited for comparison, especially by non-experts. They can be difficult to interpret; most business users know very little about EPDs and would find it difficult to interpret the contents (Ibáñez-Forés et al. 2016). When sustainable design resources require expertise or training, their uptake is much more limited (Lamé et al. 2017). The American Institute of Architects (AIA) notes that
it is unlikely that any design professional will use LCA unless the inventory analysis data have already been collected, tabulated, and indexed in a way that promotes ease of use. (AIA 2010)
It is also important to note that PCR development is not well-regulated. This means that the quality of PCRs varies widely, leading to harmonization issues and erratic coverage of the product market (Gelowitz & McArthur 2016; Hunsager et al. 2014; Ingwersen & Stevenson 2012; Passer et al. 2015; Ibáñez-Forés et al. 2016; Subramanian et al. 2012; Minkov et al. 2015).
EPDs also have significant data-quality issues, such as overuse of generic data sets (which is more of a concern in the context of later-stage analysis and decisions than it is in the context of early design-stage analysis, as described above), lack of common data sources, limited data availability (Ingwersen & Stevenson 2012; Gelowitz & McArthur 2017; Minkov et al. 2015; Modahl et al. 2013), and poor reliability of results due to uncertainty and use of point estimates without confidence intervals or margins of error (Bhat & Mukherjee 2019). Differences between EPDs in their degree of reliance on specific data versus generic data can cause significant differences in results (Modahl et al. 2013). All these factors limit the quality and comparability of EPDs.
There are various frameworks to evaluate and address data quality in LCA (Wang & Shen 2013; Bhat & Mukherjee 2019; ISO 2006c). While EPDs themselves contain a qualitative description of their data quality per ISO 14044 (2006c), there are few available resources for evaluating data quality in EPDs to aid in product-to-product comparability. The document Guidance for the Implementation of the EU Product Environmental Footprint (PEF) during the Environmental Footprint (EF) Pilot Phase (European Commission 2016: annex E) provides guidance on the data requirements used in product environmental footprint category rules (PEFCRs). It identifies a semi-quantitative method and criteria to assess the data set. The method provides quality ratings on a scale between very good (1) and very poor (5), and a method to average six data-quality indicators to generate a single data-quality rating. It also provides guidance on how to improve data quality by mandating specific data for the most relevant activities. Similar frameworks are presented in EN 15804 (CEN 2019: annex E). The Q metadata method is another resource (Erlandsson 2018).2 This method evaluates EPDs based on six criteria: (1) product comparability, (2) manufacturing representativeness, (3) data accuracy, (4) third-party review type, (5) additional documentation specifications and (6) Q metadata verification. Through this evaluation process, Q metadata helps to gauge the appropriate use(s) of an EPD, including side-by-side comparisons. As described below, the methodology presented here builds upon the Q metadata framework to further address the fairness and reliability of comparisons between products.
The EU product environmental footprint (PEF) guidance provides recommendations to improve comparability, and the Q metadata framework provides a method to determine whether results are sufficiently comparable for use in public procurement. Neither, however, provides guidance for interpreting the comparability of EPDs that do not comply with it. While an ideal future state would have product data report its own quality and precision to enable comparison, both approaches depend on changes to current LCA practice. The method developed in this paper enables assessment based on the data included in global EPDs today, in a manner that building industry professionals can interpret.
Given the landscape of EC data and its applications, the authors posit the following:
To address these issues, a methodology is proposed here for quantitatively assessing the data quality or specificity of EPDs, along with a discussion of how to apply that assessment to inform comparisons. As described further in the discussion, this method is meant to be one possible first step for this purpose. The intention is that others may build upon it to add to its sophistication and usefulness. While the primary discussion is about informing comparisons, the method can also be applied as part of a later-stage WBLCA process that accounts for specific product choices.
According to van Belle (2008), there are two subcategories of variation in data: (1) variability, which refers to the natural variation in a quantity; and (2) uncertainty, which refers to the precision of data measurement. Bhat & Mukherjee (2019: 106) discuss
methodological ambiguity in accounting for uncertainties, be they aleatory (due to inherent randomness) or epistemic (due to lack of knowledge). This reduces the reliability of using LCA for design selection, and of decision-making during material procurement.
Thus, the method presented here uses the term ‘uncertainty’ primarily to refer to the natural variation of data, aligning with van Belle’s use of variability and Bhat and Mukherjee’s aleatory uncertainty.
This section describes the methodology for addressing the data-quality issues associated with generic and aggregated data. The methodology develops and applies an uncertainty factor to the EPD’s declared GWP to obtain upper and lower bounds. These bounds are meant to represent the range of plausible values due to uncertainty in the underlying LCA data. The two basic steps of the proposed methodology are: (1) developing a combined uncertainty factor that characterizes the data quality of the EPD; and (2) applying that factor to the EPD’s declared GWP to obtain the range of likely values and a conservative point estimate within that range.
EPDs and their underlying LCAs rely (to varying extents) on aggregated and generic data. By definition, aggregation takes multiple values and returns a single value. Thus, when an EPD’s result is presented as a single number, but is based on generic/aggregated data, that single result represents a range of possible values. For example, one product EPD that covers five manufacturing facilities reports the production-weighted average impact (as is typical for multi-facility EPDs) and also the range of values due to facility differences (USG 2016). In the case of this example EPD, there is a 25% variation (as measured by the range divided by the mean) between facilities for GWP, and even larger variation for some other impact categories.
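To illustrate how a single declared value can mask facility-level spread, the following sketch computes a production-weighted average and the range-to-mean variation. All facility GWPs and production shares here are hypothetical, not taken from the cited EPD.

```python
# Illustrative facility data (hypothetical values): per-facility GWP in
# kg CO2e per declared unit, and share of annual production.
facilities = [
    {"gwp": 9.0,  "share": 0.30},
    {"gwp": 10.0, "share": 0.25},
    {"gwp": 11.0, "share": 0.20},
    {"gwp": 8.5,  "share": 0.15},
    {"gwp": 11.0, "share": 0.10},
]

# Production-weighted average, as typically reported in a multi-facility EPD.
weighted_avg = sum(f["gwp"] * f["share"] for f in facilities)

# Spread between facilities, expressed as the range divided by the mean.
gwps = [f["gwp"] for f in facilities]
variation = (max(gwps) - min(gwps)) / weighted_avg

print(f"declared (weighted) GWP: {weighted_avg:.2f} kg CO2e")
print(f"facility variation: {variation:.0%}")
```

The single declared number (here the weighted average) is what appears on the EPD, while the facility-to-facility spread is usually invisible to the reader.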
Current LCA practice defaults to industry-average data for upstream materials (AIA 2010) and to point-estimate outputs without associated information about the range of the data (Bhat & Mukherjee 2019). This approach presumes that a given material input is ‘average’ and therefore gives the EPD the ‘benefit of the doubt’; using industry-average data for highly variable processes is not conservative. The concept of conservatism in environmental risk analysis states that
from out of the uncertainty and/or variability, the assessor would deliberately choose an estimate that [s]he believes is more likely to overestimate than to underestimate the risk. (Finkel 1994: 602)
ISO 21930:2017 states that ‘data gaps shall be filled with conservative assumptions’ (ISO 2017: section 7.1.8). Thus, this paper offers a method that adopts a ‘burden of the doubt’ (as opposed to a ‘benefit of the doubt’) approach with the aim of yielding conservative estimates for data gaps.
In the proposed method, the range of uncertainty is estimated for each EPD, and the EPD is evaluated based on a conservative estimate using the ‘high’ end of the estimate rather than the average. This means that for any given specific instance of a product (e.g. a specific piece of rebar) in the scope of the EPD, the product’s actual specific impact is assumed to have a high likelihood of being as good as or better than the conservatively assessed value. On the other hand, if a declared value is simply based on averaged results, the actual impact associated with a given specific instance of a product is only approximately 50% likely to be as good as or better than this declared value.
While this method approximates all uncertainties for EPDs at this time, the ultimate goal is that data variation and uncertainty will be reported by EPDs in the future. When this occurs, the EPD uncertainty data will take precedence over this method’s estimates. If product-specific data are reported with the range of uncertainty, these data could be integrated into WBLCAs using a Monte Carlo simulation.
The proposed method consists of (1) developing a set of data-quality factors to characterize the data quality of the EPD, (2) combining the factors using the root mean square method, and (3) applying the combined factor to the EPD result to provide a range of likely GWP due to uncertainty and a conservative point estimate in that range, as described in the following subsections.
The factors characterizing the variability of the underlying data were inspired by Q Metadata (Erlandsson 2018), and are defined as follows:
The methodology uses a pair of Z-values, expressed as a percentage above or below the reported average, to describe the range of uncertainty. This references the statistical concept of the ‘z-score’, which describes how far a data point is from the mean, although the method’s use of Z-values differs from the technical definition of the z-score. (See the discussion section below for more on this topic.)
As stated above, the best-case scenario would be that EPDs include their own uncertainty values. The next best case would be that the uncertainty values associated with each factor are material specific (as, for example, the variation in GWP due to facility differences may be larger for some material categories and smaller for others) and/or circumstance specific (as, for example, the variation in GWP due to facility differences would depend in part on the number and locations and production methods of the different facilities). However, at the time of writing, there were not enough industry LCA data to develop material- and circumstance-specific uncertainty factors. (The one exception is the Supply chain-specific factor, which does vary by material category as described below.) In the absence of EPD-specific uncertainty data or material- or circumstance-specific default Z-values, a set of global (non-material-specific) default Z-values has been developed for each factor. These default values are roughly based on the data variations seen in a database of EPDs (Building Transparency 2020). See Table 1 for the default ± Z-values and their qualifying conditions.
Table 1: Default ±Z-values and their qualifying conditions.

| Factor | Value | Qualifying conditions | ±Z-value |
| --- | --- | --- | --- |
| ZM: Manufacturer specific | True | Products in the EPD are declared by only ONE organization, AND that organization owns the facility at which the relevant product is made | 0% |
| ZM: Manufacturer specific | False | FALSE for all conditions other than the TRUE condition described above. This includes EPDs that are described as representing a ‘sector’ or an ‘industry’ | 20% |
| ZF: Facility specific | True | Manufacturer specific = TRUE, AND only ONE manufacturing facility used the EPD or data point to declare products | 0% |
| ZF: Facility specific | False | FALSE for all conditions other than the TRUE condition described above | 20% |
| ZP: Product specific | True | Manufacturer specific = TRUE, AND the EPD or data point describes a single performance specification, AND no other product used the same EPD or data point | 0% |
| ZP: Product specific | False | FALSE for all conditions other than the TRUE condition described above | 20% |
| ZT: Time specific | True | Product specific = TRUE, AND the declaration describes a single run of manufacturing that is no more than 90 calendar days long, AND no other batch or product uses the same declaration | 0% |
| ZT: Time specific | False | FALSE for all conditions other than the TRUE condition described above | 20% |
| ZS: Supply chain specific | % | See the discussion in section 2.2.1 | Varies |
ZS reflects the influence of variation in the upstream supply chain on the uncertainty of the overall GWP of the EPD. It is expressed as a percentage with a positive and a negative value, and can be estimated as follows:
The root mean square method is used to combine the values from all the factors, producing a set of ± overall uncertainties for the EPD. The uncertainty of the overall EPD, ZEPD, is calculated by summing the variance (Z²) of each factor and taking the square root: ZEPD = √(ZM² + ZF² + ZP² + ZT² + ZS²).
Note that there may be some overlap between the variables, conceptually. For example, Product specific is only true if Manufacturer specific is true. This chain of dependency means that these are ‘nested variables.’ Accounting for the full interdependence or correlation between variables would add complexity to the evaluation. Therefore, in the absence of a full sensitivity analysis, the values from the different factors are treated in the framework as independent of each other. This allows the factors to be added mathematically rather than resorting to simulation. In this methodology, each factor represents the additional uncertainty over the other factors.
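The combination step can be sketched as follows. The factor values are illustrative only: a manufacturer-specific EPD that is not facility, product, or time specific, with a hypothetical 10% supply-chain factor. Symmetric ± values are assumed for simplicity, whereas the method allows asymmetric +X%/–Y% pairs.

```python
import math

def combine_factors(z_values):
    """Combine per-factor uncertainties (expressed as fractions) by summing
    their variances (Z^2) and taking the square root, treating the factors
    as independent as described in the text."""
    return math.sqrt(sum(z * z for z in z_values))

# Z_M, Z_F, Z_P, Z_T, Z_S -- illustrative values only
z_epd = combine_factors([0.00, 0.20, 0.20, 0.20, 0.10])
print(f"Z_EPD = +/-{z_epd:.1%}")  # prints "Z_EPD = +/-36.1%"
```

Because the factors are treated as independent, each one only adds uncertainty beyond the others; accounting for their nesting would require simulation rather than this closed-form combination.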
Once an EPD’s uncertainty is assessed in the form ZEPD = +X%/–Y%, the set of likely values can be calculated. That is, the ±ZEPD values, when combined with the average reported in the EPD, form the bounds of reasonable confidence, meaning that it is reasonably likely that any given specific instance of a product (e.g. a specific kg of rebar) in the population (i.e. all the products represented by the EPD) will have a value within that range. (See section 3.3 below for further discussion of this language.) For example:
If ECd = 100 kg CO2e and ZEPD = +35%/–35%, then:

- the low end of the range of likely values = 100 – (0.35*100) = 65 kg CO2e
- the high end of the range of likely values = 100 + (0.35*100) = 135 kg CO2e
The higher, conservative result of applying the uncertainty factor to the declared value from the EPD is then used as ECc: the conservative single-number value generated by the proposed method, which can be applied to a decision-informing process that sorts and compares EPDs. In this example, ECc = 135 kg CO2e.
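As a minimal sketch of applying the combined uncertainty to a declared value (the function and variable names are illustrative, not from any existing tool):

```python
def assessed_values(ec_declared, z_plus, z_minus):
    """Return the (low, high) bounds of the likely range and the conservative
    value ECc, given the declared EC and +Z/-Z expressed as fractions."""
    low = ec_declared * (1 - z_minus)
    high = ec_declared * (1 + z_plus)
    return low, high, high  # ECc is the high (conservative) end of the range

# The worked example from the text: ECd = 100 kg CO2e, Z_EPD = +35%/-35%
low, high, ec_c = assessed_values(100.0, 0.35, 0.35)
print(f"range: {low:.0f} to {high:.0f} kg CO2e; ECc = {ec_c:.0f} kg CO2e")
```

A sorting or comparison process would then rank candidate products by `ec_c` rather than by their declared averages.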
The ultimate goal of this proposed method is to increase data transparency and motivate data-quality improvement in order to enable more informed comparisons between functionally equivalent materials and products during specification and procurement. The method also informs the user about the general data quality of an EPD using a numerical (quantitative) value instead of a verbal (qualitative) description, simplifying the transmission of information to the user. While acknowledging that EPDs have traditionally focused on B2B communication and on providing data for design-stage WBLCA, the authors propose that EPDs could serve an expanded role in the broader goal of lower carbon buildings. For EPDs to be useful in such an expanded role of informing specification and procurement decisions, their data quality and specificity would need to improve in certain ways. To employ EPDs in this capacity as they currently exist, however, the authors have presented one potential way to address the barriers to comparison described above.
In addition to directly addressing the engagement with EPD data by a set of EPD end-users (AEC professionals), the methodology also aims to indirectly address engagement by the EPD producer (manufacturer). The authors envision an industry-wide culture of EC awareness in which AEC professionals use EPDs to drive product-selection decisions, and in which the construction market’s growing awareness of EC creates the market driver for manufacturers to manage their LCA numbers. Manufacturers, in an effort to make their products more competitive, would work to drive down the assessed EC (referred to as ECc here). To achieve this, the manufacturer could do the following: (1) report more specific data (e.g. facility-, product- and time-specific declarations with supply-chain-specific upstream data), thereby reducing the uncertainty factors applied to the declared value; and/or (2) reduce the product’s actual GWP.
AEC professionals are currently using, and will continue to use, EPDs for product-to-product comparisons in specification and procurement, despite the many problems with comparability discussed above. The proposed method therefore aims to improve that process. It rests on the underlying assumptions that a simpler user experience is more likely to promote uptake, and that the typical AEC professional will rely on one number (the single-point impact value) rather than on written descriptions of data quality.
EPDs ought to conform to the data-quality assessment standards outlined in ISO 14044 clause 4.2.3.6.2 (ISO 2006c), which include addressing time-related coverage, geographical coverage, technology coverage, precision, completeness, representativeness, consistency, reproducibility, data sources, and uncertainty of information:
Where a study is intended to be used in comparative assertions intended to be disclosed to the public, the data quality requirements … above shall be addressed.
Although the data-quality assessment portion of an EPD is helpful for a qualitative understanding of the relationship between reported results and the likelihood of those results matching actual impacts, this is insufficient for comparing EPDs in the product selection process. A simple side-by-side comparison of EPDs’ reported single-point ECs misses the potential range of the actual impacts due to differences in underlying data quality and variability. Also, in an environmental performance assessment of a building, it would be feasible to capture the overall data quality by using a quantitative method for the constituent parts of the building, but it would be impossible to do so using a qualitative method.
Given this need and the overall goal of reliably representing product EC, especially for fairer comparisons between products, the authors helped to develop the methodology presented here to address variability in EPD results. The aim is both to draw attention to and to quantitatively adjust, in light of data-quality issues, the one item that (in the authors’ opinion) most AEC professionals look at in an EPD: the impact result.3 Rather than wait for others to do this work, which may never happen, a goal of the method is to uncover this uncertainty now in order to motivate the production of higher quality data in the future.
Different materials and products report data at different levels of detail, which can make it difficult to compare EPDs because they have different underlying assumptions. The comparability of EPDs (or lack thereof) is important to consider when comparing the environmental impacts of products. Factors that impact the comparability of EC reported in EPDs, and whether or not the proposed methodology addresses them, include:
In all the above instances, the 20% value is provided as an estimate of unknown variation. Ideally, each of these Z factors would be developed from statistical information relevant to each material category. At this time, the authors selected 20% as a deliberately high value of uncertainty, with the goal of keeping the assessment conservative until actual variability data for each material/product system can be used to customize the default assumptions. Where LCA studies support precise numbers, those reported variations could supersede the default values given. As new EPDs come on the market responding to updates of ISO 21930 (2017), quantitative assessments of some of these variables will become available to supersede these default estimates. In current industry practice, no EPDs are provided in ‘real time’ as ideally described by the Time-specific factor; as a result, all products receive the same assessment of uncertainty due to temporal representation, and this factor is therefore especially crude. A future improvement to this method would include a more nuanced calculation of the Time-specific factor.
If an EPD’s declared EC result represents a set of possible values above and below the declared average, the authors’ position is that the range of plausible or likely values falls between the production-weighted 20th and 80th percentiles of the larger set of possible values. Further, they believe that the appropriate conservative point value to use for ECc is the upper end of that range, i.e. the 80th percentile. In the ideal future state of more complete knowledge of the relevant data sets, ZEPD would represent the actual 80th and 20th percentile ranges of the production-weighted EC results. The reported 80th percentile would thus equate to a value greater than or equal to the EC of 80% of the products represented.
In lieu of this, but with more complete knowledge of the relevant data sets than is currently the case, an approximation could be created based on the standard deviation of the population (i.e. all the specific individual products represented by the EPD). Combined with an understanding of the distribution of data, this could enable an estimation of specific confidence intervals. In such a scenario, as in the ideal future state described above, the 80th percentile represents the point where it would be 80% likely that a given data point (the emissions associated with a specific individual product, e.g. this particular piece of rebar) would be equal to or less than this value. However, this method assumes a known distribution of the data set, something that is not always the case for production-weighted estimates of LCA results.
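Under the normality assumption described above, the 80th percentile sits roughly 0.84 standard deviations above the mean. The following sketch illustrates this with hypothetical population statistics, not taken from any real EPD:

```python
from statistics import NormalDist

# Hypothetical population summary for one product: production-weighted mean
# and standard deviation of per-unit GWP (illustrative values only).
mean_gwp = 100.0  # kg CO2e
sd_gwp = 25.0     # kg CO2e

# If the population were normally distributed, 80% of individual products
# would fall at or below this value.
p80 = NormalDist(mu=mean_gwp, sigma=sd_gwp).inv_cdf(0.80)
z_plus = (p80 - mean_gwp) / mean_gwp  # the +Z this would imply

print(f"80th percentile ~ {p80:.1f} kg CO2e (+{z_plus:.0%} over the mean)")
```

As the text notes, this shortcut depends on knowing the distribution of the data set, which is often not the case for production-weighted LCA results; hence the method's deliberately less precise definition of Z.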
Therefore, the authors intentionally use a less precise definition of Z, where it represents an approximate estimate of uncertainty due to variation in the EPD’s aggregated data. The degree of confidence is accordingly also intentionally imprecise. In the methodology presented, +ZEPD forms the upper bound of reasonable confidence. And while the methodology ultimately aims to approximate the 80th percentile, the authors choose not to ascribe a specific value (e.g. 80%) to this confidence, as the use of a precise number may make the estimate appear more accurate than the authors believe it can currently be. Further, others could adapt this method by using the same general approach but defining a different set of plausible values (as opposed to the range formed by the approximate 20th and 80th percentiles described here) and/or a different choice of conservative single-point estimate for sorting and comparing (as opposed to the approximate 80th percentile described here).
To enable fairer comparisons between environmental product declarations (EPDs) by experts in the later stages of design and decision-making, the authors sought to distinguish between EPDs of varying levels of background data specificity and transparency. To fulfill this goal, a method was developed using a burden-of-the-doubt approach. The presented methodology, inspired by the Q Metadata method for evaluating EPDs, assessed uncertainty based on five factors relating to different aspects of an EPD’s data quality. These five factors were then combined into an overall uncertainty assessment reflecting the EPD’s data quality. That assessment was then used to provide, in addition to the EPD’s declared single-point EC, a range of values and a conservatively assigned single-point value that can be used for sorting and comparing.
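The reported outputs described above can be sketched as follows. This is a hypothetical illustration only: the uncertainty fraction of 0.20 is an invented placeholder (the methodology derives it from the five data-quality factors), and the function name is not from the source.

```python
def conservative_estimate(declared_ec, uncertainty_fraction):
    """Given an EPD's declared single-point EC (kg CO2e) and an overall
    uncertainty fraction, return the range of plausible values and a
    conservative single-point value for sorting and comparing."""
    z = declared_ec * uncertainty_fraction  # +/- Z_EPD around declared value
    low, high = declared_ec - z, declared_ec + z
    # The upper bound is used as the conservatively assigned point value.
    return low, high, high

# Invented example: declared EC of 300 kg CO2e, 20% overall uncertainty.
low, high, sort_value = conservative_estimate(300.0, 0.20)
# range: 240.0 to 360.0; conservative sorting value: 360.0
```

Sorting EPDs by `sort_value` rather than by the declared value rewards EPDs with more specific, transparent background data, since lower uncertainty yields a lower conservative estimate.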
It is important to have more transparent accounting of EC data in the specification and procurement stages of building design. Improved data-quality reporting and assessment in these stages will increase the accuracy and credibility of environmental assessments of buildings in general. While this method can currently serve as a rudimentary check on the data quality of EPDs, it is also meant to motivate change in the industry. With better data, the industry can be empowered to reduce greenhouse gas emissions on a significant scale.
This method was originally developed to support the Embodied Carbon in Construction Calculator (EC3) tool, a database of EPDs developed to help building industry professionals make specification and procurement decisions. The EC3 tool sorts EPDs based on type and performance characteristics, as well as conservatively assessed GWPs. Stacy Smedley of Skanska/Building Transparency and Phil Northcott of C Change Labs conceived of the EC3 project, starting at the proof-of-concept stage. Their vision for both the functionality of the tool and the burden-of-the-doubt methodology is the foundation for the methodology presented here, and some text, such as the description of nested variables in section 2.3, is from their input to draft the EC3 methodology literature. Vicki Rybl assisted with the development of the EC3 tool and methodology.
Kate Simonen is on the board of directors of Building Transparency, a non-profit organization that supports the EC3 tool. The methodology presented in this paper was developed for the EC3 tool.
The Charles Pankow Foundation served as lead sponsor and grant administrator for the beta development of the EC3 methodology. For the many sponsors for the EC3 project, see www.buildingtransparency.org.
AIA. (2010). AIA guide to building life cycle assessment in practice. American Institute of Architects (AIA). Retrieved from https://www.aia.org/resources/7961-building-life-cycle-assessment-in-practice
Andersen, S. C., Larsen, H. F., Raffnsoe, L., & Melvang, C. (2019). Environmental product declarations (EPDs) as a competitive parameter within sustainable buildings and building materials. IOP Conference Series: Earth and Environmental Science, 323(1). DOI: https://doi.org/10.1088/1755-1315/323/1/012145
Bhat, C. G., & Mukherjee, A. (2019). Sensitivity of life-cycle assessment outcomes to parameter uncertainty: Implications for material procurement decision-making. Transportation Research Record, 2673(3), 106–114. DOI: https://doi.org/10.1177/0361198119832874
Building Transparency. (2020). Embodied Carbon in Construction Calculator (EC3) tool. Building Transparency. Retrieved from https://www.buildingtransparency.org/
Burke, R. D., Parrish, K., & El Asmar, M. (2018). Environmental product declarations: Use in the architectural and engineering design process to support sustainable construction. Journal of Construction Engineering and Management, 144(5), 1–10. DOI: https://doi.org/10.1061/(ASCE)CO.1943-7862.0001481
CEN. (2011). EN 15978:2011: Sustainability of construction works—Assessment of environmental performance of buildings—Calculation method. International standard. European Committee for Standardization (CEN).
CEN. (2019). EN 15804:2012+A2:2019: Sustainability of construction works—Environmental product declarations—Core rules for the product category of construction products. European Committee for Standardization (CEN).
Erlandsson, M. (2018). Q metadata for EPD. Retrieved from https://www.ivl.se/download/18.57581b9b167ee95ab99345/1547122416899/C363.pdf
European Commission. (2016). Guidance for the implementation of the EU product environmental footprint (PEF) during the environmental footprint (EF) pilot phase, version 5.2. European Commission. Retrieved from https://ec.europa.eu/environment/eussd/smgp/pdf/Guidance_products.pdf
Fet, A. M., Skaar, C., & Michelsen, O. (2009). Product category rules and environmental product declarations as tools to promote sustainable products: Experiences from a case study of furniture production. Clean Technologies and Environmental Policy, 11(2), 201–207. DOI: https://doi.org/10.1007/s10098-008-0163-6
Finkel, A. M. (1994). The case for ‘plausible conservatism’ in choosing and altering defaults. In Science and judgment in risk assessment. National Academies Press for the National Research Council Committee on Risk Assessment of Hazardous Air Pollutants. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK208270/
Gelowitz, M. D. C., & McArthur, J. J. (2016). Investigating the effect of environmental product declaration adoption in LEED® on the construction industry: A case study. Procedia Engineering, 145, 58–65. DOI: https://doi.org/10.1016/j.proeng.2016.04.014
Gelowitz, M. D. C., & McArthur, J. J. (2017). Comparison of type III environmental product declarations for construction products: Material sourcing and harmonization evaluation. Journal of Cleaner Production, 157(July), 125–133. DOI: https://doi.org/10.1016/j.jclepro.2017.04.133
Gelowitz, M. D. C., & McArthur, J. J. (2018). Insights on environmental product declaration use from Canada’s first LEED® v4 Platinum commercial project. Resources, Conservation and Recycling, 136(June), 436–444. DOI: https://doi.org/10.1016/j.resconrec.2018.05.008
Hunsager, E. A., Bach, M., & Breuer, L. (2014). An institutional analysis of EPD programs and a global PCR registry. International Journal of Life Cycle Assessment, 19(4), 786–795. DOI: https://doi.org/10.1007/s11367-014-0711-8
Ibáñez-Forés, V., Pacheco-Blanco, B., Capuz-Rizo, S. F., & Bovea, M. D. (2016). Environmental product declarations: Exploring their evolution and the factors affecting their demand in Europe. Journal of Cleaner Production, 116, 157–169. DOI: https://doi.org/10.1016/j.jclepro.2015.12.078
IEA. (2019). Global status report for buildings and construction 2019. International Energy Agency (IEA).
Ingwersen, W. W., & Stevenson, M. J. (2012). Can we compare the environmental performance of this product to that one? An update on the development of product category rules and future challenges toward alignment. Journal of Cleaner Production, 24(March), 102–108. DOI: https://doi.org/10.1016/j.jclepro.2011.10.040
IPCC. (2018). Summary for policymakers—Global warming of 1.5°C. 2018. Intergovernmental Panel on Climate Change (IPCC). Retrieved from https://www.ipcc.ch/sr15/chapter/summary-for-policy-makers/
ISO. (2006b). ISO 14040: Environmental management—Life cycle assessment—Principles and framework. International Organization for Standardization (ISO).
ISO. (2006c). ISO 14044: Environmental management—Life cycle assessment—Requirements and guidelines. International Organization for Standardization (ISO).
ISO. (2017). ISO 21930: Sustainability in buildings and civil engineering works—Core rules for environmental product declarations of construction products and services. International Organization for Standardization (ISO).
Lamé, G., Leroy, Y., & Yannou, B. (2017). Ecodesign tools in the construction sector: Analyzing usage inadequacies with designers’ needs. Journal of Cleaner Production, 148, 60–72. DOI: https://doi.org/10.1016/j.jclepro.2017.01.173
LETI. (2020). LETI embodied carbon primer. Retrieved from https://www.leti.london/ecp
Minkov, N., Schneider, L., Lehmann, A., & Finkbeiner, M. (2015). Type III environmental declaration programmes and harmonization of product category rules: Status quo and practical challenges. Journal of Cleaner Production, 94, 235–246. DOI: https://doi.org/10.1016/j.jclepro.2015.02.012
Modahl, I. S., Askham, C., Lyng, K. A., Skjerve-Nielssen, C., & Nereng, G. (2013). Comparison of two versions of an EPD, using generic and specific data for the foreground system, and some methodological implications. International Journal of Life Cycle Assessment, 18(1), 241–251. DOI: https://doi.org/10.1007/s11367-012-0449-0
Ng, S. T., & To, C. (2015). Unveiling the embodied carbon of construction materials through a product-based carbon labeling scheme. International Journal of Climate Change: Impacts and Responses, 7(3), 1–9. DOI: https://doi.org/10.18848/1835-7156/CGP/v07i03/37241
NSF International. (2019). Product category rule for environmental product declarations—PCR for concrete. NSF International. Retrieved from http://www.nsf.org/newsroom_pdf/concrete_pcr_2019.pdf
Passer, A., Lasvaux, S., Allacker, K., De Lathauwer, D., Spirinckx, C., Wittstock, B., Kellenberger, D., Gschösser, F., Wall, J., & Wallbaum, H. (2015). Environmental product declarations entering the building sector: Critical reflections based on 5 to 10 years’ experience in different European countries. International Journal of Life Cycle Assessment, 20, 1199–1212. DOI: https://doi.org/10.1007/s11367-015-0926-3
RICS. (2017). Whole life carbon assessment for the built environment. The Royal Institution of Chartered Surveyors (RICS). Retrieved from https://www.rics.org/uk/upholding-professional-standards/sector-standards/building-surveying/whole-life-carbon-assessment-for-the-built-environment/
Subramanian, V., Ingwersen, W., Hensler, C., & Collie, H. (2012). Comparing product category rules from different programs: Learned outcomes towards global alignment. International Journal of Life Cycle Assessment, 17(7), 892–903. DOI: https://doi.org/10.1007/s11367-012-0419-6
Van Belle, G. (2008). Statistical rules of thumb: Second edition. Wiley. DOI: https://doi.org/10.1002/9780470377963
Wang, E., & Shen, Z. (2013). A hybrid data quality indicator and statistical method for improving uncertainty analysis in LCA of complex system-application to the whole-building embodied energy analysis. Journal of Cleaner Production, 43(March), 166–173. DOI: https://doi.org/10.1016/j.jclepro.2012.12.010
WorldGBC. (2019). Bringing embodied carbon upfront: Coordinated action for the building and construction sector to tackle embodied carbon. Retrieved from https://www.worldgbc.org/embodied-carbon