pelicun.control module

This module has classes and methods that control the loss assessment.

Contents

Assessment()

A high-level class that collects features common to all supported loss assessment methods.

FEMA_P58_Assessment([inj_lvls])

An Assessment class that implements the loss assessment method in FEMA P58.

HAZUS_Assessment([hazard, inj_lvls])

An Assessment class that implements the damage and loss assessment method following the HAZUS Technical Manual and the HAZUS software.

class pelicun.control.Assessment[source]

Bases: object

A high-level class that collects features common to all supported loss assessment methods. This class will only rarely be called directly when using pelicun.

Attributes
beta_tot

Calculate the total additional uncertainty for post processing.

Methods

calculate_damage(self)

Characterize the damage experienced in each random event realization.

calculate_losses(self)

Characterize the consequences of damage in each random event realization.

define_loss_model(self)

Create the stochastic loss model based on the inputs provided earlier.

define_random_variables(self)

Define the random variables used for loss assessment.

read_inputs(self, path_DL_input, path_EDP_input)

Read and process the input files to describe the loss assessment task.

write_outputs(self)

Export the results.

property beta_tot

Calculate the total additional uncertainty for post processing.

The total additional uncertainty is the square root of the sum of the squared uncertainties corresponding to ground motion and modeling.

Returns
beta_total: float

The total uncertainty (logarithmic EDP standard deviation) to add to the EDP distribution. Returns None if no additional uncertainty is assigned.
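
The combination above is a square-root-sum-of-squares (SRSS) of the two dispersion terms. As a minimal illustration with hypothetical dispersion values (a hand calculation, not a pelicun call):

    import numpy as np

    # hypothetical additional dispersions (logarithmic standard deviations)
    beta_gm = 0.30  # ground motion uncertainty
    beta_m = 0.25   # modeling uncertainty

    # SRSS combination, matching the description of beta_tot above
    beta_total = np.sqrt(beta_gm**2 + beta_m**2)  # approx. 0.39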

read_inputs(self, path_DL_input, path_EDP_input, verbose=False)[source]

Read and process the input files to describe the loss assessment task.

Parameters
path_DL_input: string

Location of the Damage and Loss input file. The file is expected to be a JSON with data stored in a standard format described in detail in the Input section of the documentation.

path_EDP_input: string

Location of the EDP input file. The file is expected to follow the output formatting of Dakota. The Input section of the documentation provides more information about the expected formatting.

verbose: boolean, default: False

If True, the method echoes the information read from the files. This can be useful to ensure that the information in the file is properly read by the method.
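
A minimal call sketch with hypothetical file paths (in practice, read_inputs is called on one of the concrete subclasses documented below rather than on Assessment itself):

    from pelicun.control import FEMA_P58_Assessment

    A = FEMA_P58_Assessment()
    A.read_inputs(
        path_DL_input='DL_input.json',   # hypothetical Damage and Loss input file
        path_EDP_input='EDP_input.out',  # hypothetical Dakota-formatted EDP file
        verbose=True)                    # echo the parsed information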

define_random_variables(self)[source]

Define the random variables used for loss assessment.

define_loss_model(self)[source]

Create the stochastic loss model based on the inputs provided earlier.

calculate_damage(self)[source]

Characterize the damage experienced in each random event realization.

calculate_losses(self)[source]

Characterize the consequences of damage in each random event realization.

write_outputs(self)[source]

Export the results.

class pelicun.control.FEMA_P58_Assessment(inj_lvls=2)[source]

Bases: pelicun.control.Assessment

An Assessment class that implements the loss assessment method in FEMA P58.

Attributes
beta_tot

Calculate the total additional uncertainty for post processing.

Methods

aggregate_results(self)

calculate_damage(self)

Characterize the damage experienced in each random event realization.

calculate_losses(self)

Characterize the consequences of damage in each random event realization.

define_loss_model(self)

Create the stochastic loss model based on the inputs provided earlier.

define_random_variables(self)

Define the random variables used for loss assessment.

read_inputs(self, path_DL_input, path_EDP_input)

Read and process the input files to describe the loss assessment task.

write_outputs(self)

read_inputs(self, path_DL_input, path_EDP_input, verbose=False)[source]

Read and process the input files to describe the loss assessment task.

Parameters
path_DL_input: string

Location of the Damage and Loss input file. The file is expected to be a JSON with data stored in a standard format described in detail in the Input section of the documentation.

path_EDP_input: string

Location of the EDP input file. The file is expected to follow the output formatting of Dakota. The Input section of the documentation provides more information about the expected formatting.

verbose: boolean, default: False

If True, the method echoes the information read from the files. This can be useful to ensure that the information in the file is properly read by the method.

define_random_variables(self)[source]

Define the random variables used for loss assessment.

Following the FEMA P58 methodology, the groups of parameters below are considered random. Simple correlation structures within each group can be specified through the DL input file. The random decision variables are only created and used later if those particular decision variables are requested in the input file.

  1. Demand (EDP) distribution

Describe the uncertainty in the demands. Unlike other random variables, the EDPs are characterized by the EDP input data provided earlier. All EDPs are handled in one multivariate lognormal distribution. If more than one sample is provided, the distribution is fit to the EDP data. Otherwise, the provided data point is assumed to be the median value and the additional uncertainty prescribed describes the dispersion. See _create_RV_demands() for more details.

  2. Component quantities

Describe the uncertainty in the quantity of components in each Performance Group. All Fragility Groups are handled in the same multivariate distribution. Consequently, correlation between various groups of component quantities can be specified. See _create_RV_quantities() for details.

  3. Fragility EDP limits

Describe the uncertainty in the EDP limit that corresponds to exceedance of each Damage State. EDP limits are grouped by Fragility Groups. Consequently, correlation between fragility limits is currently limited to within Fragility Groups. See _create_RV_fragilities() for details.

  4. Reconstruction cost and time

Describe the uncertainty in the cost and duration of reconstruction of each component conditioned on the damage state of the component. All Fragility Groups are handled in the same multivariate distribution. Consequently, correlation between various groups of component reconstruction time and cost estimates can be specified. See _create_RV_repairs() for details.

  5. Damaged component proportions that trigger a red tag

Describe the uncertainty in the amount of damaged components needed to trigger a red tag for the building. All Fragility Groups are handled in the same multivariate distribution. Consequently, correlation between various groups of component proportion limits can be specified. See _create_RV_red_tags() for details.

  6. Injuries

Describe the uncertainty in the proportion of people in the affected area getting injuries exceeding a certain level of severity. FEMA P58 uses two severity levels: injury and fatality. Both levels for all Fragility Groups are handled in the same multivariate distribution. Consequently, correlation between various groups of component injury expectations can be specified. See _create_RV_injuries() for details.
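
As a conceptual illustration of item 1 above, and assuming more than one EDP sample is available, fitting the multivariate lognormal demand model amounts to estimating the mean vector and covariance matrix of the log demands. The sketch below is not pelicun's internal implementation (see _create_RV_demands() for that) and the sample values are hypothetical:

    import numpy as np

    # hypothetical EDP samples: rows are realizations, columns are EDP types
    # (e.g., peak story drift ratio and peak floor acceleration)
    edp_samples = np.array([[0.012, 0.35],
                            [0.018, 0.42],
                            [0.009, 0.28],
                            [0.015, 0.39]])

    log_edp = np.log(edp_samples)
    medians = np.exp(log_edp.mean(axis=0))   # median EDP values
    log_cov = np.cov(log_edp, rowvar=False)  # dispersions and correlation in log space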

define_loss_model(self)[source]

Create the stochastic loss model based on the inputs provided earlier.

Following the FEMA P58 methodology, the components specified in the Damage and Loss input file are used to create Fragility Groups. Each Fragility Group corresponds to a component that might be present in the building at several locations. See _create_fragility_groups() for more details about the creation of Fragility Groups.

calculate_damage(self)[source]

Characterize the damage experienced in each random event realization.

First, the time of the event (month, weekday/weekend, hour) is randomly generated for each realization. Given the event time, the number of people present at each floor of the building is calculated.

Second, the realizations that led to collapse are filtered. See _calc_collapses() for more details on collapse estimation.

Finally, the realizations that did not lead to building collapse are further investigated and the quantities of components in each damage state are estimated. See _calc_damage() for more details on damage estimation.
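
A conceptual sketch of the first step, the random event time, is shown below with uniform sampling assumptions; it is not pelicun's implementation and the realization count is hypothetical:

    import numpy as np

    rng = np.random.default_rng()
    n_realizations = 1000  # hypothetical sample size

    month = rng.integers(0, 12, size=n_realizations)    # 0..11
    hour = rng.integers(0, 24, size=n_realizations)     # 0..23
    weekday = rng.random(n_realizations) < 5.0 / 7.0    # True on weekdays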

calculate_losses(self)[source]

Characterize the consequences of damage in each random event realization.

For the sake of efficiency, only the decision variables requested in the input file are estimated. The following consequences are handled by this method:

Reconstruction time and cost

Estimate the irreparable cases based on residual drift magnitude and the provided irreparable drift limits. Realizations that led to irreparable damage or collapse are assigned the replacement cost and time of the building when reconstruction cost and time are estimated. Repairable cases get a cost and time estimate for each Damage State in each Performance Group. For more information, see _calc_irrepairable() for the estimation of irreparability and _calc_repair_cost_and_time() for reconstruction cost and time. A conceptual sketch of this cost assignment is provided after these items.

Injuries

Collapse-induced injuries are based on the collapse modes and the corresponding injury characterization. Injuries conditioned on no collapse are based on the affected area and the probability of injuries of various severities specified in the component data file. For more information about estimating injuries conditioned on collapse and no collapse, see _calc_collapse_injuries() and _calc_non_collapse_injuries(), respectively.

Red Tag

The probability of getting an unsafe placard or red tag is a function of the amount of damage experienced in various Damage States for each Performance Group. The damage limits that trigger an unsafe placard are specified in the component data file. For more information on assigning red tags to realizations see the _calc_red_tag() method.
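
The conceptual sketch below illustrates the cost assignment described under Reconstruction time and cost above: realizations flagged as collapsed or irreparable receive the full replacement cost, while the rest keep their component-based repair cost estimate. All values are hypothetical and this is not pelicun's internal implementation:

    import numpy as np

    repair_cost = np.array([1.2e5, 3.4e5, 8.0e4, 2.1e5])  # per-realization repair costs
    irreparable = np.array([False, True, False, False])
    collapsed = np.array([False, False, False, True])
    replacement_cost = 5.0e6                               # building replacement cost

    total_cost = np.where(collapsed | irreparable, replacement_cost, repair_cost)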

aggregate_results(self)[source]

write_outputs(self)[source]
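
The methods above are typically called in the order shown in this minimal end-to-end sketch; the input file paths are hypothetical:

    from pelicun.control import FEMA_P58_Assessment

    A = FEMA_P58_Assessment(inj_lvls=2)

    A.read_inputs('DL_input.json', 'EDP_input.out')  # hypothetical paths
    A.define_random_variables()
    A.define_loss_model()
    A.calculate_damage()
    A.calculate_losses()
    A.aggregate_results()
    A.write_outputs()
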
class pelicun.control.HAZUS_Assessment(hazard='EQ', inj_lvls=4)[source]

Bases: pelicun.control.Assessment

An Assessment class that implements the damage and loss assessment method following the HAZUS Technical Manual and the HAZUS software.

Parameters
hazard: {‘EQ’, ‘HU’}

Identifies the type of hazard. EQ corresponds to earthquake, HU corresponds to hurricane. default: ‘EQ’.

inj_lvls: int

Defines the discretization used to describe the severity of injuries. The HAZUS earthquake methodology uses 4 levels. default: 4
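
For example, an earthquake assessment with the default four injury severity levels can be instantiated as shown below; the subsequent method calls follow the same sequence illustrated for FEMA_P58_Assessment above:

    from pelicun.control import HAZUS_Assessment

    A = HAZUS_Assessment(hazard='EQ', inj_lvls=4)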

Attributes
beta_tot

Calculate the total additional uncertainty for post processing.

Methods

aggregate_results(self)

calculate_damage(self)

Characterize the damage experienced in each random event realization.

calculate_losses(self)

Characterize the consequences of damage in each random event realization.

define_loss_model(self)

Create the stochastic loss model based on the inputs provided earlier.

define_random_variables(self)

Define the random variables used for loss assessment.

read_inputs(self, path_DL_input, path_EDP_input)

Read and process the input files to describe the loss assessment task.

write_outputs(self)

Export the results.

read_inputs(self, path_DL_input, path_EDP_input, verbose=False)[source]

Read and process the input files to describe the loss assessment task.

Parameters
path_DL_input: string

Location of the Damage and Loss input file. The file is expected to be a JSON with data stored in a standard format described in detail in the Input section of the documentation.

path_EDP_input: string

Location of the EDP input file. The file is expected to follow the output formatting of Dakota. The Input section of the documentation provides more information about the expected formatting.

verbose: boolean, default: False

If True, the method echoes the information read from the files. This can be useful to ensure that the information in the file is properly read by the method.

define_random_variables(self)[source]

Define the random variables used for loss assessment.

Following the HAZUS methodology, only the groups of parameters below are considered random. Correlations within groups are not considered because each Fragility Group has only one Performance Group in this implementation.

  1. Demand (EDP) distribution

Describe the uncertainty in the demands. Unlike other random variables, the EDPs are characterized by the EDP input data provided earlier. All EDPs are handled in one multivariate lognormal distribution. If more than one sample is provided, the distribution is fit to the EDP data. Otherwise, the provided data point is assumed to be the median value and the additional uncertainty prescribed describes the dispersion. See _create_RV_demands() for more details.

  2. Fragility EDP limits

Describe the uncertainty in the EDP limit that corresponds to exceedance of each Damage State. EDP limits are grouped by Fragility Groups. See _create_RV_fragilities() for details.

define_loss_model(self)[source]

Create the stochastic loss model based on the inputs provided earlier.

Following the HAZUS methodology, the component assemblies specified in the Damage and Loss input file are used to create Fragility Groups. Each Fragility Group corresponds to one assembly that represents every component of the given type in the structure. See _create_fragility_groups() for more details about the creation of Fragility Groups.

calculate_damage(self)[source]

Characterize the damage experienced in each random event realization.

First, the time of the event (month, weekday/weekend, hour) is randomly generated for each realization. Given the event time, the number of people present at each floor of the building is calculated.

Next, the quantities of components in each damage state are estimated. See _calc_damage() for more details on damage estimation.

calculate_losses(self)[source]

Characterize the consequences of damage in each random event realization.

For the sake of efficiency, only the decision variables requested in the input file are estimated. The following consequences are handled by this method for a HAZUS assessment:

Reconstruction time and cost

Get a cost and time estimate for each Damage State in each Performance Group. For more information about estimating reconstruction cost and time, see the _calc_repair_cost_and_time() method.

Injuries

The number of injuries is based on the probability of injuries of various severities specified in the component data file. For more information about estimating injuries, see _calc_non_collapse_injuries().

aggregate_results(self)[source]