OpenQuake's Engine for Seismic Hazard and Risk Analysis

Overview

OpenQuake Engine

The OpenQuake Engine is an open-source application that allows users to compute the seismic hazard and seismic risk posed by earthquakes on a global scale. It runs on Linux, macOS and Windows, on laptops, workstations, standalone servers and multi-node clusters. DOI: 10.13117/openquake.engine

Current LTS release (for users wanting stability)

Current Long Term Support version is the OpenQuake Engine 3.11 'Wegener'. The documentation is available at https://github.com/gem/oq-engine/tree/engine-3.11#openquake-engine.

Current release (for users needing the latest features)

Current stable version is the OpenQuake Engine 3.12. The documentation is available at https://github.com/gem/oq-engine/tree/engine-3.12#openquake-engine.

Documentation (master tree)

General overview

For contributors

Installation

Mirrors

A mirror of this repository, hosted in Pavia (Italy), is available at https://mirror.openquake.org/git/GEM/oq-engine.git.

The main download server (downloads.openquake.org) is hosted in Nürnberg (Germany).

Running the OpenQuake Engine

Visualizing outputs via QGIS

License

The OpenQuake Engine is released under the GNU Affero General Public License v3.

Contacts

Thanks

The OpenQuake Engine is developed by the Global Earthquake Model Foundation (GEM) with the support of



If you would like to help support the development of OpenQuake, please contact us at [email protected]. For more information, visit the GEM website at https://www.globalquakemodel.org/partners.

Comments
  • Adding new parameters to site.py and adding a new gsim

This PR adds new soil and topographical parameters useful for predicting lateral spread displacements. It also adds a new GSIM, Youd et al. (2002), for estimating lateral spread displacements.

    enhancement 
    opened by Prajakta-Jadhav-25 21
  • Logic tree

Addresses bug https://bugs.launchpad.net/openquake/+bug/857364. Relates to blueprint https://blueprints.launchpad.net/openquake/+spec/openquake-logic-tree-module.

Requires the newest OpenSHA.

    opened by angri 21
  • New GSIM Idini2017

• Implements the Idini et al. (2017) GSIM, including its tests and verification tables
• Adds soiltype to site.py and includes it in site_test.py and oqvalidation.py
    enhancement 
    opened by pheresi 15
  • Risk classical

    Implements the risk classical calculator based on the oq-risklib library.

I apologize for the size of the change, but for the sake of simplicity it has been necessary to delete/refactor a lot of obsolete lines of code.

In this context, all the risk parsing code has been moved to a single file, openquake/parser/risk. It should eventually be refactored and moved into the nrml repository.

    opened by matley 13
  • Fix webui error remove other user

Fixes the WebUI error when removing another user. Tests are green here: https://ci.openquake.org/job/zdevel_oq-engine/2535/. Closes https://github.com/gem/oq-engine/issues/2611.

    enhancement 
    opened by hascar 12
  • Dump and restore Stochastic Event Set

    Addresses https://bugs.launchpad.net/oq-engine/+bug/1247665

    Additionally, I have changed the database permissions such that oq_admin is able to create temporary tables (used for the dump/restore workflow).

    For a clean diff, https://github.com/gem/oq-engine/pull/1298/ should land first

    opened by matley 12
  • Classical Hazard calculator: execute and task functionality

    This branch is for https://bugs.launchpad.net/openquake/+bug/1020117.

    I'll soon submit another branch to clean up and improve test coverage. I wanted to submit this as soon as possible so I can get a review on the core functionality.

    Pull request #802 needs to land first.

    opened by larsbutler 12
  • Decouple job and mixins

    This branch addresses: https://bugs.launchpad.net/openquake/+bug/908148

    The major change here is that calculator Mixin classes must be instantiated with a Job object. The Job and the Mixin are no longer mashed together at runtime into a Frankenobject; instead, we use a silly little pattern called 'composition'.

    There is still a bit of a mess here; the mixins still exist and some things are not quite how we would like them to be. This branch, however, is a big step towards our goal.

    For a few of the next steps in Operation: Destroy the Mixins, see: https://bugs.launchpad.net/openquake/+bug/909663 https://bugs.launchpad.net/openquake/+bug/907243

    https://github.com/gem/openquake/pull/622 and https://github.com/gem/openquake/pull/623 should land first.

    opened by larsbutler 12
• Can requirements be bumped? (numpy, scipy, pandas)

    Context

Currently, the package pins the versions of third-party packages such as numpy, scipy and pandas:

    https://github.com/gem/oq-engine/blob/master/setup.py

    install_requires = [
        'setuptools',
        'h5py >=2.10',
        'numpy >=1.18, <1.20',
        'scipy >=1.3, <1.7',
        'pandas >=0.25, <1.3',
        'pyzmq <20.0',
        'psutil >=2.0, <5.7',
        'shapely >=1.7, <1.8',
        'docutils >=0.11',
        'decorator >=4.3',
        'django >=3.2',
        'matplotlib',
        'requests >=2.20, <3.0',
        'pyshp ==1.2.3',
        'toml',
        'pyproj >=1.9',
    ]
    

Version 1.18 of numpy is more than one year old.

These constraints limit the use of the openquake package in projects with other dependencies.

    Question

Is it possible to run the package with more recent versions of the requirements?

What is the danger if openquake is run with newer versions of the third-party packages?
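
For reference, one can check whether the currently installed packages fall inside the pinned ranges before trying to relax them. The sketch below is illustrative, not part of the engine; the naive major.minor comparison is a stand-in for a proper `packaging.version` comparison.

```python
from importlib import metadata

# Version ranges pinned in setup.py above, as (lo_inclusive, hi_exclusive)
PINS = {
    "numpy": ((1, 18), (1, 20)),
    "scipy": ((1, 3), (1, 7)),
    "pandas": ((0, 25), (1, 3)),
}

def in_pinned_range(version, lo, hi):
    """True if major.minor of `version` satisfies lo <= version < hi.
    Naive numeric split; a real check should use packaging.version."""
    v = tuple(int(x) for x in version.split(".")[:2])
    return lo <= v < hi

for pkg, (lo, hi) in PINS.items():
    try:
        ver = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        continue  # package not installed in this environment
    status = "ok" if in_pinned_range(ver, lo, hi) else "outside pinned range"
    print(f"{pkg} {ver}: {status}")
```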

    investigation 
    opened by alexandreCameron 11
  • multi fault rupture format

This is needed to better support models produced by GI or SHERIFS. In order to move forward we need to define a format for the input. What I suggest, at the moment, is to define this source typology using two files. In the first file, which we could call fault_sections.xml, we store the geometry of the fault sections with their IDs.

    For example:

    <?xml version='1.0' encoding='utf-8'?>
    <nrml xmlns:gml="http://www.opengis.net/gml"
        xmlns="http://openquake.org/xmlns/nrml/0.5">
        <faultSections name="Dummy" id="fs1">
            <section name="Dummy Section" id="s1">
                <kiteSurface>
                    <profile>
                        <gml:LineString><gml:posList>
                            10.0 45.0 0.0 10.0 45.5 10.0
                        </gml:posList></gml:LineString>
                    </profile>
                    <profile>
                        <gml:LineString><gml:posList>
                            10.5 45.0 0.0 10.5 45.5 10.0
                        </gml:posList></gml:LineString>
                    </profile>
                </kiteSurface>
            </section>
        </faultSections>
    </nrml>
    

    The corresponding source model should look like this:

    <?xml version='1.0' encoding='utf-8'?>
    <nrml xmlns:gml="http://www.opengis.net/gml"
        xmlns="http://openquake.org/xmlns/nrml/0.5">
        <sourceModel name="Hazard Model" investigation_time="1.0">
            <sourceGroup name="group 1" rup_interdep="indep"
                src_interdep="indep" tectonicRegion="Active Shallow Crust">
                <multiFaultSource id="1" name="Test1" tectonicRegion="Active Shallow Crust"
                        faultSectionFname="./sections_kite.xml">
                    <multiPlanesRupture probs_occur="0.93144767 0.06855233">
                        <magnitude>4.7</magnitude>
                        <sectionIndexes indexes="s1"/>
                        <hypocenter depth="5.0" lat="45.0" lon="10.0"/>
                        <rake>90</rake>
                    </multiPlanesRupture>
                    <multiPlanesRupture probs_occur="0.93144767 0.06855233">
                        <magnitude>6.0</magnitude>
                        <sectionIndexes indexes="s1,s2"/>
                        <rake>-90</rake>
                    </multiPlanesRupture>
                </multiFaultSource>
            </sourceGroup>
        </sourceModel>
    </nrml>
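
A minimal ElementTree sketch (not engine code) shows that the proposed fault_sections.xml format is straightforward to parse; the XML string below is the example from this issue, and `read_sections` is a hypothetical helper name.

```python
import xml.etree.ElementTree as ET

SECTIONS_XML = """<?xml version='1.0' encoding='utf-8'?>
<nrml xmlns:gml="http://www.opengis.net/gml"
    xmlns="http://openquake.org/xmlns/nrml/0.5">
    <faultSections name="Dummy" id="fs1">
        <section name="Dummy Section" id="s1">
            <kiteSurface>
                <profile>
                    <gml:LineString><gml:posList>
                        10.0 45.0 0.0 10.0 45.5 10.0
                    </gml:posList></gml:LineString>
                </profile>
                <profile>
                    <gml:LineString><gml:posList>
                        10.5 45.0 0.0 10.5 45.5 10.0
                    </gml:posList></gml:LineString>
                </profile>
            </kiteSurface>
        </section>
    </faultSections>
</nrml>"""

NRML = "{http://openquake.org/xmlns/nrml/0.5}"
GML = "{http://www.opengis.net/gml}"

def read_sections(xml_text):
    """Return {section_id: [(lon, lat, depth), ...]} for all profile points."""
    sections = {}
    for sec in ET.fromstring(xml_text).iter(NRML + "section"):
        coords = []
        for poslist in sec.iter(GML + "posList"):
            vals = [float(v) for v in poslist.text.split()]
            # posList is a flat sequence of lon lat depth triples
            coords.extend(zip(vals[0::3], vals[1::3], vals[2::3]))
        sections[sec.get("id")] = coords
    return sections
```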
    
    enhancement 
    opened by mmpagani 11
  • Add categorical site class as permitted site parameter

    Modified openquake/hazardlib/site.py, openquake/hazardlib/oqvalidation.py, and openquake/hazardlib/tests/gsim/check_gsim.py to take new site parameter ('siteclass').

    Updated mcverry_2006.py and test tables.

    enhancement 
    opened by cvanhoutte-zz 11
  • Graizer HDF5 file does not include PGA

    When running classical hazard analyses using some NGAEast GMPEs, I get an error raised from the NGAEast_GRAIZER.hdf5 file included in the OpenQuake source code, which is shown below. The PEER Report 2015-04 (https://peer.berkeley.edu/publications/2015-04) does include PGA in Chapter 9 and in its appendix data. I'm now using OpenQuake 3.15.0, but this has been a problem in previous versions of OpenQuake too. Is this a bug or was PGA purposely excluded in the HDF5 file?

    [2023-01-03 06:10:41 #11 CRITICAL] Traceback (most recent call last):
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/engine/engine.py", line 255, in run_calc
        oqparam = log.get_oqparam()
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/logs.py", line 202, in get_oqparam
        return readinput.get_oqparam(self.params)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/readinput.py", line 302, in get_oqparam
        oqparam = OqParam(**job_ini)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/oqvalidation.py", line 1117, in __init__
        self.check_gsims(gsims)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/oqvalidation.py", line 1270, in check_gsims
        raise ValueError(
    ValueError: The IMT PGA is not accepted by the GSIM [NGAEastGMPE]
    gmpe_table = 'NGAEast_GRAIZER.hdf5'
    
    Traceback (most recent call last):
      File "/home/carlos/openquake/bin/oq", line 8, in <module>
        sys.exit(oq())
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commands/__main__.py", line 56, in oq
        sap.run(commands, prog='oq')
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/baselib/sap.py", line 225, in run
        return _run(parser(funcdict, **parserkw), argv)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/baselib/sap.py", line 216, in _run
        return func(**dic)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commands/engine.py", line 175, in main
        run_jobs(jobs)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/engine/engine.py", line 412, in run_jobs
        run_calc(job)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/engine/engine.py", line 255, in run_calc
        oqparam = log.get_oqparam()
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/logs.py", line 202, in get_oqparam
        return readinput.get_oqparam(self.params)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/readinput.py", line 302, in get_oqparam
        oqparam = OqParam(**job_ini)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/oqvalidation.py", line 1117, in __init__
        self.check_gsims(gsims)
      File "/home/carlos/openquake/lib/python3.10/site-packages/openquake/commonlib/oqvalidation.py", line 1270, in check_gsims
        raise ValueError(
    ValueError: The IMT PGA is not accepted by the GSIM [NGAEastGMPE]
    gmpe_table = 'NGAEast_GRAIZER.hdf5'
    
    opened by carlosfherrera 0
  • oq nrml_to csv no longer including tectonicRegion

    When converting source files to csv or gpkg using the command

    oq nrml_to csv <source.xml>

the tectonicRegion field is empty. This seems to be a new behaviour caused by the if statement here.

    Can we remove the if statement (or adjust it if the purpose is important), so that the tectonicRegion is always added? The parameter is very useful.

    opened by kejohnso 0
  • Conditioned ground motion calculator

    Initial work towards a conditioned ground motion calculator as specified in #8317

    This PR adds

    • the function get_station_data(..) to commonlib/readinput.py to read station data input csv files
    • the ConditionedGmfComputer class to handle the 20-odd steps involved in obtaining the conditioned mean and covariance of the ground motion at the target sites as described in #8317

    To do in subsequent PRs

    • verification tests
    • entry for the user manual
    • performance improvements
    enhancement 
    opened by raoanirudh 0
  • Simulating ground motion fields conditioned on station data

    Rationale

    Given an earthquake rupture model, a site model, and a set of ground shaking intensity models (GSIMs), the OpenQuake scenario calculator simulates a set of ground motion fields (GMFs) at the target sites for the requested set of intensity measure types (IMTs). This set of GMFs can then be used in the scenario_damage and scenario_risk calculators to estimate the distribution of potential damage, economic losses, fatalities, and other consequences. The scenario calculators are useful for simulating both historical and hypothetical earthquakes.

    For historical events, ground motion recordings and macroseismic intensity data ("ground truth") can provide additional constraints on the ground shaking estimates in the region affected by the earthquake. The U.S. Geological Survey ShakeMap system (Worden et al., 2020[^1], Wald et al., 2022[^2]) provides near-real time estimates of ground shaking for earthquakes, conditioned on station and macroseismic data where available. Support to read USGS ShakeMaps and simulate GMFs based on the mean and standard deviation values estimated by the ShakeMap — for peak ground acceleration (PGA) and spectral acceleration (SA) at 0.3s, 1.0s, and 3.0s — was added to the OpenQuake-engine in #3545 and #3606, and a link to the scenario_damage and scenario_risk calculators was added in #3608, based on the methodology described by Silva & Horspool (2019)[^3]. Further support for reading ShakeMaps from sources other than the USGS was added as discussed in #6572. Support for MMI was added in #6660, and some performance improvements were introduced in #6624.

    Starting from USGS ShakeMap v4.1, the conditioning of the ground shaking is based on the methodology developed by Engler et al. (2022)[^4], which preserves the partition of the (conditioned) within-event and between-event residuals, thus enabling more accurate propagation of the uncertainties through a Monte Carlo simulation approach.

    The ability to run OpenQuake scenario damage and loss calculations starting from ShakeMaps from multiple providers has proved to be a useful feature for the assessment of damage and losses for past earthquakes, development and calibration of fragility and vulnerability models, and testing of probabilistic seismic risk models[^5][^6]. However, there are still many instances where the user may wish to consider a different set of assumptions in the ground motion conditioning process compared to the ShakeMap service providers. For instance, users may be interested in:

    • testing the impact of using different rupture geometries and source parameters from published literature for an event[^7]
    • including additional recording station data that they have obtained, or excluding certain outliers based on criteria they deem relevant
    • employing a different Ground-Motion–Intensity Conversion Equation (GMICE) to convert macroseismic data to ground motion intensities or choosing to ignore the macroseismic data altogether
    • testing the behavior of individual GMPEs compared to the station data or using a weighted average GMPE for the tectonic region based on a probabilistic hazard model
    • incorporating a custom vs30 grid or using custom site-amplification functions if microzonation studies are available for the affected area
    • obtaining estimates for IMT(s) that are not directly available from the ShakeMap provider(s)
    • employing different models for the spatial cross-correlation of the within-event residuals and cross-correlation of the IMs for the between-event residuals

For the reasons mentioned above, it would be beneficial to include within the OpenQuake-engine a conditioned GMF calculator, tightly coupled with the existing scenario calculator[^8], that would:

    1. accept user-provided station data in addition to the usual inputs for a scenario calculation including an earthquake rupture model, a site model, and a single GSIM or a GSIM logic tree
    2. calculate the conditioned mean and conditioned between-event and within-event covariances for the target sites for the requested IMTs, for each GSIM given the station data, based on the procedure outlined in Engler et al. (2022)[^4]
    3. simulate the requested number of GMFs at the target sites
    4. proceed to damage / loss / consequence calculations if desired

    New inputs

    Station data csv file

    In addition to the usual inputs for a scenario calculation including an earthquake rupture model, a site model, and a single GSIM or a GSIM logic tree, the user would also need to provide a csv file containing the observed intensity values for one or more intensity measure types at a set of ground motion recording stations.

The path to this input file could be specified in the job file using a new parameter station_data_file, e.g.:

    [station_data]
    station_data_file = stationlist_all.csv
    

    The csv file snippet below illustrates the information that would be contained within such a station data file:

    | STATION_ID | STATION_NAME | LONGITUDE | LATITUDE | STATION_TYPE | PGA_VALUE | PGA_LN_SIGMA | SA(0.3)_VALUE | SA(0.3)_LN_SIGMA | SA(1.0)_VALUE | SA(1.0)_LN_SIGMA |
    |---|---|---|---|---|---|---|---|---|---|---|
    | VIGA | LAS VIGAS | -99.23326 | 16.75870 | seismic | 0.3550 | 0.0 | 0.5262 | 0.0 | 0.1012 | 0.0 |
    | VNTA | LA VENTA | -99.81885 | 16.91426 | seismic | 0.2061 | 0.0 | 0.3415 | 0.0 | 0.1051 | 0.0 |
    | COYC | COYUCA | -100.08996 | 16.99778 | seismic | 0.1676 | 0.0 | 0.2643 | 0.0 | 0.0872 | 0.0 |
    | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
    | UTM_14Q_041_186 | NA | -99.79820 | 16.86687 | macroseismic | 0.6512 | 0.8059 | 0.9535 | 1.0131 | 0.4794 | 1.0822 |
    | UTM_14Q_041_185 | NA | -99.79761 | 16.77656 | macroseismic | 0.5797 | 0.8059 | 0.8766 | 1.0131 | 0.4577 | 1.0822 |
    | UTM_14Q_040_186 | NA | -99.89182 | 16.86655 | macroseismic | 0.4770 | 0.8059 | 0.7220 | 1.0131 | 0.3223 | 1.0822 |
    | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |

    Mandatory fields

    • STATION_ID: string; subject to the same validity checks as the id fields in other input files
    • LONGITUDE, LATITUDE; or LON, LAT: floats; valid longitude and latitude values
    • STATION_TYPE: string; currently the only two valid options are 'seismic' and 'macroseismic'
• <IMT>_VALUE, <IMT>_LN_SIGMA / <IMT>_STDDEV: floats; for each IMT observed at the recording stations, two values should be provided. For IMTs that are assumed to be lognormally distributed (e.g. PGV, PGA, SA), these are the median and the lognormal standard deviation, using the column headers <IMT>_VALUE and <IMT>_LN_SIGMA respectively; for other IMTs (e.g. MMI), these are simply the mean and the standard deviation, using the column headers <IMT>_VALUE and <IMT>_STDDEV respectively

    Optional fields

    • STATION_NAME: string; free form and not subject to the same constraints as the STATION_ID field, the optional STATION_NAME field can contain information that aids in identifying a particular station
    • Other fields: could contain notes about the station, flags indicating outlier status for the values reported by the station, site information, etc., but these optional fields will not be read by the station_data_file parser

    Site model file for the station sites

All of the site parameters required by the GMMs used in the conditioned scenario calculation will also need to be provided for the set of sites in the station_data_file. This could be accomplished in various ways:

A: Use the existing site model csv file parser to read the site model for the station locations at the same time as the site model file for the hazard / exposure sites, e.g.:

    [site_params]
    site_model_file = site_model.csv  site_model_stations.csv
    

    The advantage in this case would be that the already existing site model parser can be used directly, without needing to manage two separate site models under the hood. The drawback is that the two site models will get merged and it could become difficult to separate the stations from the hazard/exposure sites in downstream parts of the calculation.

B: Read the site model for the station locations separately from the site model file for the hazard / exposure sites, e.g.:

    [site_params]
    site_model_file = site_model.csv
    station_site_model_file = site_model_stations.csv
    

The advantage in this case would be that the site model for the stations can be kept separate from the site model file for the hazard / exposure sites. The site model for the stations will be used only during the conditioning process, whereas the site model file for the hazard / exposure sites will be used for the ground motion simulation. The drawback is the requirement of adding a new input parameter station_site_model_file and managing two site model files separately, which might be a non-trivial task.

    Ground motion models

    Users can choose to specify one or more GSIMs for the conditioned scenario calculation using any of the following options:

• A single GMM, e.g. gsim = ChiouYoungs2014
• A GSIM logic tree file, e.g. gsim_logic_tree_file = gsim_logic_tree.xml
• A weighted average GSIM, e.g. gsim_logic_tree_file = gsim_weighted_avg.xml, where the file gsim_weighted_avg.xml can be constructed using the modifiable GMPE structure for AvgGMPE, as shown in the example below:
    <?xml version="1.0" encoding="UTF-8"?>
    <nrml xmlns:gml="http://www.opengis.net/gml" 
          xmlns="http://openquake.org/xmlns/nrml/0.4">
      <logicTree logicTreeID='lt1'>
        <logicTreeBranchingLevel branchingLevelID="bl1">
          <logicTreeBranchSet 
            branchSetID="bs1" 
            uncertaintyType="gmpeModel" 
            applyToTectonicRegionType="Active Shallow Crust">
            <logicTreeBranch branchID="br1">
              <uncertaintyModel>
                [AvgGMPE]
                  b1.AbrahamsonEtAl2014.weight=0.22
                  b2.BooreEtAl2014.weight=0.22
                  b3.CampbellBozorgnia2014.weight=0.22
                  b4.ChiouYoungs2014.weight=0.22
                  b5.Idriss2014.weight=0.12
              </uncertaintyModel>
              <uncertaintyWeight>
                  1.0
              </uncertaintyWeight>
            </logicTreeBranch>
          </logicTreeBranchSet>
        </logicTreeBranchingLevel>
      </logicTree>
    </nrml>
    

    For a regular GSIM logic tree, the conditioning steps will be undertaken for each of the GSIMs in the logic tree file separately, and GMFs will be generated for each of the GSIMs separately as well. This can be useful when the modeller is evaluating or comparing the predictions from different GSIMs for a scenario.

    If a weighted average GSIM is provided, the conditioning and GMF simulation will occur only for the blended GSIM. This can be useful when the modeller is evaluating or trying to calibrate the performance of the set of GSIMs for the tectonic region as a whole against one or more scenarios, perhaps looking at the GSIM logic tree from a probabilistic hazard model for the region.

Note: Each of the GSIMs specified for a conditioned GMF calculation must provide the within-event and between-event standard deviations separately. If a GSIM of interest provides only the total standard deviation, a (non-ideal) workaround might be for the user to specify the ratio between the within-event and between-event standard deviations (example), which the engine will use to add between-event and within-event standard deviations to the GSIM.
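
The variance partition behind that workaround can be sketched as follows. This is an illustration of the arithmetic only (the function name is hypothetical; the engine's actual mechanism is the modifiable GMPE referenced above): given the total standard deviation sigma_t and an assumed ratio r = phi / tau, the identity sigma_t² = tau² + phi² fixes both components.

```python
import math

def split_total_stddev(sigma_total, ratio_within_to_between):
    """Split a total stddev into within-event (phi) and between-event (tau)
    components, given r = phi / tau and sigma_total**2 == tau**2 + phi**2."""
    r = ratio_within_to_between
    tau = sigma_total / math.sqrt(1.0 + r * r)  # between-event stddev
    phi = r * tau                               # within-event stddev
    return phi, tau

phi, tau = split_total_stddev(0.7, 1.5)
```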

    Calculation steps

    The broad calculation steps are as follows, with more detailed descriptions of each step provided subsequently:

    1. Read the usual scenario/scenario_damage/scenario_risk inputs
    2. If the station_data_file is present in the job file,
      1. Read the station data input file
      2. Trigger the ConditionedGmfComputer instead of the regular GmfComputer
      3. Calculate and store the conditioned between-event residual mean at every station site
      4. Calculate and store the nominal event bias
      5. Calculate the conditioned mean and covariance of the ground motion at the target sites
      6. Simulate and store the requested number of ground motion fields at the target sites
    3. Proceed to damage / loss / consequence calculations if desired

    Reading the station data input file

    This would require adding a function get_station_data(..) to commonlib/readinput.py, to extract the following three pieces of information from the station data csv file:

    1. station_data: a dataframe containing the mean and standard deviation values, in lognormal space or otherwise, for the set of valid IMTs in the csv file
    2. station_sites: a dataframe containing {station_id, lon, lat} for the list of stations in the csv file
    3. imts: the set of valid intensity measure types for which intensity observations are available in the csv file
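
The extraction described above could be sketched as below. This is a hypothetical illustration of what get_station_data(..) might return, not the actual implementation in commonlib/readinput.py; for brevity it uses the stdlib csv module and plain dicts instead of dataframes, and only handles the LONGITUDE/LATITUDE header variant.

```python
import csv
import io

def get_station_data(csvfile):
    rows = list(csv.DictReader(csvfile))
    # 3. IMTs are inferred from the <IMT>_VALUE column headers
    imts = [c[:-len("_VALUE")] for c in rows[0] if c.endswith("_VALUE")]
    # 1. per-station intensity values and uncertainties
    station_data = [
        {col: float(row[col]) for col in row
         if col.endswith(("_VALUE", "_LN_SIGMA", "_STDDEV"))}
        for row in rows]
    # 2. station identifiers and coordinates
    station_sites = [
        {"station_id": row["STATION_ID"],
         "lon": float(row["LONGITUDE"]), "lat": float(row["LATITUDE"])}
        for row in rows]
    return station_data, station_sites, imts

SAMPLE = """\
STATION_ID,STATION_NAME,LONGITUDE,LATITUDE,STATION_TYPE,PGA_VALUE,PGA_LN_SIGMA
VIGA,LAS VIGAS,-99.23326,16.75870,seismic,0.3550,0.0
VNTA,LA VENTA,-99.81885,16.91426,seismic,0.2061,0.0
"""
data, sites, imts = get_station_data(io.StringIO(SAMPLE))
```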

    The ConditionedGmfComputer

The conditioned GMF calculator should essentially follow the conditioning steps outlined in Appendix B of Engler et al. (2022)[^4]; see the attached pdf of Appendix B for the equations. In short, the main steps, for each target IMT, are:

    1. From station_data, get the mean observations (recorded values at the stations, "yD") and any additional sigma ("var_addon_D") for the observations that are uncertain, which might arise if the values for this particular IMT were not directly recorded, but obtained by conversion equations or cross-correlation functions
    2. Compute the predicted mean intensity ("mu_yD") and uncertainty components ("phi_D" and "tau_D") at the observation points, from the specified GMM(s)
    3. Compute the raw residuals at the observation points, by subtracting the predicted intensity from the observed intensity (zeta_D = yD - mu_yD)
    4. Compute the station data within-event covariance matrix ("cov_WD_WD"), and add on the additional variance of the residuals for the cases where the station data is uncertain
    5. Compute the (pseudo)-inverse of the station data within-event covariance matrix ("cov_WD_WD_inv")
    6. Compute the cross-intensity measure correlations for the observed intensity measures ("corr_HD_HD")
    7. Using Bayes rule, compute the posterior distribution (i.e., mean "mu_HD_yD" and covariance "cov_HD_HD_yD") of the normalized between-event residual, employing Engler et al. (2022), eqns B8 and B9 (also eqns B18 and B19)
    8. Compute the distribution of the conditional between-event residual ("mu_BD_yD" and "cov_BD_BD_yD")
    9. Compute the nominal bias and its variance as the means of the conditional between-event residual mean and covariance, display this information in the calculation log, and also store it as one of the outputs of the calculation
    10. From the GMMs, compute the predicted mean ("mu_Y") and stddevs ("phi_Y" and "tau_Y") of the intensity at the target sites
    11. Compute the mean of the conditional between-event residual for the target sites ("mu_BY_yD"), eqn. B18 and B19a
    12. Compute the within-event covariance matrices for the target sites and observation sites ("cov_WY_WD" and "cov_WD_WY")
    13. Compute the within-event covariance matrix for the target sites (apriori, "cov_WY_WY")
    14. Compute the regression coefficient matrix ("RC" = cov_WY_WD × cov_WD_WD_inv)
    15. Compute the scaling matrix "C" for the conditioned between-event covariance matrix, eqn. B22
    16. Compute the conditioned within-event covariance matrix for the target sites ("cov_WY_WY_wD"), eqn. B21
    17. Compute the "conditioned between-event" covariance matrix for the target sites ("cov_BY_BY_yD"), last term of eqn. B17
    18. Compute the conditioned mean of the ground motion at the target sites ("mu_Y_yD"), eqn. B16
    19. Compute the conditional covariance of the ground motion at the target sites ("cov_Y_Y_yD"), eqn. B17
    20. Use the conditioned mean vector (mu_Y_yD) and standard deviation matrices (cov_WY_WY_wD and cov_BY_BY_yD) computed in the preceding two steps to simulate the requested number of ground motion fields at the target sites
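
The core algebra in the steps above can be sketched in a heavily simplified, single-IMT form. The sketch below treats the full covariance as one block instead of separating the within-event and between-event terms as Engler et al. (2022) do, so it only illustrates steps 3, 5, 14, 18 and 19; it is not engine code, and the toy correlation model at the end is an assumption for the example.

```python
import numpy as np

def condition_gmf(mu_Y, mu_D, yD, cov_YY, cov_YD, cov_DD):
    """Conditioned mean and covariance at target sites Y, given
    observations yD at station sites D (standard Gaussian update)."""
    zeta_D = yD - mu_D                     # raw residuals (step 3)
    RC = cov_YD @ np.linalg.pinv(cov_DD)   # regression coefficients (steps 5, 14)
    mu_Y_yD = mu_Y + RC @ zeta_D           # conditioned mean (step 18)
    cov_Y_Y_yD = cov_YY - RC @ cov_YD.T    # conditioned covariance (step 19)
    return mu_Y_yD, cov_Y_Y_yD

# Toy example: one station at x = 0 km, targets at x = 1 km and x = 50 km,
# zero prior mean, sigma = 0.6, exponential spatial correlation (range 10 km).
x = np.array([1.0, 50.0])
sigma2 = 0.36
cov_YY = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)
cov_YD = sigma2 * np.exp(-x[:, None] / 10.0)  # target-to-station correlation
cov_DD = np.array([[sigma2]])
mu_c, cov_c = condition_gmf(np.zeros(2), np.zeros(1), np.array([0.5]),
                            cov_YY, cov_YD, cov_DD)
```

As expected, the target near the station is pulled strongly toward the observed value and its variance shrinks, while the far target reverts toward the unconditioned prediction.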

    Outputs

    1. Nominal event bias (from step 9 above): one bias value for each IMT, for every GSIM used in the calculation
    2. Conditioned between-event residual mean (from step 8 above): one bias value for each station site, for each IMT, for every GSIM used in the calculation (useful if the between-event uncertainties are heteroscedastic, depending upon either Vs30, the median estimated ground motions, or both)
    3. Simulated ground motion fields
    4. All other existing outputs of the scenario calculator

    Verification tests

    Since the implementation is expected to closely follow Engler et al. (2022), the results should match for the set of eleven simplified one-dimensional verification tests devised by Worden et al. (2020) to check the USGS ShakeMap implementation of the Engler et al. (2022) equations.

    Future improvements

    The conditioning process requires the specification of a spatial correlation model of the within-event residuals between two points for the intensity measures involved in the calculation, a model for the cross-measure correlation of the within-event residuals, and a model for the cross-measure correlation of the between-event residuals. Assuming a conditional independence of the cross-measure correlation and the spatial correlation of a given intensity measure, the spatial cross-correlation of two different IMs at two different sites can be obtained as the product of the cross-correlation of two IMs at the same location and the spatial correlation due to the distance between sites. Given the limited set of correlation models available in the engine currently, the choice of the three aforementioned correlation models could be hardcoded in the initial implementation of the conditioned GMF calculator, using JB2009CorrelationModel as the spatial correlation model of the within-event residuals, BakerJayaram2008 as the cross-measure correlation model for the within-event residuals, and GodaAtkinson2009 as the cross-measure correlation model for the between-event residuals.

    Ideally, the choice of the different correlation models should be exposed to the user through parameters in the job file. Direct spatial cross-correlation models for the within-event residuals[^9][^10] could also be used instead of separate models for the spatial correlation and cross-measure correlation. Supporting these choices would entail a substantial refactoring of the existing correlation.py and cross_correlation.py modules of the engine, and is thus left for a future issue.
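
The conditional-independence product rule described above can be sketched as follows. The exponential decay is an illustrative stand-in chosen for this example, not the engine's JB2009CorrelationModel, BakerJayaram2008 or GodaAtkinson2009 implementations, and both function names are hypothetical.

```python
import numpy as np

def rho_spatial(h_km, correlation_range_km=30.0):
    """Within-event spatial correlation as a function of separation (km);
    illustrative exponential decay, ~0.05 at the correlation range."""
    return np.exp(-3.0 * np.asarray(h_km, dtype=float) / correlation_range_km)

def spatial_cross_correlation(h_km, rho_cross):
    """rho(IM_i at A, IM_j at B) ~= rho_cross(i, j) * rho_spatial(|A - B|),
    assuming conditional independence of cross-IM and spatial correlation."""
    return rho_cross * rho_spatial(h_km)
```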

    References

    [^1]: Worden, C. B., Thompson, E. M., Hearne, M. G., & Wald, D. J. (2020). ShakeMap Manual Online: technical manual, user's guide, and software guide. U. S. Geological Survey. URL: http://usgs.github.io/shakemap/. DOI: https://doi.org/10.5066/F7D21VPQ
    [^2]: Wald, D. J., Worden, C. B., Thompson, E. M., & Hearne, M. G. (2022). ShakeMap operations, policies, and procedures. Earthquake Spectra, 38(1), 756–777. DOI: https://doi.org/10.1177/87552930211030298
    [^3]: Silva, V., & Horspool, N. (2019). Combining USGS ShakeMaps and the OpenQuake-engine for damage and loss assessment. Earthquake Engineering and Structural Dynamics, 48(6), 634–652. DOI: https://doi.org/10.1002/eqe.3154
    [^4]: Engler, D. T., Worden, C. B., Thompson, E. M., & Jaiswal, K. S. (2022). Partitioning Ground Motion Uncertainty When Conditioned on Station Data. Bulletin of the Seismological Society of America, 112(2), 1060–1079. DOI: https://doi.org/10.1785/0120210177
    [^5]: Riga, E., Karatzetzou, A., Apostolaki, S., Crowley, H., & Pitilakis, K. D. (2021). Verification of seismic risk models using observed damages from past earthquake events. Bulletin of Earthquake Engineering, 19. DOI: https://doi.org/10.1007/s10518-020-01017-5
    [^6]: Crowley, H., Silva, V., Kalakonas, P., Martins, L., Weatherill, G. A., Pitilakis, K. D., Riga, E., Borzi, B., & Faravelli, M. (2020). Verification of the European Seismic Risk Model (ESRM20). In 17th World Conference on Earthquake Engineering. Sendai, Japan.
    [^7]: de Pro‐Díaz, Y., Vilanova, S., & Canora, C. (2022). Ranking Earthquake Sources Using Spatial Residuals of Seismic Scenarios: Methodology Application to the 1909 Benavente Earthquake. Bulletin of the Seismological Society of America. DOI: https://doi.org/10.1785/0120220067
    [^8]: The GMPE Strong Motion Modeller's Toolkit (gmpe-smtk) includes an example of conditional simulation of ground motion fields using hazardlib in conditional_simulation.py, though the current issue envisages a much tighter coupling with the engine's scenario calculator architecture, using the formulation from Appendix B of Engler et al. (2022)[^4] to compute the conditioned mean and within-event and between-event residuals, and permitting users to input station data directly.
    [^9]: Loth, C., & Baker, J. W. (2013). A spatial cross‐correlation model of spectral accelerations at multiple periods. Earthquake Engineering & Structural Dynamics, 42(3), 397–417. DOI: https://doi.org/10.1002/eqe.2212
    [^10]: Du, W., & Ning, C. L. (2021). Modeling spatial cross-correlation of multiple ground motion intensity measures (SAs, PGA, PGV, Ia, CAV, and significant durations) based on principal component and geostatistical analyses. Earthquake Spectra, 37(1), 486–504. DOI: https://doi.org/10.1177/8755293020952442

    proposal 
    opened by raoanirudh 0
Releases(v3.15.0)
  • v3.15.0(Sep 12, 2022)

    [Michele Simionato (@micheles)]

    • Added a check on sum(srcs_weights) == 1 for mutex sources

    [Kendra Johnson (@kejohnso)]

    • Fixed disaggregation by lon, lat in presence of multiFaultSources

    [Michele Simionato (@micheles)]

    • Replaced command oq download_shakemap with oq shakemap2gmfs
    • Raised an error for missing required IMTs in ShakeMap grid files
    • Extended the custom_site_id to 8 characters
    • Restricted the accepted characters in risk IDs
    • Extended disagg_by_src to mutually exclusive sources (e.g. Japan) and managed "colon" sources specially

    [Anne Hulsey (@annehulsey)]

    • Contributed Mag_Dist_TRT and Mag_Dist_TRT_Eps disaggregations

    [Michele Simionato (@micheles)]

    • Internal: added a way to disable the DbServer from openquake.cfg or by setting OQ_DATABASE=local
    • Implemented total_losses, even for insurance calculations
    • Optimized "computing risk" in the event_based_risk calculator (~30% faster)
    • Changed the magnitude binning formula, thus fixing some disaggregation calculations (for instance when there is a single magnitude for a TRT)
    • Changed the aggrisk/aggcurves exporters in presence of insurance losses
    • Internal: changed how avg_losses, src_loss_table and agg_curves-stats are stored to simplify the management of secondary losses
    • Internal: we have now repeatable rupture IDs in classical PSHA

    [Pablo Iturrieta (@pabloitu)]

    • Added support for negative binomial temporal occurrence models

    [Marco Pagani (@mmpagani), Michele Simionato (@micheles)]

    • Added support for disaggregation in case of mutually exclusive sources

    [Michele Simionato (@micheles)]

    • Fixed error message when trying to compute disagg_by_src with too many sources: in some cases, it contained a misleading reference to point sources
    • Reorganized the Advanced Manual; changed the theme to be consistent with the OpenQuake manual
    • Internal: added command oq db engine_version
    • Added a check for required site parameters not passed correctly
    • Fixed ps_grid_spacing approximation when the grid is degenerate
    • Logging a warning when starting from an old hazard calculation
    • The extra fields of the site collection were lost when using --hc

    [Julián Santiago Montejo Espitia]

    • Implemented the Arteta et al. (2021) subduction model for Colombia

    [Michele Simionato (@micheles)]

    • Added host field to the job table (dbserver restart required)
    • --exports=csv was not honored for the realizations output; now it is

    [Paul Henshaw (@pslh), Sandra Giacomini]

    • Ported the OpenQuake manual from latex to reStructuredText format

    [Michele Simionato (@micheles)]

    • Automatically entered sequential mode if there is not enough memory
    • Raised an early error for missing risk IDs in the vulnerability files
    • Changed the definition of aggrisk again to ensure consistency with the average losses

    [Tom Son (@SnowNooDLe)]

    • Added width and hypo_depth estimation to Campbell and Bozorgnia (2014)

    [Michele Simionato (@micheles)]

    • Improved the precision of the ps_grid_spacing approximation
    • Added a check for missing mags when using GMPETables
    • Fixed a bug in upgrade_nrml -m for point sources with different usd/lsd
    • Automatically discard irrelevant TRTs in disaggregation calculations

    [Ashta Poudel, Anirudh Rao (@raoanirudh), Michele Simionato (@micheles)]

    • Added the ability to run connectivity analysis in event_based_damage and scenario_damage calculations with an appropriate exposure

    [Tom Son (@SnowNooDLe), Michele Simionato (@micheles)]

    • Added ztor estimation to Campbell and Bozorgnia (2014)

    [Michele Simionato (@micheles)]

    • Internal: removed REQUIRES_COMPUTED_PARAMETERS
    • Using PointMSR when the reqv approximation is enabled
    • Internal: changed the rupture storage for classical calculations
    • Optimized rupture instantiation for point sources
    • Optimized distance calculations for point sources

    [Tom Son (@SnowNooDLe), Claudio Schill]

    • Simple performance improvement of Kuehn et al. 2020 model

    [Michele Simionato (@micheles)]

    • Changed again the string representation of logic tree paths and added a utility hazardlib.lt.build to build trees from literal lists
    • Fixed the field source_info.trti in the datastore to point to the correct tectonic region type index and not to zero
    • Added a check for inconsistent IDs across different kinds of risk functions (i.e. fragility and consequence)
    • Fixed a logging statement that could run out of memory in large calculations
    • Optimized iter_ruptures for point sources by vectorizing the generation of planar surfaces by magnitude, nodal plane and hypocenter

    [Tom Son (@SnowNooDLe)]

    • Implemented a missing piece in the Chiou & Youngs (2014) model: the predicted PSA value at T ≤ 0.3s should be set equal to the value of PGA when it falls below the predicted PGA

    [Marco Pagani (@mmpagani)]

    • Added the possibility of disaggregating in terms of epsilon*
    • Added a method to compute the cross-correlation matrix
    • Added Hassani & Atkinson (2018)
    • Added Hassani & Atkinson (2020)

    [Michele Simionato (@micheles)]

    • Fixed disaggregation returning NaNs in some situations with nonParametric/multiFaultSources
    • Bug fix: not storing far away ruptures coming from multiFaultSources
    • Implemented CScalingMSR
    • Optimized context collapsing in classical calculations
    • Setting ps_grid_spacing now sets the pointsource_distance too
    • Saving memory in preclassical calculations on machines with 8 cores or less
    • Changed the magnitude-dependent maximum_distance feature to discard ruptures below minmag and above maxmag
    • Added the ability to estimate the runtime of a calculation by using the --sample-sources option
    • Fixed a wrong formula in modifiable_gmpe.add_between_within_stds
    • Reduced the stress on the memory in classical calculations, thus improving the performance
    • Setting the truncation_level to the empty string is now forbidden; some GMF calculations not setting truncation_level can now give different results since truncation_level=None is now replaced with truncation_level=99
  • v3.14.0(Apr 12, 2022)

    Release 3.14.0

    [Michele Simionato (@micheles)]

    • Changed the definition of aggrisk: dividing by the effective time
    • Internal: removed flag save_disk_space since now it is always on
    • Slightly changed the collapsing of nodal planes and hypocenters in presence of the equivalent distance approximation (reqv)
    • Extended oq reduce_sm to multiFaultSources
    • Fixed the check on unique section IDs for multiFaultSources
    • Implemented multi-aggregation with a syntax like aggregate_by=taxonomy,region;taxonomy;region
    • Removed the obsolete commands oq to_shapefile and oq from_shapefile and turned pyshp into an optional dependency
    • Setting num_rlzs_disagg=0 is now valid and it means considering all realizations in a disaggregation calculation
    • Rounded the magnitudes in multiFaultSources to two digits

    [Marco Pagani (@mmpagani)]

    • Extended ModifiableGMPE to work with GMPETable and subclasses

    [Michele Simionato (@micheles)]

    • Upgraded shapely from version 1.7 to version 1.8: this causes slight changes in the results for most calculations
    • Removed the not used (and not working) functionality applyToSourceType
    • Raised an error when the total standard deviation is zero, unless truncation_level is set to zero

    [Tom Son (@SnowNooDLe)]

    • Fixed a typo and a few bugs within Kuehn et al. (2020) model to include Z2.5 when the given region is JAPAN

    [Michele Simionato (@micheles)]

    • Changed /extract/events to return events sorted by ID
    • Changed the default amplification method to "convolution"
    • Fixed a bug with discard_trts sometimes discarding too much
    • Raised a helpful error message when ensurepip is missing
    • Fixed parentdir bug in event_based_damage
    • Fixed sorting bug in the /v1/calc/run web API
    • Internal: introduced limited unique rupture IDs in classical calculations with few sites

    [Prajakta Jadhav (@Prajakta-Jadhav-25), Dharma Wijewickreme (@Dharma-Wijewickreme)]

    • Added GMPE Youd et al. (2002) and the corresponding site parameters

    [Michele Simionato (@micheles)]

    • Fixed the exporter aggrisk-stats in the case of zero losses
    • Vectorized all GMPEs and forbade non-vectorized GMPEs
    • Raised the limit to 94 GMPEs per tectonic region type
    • Optimized the NBCC2015_AA13 GMPEs
    • Optimized the GMPETable and the derived NGAEast GMPEs
    • Fixed a 32/64 bit bug in oq export loss_maps-stats

    [Marco Pagani (@mmpagani)]

    • Added a more flexible version of the GC2 implementation
    • Added caching of distances in multi fault ruptures
    • Added the NRCan site term to the modifiable GMPE

    [Michele Simionato (@micheles)]

    • Optimized .get_bounding_box, .polygon and .mesh_size for MultiFaultSources
    • Fixed bug in presence of mixed vectorized/nonvectorized GMPEs
    • Extended oq postzip to multiple files and oq abort to multiple jobs
    • Internal: changed install.py to install the venv in /opt/openquake/venv
    • Fixed a BOM issue on Windows when reading job.ini files
  • v3.11.5(Feb 10, 2022)

  • v3.13.0(Jan 25, 2022)

    [Michele Simionato (@micheles)]

    • Improved the precision of the pointsource_distance approximation
    • Added command oq show rlz:<no>
    • Internal: added an environment variable OQ_DATABASE

    [Manuela Villani (@mvillani)]

    • Added a function to the modifiable GMPE to convert ground motions between different representations of the horizontal component

    [Kendra Johnson (@kejohnso)]

    • Implemented possibility of assigning the parameters floating_x_step and floating_y_step for kite fault sources in the job configuration file

    [Michele Simionato (@micheles)]

    • The branchID is now autogenerated in the gsim logic tree files, thus solving the issue of wrong branch paths for duplicated branchIDs
    • Added a check for missing gsim information in the job.ini file
    • Fixed the case of continuous fragility functions with minIML=noDamageLimit

    [Miguel Leonardo-Suárez (@mleonardos)]

    • Added GMPE from Jaimes et al. (2020) for Mexican intraslab earthquakes

    [Michele Simionato (@micheles)]

    • Enforced ps_grid_spacing <= pointsource_distance
    • Internal: added command oq plot source_data?
    • The engine is now splitting the MultiFaultSources, thus improving the task distribution

    [Claudia Mascandola (@mascandola)]

    • Added a new class to the abrahamson_2015 gmm.
    • Added a new class to the lanzano_luzi_2019 and skarlatoudis_2013 gmms

    [Marco Pagani (@mmpagani), Shreyasvi Chandrasekhar (@Shreyasvi91)]

    • Added GMM from Bora et al. (2019)
    • Fixed bug in the multifault surface when defined using kite fault surfaces

    [Giuseppina Tusa (@gtus23)]

    • Added a new gsim file tusa_langer_azzaro_2019.py to implement the GMMs from Tusa et al. (2020).

    [Michele Simionato (@micheles)]

    • Added command oq compare uhs CALC_1 CALC_2
    • split_sources=false is now honored in disaggregation calculations
    • Internal: rup/src_id now refers to the row in the source_info table

    [Miguel Leonardo-Suárez (@mleonardos)]

    • Added the GMPE Arroyo et al. (2010) for Mexican subduction interface events

    [Marco Pagani (@mmpagani)]

    • Added a new method to the modifiable GMPE with which it is possible to compute spatially correlated ground-motion fields even when the initial GMM provides only the total standard deviation
    • Fixed a bug in the modify_recompute_mmax
    • Added a get_coeffs method to the CoeffTable class
    • Added support for EAS, FAS, DRVT intensity measure types

    [Michele Simionato (@micheles)]

    • Extended the mag-dependent filtering to the event based calculator
    • The flag discrete_damage_distribution=true was incorrectly ignored when computing the consequences
    • Implemented reaggregate_by feature
    • Supported the custom_site_id in the GMF exporters
    • Bug fix: the site collection of the child calculation was ignored when using the --hazard-calculation-id option
    • Supported Python 3.9 and deprecated Python 3.6
    • Extended oq prepare_site_model to support .csv.gz files
    • Solved the issue of "compute gmfs" slow tasks in event_based and used the same approach in classical calculations too
    • Made sure valid.gsim instantiates the GSIM
    • ShakeMap calculations failing with a nonpositive definite correlation matrix now point out to the manual for the solution of the problem
    • Introduced the GodaAtkinson2009 cross correlation between event model
    • Specifying consequence files without fragility files now raises an error
    • Fixed a bug in event_based_risk with nontrivial taxonomy mapping producing NaNs in the event loss table
    • Internal: added kubernetes support from the WebUI

    [Shreyasvi Chandrasekhar (@Shreyasvi91)]

    • Added a new GMPE for significant duration proposed by Bahrampouri et al. (2021)

    [Claudia Mascandola (@mascandola)]

    • Added the computation of tau and phi stdevs to the sgobba_2020 GMPE
    • Added a new class to the lanzano_2019 gmm.

    [Michele Simionato (@micheles)]

    • Changed completely the storage of the PoEs and reduced the memory consumption in classical calculations (plus 4x speedup in "postclassical")
    • Changed the behavior of sites_slice
    • Changed custom_site_id to an ASCII string up to 6 characters
    • Fixed the error raised in presence of a mag-dep distance for a tectonic region type and a scalar distance for another one

    [Yen-Shin Chen (@vup1120)]

    • Added the Thingbaijam et al. (2017) Magnitude Scaling Law for Strike-slip

    [Michele Simionato (@micheles)]

    • Changed the API of ContextMaker.get_mean_stds
    • Extended the WebUI to run sensitivity analysis calculations
    • Changed the string representation of logic tree paths and enforced a maximum of 64 branches per branchset
    • Added command oq info disagg
    • Accepted site models with missing parameters by using the global site parameters instead
    • Supported the syntax source_model_logic_tree_file = ${mosaic}/XXX/in/ssmLT.xml
    • Fixed a performance bug with ignore_master_seed=true
    • Added a command oq info cfg to show the configuration file paths
    • Added a check on the intensity measure levels when --hc is used
    • Bug fix: pointsource_distance = 0 was not honored
    • Fixed a small bug of oq zip job_haz.ini -r job_risk.ini: now it works even if the oqdata directory is empty
    • Optimized the aggregation of losses in event_based_risk and made it possible to aggregate by site_id for more than 65,536 sites
    • Fixed the calculation of average insured losses with a nontrivial taxonomy mapping: now the insured losses are computed before the average procedure, not after
    • Unified scenario_risk with event_based_risk, changing the numbers when producing discrete damage distributions
    • Added aggrisk output to event based damage calculation
    • Added parameter discrete_damage_distribution in scenario damage calculations and changed the default behavior
    • Deprecated consequence models in XML format
    • Event based damage calculations now explicitly require specifying number_of_logic_tree_samples (before, a default of 1 was assumed)

    [Elena Manea (@manea), Laurentiu Danciu (@danciul)]

    • Added the GMPE Manea (2021)

    [Michele Simionato (@micheles)]

    • Added a check against duplicated branchset IDs
    • Improved error checking when reading the taxonomy mapping file
    • Renamed conversion -> risk_id in the header of the taxonomy mapping file

    [Antonio Ettorre (@vot4anto)]

    • Bumped h5py to version 3.1.0

    [Michele Simionato (@micheles)]

    • Renamed the parameter individual_curves -> individual_rlzs
    • Reduced the number of disaggregation outputs and removed the long-time deprecated XML exporters
    • Fixed the ShakeMap calculator failing with a TypeError: get_array_usgs_id() got an unexpected keyword argument 'id'
    • Added conseq_ratio in the aggcurves exporter for event_based_damage
    • Added a conditional_spectrum calculator
    • Fixed an array<->scalar bug in abrahamson_gulerce_2020
    • Restored the classical tiling calculator
  • v3.12.1(Nov 10, 2021)

    [Michele Simionato (@micheles)]

    • Fixed a bug in event_based_risk with nontrivial taxonomy mapping producing NaNs in the event loss table
    • Bug fix: pointsource_distance = 0 was not honored
    • Improved the universal installer when installing over a previous installation from packages
    • Fixed an error in oq zip job_haz.ini -r job_risk.ini
    • Fixed the disaggregation Mag_Lon_Lat exporter header mixup
    • Fixed the ShakeMap calculator failing with a TypeError: get_array_usgs_id() got an unexpected keyword argument 'id'
    • Fixed the Abrahamson Gulerce GMPE failing with a scalar<->array error
  • v3.11.4(Sep 8, 2021)

    [Michele Simionato (@micheles)]

    • Fixed a bug in the adjustment term in NSHMP2014 breaking the USA model
    • Fixed the sanity check in event_based_damage giving false warnings
    • Fixed the corner case when there are zero events per realization in scenario_damage
  • v3.12.0(Sep 6, 2021)

    [Marco Pagani (@mmpagani)]

    • Updated verification tables for Abrahamson et al. (2014) and checked values with other public resources.

    [Michele Simionato (@micheles)]

    • Added command oq info consequences
    • Improved error message for area_source_discretization too large
    • Improved command oq info exports
    • Internal: changed the signature of hazardlib.calc.hazard_curve.classical
    • Extended the multi-rupture scenario calculator to multiple TRTs
    • Removed the experimental feature pointsource_distance=?
    • Refactored the GMPE tests, with a speedup of 1-14 times
    • Added a script utils/build_vtable to build verification tables
    • oq info gsim_logic_tree.xml now displays the logic tree
    • Fixed a bug in the adjustment term in NSHMP2014 breaking the USA model

    [Graeme Weatherill (@g-weatherill)]

    • Implements Abrahamson & Gulerce (2020) NGA Subduction GMPE

    [Nico Kuehn (@nikuehn)/Graeme Weatherill (@g-weatherill)]

    • Implements Kuehn et al. (2020) NGA Subduction GMPE

    [Chung-Han Chan/Jia-Cian Gao]

    • Implements Lin et al. (2011)

    [Graeme Weatherill (@g-weatherill)/Nico Kuehn (@nikuehn)]

    • Implements Si et al. (2020) NGA Subduction GMPE

    [Michele Simionato (@micheles)]

    • There is now a huge speedup when computing the hazard curve statistics if numba is available
    • Made it possible to compute consequences in presence of a taxonomy mapping
    • Fixed a bug in get_available_gsims: GSIM aliases were not considered
    • Optimized the single site case by splitting the sources less
    • Restricted the acceptable methods in GMPE subclasses

    [Claudia Mascandola (@mascandola)]

    • Added the Lanzano et al. (2020) GMPE

    [Stanley Sayson (@stansays)]

    • Added the Stewart et al. (2016) GMPE for V/H
    • Added the Bozorgnia and Campbell (2016) GMPE for V/H
    • Added the Gulerce and Abrahamson (2011) GMPE
    • Corrected Campbell and Bozorgnia (2014) GMPE

    [Michele Simionato (@micheles)]

    • Fixed a subtle bug: in presence of a nontrivial taxonomy mapping, loss curves could not be computed due to duplicated event IDs in the event loss table coming from an int->float conversion
    • Enforced a naming convention on the coefficient tables (names must start with COEFFS)
    • Replaced IMT classes with factory functions
    • Changed the minimum_distance from a parameter of the GMPE to a parameter in the job.ini
    • Supported consequences split in multiple files

    [Claudia Mascandola (@mascandola)]

    • Added the Sgobba et al. (2020) GMPE

    [Michele Simionato (@micheles)]

    • Improved the warning on non-contributing TRTs and made it visible for all calculators
    • Fixed a bug in scenarios from CSV ruptures with wrong TRTs
    • Added a limit of 12 characters to IMT names
    • Forbade multiple inheritance in GMPE hierarchies
    • Added parameter ignore_encoding_errors to the job.ini
    • Extended the damage calculators to generic consequences
    • Renamed cname -> consequence in the CSV input files
    • Made sure the CSV writer writes in UTF-8

    [Graeme Weatherill (@g-weatherill)]

    • Updates Kotha et al. (2020) slope/geology model coefficients

    [Michele Simionato (@micheles)]

    • Improved post_risk to use all the cores in a cluster, since it was using the master only
    • Improved the validation of the investigation_time in event_based_damage
    • Renamed the losses_by_event CSV exporter to risk_by_event and made it work consistently for losses, damages and consequences; also removed the no_damage field

    [Marco Pagani (@mmpagani), Michele Simionato (@micheles)]

    • Implemented MultiFaultSources
    • Added method for computing rjb to kite surfaces
    • Added support for new epistemic uncertainties in the SSC LT

    [Michele Simionato (@micheles)]

    • Fixed newlines in the CSV exports on Windows

    [Graeme Weatherill (@g-weatherill)]

    • Added Ameri (2014) GMPE for the Rjb case

    [Michele Simionato (@micheles)]

    • Optimized the slow tasks in event_based calculations
    • Added an early check for fragility functions in place of vulnerability functions or viceversa

    [Marco Pagani (@mmpagani)]

    • Numeric fix to the amplification with the convolution method
    • Implemented the BakerJayaram2008 cross correlation model
    • Fixed the calculation of distances for kite surfaces with Nan values

    [Michele Simionato (@micheles)]

    • Fixed logic tree bug: MultiMFDs were not modified
    • Internal: added a view composite_source_model to show the sources by group

    [Nicolas Schmid (@schmidni)]

    • Added the possibility to use *.shp files instead of *.xml files when doing risk calculations from ShakeMaps

    [Michele Simionato (@micheles)]

    • Rewritten the event_based_damage calculation to support aggregate_by
    • Made it possible to run an event based risk calculation starting from a parent ran by a different user

    [Pablo Heresi (@pheresi)]

    • Implemented Idini et al (2017) GSIM.
    • Added dynamic site parameter 'soiltype'

    [Michele Simionato (@micheles)]

    • Added support for traditional disaggregation
    • Removed the global site parameter reference_siteclass and turned backarc, z1pt0 and z2pt5 into dynamic site parameters
    • Internal: storing the SiteCollection in a pandas-friendly way
    • Added HDF5 exporter/importer for the GMFs
    • Replaced XML exposures with CSV exposures in the demos

    [Claudia Mascandola (@mascandola)]

    • Fix to LanzanoEtAl2016 in presence of a "bas" term in the site model

    [Nicolas Schmid (@schmidni)]

    • Improve performance for ShakeMap calculations when spatialcorr and crosscorr are both set to 'no'
    • Add feature to do ShakeMap calculations for vulnerability models using MMI.

    [Michele Simionato (@micheles)]

    • Added a flag ignore_master_seed (false by default)
    • Estimated the uncertainty on the losses due to the uncertainty in the vulnerability functions in event_based_risk and scenario_risk calculations
    • Supported exposures with generic CSV fields thanks to the exposureFields mapping
    • Honored custom_site_id in the hazard curves and UHS CSV exporters
    • Added a check for the case of aValue=-Inf in the truncatedGR MFD
    • Extended the engine to read XML ShakeMaps from arbitrary sources (in particular local path names and web sites different from the USGS site)
    • Fixed readinput.get_ruptures to be able to read ruptures in engine 3.11 format
    • scenario_risk calculations starting from ruptures in CSV format now honor the parameter number_of_ground_motion_fields

    [Nicolas Schmid (@schmidni)]

    • Optimized spatial covariance calculations for ShakeMaps (more than 10x)
    • Adjusted logic in cross correlation matrix for ShakeMaps; now calculations are skipped for corr='no'

    [Michele Simionato (@micheles)]

    • Added a cholesky_limit to forbid large Cholesky decompositions in ShakeMap calculations
    • Weighted the heavy sources in parallel in event based calculations
    • Supported zero coefficient of variations with the beta distribution
    • Internal: changed how the agg_loss_table is stored
    • Fixed the avg_losses exporter when aggregate_by=id
    • Fully merged the calculators scenario_risk, event_based_risk and ebrisk and ensured independence from the number of tasks even for the "BT" and "PM" distributions
    • Storing the agg_loss_table as 64 bit floats instead of 32 bit floats
    • Changed the algorithm used to generate the epsilons to avoid storing the epsilon matrix
  • v3.11.3(Mar 22, 2021)

    [Michele Simionato (@micheles)]

    • Fixed hdf5.dumps that was generating invalid JSON for Windows pathnames, thus breaking the QGIS plugin on Windows
    • Fix a bug when reusing a hazard calculation without aggregate_by for a risk calculation with aggregate_by
    • Fixed the aggregate curves exporter for aggregate_by=id: it was exporting b'asset_id' instead of asset_id
  • v3.11.2(Mar 2, 2021)

  • v3.11.1(Mar 1, 2021)

  • v3.11.0(Feb 23, 2021)

    [Michele Simionato (@micheles)]

    • Extended the collapse_logic_tree feature to scenarios and event based calculations
    • Extended the taxonomy mapping feature to multiple loss types
    • The error was not stored in the database if the calculation failed before starting
    • Made ground_motion_fields=true mandatory in event_based_risk

    [Robin Gee (@rcgee)]

    • Added a check for missing soil_intensities in classical calculations with site amplification

    [Michele Simionato (@micheles)]

    • Documented all the parameters in a job.ini file, and removed some obsolete ones
    • Added a CSV exporter for the output avg_gmf
    • Fixed reporting in case of CorrelationButNoInterIntraStdDevs errors
    • Better error message when the rupture is far away from the sites
    • Made the calculation report exportable with --exports rst
    • The boolean fields vs30measured and backarc were not cast correctly when read from a CSV field (the engine always read them as True)
    • Extended oq plot to draw more than 2 plots
    • Raised an early error for zero probabilities in the hypocenter distribution or the nodal plane distribution
    • Extended the autostart zmq distribution logic to celery and dask
    • Stored the _poes during the classical phase and not after, to save time
    • Implemented a memory-saving logic in the classical calculator based on the memory.limit parameter in openquake.cfg

    [Richard Styron (@cossatot)]

    • Added TaperedGRMFD to hazardlib

    [Michele Simionato (@micheles)]

    • Fixed a wrong check failing in the case of multi-exposures with multiple cost types
    • Removed a check causing a false error "Missing vulnerability function for taxonomy"
    • Consequence functions associated with a taxonomy missing in the exposure are now simply discarded, instead of raising an error
    • Added a warning when there are zero losses for nonzero GMFs
    • Added a command oq plot avg_gmf?imt=IMT
    • Internal: stored avg_gmf as a DataFrame
    • Honored the individual_curves parameter in avg_losses, agg_losses and agg_curves (i.e. by default only the statistical results are exposed)
    • Refactored the oq commands and removed the redundant oq help since there is oq --help instead
    • Support for input URLs associated to an input archive
    • Introduced deformation_component parameter in the secondary perils
    • Optimized the storage of the risk model with a speedup of 60x for a calculation with ~50,000 fragility functions (2 minutes -> 2 seconds) and a 3x reduction in disk space
    • Accepted aggregate_by=id in scenario/event based calculations
    • Accepted aggregate_by=site_id in scenario/event based calculations
    • Removed the generation of asset loss maps from event_based_risk
    • Made the "Aggregate Losses" output in scenario_risk consistent with event_based_risk and scenario_risk and supported aggregate_by
    • Perform the disaggregation checks before starting the classical part
    • Changed the "Aggregate Loss Curves" CSV exporter to generate a file for each realization, for consistency with the other exporters
    • The ebrisk outputs "Total Losses" and "Total Loss Curves" are now included in the outputs "Aggregate Losses" and "Aggregate Curves"
    • Introduced an agg_loss_table dataset and optimized the generation of aggregate loss curves (up to 100x speedup)
    • Removed misleading zero losses in agg_losses.csv
    • Fixed oq recompute_losses and renamed it to oq reaggregate
    • Bug fix: ignore_covs=true now sets the coefficient of variations to zero

    [Anirudh Rao (@raoanirudh)]

    • Improved error handling of bad or zero coefficients of variation for the Beta distribution for vulnerability

    [Michele Simionato (@micheles)]

    • Fixed 32 bit rounding issues in scenario_risk: now the total losses and the sum of the average losses are much closer
    • Internal: made the loss type occupants a bit less special
    • Documented oq to_nrml

    [Claudia Mascandola (@mascandola)]

    • Added the Lanzano et al. (2019) GMPE

    [Michele Simionato (@micheles)]

    • Honored minimum_asset_loss also in the fully aggregated loss table, not only in the partially aggregated loss tables and average losses
    • Bug fixed: the log was disappearing in presence of an unrecognized variable in the job.ini
    • Implemented minimum_asset_loss in scenario_risk for consistency with the ebrisk calculator
    • Added a command oq plot gridded_sources?
    • Fixed oq recompute_losses to expose the outputs to the database
    • Fixed oq engine --run --params that was not working for the pointsource_distance
    • Changed the meaning of the pointsource_distance approximation

    [Marco Pagani (@mmpagani), Michele Simionato (@micheles), Thomas Chartier (@tomchartier)]

    • Added experimental version of KiteSource and KiteSurface

    [Michele Simionato (@micheles)]

    • Changed the serialization of ruptures to support MultiSurfaces
    • Fixed a small bug of logic in the WebUI: if the authentication is turned off, everyone must be able to see all calculations
    • Fixed a bug in the calculation of averages losses in scenario_risk calculations in presence of sites with zero hazard
    • Optimized the prefiltering by using a KDTree
    • Experimental: implemented gridding of point sources
    • Reduced slow tasks due to big complex fault sources
    • Moved the parameter num_cores into openquake.cfg
    • Internal: introduced the environment variable OQ_REDUCE
    • Using pandas to export the GMF in CSV format
    • Internal: required h5py == 2.10.0
    • Internal: made the classical ruptures pandas-friendly
    • Internal: made the damage distributions pandas-friendly

    [Marco Pagani (@mmpagani)]

    • Added a new type of uncertainty for the seismic source characterisation logic tree called TruncatedGRFromSlipAbsolute
    • Added a get_fault_surface_area method to sources

    [Michele Simionato (@micheles)]

    • Changed the source seed algorithm in event based calculations
    • Added an estimate of the portfolio damage error due to the seed dependency
    • Stored the damage distributions in a pandas-friendly way and extended DataStore.read_df to accept multi-indices

    [Viktor Polak (@viktor76525)]

    • Added the Phung et al. (2020) GMPE

    [Michele Simionato (@micheles)]

    • Implemented truncGutenbergRichterMFD from slip rate and rigidity
    • Fixed bug when computing the damage distributions per asset and event
    • Simplified/optimized the UCERF filtering
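
The slip-based MFD entries above (TruncatedGRFromSlipAbsolute, truncGutenbergRichterMFD from slip rate and rigidity) rest on a standard seismological identity: a fault's seismic moment rate is rigidity times area times slip rate, and that moment budget constrains the Gutenberg-Richter occurrence rates. A minimal sketch, not the engine's code, using a purely hypothetical fault:

```python
import math

# Sketch (not the engine's implementation): moment rate from slip.
def moment_rate(rigidity_pa, area_m2, slip_rate_m_per_yr):
    """Seismic moment rate M0_dot = mu * A * s_dot, in N*m per year."""
    return rigidity_pa * area_m2 * slip_rate_m_per_yr

# Hypothetical fault: rigidity 30 GPa, 50 km x 15 km plane, 5 mm/yr slip rate
m0_dot = moment_rate(30e9, 50e3 * 15e3, 5e-3)

# Hanks & Kanamori (1979): Mw = (log10(M0 in N*m) - 9.05) / 1.5;
# the magnitude if the yearly moment budget were released in one event
mw_if_released_in_one_event = (math.log10(m0_dot) - 9.05) / 1.5
```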

    [Viktor Polak (@viktor76525)]

    • Added the Chao et al. (2020) GMPE

    [Michele Simionato (@micheles)]

    • Introduced an early memory check in classical calculations
    • Reduced the memory occupation in classical calculations
    • Implemented AvgPoeGMPE
    • Forbade the use of aggregate_by except in ebrisk calculations
    • Added a check on valid branch ID names: only letters, digits and the characters "#:-_." are accepted
    • Huge performance improvement for very complex logic trees
    • Shortened the logic tree paths when exporting the realizations
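
The branch ID check above can be sketched as a regular expression; the pattern below is an assumption matching the stated rule (letters, digits and the characters "#:-_."), not necessarily the engine's actual implementation:

```python
import re

# Assumed pattern for the stated rule; the engine's validator may differ.
# \w with re.ASCII covers letters, digits and underscore.
BRANCH_ID_RE = re.compile(r'^[\w#:\-.]+$', re.ASCII)

def is_valid_branch_id(branch_id):
    """Return True if branch_id uses only the allowed characters."""
    return bool(BRANCH_ID_RE.match(branch_id))
```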

    [Graeme Weatherill (@g-weatherill)]

    • Refactor of the Kotha et al. (2020) GMM and its adjustments for ESHM20

    [Michele Simionato (@micheles)]

    • Huge speedup in models with src_multiplicity > 1
    • Fixed bug in source model logic tree sampling with more than 2 branchsets
    • Fixed hazard maps all zeros for individual_curves=true and more than 1 site
    • Fixed a bug in oq prepare_site_model when sites.csv is the same as the vs30.csv file and there is a grid spacing
    • Speeding up the preclassical calculator
    • Added an entry point /extract/eids_by_gsim for the QGIS plugin
    • Internal: automatically convert the source IDs into unique IDs
    • Changed scenario calculations to depend on the ses_seed, not the random_seed
    • Added check on the versions of numpy, scipy and pandas between master and workers
    • Added a check for large seed dependency in the GMFs and an estimate of the portfolio error due to the seed dependency

    [Viktor Polak (@viktor76525)]

    • Added fpeak site parameter
    • Added the Hassani and Atkinson (2020) GMPE

    [Marco Pagani (@mmpagani)]

    • Added a check on DEFINED_FOR_REFERENCE_VELOCITY when using amplification
    • Added a method to create a TruncatedGRMFD from a value of scalar seismic moment
    • Added a method to the modifiable GMPE to add (or subtract) a delta std
    • Added a method to the modifiable GMPE to set the total std as the sum of tau plus a delta
  • v3.10.1 (Oct 18, 2020)

    [Matteo Nastasi (@nastasi-oq)]

    • Add info to the documentation about the OpenQuake manual path for the Linux and macOS installers

    [Laurentiu Danciu (@danciul) and Athanasios Papadopoulos]

    • Implemented intensity prediction equations for use in the Swiss Risk Model. The new IPEs refer to models obtained from the ECOS (2009), Faccioli and Cauzzi (2006), Bindi et al. (2011), and Baumont et al. (2018) studies.
    • Added new float site parameter 'amplfactor'
    • Extended the ModifiableGMPE class to allow amplification of the intensity of the parent IPE based on the ‘amplfactor’ site parameter

    [Anirudh Rao (@raoanirudh)]

    • Fixed the glossary in the manual

    [Michele Simionato (@micheles)]

    • Avoided storing too much performance_data when pointsource_distance is on
    • Fixed performance regression in classical_risk, classical_damage, classical_bcr
    • Fixed oq engine --run job.ini for ShakeMap calculations
  • v3.10.0 (Sep 29, 2020)

    [Richard Styron (@cossatot)]

    • Added secondary perils ZhuLiquefactionGeneral and HazusLateralSpreading, supplementing HazusLiquefaction and NewmarkDisplacement

    [Michele Simionato (@micheles)]

    • Fixed a bug with site models containing non-float parameters
    • Raised the limit on the asset ID from 20 to 50 characters
    • Changed the /extract/events API to extract only the relevant events
    • Removed the GMF npz exporter
    • Speed-up risk saving in scenario_risk and scenario_damage

    [Antonio Ettorre (@vot4anto)]

    • Bumped GDAL to version 3.1.2

    [Michele Simionato (@micheles)]

    • Optimized scenario_damage for the case of many sites
    • Implemented secondary perils
    • Fixed a 32 bit/64 bit bug in oq prepare_site_model when sites.csv is the same as the vs30.csv file
    • Parallelized by GSIM when there is a single rupture

    [Francis Bernales (@ftbernales)]

    • Added the Stewart et al. (2016) GMPE
    • Added the Bozorgnia & Campbell (2016) GMPE
    • Added the Gulerce et al. (2017) GMPE

    [Michele Simionato (@micheles)]

    • Unified source model logic tree sampling with gsim logic tree sampling
    • Added early_latin and late_latin sampling algorithms
    • Changed the logic tree sampling algorithm and made it possible to use both early_weights and late_weights
    • Restored magnitude-dependent maximum distance
    • Displaying the hazard maps in the WebUI for debugging purposes
    • Used the hazard map to get the disaggregation IML from the disaggregation PoE and added a warning for zero hazard
    • Internal: implemented multi-run functionality (oq engine --multi --run)
    • Reduced tremendously the data transfer in disaggregation calculations
    • Internal: introduced compress/decompress utilities
    • Reduced the memory and disk space occupation in classical calculations with few sites; also changed slightly the rupture collapsing mechanism
    • In disaggregation, force poes_disagg == poes
    • Fixed multi-site disaggregation: ruptures far away were not discarded, just treated as being 9999 km away
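
The early_weights/late_weights entries above distinguish two ways of attaching weights to logic-tree samples. A simplified sketch of the two strategies (function and argument names are illustrative, not engine API): with early weights the branch weight influences which branches are drawn and every sample then counts equally; with late weights the draw is uniform and the branch weight is applied afterwards.

```python
import random

# Illustrative sketch; the engine's sampling algorithms differ in detail.
def sample(branches, weights, n, mode, seed=42):
    """Return n (branch, sample_weight) pairs under the given strategy."""
    rng = random.Random(seed)
    if mode == 'early_weights':
        # weights drive the draw; each sample gets an equal weight
        picked = rng.choices(branches, weights=weights, k=n)
        return [(b, 1.0 / n) for b in picked]
    elif mode == 'late_weights':
        # uniform draw; each sample is re-weighted by its branch weight
        picked = rng.choices(branches, k=n)
        tot = sum(weights[branches.index(b)] for b in picked)
        return [(b, weights[branches.index(b)] / tot) for b in picked]
    raise ValueError(mode)
```

In both modes the sample weights sum to one, but the estimators they produce converge differently for skewed branch weights.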

    [Marco Pagani (@mmpagani)]

    • Added a prototype implementation of the kernel method

    [Michele Simionato (@micheles)]

    • Added zipcode site parameter
    • Added command oq renumber_sm ssmLT.xml

    [Robin Gee (@rcgee)]

    • Set DEFINED_FOR_REFERENCE_VELOCITY for GMPEs modified for Switzerland

    [Michele Simionato (@micheles)]

    • Added parameter max_num_loss_curves to the job.ini file
    • Changed oq engine --reuse-hazard to just reuse the source model, if possible
    • Added command oq recompute_losses <calc_id> <aggregate_by>
    • Fixed noDamageLimit, minIML, maxIML not being honored in continuous fragility functions
    • Unified the scenario calculator with the event based one, with minor differences in the numbers akin to a change of seed
    • Fixed a bug in event based when a rupture occurs more than 65535 times
    • Added a demo EventBasedDamage
    • Fixed bug in event_based_damage: the number of buildings in no damage state was incorrect
    • Added commands oq nrml_to csv and oq nrml_to gpkg
    • Supported year and ses_id >= 65536 in event based

    [Graeme Weatherill (@g-weatherill)]

    • Implements a heteroskedastic standard deviation model for the Kotha et al. (2020) GMPE

    [Michele Simionato (@micheles)]

    • Called check_complex_fault when serializing the source in XML
    • Restored scenario_damage with fractional asset number
    • Added a view oq extract disagg_by_src
    • Fixed error with large ShakeMap calculations ('events' not found)
    • Raised an error when using disagg_by_src with too many point sources
    • The minimum_magnitude parameter was incorrectly ignored in UCERF

    [Iason Grigoratos (@jasongrig)]

    • Implemented the Zalachoris & Rathje (2019) GMM

    [Michele Simionato (@micheles)]

    • Optimized the disaggregation outputs, saving storage time

    [Graeme Weatherill (@g-weatherill)]

    • Adds PGV coefficients to USGS CEUS GMPE tables (where applicable)

    [Michele Simionato (@micheles)]

    • Removed the disagg_by_src exporter
    • Internal: added filtering features to the datastore
    • Calculations with a number of levels non-homogeneous across IMTs are now an error
    • Implemented rupture collapsing in disaggregation (off by default)
    • Fixed a bug in the dmg_by_event exporter: the damage distributions could be associated to the wrong GMPE in some cases
    • Solved a bug with nonparametric ruptures: due to rounding errors, the disaggregation matrix could contain (small) negative probabilities
    • Extended the scenario calculators to compute the statistical outputs if there is more than one GMPE
    • Fixed the formula used for the avg_damages-rlzs outputs in event based damage calculations
    • Raised an error if investigation_time is set in scenario calculations

    [Graeme Weatherill (@g-weatherill)]

    • Fixed a bug in the mixture model application when running multiple GMPEs

    [Michele Simionato (@micheles)]

    • Replaced the outputs losses_by_asset with avg_losses-rlzs, and dmg_by_asset with avg_damages-rlzs, for consistency with the event based outputs
    • Extended the /extract/ API to manage JSON and removed the oqparam API
    • Added a check on ebrisk to avoid generating too many loss curves
    • Introduced an output "Source Loss Table" for event based risk calculations
    • Raised an early error when max_sites_disagg is below the number of sites in disaggregation calculations
    • Extended the amplification framework to use different intensity levels for different amplification functions
    • Optimized the disaggregation in the case of multiple realizations
    • Fixed bug in GMF amplification without intensity_measure_types_and_levels
    • Optimized the computation of the disaggregation PMFs by orders of magnitude by using numpy.prod
    • Changed the disaggregation calculator to distribute by magnitude bin, thus reducing a lot the data transfer
    • Vectorized the disaggregation formula
    • Do not perform the disaggregation by epsilon when not required
    • Introduced management of uncertainty in the GMF amplification
    • Changed the disaggregation calculator to distribute by IMT, thus reducing a lot the data transfer in calculations with many IMTs
    • Changed /extract/disagg_layer to produce a single big layer
    • Changed the binning algorithm for lon, lat in disaggregation, to make sure that the number of bins is homogeneous across sites
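
The numpy.prod optimization mentioned above exploits the identity P(at least one exceedance) = 1 - prod(1 - p_i) for independent contributions, evaluated in a single vectorized call instead of a Python loop. A small sketch:

```python
import numpy as np

# Sketch of the identity behind the optimization: collapsing one axis of a
# probability matrix of independent contributions in one vectorized call.
def marginalize(pmf, axis):
    """Probability of at least one exceedance along `axis`."""
    return 1.0 - np.prod(1.0 - pmf, axis=axis)

pmf = np.array([[0.1, 0.2],
                [0.3, 0.4]])
collapsed = marginalize(pmf, axis=0)  # combine the two rows per column
```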

    [Marco Pagani (@mmpagani)]

    • Fixed a bug in the ParseNDKtoGCMT parser + updated tests.
    • Ported the serialise_to_hmtk_csv method from the corresponding class of the catalogue toolkit into the GCMTCatalogue class and added a test.
    • Added a modifiable GMPE using the site term of CY14.
    • Added a generalised modifiable GMPE. This first version allows the definition of the epsilon of the within-event residual.

    [Michele Simionato (@micheles)]

    • Introduced a mixed XML+HDF5 format for gridded sources
    • Internal: added a check on gridded sources: the arrays prob_occurs must have homogeneous length across ruptures
    • Removed the dependency on PyYAML, replaced the .yml files in the HMTK with .toml files and added a utility utils/yaml2toml
  • v3.9.0 (Apr 27, 2020)

    [Michele Simionato (@micheles)]

    • Fixed a type error in the command oq engine --run --params
    • Restored the flag split_sources for testing purposes
    • Fixed a BOM bug in CSV exposures
    • When exporting the loss curves per asset now we also export the loss ratio and the inverse return period, for consistency with the other exporters
    • Fixed the exporter of the loss curves per asset: due to an ordering bug in some cases it was exporting wrong losses
    • Added a flag save_disk_space to avoid storing the inputs
    • Changed the logic underlying the pointsource_distance approximation and added the syntax pointsource_distance=?
    • Logged a warning when the pointsource_distance is too small

    [Graeme Weatherill (@g-weatherill)]

    • Implemented Pitilakis et al. (2020) Site Amplification Model

    [Michele Simionato (@micheles)]

    • Fixed an export bug with modal_damage_state=true in scenario_damage calculations
    • Fixed a bug in calc_hazard_curves with multiple TRTs
    • Fixed how AvgGMPE was stored and made it applicable with correlation models if all underlying GMPEs are such

    [Paolo Tormene (@ptormene)]

    • Added a second tectonic region type to the EventBasedPSHA demo

    [Michele Simionato (@micheles)]

    • Fixed an ordering bug in /extract/rupture_info affecting the QGIS plugin
    • Fixed oq engine --eo output_id output_dir for the Full Report output
    • Added year and ses_id to the events table
    • Removed NaNs in the low return period part of the loss curves
    • Fixed the tot_curves and tot_losses exporters in ebrisk calculations
    • Reduced the rupture storage in classical calculations by using compression
    • Improved the task distribution in the classical calculator, avoiding generating too few or too many tasks
    • Enhanced oq check_input to check complex fault geometries
    • Added a warning against magnitude-dependent maximum_distance

    [Marco Pagani (@mmpagani)]

    • Fixed a bug in the coeff table of YEA97

    [Graeme Weatherill (@g-weatherill)]

    • Implemented support for Gaussian Mixture Model approach to characterise ground motion model uncertainty

    [Michele Simionato (@micheles)]

    • Enhanced oq reduce_sm to read the source models in parallel
    • Deprecated the usage of a different number of intensity levels per IMT

    [Matteo Nastasi (@nastasi-oq)]

    • Internal: added 'oq-taxonomy' to docker images

    [Michele Simionato (@micheles)]

    • Extended the pointsource_distance approximation to work on single site calculations, with a spectacular performance benefit in most calculations
    • Added Bindi2011, Bindi2014 and Cauzzi2014 scaled GMPEs contributed by the INGV
    • Added a check on classical calculations which are too large to run
    • Added a parameter collapse_level and a new collapsing algorithm
    • Added a check for missing TRTs in the GSIM logic tree file
    • Reduced the storage required for site specific calculations with complex logic trees by removing duplicated ruptures
    • Restored the computation of the mean disaggregation when multiple realizations are requested
    • Slightly changed the syntax of oq info (see oq info --help) and added information about the available IMTs, MFDs and source classes
    • Optimized get_composite_source_model (in the case of a complex source specific logic trees a speedup of 80x was measured)
    • Internal: fixed oq info source_model_logic_tree.xml
    • Avoided reading multiple times the source models in the case of complex logic trees
    • Moved the check on invalid TRTs earlier, before processing the source models
    • Removed the ucerf_classical calculator (just use the classical one)

    [Paolo Tormene (@ptormene)]

    • Added a warning in oq reduce_sm listing duplicate source IDs

    [Michele Simionato (@micheles)]

    • Improved oq reduce_sm to reduce also duplicated source IDs if they belong to different source types
    • Removed the ucerf_hazard calculator (just use the event_based one)
    • Changed the seed algorithm in all event based calculators including UCERF
    • Fixed the ShakeMap code to use the formula for the median and not the mean
    • Added a check on excessive data transfer in disaggregation calculations
    • Changed back the disaggregation calculator to read the rupture data from the workers, thus saving a lot of memory and time
    • Fixed a bug that made it impossible to abort/remove a failed task
    • Added extendModel feature to the source model logic tree parser
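
The ShakeMap median-vs-mean fix above comes down to a property of the lognormal distribution: for underlying normal parameters mu and sigma, the median is exp(mu) while the mean is exp(mu + sigma^2/2), so using the mean systematically overestimates typical ground motion. A sketch of the distinction:

```python
import math

# Lognormal with parameters (mu, sigma) of the underlying normal:
def lognormal_median(mu, sigma):
    """Median = exp(mu); independent of sigma."""
    return math.exp(mu)

def lognormal_mean(mu, sigma):
    """Mean = exp(mu + sigma**2 / 2); always above the median for sigma > 0."""
    return math.exp(mu + sigma ** 2 / 2.0)
```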

    [Graeme Weatherill (@g-weatherill)]

    • Fixed bug in the HMTK: the bin_width parameter was not passed to mtkActiveFaultModel.build_fault_model

    [Michele Simionato (@micheles)]

    • Avoided submitting too many tasks in the disaggregation calculator
    • Added a parameter discard_trts for manual reduction of GSIM logic tree
    • Fixed a bug in case of duplicated nodal planes affecting the Italy model
    • Removed dynamic reduction of the GSIM logic tree (i.e. now the logic tree is known upfront, before calculating the PoEs)

    [Paolo Tormene (@ptormene)]

    • Fixed an encoding issue in reading configuration files on Windows

    [Michele Simionato (@micheles)]

    • Internal: started the zmq workers when the DbServer starts
    • Fixed a bug when reading rupture.txt files
    • Internal: added an option --calc-id to oq run
    • Added a check against negative number of cores in openquake.cfg
    • Raised a clear error message if the enlarged bounding box of the sources does not contain any site or if it is larger than half the globe

    [Kendra Johnson (@kejohnso)]

    • Correction to catalogue plotting tool in hmtk to include the last bins in density plots

    [Paolo Tormene (@ptormene)]

    • Added Classical PSHA Non-parametric sources Demo

    [Robin Gee (@rcgee)]

    • Changed the header of the exported sigma_epsilon_XX.csv file to indicate that the values correspond to the inter-event sigma

    [Graeme Weatherill (@g-weatherill)]

    • Adds independent verification tables for the USGS CEUS models and revises implementation for collapsed epistemic uncertainty on sigma and site amplification
    • Enhances SERA adaptation of the Abrahamson et al. (2015) BC Hydro GMPE to add in a configurable smoothed tapering term on the forearc/backarc scaling

    [Michele Simionato (@micheles)]

    • Added a check on the engine version between master and workers

    [Paolo Tormene (@ptormene)]

    • Removed the multi_node flag, that is not used anymore

    [Michele Simionato (@micheles)]

    • Added a command oq postzip to send small calculations to the WebUI
    • Added a limit of 1000 sources when disagg_by_src=true
    • Internal: fixed oq export input -e zip that was flattening the tree structure of the input files in the exported zip archive
    • Implemented GMFs amplification
    • Introduced the flag approx_ddd to support the old algorithm in scenario_damage calculations; it is automatically used for exposures with fractional asset numbers

    [Paolo Tormene (@ptormene)]

    • Modified the server views in order to allow using numpy.load(allow_pickle=False) in the QGIS IRMT plugin
    • Internal: changed some copy.deepcopy calls into copy.copy in hazardlib

    [Michele Simionato (@micheles)]

    • Removed implicit intensity_measure_types_and_levels
    • Added a check to forbid case-similar headers in the exposures
    • Improved the error message in case of CSV exposures with wrong headers
    • Reduced the slow tasks issue in event_based/ebrisk with many sites
    • Enhanced oq compare to accept a file with the control sites
    • Improved the error message for duplicate sites
    • Speedup of the ebrisk calculator
    • Extended the minimum_intensity feature to the classical calculator
    • Solved a memory bug when using the nrcan site term: due to a deepcopy the engine could run out of memory in the workers for large site collections
    • Added a check to forbid multiple complexFaultGeometry nodes
    • Internal: we are now shutting down the ProcessPool explicitly in order to support Python 3.8
    • Internal: removed the class hazardlib.gsim.base.IPE
    • Changed the aggregate loss curves generation to not use the partial asset loss table, with a huge memory reduction
    • Extended oq check_input to accept multiple files
    • Changed the scenario damage calculator to use discrete damage distributions
    • Forced the "number" attribute in the exposure must be an integer in the range 1..65535, extrema included
  • v3.8.1 (Feb 12, 2020)

    [Michele Simionato (@micheles)]

    • Fixed random HDF5 bug in disaggregation calculations
    • Fixed memory issue in nrcan15_site_term.p
    • Fixed get_duplicates check in the SiteCollection
    • Fixed bug in case of MMI (log(imls) -> imls)
  • v3.8.0 (Jan 20, 2020)

    [Graeme Weatherill (@g-weatherill)]

    • Updates SERA Craton GMPE to incorporate NGA East site response and reflect changes in CEUS USGS model

    [Michele Simionato (@micheles)]

    • The total loss curves in event_based_risk are now built with pandas
    • Added an option oq engine --param to override the job.ini parameters
    • Internal: reduced the number of NGAEastUSGS classes from 39 to 1
    • Internal: reduced the number of NGAEast classes from 44 to 2
    • Internal: reduced the 15 NSHMP2014 classes to a single class
    • Internal: reduced the 22 NBCC2015_AA13 classes to a single class

    [Graeme Weatherill (@g-weatherill)]

    • Adds complete suite of GMPEs for the Central and Eastern US, as adopted within the 2018 US National Seismic Hazard Map
    • Implements NGA East site amplification model within NGA East Base class

    [Michele Simionato (@micheles)]

    • Implemented site amplification by convolution
    • Improved the error message if the event_id does not start from zero in the gmfs.csv files
    • Changed the rupture exporter to export LINESTRINGs instead of degenerate POLYGONs
    • Introduced minimum_loss_fraction functionality in ebrisk
    • Refined the rupture prefiltering mechanism, possibly changing the numbers in calculations with nonzero coefficients of variation
    • Optimized the generation of aggregate loss curves in ebrisk
    • Introduced an experimental AvgGMPE and used it to implement (optional) reduction of the gsim logic tree

    [Graeme Weatherill (@g-weatherill)]

    • Implemented Abrahamson et al (2018) update of the BC Hydro GMPE
    • Added configurable nonergodic sigma option to BC Hydro and SERA GMPEs
    • Small refactoring and bug fix in average SA GMPE

    [Michele Simionato (@micheles)]

    • Avoided reading multiple times the GSIM logic tree
    • Changed the GSIM logic tree sampling by ordering the branches by TRT
    • Ignored IMT-dependent weights when using sampling to make such calculations possible
    • Storing (partially) the asset loss table

    [Robin Gee (@rcgee)]

    • Set DEFINED_FOR_REFERENCE_VELOCITY in CampbellBozorgnia2003NSHMP2007

    [Graeme Weatherill (@g-weatherill)]

    • Re-adjustment of SERA Subduction model epistemic scaling factors

    [Michele Simionato (@micheles)]

    • Improved the task distribution in the ebrisk calculator
    • Fixed a bug in ebrisk with aggregate_by when building the rup_loss_table
    • Storing the asset loss table in scenario_risk, but only for assets and events above a loss_ratio_threshold parameter
    • Storing the asset damage table in scenario_damage and event based damage, but only for assets and events above a collapse_threshold parameter
    • Avoided transferring the GMFs upfront in scenario_damage, scenario_risk and event_based_damage

    [Daniele Viganò (@daniviga)]

    • Included pandas in the engine distribution

    [Michele Simionato (@micheles)]

    • Avoided reading multiple times the gsim logic tree file and related files
    • Added a check for duplicate sites in the site model file
    • Implemented an event_based_damage calculator
    • Added an API /v1/calc/ID/extract/gmf_data?event_id=XXX
    • Added an API /v1/calc/ID/extract/num_events
    • Fixed the /v1/calc/ID/status endpoint to return an error 404 when needed
    • Removed the "sites are overdetermined" check, since it now unneeded
    • Turned the calculation of consequences into a plugin architecture

    [Matteo Nastasi (@nastasi-oq)]

    • Add a '/v1/ini_defaults' Web API entry point to retrieve all default values for ini attributes (attributes without a default are not returned)

    [Michele Simionato (@micheles)]

    • Renamed rlzi -> rlz_id in the sigma-epsilon dataset and exporter
    • Renamed id -> asset_id in all the relevant CSV exporters
    • Renamed rlzi -> rlz_id in the dmg_by_event.csv output
    • Renamed rupid -> rup_id in the ruptures.csv output
    • Renamed id -> event_id in the events.csv output and gmfs.csv output
    • Renamed sid -> site_id in the gmfs.csv output
    • Renamed ordinal -> rlz_id in the realizations.csv output
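
Downstream scripts that consume the old CSV headers can adapt to the renames above with a simple mapping; the dict below covers the events.csv/gmfs.csv family (in the risk outputs, id becomes asset_id instead) and is illustrative only:

```python
# Illustrative header mapping for the renames listed above
# (events.csv / gmfs.csv / ruptures.csv / realizations.csv family).
RENAMES = {
    'id': 'event_id',     # events.csv, gmfs.csv (asset_id in risk outputs)
    'sid': 'site_id',     # gmfs.csv
    'rupid': 'rup_id',    # ruptures.csv
    'rlzi': 'rlz_id',     # dmg_by_event.csv
    'ordinal': 'rlz_id',  # realizations.csv
}

def rename_header(header):
    """Translate an old-style CSV header row to the new column names."""
    return [RENAMES.get(col, col) for col in header]
```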

    [Alberto Chiusole (@bebosudo)]

    • Changed how the available number of CPU cores is computed

    [Kendra Johnson (@kejohnso), Robin Gee (@rcgee)]

    • Added GMPEs for Rietbrock-Edwards (2019) and Yenier-Atkinson (2015)

    [Michele Simionato (@micheles)]

    • Added more checks on the IMTs and made it possible to import a GMF.csv file with more IMTs than needed
    • Enabled magnitude-dependent pointsource_distance
    • Removed the syntax for magnitude-dependent maximum distance, since now it can be automatically determined by the engine
    • Saving more information in the case of single-site classical hazard
    • Extended pointsource_distance to generic sources
    • Removed the boundary information from the CSV rupture exporter
    • Changed the /extract/rupture/XXX API to return a TOML that can be used by a scenario calculator
    • Added general support for file-reading GMPEs
    • Made it possible to disaggregate on multiple realizations with the parameters rlz_index or num_rlzs_disagg
    • Fixed downloading the ShakeMaps (again)
    • Better error message in case of too large maximum_distance
    • Optimized the case of point sources with a hypocenter distribution and GSIMs independent from it and in general the case of ruptures with similar distances

    [Graeme Weatherill (@g-weatherill)]

    • Updates SERA craton GMPE to reflect updates to NGA East site response model

    [Michele Simionato (@micheles)]

    • Fixed an HDF5 SWMR issue in large disaggregation calculations
    • Made rrup the unique acceptable filter_distance
    • Fixed disaggregation with a parent calculation
    • Models with duplicated values in the hypocenter and/or nodal plane distributions are now automatically optimized
    • Fixed an issue with missing noDamageLimit causing NaN values in scenario_damage calculations
    • Added more validations for predefined hazard, like forbidding the site model

    [Marco Pagani (@mmpagani)]

    • Adding the shift_hypo option for distributed seismicity

    [Michele Simionato (@micheles)]

    • Raised an early error for extra-large GMF calculations
    • Reduced the GMF storage by using 32 bit per event ID instead of 64 bit
    • Raised an error in case of duplicated sites in the site model
    • Fixed the case of implicit grid with a site model: sites could be incorrectly discarded
    • Fixed the ShakeMap downloader to also find unzipped uncertainty.xml files
    • Fixed the rupture exporters to export the rupture ID and not the rupture serial
    • Removed the non-interesting agg_maps outputs
    • Changed the task distribution in the classical calculator and added a task_multiplier parameter
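
The 32-bit event ID change above halves the per-event ID storage at the cost of capping the number of events at 2**32 - 1; a sketch of the saving with NumPy:

```python
import numpy as np

# Storing event IDs as uint32 instead of int64 saves 4 bytes per event.
eids64 = np.arange(1_000_000, dtype=np.int64)   # old representation
eids32 = eids64.astype(np.uint32)               # new representation
saved = eids64.nbytes - eids32.nbytes           # bytes saved
```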

    [Marco Pagani (@mmpagani)]

    • Fixed a bug in the GenericGmpeAvgSA

    [Michele Simionato (@micheles)]

    • Added a /v1/calc/validate_zip endpoint to validate input archives
    • Deprecated inferring the intensity measure levels from the risk functions
    • Fixed a too strict check on the minimum intensities of parent and child calculations
    • Extended the ebrisk calculator to compute at the same time both the aggregate curves by tag and the total curves

    [Marco Pagani (@mmpagani)]

    • Implemented Morikawa and Fujiwara (2013) GMM

    [Michele Simionato (@micheles)]

    • Changed the seed algorithm in sampling with more than one source model, thus avoiding using more GMPEs than needed in some cases
    • If ground_motion_fields=false is set, the GMFs are not stored even if hazard_curves_from_gmfs=true
    • oq show job_info now works while the calculation is running
    • Reduced the sent data transfer in ebrisk calculations
    • Deprecated the old syntax for the reqv feature
    • Added short aliases for hazard statistics mean, max and std
    • Reduced substantially the memory occupation in the task queue
    • Added an API /extract/sources and an experimental oq plot sources
    • Added a check on valid input keys in the job.ini
    • Fixed the check on dependent calculations
    • Specifying at the same time both a grid and individual sites is an error

    [Daniele Viganò (@daniviga)]

    • Docker containers rebased on CentOS 8
    • Fixed an issue causing zombie ssh processes when using zmq as task distribution mechanism
    • Introduced support for RHEL/CentOS 8

    [Michele Simionato (@micheles)]

    • Added a check for no GMFs in event_based_risk
    • Avoided transferring the site collection
    • Storing the sources in TOML format
  • v3.7.1 (Oct 25, 2019)

    [Michele Simionato (@micheles)]

    • Fixed disaggregation with a parent calculation
    • Fixed the case of implicit grid with a site model: sites could be incorrectly discarded
    • Fixed the ShakeMap downloader to also find unzipped uncertainty.xml files
    • Fixed the rupture exporters to export the rupture ID and not the rupture serial

    [Marco Pagani (@mmpagani)]

    • Fixed a bug in the GenericGmpeAvgSA
  • v3.7.0 (Sep 26, 2019)

    [Michele Simionato (@micheles)]

    • Hiding calculations that fail before the pre-execute phase (for instance, because of missing files); they already give a clear error
    • Added an early check on truncation_level in presence of correlation model

    [Guillaume Daniel (@guyomd)]

    • Implemented Ameri (2017) GMPE

    [Michele Simionato (@micheles)]

    • Changed the ruptures CSV exporter to use commas instead of tabs
    • Added a check forbidding aggregate_by for non-ebrisk calculators
    • Introduced a task queue
    • Removed the cache_XXX.hdf5 files by using the SWMR mode of h5py

    [Kris Vanneste (@krisvanneste)]

    • Updated the coefficients table for the atkinson_2015 to the actual values in the paper.

    [Michele Simionato (@micheles)]

    • Added an /extract/agg_curves API to extract both absolute and relative loss curves from an ebrisk calculation
    • Changed oq reset --yes to remove oqdata/user only in single-user mode
    • Now the engine automatically sorts the user-provided intensity_measure_types
    • Optimized the aggregation by tag
    • Fixed a bug with the binning when disaggregating around the date line
    • Fixed a prefiltering bug with complex fault sources: in some cases, blocks of ruptures were incorrectly discarded
    • Changed the sampling algorithm for the GMPE logic trees: now it does not require building the full tree in memory
    • Raised clear errors for geometry files without quotes or with the wrong header in the multi_risk calculator
    • Changed the realizations.csv exporter to export '[FromShakeMap]' instead of '[FromFile]' when needed
    • Changed the agg_curves exporter to export all realizations in a single file and all statistics in a single file
    • Added rlz_id, rup_id and year to the losses_by_event output for ebrisk
    • Fixed a bug in the ruptures XML exporter: the multiplicity was multiplied (incorrectly) by the number of realizations
    • Fixed the pre-header of the CSV outputs to get proper CSV files
    • Replaced the 64 bit event IDs in event based and scenario calculations with 32 bit integers, for the happiness of Excel users

    [Daniele Viganò (@daniviga)]

    • Numpy 1.16, Scipy 1.3 and h5py 2.9 are now required

    [Michele Simionato (@micheles)]

    • Changed the ebrisk calculator to read the CompositeRiskModel directly from the datastore, which means 20x less data transfer for Canada

    [Anirudh Rao (@raoanirudh)]

    • Fixed a bug in the gmf CSV importer: the coordinates were being sorted and new site_ids assigned even though the user input sites csv file had site_ids defined

    [Michele Simionato (@micheles)]

    • Fixed a bug in the rupture CSV exporter: the boundaries of a GriddedRupture were exported with lons and lats inverted
    • Added some metadata to the CSV risk outputs
    • Changed the distribution mechanism in ebrisk to reduce the slow tasks

    [Graeme Weatherill (@g-weatherill)]

    • Updates Kotha et al. (2019) GMPE to July 2019 coefficients
    • Adds subclasses to Kotha et al. (2019) to implement polynomial site response models and geology+slope site response model
    • Adds QA test to exercise all of the SERA site response calculators

    [Michele Simionato (@micheles)]

    • Internal: there is no need to call gsim.init() anymore

    [Graeme Weatherill (@g-weatherill)]

    • Adds parametric GMPE for cratonic regions in Europe

    [Michele Simionato (@micheles)]

    • In the agglosses output of scenario_risk the losses were incorrectly multiplied by the realization weight
    • Removed the output sourcegroups and added the output events

    [Graeme Weatherill (@g-weatherill)]

    • Adds new meta ground motion models to undertake PSHA using design code based amplification coefficients (Eurocode 8, Pitilakis et al., 2018)
    • Adds site amplification model of Sandikkaya & Dinsever (2018)

    [Marco Pagani (@mmpagani)]

    • Added a new rupture-site metric: the azimuth to the closest point on the rupture

    [Michele Simionato (@micheles)]

    • Fixed a regression in disaggregation with nonparametric sources, which were effectively discarded
    • The site amplification has been disabled by default in the ShakeMap calculator, since it is usually already taken into account by the USGS

    [Daniele Viganò (@daniviga)]

    • Deleted calculations are not removed from the database anymore
    • Removed the 'oq dbserver restart' command since it was broken

    [Richard Styron (@cossatot)]

    • Fixed YoungsCoppersmith1985MFD.from_total_moment_rate(): due to numeric errors it was producing incorrect seismicity rates

    [Michele Simionato (@micheles)]

    • Now we generate the output disagg_by_src during disaggregation even in the case of multiple realizations
    • Changed the way the random seed is set for BT and PM distributions
    • The filenames generated by disagg_by_src exporter now contains the site ID and not longitude and latitude, consistently with the other exporters
    • Accepted again meanLRs greater than 1 in vulnerability functions of kind LN
    • Fixed a bug in event based with correlation and a filtered site collection
    • Fixed the CSV exporter for the realizations in the case of scenarios with parametric GSIMs
    • Removed some misleading warnings for calculations with a site model
    • Added a check for missing risk_investigation_time in ebrisk
    • Reduced drastically (I measured improvements over 40x) memory occupation, data transfer and data storage for multi-sites disaggregation
    • Sites for which the disaggregation PoE cannot be reached are discarded and a warning is printed, rather than killing the whole computation
    • oq show performance can be called in the middle of a computation again
    • Filtered out the far away distances and reduced the time spent in saving the performance info by orders of magnitude in large disaggregations
    • Reduced the data transfer by reading the data directly from the datastore in disaggregation calculations
    • Reduced the memory consumption sending disaggregation tasks incrementally
    • Added an extract API disagg_layer
    • Moved max_sites_disagg from openquake.cfg into the job.ini
    • Fixed a bug with the --config option: serialize_jobs could not be overridden
    • Implemented insured losses
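
    The move of max_sites_disagg from openquake.cfg into the job.ini mentioned above means the setting now travels with the calculation. A hypothetical job.ini fragment (the value is only an example, not a recommended default):

    ```ini
    [general]
    ; max_sites_disagg used to live in openquake.cfg; it is now a job parameter.
    max_sites_disagg = 10
    ```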
    Source code(tar.gz)
    Source code(zip)
  • v3.6.0(Jul 16, 2019)

    [Michele Simionato (@micheles)]

    • In some cases applyToSources was giving a fake error about the source not being in the source model even if it actually was

    [Chris Van Houtte (@cvanhoutte)]

    • Adds the Van Houtte et al. (2018) significant duration model for New Zealand

    [Michele Simionato (@micheles)]

    • Added a way to compute and plot the MFD coming from an event based
    • Storing the MFDs in TOML format inside the datastore

    [Robin Gee (@rcgee)]

    • Moves b4 constant into COEFFS table for GMPE Sharma et al., 2009

    [Graeme Weatherill (@g-weatherill)]

    • Adds functionality to Cauzzi et al. (2014) and Derras et al. (2014) calibrated GMPEs for Germany to use either finite or point source distances

    [Michele Simionato (@micheles)]

    • Restored the ability to associate site model parameters to a grid of sites
    • Made it possible to set hazard_curves_from_gmfs=true with ground_motion_fields=false in the event based hazard calculator
    • Introduced a mechanism to split the tasks based on an estimated duration
    • Integrated oq plot_memory into oq plot
    • Removed NaN values for strike and dip when exporting griddedRuptures
    • Fixed oq reset to work in multi-user mode
    • Extended the source_id-filtering feature in the job.ini to multiple sources
    • Supported WKT files for the binary perils in the multi_risk calculator
    • Added an early check on the coefficients of variation and loss ratios of vulnerability functions with the Beta distribution
    • Made sure that oq engine --dc removes the HDF5 cache file too
    • Removed the flag optimize_same_id_sources because it is useless now
    • Introduced a soft limit at 65,536 sites for event_based calculations
    • Fixed a performance regression in ucerf_classical that was filtering before splitting, thus becoming extra-slow
    • Improved the progress log, that was delayed for large classical calculations
    • Exported the ruptures as 3D multi-polygons (instead of 2D ones)
    • Changed the aggregate_by exports for consistency with the others
    • Changed the losses_by_event exporter for ebrisk, to make it more consistent with scenario_risk and event_based_risk
    • Changed the agglosses and losses_by_event exporters in scenario_risk, by adding a column with the realization index
    • Changed the generation of the hazard statistics to consume very little memory
    • Fixed a bug with concurrent_tasks being inherited from the parent calculation instead of using the standard default
    • Removed the dependency from mock, since it is included in unittest.mock
    • For scenario, replaced the branch_path with the GSIM representation in the realizations output
    • Added a check for suspiciously large source geometries
    • Deprecated the XML disaggregation exporters in favor of the CSV exporters
    • Turned the disaggregation calculator into a classical post-calculator to use the precomputed distances and speedup the computation even more
    • Fixed the disaggregation calculator by discarding the ruptures outside the integration distance
    • Optimized the speed of the disaggregation calculator by moving statistical functions outside of the inner loop
    • Changed the file names of the exported disaggregation outputs
    • Fixed an export agg_curves issue with pre-imported exposures
    • Fixed an export agg_curves issue when the hazard statistics are different from the risk statistics
    • Removed the disaggregation statistics: now the engine disaggregates only on a single realization (default: the closest to the mean)
    • Forbidden disaggregation matrices with more than 1 million elements
    • Reduced the data transfer when computing the hazard curves
    • Optimized the reading of large CSV exposures
    • Fixed the --hc functionality across users
    • Optimized the reduction of the site collection on the exposure sites
    • Made more robust the gsim logic tree parser: lines like <uncertaintyModel gmpe_table="../gm_tables/Woffshore_low_clC.hdf5"> are accepted again
    • Added a check against duplicated values in nodal plane distributions and hypocenter depth distributions
    • Changed the support for zipped exposures and source models: now the name of the archive must be written explicitly in the job.ini
    • Added support for numpy 1.16.3, scipy 1.3.0, h5py 2.9.0
    • Removed the special case for event_based_risk running two calculations

    [Graeme Weatherill (@g-weatherill)]

    • Adds the Tromans et al. (2019) adjustable GMPE for application to PSHA in the UK

    [Michele Simionato (@micheles)]

    • Optimized src.sample_ruptures for (multi)point sources and area sources
    • Fixed a mutability bug in the DistancesContext and made all context arrays read-only: the fix may affect calculations using the GMPEs berge_thierry_2003, cauzzi_faccioli_2008 and zhao_2006
    • Fixed a bug with the minimum_distance feature
    • Fixed a bug in the exporter of the aggregate loss curves: now the loss ratios are computed correctly even in presence of occupants
    • Removed the (long time deprecated) capability to read hazard curves and ground motion fields from XML files: you must use CSV files instead

    [Marco Pagani (@mmpagani)]

    • Implemented a modified GMPE that add between and within std to GMPEs only supporting total std

    [Michele Simionato (@micheles)]

    • Added the ability to use a taxonomy_mapping.csv file
    • Fixed a bug in classical_damage from CSV: for hazard intensity measure levels different from the fragility levels, the engine was giving incorrect results
    • Serialized also the source model logic tree inside the datastore
    • Added a check on missing intensity_measure_types in event based
    • Fixed oq prepare_site_model in the case of an empty datadir
    • Added a comment line with useful metadata to the engine CSV outputs
    • Removed the long time deprecated event loss table exporter for event based risk and enhanced the losses_by_event exporter to export the realization ID
    • Removed the long time deprecated GMF XML exporter for scenario
    • IMT-dependent weights in the gsim logic tree can be zero, to discard contributions outside the range of validity of (some of the) GSIMs
    • Now it is possible to export individual hazard curves from an event based calculation
    • Added a view gmvs_to_hazard
  • v3.5.2(May 31, 2019)

    [Daniele Viganò (@daniviga)]

    • Fixed packaging issue, the .hdf5 tables for Canada were missing

    [Michele Simionato (@micheles)]

    • Fixed regression in the gsim logic tree parser for the case of .hdf5 tables
  • v3.5.1(May 20, 2019)

    [Michele Simionato (@micheles)]

    • Added a rlzi column to the sig_eps.csv output
    • Accepted GMF CSV files without a rlzi column
    • Accepted a list-like syntax like return_periods=[30, 60, 120, 240, 480] in the job.ini, as written in the manual
    • Fixed a bug in the asset_risk exporter for uppercase tags
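
    The list-like syntax accepted above can be written in the job.ini as documented in the manual. A minimal, illustrative fragment (the values are examples):

    ```ini
    [general]
    ; Return periods for the loss curves may now be given as a list-like
    ; literal, matching the syntax shown in the manual.
    return_periods = [30, 60, 120, 240, 480]
    ```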

    [Paul Henshaw (@pslh)]

    • Fixed an encoding bug while reading XML files on Windows
  • v3.5.0(May 13, 2019)

    [Michele Simionato (@micheles)]

    • Added a view gmvs_to_hazard

    [Giovanni Lanzano (@giovannilanzanoINGV)]

    • Lanzano and Luzi (2019) GMPE for volcanic zones in Italy

    [Michele Simionato (@micheles)]

    • Now it is possible to export individual hazard curves from an event based calculation by setting hazard_curves_from_gmfs = true and individual_curves = true (before only the statistics were saved)
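
    A minimal job.ini sketch of the settings mentioned above (illustrative only; other required parameters are omitted):

    ```ini
    [general]
    ; Export per-realization hazard curves from an event based calculation;
    ; without individual_curves only the statistics are saved.
    hazard_curves_from_gmfs = true
    individual_curves = true
    ```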

    [Graeme Weatherill (@g-weatherill)]

    • Adds adaptation of Abrahamson et al. (2016) 'BC Hydro' GMPEs calibrated to Mediterranean data and with epistemic adjustment factors

    [Chris Van Houtte (@cvanhoutte)]

    • Added new class to bradley_2013b.py for hazard maps
    • Modified test case_37 to test multiple sites

    [Marco Pagani (@mmpagani)]

    • Fixed a bug in the logic tree parser and added a check to forbid logic trees with applyToSources without applyToBranches, unless there is a single source model branch

    [Michele Simionato (@micheles)]

    • Removed the experimental parameter prefilter_sources

    [Daniele Viganò (@daniviga)]

    • Multiple DbServer ZMQ connections are restored to avoid errors under heavy load and/or on slower machines

    [Michele Simionato (@micheles)]

    • Removed the ugly registration of custom signals at import time: now they are registered only if engine.run_calc is called
    • Removed the dependency from rtree
    • Removed all calls to ProcessPool.shutdown to speed up the tests and to avoid non-deterministic errors in atexit._run_exitfuncs

    [Marco Pagani (@mmpagani)]

    • Added tabular GMPEs as provided by Michal Kolaj, Natural Resources Canada

    [Michele Simionato (@micheles)]

    • Extended the ebrisk calculator to support coefficients of variations

    [Graeme Weatherill (@g-weatherill)]

    • Adds Kotha et al (2019) shallow crustal GMPE for SERA
    • Adds 'ExperimentalWarning' to possible GMPE warnings
    • Adds kwargs to check_gsim function

    [Michele Simionato (@micheles)]

    • Fixed problems like SA(0.7) != SA(0.70) in iml_disagg
    • Exposed the outputs of the classical calculation in event based calculations with compare_with_classical=true
    • Made it possible to serialize together all kind of risk functions, including consequence functions that before were not HDF5-serializable
    • Fixed a MemoryError when counting the number of bytes stored in large HDF5 datasets
    • Extended asset_hazard_distance to a dictionary for usage with multi_risk
    • Extended oq prepare_site_model to work with sites.csv files
    • Optimized the validation of the source model logic tree: now checking the sources IDs is 5x faster
    • Went back to the old logic in sampling: the weights are used for the sampling and the statistics are computed with identical weights
    • Avoided to transfer the epsilons by storing them in the cache file and changed the event to epsilons associations
    • Reduced the data transfer in the computation of the hazard curves, causing in some cases huge speedups (over 100x)
    • Implemented a flag modal_damage_state to display only the most likely damage state in the output dmg_by_asset of scenario damage calculations
    • Reduced substantially the memory occupation in classical calculations by including the prefiltering phase in the calculation phase
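
    The SA(0.7) != SA(0.70) problem fixed above comes down to comparing IMT names at the string level. A minimal, self-contained sketch of the kind of normalization needed (this is an illustrative helper, not the engine's actual code):

    ```python
    import re

    def normalize_imt(imt: str) -> str:
        """Return a canonical form so that SA(0.7) and SA(0.70) compare equal."""
        m = re.match(r"^SA\(([\d.]+)\)$", imt)
        if m:
            # float() drops the trailing zeros, giving a canonical period
            return "SA(%s)" % float(m.group(1))
        return imt

    assert normalize_imt("SA(0.7)") == normalize_imt("SA(0.70)")
    ```
    
    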

    [Daniele Viganò (@daniviga)]

    • Added a 'serialize_jobs' setting to the openquake.cfg which limits the maximum number of jobs that can be run in parallel

    [Michele Simionato (@micheles)]

    • Fixed two exporters for the ebrisk calculator (agg_curves-stats and losses_by_event)
    • Fixed two subtle bugs when reading site_model.csv files
    • Added /extract/exposure_metadata and /extract/asset_risk
    • Introduced an experimental multi_risk calculator for volcanic risk

    [Guillaume Daniel (@guyomd)]

    • Updating of Berge-Thierry (2003) GSIM and addition of several alternatives for use with Mw

    [Michele Simionato (@micheles)]

    • Changed the classical_risk calculator to use the same loss ratios for all taxonomies and then optimized all risk calculators
    • Temporarily removed the insured_losses functionality
    • Extended oq restore to download from URLs
    • Removed the column 'gsims' from the output 'realizations'
    • Better parallelized the source splitting in classical calculations
    • Added a check for missing hazard in scenario_risk/scenario_damage
    • Improved the GsimLogicTree parser to get the line number information, a feature that was lost with the passage to Python 3.5
    • Added a check against misspellings in the loss type in the risk keys
    • Changed the aggregation WebAPI from aggregate_by/taxonomy,occupancy/avg_losses?kind=mean&loss_type=structural to aggregate/avg_losses?kind=mean&loss_type=structural&tag=taxonomy&tag=occupancy
    • Do not export the stddevs in scenario_damage in the case of 1 event
    • Fixed export bug for GMFs imported from a file
    • Fixed an encoding error when storing a GMPETable
    • Fixed an error while exporting the hazard curves generated by a GMPETable
    • Removed the deprecated feature aggregate_by/curves_by_tag
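
    The new aggregation URL shape described above, with repeated tag query parameters, can be built like this (the host, port and calculation id are purely illustrative):

    ```python
    from urllib.parse import urlencode

    # Repeated "tag" query parameters replace the old path-based
    # aggregate_by/taxonomy,occupancy form.
    params = urlencode([("kind", "mean"), ("loss_type", "structural"),
                        ("tag", "taxonomy"), ("tag", "occupancy")])
    url = "http://localhost:8800/v1/calc/42/aggregate/avg_losses?" + params
    print(url)
    ```
    
    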
  • v3.4.0(Mar 18, 2019)

    [Michele Simionato (@micheles)]

    • Compatibility with 'decorator' version >= 4.2

    [Giovanni Lanzano (@giovannilanzanoINGV)]

    • Contributed a GMPE SkarlatoudisEtAlSSlab2013

    [Michele Simionato (@micheles)]

    • Changed the event loss table exporter to export also rup_id and year
    • Extended the ebrisk calculator to compute loss curves and maps

    [Rodolfo Puglia (@rodolfopuglia)]

    • Spectral acceleration amplitudes at 2.5, 2.75 and 4 seconds added

    [Marco Pagani (@mmpagani)]

    • Improved the event based calculator to account for cluster-based models

    [Michele Simionato (@micheles)]

    • Removed the now redundant command oq extract hazard/rlzs

    [Daniele Viganò (@daniviga)]

    • Fixed 'oq abort' and always mark killed jobs as 'aborted'

    [Michele Simionato (@micheles)]

    • Made it possible to use tasks without a monitor argument in the Starmap
    • Stored the sigma and epsilon parameters for each event in event based and scenario calculations and extended the gmf_data exporter consequently
    • Fixed the realizations CSV exporter which was truncating the names of the GSIMs
    • Deprecated the XML exporters for hcurves, hmaps, uhs
    • Introduced a sap.script decorator
    • Used the WebExtractor in oq importcalc
    • Restored validation of the source_model_logic_tree.xml file
    • Raised an early error for missing occupants in the exposure
    • Added a check to forbid duplicate file names in the uncertaintyModel tag
    • Made it possible to store the asset loss table in the ebrisk calculator by specifying asset_loss_table=true in the job.ini
    • Added a flag oq info --parameters to show the job.ini parameters
    • Removed the source_name column from the disagg by source output

    [Rao Anirudh]

    • Fixed wrong investigation_time in the calculation of loss maps from loss curves

    [Robin Gee (@rcgee)]

    • Added capability to optionally specify a time_cutoff parameter to declustering time window

    [Michele Simionato (@micheles)]

    • Merged the commands oq plot_hmaps and oq plot_uhs inside oq plot
    • Changed the storage of hazard curves and hazard maps to make it consistent with the risk outputs and Extractor-friendly

    [Chris Van Houtte (@cvanhoutte)]

    • Added necessary gsims to run the Canterbury Seismic Hazard Model in Gerstenberger et al. (2014)
    • Added a new gsim file mcverry_2006_chch.py to have the Canterbury-specific classes.
    • Added a new gsim file bradley_2013b.py to implement the Christchurch-specific modifications to the Bradley2013 base model.

    [Michele Simionato (@micheles)]

    • Added a check on the intensity measure types and levels in the job.ini, to make sure they are ordered by period
    • Reduced the number of client sockets to the DbServer that was causing (sporadically) the hanging of calculations on Windows
    • Extended the WebAPI to be able to extract specific hazard curves, maps and UHS (i.e. IMT-specific and site specific)
    • Removed the realization index from the event loss table export, since it is redundant
    • Forced all lowercase Python files in the engine codebase
    • Removed the dependency from nose

    [Robin Gee (@rcgee)]

    • Updated GMPE of Yu et al. (2013)

    [Michele Simionato (@micheles)]

    • Added an Extractor client class leveraging the WebAPI and enhanced oq plot_hmaps to display remote hazard maps
    • Added a check when disaggregation is attempted on a source model with atomic source groups
    • Implemented serialization/deserialization of GSIM instances to TOML
    • Added a check against misspelled rupture distance names and fixed the drouet_alpes_2015 GSIMs
    • Changed the XML syntax used to define dictionaries IMT -> GSIM
    • Now GSIM classes have an .init() method to manage nontrivial initializations, i.e. expensive initializations or initializations requiring access to the filesystem
    • Fixed a bug in event based that made it impossible to use GMPETables
    • Associated the events to the realizations even in scenario_risk: this involved changing the generation of the epsilons in the case of asset correlation. Now there is a single aggregate losses output for all realizations
    • Removed the rlzi column from the GMF CSV export
    • Introduced a new parameter ebrisk_maxweight in the job.ini
    • For classical calculations with few sites, store information about the realization closest to the mean hazard curve for each site
    • Removed the max_num_sites limit on the event based calculator

    [Valerio Poggi (@klunk386)]

    • Added an AvgSA intensity measure type and a GenericGmpeAvgSA which is able to use it

    [Michele Simionato (@micheles)]

    • Introduced the ability to launch subtasks from tasks
    • Stored rupture information in classical calculations with few sites

    [Chris Van Houtte (@cvanhoutte)]

    • Adding conversion from geometric mean to larger horizontal component in bradley_2013.py

    [Michele Simionato (@micheles)]

    • Fixed a bug in applyToSources for the case of multiple sources
    • Moved the prefiltering on the workers to save memory
    • Exported the aggregated loss ratios in avg losses and agg losses
    • Removed the variables quantile_loss_curves and mean_loss_curves: they were duplicating quantile_hazard_curves and mean_hazard_curves
    • Only ruptures boundingbox-close to the site collection are stored

    [Marco Pagani (@mmpagani)]

    • Added cluster model to classical PSHA calculator

    [Michele Simionato (@micheles)]

    • Fixed a bug in scenario_damage from ShakeMap with noDamageLimit=0
    • Avoided the MemoryError in the controller node by speeding up the saving of the information about the sources
    • Turned utils/reduce_sm into a proper command
    • Fixed a wrong coefficient in the ShakeMap amplification
    • Fixed a bug in the hazard curves export (the filename did not contain the period of the IMT thus producing duplicated files)
    • Parallelized the reading of the exposure

    [Marco Pagani (@mmpagani)]

    • Fixed the implementation on mutex ruptures

    [Michele Simionato (@micheles)]

    • Changed the aggregated loss curves exporter
    • Added an experimental calculator ebrisk
    • Changed the ordering of the events (akin to a change of seed in the asset correlation)

    [Robin Gee (@rcgee)]

    • Fixed bug in tusa_langer_2016.py BA08SE model - authors updated b2 coeff
    • Fixed bug in tusa_langer_2016.py related to coeffs affecting Repi models

    [Michele Simionato (@micheles)]

    • Added a check to forbid to set ses_per_logic_tree_path = 0
    • Added an API /extract/event_info/eidx
    • Splitting the sources in classical calculators and not in event based
    • Removed max_site_model_distance
    • Extended the logic used in event_based_risk (read the hazard sites from the site model, not from the exposure) to all calculators
    • In classical_bcr calculations with a CSV exposure the retrofitted field was not read. Now a missing retrofitted value is an error
  • v3.3.2(Jan 22, 2019)

    [Robin Gee (@rcgee)]

    • Fixed bug in tusa_langer_2016.py BA08SE model - authors updated b2 coeff

    [Michele Simionato (@micheles)]

    • Fixed a bug in scenario_damage from ShakeMap with noDamageLimit=0
    • Avoided the MemoryError in the controller node by speeding up the saving of the information about the sources
    • Fixed a wrong coefficient in the ShakeMap amplification
    • Fixed a bug in the hazard curves export (the filename did not contain the period of the IMT thus producing duplicated files)
  • v3.3.1(Jan 14, 2019)

    [Michele Simionato (@micheles)]

    • Fixed the GMF exporter to export the event IDs and not event indices

    [Robin Gee (@rcgee)]

    • Fixed bug in tusa_langer_2016.py related to coeffs affecting Repi models
  • v3.3.0(Jan 7, 2019)

    [Graeme Weatherill (@g-weatherill)]

    • Adds GMPE suite for national PSHA for Germany

    [Daniele Viganò (@daniviga)]

    • Added a warning box when an unsupported browser is used to view the WebUI
    • Updated Docker containers to support a multi-node deployment with a shared directory
    • Moved the Docker containers source code from oq-builders
    • Updated the documentation related to the shared directory which is now mandatory for multi-node deployments

    [Matteo Nastasi (@nastasi-oq)]

    • Removed tests folders

    [Stéphane Drouet (@stephane-on)]

    • Added Drouet & Cotton (2015) GMPE including 2017 erratum

    [Michele Simionato (@micheles)]

    • Optimized the memory occupation in classical calculations (Context.poe_map)
    • Fixed a wrong counting of the ruptures in split fault sources with a hypo_list/slip_list causing the calculation to fail
    • Made the export of uniform hazard spectra fast
    • Made the std hazard output properly exportable
    • Replaced the ~ in the header of the UHS csv files with a -
    • Restored the individual_curves flag even for the hazard curves
    • Implemented dGMPE weights per intensity measure type
    • Extended --reuse-hazard to all calculators
    • Fixed a bug in event_based_risk from GMFs with coefficients of variations

    [Graeme Weatherill (@g-weatherill)]

    • Adds magnitude scaling relation for Germany

    [Michele Simionato (@micheles)]

    • Used floats for the GSIM realization weights, not Python Decimals
    • Added a flag fast_sampling, by default False
    • Added an API /extract/src_loss_table/<loss_type>
    • Removed the rupture filtering from sample_ruptures and optimized it in the RuptureGetter by making use of the bounding box
    • Raised the limit on ses_per_logic_tree_path from 216 to 232;
    • Added a parameter max_num_sites to increase the number of sites accepted by an event based calculation up to 2 ** 32 (the default is still 2 ** 16)
    • Added a command oq compare to compare hazard curves and maps within calculations
    • Extended the engine to read transparently zipped source models and exposures
    • Restored the check for invalid source IDs in applyToSources
    • Extended the command oq zip to zip source models and exposures
    • Parallelized the associations event ID -> realization ID
    • Improved the message when assets are discarded in scenario calculations
    • Implemented aggregation by multiple tags, plus a special case for the country code in event based risk

    [Marco Pagani (@mmpagani)]

    • Added two modified versions of the Bindi et al. (2011) to be used in a backbone approach to compute hazard in Italy
    • Added a modified version of Berge-Thierry et al. 2003 supporting Mw

    [Michele Simionato (@micheles)]

    • Changed the way loss curves and loss maps are stored in order to unify the aggregation logic with the one used for the average losses
    • Now it is possible to compute the ruptures without specifying the sites
    • Added an early check for the case of missing intensity measure types
    • Deprecated the case of exposure, site model and region_grid_spacing all set at the same time
    • Implemented multi-exposure functionality in event based risk
    • Changed the event based calculator to store the ruptures incrementally without keeping them all in memory
    • Refactored the UCERF event based calculator to work as much as possible like the regular calculator
    • Optimized the management and storage of the aggregate losses in the event based risk calculation; also, reduced the memory consumption
    • Changed the default for individual_curves to "false", which is the right default for large calculations
    • Optimized the saving of the events
    • Removed the save_ruptures flag in the job.ini since ruptures must be saved always
    • Optimized the rupture generation in case of sampling and changed the algorithm and seeds
    • Fixed a bug with the IMT SA(1) considered different from SA(1.0)
    • Removed the long-time deprecated GMF exporter in XML format for event_based
    • Added a re-use hazard feature in event_based_risk in single-file mode
    • Made the event ID unique also in scenario calculations with multiple realizations
    • Removed the annoying hidden .zip archives littering the export directory
    • Added an easy way to read the exposure header
    • Added a way to run Python scripts using the engine libraries via oq shell
    • Improved the minimum_magnitude feature
    • Fixed the check on missing hazard IMTs
    • Reduced substantially the memory occupation in event based risk
    • Added the option spatial_correlation=no correlation for risk calculations from ShakeMaps
    • Removed the experimental calculator ucerf_risk
    • Optimized the sampling of time-independent sources for the case of prefilter_sources=no
    • Changed the algorithm associating events to SESs and made the event based hazard calculator faster in the case of many SESs
    • Reduced substantially the memory consumption in event based risk
    • Made it possible to read multiple site model files in the same calculation
    • Implemented a smart single job.ini file mode for event based risk
    • Now warnings for invalid parameters are logged in the database too
    • Fixed oq export avg_losses-stats for the case of one realization
    • Added oq export losses_by_tag and oq export curves_by_tag
    • Extended oq export to work in a multi-user situation
    • Forbade event based calculations with more than max_potential_paths paths in the case of full enumeration
    • Saved a large amount of memory in event_based_risk calculations
    • Added a command oq export losses_by_tag/<tagname> <calc_id>
    • Extended oq zip to zip the risk files together with the hazard files
    • Changed the building convention for the event IDs and made them unique in the event loss table, even in the case of full enumeration
    • Optimized the splitting of complex fault sources
    • Fixed the ShakeMap download procedure for uncertainty.zip archives with an incorrect structure (for instance for ci3031111)
    • Disabled the spatial correlation in risk-from-ShakeMap by default
    • Optimized the rupture sampling where there is a large number of SESs
    • Extended the reqv feature to multiple tectonic region types and removed the spinning/floating for the TRTs using the feature
    • Reduced the GMPE logic tree upfront for TRTs missing in the source model
    • Fixed the ShakeMap downloader to use the USGS GeoJSON feed
    • Improved the error message when there are more than 65536 distinct tags in the exposure
    • Turned vs30measured into an optional parameter
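
    The SA(1)-vs-SA(1.0) fix in the list above is at heart a string-normalization problem; a minimal sketch, with a hypothetical helper that is not the engine's actual code:

    ```python
    import re

    def normalize_imt(imt):
        # Hypothetical sketch: canonicalize spectral-acceleration strings so
        # that SA(1) and SA(1.0) compare equal, as in the fix above
        m = re.match(r"^SA\((\d+(?:\.\d+)?)\)$", imt)
        if m:
            return "SA(%s)" % float(m.group(1))
        return imt

    print(normalize_imt("SA(1)"))    # SA(1.0)
    print(normalize_imt("SA(1.0)"))  # SA(1.0)
    print(normalize_imt("PGA"))      # PGA
    ```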

    [Chris Van Houtte (@cvanhoutte)]

    • Added siteclass as a site parameter, and reference_site_class as a site parameter that can be specified by the user in the ini file
    • Added new classes to mcverry_2006.py to take siteclass as a predictor
    • Updated comments in mcverry_2006.py
    • Added new mcverry_2006 test tables to account for difference in site parameter
    • Added qa_test_data classical case_32

    [Michele Simionato (@micheles)]

    • Fixed the rupture exporter for Canada
    • Extended the oq prepare_site_model to optionally generate the fields z1pt0, z2pt5 and vs30measured
    • It is now an error to specify both the sites and the site model in the job.ini, to avoid confusion with the precedence
    • Implemented a reader for site models in CSV format
    • Made the export_dir relative to the input directory
    • Better error message for ShakeMaps with zero stddev
    • Added a source_id-filtering feature in the job.ini
    • Added a check on non-homogeneous tectonic region types in a source group
    • Fixed the option oq engine --config-file that broke a few releases ago
    • Replaced nodal_dist_collapsing_distance and hypo_dist_collapsing_distance with pointsource_distance and made use of them in the classical and event based calculators
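
    The pointsource_distance parameter introduced above lives in the job.ini; an illustrative fragment (the value is made up, and the exact spelling should be checked against the engine manual):

    ```ini
    [calculation]
    # replaces nodal_dist_collapsing_distance and hypo_dist_collapsing_distance:
    # beyond this distance from a site, point sources are modelled in a
    # simplified (collapsed) way
    pointsource_distance = 50
    ```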

    [Graeme Weatherill (@g-weatherill)]

    • Fixes to hmtk completeness tables for consistent rates and addition of more special methods to catalogue

    [Michele Simionato (@micheles)]

    • Restricted ChiouYoungs2008SWISS01 to StdDev.TOTAL to avoid a bug when computing the GMFs with inter/intra stddevs
    • Raised an error if assets are discarded because too far from the hazard sites (before it was just a warning)
    • Added an attribute .srcidx to every event based rupture and stored it
    • Fixed an issue with the Byte Order Mark (BOM) for CSV exposures prepared with Microsoft Excel
    • Reduced the site collection instead of just filtering it; this fixes a source filtering bug and changes the numbers in case of GMF-correlation
    • Added a command oq prepare_site_model to prepare a sites.csv file containing the vs30 and changed the engine to use it
    • Added a cutoff when storing a PoE=1 from a CSV file, thus avoiding NaNs in classical_damage calculations
    • Reduced the data transfer in the risk model by only considering the taxonomies relevant for the exposure
    • Extended oq engine --run to accept a list of files
    • Optimized the saving of the risk results in event based in the case of many sites and changed the command oq show portfolio_loss to show mean and standard deviation of the portfolio loss for each loss type
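
    The Byte Order Mark issue above is the classic Excel-CSV pitfall; reading with Python's utf-8-sig codec strips the BOM, as this self-contained sketch shows:

    ```python
    import csv
    import io

    # bytes as Excel would save them: a UTF-8 BOM before the header row
    data = b"\xef\xbb\xbfid,taxonomy,lon,lat\na1,RC,9.15,45.17\n"

    # encoding="utf-8-sig" consumes the BOM, so the first header field is
    # "id" and not "\ufeffid"
    with io.TextIOWrapper(io.BytesIO(data), encoding="utf-8-sig") as f:
        rows = list(csv.DictReader(f))

    print(rows[0]["id"])  # a1
    ```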

    [Marco Pagani (@mmpagani)]

    • Added a first and preliminary version of the GMM for the Canada model represented in an analytical form.
    • Added a modified version of Atkinson and Macias to be used for the calculation of hazard in NSHMP2014.
    • Added support for PGA to the Si and Midorikawa (1999) GMPE.

    [Michele Simionato (@micheles)]

    • Made it possible to run the risk over a hazard calculation of another user
    • Worked around the OverflowError: cannot serialize a bytes object larger than 4 GiB in event based calculations
    • Started using Python 3.6 features
    • Fixed the check on vulnerability function ID uniqueness for NRML 0.5
    • Ruptures and GMFs are now computed concurrently, thus mitigating the issue of slow tasks
    • Changed the name of the files containing the disaggregation outputs: instead of longitude and latitude they contain the site ID now
    • If a worker runs close to out of memory, now a warning appears in the main log
    • 'lons' and 'lats' are now spelled 'lon' and 'lat' in the REQUIRES_SITES_PARAMETERS to be consistent with site_model.xml

    [Daniele Viganò (@daniviga)]

    • Fixed a bug about 'The openquake master lost its controlling terminal' when running with 'nohup' from command line

    [Michele Simionato (@micheles)]

    • The export_dir is now created recursively, i.e. subdirectories are automatically created if needed
    • Fixed a bug with the minimum_magnitude feature and extended it to be tectonic region type dependent
    • Changed the rupture generation to yield bunches of ruptures, thus avoiding the 4GB pickle limit
    • Parallelized the splitting of the sources, thus making the preprocessing faster
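
    The minimum_magnitude feature above is driven by a job.ini parameter; an illustrative fragment (values made up, and the per-tectonic-region-type dict spelling should be checked against the manual):

    ```ini
    # scalar form: one magnitude cutoff for all sources
    minimum_magnitude = 4.5

    # tectonic-region-type-dependent form (illustrative spelling)
    # minimum_magnitude = {'Active Shallow Crust': 4.0, 'Subduction Interface': 5.5}
    ```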

    [Marco Pagani (@mmpagani)]

    • Implemented two additional versions of the Silva et al. 2002 GMPE
    • Added the possibility of setting rake to 'undefined'
    • Added first 'modified GMPE' implementing the site term for Canada 2015 model
    • Fixed a bug in the disaggregation calculation due to wrong binning of magnitudes

    [Michele Simionato (@micheles)]

    • Now the combination uniform_hazard_spectra=true and mean_hazard_curves=false is accepted again, as requested by Laurentiu Danciu (@danciul)

    [Daniele Viganò (@daniviga)]

    • Support for Ubuntu Trusty is removed
    • Replaced supervisord with systemd in Ubuntu packages

    [Michele Simionato (@micheles)]

    • Changed the way the rupture geometries are stored to be consistent with the source geometries
    • We are now saving information about the source geometries in the datastore (experimentally)
    • Fixed a bug in event based with sampling causing incorrect GMFs
    • Unified all distribution mechanisms to return the outputs via zmq
    • Added a check for inconsistent IMTs between hazard and risk
    • Replaced the forking processpool with a spawning processpool
    Source code(tar.gz)
    Source code(zip)
  • v3.2.0(Sep 6, 2018)

    [Matteo Nastasi (@nastasi-oq)]

    • Specified 'amd64' as the only architecture supported by the Ubuntu packages

    [Michele Simionato (@micheles)]

    • Changed the source writer: now the srcs_weights are written in the XML file only if they are nontrivial
    • Changed the algorithm assigning the seeds: they are now generated before the source splitting; also, a seed-related bug in the splitting was fixed
    • For event based, moved the rupture generation in the prefiltering phase

    [Daniele Viganò (@daniviga)]

    • Fixed a bug with CTRL-C when using the processpool distribution

    [Robin Gee (@rcgee)]

    • Raised the source ID length limit in the validation from 60 to 75 characters to allow sources with longer IDs

    [Michele Simionato (@micheles)]

    • Introduced a multi_node flag in openquake.cfg and used it to fully parallelize the prefiltering in a cluster
    • Used the rupture seed as rupture ID in event based calculations
    • Changed the deprecation mechanism of GSIMs to use a class attribute superseded_by=NewGsimClass
    • Solved the pickling bug in event based hazard by using generator tasks
    • Improved the distribution of the risk tasks by changing the weight

    [Pablo Heresi (@pheresi)]

    • Contributed the HM2018CorrelationModel

    [Michele Simionato (@micheles)]

    • Restored the individual_curves flag that for the moment is used for the risk curves
    • Introduced two experimental new parameters floating_distance and spinning_distance to reduce hypocenter distributions and nodal plane distributions of ruptures over the corresponding distances
    • Optimized the parsing of the logic tree when there is no "applyToSources"
    • Made the IMT classes extensible in client code
    • Reduced the hazard maps from 64 to 32 bit, to be consistent with the hazard curves and to reduce by half the download time
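
    The 64-to-32-bit reduction of the hazard maps mentioned above is a plain dtype cast that halves the storage and download size; a sketch with made-up numbers:

    ```python
    import numpy as np

    hmap64 = np.array([0.123456789012345, 0.001234567890123])  # PoE-like values
    hmap32 = hmap64.astype(np.float32)  # halves the size

    # float32 keeps ~7 significant digits, plenty for hazard-map PoEs
    print(hmap32.nbytes, hmap64.nbytes)  # 8 16
    ```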

    [Graeme Weatherill (@g-weatherill)]

    • Implemented a fix of Montalva et al. (2016) with new coefficients (now Montalva et al. (2017))

    [Michele Simionato (@micheles)]

    • Parallelized the reading of the source models
    • Optimized oq info --report by not splitting the sources in that case
    • Speedup the download of the hazard curves, maps and uhs
    • Honored concurrent_tasks in the prefiltering phase too
    • It is now legal to compute uniform hazard spectra for a single period
    • Added command oq plot_memory
    • Introduced a MultiGMPE concept
    • Saved the size of the datastore in the database and used it in the WebUI

    [Graeme Weatherill (@g-weatherill)]

    • Added geotechnical-related IMTs

    [Michele Simionato (@micheles)]

    • Renamed /extract/agglosses -> /extract/agg_losses and same for aggdamages
    • Supported equivalent epicentral distance with a reqv_hdf5 file
    • Fixed the risk from ShakeMap feature in the case of missing IMTs
    • Changed the way gmf_data/indices and ruptures are stored
    • Added experimental support for dask
    • Added 11 new site parameters for geotechnic hazard
    • Changed the SiteCollection to store only the parameters required by the GSIMs

    [Robin Gee (@rcgee)]

    • The number of sites is now an argument in the method _get_stddevs() in the GMPE of Kanno, 2006

    [Michele Simionato (@micheles)]

    • Changed the serialization of ruptures to HDF5: the geometries are now stored in a different dataset
    • Bug fix: the asset->site association was performed even when not needed
    • Made it possible to serialize to .hdf5 multipoint sources and nonparametric gridded sources
    • Added a check on source model logic tree files: the uncertaintyModel values cannot be repeated in the same branchset
    • Added a flag std_hazard_curves; by setting it to true the user can compute the standard deviation of the hazard curves across realizations
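
    The branchset validation above boils down to rejecting repeated uncertaintyModel values; a sketch with a hypothetical helper:

    ```python
    from collections import Counter

    def check_branchset(uncertainty_models):
        # Hypothetical sketch of the check above: within one branchset the
        # uncertaintyModel values must all be distinct
        dupes = [v for v, n in Counter(uncertainty_models).items() if n > 1]
        if dupes:
            raise ValueError("duplicated uncertaintyModel values: %s" % dupes)

    check_branchset(["ssm1.xml", "ssm2.xml"])  # fine
    try:
        check_branchset(["ssm1.xml", "ssm1.xml"])
    except ValueError as exc:
        print(exc)
    ```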

    [Marco Pagani (@mmpagani)]

    • Added Thingbaijam et al. (2017) magnitude-scaling relationship

    [Michele Simionato (@micheles)]

    • Added an /extract/ API for event_based_mfd
    • Fixed a bug in the classical_damage calculators: multiple loss types were not treated correctly

    [Marco Pagani (@mmpagani)]

    • Added tests to the method computing decimal time

    [Michele Simionato (@micheles)]

    • Removed the event_based_rupture calculator and three others
    • Added a field size_mb to the output table in the database and made it visible in the WebUI as a tooltip
    • Added a command oq check_input job.ini to check the input files
    • Made the loss curves and maps outputs from an event based risk calculation visible to the engine and the WebUI (only the stats)
    • Added a check on duplicated branchIDs in GMPE logic trees

    [Daniele Viganò (@daniviga)]

    • Fixed a bug when reading exposure with utf8 names on systems with non-utf8 terminals (Windows)
    • Changed the openquake.cfg file and added a dbserver.listen parameter
    • Added the hostname in the WebUI page. It can be customized by the user via the local_settings.py file

    [Michele Simionato (@micheles)]

    • Added a Content-Length to the outputs downloadable from the WebUI
    • Fixed a bug when extracting gmf_data from a hazard calculation with a filtered site collection
    • Stored an attribute events.max_gmf_size
    • Added a check on exposures with missing loss types
    • Added a LargeExposureGrid error to protect the user from tricky exposures (i.e. France with assets in the Antilles)
    • Changed the event_based_risk calculator to compute the loss curves and maps directly; removed the asset_loss_table
    • Changed the event_based_risk calculator to distribute by GMFs always
    • Optimized the memory consumption in the UCERF classical calculator
    • Added a parameter minimum_magnitude in the job.ini
    • Added a utility utils/combine_mean_curves.py
    Source code(tar.gz)
    Source code(zip)
  • v3.1.0(Jun 1, 2018)

    [Marco Pagani (@mmpagani) and Changlong Li (@mstlgzfdh)]

    • Added a version of the Yu et al. (2013) GMPE supporting Mw

    [Michele Simionato (@micheles)]

    • Reduced the data transfer in the UCERF calculators
    • Stored the zipped input files in the datastore for reproducibility
    • Fixed a regression when reading GMFs from an XML in absence of a sites.csv file

    [Robin Gee (@rcgee)]

    • Extended the oq to_shapefile method to also work with the YoungsCoppersmithMFD and arbitraryMFD typologies.

    [Michele Simionato (@micheles)]

    • Now the hazard statistics can be computed efficiently even in a single calculation, i.e. without the --hc option
    • Added a check on the Python version in the oq command
    • Reduced the data transfer when sending the site collection
    • Changed the default filter_distance

    [Daniele Viganò (@daniviga)]

    • Fixed a bug where the PID was not saved into the database when using the command line interface
    • Made it impossible to fire multiple CTRL-C in sequence, to allow process teardown and task revocation when Celery is used

    [Michele Simionato (@micheles)]

    • Used scipy.spatial.distance.cdist in Mesh.get_min_distance
    • Prefiltered sites and assets in scenario calculations
    • Made it possible to specify the filter_distance in the job.ini
    • Made rtree optional again and disabled it in macOS
    • Optimized the SiteCollection class and doubled the speed of distance calculations in most continental scale calculations
    • Fixed an ordering bug in event based risk from GMFs when using a vulnerability function with PMF
    • Replaced Rtree with KDtree except in the source filtering
    • Parallelized the source prefiltering
    • Removed the tiling feature from the classical calculator
    • Undeprecated hazardlib.calc.stochastic.stochastic_event_set and made its signature right
    • Removed the source typology from the ruptures and reduced the rupture hierarchy
    • Removed the mesh spacing from PlanarSurfaces
    • Optimized the instantiation of the rtree index
    • Replaced the old prefiltering mechanism with the new one
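
    The cdist-based minimum-distance computation above reduces to a pairwise distance matrix followed by a per-site minimum; a toy numpy sketch (plain Cartesian coordinates, not the geodetic distances the engine actually computes):

    ```python
    import numpy as np

    mesh = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])   # rupture mesh points
    sites = np.array([[0.5, 1.0], [2.0, 2.0]])              # site locations

    # pairwise distances, equivalent to scipy.spatial.distance.cdist(sites, mesh)
    diff = sites[:, None, :] - mesh[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))   # shape (n_sites, n_mesh_points)

    print(dist.min(axis=1))  # minimum distance from each site to the mesh
    ```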

    [Daniele Viganò (@daniviga)]

    • Managed the case of a dead controlling terminal (SIGHUP)

    [Michele Simionato (@micheles)]

    • Removed Decimal numbers from the PMF distribution in hazardlib
    • Fixed another tricky bug with rtree filtering across the international date line
    • Added a parameter prefilter_sources with values rtree|numpy|no
    • Removed the prefiltering on the workers, resulting in a huge speedup for gridded ruptures at the cost of a larger data transfer
    • Changed the losses_by_event output to export a single .csv file with all realizations
    • Added a cross_correlation parameter used when working with shakemaps
    • Now sites and exposure can be set at the same time in the job.ini
    • Introduced a preclassical calculator
    • Extended the scenario_damage calculator to export dmg_by_event outputs as well as losses_by_event outputs if there is a consequence model
    • Unified region and region_constraint parameters in the job.ini
    • Added a check to forbid duplicated GSIMs in the logic tree
    • Introduced some changes to the realizations exporter (renamed field uid -> branch_path and removed the model field)
    • Added a command oq celery inspect
    • Reduced the check on too many realizations to a warning, except for event based calculations
    • Improved the hazard exporter to export only data for the filtered site collection and not the full site collection
    • Extended the BCR exporter to export the asset tags

    [Catalina Yepes (@CatalinaYepes)]

    • Revised/enhanced the risk demos

    [Michele Simionato (@micheles)]

    • Added a warning about the option optimize_same_id_sources when the user should take advantage of it

    [Daniele Viganò (@daniviga)]

    • Converted the celery-status script into the oq celery status command
    • Removed Django < 1.10 backward compatibility
    • Updated Python dependencies (numpy 1.14, scipy 1.0.1, Django 1.10+, Celery 4+)

    [Michele Simionato (@micheles)]

    • Implemented scenario_risk/scenario_damage from shakemap calculators
    • Exported the asset tags in the asset based risk outputs
    • Fixed a numeric issue for nonparametric sources causing the hazard curves to saturate at high intensities
    • Added a utility to download shakemaps
    • Added an XML exporter for the site model
    • Slight change to the correlation module to fix a bug in the SMTK
    • Added a distribution mechanism threadpool
    Source code(tar.gz)
    Source code(zip)
  • v3.0.1(Apr 24, 2018)

    [Michele Simionato (@micheles)]

    • Fixed a numeric bug affecting nonparametric sources and causing saturating hazard curves at high intensity
    Source code(tar.gz)
    Source code(zip)
  • v3.0.0(Apr 9, 2018)

    [Michele Simionato (@micheles)]

    • Fixed a bug with newlines in the logic tree path breaking the CSV exporter for the realizations output
    • When setting the event year, each stochastic event set is now considered independent
    • Fixed a bug in the HMTK plotting libraries and added the ability to customize the figure size
    • Fixed a bug in the datastore: now we automatically look for the attributes in the parent dataset, if the dataset is missing in the child datastore
    • Extended extract_losses_by_asset to the event based risk calculator
    • Stored in source_info the number of events generated per source
    • Added a script utils/reduce_sm to reduce the source model of a calculation by removing all the sources not affecting the hazard
    • Deprecated openquake.hazardlib.calc.stochastic.stochastic_event_set
    • Fixed the export of ruptures with a GriddedSurface geometry
    • Added a check for wrong or missing <occupancyPeriods> in the exposure
    • Fixed the issue of slow tasks in event_based_risk from precomputed GMFs for sites without events
    • Now the engine automatically associates the exposure to a grid if region_grid_spacing is given and the sites are not specified otherwise
    • Extracting the site mesh from the exposure before looking at the site model
    • Added a check on probs_occur summing up to 1 in the SourceWriter
    • oq show job_info now shows the received data amount while the calculation is progressing
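
    The probs_occur check added to the SourceWriter above can be sketched like this (hypothetical helper, with a small numerical tolerance):

    ```python
    import math

    def check_probs_occur(probs, atol=1e-8):
        # Hypothetical sketch: occurrence probabilities of a nonparametric
        # rupture must sum to 1, up to a small tolerance
        total = sum(probs)
        if not math.isclose(total, 1.0, abs_tol=atol):
            raise ValueError("probs_occur sum to %s, not 1" % total)

    check_probs_occur([0.7, 0.2, 0.1])  # passes
    try:
        check_probs_occur([0.7, 0.2])
    except ValueError as exc:
        print(exc)
    ```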

    [Daniele Viganò (@daniviga)]

    • Removed support for Python 2 in setup.py
    • Removed files containing Python 2 dependencies
    • Added support for WebUI groups/permissions on the export outputs and datastore API endpoints

    [Michele Simionato (@micheles)]

    • Fixed oq show for multiuser with parent calculations
    • Fixed get_spherical_bounding_box for griddedSurfaces
    • Implemented disaggregation by source only for the case of a single realization in the logic tree (experimental)
    • Replaced celery with celery_zmq as distribution mechanism
    • Extended oq info to work on source model logic tree files
    • Added a check against duplicated fields in the exposure CSV
    • Implemented event based with mutex sources (experimental)
    • Added a utility to read XML shakemap files in hazardlib
    • Added a check on IMTs for GMFs read from CSV

    [Daniele Viganò (@daniviga)]

    • Changed the default DbServer port in Linux packages from 1908 to 1907

    [Michele Simionato (@micheles)]

    • Logged rupture floating factor and rupture spinning factor
    • Added an extract API for losses_by_asset
    • Added a check against GMF csv files with more than one realization
    • Fixed the algorithm setting the event year for event based with sampling
    • Added a command oq importcalc to import a remote calculation in the local database
    • Stored avg_losses-stats in event based risk if there are multiple realizations
    • Better error message in case of overlapping sites in sites.csv
    • Added an investigation time attribute to source models with nonparametric sources
    • Bug fix: in some cases the calculator event_based_rupture was generating too few tasks; the same happened for classical calculations with optimize_same_id_sources=true
    • Changed the ordering of the epsilons in scenario_risk
    • Added the ability to use a pre-imported risk model
    • Very small result values in scenario_damage (< 1E-7) are clipped to zero, to hide numerical artifacts
    • Removed an obsolete PickleableSequence class
    • Fixed error in classical_risk when num_statistics > num_realizations
    • Fixed a TypeError when reading CSV exposures with occupancy periods
    • Extended the check on duplicated source IDs to models in format NRML 0.5
    • Added a warning when reading the sources if .count_ruptures() is suspiciously slow
    • Changed the splitting logic: now all sources are split upfront
    • Improved the splitting of complex fault sources
    • Added a script to renumber source models with non-unique source IDs
    • Made the datastore of calculations using GMPETables relocatable; in practice you can run the Canada model on a cluster, copy the .hdf5 on a laptop and do the postprocessing there, a feat previously impossible.

    [Valerio Poggi (@klunk386)]

    • Included a method to export data directly from the Catalogue() object into standard HMTK format.

    [Michele Simionato (@micheles)]

    • Now the parameter disagg_outputs is honored, i.e. only the specified outputs are extracted from the disaggregation matrix and stored
    • Implemented statistical disaggregation outputs (experimental)
    • Fixed a small bug: we were reading the source model twice in disaggregation
    • Added a check to discard results coming from the wrong calculation for the distribution mode celery_zmq
    • Removed the long-time deprecated commands oq engine --run-hazard and oq engine --run-risk
    • Added a distribution mode celery_zmq
    • Added the ability to use a preimported exposure in risk calculations
    • Substantial cleanup of the parallelization framework
    • Fixed a bug with nonparametric sources producing negative probabilities
    Source code(tar.gz)
    Source code(zip)
Owner
Global Earthquake Model
GEM is a not-for-profit foundation that drives a global collaborative effort to assess earthquake risk around the globe