Legacy

HazPy legacy methods and classes. This module interacts with and builds upon the legacy Hazus desktop software.

hazpy.legacy

Methods

getStudyRegions

legacy.getStudyRegions()

Gets all study region names imported into your local Hazus install

Returns:

studyRegions: list – study region names

Example:

# get list of study regions
studyRegions = hazpy.legacy.getStudyRegions()

createExportObj

legacy.createExportObj()

Creates a dictionary to be used in the hazpy.legacy.export method and hazpy.legacy.Exporting class

Returns:

exportObj: dict – the opt_ fields are booleans that toggle export options; the remaining fields are strings.

Example:

# create a template export object
exportObject = hazpy.legacy.createExportObj()

# modify required keys
exportObject['study_region'] = 'My_Eq_StudyRegionName' # set the study region to export
exportObject['output_directory'] = r'C:\Users\user\disasters\event' # set the directory for the output files

# modify what you will export (True will export and False will not export)
exportObject['opt_csv'] = True # export data as CSVs
exportObject['opt_shp'] = True # export data as Shapefiles
exportObject['opt_report'] = True # export high level report
exportObject['opt_json'] = True # export data as json

# modify optional report keys
exportObject['title'] = 'Kiholo Earthquake M6.4' # report title
exportObject['meta'] = 'Shakemap version 5 US' # subtitle/meta data for the report

export

legacy.export()

Exports data from Hazus legacy. Can export CSVs, Shapefiles, PDF Reports, and Json

Use hazpy.legacy.createExportObj() to create a base object for keyword arguments

Keyword arguments:

exportObj: dictionary – {

opt_csv: boolean – export CSVs,

opt_shp: boolean – export Shapefile(s),

opt_report: boolean – export report,

opt_json: boolean – export Json,

study_region: str – name of the Hazus study region (HPR name),

?title: str – title on the report,

?meta: str – sub-title on the report (ex: Shakemap v5),

output_directory: str – directory location for the outputs

}

Example:

# set up export object
studyRegions = hazpy.legacy.getStudyRegions() # get all study regions
exportObject = hazpy.legacy.createExportObj() # create a template export object
exportObject['study_region'] = studyRegions[0] # set the study region to export
exportObject['output_directory'] = r'C:\Users\user\disasters\event' # set the directory for the output files

# perform export
hazpy.legacy.export(exportObject)

Classes

HazusDB

class hazpy.legacy.HazusDB[source]

Creates a connection to the Hazus SQL Server database with methods to access databases, tables, and study regions

HazusDB.createConnection(orm='pyodbc')[source]

Creates a connection object to the local Hazus SQL Server database

Key Argument:

orm: string – type of connection to return (choices: ‘pyodbc’, ‘sqlalchemy’)

Returns:

conn: database connection object (pyodbc by default; SQLAlchemy when orm='sqlalchemy')
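
Example (a minimal sketch showing both documented orm choices; db is assumed to be a HazusDB instance, as in the class example further below):

# default pyodbc connection
conn = db.createConnection()

# SQLAlchemy connection
engine = db.createConnection(orm='sqlalchemy')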

HazusDB.getDatabases()[source]

Creates a dataframe of all databases in your Hazus installation

Returns:

df: pandas dataframe

HazusDB.getTables(databaseName)[source]

Creates a dataframe of all tables in a database

Keyword Arguments:

databaseName: str – the name of the Hazus SQL Server database

Returns:

df: pandas dataframe

HazusDB.getStudyRegions()[source]

Creates a dataframe of all study regions in the local Hazus SQL Server database

Returns:

studyRegions: pandas dataframe

HazusDB.query(sql)[source]

Performs a SQL query on the Hazus SQL Server database

Keyword Arguments:

sql: str – a T-SQL query

Returns:

df: pandas dataframe

class hazpy.legacy.HazusDB.EditSession(database, schema, table)

Creates an edit session for a Hazus database table

Keyword Arguments:

database: str – the database or study region name

schema: str – the schema name, typically ‘dbo’

table: str – the table name you want to edit

Returns:

df: pandas dataframe – an editable dataframe. Use the save() method when finished.

Example:

# set up hazpy database object
db = hazpy.legacy.HazusDB() # initializes the HazusDB class

# create database connection object
conn = db.createConnection() # returns a connection to the Hazus SQL Server database

# predefined database queries
databases = db.getDatabases() # returns a dataframe of all databases in your Hazus installation
tables = db.getTables('DATABASE_NAME') # returns all tables in a specified database
studyRegions = db.getStudyRegions() # returns a dataframe of all study regions

# custom query
sql = 'select * from DATABASE.dbo.TABLENAME'
selection = db.query(sql)

# edit a HazusDB table as a pandas dataframe
editSession = db.EditSession('DATABASE_NAME', 'dbo', 'TABLE_NAME')
# edit the dataframe
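# e.g. update a value in the dataframe before saving (column name and value are hypothetical)
# editSession.loc[0, 'COLUMN_NAME'] = 'new value'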
editSession.save()

StudyRegion

class hazpy.legacy.StudyRegion(studyRegion)[source]

Creates a study region object using an existing study region in the local Hazus database

Keyword Arguments:

studyRegion: str – the name of the study region

StudyRegion.createConnection(orm='pyodbc')[source]

Creates a connection object to the local Hazus SQL Server database

Key Argument:

orm: string – type of connection to return (choices: ‘pyodbc’, ‘sqlalchemy’)

Returns:

conn: database connection object (pyodbc by default; SQLAlchemy when orm='sqlalchemy')

StudyRegion.query(sql)[source]

Performs a SQL query on the Hazus SQL Server database

Keyword Arguments:

sql: str – a T-SQL query

Returns:

df: pandas dataframe
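
Example (a minimal sketch; the table name is a placeholder and studyRegion is assumed to be a StudyRegion instance, as in the class example further below):

# run a custom T-SQL query against the study region
sql = 'SELECT * FROM dbo.TABLE_NAME'
results = studyRegion.query(sql)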

StudyRegion.getHazardBoundary()[source]

Fetches the hazard boundary from a Hazus SQL Server database

Returns:

df: pandas dataframe – geometry in WKT

StudyRegion.getEconomicLoss()[source]

Queries the total economic loss for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of economic loss

StudyRegion.getTotalEconomicLoss()[source]

Queries the total economic loss summation for a study region from the local Hazus SQL Server database

Returns:

totalLoss: integer – the summation of total economic loss
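
Example (a minimal sketch, assuming a StudyRegion instance named studyRegion as in the class example further below):

# total economic loss as a single number
totalLoss = studyRegion.getTotalEconomicLoss()
print(totalLoss)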

StudyRegion.getBuildingDamageByOccupancy()[source]

Queries the building damage by occupancy type for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of building damage by occupancy type

StudyRegion.getBuildingDamageByType()[source]

Queries the building damage by structure type for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of building damage by structure type
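
Example (a minimal sketch covering both building damage methods; studyRegion is assumed to be a StudyRegion instance and the getters are assumed to return StudyRegionDataFrames, as in the class example further below):

# building damage summaries
damageByOccupancy = studyRegion.getBuildingDamageByOccupancy()
damageByType = studyRegion.getBuildingDamageByType()

# export to CSV
damageByOccupancy.toCSV('output_directory/damage_by_occupancy.csv')
damageByType.toCSV('output_directory/damage_by_type.csv')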

StudyRegion.getFatalities()[source]

Queries the fatalities for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of fatalities

StudyRegion.getDisplacedHouseholds()[source]

Queries the displaced households for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of displaced households

StudyRegion.getShelterNeeds()[source]

Queries the short term shelter needs for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of short term shelter needs

StudyRegion.getDebris()[source]

Queries the debris for a study region from the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of debris
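
Example (a minimal sketch covering the population impact and debris getters above; studyRegion is assumed to be a StudyRegion instance and the getters are assumed to return StudyRegionDataFrames, as in the class example further below):

# population impacts and debris
fatalities = studyRegion.getFatalities()
shelterNeeds = studyRegion.getShelterNeeds()
debris = studyRegion.getDebris()

# export to CSV
fatalities.toCSV('output_directory/fatalities.csv')
shelterNeeds.toCSV('output_directory/shelter_needs.csv')
debris.toCSV('output_directory/debris.csv')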

StudyRegion.getHazardsAnalyzed(returnType='list')[source]

Queries the local Hazus SQL Server database and returns all hazards analyzed

Key Argument:

returnType: string – choices: ‘list’, ‘dict’

Returns:

hazardsAnalyzed: list or dict – the hazards analyzed, in the format specified by returnType
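
Example (a minimal sketch showing both returnType choices, assuming a StudyRegion instance named studyRegion as in the class example further below):

# hazards analyzed as a list (default)
hazards = studyRegion.getHazardsAnalyzed()

# hazards analyzed as a dict
hazardsDict = studyRegion.getHazardsAnalyzed(returnType='dict')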

StudyRegion.getHazard()[source]

Queries the local Hazus SQL Server database and returns a geodataframe of the hazard

Returns:

df: pandas dataframe – a dataframe of the hazard
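
Example (a minimal sketch, assuming a StudyRegion instance named studyRegion and that the returned dataframe supports the StudyRegionDataFrame export methods described further below):

# hazard data for the study region
hazard = studyRegion.getHazard()

# export to a web compatible GeoJSON
hazard.toGeoJSON('output_directory/hazard.geojson')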

StudyRegion.getEssentialFacilities()[source]

Queries all essential facilities for a study region in the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of the essential facilities and damages

Example:

# set up hazpy study region object
studyRegion = hazpy.legacy.StudyRegion('STUDY_REGION_NAME') # initializes the StudyRegion class

# determine the hazards analyzed in the study region
hazards = studyRegion.getHazardsAnalyzed()

# get the hazard boundary
hazardBoundary = studyRegion.getHazardBoundary()

# export boundary to spatial formats
hazardBoundary.toShapefile('output_directory/file_name.shp') # to Esri Shapefile
hazardBoundary.toGeoJSON('output_directory/file_name.geojson') # to web compatible GeoJSON

# get data from the study region
econLoss = studyRegion.getEconomicLoss() # creates a StudyRegionDataFrame

# export data to a CSV
econLoss.toCSV('output_directory/file_name.csv')

# add geometry to the StudyRegionDataFrame
econLossSpatial = econLoss.addGeometry()

# export data to spatial formats
econLossSpatial.toShapefile('output_directory/file_name.shp') # to Esri Shapefile
econLossSpatial.toGeoJSON('output_directory/file_name.geojson') # to web compatible GeoJSON

StudyRegionDataFrame

class hazpy.legacy.StudyRegionDataFrame(studyRegion, df)[source]

Initializes a study region dataframe class – a pandas dataframe extended with extra methods

Keyword Arguments:

studyRegion: str – the name of the study region database

df: pandas dataframe – a dataframe to extend as a StudyRegionDataFrame

StudyRegionDataFrame.addCensusTracts()[source]

Queries the census tract geometry for a study region in the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of the census geometry and fips codes

StudyRegionDataFrame.addCensusBlocks()[source]

Queries the census block geometry for a study region in the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of the census geometry and fips codes

StudyRegionDataFrame.addCounties()[source]

Queries the county geometry for a study region in the local Hazus SQL Server database

Returns:

df: pandas dataframe – a dataframe of the county geometry and fips codes
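
Example (a minimal sketch covering the three census geometry methods above; econLoss is assumed to be a StudyRegionDataFrame obtained as in the StudyRegion example, and each method is assumed to return a new dataframe as documented):

# get a StudyRegionDataFrame from the study region
econLoss = studyRegion.getEconomicLoss()

# census tract, census block, and county geometry with fips codes
econLossTracts = econLoss.addCensusTracts()
econLossBlocks = econLoss.addCensusBlocks()
econLossCounties = econLoss.addCounties()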

StudyRegionDataFrame.addGeometry()[source]

Adds geometry to any HazusDB class dataframe with a census block, tract, or county id

Key Argument:

dataframe: pandas dataframe – a HazusDB generated dataframe with a fips column named either block, tract, or county

Returns:

df: pandas dataframe – a copy of the input dataframe with the geometry added

StudyRegionDataFrame.toCSV(path)[source]

Exports a StudyRegionDataFrame to a CSV

Key Argument:

path: str – the output directory path, file name, and extension (example: ‘C:/directory/filename.csv’)

StudyRegionDataFrame.toShapefile(path)[source]

Exports a StudyRegionDataFrame to an Esri Shapefile

Key Argument:

path: str – the output directory path, file name, and extension (example: ‘C:/directory/filename.shp’)

StudyRegionDataFrame.toGeoJSON(path)[source]

Exports a StudyRegionDataFrame to a web compatible GeoJSON

Key Argument:

path: str – the output directory path, file name, and extension (example: ‘C:/directory/filename.geojson’)

Example:

# prerequisite - StudyRegion
# set up hazpy study region object
studyRegion = hazpy.legacy.StudyRegion('STUDY_REGION_NAME') # initializes the StudyRegion class

# get data from the study region
displacedHouseholds = studyRegion.getDisplacedHouseholds() # creates a StudyRegionDataFrame

# export data to a CSV
displacedHouseholds.toCSV('output_directory/file_name.csv')

# add geometry to the StudyRegionDataFrame
displacedHouseholdsSpatial = displacedHouseholds.addGeometry()

# export data to spatial formats
displacedHouseholdsSpatial.toShapefile('output_directory/file_name.shp') # to Esri Shapefile
displacedHouseholdsSpatial.toGeoJSON('output_directory/file_name.geojson') # to web compatible GeoJSON

Exporting

(planned deprecation) Functionality is being migrated to the StudyRegion class

class hazpy.legacy.Exporting(exportObj)[source]

Export class for Hazus legacy. Can export CSVs, Shapefiles, PDF Reports, and Json

Use hazpy.legacy.createExportObj() to create a base object for keyword arguments

Exporting method logic follows: setup, getData, toCSV, toShapefile, toReport

Keyword arguments:

exportObj: dictionary – {

study_region: str – name of the Hazus study region (HPR name)

output_directory: str – directory location for the outputs

?title: str – title on the report

?meta: str – sub-title on the report (ex: Shakemap v5)

}

Exporting.setup()[source]

Establishes the connection to SQL Server

Exporting.getData()[source]

Queries and parses the data from SQL Server, preparing it for exporting

Exporting.toCSV()[source]

Exports the study region data to CSVs

Exporting.toShapefile()[source]

Exports the study region data to Shapefile(s)

Exporting.toReport()[source]

Exports the study region data to a one-page PDF report

Example:

# set up export object
studyRegions = hazpy.legacy.getStudyRegions() # get all study regions
exportObject = hazpy.legacy.createExportObj() # create a template export object
exportObject['study_region'] = studyRegions[0] # set the study region to export
exportObject['output_directory'] = r'C:\Users\user\disasters\event' # set the directory for the output files

# initialize export class
export = hazpy.legacy.Exporting(exportObject)

# run export class methods
export.setup() # make database connections
export.getData() # get and parse the data from SQL Server
export.toCSV() # export data to CSV
export.toShapefile() # export data to Shapefile
export.toReport() # export data to a PDF