Function records from the hydpy-dev/hydpy repository. Each record gives
repo | path | func_name | language | sha | partition, followed by the source
URL and the function's code with its docstring.
hydpy-dev/hydpy | hydpy/exe/servertools.py | ServerState.initialise | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L281-L382

def initialise(self, projectname: str, xmlfile: str) -> None:
    """Initialise a *HydPy* project based on the given XML configuration
    file agreeing with `HydPyConfigMultipleRuns.xsd`.

    We use the `LahnH` project and its rather complex XML configuration
    file `multiple_runs.xml` as an example (module |xmltools| provides
    information on interpreting this file):

    >>> from hydpy.core.examples import prepare_full_example_1
    >>> prepare_full_example_1()
    >>> from hydpy import print_values, TestIO
    >>> from hydpy.exe.servertools import ServerState
    >>> state = ServerState()
    >>> with TestIO():  # doctest: +ELLIPSIS
    ...     state.initialise('LahnH', 'multiple_runs.xml')
    Start HydPy project `LahnH` (...).
    Read configuration file `multiple_runs.xml` (...).
    Interpret the defined options (...).
    Interpret the defined period (...).
    Read all network files (...).
    Activate the selected network (...).
    Read the required control files (...).
    Read the required condition files (...).
    Read the required time series files (...).

    After initialisation, all defined exchange items are available:

    >>> for item in state.parameteritems:
    ...     print(item)
    SetItem('alpha', 'hland_v1', 'control.alpha', 0)
    SetItem('beta', 'hland_v1', 'control.beta', 0)
    SetItem('lag', 'hstream_v1', 'control.lag', 0)
    SetItem('damp', 'hstream_v1', 'control.damp', 0)
    AddItem('sfcf_1', 'hland_v1', 'control.sfcf', 'control.rfcf', 0)
    AddItem('sfcf_2', 'hland_v1', 'control.sfcf', 'control.rfcf', 0)
    AddItem('sfcf_3', 'hland_v1', 'control.sfcf', 'control.rfcf', 1)
    >>> for item in state.conditionitems:
    ...     print(item)
    SetItem('sm_lahn_2', 'hland_v1', 'states.sm', 0)
    SetItem('sm_lahn_1', 'hland_v1', 'states.sm', 1)
    SetItem('quh', 'hland_v1', 'logs.quh', 0)
    >>> for item in state.getitems:
    ...     print(item)
    GetItem('hland_v1', 'fluxes.qt')
    GetItem('hland_v1', 'fluxes.qt.series')
    GetItem('hland_v1', 'states.sm')
    GetItem('hland_v1', 'states.sm.series')
    GetItem('nodes', 'nodes.sim.series')

    The initialisation also memorises the initial conditions of
    all elements:

    >>> for element in state.init_conditions:
    ...     print(element)
    land_dill
    land_lahn_1
    land_lahn_2
    land_lahn_3
    stream_dill_lahn_2
    stream_lahn_1_lahn_2
    stream_lahn_2_lahn_3

    Initialisation also prepares all selected series arrays and
    reads the required input data:

    >>> print_values(
    ...     state.hp.elements.land_dill.model.sequences.inputs.t.series)
    -0.298846, -0.811539, -2.493848, -5.968849, -6.999618
    >>> state.hp.nodes.dill.sequences.sim.series
    InfoArray([ nan, nan, nan, nan, nan])
    """
    write = commandtools.print_textandtime
    write(f'Start HydPy project `{projectname}`')
    hp = hydpytools.HydPy(projectname)
    write(f'Read configuration file `{xmlfile}`')
    interface = xmltools.XMLInterface(xmlfile)
    write('Interpret the defined options')
    interface.update_options()
    write('Interpret the defined period')
    interface.update_timegrids()
    write('Read all network files')
    hp.prepare_network()
    write('Activate the selected network')
    hp.update_devices(interface.fullselection)
    write('Read the required control files')
    hp.init_models()
    write('Read the required condition files')
    interface.conditions_io.load_conditions()
    write('Read the required time series files')
    interface.series_io.prepare_series()
    interface.exchange.prepare_series()
    interface.series_io.load_series()
    self.hp = hp
    self.parameteritems = interface.exchange.parameteritems
    self.conditionitems = interface.exchange.conditionitems
    self.getitems = interface.exchange.getitems
    self.conditions = {}
    self.parameteritemvalues = collections.defaultdict(lambda: {})
    self.modifiedconditionitemvalues = collections.defaultdict(lambda: {})
    self.getitemvalues = collections.defaultdict(lambda: {})
    self.init_conditions = hp.conditions
    self.timegrids = {}
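The per-client bookkeeping attributes above (`parameteritemvalues`, `modifiedconditionitemvalues`, `getitemvalues`) all use `collections.defaultdict(lambda: {})`, which creates a fresh inner dictionary the first time an unknown client ID is accessed. A minimal sketch of that pattern (the client IDs and item names are invented for illustration):

```python
import collections

# One nested mapping per client ID; the inner dict appears on first access.
parameteritemvalues = collections.defaultdict(lambda: {})

parameteritemvalues['client-1']['alpha'] = 2.0
parameteritemvalues['client-2']['beta'] = 1.0   # no KeyError for the new ID

print(sorted(parameteritemvalues))       # ['client-1', 'client-2']
print(parameteritemvalues['client-1'])   # {'alpha': 2.0}
```

`collections.defaultdict(dict)` would behave identically; the lambda form merely spells out the factory.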
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.POST_evaluate | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L918-L927

def POST_evaluate(self) -> None:
    """Evaluate any valid Python expression with the *HydPy* server
    process and get its result.

    Method |HydPyServer.POST_evaluate| serves to test and debug, primarily.
    The main documentation on module |servertools| explains its usage.
    """
    for name, value in self._inputs.items():
        result = eval(value)
        self._outputs[name] = objecttools.flatten_repr(result)
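`POST_evaluate` maps each input name to a textual representation of the evaluated expression. The self-contained sketch below mirrors that name-to-expression-to-result loop with invented input strings and plain `repr` in place of `objecttools.flatten_repr`; note that `eval` executes arbitrary code, which is why the method is intended for testing and debugging only:

```python
# Hypothetical client inputs: name -> Python expression as text.
inputs = {'total': '1 + 2', 'letters': 'sorted("ba")'}
outputs = {}
for name, value in inputs.items():
    result = eval(value)      # CAUTION: runs arbitrary code
    outputs[name] = repr(result)

print(outputs['total'])    # '3'
print(outputs['letters'])  # "['a', 'b']"
```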
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_close_server | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L933-L940

def GET_close_server(self) -> None:
    """Stop and close the *HydPy* server."""

    def _close_server():
        self.server.shutdown()
        self.server.server_close()

    shutter = threading.Thread(target=_close_server)
    shutter.daemon = True   # daemon thread: must not block interpreter exit
    shutter.start()
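`GET_close_server` calls `shutdown` from a separate thread because `socketserver`'s `shutdown` blocks until the `serve_forever` loop exits; calling it from within the request handler that the loop is currently servicing would deadlock. A runnable sketch of the same pattern with the standard-library `HTTPServer` (the address, handler, and thread names are placeholders, not HydPy's):

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

server = HTTPServer(('127.0.0.1', 0), BaseHTTPRequestHandler)  # port 0: any free port
runner = threading.Thread(target=server.serve_forever, daemon=True)
runner.start()

def _close_server():
    server.shutdown()       # blocks until serve_forever() returns
    server.server_close()   # release the listening socket

shutter = threading.Thread(target=_close_server, daemon=True)
shutter.start()
shutter.join(timeout=5.0)
runner.join(timeout=5.0)
print(runner.is_alive())    # False: the serving loop has stopped
```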
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_parameteritemtypes | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L953-L957

def GET_parameteritemtypes(self) -> None:
    """Get the types of all current exchange items supposed to change
    the values of |Parameter| objects."""
    for item in state.parameteritems:
        self._outputs[item.name] = self._get_itemtype(item)
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_conditionitemtypes | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L959-L963

def GET_conditionitemtypes(self) -> None:
    """Get the types of all current exchange items supposed to change
    the values of |StateSequence| or |LogSequence| objects."""
    for item in state.conditionitems:
        self._outputs[item.name] = self._get_itemtype(item)
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_getitemtypes | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L965-L972

def GET_getitemtypes(self) -> None:
    """Get the types of all current exchange items supposed to return
    the values of |Parameter| or |Sequence| objects or the time series
    of |IOSequence| objects."""
    for item in state.getitems:
        type_ = self._get_itemtype(item)
        for name, _ in item.yield_name2value():
            self._outputs[name] = type_
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.POST_timegrid | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L978-L985

def POST_timegrid(self) -> None:
    """Change the current simulation |Timegrid|."""
    init = hydpy.pub.timegrids.init
    sim = hydpy.pub.timegrids.sim
    sim.firstdate = self._inputs['firstdate']
    sim.lastdate = self._inputs['lastdate']
    state.idx1 = init[sim.firstdate]
    state.idx2 = init[sim.lastdate]
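`POST_timegrid` converts the new simulation start and end dates into integer indices on the initialisation grid via `init[date]`. HydPy's real |Timegrid| class lives in its timetools module; the toy class below only mimics that date-to-index lookup, assuming a fixed step size (all names and dates here are invented):

```python
from datetime import datetime, timedelta

class ToyTimegrid:
    """Minimal stand-in for a timegrid: maps dates to interval indices."""

    def __init__(self, firstdate, lastdate, stepsize):
        self.firstdate, self.lastdate, self.stepsize = firstdate, lastdate, stepsize

    def __getitem__(self, date):
        # Number of whole steps between the grid start and the given date.
        return int((date - self.firstdate) / self.stepsize)

init = ToyTimegrid(datetime(1996, 1, 1), datetime(1996, 1, 6), timedelta(days=1))
idx1 = init[datetime(1996, 1, 2)]   # 1
idx2 = init[datetime(1996, 1, 5)]   # 4
```

With such indices, a simulation subperiod can address slices of series arrays directly, as `GET_getitemvalues` below does with `state.idx1` and `state.idx2`.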
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_parameteritemvalues | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1003-L1007

def GET_parameteritemvalues(self) -> None:
    """Get the values of all |ChangeItem| objects handling |Parameter|
    objects."""
    for item in state.parameteritems:
        self._outputs[item.name] = item.value
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_conditionitemvalues | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1014-L1018

def GET_conditionitemvalues(self) -> None:
    """Get the values of all |ChangeItem| objects handling |StateSequence|
    or |LogSequence| objects."""
    for item in state.conditionitems:
        self._outputs[item.name] = item.value
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_getitemvalues | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1025-L1035

def GET_getitemvalues(self) -> None:
    """Get the values of all |Variable| objects observed by the
    current |GetItem| objects.

    For |GetItem| objects observing time series,
    |HydPyServer.GET_getitemvalues| returns only the values within
    the current simulation period.
    """
    for item in state.getitems:
        for name, value in item.yield_name2value(state.idx1, state.idx2):
            self._outputs[name] = value
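The call `yield_name2value(state.idx1, state.idx2)` suggests a generator that pairs each observed name with its value and, for time series, restricts the values to the current simulation window. A toy sketch of that interface (the series name and data are invented, not taken from HydPy):

```python
# Invented example data: one named time series over five steps.
series = {'dill.sim.series': [1.0, 2.0, 3.0, 4.0, 5.0]}

def yield_name2value(idx1=None, idx2=None):
    """Yield (name, values) pairs, sliced to the half-open [idx1, idx2) window."""
    for name, values in series.items():
        yield name, values[idx1:idx2]

window = dict(yield_name2value(1, 4))
print(window['dill.sim.series'])   # [2.0, 3.0, 4.0]
```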
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_load_conditionvalues | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1037-L1056

def GET_load_conditionvalues(self) -> None:
    """Assign the |StateSequence| or |LogSequence| object values available
    for the current simulation start point to the current |HydPy| instance.

    When the simulation start point is identical with the initialisation
    time point and you did not save conditions for it beforehand, the
    "original" initial conditions are used (normally those of the
    conditions files of the respective *HydPy* project).
    """
    try:
        state.hp.conditions = state.conditions[self._id][state.idx1]
    except KeyError:
        if state.idx1:
            self._statuscode = 500
            raise RuntimeError(
                f'Conditions for ID `{self._id}` and time point '
                f'`{hydpy.pub.timegrids.sim.firstdate}` are required, '
                f'but have not been calculated so far.')
        else:
            state.hp.conditions = state.init_conditions
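`GET_load_conditionvalues` is a lookup with fallback: use the conditions previously saved for this client and start index if they exist; otherwise fall back to the initial conditions, but only when the start index is zero (the initialisation date). The same control flow in isolation, with invented stand-in dictionaries instead of HydPy's condition objects:

```python
conditions = {}                                  # {client_id: {start_index: conditions}}
init_conditions = {'land_dill': {'sm': 99.0}}    # stand-in for the project's initial state

def load_conditions(client_id, idx1):
    try:
        return conditions[client_id][idx1]
    except KeyError:
        if idx1:   # mid-period start without saved conditions -> error
            raise RuntimeError(
                f'Conditions for ID `{client_id}` and index `{idx1}` are '
                f'required, but have not been calculated so far.')
        return init_conditions                   # start at initialisation time

restored = load_conditions('client-1', 0)        # falls back to init_conditions
```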
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_save_conditionvalues | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1058-L1062

def GET_save_conditionvalues(self) -> None:
    """Save the |StateSequence| and |LogSequence| object values of the
    current |HydPy| instance for the current simulation endpoint."""
    state.conditions[self._id] = state.conditions.get(self._id, {})
    state.conditions[self._id][state.idx2] = state.hp.conditions
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_save_parameteritemvalues | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | train
https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1064-L1068

def GET_save_parameteritemvalues(self) -> None:
    """Save the values of those |ChangeItem| objects which are
    handling |Parameter| objects."""
    for item in state.parameteritems:
        state.parameteritemvalues[self._id][item.name] = item.value.copy()
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_savedparameteritemvalues | python

def GET_savedparameteritemvalues(self) -> None:
    """Get the previously saved values of those |ChangeItem| objects
    which are handling |Parameter| objects."""
    dict_ = state.parameteritemvalues.get(self._id)
    if dict_ is None:
        self.GET_parameteritemvalues()
    else:
        for name, value in dict_.items():
            self._outputs[name] = value

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1070-L1078 | train
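The two methods above form a save/restore pair: `GET_save_parameteritemvalues` snapshots each exchange item's value under the requesting client's id, and `GET_savedparameteritemvalues` retrieves that snapshot, falling back to the live values when no snapshot exists for the id. The following sketch isolates that pattern; the `Item` and `Cache` classes are hypothetical stand-ins, not part of HydPy's API:

```python
from typing import Dict, List


class Item:
    """Hypothetical stand-in for a HydPy exchange item."""

    def __init__(self, name: str, value: float) -> None:
        self.name = name
        self.value = value


class Cache:
    """Minimal sketch of the id-keyed save/restore pattern."""

    def __init__(self) -> None:
        self.saved: Dict[str, Dict[str, float]] = {}

    def save(self, id_: str, items: List[Item]) -> None:
        # analogous to GET_save_parameteritemvalues: copy each
        # item's value into the dictionary keyed by the client id
        self.saved.setdefault(id_, {})
        for item in items:
            self.saved[id_][item.name] = item.value

    def restore(self, id_: str, items: List[Item]) -> Dict[str, float]:
        # analogous to GET_savedparameteritemvalues: fall back to
        # the live item values when nothing was saved for this id
        dict_ = self.saved.get(id_)
        if dict_ is None:
            return {item.name: item.value for item in items}
        return dict(dict_)


items = [Item('alpha', 2.0), Item('beta', 1.0)]
cache = Cache()
cache.save('client1', items)
items[0].value = 9.9  # later modification must not affect the snapshot
print(cache.restore('client1', items))  # {'alpha': 2.0, 'beta': 1.0}
print(cache.restore('client2', items))  # {'alpha': 9.9, 'beta': 1.0}
```

Keeping the snapshots keyed by client id is what allows several calibration clients to interleave requests against one server without overwriting each other's state.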
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_save_modifiedconditionitemvalues | python

def GET_save_modifiedconditionitemvalues(self) -> None:
    """ToDo: extend functionality and add tests"""
    for item in state.conditionitems:
        state.modifiedconditionitemvalues[self._id][item.name] = \
            list(item.device2target.values())[0].value

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1080-L1084 | train
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_savedmodifiedconditionitemvalues | python

def GET_savedmodifiedconditionitemvalues(self) -> None:
    """ToDo: extend functionality and add tests"""
    dict_ = state.modifiedconditionitemvalues.get(self._id)
    if dict_ is None:
        self.GET_conditionitemvalues()
    else:
        for name, value in dict_.items():
            self._outputs[name] = value

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1086-L1093 | train
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_save_getitemvalues | python

def GET_save_getitemvalues(self) -> None:
    """Save the values of all current |GetItem| objects."""
    for item in state.getitems:
        for name, value in item.yield_name2value(state.idx1, state.idx2):
            state.getitemvalues[self._id][name] = value

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1095-L1099 | train
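`GET_save_getitemvalues` differs from the parameter case in that each |GetItem| yields `(name, value)` pairs for the current simulation indices `idx1` and `idx2`, i.e. only the slice of the time series covering the current simulation period is cached. A minimal sketch of that slicing behaviour, with a hypothetical `GetItem` stand-in (the real class lives in HydPy's `itemtools`):

```python
from typing import Iterator, List, Tuple


class GetItem:
    """Hypothetical stand-in for a HydPy GetItem holding a time series."""

    def __init__(self, name: str, series: List[float]) -> None:
        self.name = name
        self.series = series

    def yield_name2value(
            self, idx1: int, idx2: int) -> Iterator[Tuple[str, List[float]]]:
        # only the slice covering the current simulation period
        # (idx1:idx2) is yielded and hence cached
        yield self.name, self.series[idx1:idx2]


item = GetItem('dill_sim_series', [float(i) for i in range(10)])
cache = {}
for name, value in item.yield_name2value(2, 5):
    cache[name] = value
print(cache)  # {'dill_sim_series': [2.0, 3.0, 4.0]}
```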
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_savedgetitemvalues | python

def GET_savedgetitemvalues(self) -> None:
    """Get the previously saved values of all |GetItem| objects."""
    dict_ = state.getitemvalues.get(self._id)
    if dict_ is None:
        self.GET_getitemvalues()
    else:
        for name, value in dict_.items():
            self._outputs[name] = value

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1101-L1108 | train
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_save_timegrid | python

def GET_save_timegrid(self) -> None:
    """Save the current simulation period."""
    state.timegrids[self._id] = copy.deepcopy(hydpy.pub.timegrids.sim)

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1110-L1112 | train
hydpy-dev/hydpy | hydpy/exe/servertools.py | HydPyServer.GET_savedtimegrid | python

def GET_savedtimegrid(self) -> None:
    """Get the previously saved simulation period."""
    try:
        self._write_timegrid(state.timegrids[self._id])
    except KeyError:
        self._write_timegrid(hydpy.pub.timegrids.init)

1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/exe/servertools.py#L1114-L1119 | train
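Two details of the timegrid pair are worth noting: `GET_save_timegrid` stores a `copy.deepcopy` of the simulation timegrid, so later changes to the live object cannot leak into the snapshot, and `GET_savedtimegrid` uses `try/except KeyError` to fall back to the initialisation period. A minimal sketch of both behaviours; the `Timegrid` class and the dictionary here are hypothetical stand-ins for HydPy's `Timegrid` and `state.timegrids`:

```python
import copy


class Timegrid:
    """Hypothetical stand-in for HydPy's Timegrid."""

    def __init__(self, firstdate: str, lastdate: str) -> None:
        self.firstdate = firstdate
        self.lastdate = lastdate


timegrids = {}  # corresponds to state.timegrids
sim = Timegrid('1996-01-01', '1996-01-05')
init = Timegrid('1996-01-01', '2007-01-01')

# as in GET_save_timegrid: deep-copy so later changes to the live
# simulation period do not alter the saved snapshot
timegrids['client1'] = copy.deepcopy(sim)
sim.lastdate = '1996-02-01'


def saved_timegrid(id_: str) -> Timegrid:
    # as in GET_savedtimegrid: fall back to the initialisation
    # period when no snapshot exists for the given id
    try:
        return timegrids[id_]
    except KeyError:
        return init


print(saved_timegrid('client1').lastdate)  # '1996-01-05' (the snapshot)
print(saved_timegrid('other').lastdate)    # '2007-01-01' (fallback)
```

Without the deep copy, the snapshot and the live simulation period would share one object, and the subsequent change of `lastdate` would silently corrupt the saved state.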
hydpy-dev/hydpy | hydpy/core/variabletools.py | trim | def trim(self: 'Variable', lower=None, upper=None) -> None:
"""Trim the value(s) of a |Variable| instance.
Usually, users do not need to apply function |trim| directly.
Instead, some |Variable| subclasses implement their own `trim`
methods relying on function |trim|. Model developers should
implement individual `trim` methods for their |Parameter| or
|Sequence| subclasses when their boundary values depend on the
actual project configuration (one example is soil moisture;
its lowest possible value should possibly be zero in all cases,
but its highest possible value could depend on another parameter
defining the maximum storage capacity).
For the following examples, we prepare a simple (not fully
functional) |Variable| subclass, making use of function |trim|
without any modifications. Function |trim| works slightly
different for variables handling |float|, |int|, and |bool|
values. We start with the most common content type |float|:
>>> from hydpy.core.variabletools import trim, Variable
>>> class Var(Variable):
... NDIM = 0
... TYPE = float
... SPAN = 1.0, 3.0
... trim = trim
... initinfo = 2.0, False
... __hydpy__connect_variable2subgroup__ = None
First, we enable the printing of warning messages raised by function
|trim|:
>>> from hydpy import pub
>>> pub.options.warntrim = True
When not passing boundary values, function |trim| extracts them from
class attribute `SPAN` of the given |Variable| instance, if available:
>>> var = Var(None)
>>> var.value = 2.0
>>> var.trim()
>>> var
var(2.0)
>>> var.value = 0.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `0.0` and `1.0`, respectively.
>>> var
var(1.0)
>>> var.value = 4.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `4.0` and `3.0`, respectively.
>>> var
var(3.0)
In the examples above, outlier values are set to the respective
boundary value, accompanied by suitable warning messages. For very
tiny deviations, which might be due to precision problems only,
outliers are trimmed but not reported:
>>> var.value = 1.0 - 1e-15
>>> var == 1.0
False
>>> trim(var)
>>> var == 1.0
True
>>> var.value = 3.0 + 1e-15
>>> var == 3.0
False
>>> var.trim()
>>> var == 3.0
True
Use arguments `lower` and `upper` to override the (eventually)
available `SPAN` entries:
>>> var.trim(lower=4.0)
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `3.0` and `4.0`, respectively.
>>> var.trim(upper=3.0)
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `4.0` and `3.0`, respectively.
Function |trim| interprets both |None| and |numpy.nan| values as if
no boundary value exists:
>>> import numpy
>>> var.value = 0.0
>>> var.trim(lower=numpy.nan)
>>> var.value = 5.0
>>> var.trim(upper=numpy.nan)
You can disable function |trim| via option |Options.trimvariables|:
>>> with pub.options.trimvariables(False):
... var.value = 5.0
... var.trim()
>>> var
var(5.0)
Alternatively, you can omit the warning messages only:
>>> with pub.options.warntrim(False):
... var.value = 5.0
... var.trim()
>>> var
var(3.0)
If a |Variable| subclass does not have (fixed) boundaries, give it
either no `SPAN` attribute or a |tuple| containing |None| values:
>>> del Var.SPAN
>>> var.value = 5.0
>>> var.trim()
>>> var
var(5.0)
>>> Var.SPAN = (None, None)
>>> var.trim()
>>> var
var(5.0)
The above examples deal with a 0-dimensional |Variable| subclass.
The following examples repeat the most relevant examples for a
2-dimensional subclass:
>>> Var.SPAN = 1.0, 3.0
>>> Var.NDIM = 2
>>> var.shape = 1, 3
>>> var.values = 2.0
>>> var.trim()
>>> var.values = 0.0, 1.0, 2.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 1. 2.]]` and `[[ 1. 1. 2.]]`, \
respectively.
>>> var
var([[1.0, 1.0, 2.0]])
>>> var.values = 2.0, 3.0, 4.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 2. 3. 4.]]` and `[[ 2. 3. 3.]]`, \
respectively.
>>> var
var([[2.0, 3.0, 3.0]])
>>> var.values = 1.0-1e-15, 2.0, 3.0+1e-15
>>> var.values == (1.0, 2.0, 3.0)
array([[False, True, False]], dtype=bool)
>>> var.trim()
>>> var.values == (1.0, 2.0, 3.0)
array([[ True, True, True]], dtype=bool)
>>> var.values = 0.0, 2.0, 4.0
>>> var.trim(lower=numpy.nan, upper=numpy.nan)
>>> var
var([[0.0, 2.0, 4.0]])
>>> var.trim(lower=[numpy.nan, 3.0, 3.0])
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 2. 4.]]` and `[[ 0. 3. 3.]]`, \
respectively.
>>> var.values = 0.0, 2.0, 4.0
>>> var.trim(upper=[numpy.nan, 1.0, numpy.nan])
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 2. 4.]]` and `[[ 1. 1. 4.]]`, \
respectively.
For |Variable| subclasses handling |float| values, setting outliers
to the respective boundary value might often be an acceptable approach.
However, this is often not the case for subclasses handling |int|
values, which often serve as option flags (e.g. to enable/disable
a certain hydrological process for different land-use types). Hence,
function |trim| raises an exception instead of a warning and does
not modify the wrong |int| value:
>>> Var.TYPE = int
>>> Var.NDIM = 0
>>> Var.SPAN = 1, 3
>>> var.value = 2
>>> var.trim()
>>> var
var(2)
>>> var.value = 0
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `0` of parameter `var` of element `?` is not valid.
>>> var
var(0)
>>> var.value = 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `4` of parameter `var` of element `?` is not valid.
>>> var
var(4)
>>> from hydpy import INT_NAN
>>> var.value = 0
>>> var.trim(lower=0)
>>> var.trim(lower=INT_NAN)
>>> var.value = 4
>>> var.trim(upper=4)
>>> var.trim(upper=INT_NAN)
>>> Var.SPAN = 1, None
>>> var.value = 0
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `0` of parameter `var` of element `?` is not valid.
>>> var
var(0)
>>> Var.SPAN = None, 3
>>> var.value = 0
>>> var.trim()
>>> var.value = 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `4` of parameter `var` of element `?` is not valid.
>>> del Var.SPAN
>>> var.value = 0
>>> var.trim()
>>> var.value = 4
>>> var.trim()
>>> Var.SPAN = 1, 3
>>> Var.NDIM = 2
>>> var.shape = (1, 3)
>>> var.values = 2
>>> var.trim()
>>> var.values = 0, 1, 2
>>> var.trim()
Traceback (most recent call last):
...
ValueError: At least one value of parameter `var` of element `?` \
is not valid.
>>> var
var([[0, 1, 2]])
>>> var.values = 2, 3, 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: At least one value of parameter `var` of element `?` \
is not valid.
>>> var
var([[2, 3, 4]])
>>> var.values = 0, 0, 2
>>> var.trim(lower=[0, INT_NAN, 2])
>>> var.values = 2, 4, 4
>>> var.trim(upper=[2, INT_NAN, 4])
For |bool| values, defining outliers does not make much sense,
which is why function |trim| does nothing when applied on
variables handling |bool| values:
>>> Var.TYPE = bool
>>> var.trim()
If function |trim| encounters an unmanageable type, it raises an
exception like the following:
>>> Var.TYPE = str
>>> var.trim()
Traceback (most recent call last):
...
NotImplementedError: Method `trim` can only be applied on parameters \
handling floating point, integer, or boolean values, but the "value type" \
of parameter `var` is `str`.
>>> pub.options.warntrim = False
"""
if hydpy.pub.options.trimvariables:
if lower is None:
lower = self.SPAN[0]
if upper is None:
upper = self.SPAN[1]
type_ = getattr(self, 'TYPE', float)
if type_ is float:
if self.NDIM == 0:
_trim_float_0d(self, lower, upper)
else:
_trim_float_nd(self, lower, upper)
elif type_ is int:
if self.NDIM == 0:
_trim_int_0d(self, lower, upper)
else:
_trim_int_nd(self, lower, upper)
elif type_ is bool:
pass
else:
raise NotImplementedError(
f'Method `trim` can only be applied on parameters '
f'handling floating point, integer, or boolean values, '
f'but the "value type" of parameter `{self.name}` is '
f'`{objecttools.classname(self.TYPE)}`.') | python | def trim(self: 'Variable', lower=None, upper=None) -> None:
"""Trim the value(s) of a |Variable| instance.
Usually, users do not need to apply function |trim| directly.
Instead, some |Variable| subclasses implement their own `trim`
methods relying on function |trim|. Model developers should
implement individual `trim` methods for their |Parameter| or
|Sequence| subclasses when their boundary values depend on the
actual project configuration (one example is soil moisture;
its lowest possible value should possibly be zero in all cases,
but its highest possible value could depend on another parameter
defining the maximum storage capacity).
For the following examples, we prepare a simple (not fully
functional) |Variable| subclass, making use of function |trim|
without any modifications. Function |trim| works slightly
different for variables handling |float|, |int|, and |bool|
values. We start with the most common content type |float|:
>>> from hydpy.core.variabletools import trim, Variable
>>> class Var(Variable):
... NDIM = 0
... TYPE = float
... SPAN = 1.0, 3.0
... trim = trim
... initinfo = 2.0, False
... __hydpy__connect_variable2subgroup__ = None
First, we enable the printing of warning messages raised by function
|trim|:
>>> from hydpy import pub
>>> pub.options.warntrim = True
When not passing boundary values, function |trim| extracts them from
class attribute `SPAN` of the given |Variable| instance, if available:
>>> var = Var(None)
>>> var.value = 2.0
>>> var.trim()
>>> var
var(2.0)
>>> var.value = 0.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `0.0` and `1.0`, respectively.
>>> var
var(1.0)
>>> var.value = 4.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `4.0` and `3.0`, respectively.
>>> var
var(3.0)
In the examples above, outlier values are set to the respective
boundary value, accompanied by suitable warning messages. For very
tiny deviations, which might be due to precision problems only,
outliers are trimmed but not reported:
>>> var.value = 1.0 - 1e-15
>>> var == 1.0
False
>>> trim(var)
>>> var == 1.0
True
>>> var.value = 3.0 + 1e-15
>>> var == 3.0
False
>>> var.trim()
>>> var == 3.0
True
Use arguments `lower` and `upper` to override the (eventually)
available `SPAN` entries:
>>> var.trim(lower=4.0)
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `3.0` and `4.0`, respectively.
>>> var.trim(upper=3.0)
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `4.0` and `3.0`, respectively.
Function |trim| interprets both |None| and |numpy.nan| values as if
no boundary value exists:
>>> import numpy
>>> var.value = 0.0
>>> var.trim(lower=numpy.nan)
>>> var.value = 5.0
>>> var.trim(upper=numpy.nan)
You can disable function |trim| via option |Options.trimvariables|:
>>> with pub.options.trimvariables(False):
... var.value = 5.0
... var.trim()
>>> var
var(5.0)
Alternatively, you can omit the warning messages only:
>>> with pub.options.warntrim(False):
... var.value = 5.0
... var.trim()
>>> var
var(3.0)
If a |Variable| subclass does not have (fixed) boundaries, give it
either no `SPAN` attribute or a |tuple| containing |None| values:
>>> del Var.SPAN
>>> var.value = 5.0
>>> var.trim()
>>> var
var(5.0)
>>> Var.SPAN = (None, None)
>>> var.trim()
>>> var
var(5.0)
The above examples deal with a 0-dimensional |Variable| subclass.
The following examples repeat the most relevant examples for a
2-dimensional subclass:
>>> Var.SPAN = 1.0, 3.0
>>> Var.NDIM = 2
>>> var.shape = 1, 3
>>> var.values = 2.0
>>> var.trim()
>>> var.values = 0.0, 1.0, 2.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 1. 2.]]` and `[[ 1. 1. 2.]]`, \
respectively.
>>> var
var([[1.0, 1.0, 2.0]])
>>> var.values = 2.0, 3.0, 4.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 2. 3. 4.]]` and `[[ 2. 3. 3.]]`, \
respectively.
>>> var
var([[2.0, 3.0, 3.0]])
>>> var.values = 1.0-1e-15, 2.0, 3.0+1e-15
>>> var.values == (1.0, 2.0, 3.0)
array([[False, True, False]], dtype=bool)
>>> var.trim()
>>> var.values == (1.0, 2.0, 3.0)
array([[ True, True, True]], dtype=bool)
>>> var.values = 0.0, 2.0, 4.0
>>> var.trim(lower=numpy.nan, upper=numpy.nan)
>>> var
var([[0.0, 2.0, 4.0]])
>>> var.trim(lower=[numpy.nan, 3.0, 3.0])
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 2. 4.]]` and `[[ 0. 3. 3.]]`, \
respectively.
>>> var.values = 0.0, 2.0, 4.0
>>> var.trim(upper=[numpy.nan, 1.0, numpy.nan])
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 2. 4.]]` and `[[ 1. 1. 4.]]`, \
respectively.
For |Variable| subclasses handling |float| values, setting outliers
to the respective boundary value might often be an acceptable approach.
However, this is often not the case for subclasses handling |int|
values, which often serve as option flags (e.g. to enable/disable
a certain hydrological process for different land-use types). Hence,
function |trim| raises an exception instead of a warning and does
not modify the wrong |int| value:
>>> Var.TYPE = int
>>> Var.NDIM = 0
>>> Var.SPAN = 1, 3
>>> var.value = 2
>>> var.trim()
>>> var
var(2)
>>> var.value = 0
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `0` of parameter `var` of element `?` is not valid.
>>> var
var(0)
>>> var.value = 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `4` of parameter `var` of element `?` is not valid.
>>> var
var(4)
>>> from hydpy import INT_NAN
>>> var.value = 0
>>> var.trim(lower=0)
>>> var.trim(lower=INT_NAN)
>>> var.value = 4
>>> var.trim(upper=4)
>>> var.trim(upper=INT_NAN)
>>> Var.SPAN = 1, None
>>> var.value = 0
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `0` of parameter `var` of element `?` is not valid.
>>> var
var(0)
>>> Var.SPAN = None, 3
>>> var.value = 0
>>> var.trim()
>>> var.value = 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `4` of parameter `var` of element `?` is not valid.
>>> del Var.SPAN
>>> var.value = 0
>>> var.trim()
>>> var.value = 4
>>> var.trim()
>>> Var.SPAN = 1, 3
>>> Var.NDIM = 2
>>> var.shape = (1, 3)
>>> var.values = 2
>>> var.trim()
>>> var.values = 0, 1, 2
>>> var.trim()
Traceback (most recent call last):
...
ValueError: At least one value of parameter `var` of element `?` \
is not valid.
>>> var
var([[0, 1, 2]])
>>> var.values = 2, 3, 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: At least one value of parameter `var` of element `?` \
is not valid.
>>> var
var([[2, 3, 4]])
>>> var.values = 0, 0, 2
>>> var.trim(lower=[0, INT_NAN, 2])
>>> var.values = 2, 4, 4
>>> var.trim(upper=[2, INT_NAN, 4])
For |bool| values, defining outliers does not make much sense,
which is why function |trim| does nothing when applied on
variables handling |bool| values:
>>> Var.TYPE = bool
>>> var.trim()
If function |trim| encounters an unmanageable type, it raises an
exception like the following:
>>> Var.TYPE = str
>>> var.trim()
Traceback (most recent call last):
...
NotImplementedError: Method `trim` can only be applied on parameters \
handling floating point, integer, or boolean values, but the "value type" \
of parameter `var` is `str`.
>>> pub.options.warntrim = False
"""
if hydpy.pub.options.trimvariables:
if lower is None:
lower = self.SPAN[0]
if upper is None:
upper = self.SPAN[1]
type_ = getattr(self, 'TYPE', float)
if type_ is float:
if self.NDIM == 0:
_trim_float_0d(self, lower, upper)
else:
_trim_float_nd(self, lower, upper)
elif type_ is int:
if self.NDIM == 0:
_trim_int_0d(self, lower, upper)
else:
_trim_int_nd(self, lower, upper)
elif type_ is bool:
pass
else:
raise NotImplementedError(
f'Method `trim` can only be applied on parameters '
f'handling floating point, integer, or boolean values, '
f'but the "value type" of parameter `{self.name}` is '
f'`{objecttools.classname(self.TYPE)}`.') | [
"def",
"trim",
"(",
"self",
":",
"'Variable'",
",",
"lower",
"=",
"None",
",",
"upper",
"=",
"None",
")",
"->",
"None",
":",
"if",
"hydpy",
".",
"pub",
".",
"options",
".",
"trimvariables",
":",
"if",
"lower",
"is",
"None",
":",
"lower",
"=",
"self",
".",
"SPAN",
"[",
"0",
"]",
"if",
"upper",
"is",
"None",
":",
"upper",
"=",
"self",
".",
"SPAN",
"[",
"1",
"]",
"type_",
"=",
"getattr",
"(",
"self",
",",
"'TYPE'",
",",
"float",
")",
"if",
"type_",
"is",
"float",
":",
"if",
"self",
".",
"NDIM",
"==",
"0",
":",
"_trim_float_0d",
"(",
"self",
",",
"lower",
",",
"upper",
")",
"else",
":",
"_trim_float_nd",
"(",
"self",
",",
"lower",
",",
"upper",
")",
"elif",
"type_",
"is",
"int",
":",
"if",
"self",
".",
"NDIM",
"==",
"0",
":",
"_trim_int_0d",
"(",
"self",
",",
"lower",
",",
"upper",
")",
"else",
":",
"_trim_int_nd",
"(",
"self",
",",
"lower",
",",
"upper",
")",
"elif",
"type_",
"is",
"bool",
":",
"pass",
"else",
":",
"raise",
"NotImplementedError",
"(",
"f'Method `trim` can only be applied on parameters '",
"f'handling floating point, integer, or boolean values, '",
"f'but the \"value type\" of parameter `{self.name}` is '",
"f'`{objecttools.classname(self.TYPE)}`.'",
")"
] | Trim the value(s) of a |Variable| instance.
Usually, users do not need to apply function |trim| directly.
Instead, some |Variable| subclasses implement their own `trim`
methods relying on function |trim|. Model developers should
implement individual `trim` methods for their |Parameter| or
|Sequence| subclasses when their boundary values depend on the
actual project configuration (one example is soil moisture;
its lowest possible value should possibly be zero in all cases,
but its highest possible value could depend on another parameter
defining the maximum storage capacity).
For the following examples, we prepare a simple (not fully
functional) |Variable| subclass, making use of function |trim|
without any modifications. Function |trim| works slightly
different for variables handling |float|, |int|, and |bool|
values. We start with the most common content type |float|:
>>> from hydpy.core.variabletools import trim, Variable
>>> class Var(Variable):
... NDIM = 0
... TYPE = float
... SPAN = 1.0, 3.0
... trim = trim
... initinfo = 2.0, False
... __hydpy__connect_variable2subgroup__ = None
First, we enable the printing of warning messages raised by function
|trim|:
>>> from hydpy import pub
>>> pub.options.warntrim = True
When not passing boundary values, function |trim| extracts them from
class attribute `SPAN` of the given |Variable| instance, if available:
>>> var = Var(None)
>>> var.value = 2.0
>>> var.trim()
>>> var
var(2.0)
>>> var.value = 0.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `0.0` and `1.0`, respectively.
>>> var
var(1.0)
>>> var.value = 4.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `4.0` and `3.0`, respectively.
>>> var
var(3.0)
In the examples above, outlier values are set to the respective
boundary value, accompanied by suitable warning messages. For very
tiny deviations, which might be due to precision problems only,
outliers are trimmed but not reported:
>>> var.value = 1.0 - 1e-15
>>> var == 1.0
False
>>> trim(var)
>>> var == 1.0
True
>>> var.value = 3.0 + 1e-15
>>> var == 3.0
False
>>> var.trim()
>>> var == 3.0
True
Use arguments `lower` and `upper` to override the (eventually)
available `SPAN` entries:
>>> var.trim(lower=4.0)
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `3.0` and `4.0`, respectively.
>>> var.trim(upper=3.0)
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `4.0` and `3.0`, respectively.
Function |trim| interprets both |None| and |numpy.nan| values as if
no boundary value exists:
>>> import numpy
>>> var.value = 0.0
>>> var.trim(lower=numpy.nan)
>>> var.value = 5.0
>>> var.trim(upper=numpy.nan)
You can disable function |trim| via option |Options.trimvariables|:
>>> with pub.options.trimvariables(False):
... var.value = 5.0
... var.trim()
>>> var
var(5.0)
Alternatively, you can omit the warning messages only:
>>> with pub.options.warntrim(False):
... var.value = 5.0
... var.trim()
>>> var
var(3.0)
If a |Variable| subclass does not have (fixed) boundaries, give it
either no `SPAN` attribute or a |tuple| containing |None| values:
>>> del Var.SPAN
>>> var.value = 5.0
>>> var.trim()
>>> var
var(5.0)
>>> Var.SPAN = (None, None)
>>> var.trim()
>>> var
var(5.0)
The above examples deal with a 0-dimensional |Variable| subclass.
The following examples repeat the most relevant examples for a
2-dimensional subclass:
>>> Var.SPAN = 1.0, 3.0
>>> Var.NDIM = 2
>>> var.shape = 1, 3
>>> var.values = 2.0
>>> var.trim()
>>> var.values = 0.0, 1.0, 2.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 1. 2.]]` and `[[ 1. 1. 2.]]`, \
respectively.
>>> var
var([[1.0, 1.0, 2.0]])
>>> var.values = 2.0, 3.0, 4.0
>>> var.trim()
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 2. 3. 4.]]` and `[[ 2. 3. 3.]]`, \
respectively.
>>> var
var([[2.0, 3.0, 3.0]])
>>> var.values = 1.0-1e-15, 2.0, 3.0+1e-15
>>> var.values == (1.0, 2.0, 3.0)
array([[False, True, False]], dtype=bool)
>>> var.trim()
>>> var.values == (1.0, 2.0, 3.0)
array([[ True, True, True]], dtype=bool)
>>> var.values = 0.0, 2.0, 4.0
>>> var.trim(lower=numpy.nan, upper=numpy.nan)
>>> var
var([[0.0, 2.0, 4.0]])
>>> var.trim(lower=[numpy.nan, 3.0, 3.0])
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 2. 4.]]` and `[[ 0. 3. 3.]]`, \
respectively.
>>> var.values = 0.0, 2.0, 4.0
>>> var.trim(upper=[numpy.nan, 1.0, numpy.nan])
Traceback (most recent call last):
...
UserWarning: For variable `var` at least one value needed to be trimmed. \
The old and the new value(s) are `[[ 0. 2. 4.]]` and `[[ 1. 1. 4.]]`, \
respectively.
For |Variable| subclasses handling |float| values, setting outliers
to the respective boundary value might often be an acceptable approach.
However, this is often not the case for subclasses handling |int|
values, which often serve as option flags (e.g. to enable/disable
a certain hydrological process for different land-use types). Hence,
function |trim| raises an exception instead of a warning and does
not modify the wrong |int| value:
>>> Var.TYPE = int
>>> Var.NDIM = 0
>>> Var.SPAN = 1, 3
>>> var.value = 2
>>> var.trim()
>>> var
var(2)
>>> var.value = 0
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `0` of parameter `var` of element `?` is not valid.
>>> var
var(0)
>>> var.value = 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `4` of parameter `var` of element `?` is not valid.
>>> var
var(4)
>>> from hydpy import INT_NAN
>>> var.value = 0
>>> var.trim(lower=0)
>>> var.trim(lower=INT_NAN)
>>> var.value = 4
>>> var.trim(upper=4)
>>> var.trim(upper=INT_NAN)
>>> Var.SPAN = 1, None
>>> var.value = 0
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `0` of parameter `var` of element `?` is not valid.
>>> var
var(0)
>>> Var.SPAN = None, 3
>>> var.value = 0
>>> var.trim()
>>> var.value = 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: The value `4` of parameter `var` of element `?` is not valid.
>>> del Var.SPAN
>>> var.value = 0
>>> var.trim()
>>> var.value = 4
>>> var.trim()
>>> Var.SPAN = 1, 3
>>> Var.NDIM = 2
>>> var.shape = (1, 3)
>>> var.values = 2
>>> var.trim()
>>> var.values = 0, 1, 2
>>> var.trim()
Traceback (most recent call last):
...
ValueError: At least one value of parameter `var` of element `?` \
is not valid.
>>> var
var([[0, 1, 2]])
>>> var.values = 2, 3, 4
>>> var.trim()
Traceback (most recent call last):
...
ValueError: At least one value of parameter `var` of element `?` \
is not valid.
>>> var
var([[2, 3, 4]])
>>> var.values = 0, 0, 2
>>> var.trim(lower=[0, INT_NAN, 2])
>>> var.values = 2, 4, 4
>>> var.trim(upper=[2, INT_NAN, 4])

For |bool| values, defining outliers does not make much sense,
which is why function |trim| does nothing when applied to
variables handling |bool| values:

>>> Var.TYPE = bool
>>> var.trim()

If function |trim| encounters an unmanageable type, it raises an
exception like the following:

>>> Var.TYPE = str
>>> var.trim()
Traceback (most recent call last):
...
NotImplementedError: Method `trim` can only be applied on parameters \
handling floating point, integer, or boolean values, but the "value type" \
of parameter `var` is `str`.
>>> pub.options.warntrim = False | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L47-L376 | train |
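The float/int distinction described in the docstring above can be sketched with plain NumPy. The following is a simplified stand-alone function, not HydPy's actual |trim| (which additionally warns, applies numerical tolerances, and consults the variable's `SPAN`):

```python
import numpy

def trim_sketch(values, lower, upper, type_=float):
    # Floats: clip outliers to the nearest boundary value (HydPy
    # additionally emits a UserWarning in this case).
    # Ints: refuse to modify the wrong value and raise instead,
    # mirroring the behaviour described above.
    values = numpy.asarray(values)
    if issubclass(type_, float):
        return numpy.clip(values, lower, upper)
    if numpy.any(values < lower) or numpy.any(values > upper):
        raise ValueError('value(s) not within the valid span')
    return values

print(trim_sketch([0.0, 1.0, 2.0], 1.0, 3.0))  # [1. 1. 2.]
```

For `type_=int`, `trim_sketch([0, 1, 2], 1, 3, int)` raises a `ValueError` without touching the values, analogous to the doctests above.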
hydpy-dev/hydpy | hydpy/core/variabletools.py | _get_tolerance | def _get_tolerance(values):
"""Return some "numerical accuracy" to be expected for the
given floating point value(s) (see method |trim|)."""
tolerance = numpy.abs(values*1e-15)
if hasattr(tolerance, '__setitem__'):
tolerance[numpy.isinf(tolerance)] = 0.
elif numpy.isinf(tolerance):
tolerance = 0.
return tolerance | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L451-L459 | train |
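The magnitude-dependent tolerance returned by `_get_tolerance` lets |trim| treat values that exceed a bound only by floating point noise as equal to that bound. A minimal sketch of that idea (the helper name `within_tolerance` is illustrative, not part of HydPy):

```python
import numpy

def within_tolerance(value, bound):
    # A value counts as equal to the bound if it deviates by no more
    # than the relative tolerance |value| * 1e-15 used above.
    tolerance = numpy.abs(value * 1e-15)
    return abs(value - bound) <= tolerance

print(within_tolerance(1.0 - 1e-16, 1.0))  # True
print(within_tolerance(0.9, 1.0))          # False
```

Scaling the tolerance with the value's magnitude keeps the check meaningful for both very small and very large numbers; the `isinf` handling in `_get_tolerance` additionally avoids an infinite tolerance for infinite values.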
hydpy-dev/hydpy | hydpy/core/variabletools.py | _compare_variables_function_generator | def _compare_variables_function_generator(
method_string, aggregation_func):
"""Return a function usable as a comparison method for class |Variable|.
Pass the specific method (e.g. `__eq__`) and the corresponding
operator (e.g. `==`) as strings. Also pass either |numpy.all| or
|numpy.any| for aggregating multiple boolean values.
"""
def comparison_function(self, other):
"""Wrapper for comparison functions for class |Variable|."""
if self is other:
return method_string in ('__eq__', '__le__', '__ge__')
method = getattr(self.value, method_string)
try:
if hasattr(type(other), '__hydpy__get_value__'):
other = other.__hydpy__get_value__()
result = method(other)
if result is NotImplemented:
return result
return aggregation_func(result)
except BaseException:
objecttools.augment_excmessage(
f'While trying to compare variable '
f'{objecttools.elementphrase(self)} with object '
f'`{other}` of type `{objecttools.classname(other)}`')
return comparison_function | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L470-L495 | train |
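The generator pattern above can be illustrated with a stripped-down version that omits the identity shortcut and error-message augmentation (class and helper names here are illustrative only):

```python
import numpy

def make_comparator(method_string, aggregation_func):
    # Build a comparison method that delegates to the wrapped value's
    # numpy comparison and aggregates the element-wise booleans with
    # numpy.all (for ==, <=, >=) or numpy.any (for !=, <, >).
    def comparison_function(self, other):
        result = getattr(self.value, method_string)(other)
        if result is NotImplemented:
            return result
        return bool(aggregation_func(result))
    return comparison_function

class Vec:
    def __init__(self, value):
        self.value = numpy.asarray(value)
    __eq__ = make_comparator('__eq__', numpy.all)
    __ne__ = make_comparator('__ne__', numpy.any)

print(Vec([1, 2]) == [1, 2])  # True
print(Vec([1, 2]) == [1, 3])  # False
```

Pairing `==` with |numpy.all| but `!=` with |numpy.any| keeps the two operators consistent: two vectors are equal only if all entries match, and unequal as soon as any entry differs.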
hydpy-dev/hydpy | hydpy/core/variabletools.py | to_repr | def to_repr(self: Variable, values, brackets1d: Optional[bool] = False) \
-> str:
"""Return a valid string representation for the given |Variable|
object.
Function |to_repr| is intended for internal purposes only, more
specifically for defining string representations of subclasses
of class |Variable| like the following:
>>> from hydpy.core.variabletools import to_repr, Variable
>>> class Var(Variable):
... NDIM = 0
... TYPE = int
... __hydpy__connect_variable2subgroup__ = None
... initinfo = 1.0, False
>>> var = Var(None)
>>> var.value = 2
>>> var
var(2)

The following examples demonstrate all covered cases. Note that
option `brackets1d` allows choosing between a "vararg" and an
"iterable" string representation for 1-dimensional variables
(the first one being the default):

>>> print(to_repr(var, 2))
var(2)
>>> Var.NDIM = 1
>>> var = Var(None)
>>> var.shape = 3
>>> print(to_repr(var, range(3)))
var(0, 1, 2)
>>> print(to_repr(var, range(3), True))
var([0, 1, 2])
>>> print(to_repr(var, range(30)))
var(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29)
>>> print(to_repr(var, range(30), True))
var([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
>>> Var.NDIM = 2
>>> var = Var(None)
>>> var.shape = (2, 3)
>>> print(to_repr(var, [range(3), range(3, 6)]))
var([[0, 1, 2],
[3, 4, 5]])
>>> print(to_repr(var, [range(30), range(30, 60)]))
var([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]])
"""
prefix = f'{self.name}('
if isinstance(values, str):
string = f'{self.name}({values})'
elif self.NDIM == 0:
string = f'{self.name}({objecttools.repr_(values)})'
elif self.NDIM == 1:
if brackets1d:
string = objecttools.assignrepr_list(values, prefix, 72) + ')'
else:
string = objecttools.assignrepr_values(
values, prefix, 72) + ')'
else:
string = objecttools.assignrepr_list2(values, prefix, 72) + ')'
return '\n'.join(self.commentrepr + [string]) | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L1930-L1997 | train |
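The "vararg" versus bracketed representations shown above can be mimicked with the standard library's `textwrap` module. This is a rough sketch of the line-wrapping behaviour only, not HydPy's actual `objecttools` helpers:

```python
import textwrap

def to_repr_sketch(name, values, brackets=False):
    # Join the values, optionally enclosed in square brackets, and
    # wrap the result at 72 characters with a hanging indent that
    # aligns continuation lines under the opening parenthesis.
    prefix = f'{name}('
    body = ', '.join(str(v) for v in values)
    if brackets:
        body = f'[{body}]'
    return textwrap.fill(
        prefix + body + ')', width=72,
        subsequent_indent=' ' * len(prefix))

print(to_repr_sketch('var', range(3)))        # var(0, 1, 2)
print(to_repr_sketch('var', range(3), True))  # var([0, 1, 2])
```

For longer sequences, `textwrap.fill` breaks the line at 72 characters, much like the multi-line doctest outputs above (the exact break positions of HydPy's helpers may differ).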
hydpy-dev/hydpy | hydpy/core/variabletools.py | Variable.verify | def verify(self) -> None:
"""Raises a |RuntimeError| if at least one of the required values
of a |Variable| object is |None| or |numpy.nan|. The descriptor
`mask` defines, which values are considered to be necessary.
Example on a 0-dimensional |Variable|:
>>> from hydpy.core.variabletools import Variable
>>> class Var(Variable):
... NDIM = 0
... TYPE = float
... __hydpy__connect_variable2subgroup__ = None
... initinfo = 0.0, False
>>> var = Var(None)
>>> import numpy
>>> var.shape = ()
>>> var.value = 1.0
>>> var.verify()
>>> var.value = numpy.nan
>>> var.verify()
Traceback (most recent call last):
...
RuntimeError: For variable `var`, 1 required value has not been set yet.

Example of a 2-dimensional |Variable|:

>>> Var.NDIM = 2
>>> var = Var(None)
>>> var.shape = (2, 3)
>>> var.value = numpy.ones((2,3))
>>> var.value[:, 1] = numpy.nan
>>> var.verify()
Traceback (most recent call last):
...
RuntimeError: For variable `var`, 2 required values \
have not been set yet.
>>> Var.mask = var.mask
>>> Var.mask[0, 1] = False
>>> var.verify()
Traceback (most recent call last):
...
RuntimeError: For variable `var`, 1 required value has not been set yet.
>>> Var.mask[1, 1] = False
>>> var.verify()
"""
nmbnan: int = numpy.sum(numpy.isnan(
numpy.array(self.value)[self.mask]))
if nmbnan:
if nmbnan == 1:
text = 'value has'
else:
text = 'values have'
raise RuntimeError(
f'For variable {objecttools.devicephrase(self)}, '
f'{nmbnan} required {text} not been set yet.') | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L1271-L1327 | train |
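The counting logic at the core of `verify` can be sketched with NumPy alone. The helper name below is illustrative; the real method additionally builds the singular/plural error message and raises the |RuntimeError|:

```python
import numpy

def count_missing(values, mask):
    # Count the nan entries among the values selected by the mask,
    # exactly as verify() does before deciding whether to raise.
    return int(numpy.sum(numpy.isnan(numpy.asarray(values)[mask])))

values = numpy.ones((2, 3))
values[:, 1] = numpy.nan
mask = numpy.full((2, 3), True)
print(count_missing(values, mask))  # 2
mask[0, 1] = False                  # mark one nan entry as irrelevant
print(count_missing(values, mask))  # 1
```

Indexing `values` with a boolean mask flattens the selection, so the same helper works for any dimensionality, matching the 0- and 2-dimensional doctests above.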
hydpy-dev/hydpy | hydpy/core/variabletools.py | Variable.average_values | def average_values(self, *args, **kwargs) -> float:
"""Average the actual values of the |Variable| object.
For 0-dimensional |Variable| objects, the result of method
|Variable.average_values| equals |Variable.value|. The
following example shows this for the sloppily defined class
`SoilMoisture`:
>>> from hydpy.core.variabletools import Variable
>>> class SoilMoisture(Variable):
... NDIM = 0
... TYPE = float
... refweigths = None
... availablemasks = None
... __hydpy__connect_variable2subgroup__ = None
... initinfo = None
>>> sm = SoilMoisture(None)
>>> sm.value = 200.0
>>> sm.average_values()
200.0
When the dimensionality of this class is increased to one,
applying method |Variable.average_values| results in the
following error:
>>> SoilMoisture.NDIM = 1
>>> import numpy
>>> SoilMoisture.shape = (3,)
>>> SoilMoisture.value = numpy.array([200.0, 400.0, 500.0])
>>> sm.average_values()
Traceback (most recent call last):
...
AttributeError: While trying to calculate the mean value \
of variable `soilmoisture`, the following error occurred: Variable \
`soilmoisture` does not define any weighting coefficients.
So model developers have to define another (in this case
1-dimensional) |Variable| subclass (usually a |Parameter|
subclass), and make the relevant object available via property
|Variable.refweights|:
>>> class Area(Variable):
... NDIM = 1
... shape = (3,)
... value = numpy.array([1.0, 1.0, 2.0])
... __hydpy__connect_variable2subgroup__ = None
... initinfo = None
>>> area = Area(None)
>>> SoilMoisture.refweights = property(lambda self: area)
>>> sm.average_values()
400.0
In the examples above, all single entries of `values` are relevant,
which is the default case. However, subclasses of |Variable| can
define an alternative mask that marks some entries as irrelevant.
Assume, for example, that our `SoilMoisture` object contains three
single values, each one associated with a specific hydrological
response unit (hru). To indicate that soil moisture is undefined
for the third unit (maybe because it is a water area), we set the
third entry of the verification mask to |False|:
>>> from hydpy.core.masktools import DefaultMask
>>> class Soil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([True, True, False])
>>> SoilMoisture.mask = Soil()
>>> sm.average_values()
300.0
Alternatively, method |Variable.average_values| accepts additional
masking information as positional or keyword arguments. For this,
the corresponding model must implement some alternative masks,
which are provided by property |Variable.availablemasks|.
We mock this property with a new |Masks| object, handling one
mask for flat soils (only the first hru), one mask for deep soils
(only the second hru), and one mask for water areas (only the
third hru):
>>> class FlatSoil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([True, False, False])
>>> class DeepSoil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([False, True, False])
>>> class Water(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([False, False, True])
>>> from hydpy.core import masktools
>>> class Masks(masktools.Masks):
... CLASSES = (FlatSoil,
... DeepSoil,
... Water)
>>> SoilMoisture.availablemasks = Masks(None)
One can pass either the mask classes themselves or their names:
>>> sm.average_values(sm.availablemasks.flatsoil)
200.0
>>> sm.average_values('deepsoil')
400.0
Both variants can be combined:
>>> sm.average_values(sm.availablemasks.deepsoil, 'flatsoil')
300.0
The following error occurs if the general mask of the variable
does not contain the given masks:
>>> sm.average_values('flatsoil', 'water')
Traceback (most recent call last):
...
ValueError: While trying to calculate the mean value of variable \
`soilmoisture`, the following error occurred: Based on the arguments \
`('flatsoil', 'water')` and `{}` the mask `CustomMask([ True, False, True])` \
has been determined, which is not a submask of `Soil([ True, True, False])`.
Applying masks with custom options is also supported. One can change
the behaviour of the following mask via the argument `complete`:
>>> class AllOrNothing(DefaultMask):
... @classmethod
... def new(cls, variable, complete):
... if complete:
... bools = [True, True, True]
... else:
... bools = [False, False, False]
... return cls.array2mask(bools)
>>> class Masks(Masks):
... CLASSES = (FlatSoil,
... DeepSoil,
... Water,
... AllOrNothing)
>>> SoilMoisture.availablemasks = Masks(None)
Again, one can apply the mask class directly (but note that one
has to pass the relevant variable as the first argument):
>>> sm.average_values( # doctest: +ELLIPSIS
... sm.availablemasks.allornothing(sm, complete=True))
Traceback (most recent call last):
...
ValueError: While trying to...
Alternatively, one can pass the mask name as a keyword and pack
the mask's options into a |dict| object:
>>> sm.average_values(allornothing={'complete': False})
nan
You can combine all variants explained above:
>>> sm.average_values(
... 'deepsoil', flatsoil={}, allornothing={'complete': False})
300.0
"""
try:
if not self.NDIM:
return self.value
mask = self.get_submask(*args, **kwargs)
if numpy.any(mask):
weights = self.refweights[mask]
return numpy.sum(weights*self[mask])/numpy.sum(weights)
return numpy.nan
except BaseException:
objecttools.augment_excmessage(
f'While trying to calculate the mean value of variable '
f'{objecttools.devicephrase(self)}') | python | def average_values(self, *args, **kwargs) -> float:
"""Average the actual values of the |Variable| object.
For 0-dimensional |Variable| objects, the result of method
|Variable.average_values| equals |Variable.value|. The
following example shows this for the sloppily defined class
`SoilMoisture`:
>>> from hydpy.core.variabletools import Variable
>>> class SoilMoisture(Variable):
... NDIM = 0
... TYPE = float
... refweigths = None
... availablemasks = None
... __hydpy__connect_variable2subgroup__ = None
... initinfo = None
>>> sm = SoilMoisture(None)
>>> sm.value = 200.0
>>> sm.average_values()
200.0
When the dimensionality of this class is increased to one,
applying method |Variable.average_values| results in the
following error:
>>> SoilMoisture.NDIM = 1
>>> import numpy
>>> SoilMoisture.shape = (3,)
>>> SoilMoisture.value = numpy.array([200.0, 400.0, 500.0])
>>> sm.average_values()
Traceback (most recent call last):
...
AttributeError: While trying to calculate the mean value \
of variable `soilmoisture`, the following error occurred: Variable \
`soilmoisture` does not define any weighting coefficients.
So model developers have to define another (in this case
1-dimensional) |Variable| subclass (usually a |Parameter|
subclass), and make the relevant object available via property
|Variable.refweights|:
>>> class Area(Variable):
... NDIM = 1
... shape = (3,)
... value = numpy.array([1.0, 1.0, 2.0])
... __hydpy__connect_variable2subgroup__ = None
... initinfo = None
>>> area = Area(None)
>>> SoilMoisture.refweights = property(lambda self: area)
>>> sm.average_values()
400.0
In the examples above, all single entries of `values` are relevant,
which is the default case. However, subclasses of |Variable| can
define an alternative mask, allowing to make some entries
irrelevant. Assume for example, that our `SoilMoisture` object
contains three single values, each one associated with a specific
hydrological response unit (hru). To indicate that soil moisture
is undefined for the third unit, (maybe because it is a water area),
we set the third entry of the verification mask to |False|:
>>> from hydpy.core.masktools import DefaultMask
>>> class Soil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([True, True, False])
>>> SoilMoisture.mask = Soil()
>>> sm.average_values()
300.0
Alternatively, method |Variable.average_values| accepts additional
masking information as positional or keyword arguments. Therefore,
the corresponding model must implement some alternative masks,
which are provided by property |Variable.availablemasks|.
We mock this property with a new |Masks| object, handling one
mask for flat soils (only the first hru), one mask for deep soils
(only the second hru), and one mask for water areas (only the
third hru):
>>> class FlatSoil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([True, False, False])
>>> class DeepSoil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([False, True, False])
>>> class Water(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([False, False, True])
>>> from hydpy.core import masktools
>>> class Masks(masktools.Masks):
... CLASSES = (FlatSoil,
... DeepSoil,
... Water)
>>> SoilMoisture.availablemasks = Masks(None)
One can pass either the mask classes themselves or their names:
>>> sm.average_values(sm.availablemasks.flatsoil)
200.0
>>> sm.average_values('deepsoil')
400.0
Both variants can be combined:
>>> sm.average_values(sm.availablemasks.deepsoil, 'flatsoil')
300.0
The following error happens if the general mask of the variable
does not contain the given masks:
>>> sm.average_values('flatsoil', 'water')
Traceback (most recent call last):
...
ValueError: While trying to calculate the mean value of variable \
`soilmoisture`, the following error occurred: Based on the arguments \
`('flatsoil', 'water')` and `{}` the mask `CustomMask([ True, False, True])` \
has been determined, which is not a submask of `Soil([ True, True, False])`.
Applying masks with custom options is also supported. One can change
the behaviour of the following mask via the argument `complete`:
>>> class AllOrNothing(DefaultMask):
... @classmethod
... def new(cls, variable, complete):
... if complete:
... bools = [True, True, True]
... else:
... bools = [False, False, False]
... return cls.array2mask(bools)
>>> class Masks(Masks):
... CLASSES = (FlatSoil,
... DeepSoil,
... Water,
... AllOrNothing)
>>> SoilMoisture.availablemasks = Masks(None)
Again, one can apply the mask class directly (but note that one
has to pass the relevant variable as the first argument.):
>>> sm.average_values( # doctest: +ELLIPSIS
... sm.availablemasks.allornothing(sm, complete=True))
Traceback (most recent call last):
...
ValueError: While trying to...
Alternatively, one can pass the mask name as a keyword and pack
the mask's options into a |dict| object:
>>> sm.average_values(allornothing={'complete': False})
nan
You can combine all variants explained above:
>>> sm.average_values(
... 'deepsoil', flatsoil={}, allornothing={'complete': False})
300.0
"""
try:
if not self.NDIM:
return self.value
mask = self.get_submask(*args, **kwargs)
if numpy.any(mask):
weights = self.refweights[mask]
return numpy.sum(weights*self[mask])/numpy.sum(weights)
return numpy.nan
except BaseException:
objecttools.augment_excmessage(
f'While trying to calculate the mean value of variable '
f'{objecttools.devicephrase(self)}') | [
"def",
"average_values",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"->",
"float",
":",
"try",
":",
"if",
"not",
"self",
".",
"NDIM",
":",
"return",
"self",
".",
"value",
"mask",
"=",
"self",
".",
"get_submask",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"if",
"numpy",
".",
"any",
"(",
"mask",
")",
":",
"weights",
"=",
"self",
".",
"refweights",
"[",
"mask",
"]",
"return",
"numpy",
".",
"sum",
"(",
"weights",
"*",
"self",
"[",
"mask",
"]",
")",
"/",
"numpy",
".",
"sum",
"(",
"weights",
")",
"return",
"numpy",
".",
"nan",
"except",
"BaseException",
":",
"objecttools",
".",
"augment_excmessage",
"(",
"f'While trying to calculate the mean value of variable '",
"f'{objecttools.devicephrase(self)}'",
")"
] | Average the actual values of the |Variable| object.
For 0-dimensional |Variable| objects, the result of method
|Variable.average_values| equals |Variable.value|. The
following example shows this for the sloppily defined class
`SoilMoisture`:
>>> from hydpy.core.variabletools import Variable
>>> class SoilMoisture(Variable):
... NDIM = 0
... TYPE = float
... refweigths = None
... availablemasks = None
... __hydpy__connect_variable2subgroup__ = None
... initinfo = None
>>> sm = SoilMoisture(None)
>>> sm.value = 200.0
>>> sm.average_values()
200.0
When the dimensionality of this class is increased to one,
applying method |Variable.average_values| results in the
following error:
>>> SoilMoisture.NDIM = 1
>>> import numpy
>>> SoilMoisture.shape = (3,)
>>> SoilMoisture.value = numpy.array([200.0, 400.0, 500.0])
>>> sm.average_values()
Traceback (most recent call last):
...
AttributeError: While trying to calculate the mean value \
of variable `soilmoisture`, the following error occurred: Variable \
`soilmoisture` does not define any weighting coefficients.
So model developers have to define another (in this case
1-dimensional) |Variable| subclass (usually a |Parameter|
subclass), and make the relevant object available via property
|Variable.refweights|:
>>> class Area(Variable):
... NDIM = 1
... shape = (3,)
... value = numpy.array([1.0, 1.0, 2.0])
... __hydpy__connect_variable2subgroup__ = None
... initinfo = None
>>> area = Area(None)
>>> SoilMoisture.refweights = property(lambda self: area)
>>> sm.average_values()
400.0
In the examples above, all single entries of `values` are relevant,
which is the default case. However, subclasses of |Variable| can
define an alternative mask that marks some entries as
irrelevant. Assume, for example, that our `SoilMoisture` object
contains three single values, each one associated with a specific
hydrological response unit (hru). To indicate that soil moisture
is undefined for the third unit (maybe because it is a water area),
we set the third entry of the verification mask to |False|:
>>> from hydpy.core.masktools import DefaultMask
>>> class Soil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([True, True, False])
>>> SoilMoisture.mask = Soil()
>>> sm.average_values()
300.0
Alternatively, method |Variable.average_values| accepts additional
masking information as positional or keyword arguments. Therefore,
the corresponding model must implement some alternative masks,
which are provided by property |Variable.availablemasks|.
We mock this property with a new |Masks| object, handling one
mask for flat soils (only the first hru), one mask for deep soils
(only the second hru), and one mask for water areas (only the
third hru):
>>> class FlatSoil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([True, False, False])
>>> class DeepSoil(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([False, True, False])
>>> class Water(DefaultMask):
... @classmethod
... def new(cls, variable, **kwargs):
... return cls.array2mask([False, False, True])
>>> from hydpy.core import masktools
>>> class Masks(masktools.Masks):
... CLASSES = (FlatSoil,
... DeepSoil,
... Water)
>>> SoilMoisture.availablemasks = Masks(None)
One can pass either the mask classes themselves or their names:
>>> sm.average_values(sm.availablemasks.flatsoil)
200.0
>>> sm.average_values('deepsoil')
400.0
Both variants can be combined:
>>> sm.average_values(sm.availablemasks.deepsoil, 'flatsoil')
300.0
The following error occurs if the given masks are not submasks of
the variable's general mask:
>>> sm.average_values('flatsoil', 'water')
Traceback (most recent call last):
...
ValueError: While trying to calculate the mean value of variable \
`soilmoisture`, the following error occurred: Based on the arguments \
`('flatsoil', 'water')` and `{}` the mask `CustomMask([ True, False, True])` \
has been determined, which is not a submask of `Soil([ True, True, False])`.
Applying masks with custom options is also supported. One can change
the behaviour of the following mask via the argument `complete`:
>>> class AllOrNothing(DefaultMask):
... @classmethod
... def new(cls, variable, complete):
... if complete:
... bools = [True, True, True]
... else:
... bools = [False, False, False]
... return cls.array2mask(bools)
>>> class Masks(Masks):
... CLASSES = (FlatSoil,
... DeepSoil,
... Water,
... AllOrNothing)
>>> SoilMoisture.availablemasks = Masks(None)
Again, one can apply the mask class directly (but note that one
has to pass the relevant variable as the first argument):
>>> sm.average_values( # doctest: +ELLIPSIS
... sm.availablemasks.allornothing(sm, complete=True))
Traceback (most recent call last):
...
ValueError: While trying to...
Alternatively, one can pass the mask name as a keyword and pack
the mask's options into a |dict| object:
>>> sm.average_values(allornothing={'complete': False})
nan
You can combine all variants explained above:
>>> sm.average_values(
... 'deepsoil', flatsoil={}, allornothing={'complete': False})
300.0 | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L1339-L1510 | train |
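For the masked case, the `average_values` code above reduces to a weighted mean over the selected entries. A minimal pure-Python sketch of that core computation (the helper name is illustrative, not part of hydpy's API); with the docstring's areas `(1, 1, 2)` as weights it reproduces the `400.0` and `300.0` results shown above:

```python
import math

def masked_weighted_mean(values, weights, mask):
    # Weighted mean over the entries selected by a boolean mask,
    # mirroring the core of Variable.average_values above.
    pairs = [(w, v) for v, w, m in zip(values, weights, mask) if m]
    if not pairs:
        return math.nan  # no relevant entries -> undefined mean
    total_weight = sum(w for w, _ in pairs)
    return sum(w * v for w, v in pairs) / total_weight

print(masked_weighted_mean([200.0, 400.0, 500.0], [1.0, 1.0, 2.0], [True, True, True]))   # 400.0
print(masked_weighted_mean([200.0, 400.0, 500.0], [1.0, 1.0, 2.0], [True, True, False]))  # 300.0
```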
hydpy-dev/hydpy | hydpy/core/variabletools.py | Variable.get_submask | def get_submask(self, *args, **kwargs) -> masktools.CustomMask:
"""Get a sub-mask of the mask handled by the actual |Variable| object
based on the given arguments.
See the documentation on method |Variable.average_values| for
further information.
"""
if args or kwargs:
masks = self.availablemasks
mask = masktools.CustomMask(numpy.full(self.shape, False))
for arg in args:
mask = mask + self._prepare_mask(arg, masks)
for key, value in kwargs.items():
mask = mask + self._prepare_mask(key, masks, **value)
if mask not in self.mask:
raise ValueError(
f'Based on the arguments `{args}` and `{kwargs}` '
f'the mask `{repr(mask)}` has been determined, '
f'which is not a submask of `{repr(self.mask)}`.')
else:
mask = self.mask
    return mask | python | Get a sub-mask of the mask handled by the actual |Variable| object
based on the given arguments.
See the documentation on method |Variable.average_values| for
further information. | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L1517-L1538 | train |
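`get_submask` above builds the requested mask as a union (`mask + ...`) and then rejects it if it is not contained in the variable's general mask. A small sketch of that mask arithmetic with plain boolean lists standing in for `CustomMask` objects (`union` and `is_submask` are illustrative names):

```python
def union(*masks):
    # Element-wise OR, the effect of summing CustomMask objects in get_submask.
    return [any(bits) for bits in zip(*masks)]

def is_submask(candidate, general):
    # True if the candidate selects nothing that the general mask excludes --
    # the check behind the "is not a submask" ValueError above.
    return all(g or not c for c, g in zip(candidate, general))

flatsoil = [True, False, False]
water = [False, False, True]
soil = [True, True, False]         # the variable's general mask

combined = union(flatsoil, water)  # [True, False, True]
print(is_submask(combined, soil))  # False: 'water' selects an excluded unit
```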
hydpy-dev/hydpy | hydpy/core/variabletools.py | Variable.commentrepr | def commentrepr(self) -> List[str]:
"""A list with comments for making string representations
more informative.
With option |Options.reprcomments| being disabled,
|Variable.commentrepr| is empty.
"""
if hydpy.pub.options.reprcomments:
return [f'# {line}' for line in
textwrap.wrap(objecttools.description(self), 72)]
    return [] | python | A list with comments for making string representations
more informative.
With option |Options.reprcomments| being disabled,
|Variable.commentrepr| is empty. | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/variabletools.py#L1744-L1754 | train |
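`commentrepr` above is essentially `textwrap.wrap` plus a `# ` prefix. A standalone sketch, with a plain string standing in for `objecttools.description` and the `reprcomments` option assumed enabled (`comment_lines` is an illustrative name):

```python
import textwrap

def comment_lines(description, width=72):
    # Wrap a description into '# '-prefixed lines of at most `width`
    # characters of wrapped text, as Variable.commentrepr does.
    return [f'# {line}' for line in textwrap.wrap(description, width)]

for line in comment_lines('Average the actual values of the Variable object.'):
    print(line)
```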
wind39/spartacus | Spartacus/Report.py | AddTable | def AddTable(p_workSheet = None, p_headerDict = None, p_startColumn = 1, p_startRow = 1, p_headerHeight = None, p_data = None, p_mainTable = False, p_conditionalFormatting = None, p_tableStyleInfo = None, p_withFilters = True):
"""Insert a table in a given worksheet.
Args:
p_workSheet (openpyxl.worksheet.worksheet.Worksheet): the worksheet where the table will be inserted. Defaults to None.
p_headerDict (collections.OrderedDict): an ordered dict that contains table header columns.
Notes:
Each entry is in the following form:
Key: Name of the column to be searched in p_data.Columns.
Value: Spartacus.Report.Field instance.
Examples:
p_headerDict = collections.OrderedDict([
(
'field_one',
Field(
p_name = 'Code',
p_width = 15,
p_data = Data(
p_type = 'int'
)
)
),
(
'field_two',
Field(
p_name = 'Result',
p_width = 15,
p_data = Data(
p_type = 'int_formula'
)
)
)
])
p_startColumn (int): the column number where the table should start. Defaults to 1.
Notes:
Must be a positive integer.
p_startRow (int): the row number where the table should start. Defaults to 1.
Notes:
Must be a positive integer.
p_headerHeight (float): the header row height in pt. Defaults to None.
Notes:
Must be a non-negative number or None.
p_data (Spartacus.Database.DataTable): the datatable that contains the data that will be inserted into the excel table. Defaults to None.
Notes:
            If the corresponding column data type in p_headerDict is some kind of formula, then the wildcards below can be used:
                #row#: will be replaced by the number of the current row.
                #column_<columnname>#: will be replaced by the letter of the column named <columnname>.
Examples:
p_data = Spartacus.Database.DataTable that contains:
Columns: ['field_one', 'field_two'].
Rows: [
[
'HAHAHA',
'=if(#column_field_one##row# = "HAHAHA", 1, 0)'
],
[
'HEHEHE',
'=if(#column_field_one##row# = "HAHAHA", 1, 0)'
]
]
        p_mainTable (bool): whether this table is the main table of the current worksheet. Defaults to False.
            Notes:
                If it's the main table, each column's p_width and p_hidden settings are applied and the panes are frozen below the first table row; these three settings are ignored otherwise.
p_conditionalFormatting (Spartacus.Report.ConditionalFormatting): a conditional formatting that should be applied to data rows. Defaults to None.
Notes:
Will be applied to all data rows of this table.
                The following wildcards can be used and will be replaced accordingly:
                    #row#: will be replaced by the number of the current data row.
                    #column_<columnname>#: will be replaced by the letter of the column named <columnname>.
Examples:
p_conditionalFormatting = ConditionalFormatting(
p_formula = '$Y#row# = 2',
p_differentialStyle = openpyxl.styles.differential.DifferentialStyle(
fill = openpyxl.styles.PatternFill(
bgColor = 'D3D3D3'
)
)
)
p_tableStyleInfo (openpyxl.worksheet.table.TableStyleInfo): a style to be applied to this table. Defaults to None.
Notes:
Will not be applied to summaries, if any.
Examples:
p_tableStyleInfo = openpyxl.worksheet.table.TableStyleInfo(
name = 'TableStyleMedium23',
showFirstColumn = True,
showLastColumn = True,
showRowStripes = True,
showColumnStripes = False
)
        p_withFilters (bool): whether the table must contain auto-filters. Defaults to True.
    Yields:
        int: every 1000 lines inserted into the table, yields the current line number.
    Raises:
        Spartacus.Report.Exception: custom exception raised when an argument fails validation.
"""
if not isinstance(p_workSheet, openpyxl.worksheet.worksheet.Worksheet):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_workSheet" must be of type "openpyxl.worksheet.worksheet.Worksheet".')
if not isinstance(p_headerDict, collections.OrderedDict):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_headerDict" must be of type "collections.OrderedDict".')
if not isinstance(p_startColumn, int):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startColumn" must be of type "int".')
if p_startColumn < 1:
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startColumn" must be a positive integer.')
if not isinstance(p_startRow, int):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startRow" must be of type "int".')
if p_startRow < 1:
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startRow" must be a positive integer.')
if p_headerHeight is not None and not isinstance(p_headerHeight, int) and not isinstance(p_headerHeight, float):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_headerHeight" must be None or of type "int" or "float".')
if not isinstance(p_data, Spartacus.Database.DataTable):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_data" must be of type "Spartacus.Database.DataTable".')
if not isinstance(p_mainTable, bool):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_mainTable" must be of type "bool".')
if p_conditionalFormatting is not None and not isinstance(p_conditionalFormatting, ConditionalFormatting):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_conditionalFormatting" must be None or of type "Spartacus.Report.ConditionalFormatting".')
if p_tableStyleInfo is not None and not isinstance(p_tableStyleInfo, openpyxl.worksheet.table.TableStyleInfo):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_tableStyleInfo" must be None or of type "openpyxl.worksheet.table.TableStyleInfo".')
if p_withFilters is not None and not isinstance(p_withFilters, bool):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_withFilters" must be None or of type "bool".')
#Format Header
if p_headerHeight is not None:
p_workSheet.row_dimensions[p_startRow].height = p_headerHeight
v_headerList = list(p_headerDict.keys())
for i in range(len(v_headerList)):
v_header = p_headerDict[v_headerList[i]]
v_letter = openpyxl.utils.get_column_letter(i + p_startColumn)
v_cell = p_workSheet['{0}{1}'.format(v_letter, p_startRow)]
v_cell.value = v_header.name
if p_mainTable:
p_workSheet.column_dimensions[v_letter].width = v_header.width
p_workSheet.column_dimensions[v_letter].hidden = v_header.hidden
if v_header.comment is not None:
v_cell.comment = v_header.comment
if v_header.border is not None:
v_cell.border = v_header.border
if v_header.font is not None:
v_cell.font = v_header.font
if v_header.fill is not None:
v_cell.fill = v_header.fill
if v_header.alignment is not None:
v_cell.alignment = v_header.alignment
if p_mainTable:
p_workSheet.freeze_panes = 'A{0}'.format(p_startRow + 1)
#used in formula fields, if it's the case
v_pattern = re.compile(r'#column_[^\n\r#]*#')
v_line = 0
#Fill content
for v_row in p_data.Rows:
v_line += 1
for i in range(len(v_headerList)):
v_headerData = p_headerDict[v_headerList[i]].data
v_letter = openpyxl.utils.get_column_letter(i + p_startColumn)
v_cell = p_workSheet['{0}{1}'.format(v_letter, v_line + p_startRow)] #Plus p_startRow to "jump" report header lines
if v_headerData.border is not None:
v_cell.border = v_headerData.border
if v_headerData.font is not None:
v_cell.font = v_headerData.font
if v_headerData.fill is not None:
v_cell.fill = v_headerData.fill
if v_headerData.alignment is not None:
v_cell.alignment = v_headerData.alignment
if v_headerData.type == 'int':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = int(v_row[v_headerList[i]])
except (Exception, TypeError, ValueError):
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '0'
elif v_headerData.type == 'float':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = float(v_row[v_headerList[i]])
except (Exception, TypeError, ValueError):
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '#,##0.00'
elif v_headerData.type == 'float4':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = float(v_row[v_headerList[i]])
except (Exception, TypeError, ValueError):
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '#,##0.0000'
elif v_headerData.type == 'percent':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = float(v_row[v_headerList[i]])
except (Exception, TypeError, ValueError):
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '0.00%'
elif v_headerData.type == 'date':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = 'DD/MM/YYYY'
elif v_headerData.type == 'str':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
elif v_headerData.type == 'bool':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = bool(v_row[v_headerList[i]]) if v_row[v_headerList[i]] is not None and str(v_row[v_headerList[i]]).strip() != '' else ''
except (Exception, TypeError, ValueError):
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
if v_headerData.type == 'int_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '0'
elif v_headerData.type == 'float_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '#,##0.00'
elif v_headerData.type == 'float4_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '#,##0.0000'
elif v_headerData.type == 'percent_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '0.00%'
elif v_headerData.type == 'date_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = 'DD/MM/YYYY'
elif v_headerData.type == 'str_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
if v_line % 1000 == 0:
yield v_line
v_lastLine = len(p_data.Rows) + p_startRow
#Apply conditional formatting, if any
if p_conditionalFormatting is not None:
v_startLetter = openpyxl.utils.get_column_letter(p_startColumn)
v_finalLetter = openpyxl.utils.get_column_letter(len(v_headerList) + p_startColumn - 1)
v_formula = p_conditionalFormatting.formula.replace('#row#', str(p_startRow + 1))
v_match = re.search(v_pattern, v_formula)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_formula[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_formula = v_formula[:v_start] + v_matchColumn + v_formula[v_end:]
v_match = re.search(v_pattern, v_formula)
v_rule = openpyxl.formatting.rule.Rule(
type = 'expression',
formula = [v_formula],
dxf = p_conditionalFormatting.differentialStyle
)
p_workSheet.conditional_formatting.add(
'{0}{1}:{2}{3}'.format(v_startLetter, p_startRow + 1, v_finalLetter, v_lastLine),
v_rule
)
#Build Summary
for i in range(len(v_headerList)):
v_headerSummaryList = p_headerDict[v_headerList[i]].summaryList
for v_headerSummary in v_headerSummaryList:
v_letter = openpyxl.utils.get_column_letter(i + p_startColumn)
v_index = p_startRow - 1
if v_headerSummary.index < 0:
v_index = p_startRow + v_headerSummary.index
elif v_headerSummary.index > 0:
v_index = v_lastLine + v_headerSummary.index
v_value = v_headerSummary.function.replace('#column#', v_letter).replace('#start_row#', str(p_startRow + 1)).replace('#end_row#', str(v_lastLine))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell = p_workSheet['{0}{1}'.format(v_letter, v_index)]
v_cell.value = v_value
if v_headerSummary.border is not None:
v_cell.border = v_headerSummary.border
if v_headerSummary.font is not None:
v_cell.font = v_headerSummary.font
if v_headerSummary.fill is not None:
v_cell.fill = v_headerSummary.fill
if v_headerSummary.type == 'int':
v_cell.number_format = '0'
elif v_headerSummary.type == 'float':
v_cell.number_format = '#,##0.00'
elif v_headerSummary.type == 'float4':
v_cell.number_format = '#,##0.0000'
elif v_headerSummary.type == 'percent':
v_cell.number_format = '0.00%'
#Create a new table and add it to worksheet
v_name = 'Table_{0}_{1}'.format(p_workSheet.title.replace(' ', ''), len(p_workSheet._tables) + 1) #excel doesn't accept same displayName in more than one table.
v_name = ''.join([c for c in v_name if c.isalnum()]) #Excel doesn't accept non-alphanumeric characters.
v_table = openpyxl.worksheet.table.Table(
displayName = v_name,
ref = '{0}{1}:{2}{3}'.format(
openpyxl.utils.get_column_letter(p_startColumn),
p_startRow,
openpyxl.utils.get_column_letter(p_startColumn + len(v_headerList) - 1),
v_lastLine
)
)
if p_tableStyleInfo is not None:
v_table.tableStyleInfo = p_tableStyleInfo
if not p_withFilters:
v_table.headerRowCount = 0
p_workSheet.add_table(v_table) | python | def AddTable(p_workSheet = None, p_headerDict = None, p_startColumn = 1, p_startRow = 1, p_headerHeight = None, p_data = None, p_mainTable = False, p_conditionalFormatting = None, p_tableStyleInfo = None, p_withFilters = True):
"""Insert a table in a given worksheet.
Args:
p_workSheet (openpyxl.worksheet.worksheet.Worksheet): the worksheet where the table will be inserted. Defaults to None.
p_headerDict (collections.OrderedDict): an ordered dict that contains table header columns.
Notes:
Each entry is in the following form:
Key: Name of the column to be searched in p_data.Columns.
Value: Spartacus.Report.Field instance.
Examples:
p_headerDict = collections.OrderedDict([
(
'field_one',
Field(
p_name = 'Code',
p_width = 15,
p_data = Data(
p_type = 'int'
)
)
),
(
'field_two',
Field(
p_name = 'Result',
p_width = 15,
p_data = Data(
p_type = 'int_formula'
)
)
)
])
p_startColumn (int): the column number where the table should start. Defaults to 1.
Notes:
Must be a positive integer.
p_startRow (int): the row number where the table should start. Defaults to 1.
Notes:
Must be a positive integer.
p_headerHeight (float): the header row height in pt. Defaults to None.
Notes:
Must be a non-negative number or None.
p_data (Spartacus.Database.DataTable): the datatable that contains the data that will be inserted into the excel table. Defaults to None.
Notes:
If the corresponding column data type in p_headerDict is some kind of formula, then below wildcards can be used:
#row#: the current row.
#column_columname#: will be replaced by the letter of the column.
Examples:
p_data = Spartacus.Database.DataTable that contains:
Columns: ['field_one', 'field_two'].
Rows: [
[
'HAHAHA',
'=if(#column_field_one##row# = "HAHAHA", 1, 0)'
],
[
'HEHEHE',
'=if(#column_field_one##row# = "HAHAHA", 1, 0)'
]
]
p_mainTable (bool): if this table is the main table of the current worksheet. Defaults to False.
Notes:
If it's the main table, then it will consider p_width, p_hidden and freeze panes in the first table row. The 3 parameters are ignored otherwise.
p_conditionalFormatting (Spartacus.Report.ConditionalFormatting): a conditional formatting that should be applied to data rows. Defaults to None.
Notes:
Will be applied to all data rows of this table.
A wildcard can be used and be replaced properly:
#row#: the current data row.
#column_columname#: will be replaced by the letter of the column.
Examples:
p_conditionalFormatting = ConditionalFormatting(
p_formula = '$Y#row# = 2',
p_differentialStyle = openpyxl.styles.differential.DifferentialStyle(
fill = openpyxl.styles.PatternFill(
bgColor = 'D3D3D3'
)
)
)
p_tableStyleInfo (openpyxl.worksheet.table.TableStyleInfo): a style to be applied to this table. Defaults to None.
Notes:
Will not be applied to summaries, if any.
Examples:
p_tableStyleInfo = openpyxl.worksheet.table.TableStyleInfo(
name = 'TableStyleMedium23',
showFirstColumn = True,
showLastColumn = True,
showRowStripes = True,
showColumnStripes = False
)
p_withFilters (bool): if the table must contain auto-filters.
Yields:
int: Every 1000 lines inserted into the table, yields actual line number.
Raises:
Spartacus.Report.Exception: custom exceptions occurred in this script.
"""
if not isinstance(p_workSheet, openpyxl.worksheet.worksheet.Worksheet):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_workSheet" must be of type "openpyxl.worksheet.worksheet.Worksheet".')
if not isinstance(p_headerDict, collections.OrderedDict):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_headerDict" must be of type "collections.OrderedDict".')
if not isinstance(p_startColumn, int):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startColumn" must be of type "int".')
if p_startColumn < 1:
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startColumn" must be a positive integer.')
if not isinstance(p_startRow, int):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startRow" must be of type "int".')
if p_startRow < 1:
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_startRow" must be a positive integer.')
if p_headerHeight is not None and not isinstance(p_headerHeight, int) and not isinstance(p_headerHeight, float):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_headerHeight" must be None or of type "int" or "float".')
if not isinstance(p_data, Spartacus.Database.DataTable):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_data" must be of type "Spartacus.Database.DataTable".')
if not isinstance(p_mainTable, bool):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_mainTable" must be of type "bool".')
if p_conditionalFormatting is not None and not isinstance(p_conditionalFormatting, ConditionalFormatting):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_conditionalFormatting" must be None or of type "Spartacus.Report.ConditionalFormatting".')
if p_tableStyleInfo is not None and not isinstance(p_tableStyleInfo, openpyxl.worksheet.table.TableStyleInfo):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_tableStyleInfo" must be None or of type "openpyxl.worksheet.table.TableStyleInfo".')
if p_withFilters is not None and not isinstance(p_withFilters, bool):
raise Spartacus.Report.Exception('Error during execution of method "Static.AddTable": Parameter "p_withFilters" must be None or of type "bool".')
#Format Header
if p_headerHeight is not None:
p_workSheet.row_dimensions[p_startRow].height = p_headerHeight
v_headerList = list(p_headerDict.keys())
for i in range(len(v_headerList)):
v_header = p_headerDict[v_headerList[i]]
v_letter = openpyxl.utils.get_column_letter(i + p_startColumn)
v_cell = p_workSheet['{0}{1}'.format(v_letter, p_startRow)]
v_cell.value = v_header.name
if p_mainTable:
p_workSheet.column_dimensions[v_letter].width = v_header.width
p_workSheet.column_dimensions[v_letter].hidden = v_header.hidden
if v_header.comment is not None:
v_cell.comment = v_header.comment
if v_header.border is not None:
v_cell.border = v_header.border
if v_header.font is not None:
v_cell.font = v_header.font
if v_header.fill is not None:
v_cell.fill = v_header.fill
if v_header.alignment is not None:
v_cell.alignment = v_header.alignment
if p_mainTable:
p_workSheet.freeze_panes = 'A{0}'.format(p_startRow + 1)
        #Pattern for "#column_<name>#" placeholders used in formula fields
v_pattern = re.compile(r'#column_[^\n\r#]*#')
v_line = 0
#Fill content
for v_row in p_data.Rows:
v_line += 1
for i in range(len(v_headerList)):
v_headerData = p_headerDict[v_headerList[i]].data
v_letter = openpyxl.utils.get_column_letter(i + p_startColumn)
                v_cell = p_workSheet['{0}{1}'.format(v_letter, v_line + p_startRow)] #Offset by p_startRow to skip the report header rows
if v_headerData.border is not None:
v_cell.border = v_headerData.border
if v_headerData.font is not None:
v_cell.font = v_headerData.font
if v_headerData.fill is not None:
v_cell.fill = v_headerData.fill
if v_headerData.alignment is not None:
v_cell.alignment = v_headerData.alignment
if v_headerData.type == 'int':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = int(v_row[v_headerList[i]])
                        except Exception: #int() may raise TypeError/ValueError; fall back to the raw value
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '0'
elif v_headerData.type == 'float':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = float(v_row[v_headerList[i]])
                        except Exception:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '#,##0.00'
elif v_headerData.type == 'float4':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = float(v_row[v_headerList[i]])
                        except Exception:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '#,##0.0000'
elif v_headerData.type == 'percent':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = float(v_row[v_headerList[i]])
                        except Exception:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = '0.00%'
elif v_headerData.type == 'date':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
v_cell.number_format = 'DD/MM/YYYY'
elif v_headerData.type == 'str':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
elif v_headerData.type == 'bool':
v_key = str(v_row[v_headerList[i]])
if v_key in v_headerData.valueMapping:
v_cell.value = v_headerData.valueMapping[v_key]
else:
try:
v_cell.value = bool(v_row[v_headerList[i]]) if v_row[v_headerList[i]] is not None and str(v_row[v_headerList[i]]).strip() != '' else ''
                        except Exception:
v_cell.value = v_row[v_headerList[i]] if v_row[v_headerList[i]] is not None else ''
                elif v_headerData.type == 'int_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '0'
elif v_headerData.type == 'float_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '#,##0.00'
elif v_headerData.type == 'float4_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '#,##0.0000'
elif v_headerData.type == 'percent_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = '0.00%'
elif v_headerData.type == 'date_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
v_cell.number_format = 'DD/MM/YYYY'
elif v_headerData.type == 'str_formula':
v_value = v_row[v_headerList[i]].replace('#row#', str(p_startRow + v_line))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell.value = v_value
if v_line % 1000 == 0:
yield v_line
v_lastLine = len(p_data.Rows) + p_startRow
#Apply conditional formatting, if any
if p_conditionalFormatting is not None:
v_startLetter = openpyxl.utils.get_column_letter(p_startColumn)
v_finalLetter = openpyxl.utils.get_column_letter(len(v_headerList) + p_startColumn - 1)
v_formula = p_conditionalFormatting.formula.replace('#row#', str(p_startRow + 1))
v_match = re.search(v_pattern, v_formula)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_formula[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_formula = v_formula[:v_start] + v_matchColumn + v_formula[v_end:]
v_match = re.search(v_pattern, v_formula)
v_rule = openpyxl.formatting.rule.Rule(
type = 'expression',
formula = [v_formula],
dxf = p_conditionalFormatting.differentialStyle
)
p_workSheet.conditional_formatting.add(
'{0}{1}:{2}{3}'.format(v_startLetter, p_startRow + 1, v_finalLetter, v_lastLine),
v_rule
)
#Build Summary
for i in range(len(v_headerList)):
v_headerSummaryList = p_headerDict[v_headerList[i]].summaryList
for v_headerSummary in v_headerSummaryList:
v_letter = openpyxl.utils.get_column_letter(i + p_startColumn)
v_index = p_startRow - 1
if v_headerSummary.index < 0:
v_index = p_startRow + v_headerSummary.index
elif v_headerSummary.index > 0:
v_index = v_lastLine + v_headerSummary.index
v_value = v_headerSummary.function.replace('#column#', v_letter).replace('#start_row#', str(p_startRow + 1)).replace('#end_row#', str(v_lastLine))
v_match = re.search(v_pattern, v_value)
while v_match is not None:
v_start = v_match.start()
v_end = v_match.end()
v_matchColumn = openpyxl.utils.get_column_letter(p_startColumn + v_headerList.index(v_value[v_start + 8 : v_end - 1])) #Discard starting #column_ and ending # in match
v_value = v_value[:v_start] + v_matchColumn + v_value[v_end:]
v_match = re.search(v_pattern, v_value)
v_cell = p_workSheet['{0}{1}'.format(v_letter, v_index)]
v_cell.value = v_value
if v_headerSummary.border is not None:
v_cell.border = v_headerSummary.border
if v_headerSummary.font is not None:
v_cell.font = v_headerSummary.font
if v_headerSummary.fill is not None:
v_cell.fill = v_headerSummary.fill
if v_headerSummary.type == 'int':
v_cell.number_format = '0'
elif v_headerSummary.type == 'float':
v_cell.number_format = '#,##0.00'
elif v_headerSummary.type == 'float4':
v_cell.number_format = '#,##0.0000'
elif v_headerSummary.type == 'percent':
v_cell.number_format = '0.00%'
#Create a new table and add it to worksheet
        v_name = 'Table_{0}_{1}'.format(p_workSheet.title.replace(' ', ''), len(p_workSheet._tables) + 1) #Excel requires a unique displayName for each table in the workbook.
        v_name = ''.join([c for c in v_name if c.isalnum()]) #Excel only accepts alphanumeric characters in table names.
v_table = openpyxl.worksheet.table.Table(
displayName = v_name,
ref = '{0}{1}:{2}{3}'.format(
openpyxl.utils.get_column_letter(p_startColumn),
p_startRow,
openpyxl.utils.get_column_letter(p_startColumn + len(v_headerList) - 1),
v_lastLine
)
)
if p_tableStyleInfo is not None:
v_table.tableStyleInfo = p_tableStyleInfo
if not p_withFilters:
v_table.headerRowCount = 0
        p_workSheet.add_table(v_table)
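The formula branches above all share one mechanism: every `#row#` placeholder is replaced by the target row number, and every `#column_<name>#` placeholder is replaced by the Excel letter of the named header column (slicing off the 8-character `#column_` prefix and the trailing `#`). A stdlib-only sketch of that expansion loop, where `column_letter` is a simplified stand-in for `openpyxl.utils.get_column_letter`:

```python
import re

def column_letter(n):
    #Simplified stand-in for openpyxl.utils.get_column_letter (1 -> 'A', 27 -> 'AA').
    letters = ''
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = chr(ord('A') + rem) + letters
    return letters

def expand_placeholders(value, row_number, header_list, start_column=1):
    """Expand #row# and #column_<name># placeholders the way AddTable does."""
    pattern = re.compile(r'#column_[^\n\r#]*#')
    value = value.replace('#row#', str(row_number))
    match = re.search(pattern, value)
    while match is not None:
        start, end = match.start(), match.end()
        #Discard starting '#column_' (8 chars) and ending '#' in the match.
        name = value[start + 8:end - 1]
        letter = column_letter(start_column + header_list.index(name))
        value = value[:start] + letter + value[end:]
        match = re.search(pattern, value)
    return value

#With headers ['price', 'qty'] starting at column 1, row 5:
#'=#column_price##row#*#column_qty##row#' expands to '=A5*B5'.
```

The same expansion is reused for conditional-formatting formulas and summary functions, which is why all three loops in the method look identical.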
] | Insert a table in a given worksheet.
Args:
p_workSheet (openpyxl.worksheet.worksheet.Worksheet): the worksheet where the table will be inserted. Defaults to None.
p_headerDict (collections.OrderedDict): an ordered dict that contains table header columns.
Notes:
Each entry is in the following form:
Key: Name of the column to be searched in p_data.Columns.
Value: Spartacus.Report.Field instance.
Examples:
p_headerDict = collections.OrderedDict([
(
'field_one',
Field(
p_name = 'Code',
p_width = 15,
p_data = Data(
p_type = 'int'
)
)
),
(
'field_two',
Field(
p_name = 'Result',
p_width = 15,
p_data = Data(
p_type = 'int_formula'
)
)
)
])
p_startColumn (int): the column number where the table should start. Defaults to 1.
Notes:
Must be a positive integer.
p_startRow (int): the row number where the table should start. Defaults to 1.
Notes:
Must be a positive integer.
p_headerHeight (float): the header row height in pt. Defaults to None.
Notes:
Must be a non-negative number or None.
p_data (Spartacus.Database.DataTable): the datatable that contains the data that will be inserted into the excel table. Defaults to None.
Notes:
If the corresponding column data type in p_headerDict is some kind of formula, then below wildcards can be used:
#row#: the current row.
#column_columname#: will be replaced by the letter of the column.
Examples:
p_data = Spartacus.Database.DataTable that contains:
Columns: ['field_one', 'field_two'].
Rows: [
[
'HAHAHA',
'=if(#column_field_one##row# = "HAHAHA", 1, 0)'
],
[
'HEHEHE',
'=if(#column_field_one##row# = "HAHAHA", 1, 0)'
]
]
p_mainTable (bool): if this table is the main table of the current worksheet. Defaults to False.
Notes:
If it's the main table, then it will consider p_width, p_hidden and freeze panes in the first table row. The 3 parameters are ignored otherwise.
p_conditionalFormatting (Spartacus.Report.ConditionalFormatting): a conditional formatting that should be applied to data rows. Defaults to None.
Notes:
Will be applied to all data rows of this table.
A wildcard can be used and be replaced properly:
#row#: the current data row.
#column_columname#: will be replaced by the letter of the column.
Examples:
p_conditionalFormatting = ConditionalFormatting(
p_formula = '$Y#row# = 2',
p_differentialStyle = openpyxl.styles.differential.DifferentialStyle(
fill = openpyxl.styles.PatternFill(
bgColor = 'D3D3D3'
)
)
)
p_tableStyleInfo (openpyxl.worksheet.table.TableStyleInfo): a style to be applied to this table. Defaults to None.
Notes:
Will not be applied to summaries, if any.
Examples:
p_tableStyleInfo = openpyxl.worksheet.table.TableStyleInfo(
name = 'TableStyleMedium23',
showFirstColumn = True,
showLastColumn = True,
showRowStripes = True,
showColumnStripes = False
)
p_withFilters (bool): if the table must contain auto-filters.
Yields:
int: Every 1000 lines inserted into the table, yields the actual line number.
Raises:
Spartacus.Report.Exception: custom exceptions raised in this script. | [
"Insert",
"a",
"table",
"in",
"a",
"given",
"worksheet",
"."
] | 622261e9c5d05c2e385d81171acb910c63aa1669 | https://github.com/wind39/spartacus/blob/622261e9c5d05c2e385d81171acb910c63aa1669/Spartacus/Report.py#L607-L1054 | train |
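The token stream in the row above encodes a formula-wildcard pass: each `#column_<header>#` placeholder is rewritten to the spreadsheet letter of that header's column via `re.search` and `openpyxl.utils.get_column_letter`. A minimal stand-alone sketch of the same idea follows; the function names are illustrative, and a small stdlib `column_letter` helper stands in for the openpyxl call so the sketch is self-contained:

```python
import re

def column_letter(index):
    """1-based column index -> spreadsheet letter (A, B, ..., AA, ...)."""
    letters = ""
    while index > 0:
        index, remainder = divmod(index - 1, 26)
        letters = chr(ord("A") + remainder) + letters
    return letters

def resolve_column_wildcards(formula, header_list, start_column=1):
    """Replace each '#column_<header>#' wildcard with its column letter."""
    pattern = r"#column_[^#]+#"
    match = re.search(pattern, formula)
    while match is not None:
        # Discard the leading '#column_' (8 chars) and the trailing '#',
        # mirroring the `v_start + 8 : v_end - 1` slice in the source.
        header = formula[match.start() + 8:match.end() - 1]
        letter = column_letter(start_column + header_list.index(header))
        formula = formula[:match.start()] + letter + formula[match.end():]
        match = re.search(pattern, formula)
    return formula
```

The `while`/re-search loop terminates because every replacement removes one wildcard, and `#row#` placeholders are left for a later pass, just as in the source.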
hydpy-dev/hydpy | hydpy/core/parametertools.py | get_controlfileheader | def get_controlfileheader(
model: Union[str, 'modeltools.Model'],
parameterstep: timetools.PeriodConstrArg = None,
simulationstep: timetools.PeriodConstrArg = None) -> str:
"""Return the header of a regular or auxiliary parameter control file.
The header contains the default coding information, the import command
for the given model and the actual parameter and simulation step sizes.
The first example shows that, if you pass the model argument as a
string, you have to take care that this string makes sense:
>>> from hydpy.core.parametertools import get_controlfileheader, Parameter
>>> from hydpy import Period, prepare_model, pub, Timegrids, Timegrid
>>> print(get_controlfileheader(model='no model class',
... parameterstep='-1h',
... simulationstep=Period('1h')))
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.no model class import *
<BLANKLINE>
simulationstep('1h')
parameterstep('-1h')
<BLANKLINE>
<BLANKLINE>
The second example shows the safer option to pass the proper model
object. It also shows that function |get_controlfileheader| tries
to gain the parameter and simulation step sizes from the global
|Timegrids| object contained in the module |pub| when necessary:
>>> model = prepare_model('lland_v1')
>>> _ = Parameter.parameterstep('1d')
>>> pub.timegrids = '2000.01.01', '2001.01.01', '1h'
>>> print(get_controlfileheader(model=model))
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.lland_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('1d')
<BLANKLINE>
<BLANKLINE>
"""
with Parameter.parameterstep(parameterstep):
if simulationstep is None:
simulationstep = Parameter.simulationstep
else:
simulationstep = timetools.Period(simulationstep)
return (f"# -*- coding: utf-8 -*-\n\n"
f"from hydpy.models.{model} import *\n\n"
f"simulationstep('{simulationstep}')\n"
f"parameterstep('{Parameter.parameterstep}')\n\n") | python | def get_controlfileheader(
model: Union[str, 'modeltools.Model'],
parameterstep: timetools.PeriodConstrArg = None,
simulationstep: timetools.PeriodConstrArg = None) -> str:
"""Return the header of a regular or auxiliary parameter control file.
The header contains the default coding information, the import command
for the given model and the actual parameter and simulation step sizes.
The first example shows that, if you pass the model argument as a
string, you have to take care that this string makes sense:
>>> from hydpy.core.parametertools import get_controlfileheader, Parameter
>>> from hydpy import Period, prepare_model, pub, Timegrids, Timegrid
>>> print(get_controlfileheader(model='no model class',
... parameterstep='-1h',
... simulationstep=Period('1h')))
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.no model class import *
<BLANKLINE>
simulationstep('1h')
parameterstep('-1h')
<BLANKLINE>
<BLANKLINE>
The second example shows the safer option to pass the proper model
object. It also shows that function |get_controlfileheader| tries
to gain the parameter and simulation step sizes from the global
|Timegrids| object contained in the module |pub| when necessary:
>>> model = prepare_model('lland_v1')
>>> _ = Parameter.parameterstep('1d')
>>> pub.timegrids = '2000.01.01', '2001.01.01', '1h'
>>> print(get_controlfileheader(model=model))
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.lland_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('1d')
<BLANKLINE>
<BLANKLINE>
"""
with Parameter.parameterstep(parameterstep):
if simulationstep is None:
simulationstep = Parameter.simulationstep
else:
simulationstep = timetools.Period(simulationstep)
return (f"# -*- coding: utf-8 -*-\n\n"
f"from hydpy.models.{model} import *\n\n"
f"simulationstep('{simulationstep}')\n"
f"parameterstep('{Parameter.parameterstep}')\n\n") | [
"def",
"get_controlfileheader",
"(",
"model",
":",
"Union",
"[",
"str",
",",
"'modeltools.Model'",
"]",
",",
"parameterstep",
":",
"timetools",
".",
"PeriodConstrArg",
"=",
"None",
",",
"simulationstep",
":",
"timetools",
".",
"PeriodConstrArg",
"=",
"None",
")",
"->",
"str",
":",
"with",
"Parameter",
".",
"parameterstep",
"(",
"parameterstep",
")",
":",
"if",
"simulationstep",
"is",
"None",
":",
"simulationstep",
"=",
"Parameter",
".",
"simulationstep",
"else",
":",
"simulationstep",
"=",
"timetools",
".",
"Period",
"(",
"simulationstep",
")",
"return",
"(",
"f\"# -*- coding: utf-8 -*-\\n\\n\"",
"f\"from hydpy.models.{model} import *\\n\\n\"",
"f\"simulationstep('{simulationstep}')\\n\"",
"f\"parameterstep('{Parameter.parameterstep}')\\n\\n\"",
")"
] | Return the header of a regular or auxiliary parameter control file.
The header contains the default coding information, the import command
for the given model and the actual parameter and simulation step sizes.
The first example shows that, if you pass the model argument as a
string, you have to take care that this string makes sense:
>>> from hydpy.core.parametertools import get_controlfileheader, Parameter
>>> from hydpy import Period, prepare_model, pub, Timegrids, Timegrid
>>> print(get_controlfileheader(model='no model class',
... parameterstep='-1h',
... simulationstep=Period('1h')))
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.no model class import *
<BLANKLINE>
simulationstep('1h')
parameterstep('-1h')
<BLANKLINE>
<BLANKLINE>
The second example shows the safer option to pass the proper model
object. It also shows that function |get_controlfileheader| tries
to gain the parameter and simulation step sizes from the global
|Timegrids| object contained in the module |pub| when necessary:
>>> model = prepare_model('lland_v1')
>>> _ = Parameter.parameterstep('1d')
>>> pub.timegrids = '2000.01.01', '2001.01.01', '1h'
>>> print(get_controlfileheader(model=model))
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.lland_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('1d')
<BLANKLINE>
<BLANKLINE> | [
"Return",
"the",
"header",
"of",
"a",
"regular",
"or",
"auxiliary",
"parameter",
"control",
"file",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L28-L80 | train |
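The `get_controlfileheader` row above assembles the control-file header from f-strings. Just that string assembly can be sketched independently of HydPy; the helper name below is made up, and the real function additionally resolves a missing simulation step from `Parameter.simulationstep` and `pub.timegrids`:

```python
# Stand-alone sketch of the header assembly done by get_controlfileheader.
def control_file_header(model, parameterstep, simulationstep):
    return (
        "# -*- coding: utf-8 -*-\n\n"
        f"from hydpy.models.{model} import *\n\n"
        f"simulationstep('{simulationstep}')\n"
        f"parameterstep('{parameterstep}')\n\n"
    )
```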
hydpy-dev/hydpy | hydpy/core/parametertools.py | Constants._prepare_docstrings | def _prepare_docstrings(self, frame):
"""Assign docstrings to the constants handled by |Constants|
to make them available in the interactive mode of Python."""
if config.USEAUTODOC:
filename = inspect.getsourcefile(frame)
with open(filename) as file_:
sources = file_.read().split('"""')
for code, doc in zip(sources[::2], sources[1::2]):
code = code.strip()
key = code.split('\n')[-1].split()[0]
value = self.get(key)
if value:
value.__doc__ = doc | python | def _prepare_docstrings(self, frame):
"""Assign docstrings to the constants handled by |Constants|
to make them available in the interactive mode of Python."""
if config.USEAUTODOC:
filename = inspect.getsourcefile(frame)
with open(filename) as file_:
sources = file_.read().split('"""')
for code, doc in zip(sources[::2], sources[1::2]):
code = code.strip()
key = code.split('\n')[-1].split()[0]
value = self.get(key)
if value:
value.__doc__ = doc | [
"def",
"_prepare_docstrings",
"(",
"self",
",",
"frame",
")",
":",
"if",
"config",
".",
"USEAUTODOC",
":",
"filename",
"=",
"inspect",
".",
"getsourcefile",
"(",
"frame",
")",
"with",
"open",
"(",
"filename",
")",
"as",
"file_",
":",
"sources",
"=",
"file_",
".",
"read",
"(",
")",
".",
"split",
"(",
"'\"\"\"'",
")",
"for",
"code",
",",
"doc",
"in",
"zip",
"(",
"sources",
"[",
":",
":",
"2",
"]",
",",
"sources",
"[",
"1",
":",
":",
"2",
"]",
")",
":",
"code",
"=",
"code",
".",
"strip",
"(",
")",
"key",
"=",
"code",
".",
"split",
"(",
"'\\n'",
")",
"[",
"-",
"1",
"]",
".",
"split",
"(",
")",
"[",
"0",
"]",
"value",
"=",
"self",
".",
"get",
"(",
"key",
")",
"if",
"value",
":",
"value",
".",
"__doc__",
"=",
"doc"
] | Assign docstrings to the constants handled by |Constants|
to make them available in the interactive mode of Python. | [
"Assign",
"docstrings",
"to",
"the",
"constants",
"handled",
"by",
"|Constants|",
"to",
"make",
"them",
"available",
"in",
"the",
"interactive",
"mode",
"of",
"Python",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L106-L118 | train |
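`Constants._prepare_docstrings` above relies on a pairing trick: splitting a module's source on `"""` yields alternating code and docstring segments, so `zip(sources[::2], sources[1::2])` matches each docstring with the code preceding it, and the constant's name is the first word of the last code line. A self-contained illustration (the sample source and constant names are invented):

```python
# Two constants, each followed by a triple-quoted docstring.
SOURCE = 'FOO = 1\n"""Docs for FOO."""\nBAR = 2\n"""Docs for BAR."""\n'

def pair_docstrings(source):
    """Map each constant name to the docstring that follows it."""
    docs = {}
    parts = source.split('"""')
    # Even slices are code, odd slices are the docstrings between them.
    for code, doc in zip(parts[::2], parts[1::2]):
        key = code.strip().split("\n")[-1].split()[0]
        docs[key] = doc
    return docs
```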
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameters.update | def update(self) -> None:
"""Call method |Parameter.update| of all "secondary" parameters.
Directly after initialisation, neither the primary (`control`)
parameters nor the secondary (`derived`) parameters of
application model |hstream_v1| are ready for usage:
>>> from hydpy.models.hstream_v1 import *
>>> parameterstep('1d')
>>> simulationstep('1d')
>>> derived
nmbsegments(?)
c1(?)
c3(?)
c2(?)
Trying to update the values of the secondary parameters while the
primary ones are still not defined, raises errors like the following:
>>> model.parameters.update()
Traceback (most recent call last):
...
AttributeError: While trying to update parameter ``nmbsegments` \
of element `?``, the following error occurred: For variable `lag`, \
no value has been defined so far.
With proper values both for parameter |hstream_control.Lag| and
|hstream_control.Damp|, updating the derived parameters succeeds:
>>> lag(0.0)
>>> damp(0.0)
>>> model.parameters.update()
>>> derived
nmbsegments(0)
c1(0.0)
c3(0.0)
c2(1.0)
"""
for subpars in self.secondary_subpars:
for par in subpars:
try:
par.update()
except BaseException:
objecttools.augment_excmessage(
f'While trying to update parameter '
f'`{objecttools.elementphrase(par)}`') | python | def update(self) -> None:
"""Call method |Parameter.update| of all "secondary" parameters.
Directly after initialisation, neither the primary (`control`)
parameters nor the secondary (`derived`) parameters of
application model |hstream_v1| are ready for usage:
>>> from hydpy.models.hstream_v1 import *
>>> parameterstep('1d')
>>> simulationstep('1d')
>>> derived
nmbsegments(?)
c1(?)
c3(?)
c2(?)
Trying to update the values of the secondary parameters while the
primary ones are still not defined, raises errors like the following:
>>> model.parameters.update()
Traceback (most recent call last):
...
AttributeError: While trying to update parameter ``nmbsegments` \
of element `?``, the following error occurred: For variable `lag`, \
no value has been defined so far.
With proper values both for parameter |hstream_control.Lag| and
|hstream_control.Damp|, updating the derived parameters succeeds:
>>> lag(0.0)
>>> damp(0.0)
>>> model.parameters.update()
>>> derived
nmbsegments(0)
c1(0.0)
c3(0.0)
c2(1.0)
"""
for subpars in self.secondary_subpars:
for par in subpars:
try:
par.update()
except BaseException:
objecttools.augment_excmessage(
f'While trying to update parameter '
f'`{objecttools.elementphrase(par)}`') | [
"def",
"update",
"(",
"self",
")",
"->",
"None",
":",
"for",
"subpars",
"in",
"self",
".",
"secondary_subpars",
":",
"for",
"par",
"in",
"subpars",
":",
"try",
":",
"par",
".",
"update",
"(",
")",
"except",
"BaseException",
":",
"objecttools",
".",
"augment_excmessage",
"(",
"f'While trying to update parameter '",
"f'`{objecttools.elementphrase(par)}`'",
")"
] | Call method |Parameter.update| of all "secondary" parameters.
Directly after initialisation, neither the primary (`control`)
parameters nor the secondary (`derived`) parameters of
application model |hstream_v1| are ready for usage:
>>> from hydpy.models.hstream_v1 import *
>>> parameterstep('1d')
>>> simulationstep('1d')
>>> derived
nmbsegments(?)
c1(?)
c3(?)
c2(?)
Trying to update the values of the secondary parameters while the
primary ones are still not defined, raises errors like the following:
>>> model.parameters.update()
Traceback (most recent call last):
...
AttributeError: While trying to update parameter ``nmbsegments` \
of element `?``, the following error occurred: For variable `lag`, \
no value has been defined so far.
With proper values both for parameter |hstream_control.Lag| and
|hstream_control.Damp|, updating the derived parameters succeeds:
>>> lag(0.0)
>>> damp(0.0)
>>> model.parameters.update()
>>> derived
nmbsegments(0)
c1(0.0)
c3(0.0)
c2(1.0) | [
"Call",
"method",
"|Parameter",
".",
"update|",
"of",
"all",
"secondary",
"parameters",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L143-L188 | train |
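`Parameters.update` above wraps each parameter update in a `try`/`except` and re-raises with an extended message via `objecttools.augment_excmessage`. A rough stand-in for that pattern, with function bodies that are assumptions rather than HydPy's actual implementation:

```python
import sys

def augment_excmessage(prefix):
    # Re-raise the active exception with a prefixed message, keeping its type.
    exc_type, exc, _ = sys.exc_info()
    raise exc_type(f"{prefix}, the following error occurred: {exc}") from exc

def update_all(updaters):
    # Stand-in for the loop over secondary parameters.
    for update in updaters:
        try:
            update()
        except BaseException:
            augment_excmessage(
                f"While trying to update parameter `{update.__name__}`")
```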
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameters.save_controls | def save_controls(self, filepath: Optional[str] = None,
parameterstep: timetools.PeriodConstrArg = None,
simulationstep: timetools.PeriodConstrArg = None,
auxfiler: 'auxfiletools.Auxfiler' = None):
"""Write the control parameters to file.
Usually, a control file consists of a header (see the documentation
on the method |get_controlfileheader|) and the string representations
of the individual |Parameter| objects handled by the `control`
|SubParameters| object.
The main functionality of method |Parameters.save_controls| is
demonstrated in the documentation on the method |HydPy.save_controls|
of class |HydPy|, which one would apply to write the parameter
information of complete *HydPy* projects. However, to call
|Parameters.save_controls| on individual |Parameters| objects
offers the advantage to choose an arbitrary file path, as shown
in the following example:
>>> from hydpy.models.hstream_v1 import *
>>> parameterstep('1d')
>>> simulationstep('1h')
>>> lag(1.0)
>>> damp(0.5)
>>> from hydpy import Open
>>> with Open():
... model.parameters.save_controls('otherdir/otherfile.py')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
otherdir/otherfile.py
-------------------------------------
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('1d')
<BLANKLINE>
lag(1.0)
damp(0.5)
<BLANKLINE>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Without a given file path and a proper project configuration,
method |Parameters.save_controls| raises the following error:
>>> model.parameters.save_controls()
Traceback (most recent call last):
...
RuntimeError: To save the control parameters of a model to a file, \
its filename must be known. This can be done, by passing a filename to \
function `save_controls` directly. But in complete HydPy applications, \
it is usally assumed to be consistent with the name of the element \
handling the model.
"""
if self.control:
variable2auxfile = getattr(auxfiler, str(self.model), None)
lines = [get_controlfileheader(
self.model, parameterstep, simulationstep)]
with Parameter.parameterstep(parameterstep):
for par in self.control:
if variable2auxfile:
auxfilename = variable2auxfile.get_filename(par)
if auxfilename:
lines.append(
f"{par.name}(auxfile='{auxfilename}')\n")
continue
lines.append(repr(par) + '\n')
text = ''.join(lines)
if filepath:
with open(filepath, mode='w', encoding='utf-8') as controlfile:
controlfile.write(text)
else:
filename = objecttools.devicename(self)
if filename == '?':
raise RuntimeError(
'To save the control parameters of a model to a file, '
'its filename must be known. This can be done, by '
'passing a filename to function `save_controls` '
'directly. But in complete HydPy applications, it is '
'usually assumed to be consistent with the name of the '
'element handling the model.')
hydpy.pub.controlmanager.save_file(filename, text) | python | def save_controls(self, filepath: Optional[str] = None,
parameterstep: timetools.PeriodConstrArg = None,
simulationstep: timetools.PeriodConstrArg = None,
auxfiler: 'auxfiletools.Auxfiler' = None):
"""Write the control parameters to file.
Usually, a control file consists of a header (see the documentation
on the method |get_controlfileheader|) and the string representations
of the individual |Parameter| objects handled by the `control`
|SubParameters| object.
The main functionality of method |Parameters.save_controls| is
demonstrated in the documentation on the method |HydPy.save_controls|
of class |HydPy|, which one would apply to write the parameter
information of complete *HydPy* projects. However, to call
|Parameters.save_controls| on individual |Parameters| objects
offers the advantage to choose an arbitrary file path, as shown
in the following example:
>>> from hydpy.models.hstream_v1 import *
>>> parameterstep('1d')
>>> simulationstep('1h')
>>> lag(1.0)
>>> damp(0.5)
>>> from hydpy import Open
>>> with Open():
... model.parameters.save_controls('otherdir/otherfile.py')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
otherdir/otherfile.py
-------------------------------------
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('1d')
<BLANKLINE>
lag(1.0)
damp(0.5)
<BLANKLINE>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Without a given file path and a proper project configuration,
method |Parameters.save_controls| raises the following error:
>>> model.parameters.save_controls()
Traceback (most recent call last):
...
RuntimeError: To save the control parameters of a model to a file, \
its filename must be known. This can be done, by passing a filename to \
function `save_controls` directly. But in complete HydPy applications, \
it is usally assumed to be consistent with the name of the element \
handling the model.
"""
if self.control:
variable2auxfile = getattr(auxfiler, str(self.model), None)
lines = [get_controlfileheader(
self.model, parameterstep, simulationstep)]
with Parameter.parameterstep(parameterstep):
for par in self.control:
if variable2auxfile:
auxfilename = variable2auxfile.get_filename(par)
if auxfilename:
lines.append(
f"{par.name}(auxfile='{auxfilename}')\n")
continue
lines.append(repr(par) + '\n')
text = ''.join(lines)
if filepath:
with open(filepath, mode='w', encoding='utf-8') as controlfile:
controlfile.write(text)
else:
filename = objecttools.devicename(self)
if filename == '?':
raise RuntimeError(
'To save the control parameters of a model to a file, '
'its filename must be known. This can be done, by '
'passing a filename to function `save_controls` '
'directly. But in complete HydPy applications, it is '
'usually assumed to be consistent with the name of the '
'element handling the model.')
hydpy.pub.controlmanager.save_file(filename, text) | [
"def",
"save_controls",
"(",
"self",
",",
"filepath",
":",
"Optional",
"[",
"str",
"]",
"=",
"None",
",",
"parameterstep",
":",
"timetools",
".",
"PeriodConstrArg",
"=",
"None",
",",
"simulationstep",
":",
"timetools",
".",
"PeriodConstrArg",
"=",
"None",
",",
"auxfiler",
":",
"'auxfiletools.Auxfiler'",
"=",
"None",
")",
":",
"if",
"self",
".",
"control",
":",
"variable2auxfile",
"=",
"getattr",
"(",
"auxfiler",
",",
"str",
"(",
"self",
".",
"model",
")",
",",
"None",
")",
"lines",
"=",
"[",
"get_controlfileheader",
"(",
"self",
".",
"model",
",",
"parameterstep",
",",
"simulationstep",
")",
"]",
"with",
"Parameter",
".",
"parameterstep",
"(",
"parameterstep",
")",
":",
"for",
"par",
"in",
"self",
".",
"control",
":",
"if",
"variable2auxfile",
":",
"auxfilename",
"=",
"variable2auxfile",
".",
"get_filename",
"(",
"par",
")",
"if",
"auxfilename",
":",
"lines",
".",
"append",
"(",
"f\"{par.name}(auxfile='{auxfilename}')\\n\"",
")",
"continue",
"lines",
".",
"append",
"(",
"repr",
"(",
"par",
")",
"+",
"'\\n'",
")",
"text",
"=",
"''",
".",
"join",
"(",
"lines",
")",
"if",
"filepath",
":",
"with",
"open",
"(",
"filepath",
",",
"mode",
"=",
"'w'",
",",
"encoding",
"=",
"'utf-8'",
")",
"as",
"controlfile",
":",
"controlfile",
".",
"write",
"(",
"text",
")",
"else",
":",
"filename",
"=",
"objecttools",
".",
"devicename",
"(",
"self",
")",
"if",
"filename",
"==",
"'?'",
":",
"raise",
"RuntimeError",
"(",
"'To save the control parameters of a model to a file, '",
"'its filename must be known. This can be done, by '",
"'passing a filename to function `save_controls` '",
"'directly. But in complete HydPy applications, it is '",
"'usually assumed to be consistent with the name of the '",
"'element handling the model.'",
")",
"hydpy",
".",
"pub",
".",
"controlmanager",
".",
"save_file",
"(",
"filename",
",",
"text",
")"
] | Write the control parameters to file.
Usually, a control file consists of a header (see the documentation
on the method |get_controlfileheader|) and the string representations
of the individual |Parameter| objects handled by the `control`
|SubParameters| object.
The main functionality of method |Parameters.save_controls| is
demonstrated in the documentation on the method |HydPy.save_controls|
of class |HydPy|, which one would apply to write the parameter
information of complete *HydPy* projects. However, to call
|Parameters.save_controls| on individual |Parameters| objects
offers the advantage to choose an arbitrary file path, as shown
in the following example:
>>> from hydpy.models.hstream_v1 import *
>>> parameterstep('1d')
>>> simulationstep('1h')
>>> lag(1.0)
>>> damp(0.5)
>>> from hydpy import Open
>>> with Open():
... model.parameters.save_controls('otherdir/otherfile.py')
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
otherdir/otherfile.py
-------------------------------------
# -*- coding: utf-8 -*-
<BLANKLINE>
from hydpy.models.hstream_v1 import *
<BLANKLINE>
simulationstep('1h')
parameterstep('1d')
<BLANKLINE>
lag(1.0)
damp(0.5)
<BLANKLINE>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Without a given file path and a proper project configuration,
method |Parameters.save_controls| raises the following error:
>>> model.parameters.save_controls()
Traceback (most recent call last):
...
RuntimeError: To save the control parameters of a model to a file, \
its filename must be known. This can be done, by passing a filename to \
function `save_controls` directly. But in complete HydPy applications, \
it is usually assumed to be consistent with the name of the element \
handling the model. | [
"Write",
"the",
"control",
"parameters",
"to",
"file",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L190-L272 | train |
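`Parameters.save_controls` above emits either an `auxfile` reference or the parameter's `repr`, line by line, and joins the result into the control-file text. That loop can be sketched independently of HydPy; every name below is a stand-in:

```python
def assemble_control_file(header, parameters, aux_filenames):
    """Build control-file text from (name, value) pairs.

    Parameters covered by an auxiliary file become auxfile references,
    the rest are written via their repr.
    """
    lines = [header]
    for name, value in parameters:
        auxfilename = aux_filenames.get(name)
        if auxfilename:
            lines.append(f"{name}(auxfile='{auxfilename}')\n")
        else:
            lines.append(f"{name}({value!r})\n")
    return "".join(lines)
```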
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameter._get_values_from_auxiliaryfile | def _get_values_from_auxiliaryfile(self, auxfile):
"""Try to return the parameter values from the auxiliary control file
with the given name.
Things are a little complicated here. To understand this method, you
should first take a look at the |parameterstep| function.
"""
try:
frame = inspect.currentframe().f_back.f_back
while frame:
namespace = frame.f_locals
try:
subnamespace = {'model': namespace['model'],
'focus': self}
break
except KeyError:
frame = frame.f_back
else:
raise RuntimeError(
'Cannot determine the corresponding model. Use the '
'`auxfile` keyword in usual parameter control files only.')
filetools.ControlManager.read2dict(auxfile, subnamespace)
try:
subself = subnamespace[self.name]
except KeyError:
raise RuntimeError(
f'The selected file does not define value(s) for '
f'parameter {self.name}')
return subself.values
except BaseException:
objecttools.augment_excmessage(
f'While trying to extract information for parameter '
f'`{self.name}` from file `{auxfile}`') | python | def _get_values_from_auxiliaryfile(self, auxfile):
"""Try to return the parameter values from the auxiliary control file
with the given name.
Things are a little complicated here. To understand this method, you
should first take a look at the |parameterstep| function.
"""
try:
frame = inspect.currentframe().f_back.f_back
while frame:
namespace = frame.f_locals
try:
subnamespace = {'model': namespace['model'],
'focus': self}
break
except KeyError:
frame = frame.f_back
else:
raise RuntimeError(
'Cannot determine the corresponding model. Use the '
'`auxfile` keyword in usual parameter control files only.')
filetools.ControlManager.read2dict(auxfile, subnamespace)
try:
subself = subnamespace[self.name]
except KeyError:
raise RuntimeError(
f'The selected file does not define value(s) for '
f'parameter {self.name}')
return subself.values
except BaseException:
objecttools.augment_excmessage(
f'While trying to extract information for parameter '
f'`{self.name}` from file `{auxfile}`') | [
"def",
"_get_values_from_auxiliaryfile",
"(",
"self",
",",
"auxfile",
")",
":",
"try",
":",
"frame",
"=",
"inspect",
".",
"currentframe",
"(",
")",
".",
"f_back",
".",
"f_back",
"while",
"frame",
":",
"namespace",
"=",
"frame",
".",
"f_locals",
"try",
":",
"subnamespace",
"=",
"{",
"'model'",
":",
"namespace",
"[",
"'model'",
"]",
",",
"'focus'",
":",
"self",
"}",
"break",
"except",
"KeyError",
":",
"frame",
"=",
"frame",
".",
"f_back",
"else",
":",
"raise",
"RuntimeError",
"(",
"'Cannot determine the corresponding model. Use the '",
"'`auxfile` keyword in usual parameter control files only.'",
")",
"filetools",
".",
"ControlManager",
".",
"read2dict",
"(",
"auxfile",
",",
"subnamespace",
")",
"try",
":",
"subself",
"=",
"subnamespace",
"[",
"self",
".",
"name",
"]",
"except",
"KeyError",
":",
"raise",
"RuntimeError",
"(",
"f'The selected file does not define value(s) for '",
"f'parameter {self.name}'",
")",
"return",
"subself",
".",
"values",
"except",
"BaseException",
":",
"objecttools",
".",
"augment_excmessage",
"(",
"f'While trying to extract information for parameter '",
"f'`{self.name}` from file `{auxfile}`'",
")"
] | Try to return the parameter values from the auxiliary control file
with the given name.
Things are a little complicated here. To understand this method, you
should first take a look at the |parameterstep| function. | [
"Try",
"to",
"return",
"the",
"parameter",
"values",
"from",
"the",
"auxiliary",
"control",
"file",
"with",
"the",
"given",
"name",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L920-L952 | train |
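`_get_values_from_auxiliaryfile` above walks the call stack via `frame.f_back` until it finds a frame whose locals define `model`. A hedged sketch of that lookup pattern; the helper and the two demo functions are illustrative only:

```python
import inspect

def find_local(name, frame=None):
    """Climb the call stack until some caller's locals define `name`."""
    frame = frame or inspect.currentframe().f_back
    while frame:
        if name in frame.f_locals:
            return frame.f_locals[name]
        frame = frame.f_back
    raise RuntimeError(f"no calling frame defines a local named {name!r}")

def outer():
    model = "lland_v1"  # found by the lookup two frames down
    return inner()

def inner():
    return find_local("model")
```

As in the source, a `RuntimeError` signals that no caller defined the name, which is why the original restricts the `auxfile` keyword to ordinary control files.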
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameter.initinfo | def initinfo(self) -> Tuple[Union[float, int, bool], bool]:
"""The actual initial value of the given parameter.
Some |Parameter| subclasses define another value for class
attribute `INIT` than |None| to provide a default value.
Let's define a parameter test class and prepare a function for
initialising it and connecting the resulting instance to a
|SubParameters| object:
>>> from hydpy.core.parametertools import Parameter, SubParameters
>>> class Test(Parameter):
... NDIM = 0
... TYPE = float
... TIME = None
... INIT = 2.0
>>> class SubGroup(SubParameters):
... CLASSES = (Test,)
>>> def prepare():
... subpars = SubGroup(None)
... test = Test(subpars)
... test.__hydpy__connect_variable2subgroup__()
... return test
By default, making use of the `INIT` attribute is disabled:
>>> test = prepare()
>>> test
test(?)
Enable it through setting |Options.usedefaultvalues| to |True|:
>>> from hydpy import pub
>>> pub.options.usedefaultvalues = True
>>> test = prepare()
>>> test
test(2.0)
When no `INIT` attribute is defined, enabling
|Options.usedefaultvalues| has no effect, of course:
>>> del Test.INIT
>>> test = prepare()
>>> test
test(?)
For time-dependent parameter values, the `INIT` attribute is assumed
to be related to a |Parameterstep| of one day:
>>> test.parameterstep = '2d'
>>> test.simulationstep = '12h'
>>> Test.INIT = 2.0
>>> Test.TIME = True
>>> test = prepare()
>>> test
test(4.0)
>>> test.value
1.0
"""
init = self.INIT
if (init is not None) and hydpy.pub.options.usedefaultvalues:
with Parameter.parameterstep('1d'):
return self.apply_timefactor(init), True
return variabletools.TYPE2MISSINGVALUE[self.TYPE], False | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L971-L1034 | train |
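The branching that the `initinfo` docstring above describes can be sketched independently of HydPy. The helper below is illustrative only (the names `initinfo`, `MISSING`, and the `timefactor` argument are not part of the HydPy API): when defaults are enabled and `INIT` is defined, the (possibly time-adjusted) default is returned together with `True`; otherwise a missing-value marker and `False`.

```python
import math

# Hypothetical stand-alone sketch of the `initinfo` branching described
# above; names and the `timefactor` argument are illustrative.
MISSING = {float: math.nan, int: -999999, bool: False}

def initinfo(init, type_, use_defaults, timefactor=1.0):
    """Return (initial value, is-default) for a parameter."""
    if init is not None and use_defaults:
        # Time-dependent INIT values refer to a one-day parameter step,
        # so they are scaled by the given time factor.
        return init * timefactor, True
    return MISSING[type_], False
```

For example, `initinfo(2.0, float, True)` yields `(2.0, True)`, while `initinfo(None, float, True)` falls back to the missing-value marker.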
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameter.get_timefactor | def get_timefactor(cls) -> float:
"""Factor to adjust a new value of a time-dependent parameter.
For a time-dependent parameter, its effective value depends on the
simulation step size. Method |Parameter.get_timefactor| returns
the fraction between the current simulation step size and the
current parameter step size.
.. testsetup::
>>> from hydpy import pub
>>> del pub.timegrids
>>> from hydpy.core.parametertools import Parameter
>>> Parameter.simulationstep.delete()
Period()
Method |Parameter.get_timefactor| raises the following error
when time information is not available:
>>> from hydpy.core.parametertools import Parameter
>>> Parameter.get_timefactor()
Traceback (most recent call last):
...
RuntimeError: To calculate the conversion factor for adapting the \
values of the time-dependent parameters, you need to define both a \
parameter and a simulation time step size first.
One can define both time step sizes directly:
>>> _ = Parameter.parameterstep('1d')
>>> _ = Parameter.simulationstep('6h')
>>> Parameter.get_timefactor()
0.25
As usual, the "global" simulation step size of the |Timegrids|
object of module |pub| is preferred:
>>> from hydpy import pub
>>> pub.timegrids = '2000-01-01', '2001-01-01', '12h'
>>> Parameter.get_timefactor()
0.5
"""
try:
parfactor = hydpy.pub.timegrids.parfactor
except RuntimeError:
if not (cls.parameterstep and cls.simulationstep):
raise RuntimeError(
f'To calculate the conversion factor for adapting '
f'the values of the time-dependent parameters, '
f'you need to define both a parameter and a simulation '
f'time step size first.')
else:
date1 = timetools.Date('2000.01.01')
date2 = date1 + cls.simulationstep
parfactor = timetools.Timegrids(timetools.Timegrid(
date1, date2, cls.simulationstep)).parfactor
return parfactor(cls.parameterstep) | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1037-L1093 | train |
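The factor computed above is simply the fraction between the simulation step size and the parameter step size, as the doctests show (6 h against 1 d gives 0.25, 12 h gives 0.5). A minimal sketch of that relation, using `datetime.timedelta` instead of HydPy's `Period` objects (the helper name and error handling are illustrative assumptions):

```python
from datetime import timedelta

def get_timefactor(simulationstep: timedelta, parameterstep: timedelta) -> float:
    """Return the fraction between simulation and parameter step size."""
    if not simulationstep or not parameterstep:
        # Mirrors the error described above when step sizes are undefined.
        raise RuntimeError(
            'To calculate the conversion factor, you need to define both '
            'a parameter and a simulation time step size first.')
    # timedelta / timedelta yields a plain float in Python 3
    return simulationstep / parameterstep
```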
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameter.apply_timefactor | def apply_timefactor(cls, values):
"""Change and return the given value(s) in accordance with
|Parameter.get_timefactor| and the type of time-dependence
of the actual parameter subclass.
.. testsetup::
>>> from hydpy import pub
>>> del pub.timegrids
For the same conversion factor returned by method
|Parameter.get_timefactor|, method |Parameter.apply_timefactor|
behaves differently depending on the `TIME` attribute of the
respective |Parameter| subclass. We first prepare a parameter
test class and define both the parameter and simulation step size:
>>> from hydpy.core.parametertools import Parameter
>>> class Par(Parameter):
... TIME = None
>>> Par.parameterstep = '1d'
>>> Par.simulationstep = '6h'
|None| means the value(s) of the parameter are not time-dependent
(e.g. maximum storage capacity). Hence, |Parameter.apply_timefactor|
returns the original value(s):
>>> Par.apply_timefactor(4.0)
4.0
|True| means the effective parameter value is proportional to
the simulation step size (e.g. travel time). Hence,
|Parameter.apply_timefactor| returns a reduced value in the
next example (where the simulation step size is smaller than
the parameter step size):
>>> Par.TIME = True
>>> Par.apply_timefactor(4.0)
1.0
|False| means the effective parameter value is inversely
proportional to the simulation step size (e.g. storage
coefficient). Hence, |Parameter.apply_timefactor| returns
an increased value in the next example:
>>> Par.TIME = False
>>> Par.apply_timefactor(4.0)
16.0
"""
if cls.TIME is True:
return values * cls.get_timefactor()
if cls.TIME is False:
return values / cls.get_timefactor()
return values | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1100-L1152 | train |
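The three `TIME` cases explained above can be sketched as a free function (the signature with an explicit `timefactor` argument is illustrative): proportional values are multiplied by the time factor, inversely proportional ones are divided, and time-independent ones pass through unchanged.

```python
def apply_timefactor(values, timefactor, time):
    """Adjust a parameter value according to its TIME attribute."""
    if time is True:
        # e.g. travel time: proportional to the simulation step size
        return values * timefactor
    if time is False:
        # e.g. storage coefficient: inversely proportional
        return values / timefactor
    # time-independent values (TIME is None) stay unchanged
    return values
```

With the factor 0.25 from the doctests (6 h simulation step, 1 d parameter step), this reproduces the three results shown above: 4.0, 1.0, and 16.0.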
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameter.revert_timefactor | def revert_timefactor(cls, values):
"""The inverse version of method |Parameter.apply_timefactor|.
See the explanations on method |Parameter.apply_timefactor| to
understand the following examples:
.. testsetup::
>>> from hydpy import pub
>>> del pub.timegrids
>>> from hydpy.core.parametertools import Parameter
>>> class Par(Parameter):
... TIME = None
>>> Par.parameterstep = '1d'
>>> Par.simulationstep = '6h'
>>> Par.revert_timefactor(4.0)
4.0
>>> Par.TIME = True
>>> Par.revert_timefactor(4.0)
16.0
>>> Par.TIME = False
>>> Par.revert_timefactor(4.0)
1.0
"""
if cls.TIME is True:
return values / cls.get_timefactor()
if cls.TIME is False:
return values * cls.get_timefactor()
return values | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1155-L1186 | train |
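Reverting simply swaps the multiplication and division of the apply step, so applying and reverting with the same factor is an identity. A self-contained sketch of both directions (illustrative free functions, not the HydPy API):

```python
def apply_timefactor(values, timefactor, time):
    if time is True:
        return values * timefactor
    if time is False:
        return values / timefactor
    return values

def revert_timefactor(values, timefactor, time):
    # The exact inverse of apply_timefactor for each TIME case.
    if time is True:
        return values / timefactor
    if time is False:
        return values * timefactor
    return values
```

For the doctest factor of 0.25, `revert_timefactor(4.0, 0.25, True)` gives 16.0 and `revert_timefactor(4.0, 0.25, False)` gives 1.0, matching the examples above.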
hydpy-dev/hydpy | hydpy/core/parametertools.py | Parameter.compress_repr | def compress_repr(self) -> Optional[str]:
"""Try to find a compressed parameter value representation and
return it.
|Parameter.compress_repr| returns |None| when
failing to find a compressed representation.
.. testsetup::
>>> from hydpy import pub
>>> del pub.timegrids
For the following examples, we define a 1-dimensional sequence
handling time-dependent floating point values:
>>> from hydpy.core.parametertools import Parameter
>>> class Test(Parameter):
... NDIM = 1
... TYPE = float
... TIME = True
>>> test = Test(None)
Before and directly after defining the parameter shape, the string
`'?'` is returned:
>>> test.compress_repr()
'?'
>>> test
test(?)
>>> test.shape = 4
>>> test
test(?)
Due to the time-dependence of the values of our test class,
we need to specify a parameter and a simulation time step:
>>> test.parameterstep = '1d'
>>> test.simulationstep = '8h'
Compression succeeds when all required values are identical:
>>> test(3.0, 3.0, 3.0, 3.0)
>>> test.values
array([ 1., 1., 1., 1.])
>>> test.compress_repr()
'3.0'
>>> test
test(3.0)
Method |Parameter.compress_repr| returns |None| in case the
required values are not identical:
>>> test(1.0, 2.0, 3.0, 3.0)
>>> test.compress_repr()
>>> test
test(1.0, 2.0, 3.0, 3.0)
If some values are not required, indicate this by the `mask`
descriptor:
>>> import numpy
>>> test(3.0, 3.0, 3.0, numpy.nan)
>>> test
test(3.0, 3.0, 3.0, nan)
>>> Test.mask = numpy.array([True, True, True, False])
>>> test
test(3.0)
For a shape of zero, the string representation includes an empty list:
>>> test.shape = 0
>>> test.compress_repr()
'[]'
>>> test
test([])
Method |Parameter.compress_repr| works similarly for different
|Parameter| subclasses. The following examples focus on a
2-dimensional parameter handling integer values:
>>> from hydpy.core.parametertools import Parameter
>>> class Test(Parameter):
... NDIM = 2
... TYPE = int
... TIME = None
>>> test = Test(None)
>>> test.compress_repr()
'?'
>>> test
test(?)
>>> test.shape = (2, 3)
>>> test
test(?)
>>> test([[3, 3, 3],
... [3, 3, 3]])
>>> test
test(3)
>>> test([[3, 3, -999999],
... [3, 3, 3]])
>>> test
test([[3, 3, -999999],
[3, 3, 3]])
>>> Test.mask = numpy.array([
... [True, True, False],
... [True, True, True]])
>>> test
test(3)
>>> test.shape = (0, 0)
>>> test
test([[]])
"""
if not hasattr(self, 'value'):
return '?'
if not self:
return f"{self.NDIM * '['}{self.NDIM * ']'}"
unique = numpy.unique(self[self.mask])
if sum(numpy.isnan(unique)) == len(unique.flatten()):
unique = numpy.array([numpy.nan])
else:
unique = self.revert_timefactor(unique)
if len(unique) == 1:
return objecttools.repr_(unique[0])
return None | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1209-L1336 | train |
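The core compression idea described above — collapse to a single-value string when all mask-selected values agree, otherwise give up — can be sketched in pure Python without NumPy or nan handling (function name mirrors the method, but the signature and the list-based mask are illustrative):

```python
def compress_repr(values, mask):
    """Return a compressed string, '[]' for empty shapes, or None."""
    if not values:
        # a shape of zero compresses to an empty list
        return '[]'
    # keep only the values marked as relevant by the mask
    relevant = {value for value, required in zip(values, mask) if required}
    if len(relevant) == 1:
        return repr(relevant.pop())
    return None  # compression fails for non-identical values
```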
hydpy-dev/hydpy | hydpy/core/parametertools.py | NameParameter.compress_repr | def compress_repr(self) -> str:
"""Works as |Parameter.compress_repr|, but returns a
string with constant names instead of constant values.
See the main documentation on class |NameParameter| for
further information.
"""
string = super().compress_repr()
if string in ('?', '[]'):
return string
if string is None:
values = self.values
else:
values = [int(string)]
invmap = {value: key for key, value in
self.CONSTANTS.items()}
result = ', '.join(
invmap.get(value, repr(value)) for value in values)
if len(self) > 255:
result = f'[{result}]'
return result | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1431-L1451 | train |
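The value-to-name translation used above boils down to inverting the `CONSTANTS` mapping and falling back to the plain number for unknown values. A sketch with made-up constants (the mapping and function name are illustrative, not HydPy's actual constants):

```python
# Illustrative constants; real NameParameter subclasses define their own.
CONSTANTS = {'FIELD': 1, 'FOREST': 2, 'WATER': 3}

def names(values):
    """Translate integer constant values back into their names."""
    invmap = {value: key for key, value in CONSTANTS.items()}
    return ', '.join(invmap.get(value, repr(value)) for value in values)
```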
hydpy-dev/hydpy | hydpy/core/parametertools.py | ZipParameter.compress_repr | def compress_repr(self) -> Optional[str]:
"""Works as |Parameter.compress_repr|, but alternatively
tries to compress by following an external classification.
See the main documentation on class |ZipParameter| for
further information.
"""
string = super().compress_repr()
if string is not None:
return string
results = []
mask = self.mask
refindices = mask.refindices.values
for (key, value) in self.MODEL_CONSTANTS.items():
if value in mask.RELEVANT_VALUES:
unique = numpy.unique(self.values[refindices == value])
unique = self.revert_timefactor(unique)
length = len(unique)
if length == 1:
results.append(
f'{key.lower()}={objecttools.repr_(unique[0])}')
elif length > 1:
return None
return ', '.join(sorted(results)) | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1647-L1670 | train |
hydpy-dev/hydpy | hydpy/core/parametertools.py | SeasonalParameter.refresh | def refresh(self) -> None:
"""Update the actual simulation values based on the toy-value pairs.
Usually, one does not need to call refresh explicitly. The
"magic" methods __call__, __setattr__, and __delattr__ invoke
it automatically, when required.
Instantiate a 1-dimensional |SeasonalParameter| object:
>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
... NDIM = 1
... TYPE = float
... TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)
When a |SeasonalParameter| object does not contain any toy-value
pairs yet, the method |SeasonalParameter.refresh| sets all actual
simulation values to zero:
>>> par.values = 1.
>>> par.refresh()
>>> par.values[0]
0.0
When there is only one toy-value pair, its values are relevant
for all actual simulation values:
>>> par.toy_1 = 2. # calls refresh automatically
>>> par.values[0]
2.0
Method |SeasonalParameter.refresh| performs a linear interpolation
for the central time points of each simulation time step. Hence,
in the following example, the original values of the toy-value
pairs do not show up:
>>> par.toy_12_31 = 4.
>>> from hydpy import round_
>>> round_(par.values[0])
2.00274
>>> round_(par.values[-2])
3.99726
>>> par.values[-1]
3.0
If one wants to preserve the original values in this example, one
would have to set the corresponding toy instances in the middle of
some simulation step intervals:
>>> del par.toy_1
>>> del par.toy_12_31
>>> par.toy_1_1_12 = 2
>>> par.toy_12_31_12 = 4.
>>> par.values[0]
2.0
>>> round_(par.values[1])
2.005479
>>> round_(par.values[-2])
3.994521
>>> par.values[-1]
4.0
"""
if not self:
self.values[:] = 0.
elif len(self) == 1:
values = list(self._toy2values.values())[0]
self.values[:] = self.apply_timefactor(values)
else:
for idx, date in enumerate(
timetools.TOY.centred_timegrid(self.simulationstep)):
values = self.interp(date)
self.values[idx] = self.apply_timefactor(values) | python | def refresh(self) -> None:
"""Update the actual simulation values based on the toy-value pairs.
Usually, one does not need to call refresh explicitly. The
"magic" methods __call__, __setattr__, and __delattr__ invoke
it automatically, when required.
Instantiate a 1-dimensional |SeasonalParameter| object:
>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
... NDIM = 1
... TYPE = float
... TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)
When a |SeasonalParameter| object does not contain any toy-value
pairs yet, the method |SeasonalParameter.refresh| sets all actual
simulation values to zero:
>>> par.values = 1.
>>> par.refresh()
>>> par.values[0]
0.0
When there is only one toy-value pair, its values are relevant
for all actual simulation values:
>>> par.toy_1 = 2. # calls refresh automatically
>>> par.values[0]
2.0
Method |SeasonalParameter.refresh| performs a linear interpolation
for the central time points of each simulation time step. Hence,
in the following example, the original values of the toy-value
pairs do not show up:
>>> par.toy_12_31 = 4.
>>> from hydpy import round_
>>> round_(par.values[0])
2.00274
>>> round_(par.values[-2])
3.99726
>>> par.values[-1]
3.0
If one wants to preserve the original values in this example, one
would have to set the corresponding toy instances in the middle of
some simulation step intervals:
>>> del par.toy_1
>>> del par.toy_12_31
>>> par.toy_1_1_12 = 2
>>> par.toy_12_31_12 = 4.
>>> par.values[0]
2.0
>>> round_(par.values[1])
2.005479
>>> round_(par.values[-2])
3.994521
>>> par.values[-1]
4.0
"""
if not self:
self.values[:] = 0.
elif len(self) == 1:
values = list(self._toy2values.values())[0]
self.values[:] = self.apply_timefactor(values)
else:
for idx, date in enumerate(
timetools.TOY.centred_timegrid(self.simulationstep)):
values = self.interp(date)
self.values[idx] = self.apply_timefactor(values) | [
"def",
"refresh",
"(",
"self",
")",
"->",
"None",
":",
"if",
"not",
"self",
":",
"self",
".",
"values",
"[",
":",
"]",
"=",
"0.",
"elif",
"len",
"(",
"self",
")",
"==",
"1",
":",
"values",
"=",
"list",
"(",
"self",
".",
"_toy2values",
".",
"values",
"(",
")",
")",
"[",
"0",
"]",
"self",
".",
"values",
"[",
":",
"]",
"=",
"self",
".",
"apply_timefactor",
"(",
"values",
")",
"else",
":",
"for",
"idx",
",",
"date",
"in",
"enumerate",
"(",
"timetools",
".",
"TOY",
".",
"centred_timegrid",
"(",
"self",
".",
"simulationstep",
")",
")",
":",
"values",
"=",
"self",
".",
"interp",
"(",
"date",
")",
"self",
".",
"values",
"[",
"idx",
"]",
"=",
"self",
".",
"apply_timefactor",
"(",
"values",
")"
] | Update the actual simulation values based on the toy-value pairs.
Usually, one does not need to call refresh explicitly. The
"magic" methods __call__, __setattr__, and __delattr__ invoke
it automatically, when required.
Instantiate a 1-dimensional |SeasonalParameter| object:
>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
... NDIM = 1
... TYPE = float
... TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)
When a |SeasonalParameter| object does not contain any toy-value
pairs yet, the method |SeasonalParameter.refresh| sets all actual
simulation values to zero:
>>> par.values = 1.
>>> par.refresh()
>>> par.values[0]
0.0
When there is only one toy-value pair, its values are relevant
for all actual simulation values:
>>> par.toy_1 = 2. # calls refresh automatically
>>> par.values[0]
2.0
Method |SeasonalParameter.refresh| performs a linear interpolation
for the central time points of each simulation time step. Hence,
in the following example, the original values of the toy-value
pairs do not show up:
>>> par.toy_12_31 = 4.
>>> from hydpy import round_
>>> round_(par.values[0])
2.00274
>>> round_(par.values[-2])
3.99726
>>> par.values[-1]
3.0
If one wants to preserve the original values in this example, one
would have to set the corresponding toy instances in the middle of
some simulation step intervals:
>>> del par.toy_1
>>> del par.toy_12_31
>>> par.toy_1_1_12 = 2
>>> par.toy_12_31_12 = 4.
>>> par.values[0]
2.0
>>> round_(par.values[1])
2.005479
>>> round_(par.values[-2])
3.994521
>>> par.values[-1]
4.0 | [
"Update",
"the",
"actual",
"simulation",
"values",
"based",
"on",
"the",
"toy",
"-",
"value",
"pairs",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1861-L1935 | train |
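The `refresh` method above follows three branches: no toy-value pairs means all zeros, a single pair is broadcast to every step, and otherwise each step gets an interpolated value at its centre. A minimal, HydPy-free sketch of that branching (the helper name `refresh_values` and the plain-float time axis are assumptions for illustration):

```python
import numpy as np

def refresh_values(toy2values, nsteps, interp=None):
    """Mimic the three branches of SeasonalParameter.refresh (sketch):
    no pairs -> zeros, one pair -> constant, else interpolate per step."""
    values = np.zeros(nsteps)
    if not toy2values:
        return values                                # no pairs: all zeros
    if len(toy2values) == 1:
        values[:] = next(iter(toy2values.values()))  # one pair: constant
        return values
    for idx in range(nsteps):
        values[idx] = interp(idx + 0.5)              # centre of each step
    return values
```

The `idx + 0.5` mirrors the centred time grid used by the real method: interpolation targets the middle of each simulation step, not its boundaries.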
hydpy-dev/hydpy | hydpy/core/parametertools.py | SeasonalParameter.interp | def interp(self, date: timetools.Date) -> float:
"""Perform a linear value interpolation for the given `date` and
return the result.
Instantiate a 1-dimensional |SeasonalParameter| object:
>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
... NDIM = 1
... TYPE = float
... TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)
Define three toy-value pairs:
>>> par(_1=2.0, _2=5.0, _12_31=4.0)
Passing a |Date| object matching a |TOY| object exactly returns
the corresponding |float| value:
>>> from hydpy import Date
>>> par.interp(Date('2000.01.01'))
2.0
>>> par.interp(Date('2000.02.01'))
5.0
>>> par.interp(Date('2000.12.31'))
4.0
For all intermediate points, |SeasonalParameter.interp| performs
a linear interpolation:
>>> from hydpy import round_
>>> round_(par.interp(Date('2000.01.02')))
2.096774
>>> round_(par.interp(Date('2000.01.31')))
4.903226
>>> round_(par.interp(Date('2000.02.02')))
4.997006
>>> round_(par.interp(Date('2000.12.30')))
4.002994
Linear interpolation is also allowed between the first and the
last pair when they do not capture the endpoints of the year:
>>> par(_1_2=2.0, _12_30=4.0)
>>> round_(par.interp(Date('2000.12.29')))
3.99449
>>> par.interp(Date('2000.12.30'))
4.0
>>> round_(par.interp(Date('2000.12.31')))
3.333333
>>> round_(par.interp(Date('2000.01.01')))
2.666667
>>> par.interp(Date('2000.01.02'))
2.0
>>> round_(par.interp(Date('2000.01.03')))
2.00551
The following example briefly shows interpolation performed for
a 2-dimensional parameter:
>>> Par.NDIM = 2
>>> par = Par(None)
>>> par.shape = (None, 2)
>>> par(_1_1=[1., 2.], _1_3=[-3, 0.])
>>> result = par.interp(Date('2000.01.02'))
>>> round_(result[0])
-1.0
>>> round_(result[1])
1.0
"""
xnew = timetools.TOY(date)
xys = list(self)
for idx, (x_1, y_1) in enumerate(xys):
if x_1 > xnew:
x_0, y_0 = xys[idx-1]
break
else:
x_0, y_0 = xys[-1]
x_1, y_1 = xys[0]
return y_0+(y_1-y_0)/(x_1-x_0)*(xnew-x_0) | python | def interp(self, date: timetools.Date) -> float:
"""Perform a linear value interpolation for the given `date` and
return the result.
Instantiate a 1-dimensional |SeasonalParameter| object:
>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
... NDIM = 1
... TYPE = float
... TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)
Define three toy-value pairs:
>>> par(_1=2.0, _2=5.0, _12_31=4.0)
Passing a |Date| object matching a |TOY| object exactly returns
the corresponding |float| value:
>>> from hydpy import Date
>>> par.interp(Date('2000.01.01'))
2.0
>>> par.interp(Date('2000.02.01'))
5.0
>>> par.interp(Date('2000.12.31'))
4.0
For all intermediate points, |SeasonalParameter.interp| performs
a linear interpolation:
>>> from hydpy import round_
>>> round_(par.interp(Date('2000.01.02')))
2.096774
>>> round_(par.interp(Date('2000.01.31')))
4.903226
>>> round_(par.interp(Date('2000.02.02')))
4.997006
>>> round_(par.interp(Date('2000.12.30')))
4.002994
Linear interpolation is also allowed between the first and the
last pair when they do not capture the endpoints of the year:
>>> par(_1_2=2.0, _12_30=4.0)
>>> round_(par.interp(Date('2000.12.29')))
3.99449
>>> par.interp(Date('2000.12.30'))
4.0
>>> round_(par.interp(Date('2000.12.31')))
3.333333
>>> round_(par.interp(Date('2000.01.01')))
2.666667
>>> par.interp(Date('2000.01.02'))
2.0
>>> round_(par.interp(Date('2000.01.03')))
2.00551
The following example briefly shows interpolation performed for
a 2-dimensional parameter:
>>> Par.NDIM = 2
>>> par = Par(None)
>>> par.shape = (None, 2)
>>> par(_1_1=[1., 2.], _1_3=[-3, 0.])
>>> result = par.interp(Date('2000.01.02'))
>>> round_(result[0])
-1.0
>>> round_(result[1])
1.0
"""
xnew = timetools.TOY(date)
xys = list(self)
for idx, (x_1, y_1) in enumerate(xys):
if x_1 > xnew:
x_0, y_0 = xys[idx-1]
break
else:
x_0, y_0 = xys[-1]
x_1, y_1 = xys[0]
return y_0+(y_1-y_0)/(x_1-x_0)*(xnew-x_0) | [
"def",
"interp",
"(",
"self",
",",
"date",
":",
"timetools",
".",
"Date",
")",
"->",
"float",
":",
"xnew",
"=",
"timetools",
".",
"TOY",
"(",
"date",
")",
"xys",
"=",
"list",
"(",
"self",
")",
"for",
"idx",
",",
"(",
"x_1",
",",
"y_1",
")",
"in",
"enumerate",
"(",
"xys",
")",
":",
"if",
"x_1",
">",
"xnew",
":",
"x_0",
",",
"y_0",
"=",
"xys",
"[",
"idx",
"-",
"1",
"]",
"break",
"else",
":",
"x_0",
",",
"y_0",
"=",
"xys",
"[",
"-",
"1",
"]",
"x_1",
",",
"y_1",
"=",
"xys",
"[",
"0",
"]",
"return",
"y_0",
"+",
"(",
"y_1",
"-",
"y_0",
")",
"/",
"(",
"x_1",
"-",
"x_0",
")",
"*",
"(",
"xnew",
"-",
"x_0",
")"
] | Perform a linear value interpolation for the given `date` and
return the result.
Instantiate a 1-dimensional |SeasonalParameter| object:
>>> from hydpy.core.parametertools import SeasonalParameter
>>> class Par(SeasonalParameter):
... NDIM = 1
... TYPE = float
... TIME = None
>>> par = Par(None)
>>> par.simulationstep = '1d'
>>> par.shape = (None,)
Define three toy-value pairs:
>>> par(_1=2.0, _2=5.0, _12_31=4.0)
Passing a |Date| object matching a |TOY| object exactly returns
the corresponding |float| value:
>>> from hydpy import Date
>>> par.interp(Date('2000.01.01'))
2.0
>>> par.interp(Date('2000.02.01'))
5.0
>>> par.interp(Date('2000.12.31'))
4.0
For all intermediate points, |SeasonalParameter.interp| performs
a linear interpolation:
>>> from hydpy import round_
>>> round_(par.interp(Date('2000.01.02')))
2.096774
>>> round_(par.interp(Date('2000.01.31')))
4.903226
>>> round_(par.interp(Date('2000.02.02')))
4.997006
>>> round_(par.interp(Date('2000.12.30')))
4.002994
Linear interpolation is also allowed between the first and the
last pair when they do not capture the endpoints of the year:
>>> par(_1_2=2.0, _12_30=4.0)
>>> round_(par.interp(Date('2000.12.29')))
3.99449
>>> par.interp(Date('2000.12.30'))
4.0
>>> round_(par.interp(Date('2000.12.31')))
3.333333
>>> round_(par.interp(Date('2000.01.01')))
2.666667
>>> par.interp(Date('2000.01.02'))
2.0
>>> round_(par.interp(Date('2000.01.03')))
2.00551
The following example briefly shows interpolation performed for
a 2-dimensional parameter:
>>> Par.NDIM = 2
>>> par = Par(None)
>>> par.shape = (None, 2)
>>> par(_1_1=[1., 2.], _1_3=[-3, 0.])
>>> result = par.interp(Date('2000.01.02'))
>>> round_(result[0])
-1.0
>>> round_(result[1])
1.0 | [
"Perform",
"a",
"linear",
"value",
"interpolation",
"for",
"the",
"given",
"date",
"and",
"return",
"the",
"result",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L1937-L2019 | train |
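The `interp` method implements linear interpolation over a cyclic axis: when the query point falls before the first or after the last toy-value pair, the last and first pairs are joined across the year boundary. A standalone sketch using plain fractional-day positions instead of |TOY| objects (the period handling is an assumption standing in for TOY's modular arithmetic):

```python
def interp_cyclic(pairs, x, period=365.0):
    """Linear interpolation on a cyclic axis (sketch of
    SeasonalParameter.interp); `pairs` is a sorted list of (x, y)."""
    for idx, (x1, y1) in enumerate(pairs):
        if x1 > x:
            x0, y0 = pairs[idx - 1]   # idx == 0 wraps to the last pair
            break
    else:                             # x lies after the last anchor
        x0, y0 = pairs[-1]
        x1, y1 = pairs[0]
    if x1 < x0:                       # wrapped interval: shift by one period
        x1 += period
        if x < x0:
            x += period
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)
```

With anchors at days 10 and 300, the midpoint day 155 yields the mean of the two values, and queries in the December/January gap interpolate across the year boundary, just as the doctest for `_1_2`/`_12_30` shows.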
hydpy-dev/hydpy | hydpy/core/parametertools.py | RelSubweightsMixin.update | def update(self) -> None:
"""Update subclass of |RelSubweightsMixin| based on `refweights`."""
mask = self.mask
weights = self.refweights[mask]
self[~mask] = numpy.nan
self[mask] = weights/numpy.sum(weights) | python | def update(self) -> None:
"""Update subclass of |RelSubweightsMixin| based on `refweights`."""
mask = self.mask
weights = self.refweights[mask]
self[~mask] = numpy.nan
self[mask] = weights/numpy.sum(weights) | [
"def",
"update",
"(",
"self",
")",
"->",
"None",
":",
"mask",
"=",
"self",
".",
"mask",
"weights",
"=",
"self",
".",
"refweights",
"[",
"mask",
"]",
"self",
"[",
"~",
"mask",
"]",
"=",
"numpy",
".",
"nan",
"self",
"[",
"mask",
"]",
"=",
"weights",
"/",
"numpy",
".",
"sum",
"(",
"weights",
")"
] | Update subclass of |RelSubweightsMixin| based on `refweights`. | [
"Update",
"subclass",
"of",
"|RelSubweightsMixin|",
"based",
"on",
"refweights",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L2476-L2481 | train |
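`RelSubweightsMixin.update` normalises the reference weights over the masked entries and marks everything outside the mask as NaN. The same two-line core, extracted into a free function for illustration (the name `rel_weights` is an assumption):

```python
import numpy as np

def rel_weights(refweights, mask):
    """Normalise `refweights` over `mask` (sketch of
    RelSubweightsMixin.update); masked-out entries become NaN."""
    out = np.full(refweights.shape, np.nan)
    w = refweights[mask]
    out[mask] = w / w.sum()           # masked entries sum to 1.0
    return out
```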
hydpy-dev/hydpy | hydpy/core/parametertools.py | SolverParameter.alternative_initvalue | def alternative_initvalue(self) -> Union[bool, int, float]:
"""A user-defined value to be used instead of the value of class
constant `INIT`.
See the main documentation on class |SolverParameter| for more
information.
"""
if self._alternative_initvalue is None:
raise AttributeError(
f'No alternative initial value for solver parameter '
f'{objecttools.elementphrase(self)} has been defined so far.')
else:
return self._alternative_initvalue | python | def alternative_initvalue(self) -> Union[bool, int, float]:
"""A user-defined value to be used instead of the value of class
constant `INIT`.
See the main documentation on class |SolverParameter| for more
information.
"""
if self._alternative_initvalue is None:
raise AttributeError(
f'No alternative initial value for solver parameter '
f'{objecttools.elementphrase(self)} has been defined so far.')
else:
return self._alternative_initvalue | [
"def",
"alternative_initvalue",
"(",
"self",
")",
"->",
"Union",
"[",
"bool",
",",
"int",
",",
"float",
"]",
":",
"if",
"self",
".",
"_alternative_initvalue",
"is",
"None",
":",
"raise",
"AttributeError",
"(",
"f'No alternative initial value for solver parameter '",
"f'{objecttools.elementphrase(self)} has been defined so far.'",
")",
"else",
":",
"return",
"self",
".",
"_alternative_initvalue"
] | A user-defined value to be used instead of the value of class
constant `INIT`.
See the main documentation on class |SolverParameter| for more
information. | [
"A",
"user",
"-",
"defined",
"value",
"to",
"be",
"used",
"instead",
"of",
"the",
"value",
"of",
"class",
"constant",
"INIT",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L2740-L2752 | train |
hydpy-dev/hydpy | hydpy/core/parametertools.py | TOYParameter.update | def update(self) -> None:
"""Reference the actual |Indexer.timeofyear| array of the
|Indexer| object available in module |pub|.
>>> from hydpy import pub
>>> pub.timegrids = '27.02.2004', '3.03.2004', '1d'
>>> from hydpy.core.parametertools import TOYParameter
>>> toyparameter = TOYParameter(None)
>>> toyparameter.update()
>>> toyparameter
toyparameter(57, 58, 59, 60, 61)
"""
indexarray = hydpy.pub.indexer.timeofyear
self.shape = indexarray.shape
self.values = indexarray | python | def update(self) -> None:
"""Reference the actual |Indexer.timeofyear| array of the
|Indexer| object available in module |pub|.
>>> from hydpy import pub
>>> pub.timegrids = '27.02.2004', '3.03.2004', '1d'
>>> from hydpy.core.parametertools import TOYParameter
>>> toyparameter = TOYParameter(None)
>>> toyparameter.update()
>>> toyparameter
toyparameter(57, 58, 59, 60, 61)
"""
indexarray = hydpy.pub.indexer.timeofyear
self.shape = indexarray.shape
self.values = indexarray | [
"def",
"update",
"(",
"self",
")",
"->",
"None",
":",
"indexarray",
"=",
"hydpy",
".",
"pub",
".",
"indexer",
".",
"timeofyear",
"self",
".",
"shape",
"=",
"indexarray",
".",
"shape",
"self",
".",
"values",
"=",
"indexarray"
] | Reference the actual |Indexer.timeofyear| array of the
|Indexer| object available in module |pub|.
>>> from hydpy import pub
>>> pub.timegrids = '27.02.2004', '3.03.2004', '1d'
>>> from hydpy.core.parametertools import TOYParameter
>>> toyparameter = TOYParameter(None)
>>> toyparameter.update()
>>> toyparameter
toyparameter(57, 58, 59, 60, 61) | [
"Reference",
"the",
"actual",
"|Indexer",
".",
"timeofyear|",
"array",
"of",
"the",
"|Indexer|",
"object",
"available",
"in",
"module",
"|pub|",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/parametertools.py#L2792-L2806 | train |
arteria/django-openinghours | openinghours/utils.py | get_premises_model | def get_premises_model():
"""
Support for custom company premises model
with developer friendly validation.
"""
try:
app_label, model_name = PREMISES_MODEL.split('.')
except ValueError:
raise ImproperlyConfigured("OPENINGHOURS_PREMISES_MODEL must be of the"
" form 'app_label.model_name'")
premises_model = get_model(app_label=app_label, model_name=model_name)
if premises_model is None:
raise ImproperlyConfigured("OPENINGHOURS_PREMISES_MODEL refers to"
" model '%s' that has not been installed"
% PREMISES_MODEL)
return premises_model | python | def get_premises_model():
"""
Support for custom company premises model
with developer friendly validation.
"""
try:
app_label, model_name = PREMISES_MODEL.split('.')
except ValueError:
raise ImproperlyConfigured("OPENINGHOURS_PREMISES_MODEL must be of the"
" form 'app_label.model_name'")
premises_model = get_model(app_label=app_label, model_name=model_name)
if premises_model is None:
raise ImproperlyConfigured("OPENINGHOURS_PREMISES_MODEL refers to"
" model '%s' that has not been installed"
% PREMISES_MODEL)
return premises_model | [
"def",
"get_premises_model",
"(",
")",
":",
"try",
":",
"app_label",
",",
"model_name",
"=",
"PREMISES_MODEL",
".",
"split",
"(",
"'.'",
")",
"except",
"ValueError",
":",
"raise",
"ImproperlyConfigured",
"(",
"\"OPENINGHOURS_PREMISES_MODEL must be of the\"",
"\" form 'app_label.model_name'\"",
")",
"premises_model",
"=",
"get_model",
"(",
"app_label",
"=",
"app_label",
",",
"model_name",
"=",
"model_name",
")",
"if",
"premises_model",
"is",
"None",
":",
"raise",
"ImproperlyConfigured",
"(",
"\"OPENINGHOURS_PREMISES_MODEL refers to\"",
"\" model '%s' that has not been installed\"",
"%",
"PREMISES_MODEL",
")",
"return",
"premises_model"
] | Support for custom company premises model
with developer friendly validation. | [
"Support",
"for",
"custom",
"company",
"premises",
"model",
"with",
"developer",
"friendly",
"validation",
"."
] | 6bad47509a14d65a3a5a08777455f4cc8b4961fa | https://github.com/arteria/django-openinghours/blob/6bad47509a14d65a3a5a08777455f4cc8b4961fa/openinghours/utils.py#L13-L28 | train |
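The validation in `get_premises_model` hinges on splitting the setting string into exactly two dot-separated parts before touching the model registry. A Django-free sketch of just that parsing step (raising plain `ValueError` instead of `ImproperlyConfigured` is an assumption for testability):

```python
def split_model_path(path):
    """Validate an 'app_label.model_name' string (sketch of the first
    check inside get_premises_model, without Django)."""
    try:
        app_label, model_name = path.split('.')
    except ValueError:
        raise ValueError(
            "model path must be of the form 'app_label.model_name'")
    return app_label, model_name
```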
arteria/django-openinghours | openinghours/utils.py | get_now | def get_now():
"""
Allows to access global request and read a timestamp from query.
"""
if not get_current_request:
return datetime.datetime.now()
request = get_current_request()
if request:
openinghours_now = request.GET.get('openinghours-now')
if openinghours_now:
return datetime.datetime.strptime(openinghours_now, '%Y%m%d%H%M%S')
return datetime.datetime.now() | python | def get_now():
"""
Allows to access global request and read a timestamp from query.
"""
if not get_current_request:
return datetime.datetime.now()
request = get_current_request()
if request:
openinghours_now = request.GET.get('openinghours-now')
if openinghours_now:
return datetime.datetime.strptime(openinghours_now, '%Y%m%d%H%M%S')
return datetime.datetime.now() | [
"def",
"get_now",
"(",
")",
":",
"if",
"not",
"get_current_request",
":",
"return",
"datetime",
".",
"datetime",
".",
"now",
"(",
")",
"request",
"=",
"get_current_request",
"(",
")",
"if",
"request",
":",
"openinghours_now",
"=",
"request",
".",
"GET",
".",
"get",
"(",
"'openinghours-now'",
")",
"if",
"openinghours_now",
":",
"return",
"datetime",
".",
"datetime",
".",
"strptime",
"(",
"openinghours_now",
",",
"'%Y%m%d%H%M%S'",
")",
"return",
"datetime",
".",
"datetime",
".",
"now",
"(",
")"
] | Allows to access global request and read a timestamp from query. | [
"Allows",
"to",
"access",
"global",
"request",
"and",
"read",
"a",
"timestamp",
"from",
"query",
"."
] | 6bad47509a14d65a3a5a08777455f4cc8b4961fa | https://github.com/arteria/django-openinghours/blob/6bad47509a14d65a3a5a08777455f4cc8b4961fa/openinghours/utils.py#L33-L44 | train |
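`get_now` accepts a compact `%Y%m%d%H%M%S` timestamp from the query string so tests can freeze "now". A sketch of that parsing with the same fallback, minus the request machinery (the function name is an assumption):

```python
import datetime

def parse_now(override=None):
    """Return an overridden 'now' parsed from a compact timestamp, or
    the real current time (sketch of get_now without the request)."""
    if override:
        return datetime.datetime.strptime(override, '%Y%m%d%H%M%S')
    return datetime.datetime.now()
```

For example, `parse_now('20240102030405')` pins the clock to 2024-01-02 03:04:05, which is how `?openinghours-now=...` lets a URL exercise opening-hour logic at an arbitrary moment.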
arteria/django-openinghours | openinghours/utils.py | get_closing_rule_for_now | def get_closing_rule_for_now(location):
"""
Returns QuerySet of ClosingRules that are currently valid
"""
now = get_now()
if location:
return ClosingRules.objects.filter(company=location,
start__lte=now, end__gte=now)
return Company.objects.first().closingrules_set.filter(start__lte=now,
end__gte=now) | python | def get_closing_rule_for_now(location):
"""
Returns QuerySet of ClosingRules that are currently valid
"""
now = get_now()
if location:
return ClosingRules.objects.filter(company=location,
start__lte=now, end__gte=now)
return Company.objects.first().closingrules_set.filter(start__lte=now,
end__gte=now) | [
"def",
"get_closing_rule_for_now",
"(",
"location",
")",
":",
"now",
"=",
"get_now",
"(",
")",
"if",
"location",
":",
"return",
"ClosingRules",
".",
"objects",
".",
"filter",
"(",
"company",
"=",
"location",
",",
"start__lte",
"=",
"now",
",",
"end__gte",
"=",
"now",
")",
"return",
"Company",
".",
"objects",
".",
"first",
"(",
")",
".",
"closingrules_set",
".",
"filter",
"(",
"start__lte",
"=",
"now",
",",
"end__gte",
"=",
"now",
")"
] | Returns QuerySet of ClosingRules that are currently valid | [
"Returns",
"QuerySet",
"of",
"ClosingRules",
"that",
"are",
"currently",
"valid"
] | 6bad47509a14d65a3a5a08777455f4cc8b4961fa | https://github.com/arteria/django-openinghours/blob/6bad47509a14d65a3a5a08777455f4cc8b4961fa/openinghours/utils.py#L47-L58 | train |
arteria/django-openinghours | openinghours/utils.py | is_open | def is_open(location, now=None):
"""
Is the company currently open? Pass "now" to test with a specific
timestamp. Can be used stand-alone or as a helper.
"""
if now is None:
now = get_now()
if has_closing_rule_for_now(location):
return False
now_time = datetime.time(now.hour, now.minute, now.second)
if location:
ohs = OpeningHours.objects.filter(company=location)
else:
ohs = Company.objects.first().openinghours_set.all()
for oh in ohs:
is_open = False
# start and end is on the same day
if (oh.weekday == now.isoweekday() and
oh.from_hour <= now_time and
now_time <= oh.to_hour):
is_open = oh
# start and end are not on the same day and we test on the start day
if (oh.weekday == now.isoweekday() and
oh.from_hour <= now_time and
((oh.to_hour < oh.from_hour) and
(now_time < datetime.time(23, 59, 59)))):
is_open = oh
# start and end are not on the same day and we test on the end day
if (oh.weekday == (now.isoweekday() - 1) % 7 and
oh.from_hour >= now_time and
oh.to_hour >= now_time and
oh.to_hour < oh.from_hour):
is_open = oh
# print " 'Special' case after midnight", oh
if is_open is not False:
return oh
return False | python | def is_open(location, now=None):
"""
Is the company currently open? Pass "now" to test with a specific
timestamp. Can be used stand-alone or as a helper.
"""
if now is None:
now = get_now()
if has_closing_rule_for_now(location):
return False
now_time = datetime.time(now.hour, now.minute, now.second)
if location:
ohs = OpeningHours.objects.filter(company=location)
else:
ohs = Company.objects.first().openinghours_set.all()
for oh in ohs:
is_open = False
# start and end is on the same day
if (oh.weekday == now.isoweekday() and
oh.from_hour <= now_time and
now_time <= oh.to_hour):
is_open = oh
# start and end are not on the same day and we test on the start day
if (oh.weekday == now.isoweekday() and
oh.from_hour <= now_time and
((oh.to_hour < oh.from_hour) and
(now_time < datetime.time(23, 59, 59)))):
is_open = oh
# start and end are not on the same day and we test on the end day
if (oh.weekday == (now.isoweekday() - 1) % 7 and
oh.from_hour >= now_time and
oh.to_hour >= now_time and
oh.to_hour < oh.from_hour):
is_open = oh
# print " 'Special' case after midnight", oh
if is_open is not False:
return oh
return False | [
"def",
"is_open",
"(",
"location",
",",
"now",
"=",
"None",
")",
":",
"if",
"now",
"is",
"None",
":",
"now",
"=",
"get_now",
"(",
")",
"if",
"has_closing_rule_for_now",
"(",
"location",
")",
":",
"return",
"False",
"now_time",
"=",
"datetime",
".",
"time",
"(",
"now",
".",
"hour",
",",
"now",
".",
"minute",
",",
"now",
".",
"second",
")",
"if",
"location",
":",
"ohs",
"=",
"OpeningHours",
".",
"objects",
".",
"filter",
"(",
"company",
"=",
"location",
")",
"else",
":",
"ohs",
"=",
"Company",
".",
"objects",
".",
"first",
"(",
")",
".",
"openinghours_set",
".",
"all",
"(",
")",
"for",
"oh",
"in",
"ohs",
":",
"is_open",
"=",
"False",
"# start and end is on the same day",
"if",
"(",
"oh",
".",
"weekday",
"==",
"now",
".",
"isoweekday",
"(",
")",
"and",
"oh",
".",
"from_hour",
"<=",
"now_time",
"and",
"now_time",
"<=",
"oh",
".",
"to_hour",
")",
":",
"is_open",
"=",
"oh",
"# start and end are not on the same day and we test on the start day",
"if",
"(",
"oh",
".",
"weekday",
"==",
"now",
".",
"isoweekday",
"(",
")",
"and",
"oh",
".",
"from_hour",
"<=",
"now_time",
"and",
"(",
"(",
"oh",
".",
"to_hour",
"<",
"oh",
".",
"from_hour",
")",
"and",
"(",
"now_time",
"<",
"datetime",
".",
"time",
"(",
"23",
",",
"59",
",",
"59",
")",
")",
")",
")",
":",
"is_open",
"=",
"oh",
"# start and end are not on the same day and we test on the end day",
"if",
"(",
"oh",
".",
"weekday",
"==",
"(",
"now",
".",
"isoweekday",
"(",
")",
"-",
"1",
")",
"%",
"7",
"and",
"oh",
".",
"from_hour",
">=",
"now_time",
"and",
"oh",
".",
"to_hour",
">=",
"now_time",
"and",
"oh",
".",
"to_hour",
"<",
"oh",
".",
"from_hour",
")",
":",
"is_open",
"=",
"oh",
"# print \" 'Special' case after midnight\", oh",
"if",
"is_open",
"is",
"not",
"False",
":",
"return",
"oh",
"return",
"False"
] | Is the company currently open? Pass "now" to test with a specific
timestamp. Can be used stand-alone or as a helper. | [
"Is",
"the",
"company",
"currently",
"open?",
"Pass",
"now",
"to",
"test",
"with",
"a",
"specific",
"timestamp",
".",
"Can",
"be",
"used",
"stand",
"-",
"alone",
"or",
"as",
"a",
"helper",
"."
] | 6bad47509a14d65a3a5a08777455f4cc8b4961fa | https://github.com/arteria/django-openinghours/blob/6bad47509a14d65a3a5a08777455f4cc8b4961fa/openinghours/utils.py#L69-L111 | train |
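The three conditions inside `is_open` exist to handle opening intervals that cross midnight: one branch for same-day intervals, one for the start day of an overnight interval, and one for its end day. A pure-Python sketch of that membership test on generic weekday numbers (the weekday convention here is an assumption; the real code compares against `isoweekday()`):

```python
import datetime

def time_in_interval(now_weekday, now_time, weekday, start, end):
    """Does (now_weekday, now_time) fall into an opening interval that
    may cross midnight? Sketch of the three branches in is_open."""
    if weekday == now_weekday and start <= now_time <= end:
        return True                      # same-day interval
    if weekday == now_weekday and start <= now_time and end < start:
        return True                      # overnight, tested on the start day
    if weekday == (now_weekday - 1) % 7 and end < start and now_time <= end:
        return True                      # overnight, tested on the end day
    return False
```

An interval opening Monday 22:00 and closing 02:00 therefore matches both Monday 23:00 (second branch) and Tuesday 01:00 (third branch).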
arteria/django-openinghours | openinghours/utils.py | next_time_open | def next_time_open(location):
"""
Returns the next possible opening hours object, or (False, None)
if location is currently open or there is no such object
I.e. when is the company open for the next time?
"""
if not is_open(location):
now = get_now()
now_time = datetime.time(now.hour, now.minute, now.second)
found_opening_hours = False
for i in range(8):
l_weekday = (now.isoweekday() + i) % 7
ohs = OpeningHours.objects.filter(company=location,
weekday=l_weekday
).order_by('weekday',
'from_hour')
if ohs.count():
for oh in ohs:
future_now = now + datetime.timedelta(days=i)
# same day issue
tmp_now = datetime.datetime(future_now.year,
future_now.month,
future_now.day,
oh.from_hour.hour,
oh.from_hour.minute,
oh.from_hour.second)
if tmp_now < now:
tmp_now = now # be sure to set the bound correctly...
if is_open(location, now=tmp_now):
found_opening_hours = oh
break
if found_opening_hours is not False:
return found_opening_hours, tmp_now
return False, None | python | def next_time_open(location):
"""
Returns the next possible opening hours object, or (False, None)
if location is currently open or there is no such object
I.e. when is the company open for the next time?
"""
if not is_open(location):
now = get_now()
now_time = datetime.time(now.hour, now.minute, now.second)
found_opening_hours = False
for i in range(8):
l_weekday = (now.isoweekday() + i) % 7
ohs = OpeningHours.objects.filter(company=location,
weekday=l_weekday
).order_by('weekday',
'from_hour')
if ohs.count():
for oh in ohs:
future_now = now + datetime.timedelta(days=i)
# same day issue
tmp_now = datetime.datetime(future_now.year,
future_now.month,
future_now.day,
oh.from_hour.hour,
oh.from_hour.minute,
oh.from_hour.second)
if tmp_now < now:
tmp_now = now # be sure to set the bound correctly...
if is_open(location, now=tmp_now):
found_opening_hours = oh
break
if found_opening_hours is not False:
return found_opening_hours, tmp_now
return False, None | [
"def",
"next_time_open",
"(",
"location",
")",
":",
"if",
"not",
"is_open",
"(",
"location",
")",
":",
"now",
"=",
"get_now",
"(",
")",
"now_time",
"=",
"datetime",
".",
"time",
"(",
"now",
".",
"hour",
",",
"now",
".",
"minute",
",",
"now",
".",
"second",
")",
"found_opening_hours",
"=",
"False",
"for",
"i",
"in",
"range",
"(",
"8",
")",
":",
"l_weekday",
"=",
"(",
"now",
".",
"isoweekday",
"(",
")",
"+",
"i",
")",
"%",
"7",
"ohs",
"=",
"OpeningHours",
".",
"objects",
".",
"filter",
"(",
"company",
"=",
"location",
",",
"weekday",
"=",
"l_weekday",
")",
".",
"order_by",
"(",
"'weekday'",
",",
"'from_hour'",
")",
"if",
"ohs",
".",
"count",
"(",
")",
":",
"for",
"oh",
"in",
"ohs",
":",
"future_now",
"=",
"now",
"+",
"datetime",
".",
"timedelta",
"(",
"days",
"=",
"i",
")",
"# same day issue",
"tmp_now",
"=",
"datetime",
".",
"datetime",
"(",
"future_now",
".",
"year",
",",
"future_now",
".",
"month",
",",
"future_now",
".",
"day",
",",
"oh",
".",
"from_hour",
".",
"hour",
",",
"oh",
".",
"from_hour",
".",
"minute",
",",
"oh",
".",
"from_hour",
".",
"second",
")",
"if",
"tmp_now",
"<",
"now",
":",
"tmp_now",
"=",
"now",
"# be sure to set the bound correctly...",
"if",
"is_open",
"(",
"location",
",",
"now",
"=",
"tmp_now",
")",
":",
"found_opening_hours",
"=",
"oh",
"break",
"if",
"found_opening_hours",
"is",
"not",
"False",
":",
"return",
"found_opening_hours",
",",
"tmp_now",
"return",
"False",
",",
"None"
] | Returns the next possible opening hours object, or (False, None)
if location is currently open or there is no such object
I.e. when is the company open for the next time? | [
"Returns",
"the",
"next",
"possible",
"opening",
"hours",
"object",
"or",
"(",
"False",
"None",
")",
"if",
"location",
"is",
"currently",
"open",
"or",
"there",
"is",
"no",
"such",
"object",
"I",
".",
"e",
".",
"when",
"is",
"the",
"company",
"open",
"for",
"the",
"next",
"time?"
] | 6bad47509a14d65a3a5a08777455f4cc8b4961fa | https://github.com/arteria/django-openinghours/blob/6bad47509a14d65a3a5a08777455f4cc8b4961fa/openinghours/utils.py#L114-L148 | train |
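The `next_time_open` loop above scans up to eight day offsets from `now` and returns the first opening-hours slot whose start is not already in the past. A minimal stand-alone sketch of that idea, with a hypothetical `OPENING` dict replacing the Django `OpeningHours` queryset (and the weekday recomputed from the shifted date, side-stepping the `(isoweekday() + i) % 7` mapping, which sends Sunday to 0 even though `isoweekday()` itself never returns 0):

```python
import datetime

# Hypothetical stand-in for the OpeningHours queryset: ISO weekday
# (1=Monday ... 7=Sunday) mapped to the opening times on that day.
OPENING = {
    1: [datetime.time(9, 0)],    # Monday
    3: [datetime.time(14, 0)],   # Wednesday
}

def next_time_open(now, opening=OPENING):
    # Scan up to eight day offsets, as the original loop does, and
    # return the first slot start that is not before `now`.
    for i in range(8):
        day = now.date() + datetime.timedelta(days=i)
        for from_hour in sorted(opening.get(day.isoweekday(), [])):
            candidate = datetime.datetime.combine(day, from_hour)
            if candidate >= now:
                return candidate
    return None  # closed for the whole scanned window
```

Unlike the original, this sketch returns the opening datetime directly rather than an `(opening_hours, datetime)` pair.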
hydpy-dev/hydpy | hydpy/models/hstream/hstream_states.py | QJoints.refweights | def refweights(self):
"""A |numpy| |numpy.ndarray| with equal weights for all segment
junctions.
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> states.qjoints.shape = 5
>>> states.qjoints.refweights
array([ 0.2, 0.2, 0.2, 0.2, 0.2])
"""
# pylint: disable=unsubscriptable-object
# due to a pylint bug (see https://github.com/PyCQA/pylint/issues/870)
return numpy.full(self.shape, 1./self.shape[0], dtype=float) | python | def refweights(self):
"""A |numpy| |numpy.ndarray| with equal weights for all segment
junctions.
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> states.qjoints.shape = 5
>>> states.qjoints.refweights
array([ 0.2, 0.2, 0.2, 0.2, 0.2])
"""
# pylint: disable=unsubscriptable-object
# due to a pylint bug (see https://github.com/PyCQA/pylint/issues/870)
return numpy.full(self.shape, 1./self.shape[0], dtype=float) | [
"def",
"refweights",
"(",
"self",
")",
":",
"# pylint: disable=unsubscriptable-object",
"# due to a pylint bug (see https://github.com/PyCQA/pylint/issues/870)",
"return",
"numpy",
".",
"full",
"(",
"self",
".",
"shape",
",",
"1.",
"/",
"self",
".",
"shape",
"[",
"0",
"]",
",",
"dtype",
"=",
"float",
")"
] | A |numpy| |numpy.ndarray| with equal weights for all segment
junctions.
>>> from hydpy.models.hstream import *
>>> parameterstep('1d')
>>> states.qjoints.shape = 5
>>> states.qjoints.refweights
array([ 0.2, 0.2, 0.2, 0.2, 0.2]) | [
"A",
"|numpy|",
"|numpy",
".",
"ndarray|",
"with",
"equal",
"weights",
"for",
"all",
"segment",
"junctions",
".."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/hstream/hstream_states.py#L57-L69 | train |
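The `refweights` property boils down to `numpy.full(shape, 1/n)`: every segment junction gets the same weight. A small sketch of that one-liner (a hypothetical helper, not part of HydPy):

```python
import numpy

def equal_refweights(n):
    # One weight of 1/n per segment junction; the weights sum to one.
    return numpy.full((n,), 1.0 / n, dtype=float)
```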
hydpy-dev/hydpy | hydpy/core/filetools.py | Folder2Path.add | def add(self, directory, path=None) -> None:
"""Add a directory and optionally its path."""
objecttools.valid_variable_identifier(directory)
if path is None:
path = directory
setattr(self, directory, path) | python | def add(self, directory, path=None) -> None:
"""Add a directory and optionally its path."""
objecttools.valid_variable_identifier(directory)
if path is None:
path = directory
setattr(self, directory, path) | [
"def",
"add",
"(",
"self",
",",
"directory",
",",
"path",
"=",
"None",
")",
"->",
"None",
":",
"objecttools",
".",
"valid_variable_identifier",
"(",
"directory",
")",
"if",
"path",
"is",
"None",
":",
"path",
"=",
"directory",
"setattr",
"(",
"self",
",",
"directory",
",",
"path",
")"
] | Add a directory and optionally its path. | [
"Add",
"a",
"directory",
"and",
"optionally",
"its",
"path",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L108-L113 | train |
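`Folder2Path.add` validates the directory name and then binds it as an instance attribute via `setattr`. A self-contained sketch, with `str.isidentifier()` plus a keyword check standing in for `objecttools.valid_variable_identifier` (an assumption; the real validator may behave differently):

```python
import keyword

class Folder2PathSketch:
    # Minimal stand-in for Folder2Path: directory names become
    # attributes pointing to their paths.

    @staticmethod
    def _check_identifier(name):
        # Approximation of objecttools.valid_variable_identifier:
        # reject names that cannot serve as Python attribute names.
        if not name.isidentifier() or keyword.iskeyword(name):
            raise ValueError(f'{name!r} is not a valid variable identifier')

    def add(self, directory, path=None):
        self._check_identifier(directory)
        if path is None:
            path = directory  # default: path equals the directory name
        setattr(self, directory, path)
```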
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.basepath | def basepath(self) -> str:
"""Absolute path pointing to the available working directories.
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... repr_(filemanager.basepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename'
"""
return os.path.abspath(
os.path.join(self.projectdir, self.BASEDIR)) | python | def basepath(self) -> str:
"""Absolute path pointing to the available working directories.
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... repr_(filemanager.basepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename'
"""
return os.path.abspath(
os.path.join(self.projectdir, self.BASEDIR)) | [
"def",
"basepath",
"(",
"self",
")",
"->",
"str",
":",
"return",
"os",
".",
"path",
".",
"abspath",
"(",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"projectdir",
",",
"self",
".",
"BASEDIR",
")",
")"
] | Absolute path pointing to the available working directories.
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... repr_(filemanager.basepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename' | [
"Absolute",
"path",
"pointing",
"to",
"the",
"available",
"working",
"directories",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L218-L231 | train |
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.availabledirs | def availabledirs(self) -> Folder2Path:
"""Names and paths of the available working directories.
Available working directories are those stored in the
base directory of the respective |FileManager| subclass.
Folders with names starting with an underscore are ignored
(use this for directories handling additional data files,
if you like). Zipped directories, which can be unpacked
on the fly, also count as available directories:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> import os
>>> from hydpy import repr_, TestIO
>>> TestIO.clear()
>>> with TestIO():
... os.makedirs('projectname/basename/folder1')
... os.makedirs('projectname/basename/folder2')
... open('projectname/basename/folder3.zip', 'w').close()
... os.makedirs('projectname/basename/_folder4')
... open('projectname/basename/folder5.tar', 'w').close()
... filemanager.availabledirs # doctest: +ELLIPSIS
Folder2Path(folder1=.../projectname/basename/folder1,
folder2=.../projectname/basename/folder2,
folder3=.../projectname/basename/folder3.zip)
"""
directories = Folder2Path()
for directory in os.listdir(self.basepath):
if not directory.startswith('_'):
path = os.path.join(self.basepath, directory)
if os.path.isdir(path):
directories.add(directory, path)
elif directory.endswith('.zip'):
directories.add(directory[:-4], path)
return directories | python | def availabledirs(self) -> Folder2Path:
"""Names and paths of the available working directories.
Available working directories are those stored in the
base directory of the respective |FileManager| subclass.
Folders with names starting with an underscore are ignored
(use this for directories handling additional data files,
if you like). Zipped directories, which can be unpacked
on the fly, also count as available directories:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> import os
>>> from hydpy import repr_, TestIO
>>> TestIO.clear()
>>> with TestIO():
... os.makedirs('projectname/basename/folder1')
... os.makedirs('projectname/basename/folder2')
... open('projectname/basename/folder3.zip', 'w').close()
... os.makedirs('projectname/basename/_folder4')
... open('projectname/basename/folder5.tar', 'w').close()
... filemanager.availabledirs # doctest: +ELLIPSIS
Folder2Path(folder1=.../projectname/basename/folder1,
folder2=.../projectname/basename/folder2,
folder3=.../projectname/basename/folder3.zip)
"""
directories = Folder2Path()
for directory in os.listdir(self.basepath):
if not directory.startswith('_'):
path = os.path.join(self.basepath, directory)
if os.path.isdir(path):
directories.add(directory, path)
elif directory.endswith('.zip'):
directories.add(directory[:-4], path)
return directories | [
"def",
"availabledirs",
"(",
"self",
")",
"->",
"Folder2Path",
":",
"directories",
"=",
"Folder2Path",
"(",
")",
"for",
"directory",
"in",
"os",
".",
"listdir",
"(",
"self",
".",
"basepath",
")",
":",
"if",
"not",
"directory",
".",
"startswith",
"(",
"'_'",
")",
":",
"path",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"basepath",
",",
"directory",
")",
"if",
"os",
".",
"path",
".",
"isdir",
"(",
"path",
")",
":",
"directories",
".",
"add",
"(",
"directory",
",",
"path",
")",
"elif",
"directory",
".",
"endswith",
"(",
"'.zip'",
")",
":",
"directories",
".",
"add",
"(",
"directory",
"[",
":",
"-",
"4",
"]",
",",
"path",
")",
"return",
"directories"
] | Names and paths of the available working directories.
Available working directories are those stored in the
base directory of the respective |FileManager| subclass.
Folders with names starting with an underscore are ignored
(use this for directories handling additional data files,
if you like). Zipped directories, which can be unpacked
on the fly, also count as available directories:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> import os
>>> from hydpy import repr_, TestIO
>>> TestIO.clear()
>>> with TestIO():
... os.makedirs('projectname/basename/folder1')
... os.makedirs('projectname/basename/folder2')
... open('projectname/basename/folder3.zip', 'w').close()
... os.makedirs('projectname/basename/_folder4')
... open('projectname/basename/folder5.tar', 'w').close()
... filemanager.availabledirs # doctest: +ELLIPSIS
Folder2Path(folder1=.../projectname/basename/folder1,
folder2=.../projectname/basename/folder2,
folder3=.../projectname/basename/folder3.zip) | [
"Names",
"and",
"paths",
"of",
"the",
"available",
"working",
"directories",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L234-L270 | train |
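The filtering rules in `availabledirs` are easy to reproduce outside HydPy: real subdirectories count, names starting with an underscore are skipped, and `.zip` files are reported under their stem. A minimal sketch of that scan (a hypothetical helper returning a plain dict instead of a `Folder2Path` object):

```python
import os

def available_dirs(basepath):
    # Mirror the availabledirs filtering: skip underscore-prefixed
    # entries, keep real directories, and treat `.zip` archives as
    # directories that could be unpacked on the fly.
    found = {}
    for name in sorted(os.listdir(basepath)):
        if name.startswith('_'):
            continue
        path = os.path.join(basepath, name)
        if os.path.isdir(path):
            found[name] = path
        elif name.endswith('.zip'):
            found[name[:-4]] = path
    return found
```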
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.currentdir | def currentdir(self) -> str:
"""Name of the current working directory containing the relevant files.
To show most of the functionality of |property|
|FileManager.currentdir| (unpacking zip files on the fly is
explained in the documentation on function
(|FileManager.zip_currentdir|), we first prepare a |FileManager|
object corresponding to the |FileManager.basepath|
`projectname/basename`:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> import os
>>> from hydpy import repr_, TestIO
>>> TestIO.clear()
>>> with TestIO():
... os.makedirs('projectname/basename')
... repr_(filemanager.basepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename'
At first, the base directory is empty and asking for the
current working directory results in the following error:
>>> with TestIO():
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: \
`.../projectname/basename` does not contain any available directories.
If only one directory exists, it is automatically considered the
current working directory:
>>> with TestIO():
... os.mkdir('projectname/basename/dir1')
... filemanager.currentdir
'dir1'
|property| |FileManager.currentdir| memorises the name of the
current working directory, even if another directory is later
added to the base path:
>>> with TestIO():
... os.mkdir('projectname/basename/dir2')
... filemanager.currentdir
'dir1'
Set the value of |FileManager.currentdir| to |None| to let it
forget the memorised directory. After that, asking for the
current working directory now results in another error, as
it is not clear which directory to select:
>>> with TestIO():
... filemanager.currentdir = None
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: \
`....../projectname/basename` does contain multiple available directories \
(dir1 and dir2).
Setting |FileManager.currentdir| manually solves the problem:
>>> with TestIO():
... filemanager.currentdir = 'dir1'
... filemanager.currentdir
'dir1'
Remove the current working directory `dir1` with the `del` statement:
>>> with TestIO():
... del filemanager.currentdir
... os.path.exists('projectname/basename/dir1')
False
|FileManager| subclasses can define a default directory name.
When many directories exist and none is selected manually, the
default directory is selected automatically. The following
example shows an error message due to multiple directories
without any having the default name:
>>> with TestIO():
... os.mkdir('projectname/basename/dir1')
... filemanager.DEFAULTDIR = 'dir3'
... del filemanager.currentdir
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: The \
default directory (dir3) is not among the available directories (dir1 and dir2).
We can fix this by adding the required default directory manually:
>>> with TestIO():
... os.mkdir('projectname/basename/dir3')
... filemanager.currentdir
'dir3'
Setting the |FileManager.currentdir| to `dir4` not only overwrites
the default name, but also creates the required folder:
>>> with TestIO():
... filemanager.currentdir = 'dir4'
... filemanager.currentdir
'dir4'
>>> with TestIO():
... sorted(os.listdir('projectname/basename'))
['dir1', 'dir2', 'dir3', 'dir4']
Failed attempts to remove directories result in error messages
like the following one:
>>> import shutil
>>> from unittest.mock import patch
>>> with patch.object(shutil, 'rmtree', side_effect=AttributeError):
... with TestIO():
... del filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
AttributeError: While trying to delete the current working directory \
`.../projectname/basename/dir4` of the FileManager object, the following \
error occurred: ...
Then, the current working directory still exists and is remembered
by |FileManager.currentdir|:
>>> with TestIO():
... filemanager.currentdir
'dir4'
>>> with TestIO():
... sorted(os.listdir('projectname/basename'))
['dir1', 'dir2', 'dir3', 'dir4']
"""
if self._currentdir is None:
directories = self.availabledirs.folders
if len(directories) == 1:
self.currentdir = directories[0]
elif self.DEFAULTDIR in directories:
self.currentdir = self.DEFAULTDIR
else:
prefix = (f'The current working directory of the '
f'{objecttools.classname(self)} object '
f'has not been defined manually and cannot '
f'be determined automatically:')
if not directories:
raise RuntimeError(
f'{prefix} `{objecttools.repr_(self.basepath)}` '
f'does not contain any available directories.')
if self.DEFAULTDIR is None:
raise RuntimeError(
f'{prefix} `{objecttools.repr_(self.basepath)}` '
f'does contain multiple available directories '
f'({objecttools.enumeration(directories)}).')
raise RuntimeError(
f'{prefix} The default directory ({self.DEFAULTDIR}) '
f'is not among the available directories '
f'({objecttools.enumeration(directories)}).')
return self._currentdir | python | def currentdir(self) -> str:
"""Name of the current working directory containing the relevant files.
To show most of the functionality of |property|
|FileManager.currentdir| (unpacking zip files on the fly is
explained in the documentation on function
(|FileManager.zip_currentdir|), we first prepare a |FileManager|
object corresponding to the |FileManager.basepath|
`projectname/basename`:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> import os
>>> from hydpy import repr_, TestIO
>>> TestIO.clear()
>>> with TestIO():
... os.makedirs('projectname/basename')
... repr_(filemanager.basepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename'
At first, the base directory is empty and asking for the
current working directory results in the following error:
>>> with TestIO():
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: \
`.../projectname/basename` does not contain any available directories.
If only one directory exists, it is automatically considered the
current working directory:
>>> with TestIO():
... os.mkdir('projectname/basename/dir1')
... filemanager.currentdir
'dir1'
|property| |FileManager.currentdir| memorises the name of the
current working directory, even if another directory is later
added to the base path:
>>> with TestIO():
... os.mkdir('projectname/basename/dir2')
... filemanager.currentdir
'dir1'
Set the value of |FileManager.currentdir| to |None| to let it
forget the memorised directory. After that, asking for the
current working directory now results in another error, as
it is not clear which directory to select:
>>> with TestIO():
... filemanager.currentdir = None
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: \
`....../projectname/basename` does contain multiple available directories \
(dir1 and dir2).
Setting |FileManager.currentdir| manually solves the problem:
>>> with TestIO():
... filemanager.currentdir = 'dir1'
... filemanager.currentdir
'dir1'
Remove the current working directory `dir1` with the `del` statement:
>>> with TestIO():
... del filemanager.currentdir
... os.path.exists('projectname/basename/dir1')
False
|FileManager| subclasses can define a default directory name.
When many directories exist and none is selected manually, the
default directory is selected automatically. The following
example shows an error message due to multiple directories
without any having the default name:
>>> with TestIO():
... os.mkdir('projectname/basename/dir1')
... filemanager.DEFAULTDIR = 'dir3'
... del filemanager.currentdir
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: The \
default directory (dir3) is not among the available directories (dir1 and dir2).
We can fix this by adding the required default directory manually:
>>> with TestIO():
... os.mkdir('projectname/basename/dir3')
... filemanager.currentdir
'dir3'
Setting the |FileManager.currentdir| to `dir4` not only overwrites
the default name, but also creates the required folder:
>>> with TestIO():
... filemanager.currentdir = 'dir4'
... filemanager.currentdir
'dir4'
>>> with TestIO():
... sorted(os.listdir('projectname/basename'))
['dir1', 'dir2', 'dir3', 'dir4']
Failed attempts to remove directories result in error messages
like the following one:
>>> import shutil
>>> from unittest.mock import patch
>>> with patch.object(shutil, 'rmtree', side_effect=AttributeError):
... with TestIO():
... del filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
AttributeError: While trying to delete the current working directory \
`.../projectname/basename/dir4` of the FileManager object, the following \
error occurred: ...
Then, the current working directory still exists and is remembered
by |FileManager.currentdir|:
>>> with TestIO():
... filemanager.currentdir
'dir4'
>>> with TestIO():
... sorted(os.listdir('projectname/basename'))
['dir1', 'dir2', 'dir3', 'dir4']
"""
if self._currentdir is None:
directories = self.availabledirs.folders
if len(directories) == 1:
self.currentdir = directories[0]
elif self.DEFAULTDIR in directories:
self.currentdir = self.DEFAULTDIR
else:
prefix = (f'The current working directory of the '
f'{objecttools.classname(self)} object '
f'has not been defined manually and cannot '
f'be determined automatically:')
if not directories:
raise RuntimeError(
f'{prefix} `{objecttools.repr_(self.basepath)}` '
f'does not contain any available directories.')
if self.DEFAULTDIR is None:
raise RuntimeError(
f'{prefix} `{objecttools.repr_(self.basepath)}` '
f'does contain multiple available directories '
f'({objecttools.enumeration(directories)}).')
raise RuntimeError(
f'{prefix} The default directory ({self.DEFAULTDIR}) '
f'is not among the available directories '
f'({objecttools.enumeration(directories)}).')
return self._currentdir | [
"def",
"currentdir",
"(",
"self",
")",
"->",
"str",
":",
"if",
"self",
".",
"_currentdir",
"is",
"None",
":",
"directories",
"=",
"self",
".",
"availabledirs",
".",
"folders",
"if",
"len",
"(",
"directories",
")",
"==",
"1",
":",
"self",
".",
"currentdir",
"=",
"directories",
"[",
"0",
"]",
"elif",
"self",
".",
"DEFAULTDIR",
"in",
"directories",
":",
"self",
".",
"currentdir",
"=",
"self",
".",
"DEFAULTDIR",
"else",
":",
"prefix",
"=",
"(",
"f'The current working directory of the '",
"f'{objecttools.classname(self)} object '",
"f'has not been defined manually and cannot '",
"f'be determined automatically:'",
")",
"if",
"not",
"directories",
":",
"raise",
"RuntimeError",
"(",
"f'{prefix} `{objecttools.repr_(self.basepath)}` '",
"f'does not contain any available directories.'",
")",
"if",
"self",
".",
"DEFAULTDIR",
"is",
"None",
":",
"raise",
"RuntimeError",
"(",
"f'{prefix} `{objecttools.repr_(self.basepath)}` '",
"f'does contain multiple available directories '",
"f'({objecttools.enumeration(directories)}).'",
")",
"raise",
"RuntimeError",
"(",
"f'{prefix} The default directory ({self.DEFAULTDIR}) '",
"f'is not among the available directories '",
"f'({objecttools.enumeration(directories)}).'",
")",
"return",
"self",
".",
"_currentdir"
] | Name of the current working directory containing the relevant files.
To show most of the functionality of |property|
|FileManager.currentdir| (unpacking zip files on the fly is
explained in the documentation on function
(|FileManager.zip_currentdir|), we first prepare a |FileManager|
object corresponding to the |FileManager.basepath|
`projectname/basename`:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> import os
>>> from hydpy import repr_, TestIO
>>> TestIO.clear()
>>> with TestIO():
... os.makedirs('projectname/basename')
... repr_(filemanager.basepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename'
At first, the base directory is empty and asking for the
current working directory results in the following error:
>>> with TestIO():
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: \
`.../projectname/basename` does not contain any available directories.
If only one directory exists, it is automatically considered the
current working directory:
>>> with TestIO():
... os.mkdir('projectname/basename/dir1')
... filemanager.currentdir
'dir1'
|property| |FileManager.currentdir| memorises the name of the
current working directory, even if another directory is later
added to the base path:
>>> with TestIO():
... os.mkdir('projectname/basename/dir2')
... filemanager.currentdir
'dir1'
Set the value of |FileManager.currentdir| to |None| to let it
forget the memorised directory. After that, asking for the
current working directory now results in another error, as
it is not clear which directory to select:
>>> with TestIO():
... filemanager.currentdir = None
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: \
`....../projectname/basename` does contain multiple available directories \
(dir1 and dir2).
Setting |FileManager.currentdir| manually solves the problem:
>>> with TestIO():
... filemanager.currentdir = 'dir1'
... filemanager.currentdir
'dir1'
Remove the current working directory `dir1` with the `del` statement:
>>> with TestIO():
... del filemanager.currentdir
... os.path.exists('projectname/basename/dir1')
False
|FileManager| subclasses can define a default directory name.
When many directories exist and none is selected manually, the
default directory is selected automatically. The following
example shows an error message due to multiple directories
without any having the default name:
>>> with TestIO():
... os.mkdir('projectname/basename/dir1')
... filemanager.DEFAULTDIR = 'dir3'
... del filemanager.currentdir
... filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
RuntimeError: The current working directory of the FileManager object \
has not been defined manually and cannot be determined automatically: The \
default directory (dir3) is not among the available directories (dir1 and dir2).
We can fix this by adding the required default directory manually:
>>> with TestIO():
... os.mkdir('projectname/basename/dir3')
... filemanager.currentdir
'dir3'
Setting the |FileManager.currentdir| to `dir4` not only overwrites
the default name, but also creates the required folder:
>>> with TestIO():
... filemanager.currentdir = 'dir4'
... filemanager.currentdir
'dir4'
>>> with TestIO():
... sorted(os.listdir('projectname/basename'))
['dir1', 'dir2', 'dir3', 'dir4']
Failed attempts to remove directories result in error messages
like the following one:
>>> import shutil
>>> from unittest.mock import patch
>>> with patch.object(shutil, 'rmtree', side_effect=AttributeError):
... with TestIO():
... del filemanager.currentdir # doctest: +ELLIPSIS
Traceback (most recent call last):
...
AttributeError: While trying to delete the current working directory \
`.../projectname/basename/dir4` of the FileManager object, the following \
error occurred: ...
Then, the current working directory still exists and is remembered
by |FileManager.currentdir|:
>>> with TestIO():
... filemanager.currentdir
'dir4'
>>> with TestIO():
... sorted(os.listdir('projectname/basename'))
['dir1', 'dir2', 'dir3', 'dir4'] | [
"Name",
"of",
"the",
"current",
"working",
"directory",
"containing",
"the",
"relevant",
"files",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L273-L435 | train |
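Stripped of the file-system handling and the detailed error messages, the selection logic in `currentdir` is a short decision chain: a single available directory wins, otherwise the default (if it is available), and every other case is an error. A sketch of just that chain (hypothetical helper; the real property also creates folders and caches its result):

```python
def resolve_currentdir(available, default=None):
    # One available directory: take it.
    if len(available) == 1:
        return available[0]
    # The default directory is available: take it.
    if default in available:
        return default
    # Everything else cannot be resolved automatically.
    if not available:
        raise RuntimeError('no available directories')
    if default is None:
        raise RuntimeError(
            'multiple available directories: ' + ', '.join(available))
    raise RuntimeError(f'default directory ({default}) is not available')
```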
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.currentpath | def currentpath(self) -> str:
"""Absolute path of the current working directory.
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... repr_(filemanager.currentpath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename/testdir'
"""
return os.path.join(self.basepath, self.currentdir) | python | def currentpath(self) -> str:
"""Absolute path of the current working directory.
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... repr_(filemanager.currentpath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename/testdir'
"""
return os.path.join(self.basepath, self.currentdir) | [
"def",
"currentpath",
"(",
"self",
")",
"->",
"str",
":",
"return",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"basepath",
",",
"self",
".",
"currentdir",
")"
] | Absolute path of the current working directory.
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... repr_(filemanager.currentpath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename/testdir' | [
"Absolute",
"path",
"of",
"the",
"current",
"working",
"directory",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L469-L482 | train |
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.filenames | def filenames(self) -> List[str]:
"""Names of the files contained in the the current working directory.
Files names starting with underscores are ignored:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... open('projectname/basename/testdir/file1.txt', 'w').close()
... open('projectname/basename/testdir/file2.npy', 'w').close()
... open('projectname/basename/testdir/_file1.nc', 'w').close()
... filemanager.filenames
['file1.txt', 'file2.npy']
"""
return sorted(
fn for fn in os.listdir(self.currentpath)
if not fn.startswith('_')) | python | def filenames(self) -> List[str]:
"""Names of the files contained in the the current working directory.
Files names starting with underscores are ignored:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... open('projectname/basename/testdir/file1.txt', 'w').close()
... open('projectname/basename/testdir/file2.npy', 'w').close()
... open('projectname/basename/testdir/_file1.nc', 'w').close()
... filemanager.filenames
['file1.txt', 'file2.npy']
"""
return sorted(
fn for fn in os.listdir(self.currentpath)
if not fn.startswith('_')) | [
"def",
"filenames",
"(",
"self",
")",
"->",
"List",
"[",
"str",
"]",
":",
"return",
"sorted",
"(",
"fn",
"for",
"fn",
"in",
"os",
".",
"listdir",
"(",
"self",
".",
"currentpath",
")",
"if",
"not",
"fn",
".",
"startswith",
"(",
"'_'",
")",
")"
] | Names of the files contained in the current working directory.
File names starting with underscores are ignored:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... open('projectname/basename/testdir/file1.txt', 'w').close()
... open('projectname/basename/testdir/file2.npy', 'w').close()
... open('projectname/basename/testdir/_file1.nc', 'w').close()
... filemanager.filenames
['file1.txt', 'file2.npy'] | [
"Names",
"of",
"the",
"files",
"contained",
"in",
"the",
"the",
"current",
"working",
"directory",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L485-L505 | train |
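`filenames` and the following `filepaths` form a pair: the first returns sorted names without the underscore-prefixed ones, the second prefixes each name with the working directory. A stand-alone sketch of both (hypothetical functions taking the directory path directly instead of reading it from a `FileManager`):

```python
import os

def filenames(currentpath):
    # Sorted directory entries, skipping names that start with '_'.
    return sorted(fn for fn in os.listdir(currentpath)
                  if not fn.startswith('_'))

def filepaths(currentpath):
    # Join each surviving name onto the working directory path.
    return [os.path.join(currentpath, name) for name in filenames(currentpath)]
```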
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.filepaths | def filepaths(self) -> List[str]:
"""Absolute path names of the files contained in the current
working directory.
File names starting with underscores are ignored:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... open('projectname/basename/testdir/file1.txt', 'w').close()
... open('projectname/basename/testdir/file2.npy', 'w').close()
... open('projectname/basename/testdir/_file1.nc', 'w').close()
... for filepath in filemanager.filepaths:
... repr_(filepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename/testdir/file1.txt'
'...hydpy/tests/iotesting/projectname/basename/testdir/file2.npy'
"""
path = self.currentpath
return [os.path.join(path, name) for name in self.filenames] | python | def filepaths(self) -> List[str]:
"""Absolute path names of the files contained in the current
working directory.
Files names starting with underscores are ignored:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... open('projectname/basename/testdir/file1.txt', 'w').close()
... open('projectname/basename/testdir/file2.npy', 'w').close()
... open('projectname/basename/testdir/_file1.nc', 'w').close()
... for filepath in filemanager.filepaths:
... repr_(filepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename/testdir/file1.txt'
'...hydpy/tests/iotesting/projectname/basename/testdir/file2.npy'
"""
path = self.currentpath
return [os.path.join(path, name) for name in self.filenames] | [
"def",
"filepaths",
"(",
"self",
")",
"->",
"List",
"[",
"str",
"]",
":",
"path",
"=",
"self",
".",
"currentpath",
"return",
"[",
"os",
".",
"path",
".",
"join",
"(",
"path",
",",
"name",
")",
"for",
"name",
"in",
"self",
".",
"filenames",
"]"
] | Absolute path names of the files contained in the current
working directory.
Files names starting with underscores are ignored:
>>> from hydpy.core.filetools import FileManager
>>> filemanager = FileManager()
>>> filemanager.BASEDIR = 'basename'
>>> filemanager.projectdir = 'projectname'
>>> from hydpy import repr_, TestIO
>>> with TestIO():
... filemanager.currentdir = 'testdir'
... open('projectname/basename/testdir/file1.txt', 'w').close()
... open('projectname/basename/testdir/file2.npy', 'w').close()
... open('projectname/basename/testdir/_file1.nc', 'w').close()
... for filepath in filemanager.filepaths:
... repr_(filepath) # doctest: +ELLIPSIS
'...hydpy/tests/iotesting/projectname/basename/testdir/file1.txt'
'...hydpy/tests/iotesting/projectname/basename/testdir/file2.npy' | [
"Absolute",
"path",
"names",
"of",
"the",
"files",
"contained",
"in",
"the",
"current",
"working",
"directory",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L508-L530 | train |
hydpy-dev/hydpy | hydpy/core/filetools.py | FileManager.zip_currentdir | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L532-L603 | train |

def zip_currentdir(self) -> None:
    """Pack the current working directory in a `zip` file.

    |FileManager| subclasses allow for manual packing and automatic
    unpacking of working directories. The only supported format is `zip`.
    To avoid possible inconsistencies, origin directories and zip
    files are removed after packing or unpacking, respectively.

    As an example scenario, we prepare a |FileManager| object with
    the current working directory `folder` containing the files
    `file1.txt` and `file2.txt`:

    >>> from hydpy.core.filetools import FileManager
    >>> filemanager = FileManager()
    >>> filemanager.BASEDIR = 'basename'
    >>> filemanager.projectdir = 'projectname'
    >>> import os
    >>> from hydpy import repr_, TestIO
    >>> TestIO.clear()
    >>> basepath = 'projectname/basename'
    >>> with TestIO():
    ...     os.makedirs(basepath)
    ...     filemanager.currentdir = 'folder'
    ...     open(f'{basepath}/folder/file1.txt', 'w').close()
    ...     open(f'{basepath}/folder/file2.txt', 'w').close()
    ...     filemanager.filenames
    ['file1.txt', 'file2.txt']

    The directories existing under the base path are identical
    with the ones returned by property |FileManager.availabledirs|:

    >>> with TestIO():
    ...     sorted(os.listdir(basepath))
    ...     filemanager.availabledirs  # doctest: +ELLIPSIS
    ['folder']
    Folder2Path(folder=.../projectname/basename/folder)

    After packing the current working directory manually, it is
    still counted as an available directory:

    >>> with TestIO():
    ...     filemanager.zip_currentdir()
    ...     sorted(os.listdir(basepath))
    ...     filemanager.availabledirs  # doctest: +ELLIPSIS
    ['folder.zip']
    Folder2Path(folder=.../projectname/basename/folder.zip)

    Instead of the complete directory, only the contained files
    are packed:

    >>> from zipfile import ZipFile
    >>> with TestIO():
    ...     with ZipFile('projectname/basename/folder.zip', 'r') as zp:
    ...         sorted(zp.namelist())
    ['file1.txt', 'file2.txt']

    The zip file is unpacked again, as soon as `folder` becomes
    the current working directory:

    >>> with TestIO():
    ...     filemanager.currentdir = 'folder'
    ...     sorted(os.listdir(basepath))
    ...     filemanager.availabledirs
    ...     filemanager.filenames  # doctest: +ELLIPSIS
    ['folder']
    Folder2Path(folder=.../projectname/basename/folder)
    ['file1.txt', 'file2.txt']
    """
    with zipfile.ZipFile(f'{self.currentpath}.zip', 'w') as zipfile_:
        for filepath, filename in zip(self.filepaths, self.filenames):
            zipfile_.write(filename=filepath, arcname=filename)
    del self.currentdir
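The packing logic of `zip_currentdir` — write each file flat under its bare name, then remove the origin directory — can be sketched with the standard library alone (the helper name and wrapper are illustrative, not HydPy API):

```python
import os
import shutil
import tempfile
import zipfile

def zip_directory_flat(dirpath: str) -> str:
    """Pack the files of `dirpath` into `<dirpath>.zip` under their bare
    names (no directory entries) and delete the origin directory — the
    same pack-then-remove behaviour as `zip_currentdir` above, reduced
    to the standard library."""
    zippath = f'{dirpath}.zip'
    with zipfile.ZipFile(zippath, 'w') as archive:
        for name in sorted(os.listdir(dirpath)):
            archive.write(filename=os.path.join(dirpath, name), arcname=name)
    shutil.rmtree(dirpath)
    return zippath

# Prepare a directory with two files, pack it, and inspect the result:
base = tempfile.mkdtemp()
folder = os.path.join(base, 'folder')
os.makedirs(folder)
for name in ('file1.txt', 'file2.txt'):
    open(os.path.join(folder, name), 'w').close()

zippath = zip_directory_flat(folder)
with zipfile.ZipFile(zippath) as archive:
    print(sorted(archive.namelist()))   # -> ['file1.txt', 'file2.txt']
print(os.path.exists(folder))           # -> False (origin removed)
```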
hydpy-dev/hydpy | hydpy/core/filetools.py | NetworkManager.load_files | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L731-L763 | train |

def load_files(self) -> selectiontools.Selections:
    """Read all network files of the current working directory, structure
    their contents in a |selectiontools.Selections| object, and return it.
    """
    devicetools.Node.clear_all()
    devicetools.Element.clear_all()
    selections = selectiontools.Selections()
    for (filename, path) in zip(self.filenames, self.filepaths):
        # Ensure both `Node` and `Element` start with a "fresh" memory.
        devicetools.Node.extract_new()
        devicetools.Element.extract_new()
        try:
            info = runpy.run_path(path)
        except BaseException:
            objecttools.augment_excmessage(
                f'While trying to load the network file `{path}`')
        try:
            node: devicetools.Node = info['Node']
            element: devicetools.Element = info['Element']
            selections += selectiontools.Selection(
                filename.split('.')[0],
                node.extract_new(),
                element.extract_new())
        except KeyError as exc:
            raise RuntimeError(
                f'The class {exc.args[0]} cannot be loaded from the '
                f'network file `{path}`.')
    selections += selectiontools.Selection(
        'complete',
        info['Node'].query_all(),
        info['Element'].query_all())
    return selections
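`load_files` relies on `runpy.run_path`, which executes a Python file and hands back its module namespace as a dict. A small stdlib-only demonstration of that mechanism (the file content and the `headwaters` name are invented for illustration):

```python
import os
import runpy
import tempfile

# Write a tiny stand-in "network file" and execute it the same way
# `load_files` executes real ones: `runpy.run_path` returns the module
# namespace as a plain dict, from which named objects are picked.
folder = tempfile.mkdtemp()
path = os.path.join(folder, 'headwaters.py')
with open(path, 'w') as file_:
    file_.write("Node = 'the Node class'\nElement = 'the Element class'\n")

info = runpy.run_path(path)
print(info['Node'])      # -> the Node class
print(info['Element'])   # -> the Element class

# The selection name is the file name without its extension:
print(os.path.basename(path).split('.')[0])   # -> headwaters
```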
hydpy-dev/hydpy | hydpy/core/filetools.py | NetworkManager.save_files | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L765-L779 | train |

def save_files(self, selections) -> None:
    """Save the |Selection| objects contained in the given |Selections|
    instance to separate network files."""
    try:
        currentpath = self.currentpath
        selections = selectiontools.Selections(selections)
        for selection in selections:
            if selection.name == 'complete':
                continue
            path = os.path.join(currentpath, selection.name + '.py')
            selection.save_networkfile(filepath=path)
    except BaseException:
        objecttools.augment_excmessage(
            'While trying to save selections `%s` into network files'
            % selections)
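`save_files` and its siblings delegate error reporting to `objecttools.augment_excmessage`, which re-raises the active exception with a context prefix. A rough stdlib sketch of that pattern (a simplification: the real HydPy helper handles exception types and formatting more carefully, and this version assumes the exception type accepts a single message argument):

```python
import sys

def augment_excmessage(prefix: str) -> None:
    """Re-raise the exception currently being handled with `prefix`
    prepended to its message — a simplified stand-in for HydPy's
    `objecttools.augment_excmessage`."""
    exc_type, exc_value, _ = sys.exc_info()
    raise exc_type(
        f'{prefix}, the following error occurred: {exc_value}'
    ) from exc_value

# The `save_files` pattern above: wrap the whole operation and attach
# context before re-raising.
try:
    try:
        int('not a number')
    except BaseException:
        augment_excmessage('While trying to save selections `headwaters`')
except ValueError as error:
    print(error)
```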
hydpy-dev/hydpy | hydpy/core/filetools.py | NetworkManager.delete_files | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L781-L797 | train |

def delete_files(self, selections) -> None:
    """Delete the network files corresponding to the given selections
    (e.g. a |list| of |str| objects or a |Selections| object)."""
    try:
        currentpath = self.currentpath
        for selection in selections:
            name = str(selection)
            if name == 'complete':
                continue
            if not name.endswith('.py'):
                name += '.py'
            path = os.path.join(currentpath, name)
            os.remove(path)
    except BaseException:
        objecttools.augment_excmessage(
            f'While trying to remove the network files of '
            f'selections `{selections}`')
hydpy-dev/hydpy | hydpy/core/filetools.py | ControlManager.load_file | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L810-L834 | train |

def load_file(self, element=None, filename=None, clear_registry=True):
    """Return the namespace of the given file (and possibly of its
    corresponding auxiliary subfiles) as a |dict|.

    By default, the internal registry is cleared when a control file and
    all its corresponding auxiliary files have been loaded. You can
    change this behaviour by passing `False` for the `clear_registry`
    argument. This might decrease model initialization times
    significantly. But then it is your own responsibility to call
    method |ControlManager.clear_registry| when necessary (before
    reloading a changed control file).
    """
    if not filename:
        filename = element.name
    type(self)._workingpath = self.currentpath
    info = {}
    if element:
        info['element'] = element
    try:
        self.read2dict(filename, info)
    finally:
        type(self)._workingpath = '.'
    if clear_registry:
        self._registry.clear()
    return info
hydpy-dev/hydpy | hydpy/core/filetools.py | ControlManager.read2dict | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L837-L865 | train |

def read2dict(cls, filename, info):
    """Read the control parameters from the given path (and its
    auxiliary paths, where appropriate) and store them in the given
    |dict| object `info`.

    Note that the |dict| `info` can be used to feed information
    into the execution of control files. Use this method only if you
    are completely sure about how the control parameter import of HydPy
    works. Otherwise, you should most probably prefer to use
    |ControlManager.load_file|.
    """
    if not filename.endswith('.py'):
        filename += '.py'
    path = os.path.join(cls._workingpath, filename)
    try:
        if path not in cls._registry:
            with open(path) as file_:
                cls._registry[path] = file_.read()
        exec(cls._registry[path], {}, info)
    except BaseException:
        objecttools.augment_excmessage(
            'While trying to load the control file `%s`'
            % path)
    if 'model' not in info:
        raise IOError(
            'Model parameters cannot be loaded from control file `%s`. '
            'Please refer to the HydPy documentation on how to prepare '
            'control files properly.'
            % path)
hydpy-dev/hydpy | hydpy/core/filetools.py | ControlManager.save_file | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L873-L880 | train |

def save_file(self, filename, text):
    """Save the given text under the given control filename and the
    current path."""
    if not filename.endswith('.py'):
        filename += '.py'
    path = os.path.join(self.currentpath, filename)
    with open(path, 'w', encoding="utf-8") as file_:
        file_.write(text)
hydpy-dev/hydpy | hydpy/core/filetools.py | ConditionManager.load_file | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L889-L913 | train |

def load_file(self, filename):
    """Read and return the content of the given file.

    If the current directory is not defined explicitly, the directory
    name is constructed with the actual simulation start date. If
    such a directory does not exist, it is created immediately.
    """
    _defaultdir = self.DEFAULTDIR
    try:
        if not filename.endswith('.py'):
            filename += '.py'
        try:
            self.DEFAULTDIR = (
                'init_' + hydpy.pub.timegrids.sim.firstdate.to_string('os'))
        except KeyError:
            pass
        filepath = os.path.join(self.currentpath, filename)
        with open(filepath) as file_:
            return file_.read()
    except BaseException:
        objecttools.augment_excmessage(
            'While trying to read the conditions file `%s`'
            % filename)
    finally:
        self.DEFAULTDIR = _defaultdir
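Both `ConditionManager` methods derive a default directory name by concatenating `'init_'` with a date rendered via `to_string('os')`. A sketch of that naming scheme (the underscore-separated date format is an assumption here, not necessarily the exact string HydPy produces):

```python
from datetime import datetime

def init_dirname(date: datetime) -> str:
    """Build a default condition-directory name such as
    `init_1996_01_01` from a simulation date, analogous to the
    `'init_' + ...to_string('os')` pattern above."""
    return f"init_{date.strftime('%Y_%m_%d')}"

print(init_dirname(datetime(1996, 1, 1)))   # -> init_1996_01_01
```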
hydpy-dev/hydpy | hydpy/core/filetools.py | ConditionManager.save_file | def save_file(self, filename, text):
"""Save the given text under the given condition filename and the
current path.
If the current directory is not defined explicitly, the directory
name is constructed with the actual simulation end date. If
such an directory does not exist, it is created immediately.
"""
_defaultdir = self.DEFAULTDIR
try:
if not filename.endswith('.py'):
filename += '.py'
try:
self.DEFAULTDIR = (
'init_' + hydpy.pub.timegrids.sim.lastdate.to_string('os'))
except AttributeError:
pass
path = os.path.join(self.currentpath, filename)
with open(path, 'w', encoding="utf-8") as file_:
file_.write(text)
except BaseException:
objecttools.augment_excmessage(
'While trying to write the conditions file `%s`'
% filename)
finally:
self.DEFAULTDIR = _defaultdir | python | def save_file(self, filename, text):
"""Save the given text under the given condition filename and the
current path.
If the current directory is not defined explicitly, the directory
name is constructed with the actual simulation end date. If
such a directory does not exist, it is created immediately.
"""
_defaultdir = self.DEFAULTDIR
try:
if not filename.endswith('.py'):
filename += '.py'
try:
self.DEFAULTDIR = (
'init_' + hydpy.pub.timegrids.sim.lastdate.to_string('os'))
except AttributeError:
pass
path = os.path.join(self.currentpath, filename)
with open(path, 'w', encoding="utf-8") as file_:
file_.write(text)
except BaseException:
objecttools.augment_excmessage(
'While trying to write the conditions file `%s`'
% filename)
finally:
self.DEFAULTDIR = _defaultdir | [
"def",
"save_file",
"(",
"self",
",",
"filename",
",",
"text",
")",
":",
"_defaultdir",
"=",
"self",
".",
"DEFAULTDIR",
"try",
":",
"if",
"not",
"filename",
".",
"endswith",
"(",
"'.py'",
")",
":",
"filename",
"+=",
"'.py'",
"try",
":",
"self",
".",
"DEFAULTDIR",
"=",
"(",
"'init_'",
"+",
"hydpy",
".",
"pub",
".",
"timegrids",
".",
"sim",
".",
"lastdate",
".",
"to_string",
"(",
"'os'",
")",
")",
"except",
"AttributeError",
":",
"pass",
"path",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"currentpath",
",",
"filename",
")",
"with",
"open",
"(",
"path",
",",
"'w'",
",",
"encoding",
"=",
"\"utf-8\"",
")",
"as",
"file_",
":",
"file_",
".",
"write",
"(",
"text",
")",
"except",
"BaseException",
":",
"objecttools",
".",
"augment_excmessage",
"(",
"'While trying to write the conditions file `%s`'",
"%",
"filename",
")",
"finally",
":",
"self",
".",
"DEFAULTDIR",
"=",
"_defaultdir"
] | Save the given text under the given condition filename and the
current path.
If the current directory is not defined explicitly, the directory
name is constructed with the actual simulation end date. If
such a directory does not exist, it is created immediately. | [
"Save",
"the",
"given",
"text",
"under",
"the",
"given",
"condition",
"filename",
"and",
"the",
"current",
"path",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L915-L940 | train |
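The save-then-restore pattern in `ConditionManager.save_file` (temporarily swapping `DEFAULTDIR` inside `try`/`finally` so the directory name is derived from the simulation end date) can be sketched outside the HydPy framework. The `DummyManager` class and its in-memory `written` mapping below are illustrative assumptions, not part of the library:

```python
class DummyManager:
    """Illustrative stand-in for hydpy's ConditionManager (assumption)."""

    DEFAULTDIR = 'init'

    def __init__(self):
        self.written = {}  # (directory, filename) -> text

    def save_file(self, filename, text, lastdate):
        # Remember the old default so it can be restored afterwards.
        _defaultdir = self.DEFAULTDIR
        try:
            if not filename.endswith('.py'):
                filename += '.py'
            # Derive the directory name from the simulation end date.
            self.DEFAULTDIR = 'init_' + lastdate
            self.written[(self.DEFAULTDIR, filename)] = text
        finally:
            # Restore the previous default, even if writing failed.
            self.DEFAULTDIR = _defaultdir
```

Because the restore happens in `finally`, `DEFAULTDIR` is reset even when writing raises an exception.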
hydpy-dev/hydpy | hydpy/core/filetools.py | SequenceManager.load_file | def load_file(self, sequence):
"""Load data from an "external" data file and pass it to
the given |IOSequence|."""
try:
if sequence.filetype_ext == 'npy':
sequence.series = sequence.adjust_series(
*self._load_npy(sequence))
elif sequence.filetype_ext == 'asc':
sequence.series = sequence.adjust_series(
*self._load_asc(sequence))
elif sequence.filetype_ext == 'nc':
self._load_nc(sequence)
except BaseException:
objecttools.augment_excmessage(
'While trying to load the external data of sequence %s'
% objecttools.devicephrase(sequence)) | python | def load_file(self, sequence):
"""Load data from an "external" data file and pass it to
the given |IOSequence|."""
try:
if sequence.filetype_ext == 'npy':
sequence.series = sequence.adjust_series(
*self._load_npy(sequence))
elif sequence.filetype_ext == 'asc':
sequence.series = sequence.adjust_series(
*self._load_asc(sequence))
elif sequence.filetype_ext == 'nc':
self._load_nc(sequence)
except BaseException:
objecttools.augment_excmessage(
'While trying to load the external data of sequence %s'
% objecttools.devicephrase(sequence)) | [
"def",
"load_file",
"(",
"self",
",",
"sequence",
")",
":",
"try",
":",
"if",
"sequence",
".",
"filetype_ext",
"==",
"'npy'",
":",
"sequence",
".",
"series",
"=",
"sequence",
".",
"adjust_series",
"(",
"*",
"self",
".",
"_load_npy",
"(",
"sequence",
")",
")",
"elif",
"sequence",
".",
"filetype_ext",
"==",
"'asc'",
":",
"sequence",
".",
"series",
"=",
"sequence",
".",
"adjust_series",
"(",
"*",
"self",
".",
"_load_asc",
"(",
"sequence",
")",
")",
"elif",
"sequence",
".",
"filetype_ext",
"==",
"'nc'",
":",
"self",
".",
"_load_nc",
"(",
"sequence",
")",
"except",
"BaseException",
":",
"objecttools",
".",
"augment_excmessage",
"(",
"'While trying to load the external data of sequence %s'",
"%",
"objecttools",
".",
"devicephrase",
"(",
"sequence",
")",
")"
] | Load data from an "external" data file and pass it to
the given |IOSequence|. | [
"Load",
"data",
"from",
"an",
"external",
"data",
"file",
"an",
"pass",
"it",
"to",
"the",
"given",
"|IOSequence|",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L1415-L1430 | train |
hydpy-dev/hydpy | hydpy/core/filetools.py | SequenceManager.save_file | def save_file(self, sequence, array=None):
"""Write the data stored in |IOSequence.series| of the given
|IOSequence| into an "external" data file. """
if array is None:
array = sequence.aggregate_series()
try:
if sequence.filetype_ext == 'nc':
self._save_nc(sequence, array)
else:
filepath = sequence.filepath_ext
if ((array is not None) and
(array.info['type'] != 'unmodified')):
filepath = (f'{filepath[:-4]}_{array.info["type"]}'
f'{filepath[-4:]}')
if not sequence.overwrite_ext and os.path.exists(filepath):
raise OSError(
f'Sequence {objecttools.devicephrase(sequence)} '
f'is not allowed to overwrite the existing file '
f'`{sequence.filepath_ext}`.')
if sequence.filetype_ext == 'npy':
self._save_npy(array, filepath)
elif sequence.filetype_ext == 'asc':
self._save_asc(array, filepath)
except BaseException:
objecttools.augment_excmessage(
'While trying to save the external data of sequence %s'
% objecttools.devicephrase(sequence)) | python | def save_file(self, sequence, array=None):
"""Write the data stored in |IOSequence.series| of the given
|IOSequence| into an "external" data file. """
if array is None:
array = sequence.aggregate_series()
try:
if sequence.filetype_ext == 'nc':
self._save_nc(sequence, array)
else:
filepath = sequence.filepath_ext
if ((array is not None) and
(array.info['type'] != 'unmodified')):
filepath = (f'{filepath[:-4]}_{array.info["type"]}'
f'{filepath[-4:]}')
if not sequence.overwrite_ext and os.path.exists(filepath):
raise OSError(
f'Sequence {objecttools.devicephrase(sequence)} '
f'is not allowed to overwrite the existing file '
f'`{sequence.filepath_ext}`.')
if sequence.filetype_ext == 'npy':
self._save_npy(array, filepath)
elif sequence.filetype_ext == 'asc':
self._save_asc(array, filepath)
except BaseException:
objecttools.augment_excmessage(
'While trying to save the external data of sequence %s'
% objecttools.devicephrase(sequence)) | [
"def",
"save_file",
"(",
"self",
",",
"sequence",
",",
"array",
"=",
"None",
")",
":",
"if",
"array",
"is",
"None",
":",
"array",
"=",
"sequence",
".",
"aggregate_series",
"(",
")",
"try",
":",
"if",
"sequence",
".",
"filetype_ext",
"==",
"'nc'",
":",
"self",
".",
"_save_nc",
"(",
"sequence",
",",
"array",
")",
"else",
":",
"filepath",
"=",
"sequence",
".",
"filepath_ext",
"if",
"(",
"(",
"array",
"is",
"not",
"None",
")",
"and",
"(",
"array",
".",
"info",
"[",
"'type'",
"]",
"!=",
"'unmodified'",
")",
")",
":",
"filepath",
"=",
"(",
"f'{filepath[:-4]}_{array.info[\"type\"]}'",
"f'{filepath[-4:]}'",
")",
"if",
"not",
"sequence",
".",
"overwrite_ext",
"and",
"os",
".",
"path",
".",
"exists",
"(",
"filepath",
")",
":",
"raise",
"OSError",
"(",
"f'Sequence {objecttools.devicephrase(sequence)} '",
"f'is not allowed to overwrite the existing file '",
"f'`{sequence.filepath_ext}`.'",
")",
"if",
"sequence",
".",
"filetype_ext",
"==",
"'npy'",
":",
"self",
".",
"_save_npy",
"(",
"array",
",",
"filepath",
")",
"elif",
"sequence",
".",
"filetype_ext",
"==",
"'asc'",
":",
"self",
".",
"_save_asc",
"(",
"array",
",",
"filepath",
")",
"except",
"BaseException",
":",
"objecttools",
".",
"augment_excmessage",
"(",
"'While trying to save the external data of sequence %s'",
"%",
"objecttools",
".",
"devicephrase",
"(",
"sequence",
")",
")"
] | Write the data stored in |IOSequence.series| of the given
|IOSequence| into an "external" data file. | [
"Write",
"the",
"data",
"stored",
"in",
"|IOSequence",
".",
"series|",
"of",
"the",
"given",
"|IOSequence|",
"into",
"an",
"external",
"data",
"file",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L1450-L1476 | train |
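The filepath manipulation in `SequenceManager.save_file` (inserting the aggregation type of the array before the three-character file ending, as in `f'{filepath[:-4]}_{array.info["type"]}{filepath[-4:]}'`) can be reproduced in isolation. The helper name `insert_info_suffix` is an assumption for illustration:

```python
def insert_info_suffix(filepath: str, info_type: str) -> str:
    """Insert an aggregation marker (e.g. 'mean') before the file ending,
    mirroring the suffix logic used for non-NetCDF files in save_file."""
    if info_type == 'unmodified':
        return filepath  # unmodified series keep their original name
    return f'{filepath[:-4]}_{info_type}{filepath[-4:]}'
```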
hydpy-dev/hydpy | hydpy/core/filetools.py | SequenceManager.open_netcdf_reader | def open_netcdf_reader(self, flatten=False, isolate=False, timeaxis=1):
"""Prepare a new |NetCDFInterface| object for reading data."""
self._netcdf_reader = netcdftools.NetCDFInterface(
flatten=bool(flatten),
isolate=bool(isolate),
timeaxis=int(timeaxis)) | python | def open_netcdf_reader(self, flatten=False, isolate=False, timeaxis=1):
"""Prepare a new |NetCDFInterface| object for reading data."""
self._netcdf_reader = netcdftools.NetCDFInterface(
flatten=bool(flatten),
isolate=bool(isolate),
timeaxis=int(timeaxis)) | [
"def",
"open_netcdf_reader",
"(",
"self",
",",
"flatten",
"=",
"False",
",",
"isolate",
"=",
"False",
",",
"timeaxis",
"=",
"1",
")",
":",
"self",
".",
"_netcdf_reader",
"=",
"netcdftools",
".",
"NetCDFInterface",
"(",
"flatten",
"=",
"bool",
"(",
"flatten",
")",
",",
"isolate",
"=",
"bool",
"(",
"isolate",
")",
",",
"timeaxis",
"=",
"int",
"(",
"timeaxis",
")",
")"
] | Prepare a new |NetCDFInterface| object for reading data. | [
"Prepare",
"a",
"new",
"|NetCDFInterface|",
"object",
"for",
"reading",
"data",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L1528-L1533 | train |
hydpy-dev/hydpy | hydpy/core/filetools.py | SequenceManager.open_netcdf_writer | def open_netcdf_writer(self, flatten=False, isolate=False, timeaxis=1):
"""Prepare a new |NetCDFInterface| object for writing data."""
self._netcdf_writer = netcdftools.NetCDFInterface(
flatten=bool(flatten),
isolate=bool(isolate),
timeaxis=int(timeaxis)) | python | def open_netcdf_writer(self, flatten=False, isolate=False, timeaxis=1):
"""Prepare a new |NetCDFInterface| object for writing data."""
self._netcdf_writer = netcdftools.NetCDFInterface(
flatten=bool(flatten),
isolate=bool(isolate),
timeaxis=int(timeaxis)) | [
"def",
"open_netcdf_writer",
"(",
"self",
",",
"flatten",
"=",
"False",
",",
"isolate",
"=",
"False",
",",
"timeaxis",
"=",
"1",
")",
":",
"self",
".",
"_netcdf_writer",
"=",
"netcdftools",
".",
"NetCDFInterface",
"(",
"flatten",
"=",
"bool",
"(",
"flatten",
")",
",",
"isolate",
"=",
"bool",
"(",
"isolate",
")",
",",
"timeaxis",
"=",
"int",
"(",
"timeaxis",
")",
")"
] | Prepare a new |NetCDFInterface| object for writing data. | [
"Prepare",
"a",
"new",
"|NetCDFInterface|",
"object",
"for",
"writing",
"data",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/core/filetools.py#L1572-L1577 | train |
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_nkor_v1 | def calc_nkor_v1(self):
"""Adjust the given precipitation values.
Required control parameters:
|NHRU|
|KG|
Required input sequence:
|Nied|
Calculated flux sequence:
|NKor|
Basic equation:
:math:`NKor = KG \\cdot Nied`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kg(0.8, 1.0, 1.2)
>>> inputs.nied = 10.0
>>> model.calc_nkor_v1()
>>> fluxes.nkor
nkor(8.0, 10.0, 12.0)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.nkor[k] = con.kg[k] * inp.nied | python | def calc_nkor_v1(self):
"""Adjust the given precipitation values.
Required control parameters:
|NHRU|
|KG|
Required input sequence:
|Nied|
Calculated flux sequence:
|NKor|
Basic equation:
:math:`NKor = KG \\cdot Nied`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kg(0.8, 1.0, 1.2)
>>> inputs.nied = 10.0
>>> model.calc_nkor_v1()
>>> fluxes.nkor
nkor(8.0, 10.0, 12.0)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.nkor[k] = con.kg[k] * inp.nied | [
"def",
"calc_nkor_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"inp",
"=",
"self",
".",
"sequences",
".",
"inputs",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"flu",
".",
"nkor",
"[",
"k",
"]",
"=",
"con",
".",
"kg",
"[",
"k",
"]",
"*",
"inp",
".",
"nied"
] | Adjust the given precipitation values.
Required control parameters:
|NHRU|
|KG|
Required input sequence:
|Nied|
Calculated flux sequence:
|NKor|
Basic equation:
:math:`NKor = KG \\cdot Nied`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kg(0.8, 1.0, 1.2)
>>> inputs.nied = 10.0
>>> model.calc_nkor_v1()
>>> fluxes.nkor
nkor(8.0, 10.0, 12.0) | [
"Adjust",
"the",
"given",
"precipitation",
"values",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L13-L44 | train |
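The basic equation of `calc_nkor_v1` (:math:`NKor = KG \cdot Nied`) amounts to a per-unit scaling of the measured precipitation. A framework-free sketch reproducing the doctest values above:

```python
def calc_nkor(kg, nied):
    """Adjust precipitation: nkor[k] = kg[k] * nied for each response unit."""
    return [kg_k * nied for kg_k in kg]
```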
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_tkor_v1 | def calc_tkor_v1(self):
"""Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.tkor[k] = con.kt[k] + inp.teml | python | def calc_tkor_v1(self):
"""Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.tkor[k] = con.kt[k] + inp.teml | [
"def",
"calc_tkor_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"inp",
"=",
"self",
".",
"sequences",
".",
"inputs",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"flu",
".",
"tkor",
"[",
"k",
"]",
"=",
"con",
".",
"kt",
"[",
"k",
"]",
"+",
"inp",
".",
"teml"
] | Adjust the given air temperature values.
Required control parameters:
|NHRU|
|KT|
Required input sequence:
|TemL|
Calculated flux sequence:
|TKor|
Basic equation:
:math:`TKor = KT + TemL`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(3)
>>> kt(-2.0, 0.0, 2.0)
>>> inputs.teml(1.)
>>> model.calc_tkor_v1()
>>> fluxes.tkor
tkor(-1.0, 1.0, 3.0) | [
"Adjust",
"the",
"given",
"air",
"temperature",
"values",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L47-L78 | train |
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_et0_v1 | def calc_et0_v1(self):
"""Calculate reference evapotranspiration after Turc-Wendling.
Required control parameters:
|NHRU|
|KE|
|KF|
|HNN|
Required input sequence:
|Glob|
Required flux sequence:
|TKor|
Calculated flux sequence:
|ET0|
Basic equation:
:math:`ET0 = KE \\cdot
\\frac{(8.64 \\cdot Glob+93 \\cdot KF) \\cdot (TKor+22)}
{165 \\cdot (TKor+123) \\cdot (1 + 0.00019 \\cdot min(HNN, 600))}`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(3)
>>> ke(1.1)
>>> kf(0.6)
>>> hnn(200.0, 600.0, 1000.0)
>>> inputs.glob = 200.0
>>> fluxes.tkor = 15.0
>>> model.calc_et0_v1()
>>> fluxes.et0
et0(3.07171, 2.86215, 2.86215)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.et0[k] = (con.ke[k]*(((8.64*inp.glob+93.*con.kf[k]) *
(flu.tkor[k]+22.)) /
(165.*(flu.tkor[k]+123.) *
(1.+0.00019*min(con.hnn[k], 600.))))) | python | def calc_et0_v1(self):
"""Calculate reference evapotranspiration after Turc-Wendling.
Required control parameters:
|NHRU|
|KE|
|KF|
|HNN|
Required input sequence:
|Glob|
Required flux sequence:
|TKor|
Calculated flux sequence:
|ET0|
Basic equation:
:math:`ET0 = KE \\cdot
\\frac{(8.64 \\cdot Glob+93 \\cdot KF) \\cdot (TKor+22)}
{165 \\cdot (TKor+123) \\cdot (1 + 0.00019 \\cdot min(HNN, 600))}`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(3)
>>> ke(1.1)
>>> kf(0.6)
>>> hnn(200.0, 600.0, 1000.0)
>>> inputs.glob = 200.0
>>> fluxes.tkor = 15.0
>>> model.calc_et0_v1()
>>> fluxes.et0
et0(3.07171, 2.86215, 2.86215)
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.et0[k] = (con.ke[k]*(((8.64*inp.glob+93.*con.kf[k]) *
(flu.tkor[k]+22.)) /
(165.*(flu.tkor[k]+123.) *
(1.+0.00019*min(con.hnn[k], 600.))))) | [
"def",
"calc_et0_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"inp",
"=",
"self",
".",
"sequences",
".",
"inputs",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"flu",
".",
"et0",
"[",
"k",
"]",
"=",
"(",
"con",
".",
"ke",
"[",
"k",
"]",
"*",
"(",
"(",
"(",
"8.64",
"*",
"inp",
".",
"glob",
"+",
"93.",
"*",
"con",
".",
"kf",
"[",
"k",
"]",
")",
"*",
"(",
"flu",
".",
"tkor",
"[",
"k",
"]",
"+",
"22.",
")",
")",
"/",
"(",
"165.",
"*",
"(",
"flu",
".",
"tkor",
"[",
"k",
"]",
"+",
"123.",
")",
"*",
"(",
"1.",
"+",
"0.00019",
"*",
"min",
"(",
"con",
".",
"hnn",
"[",
"k",
"]",
",",
"600.",
")",
")",
")",
")",
")"
] | Calculate reference evapotranspiration after Turc-Wendling.
Required control parameters:
|NHRU|
|KE|
|KF|
|HNN|
Required input sequence:
|Glob|
Required flux sequence:
|TKor|
Calculated flux sequence:
|ET0|
Basic equation:
:math:`ET0 = KE \\cdot
\\frac{(8.64 \\cdot Glob+93 \\cdot KF) \\cdot (TKor+22)}
{165 \\cdot (TKor+123) \\cdot (1 + 0.00019 \\cdot min(HNN, 600))}`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(3)
>>> ke(1.1)
>>> kf(0.6)
>>> hnn(200.0, 600.0, 1000.0)
>>> inputs.glob = 200.0
>>> fluxes.tkor = 15.0
>>> model.calc_et0_v1()
>>> fluxes.et0
et0(3.07171, 2.86215, 2.86215) | [
"Calculate",
"reference",
"evapotranspiration",
"after",
"Turc",
"-",
"Wendling",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L81-L126 | train |
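The Turc-Wendling equation of `calc_et0_v1` can be checked against the doctest values with a plain Python function; this is a sketch for a single response unit that ignores HydPy's parameter/simulation time-step handling:

```python
def calc_et0(ke, kf, hnn, glob, tkor):
    """Reference evapotranspiration after Turc-Wendling (single unit)."""
    return (ke * ((8.64 * glob + 93.0 * kf) * (tkor + 22.0)) /
            (165.0 * (tkor + 123.0) * (1.0 + 0.00019 * min(hnn, 600.0))))
```

For `hnn` above 600 m the elevation term is capped, which is why the second and third response units of the doctest yield identical values.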
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_et0_wet0_v1 | def calc_et0_wet0_v1(self):
"""Correct the given reference evapotranspiration and update the
corresponding log sequence.
Required control parameters:
|NHRU|
|KE|
|WfET0|
Required input sequence:
|PET|
Calculated flux sequence:
|ET0|
Updated log sequence:
|WET0|
Basic equations:
:math:`ET0_{new} = WfET0 \\cdot KE \\cdot PET +
(1-WfET0) \\cdot ET0_{alt}`
Example:
Prepare four hydrological response units with different value
combinations of parameters |KE| and |WfET0|:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(4)
>>> ke(0.8, 1.2, 0.8, 1.2)
>>> wfet0(2.0, 2.0, 0.2, 0.2)
Note that the actual value of the time-dependent parameter |WfET0|
is reduced due to the difference between the given parameter and
simulation time steps:
>>> from hydpy import round_
>>> round_(wfet0.values)
1.0, 1.0, 0.1, 0.1
For the first two hydrological response units, the given |PET|
value is modified by -0.4 mm and +0.4 mm, respectively. For the
other two response units, which weight the "new" evaporation
value with 10 %, |ET0| does deviate from the old value of |WET0|
by -0.04 mm and +0.04 mm only:
>>> inputs.pet = 2.0
>>> logs.wet0 = 2.0
>>> model.calc_et0_wet0_v1()
>>> fluxes.et0
et0(1.6, 2.4, 1.96, 2.04)
>>> logs.wet0
wet0([[1.6, 2.4, 1.96, 2.04]])
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for k in range(con.nhru):
flu.et0[k] = (con.wfet0[k]*con.ke[k]*inp.pet +
(1.-con.wfet0[k])*log.wet0[0, k])
log.wet0[0, k] = flu.et0[k] | python | def calc_et0_wet0_v1(self):
"""Correct the given reference evapotranspiration and update the
corresponding log sequence.
Required control parameters:
|NHRU|
|KE|
|WfET0|
Required input sequence:
|PET|
Calculated flux sequence:
|ET0|
Updated log sequence:
|WET0|
Basic equations:
:math:`ET0_{new} = WfET0 \\cdot KE \\cdot PET +
(1-WfET0) \\cdot ET0_{alt}`
Example:
Prepare four hydrological response units with different value
combinations of parameters |KE| and |WfET0|:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(4)
>>> ke(0.8, 1.2, 0.8, 1.2)
>>> wfet0(2.0, 2.0, 0.2, 0.2)
Note that the actual value of the time-dependent parameter |WfET0|
is reduced due to the difference between the given parameter and
simulation time steps:
>>> from hydpy import round_
>>> round_(wfet0.values)
1.0, 1.0, 0.1, 0.1
For the first two hydrological response units, the given |PET|
value is modified by -0.4 mm and +0.4 mm, respectively. For the
other two response units, which weight the "new" evaporation
value with 10 %, |ET0| does deviate from the old value of |WET0|
by -0.04 mm and +0.04 mm only:
>>> inputs.pet = 2.0
>>> logs.wet0 = 2.0
>>> model.calc_et0_wet0_v1()
>>> fluxes.et0
et0(1.6, 2.4, 1.96, 2.04)
>>> logs.wet0
wet0([[1.6, 2.4, 1.96, 2.04]])
"""
con = self.parameters.control.fastaccess
inp = self.sequences.inputs.fastaccess
flu = self.sequences.fluxes.fastaccess
log = self.sequences.logs.fastaccess
for k in range(con.nhru):
flu.et0[k] = (con.wfet0[k]*con.ke[k]*inp.pet +
(1.-con.wfet0[k])*log.wet0[0, k])
log.wet0[0, k] = flu.et0[k] | [
"def",
"calc_et0_wet0_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"inp",
"=",
"self",
".",
"sequences",
".",
"inputs",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"log",
"=",
"self",
".",
"sequences",
".",
"logs",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"flu",
".",
"et0",
"[",
"k",
"]",
"=",
"(",
"con",
".",
"wfet0",
"[",
"k",
"]",
"*",
"con",
".",
"ke",
"[",
"k",
"]",
"*",
"inp",
".",
"pet",
"+",
"(",
"1.",
"-",
"con",
".",
"wfet0",
"[",
"k",
"]",
")",
"*",
"log",
".",
"wet0",
"[",
"0",
",",
"k",
"]",
")",
"log",
".",
"wet0",
"[",
"0",
",",
"k",
"]",
"=",
"flu",
".",
"et0",
"[",
"k",
"]"
] | Correct the given reference evapotranspiration and update the
corresponding log sequence.
Required control parameters:
|NHRU|
|KE|
|WfET0|
Required input sequence:
|PET|
Calculated flux sequence:
|ET0|
Updated log sequence:
|WET0|
Basic equations:
:math:`ET0_{new} = WfET0 \\cdot KE \\cdot PET +
(1-WfET0) \\cdot ET0_{alt}`
Example:
Prepare four hydrological response units with different value
combinations of parameters |KE| and |WfET0|:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(4)
>>> ke(0.8, 1.2, 0.8, 1.2)
>>> wfet0(2.0, 2.0, 0.2, 0.2)
Note that the actual value of the time-dependent parameter |WfET0|
is reduced due to the difference between the given parameter and
simulation time steps:
>>> from hydpy import round_
>>> round_(wfet0.values)
1.0, 1.0, 0.1, 0.1
For the first two hydrological response units, the given |PET|
value is modified by -0.4 mm and +0.4 mm, respectively. For the
other two response units, which weight the "new" evaporation
value with 10 %, |ET0| does deviate from the old value of |WET0|
by -0.04 mm and +0.04 mm only:
>>> inputs.pet = 2.0
>>> logs.wet0 = 2.0
>>> model.calc_et0_wet0_v1()
>>> fluxes.et0
et0(1.6, 2.4, 1.96, 2.04)
>>> logs.wet0
wet0([[1.6, 2.4, 1.96, 2.04]]) | [
"Correct",
"the",
"given",
"reference",
"evapotranspiration",
"and",
"update",
"the",
"corresponding",
"log",
"sequence",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L129-L192 | train |
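The update rule of `calc_et0_wet0_v1` is an exponential smoothing of the corrected potential evaporation: the new estimate is a weighted mean of `KE * PET` and the previously logged value `WET0`. A minimal sketch reproducing the doctest numbers:

```python
def update_et0(wfet0, ke, pet, wet0_old):
    """ET0_new = WfET0 * KE * PET + (1 - WfET0) * ET0_old."""
    et0 = wfet0 * ke * pet + (1.0 - wfet0) * wet0_old
    return et0  # this value also becomes the new log entry WET0
```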
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_evpo_v1 | def calc_evpo_v1(self):
"""Calculate land use and month specific values of potential
evapotranspiration.
Required control parameters:
|NHRU|
|Lnk|
|FLn|
Required derived parameter:
|MOY|
Required flux sequence:
|ET0|
Calculated flux sequence:
|EvPo|
Additional requirements:
|Model.idx_sim|
Basic equation:
:math:`EvPo = FLn \\cdot ET0`
Example:
For clarity, this is more of an integration example.
Parameter |FLn| depends both on time (the actual month) and space
(the actual land use). Firstly, let us define an initialization
time period spanning the transition from June to July:
>>> from hydpy import pub
>>> pub.timegrids = '30.06.2000', '02.07.2000', '1d'
Secondly, assume that the considered subbasin is differentiated into
two HRUs, one primarily consisting of arable land and the other
one of deciduous forests:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(2)
>>> lnk(ACKER, LAUBW)
Thirdly, set the |FLn|
values for the relevant months and land use classes:
>>> fln.acker_jun = 1.299
>>> fln.acker_jul = 1.304
>>> fln.laubw_jun = 1.350
>>> fln.laubw_jul = 1.365
Fourthly, the index array connecting the simulation time steps
defined above and the month indexes (0...11) can be retrieved
from the |pub| module. This can be done manually, but it is more
convenient to use its update method:
>>> derived.moy.update()
>>> derived.moy
moy(5, 6)
Finally, the actual method (with its simple equation) is applied
as usual:
>>> fluxes.et0 = 2.0
>>> model.idx_sim = 0
>>> model.calc_evpo_v1()
>>> fluxes.evpo
evpo(2.598, 2.7)
>>> model.idx_sim = 1
>>> model.calc_evpo_v1()
>>> fluxes.evpo
evpo(2.608, 2.73)
Reset module |pub| to not interfere with the following examples:
>>> del pub.timegrids
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.evpo[k] = con.fln[con.lnk[k]-1, der.moy[self.idx_sim]] * flu.et0[k] | python | def calc_evpo_v1(self):
"""Calculate land use and month specific values of potential
evapotranspiration.
Required control parameters:
|NHRU|
|Lnk|
|FLn|
Required derived parameter:
|MOY|
Required flux sequence:
|ET0|
Calculated flux sequence:
|EvPo|
Additional requirements:
|Model.idx_sim|
Basic equation:
:math:`EvPo = FLn \\cdot ET0`
Example:
For clarity, this is more of an integration example.
Parameter |FLn| depends both on time (the actual month) and space
(the actual land use). Firstly, let us define an initialization
time period spanning the transition from June to July:
>>> from hydpy import pub
>>> pub.timegrids = '30.06.2000', '02.07.2000', '1d'
Secondly, assume that the considered subbasin is differentiated into
two HRUs, one primarily consisting of arable land and the other
one of deciduous forests:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(2)
>>> lnk(ACKER, LAUBW)
Thirdly, set the |FLn|
values for the relevant months and land use classes:
>>> fln.acker_jun = 1.299
>>> fln.acker_jul = 1.304
>>> fln.laubw_jun = 1.350
>>> fln.laubw_jul = 1.365
Fourthly, the index array connecting the simulation time steps
defined above and the month indexes (0...11) can be retrieved
from the |pub| module. This can be done manually, but it is more
convenient to use its update method:
>>> derived.moy.update()
>>> derived.moy
moy(5, 6)
Finally, the actual method (with its simple equation) is applied
as usual:
>>> fluxes.et0 = 2.0
>>> model.idx_sim = 0
>>> model.calc_evpo_v1()
>>> fluxes.evpo
evpo(2.598, 2.7)
>>> model.idx_sim = 1
>>> model.calc_evpo_v1()
>>> fluxes.evpo
evpo(2.608, 2.73)
Reset module |pub| to not interfere with the following examples:
>>> del pub.timegrids
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
flu.evpo[k] = con.fln[con.lnk[k]-1, der.moy[self.idx_sim]] * flu.et0[k] | [
"def",
"calc_evpo_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"der",
"=",
"self",
".",
"parameters",
".",
"derived",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"flu",
".",
"evpo",
"[",
"k",
"]",
"=",
"con",
".",
"fln",
"[",
"con",
".",
"lnk",
"[",
"k",
"]",
"-",
"1",
",",
"der",
".",
"moy",
"[",
"self",
".",
"idx_sim",
"]",
"]",
"*",
"flu",
".",
"et0",
"[",
"k",
"]"
] | Calculate land use and month specific values of potential
evapotranspiration.
Required control parameters:
|NHRU|
|Lnk|
|FLn|
Required derived parameter:
|MOY|
Required flux sequence:
|ET0|
Calculated flux sequence:
|EvPo|
Additional requirements:
|Model.idx_sim|
Basic equation:
:math:`EvPo = FLn \\cdot ET0`
Example:
For clarity, this is more of a kind of an integration example.
Parameter |FLn| both depends on time (the actual month) and space
(the actual land use). Firstly, let us define a initialization
time period spanning the transition from June to July:
>>> from hydpy import pub
>>> pub.timegrids = '30.06.2000', '02.07.2000', '1d'
Secondly, assume that the considered subbasin is differentiated into
two HRUs, one primarily consisting of arable land and the other
one of deciduous forests:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(2)
>>> lnk(ACKER, LAUBW)
Thirdly, set the |FLn|
values for the relevant months and land use classes:
>>> fln.acker_jun = 1.299
>>> fln.acker_jul = 1.304
>>> fln.laubw_jun = 1.350
>>> fln.laubw_jul = 1.365
Fourthly, the index array connecting the simulation time steps
defined above and the month indexes (0...11) can be retrieved
from the |pub| module. This can be done manually, but it is more
convenient to use its update method:
>>> derived.moy.update()
>>> derived.moy
moy(5, 6)
Finally, the actual method (with its simple equation) is applied
as usual:
>>> fluxes.et0 = 2.0
>>> model.idx_sim = 0
>>> model.calc_evpo_v1()
>>> fluxes.evpo
evpo(2.598, 2.7)
>>> model.idx_sim = 1
>>> model.calc_evpo_v1()
>>> fluxes.evpo
evpo(2.608, 2.73)
Reset module |pub| to not interfere with the following examples:
>>> del pub.timegrids | [
"Calculate",
"land",
"use",
"and",
"month",
"specific",
"values",
"of",
"potential",
"evapotranspiration",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L195-L276 | train |
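`calc_evpo_v1` is a simple table lookup followed by a multiplication: |FLn| is indexed by land-use class and month and scales |ET0|. A sketch with the doctest's values; the dictionary keys below are illustrative, whereas HydPy itself uses integer indices via `con.lnk` and `der.moy`:

```python
# Land-use/month factors taken from the doctest above (assumed layout).
FLN = {
    ('acker', 'jun'): 1.299, ('acker', 'jul'): 1.304,
    ('laubw', 'jun'): 1.350, ('laubw', 'jul'): 1.365,
}

def calc_evpo(lnk, month, et0):
    """EvPo = FLn[lnk, month] * ET0."""
    return FLN[(lnk, month)] * et0
```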
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_nbes_inzp_v1 | def calc_nbes_inzp_v1(self):
"""Calculate stand precipitation and update the interception storage
accordingly.
Required control parameters:
|NHRU|
|Lnk|
Required derived parameter:
|KInz|
Required flux sequence:
|NKor|
Calculated flux sequence:
|NBes|
Updated state sequence:
|Inzp|
Additional requirements:
|Model.idx_sim|
Basic equation:
:math:`NBes = \\Bigl \\lbrace
{
{PKor \\ | \\ Inzp = KInz}
\\atop
{0 \\ | \\ Inzp < KInz}
}`
Examples:
Initialize five HRUs with different land usages:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(SIED_D, FEUCHT, GLETS, FLUSS, SEE)
Define |KInz| values for July for the selected land usages directly:
>>> derived.kinz.sied_d_jul = 2.0
>>> derived.kinz.feucht_jul = 1.0
>>> derived.kinz.glets_jul = 0.0
>>> derived.kinz.fluss_jul = 1.0
>>> derived.kinz.see_jul = 1.0
Now we prepare a |MOY| object that assumes that the first, second,
and third simulation time steps are in June, July, and August
respectively (we make use of the value defined above for July, but
setting the values of parameter |MOY| this way allows for a more
rigorous testing of proper indexing):
>>> derived.moy.shape = 3
>>> derived.moy = 5, 6, 7
>>> model.idx_sim = 1
The dense settlement (|SIED_D|) and the wetland area (|FEUCHT|)
start with an initial interception storage of 1/2 mm, the glacier
(|GLETS|) with 0 mm, and both water areas (|FLUSS| and |SEE|)
with 1 mm. In the first example, actual precipitation
is 1 mm:
>>> states.inzp = 0.5, 0.5, 0.0, 1.0, 1.0
>>> fluxes.nkor = 1.0
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(1.5, 1.0, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.0, 0.5, 1.0, 0.0, 0.0)
Only for the settled area, interception capacity is not exceeded,
meaning no stand precipitation occurs. Note that it is common to
define zero interception capacities for glacier areas, but not
mandatory. Also note that the |KInz|, |Inzp| and |NKor| values
given for both water areas are ignored completely, and |Inzp|
and |NBes| are simply set to zero.
If there is no precipitation, there is of course also no stand
precipitation and interception storage remains unchanged:
>>> states.inzp = 0.5, 0.5, 0.0, 0.0, 0.0
>>> fluxes.nkor = 0.
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(0.5, 0.5, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.0, 0.0, 0.0, 0.0, 0.0)
Interception capacities change discontinuously between consecutive
months. This can result in little stand precipitation events in
periods without precipitation:
>>> states.inzp = 1.0, 0.0, 0.0, 0.0, 0.0
>>> derived.kinz.sied_d_jul = 0.6
>>> fluxes.nkor = 0.0
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(0.6, 0.0, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.4, 0.0, 0.0, 0.0, 0.0)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.nbes[k] = 0.
sta.inzp[k] = 0.
else:
flu.nbes[k] = \
max(flu.nkor[k]+sta.inzp[k] -
der.kinz[con.lnk[k]-1, der.moy[self.idx_sim]], 0.)
sta.inzp[k] += flu.nkor[k]-flu.nbes[k] | python | def calc_nbes_inzp_v1(self):
"""Calculate stand precipitation and update the interception storage
accordingly.
Required control parameters:
|NHRU|
|Lnk|
Required derived parameter:
|KInz|
Required flux sequence:
|NKor|
Calculated flux sequence:
|NBes|
Updated state sequence:
|Inzp|
Additional requirements:
|Model.idx_sim|
Basic equation:
:math:`NBes = \\Bigl \\lbrace
{
{PKor \\ | \\ Inzp = KInz}
\\atop
{0 \\ | \\ Inzp < KInz}
}`
Examples:
Initialize five HRUs with different land usages:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(SIED_D, FEUCHT, GLETS, FLUSS, SEE)
Define |KInz| values for July the selected land usages directly:
>>> derived.kinz.sied_d_jul = 2.0
>>> derived.kinz.feucht_jul = 1.0
>>> derived.kinz.glets_jul = 0.0
>>> derived.kinz.fluss_jul = 1.0
>>> derived.kinz.see_jul = 1.0
Now we prepare a |MOY| object that assumes that the first, second,
and third simulation time steps are in June, July, and August
respectively (we make use of the value defined above for July, but
setting the values of parameter |MOY| this way allows for a more
rigorous testing of proper indexing):
>>> derived.moy.shape = 3
>>> derived.moy = 5, 6, 7
>>> model.idx_sim = 1
The dense settlement (|SIED_D|) and the wetland area (|FEUCHT|)
start with an initial interception storage of 1/2 mm, the glacier
(|GLETS|) with 0 mm, and both water areas (|FLUSS| and |SEE|)
with 1 mm. In the first example, actual precipitation
is 1 mm:
>>> states.inzp = 0.5, 0.5, 0.0, 1.0, 1.0
>>> fluxes.nkor = 1.0
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(1.5, 1.0, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.0, 0.5, 1.0, 0.0, 0.0)
Only for the settled area, interception capacity is not exceeded,
meaning no stand precipitation occurs. Note that it is common to
define zero interception capacities for glacier areas, but not
mandatory. Also note that the |KInz|, |Inzp| and |NKor| values
given for both water areas are ignored completely, and |Inzp|
and |NBes| are simply set to zero.
If there is no precipitation, there is of course also no stand
precipitation and interception storage remains unchanged:
>>> states.inzp = 0.5, 0.5, 0.0, 0.0, 0.0
>>> fluxes.nkor = 0.
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(0.5, 0.5, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.0, 0.0, 0.0, 0.0, 0.0)
Interception capacities change discontinuously between consecutive
months. This can result in little stand precipitation events in
periods without precipitation:
>>> states.inzp = 1.0, 0.0, 0.0, 0.0, 0.0
>>> derived.kinz.sied_d_jul = 0.6
>>> fluxes.nkor = 0.0
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(0.6, 0.0, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.4, 0.0, 0.0, 0.0, 0.0)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.nbes[k] = 0.
sta.inzp[k] = 0.
else:
flu.nbes[k] = \
max(flu.nkor[k]+sta.inzp[k] -
der.kinz[con.lnk[k]-1, der.moy[self.idx_sim]], 0.)
sta.inzp[k] += flu.nkor[k]-flu.nbes[k] | [
"def",
"calc_nbes_inzp_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"der",
"=",
"self",
".",
"parameters",
".",
"derived",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"in",
"(",
"WASSER",
",",
"FLUSS",
",",
"SEE",
")",
":",
"flu",
".",
"nbes",
"[",
"k",
"]",
"=",
"0.",
"sta",
".",
"inzp",
"[",
"k",
"]",
"=",
"0.",
"else",
":",
"flu",
".",
"nbes",
"[",
"k",
"]",
"=",
"max",
"(",
"flu",
".",
"nkor",
"[",
"k",
"]",
"+",
"sta",
".",
"inzp",
"[",
"k",
"]",
"-",
"der",
".",
"kinz",
"[",
"con",
".",
"lnk",
"[",
"k",
"]",
"-",
"1",
",",
"der",
".",
"moy",
"[",
"self",
".",
"idx_sim",
"]",
"]",
",",
"0.",
")",
"sta",
".",
"inzp",
"[",
"k",
"]",
"+=",
"flu",
".",
"nkor",
"[",
"k",
"]",
"-",
"flu",
".",
"nbes",
"[",
"k",
"]"
] | Calculate stand precipitation and update the interception storage
accordingly.
Required control parameters:
|NHRU|
|Lnk|
Required derived parameter:
|KInz|
Required flux sequence:
|NKor|
Calculated flux sequence:
|NBes|
Updated state sequence:
|Inzp|
Additional requirements:
|Model.idx_sim|
Basic equation:
:math:`NBes = \\Bigl \\lbrace
{
{PKor \\ | \\ Inzp = KInz}
\\atop
{0 \\ | \\ Inzp < KInz}
}`
Examples:
Initialize five HRUs with different land usages:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(SIED_D, FEUCHT, GLETS, FLUSS, SEE)
Define |KInz| values for July the selected land usages directly:
>>> derived.kinz.sied_d_jul = 2.0
>>> derived.kinz.feucht_jul = 1.0
>>> derived.kinz.glets_jul = 0.0
>>> derived.kinz.fluss_jul = 1.0
>>> derived.kinz.see_jul = 1.0
Now we prepare a |MOY| object that assumes that the first, second,
and third simulation time steps are in June, July, and August
respectively (we make use of the value defined above for July, but
setting the values of parameter |MOY| this way allows for a more
rigorous testing of proper indexing):
>>> derived.moy.shape = 3
>>> derived.moy = 5, 6, 7
>>> model.idx_sim = 1
The dense settlement (|SIED_D|) and the wetland area (|FEUCHT|)
start with an initial interception storage of 1/2 mm, the glacier
(|GLETS|) with 0 mm, and both water areas (|FLUSS| and |SEE|)
with 1 mm. In the first example, actual precipitation
is 1 mm:
>>> states.inzp = 0.5, 0.5, 0.0, 1.0, 1.0
>>> fluxes.nkor = 1.0
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(1.5, 1.0, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.0, 0.5, 1.0, 0.0, 0.0)
Only for the settled area, interception capacity is not exceeded,
meaning no stand precipitation occurs. Note that it is common to
define zero interception capacities for glacier areas, but not
mandatory. Also note that the |KInz|, |Inzp| and |NKor| values
given for both water areas are ignored completely, and |Inzp|
and |NBes| are simply set to zero.
If there is no precipitation, there is of course also no stand
precipitation and interception storage remains unchanged:
>>> states.inzp = 0.5, 0.5, 0.0, 0.0, 0.0
>>> fluxes.nkor = 0.
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(0.5, 0.5, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.0, 0.0, 0.0, 0.0, 0.0)
Interception capacities change discontinuously between consecutive
months. This can result in little stand precipitation events in
periods without precipitation:
>>> states.inzp = 1.0, 0.0, 0.0, 0.0, 0.0
>>> derived.kinz.sied_d_jul = 0.6
>>> fluxes.nkor = 0.0
>>> model.calc_nbes_inzp_v1()
>>> states.inzp
inzp(0.6, 0.0, 0.0, 0.0, 0.0)
>>> fluxes.nbes
nbes(0.4, 0.0, 0.0, 0.0, 0.0) | [
"Calculate",
"stand",
"precipitation",
"and",
"update",
"the",
"interception",
"storage",
"accordingly",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L279-L394 | train |
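The per-HRU logic of `calc_nbes_inzp_v1` reduces to a small scalar update rule. The following standalone sketch uses hypothetical plain-Python names rather than the hydpy fastaccess objects:

```python
def calc_nbes_inzp(nkor, inzp, kinz, water_area=False):
    """Return (stand precipitation, updated interception storage).

    Scalar version of the per-HRU loop: water areas (WASSER, FLUSS,
    SEE) get zero stand precipitation and an empty storage; elsewhere,
    precipitation exceeding the free interception capacity becomes
    stand precipitation.
    """
    if water_area:
        return 0.0, 0.0
    nbes = max(nkor + inzp - kinz, 0.0)
    return nbes, inzp + nkor - nbes

# Values of the first doctest example (July capacities):
sied_d = calc_nbes_inzp(1.0, 0.5, 2.0)   # capacity not exceeded
feucht = calc_nbes_inzp(1.0, 0.5, 1.0)   # 0.5 mm stand precipitation
glets = calc_nbes_inzp(1.0, 0.0, 0.0)    # everything passes through
```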
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_evi_inzp_v1 | def calc_evi_inzp_v1(self):
"""Calculate interception evaporation and update the interception
storage accordingly.
Required control parameters:
|NHRU|
|Lnk|
|TRefT|
|TRefN|
Required flux sequence:
|EvPo|
Calculated flux sequence:
|EvI|
Updated state sequence:
|Inzp|
Basic equation:
:math:`EvI = \\Bigl \\lbrace
{
{EvPo \\ | \\ Inzp > 0}
\\atop
{0 \\ | \\ Inzp = 0}
}`
Examples:
Initialize five HRUs with different combinations of land usage
and initial interception storage and apply a value of potential
evaporation of 3 mm on each one:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER)
>>> states.inzp = 2.0, 2.0, 0.0, 2.0, 4.0
>>> fluxes.evpo = 3.0
>>> model.calc_evi_inzp_v1()
>>> states.inzp
inzp(0.0, 0.0, 0.0, 0.0, 1.0)
>>> fluxes.evi
evi(3.0, 3.0, 0.0, 2.0, 3.0)
For arable land (|ACKER|) and most other land types, interception
evaporation (|EvI|) is identical with potential evapotranspiration
(|EvPo|), as long as it can be met by the available intercepted water
(|Inzp|). For water areas (|FLUSS| and |SEE|) only, |EvI| is
generally equal to |EvPo| (but this might be corrected by a method
called after |calc_evi_inzp_v1| has been applied) and |Inzp| is
set to zero.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.evi[k] = flu.evpo[k]
sta.inzp[k] = 0.
else:
flu.evi[k] = min(flu.evpo[k], sta.inzp[k])
sta.inzp[k] -= flu.evi[k] | python | def calc_evi_inzp_v1(self):
"""Calculate interception evaporation and update the interception
storage accordingly.
Required control parameters:
|NHRU|
|Lnk|
|TRefT|
|TRefN|
Required flux sequence:
|EvPo|
Calculated flux sequence:
|EvI|
Updated state sequence:
|Inzp|
Basic equation:
:math:`EvI = \\Bigl \\lbrace
{
{EvPo \\ | \\ Inzp > 0}
\\atop
{0 \\ | \\ Inzp = 0}
}`
Examples:
Initialize five HRUs with different combinations of land usage
and initial interception storage and apply a value of potential
evaporation of 3 mm on each one:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER)
>>> states.inzp = 2.0, 2.0, 0.0, 2.0, 4.0
>>> fluxes.evpo = 3.0
>>> model.calc_evi_inzp_v1()
>>> states.inzp
inzp(0.0, 0.0, 0.0, 0.0, 1.0)
>>> fluxes.evi
evi(3.0, 3.0, 0.0, 2.0, 3.0)
For arable land (|ACKER|) and most other land types, interception
evaporation (|EvI|) is identical with potential evapotranspiration
(|EvPo|), as long as it can be met by the available intercepted water
(|Inzp|). For water areas (|FLUSS| and |SEE|) only, |EvI| is
generally equal to |EvPo| (but this might be corrected by a method
called after |calc_evi_inzp_v1| has been applied) and |Inzp| is
set to zero.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.evi[k] = flu.evpo[k]
sta.inzp[k] = 0.
else:
flu.evi[k] = min(flu.evpo[k], sta.inzp[k])
sta.inzp[k] -= flu.evi[k] | [
"def",
"calc_evi_inzp_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"in",
"(",
"WASSER",
",",
"FLUSS",
",",
"SEE",
")",
":",
"flu",
".",
"evi",
"[",
"k",
"]",
"=",
"flu",
".",
"evpo",
"[",
"k",
"]",
"sta",
".",
"inzp",
"[",
"k",
"]",
"=",
"0.",
"else",
":",
"flu",
".",
"evi",
"[",
"k",
"]",
"=",
"min",
"(",
"flu",
".",
"evpo",
"[",
"k",
"]",
",",
"sta",
".",
"inzp",
"[",
"k",
"]",
")",
"sta",
".",
"inzp",
"[",
"k",
"]",
"-=",
"flu",
".",
"evi",
"[",
"k",
"]"
] | Calculate interception evaporation and update the interception
storage accordingly.
Required control parameters:
|NHRU|
|Lnk|
|TRefT|
|TRefN|
Required flux sequence:
|EvPo|
Calculated flux sequence:
|EvI|
Updated state sequence:
|Inzp|
Basic equation:
:math:`EvI = \\Bigl \\lbrace
{
{EvPo \\ | \\ Inzp > 0}
\\atop
{0 \\ | \\ Inzp = 0}
}`
Examples:
Initialize five HRUs with different combinations of land usage
and initial interception storage and apply a value of potential
evaporation of 3 mm on each one:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(5)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER)
>>> states.inzp = 2.0, 2.0, 0.0, 2.0, 4.0
>>> fluxes.evpo = 3.0
>>> model.calc_evi_inzp_v1()
>>> states.inzp
inzp(0.0, 0.0, 0.0, 0.0, 1.0)
>>> fluxes.evi
evi(3.0, 3.0, 0.0, 2.0, 3.0)
For arable land (|ACKER|) and most other land types, interception
evaporation (|EvI|) is identical with potential evapotranspiration
(|EvPo|), as long as it can be met by the available intercepted water
(|Inzp|). For water areas (|FLUSS| and |SEE|) only, |EvI| is
generally equal to |EvPo| (but this might be corrected by a method
called after |calc_evi_inzp_v1| has been applied) and |Inzp| is
set to zero. | [
"Calculate",
"interception",
"evaporation",
"and",
"update",
"the",
"interception",
"storage",
"accordingly",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L397-L459 | train |
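The scalar core of `calc_evi_inzp_v1` can be sketched as follows; the function and argument names are illustrative, not part of the hydpy API:

```python
def calc_evi_inzp(evpo, inzp, water_area=False):
    """Return (interception evaporation, updated interception storage).

    Water areas evaporate at the potential rate and keep an empty
    storage; elsewhere, evaporation is limited by the intercepted water.
    """
    if water_area:  # WASSER, FLUSS, SEE
        return evpo, 0.0
    evi = min(evpo, inzp)
    return evi, inzp - evi

# Values of the doctest example (EvPo is 3 mm everywhere):
fluss = calc_evi_inzp(3.0, 2.0, water_area=True)
acker_small = calc_evi_inzp(3.0, 2.0)   # storage limits evaporation
acker_large = calc_evi_inzp(3.0, 4.0)   # potential rate is reached
```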
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_sbes_v1 | def calc_sbes_v1(self):
"""Calculate the frozen part of stand precipitation.
Required control parameters:
|NHRU|
|TGr|
|TSp|
Required flux sequences:
|TKor|
|NBes|
Calculated flux sequence:
|SBes|
Examples:
In the first example, the threshold temperature of seven hydrological
response units is 0 °C and the corresponding temperature interval of
mixed precipitation 2 °C:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> tgr(0.0)
>>> tsp(2.0)
The value of |SBes| is zero above 1 °C and equal to the value of
|NBes| below -1 °C. Between these temperature values, |SBes|
decreases linearly:
>>> fluxes.nbes = 4.0
>>> fluxes.tkor = -10.0, -1.0, -0.5, 0.0, 0.5, 1.0, 10.0
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0)
Note the special case of a zero temperature interval. With the
actual temperature being equal to the threshold temperature, the
value of |SBes| is zero:
>>> tsp(0.)
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 4.0, 0.0, 0.0, 0.0, 0.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
if flu.nbes[k] <= 0.:
flu.sbes[k] = 0.
elif flu.tkor[k] >= (con.tgr[k]+con.tsp[k]/2.):
flu.sbes[k] = 0.
elif flu.tkor[k] <= (con.tgr[k]-con.tsp[k]/2.):
flu.sbes[k] = flu.nbes[k]
else:
flu.sbes[k] = ((((con.tgr[k]+con.tsp[k]/2.)-flu.tkor[k]) /
con.tsp[k])*flu.nbes[k]) | python | def calc_sbes_v1(self):
"""Calculate the frozen part of stand precipitation.
Required control parameters:
|NHRU|
|TGr|
|TSp|
Required flux sequences:
|TKor|
|NBes|
Calculated flux sequence:
|SBes|
Examples:
In the first example, the threshold temperature of seven hydrological
response units is 0 °C and the corresponding temperature interval of
mixed precipitation 2 °C:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> tgr(0.0)
>>> tsp(2.0)
The value of |SBes| is zero above 1 °C and equal to the value of
|NBes| below -1 °C. Between these temperature values, |SBes|
decreases linearly:
>>> fluxes.nbes = 4.0
>>> fluxes.tkor = -10.0, -1.0, -0.5, 0.0, 0.5, 1.0, 10.0
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0)
Note the special case of a zero temperature interval. With the
actual temperature being equal to the threshold temperature, the
value of |SBes| is zero:
>>> tsp(0.)
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 4.0, 0.0, 0.0, 0.0, 0.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
if flu.nbes[k] <= 0.:
flu.sbes[k] = 0.
elif flu.tkor[k] >= (con.tgr[k]+con.tsp[k]/2.):
flu.sbes[k] = 0.
elif flu.tkor[k] <= (con.tgr[k]-con.tsp[k]/2.):
flu.sbes[k] = flu.nbes[k]
else:
flu.sbes[k] = ((((con.tgr[k]+con.tsp[k]/2.)-flu.tkor[k]) /
con.tsp[k])*flu.nbes[k]) | [
"def",
"calc_sbes_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"flu",
".",
"nbes",
"[",
"k",
"]",
"<=",
"0.",
":",
"flu",
".",
"sbes",
"[",
"k",
"]",
"=",
"0.",
"elif",
"flu",
".",
"tkor",
"[",
"k",
"]",
">=",
"(",
"con",
".",
"tgr",
"[",
"k",
"]",
"+",
"con",
".",
"tsp",
"[",
"k",
"]",
"/",
"2.",
")",
":",
"flu",
".",
"sbes",
"[",
"k",
"]",
"=",
"0.",
"elif",
"flu",
".",
"tkor",
"[",
"k",
"]",
"<=",
"(",
"con",
".",
"tgr",
"[",
"k",
"]",
"-",
"con",
".",
"tsp",
"[",
"k",
"]",
"/",
"2.",
")",
":",
"flu",
".",
"sbes",
"[",
"k",
"]",
"=",
"flu",
".",
"nbes",
"[",
"k",
"]",
"else",
":",
"flu",
".",
"sbes",
"[",
"k",
"]",
"=",
"(",
"(",
"(",
"(",
"con",
".",
"tgr",
"[",
"k",
"]",
"+",
"con",
".",
"tsp",
"[",
"k",
"]",
"/",
"2.",
")",
"-",
"flu",
".",
"tkor",
"[",
"k",
"]",
")",
"/",
"con",
".",
"tsp",
"[",
"k",
"]",
")",
"*",
"flu",
".",
"nbes",
"[",
"k",
"]",
")"
] | Calculate the frozen part of stand precipitation.
Required control parameters:
|NHRU|
|TGr|
|TSp|
Required flux sequences:
|TKor|
|NBes|
Calculated flux sequence:
|SBes|
Examples:
In the first example, the threshold temperature of seven hydrological
response units is 0 °C and the corresponding temperature interval of
mixed precipitation 2 °C:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> tgr(0.0)
>>> tsp(2.0)
The value of |SBes| is zero above 1 °C and equal to the value of
|NBes| below -1 °C. Between these temperature values, |SBes|
decreases linearly:
>>> fluxes.nbes = 4.0
>>> fluxes.tkor = -10.0, -1.0, -0.5, 0.0, 0.5, 1.0, 10.0
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0)
Note the special case of a zero temperature interval. With the
actual temperature being equal to the threshold temperature, the
value of |SBes| is zero:
>>> tsp(0.)
>>> model.calc_sbes_v1()
>>> fluxes.sbes
sbes(4.0, 4.0, 4.0, 0.0, 0.0, 0.0, 0.0) | [
"Calculate",
"the",
"frozen",
"part",
"of",
"stand",
"precipitation",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L462-L519 | train |
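The linear rain/snow partitioning of `calc_sbes_v1` can be written as a small scalar function. This is a framework-free sketch with hypothetical names, mirroring the three branches of the original loop:

```python
def calc_sbes(nbes, tkor, tgr, tsp):
    """Return the frozen part of stand precipitation.

    Below tgr - tsp/2 everything is snow, above tgr + tsp/2 nothing is,
    and in between the snow fraction decreases linearly.
    """
    if nbes <= 0.0 or tkor >= tgr + tsp / 2.0:
        return 0.0          # no precipitation, or clearly above freezing
    if tkor <= tgr - tsp / 2.0:
        return nbes         # clearly below freezing: all snow
    return (tgr + tsp / 2.0 - tkor) / tsp * nbes
```

With `nbes=4.0`, `tgr=0.0`, and `tsp=2.0`, this reproduces the `sbes(4.0, 4.0, 3.0, 2.0, 1.0, 0.0, 0.0)` doctest values; note that for `tsp=0.0` the division is never reached, because one of the two boundary branches always applies first.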
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_wgtf_v1 | def calc_wgtf_v1(self):
"""Calculate the potential snowmelt.
Required control parameters:
|NHRU|
|Lnk|
|GTF|
|TRefT|
|TRefN|
|RSchmelz|
|CPWasser|
Required flux sequence:
|TKor|
Calculated fluxes sequence:
|WGTF|
Basic equation:
:math:`WGTF = max(GTF \\cdot (TKor - TRefT), 0) +
max(\\frac{CPWasser}{RSchmelz} \\cdot (TKor - TRefN), 0)`
Examples:
Initialize seven HRUs with identical degree-day factors and
temperature thresholds, but different combinations of land use
and air temperature:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(7)
>>> lnk(ACKER, LAUBW, FLUSS, SEE, ACKER, ACKER, ACKER)
>>> gtf(5.0)
>>> treft(0.0)
>>> trefn(1.0)
>>> fluxes.tkor = 2.0, 2.0, 2.0, 2.0, -1.0, 0.0, 1.0
Compared to most other LARSIM parameters, the specific heat capacity
and the melting heat of water can be seen as fixed properties:
>>> cpwasser(4.1868)
>>> rschmelz(334.0)
Note that the values of the degree-day factor are only half
as much as the given value, due to the simulation step size
being only half as long as the parameter step size:
>>> gtf
gtf(5.0)
>>> gtf.values
array([ 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5])
After performing the calculation, one can see that the potential
melting rate is identical for the first two HRUs (|ACKER| and
|LAUBW|). The land use class results in no difference, except for
water areas (third and fourth HRU, |FLUSS| and |SEE|), where no
potential melt needs to be calculated. The last three HRUs (again
|ACKER|) show the usual behaviour of the degree day method, when the
actual temperature is below (fifth HRU), equal to (sixth HRU), or
above (seventh HRU) the threshold temperature. Additionally, the
first two zones show the influence of the additional energy intake
due to "warm" precipitation. Obviously, this additional term is
quite negligible for common parameterizations, even if lower
values for the separate threshold temperature |TRefT| would be
taken into account:
>>> model.calc_wgtf_v1()
>>> fluxes.wgtf
wgtf(5.012535, 5.012535, 0.0, 0.0, 0.0, 0.0, 2.5)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.wgtf[k] = 0.
else:
flu.wgtf[k] = (
max(con.gtf[k]*(flu.tkor[k]-con.treft[k]), 0) +
max(con.cpwasser/con.rschmelz*(flu.tkor[k]-con.trefn[k]), 0.)) | python | def calc_wgtf_v1(self):
"""Calculate the potential snowmelt.
Required control parameters:
|NHRU|
|Lnk|
|GTF|
|TRefT|
|TRefN|
|RSchmelz|
|CPWasser|
Required flux sequence:
|TKor|
Calculated fluxes sequence:
|WGTF|
Basic equation:
:math:`WGTF = max(GTF \\cdot (TKor - TRefT), 0) +
max(\\frac{CPWasser}{RSchmelz} \\cdot (TKor - TRefN), 0)`
Examples:
Initialize seven HRUs with identical degree-day factors and
temperature thresholds, but different combinations of land use
and air temperature:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(7)
>>> lnk(ACKER, LAUBW, FLUSS, SEE, ACKER, ACKER, ACKER)
>>> gtf(5.0)
>>> treft(0.0)
>>> trefn(1.0)
>>> fluxes.tkor = 2.0, 2.0, 2.0, 2.0, -1.0, 0.0, 1.0
Compared to most other LARSIM parameters, the specific heat capacity
and the melting heat of water can be seen as fixed properties:
>>> cpwasser(4.1868)
>>> rschmelz(334.0)
Note that the values of the degree-day factor are only half
as much as the given value, due to the simulation step size
being only half as long as the parameter step size:
>>> gtf
gtf(5.0)
>>> gtf.values
array([ 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5])
After performing the calculation, one can see that the potential
melting rate is identical for the first two HRUs (|ACKER| and
|LAUBW|). The land use class results in no difference, except for
water areas (third and fourth HRU, |FLUSS| and |SEE|), where no
potential melt needs to be calculated. The last three HRUs (again
|ACKER|) show the usual behaviour of the degree day method, when the
actual temperature is below (fifth HRU), equal to (sixth HRU), or
above (seventh HRU) the threshold temperature. Additionally, the
first two zones show the influence of the additional energy intake
due to "warm" precipitation. Obviously, this additional term is
quite negligible for common parameterizations, even if lower
values for the separate threshold temperature |TRefT| would be
taken into account:
>>> model.calc_wgtf_v1()
>>> fluxes.wgtf
wgtf(5.012535, 5.012535, 0.0, 0.0, 0.0, 0.0, 2.5)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
flu.wgtf[k] = 0.
else:
flu.wgtf[k] = (
max(con.gtf[k]*(flu.tkor[k]-con.treft[k]), 0) +
max(con.cpwasser/con.rschmelz*(flu.tkor[k]-con.trefn[k]), 0.)) | [
"def",
"calc_wgtf_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"in",
"(",
"WASSER",
",",
"FLUSS",
",",
"SEE",
")",
":",
"flu",
".",
"wgtf",
"[",
"k",
"]",
"=",
"0.",
"else",
":",
"flu",
".",
"wgtf",
"[",
"k",
"]",
"=",
"(",
"max",
"(",
"con",
".",
"gtf",
"[",
"k",
"]",
"*",
"(",
"flu",
".",
"tkor",
"[",
"k",
"]",
"-",
"con",
".",
"treft",
"[",
"k",
"]",
")",
",",
"0",
")",
"+",
"max",
"(",
"con",
".",
"cpwasser",
"/",
"con",
".",
"rschmelz",
"*",
"(",
"flu",
".",
"tkor",
"[",
"k",
"]",
"-",
"con",
".",
"trefn",
"[",
"k",
"]",
")",
",",
"0.",
")",
")"
] | Calculate the potential snowmelt.
Required control parameters:
|NHRU|
|Lnk|
|GTF|
|TRefT|
|TRefN|
|RSchmelz|
|CPWasser|
Required flux sequence:
|TKor|
Calculated fluxes sequence:
|WGTF|
Basic equation:
:math:`WGTF = max(GTF \\cdot (TKor - TRefT), 0) +
max(\\frac{CPWasser}{RSchmelz} \\cdot (TKor - TRefN), 0)`
Examples:
Initialize seven HRUs with identical degree-day factors and
temperature thresholds, but different combinations of land use
and air temperature:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(7)
>>> lnk(ACKER, LAUBW, FLUSS, SEE, ACKER, ACKER, ACKER)
>>> gtf(5.0)
>>> treft(0.0)
>>> trefn(1.0)
>>> fluxes.tkor = 2.0, 2.0, 2.0, 2.0, -1.0, 0.0, 1.0
Compared to most other LARSIM parameters, the specific heat capacity
and the melting heat of water can be seen as fixed properties:
>>> cpwasser(4.1868)
>>> rschmelz(334.0)
Note that the values of the degree-day factor are only half
as much as the given value, due to the simulation step size
being only half as long as the parameter step size:
>>> gtf
gtf(5.0)
>>> gtf.values
array([ 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5])
After performing the calculation, one can see that the potential
melting rate is identical for the first two HRUs (|ACKER| and
|LAUBW|). The land use class results in no difference, except for
water areas (third and fourth HRU, |FLUSS| and |SEE|), where no
potential melt needs to be calculated. The last three HRUs (again
|ACKER|) show the usual behaviour of the degree day method, when the
actual temperature is below (fifth HRU), equal to (sixth HRU), or
above (seventh HRU) the threshold temperature. Additionally, the
first two zones show the influence of the additional energy intake
due to "warm" precipitation. Obviously, this additional term is
quite negligible for common parameterizations, even if lower
values for the separate threshold temperature |TRefT| would be
taken into account:
>>> model.calc_wgtf_v1()
>>> fluxes.wgtf
wgtf(5.012535, 5.012535, 0.0, 0.0, 0.0, 0.0, 2.5) | [
"Calculate",
"the",
"potential",
"snowmelt",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L522-L601 | train |
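The degree-day equation of `calc_wgtf_v1`, including the small energy input of "warm" precipitation, can be sketched per response unit as follows (hypothetical names; `gtf` must already be scaled to the simulation step size, as hydpy does automatically):

```python
def calc_wgtf(tkor, gtf, treft, trefn, water_area=False,
              cpwasser=4.1868, rschmelz=334.0):
    """Return potential snowmelt after the degree-day method:

    WGTF = max(GTF*(TKor-TRefT), 0) + max(CPWasser/RSchmelz*(TKor-TRefN), 0)
    """
    if water_area:  # WASSER, FLUSS, SEE: no melt calculation required
        return 0.0
    return (max(gtf * (tkor - treft), 0.0) +
            max(cpwasser / rschmelz * (tkor - trefn), 0.0))

# Values of the doctest example (gtf already halved for the 12h step):
acker_warm = calc_wgtf(2.0, 2.5, 0.0, 1.0)    # both terms contribute
acker_cold = calc_wgtf(-1.0, 2.5, 0.0, 1.0)   # no melt at all
```

The second term (`cpwasser/rschmelz` is roughly 0.0125) confirms the docstring's remark that the precipitation-energy contribution is negligible for common parameterizations.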
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_schm_wats_v1 | def calc_schm_wats_v1(self):
"""Calculate the actual amount of water melting within the snow cover.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequences:
|SBes|
|WGTF|
Calculated flux sequence:
|Schm|
Updated state sequence:
|WATS|
Basic equations:
:math:`\\frac{dWATS}{dt} = SBes - Schm`
:math:`Schm = \\Bigl \\lbrace
{
{WGTF \\ | \\ WATS > 0}
\\atop
{0 \\ | \\ WATS = 0}
}`
Examples:
Initialize two water (|FLUSS| and |SEE|) and four arable land
(|ACKER|) HRUs. Assume the same values for the initial amount
of frozen water (|WATS|) and the frozen part of stand precipitation
(|SBes|), but different values for potential snowmelt (|WGTF|):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(6)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER, ACKER)
>>> states.wats = 2.0
>>> fluxes.sbes = 1.0
>>> fluxes.wgtf = 1.0, 1.0, 0.0, 1.0, 3.0, 5.0
>>> model.calc_schm_wats_v1()
>>> states.wats
wats(0.0, 0.0, 3.0, 2.0, 0.0, 0.0)
>>> fluxes.schm
schm(0.0, 0.0, 0.0, 1.0, 3.0, 3.0)
For the water areas, both the frozen amount of water and actual melt
are set to zero. For all other land use classes, actual melt
is either limited by potential melt or the available frozen water,
which is the sum of initial frozen water and the frozen part
of stand precipitation.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
sta.wats[k] = 0.
flu.schm[k] = 0.
else:
sta.wats[k] += flu.sbes[k]
flu.schm[k] = min(flu.wgtf[k], sta.wats[k])
sta.wats[k] -= flu.schm[k] | python | def calc_schm_wats_v1(self):
"""Calculate the actual amount of water melting within the snow cover.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequences:
|SBes|
|WGTF|
Calculated flux sequence:
|Schm|
Updated state sequence:
|WATS|
Basic equations:
:math:`\\frac{dWATS}{dt} = SBes - Schm`
:math:`Schm = \\Bigl \\lbrace
{
{WGTF \\ | \\ WATS > 0}
\\atop
{0 \\ | \\ WATS = 0}
}`
Examples:
Initialize two water (|FLUSS| and |SEE|) and four arable land
(|ACKER|) HRUs. Assume the same values for the initial amount
of frozen water (|WATS|) and the frozen part of stand precipitation
(|SBes|), but different values for potential snowmelt (|WGTF|):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(6)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER, ACKER)
>>> states.wats = 2.0
>>> fluxes.sbes = 1.0
>>> fluxes.wgtf = 1.0, 1.0, 0.0, 1.0, 3.0, 5.0
>>> model.calc_schm_wats_v1()
>>> states.wats
wats(0.0, 0.0, 3.0, 2.0, 0.0, 0.0)
>>> fluxes.schm
schm(0.0, 0.0, 0.0, 1.0, 3.0, 3.0)
For the water areas, both the frozen amount of water and actual melt
are set to zero. For all other land use classes, actual melt
is either limited by potential melt or the available frozen water,
which is the sum of initial frozen water and the frozen part
of stand precipitation.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
sta.wats[k] = 0.
flu.schm[k] = 0.
else:
sta.wats[k] += flu.sbes[k]
flu.schm[k] = min(flu.wgtf[k], sta.wats[k])
            sta.wats[k] -= flu.schm[k] | Calculate the actual amount of water melting within the snow cover.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequences:
|SBes|
|WGTF|
Calculated flux sequence:
|Schm|
Updated state sequence:
|WATS|
Basic equations:
:math:`\\frac{dWATS}{dt} = SBes - Schm`
:math:`Schm = \\Bigl \\lbrace
{
{WGTF \\ | \\ WATS > 0}
\\atop
{0 \\ | \\ WATS = 0}
}`
Examples:
Initialize two water (|FLUSS| and |SEE|) and four arable land
(|ACKER|) HRUs. Assume the same values for the initial amount
of frozen water (|WATS|) and the frozen part of stand precipitation
(|SBes|), but different values for potential snowmelt (|WGTF|):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(6)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER, ACKER)
>>> states.wats = 2.0
>>> fluxes.sbes = 1.0
>>> fluxes.wgtf = 1.0, 1.0, 0.0, 1.0, 3.0, 5.0
>>> model.calc_schm_wats_v1()
>>> states.wats
wats(0.0, 0.0, 3.0, 2.0, 0.0, 0.0)
>>> fluxes.schm
schm(0.0, 0.0, 0.0, 1.0, 3.0, 3.0)
For the water areas, both the frozen amount of water and actual melt
are set to zero. For all other land use classes, actual melt
is either limited by potential melt or the available frozen water,
which is the sum of initial frozen water and the frozen part
of stand precipitation. | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L604-L666 | train |
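The melt logic of `calc_schm_wats_v1` can also be tried outside of HydPy. The following stand-alone sketch re-implements the loop with plain Python lists; the function name, the string land-use codes, and the list-based interface are illustrative assumptions, not part of the HydPy API:

```python
# Illustrative, simplified re-implementation of the melt loop above.
# HydPy represents land-use classes as integer constants (WASSER, FLUSS,
# SEE, ...); plain strings are used here for readability.
WATER_CLASSES = {"WASSER", "FLUSS", "SEE"}

def calc_schm_wats(lnk, wats, sbes, wgtf):
    """Return the updated frozen water (WATS) and actual melt (Schm)."""
    new_wats, schm = [], []
    for use, frozen, frozen_precip, pot_melt in zip(lnk, wats, sbes, wgtf):
        if use in WATER_CLASSES:
            new_wats.append(0.0)   # water areas hold no snow ...
            schm.append(0.0)       # ... and hence show no melt
        else:
            frozen += frozen_precip        # add frozen stand precipitation
            melt = min(pot_melt, frozen)   # melt limited by available supply
            new_wats.append(frozen - melt)
            schm.append(melt)
    return new_wats, schm
```

Fed with the doctest values above (initial |WATS| of 2 mm, |SBes| of 1 mm, and varying |WGTF|), the sketch reproduces wats(0.0, 0.0, 3.0, 2.0, 0.0, 0.0) and schm(0.0, 0.0, 0.0, 1.0, 3.0, 3.0).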
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_wada_waes_v1 | def calc_wada_waes_v1(self):
"""Calculate the actual water release from the snow cover.
Required control parameters:
|NHRU|
|Lnk|
|PWMax|
Required flux sequences:
|NBes|
Calculated flux sequence:
|WaDa|
Updated state sequence:
|WAeS|
Basic equations:
:math:`\\frac{dWAeS}{dt} = NBes - WaDa`
:math:`WAeS \\leq PWMax \\cdot WATS`
Examples:
For simplicity, the threshold parameter |PWMax| is set to a value
of two for each of the six initialized HRUs. Thus, snow cover can
hold as much liquid water as it contains frozen water. Stand
precipitation is also always set to the same value, but the initial
conditions of the snow cover are varied:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(6)
>>> lnk(FLUSS, SEE, ACKER, ACKER, ACKER, ACKER)
>>> pwmax(2.0)
>>> fluxes.nbes = 1.0
>>> states.wats = 0.0, 0.0, 0.0, 1.0, 1.0, 1.0
>>> states.waes = 1.0, 1.0, 0.0, 1.0, 1.5, 2.0
>>> model.calc_wada_waes_v1()
>>> states.waes
waes(0.0, 0.0, 0.0, 2.0, 2.0, 2.0)
>>> fluxes.wada
wada(1.0, 1.0, 1.0, 0.0, 0.5, 1.0)
Note the special cases of the first two HRUs of type |FLUSS| and
|SEE|. For water areas, stand precipitation |NBes| is generally
passed to |WaDa| and |WAeS| is set to zero. For all other land
use classes (of which only |ACKER| is selected), only the amount
of |NBes| exceeding the actual snow holding capacity is passed
to |WaDa|.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (WASSER, FLUSS, SEE):
sta.waes[k] = 0.
flu.wada[k] = flu.nbes[k]
else:
sta.waes[k] += flu.nbes[k]
flu.wada[k] = max(sta.waes[k]-con.pwmax[k]*sta.wats[k], 0.)
            sta.waes[k] -= flu.wada[k] | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L669-L729 | train |
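The snow-cover retention logic of `calc_wada_waes_v1` can be sketched in the same stand-alone style (illustrative only; the string land-use codes and the list interface are assumptions, not part of the HydPy API):

```python
# Illustrative, simplified re-implementation of calc_wada_waes_v1.
WATER_CLASSES = {"WASSER", "FLUSS", "SEE"}

def calc_wada_waes(lnk, waes, nbes, wats, pwmax):
    """Return the updated total snow water (WAeS) and released water (WaDa)."""
    new_waes, wada = [], []
    for use, total, stand_precip, frozen, capacity_factor in zip(
            lnk, waes, nbes, wats, pwmax):
        if use in WATER_CLASSES:
            new_waes.append(0.0)        # water areas hold no snow
            wada.append(stand_precip)   # stand precipitation passes through
        else:
            total += stand_precip
            # Release only the water exceeding the snow holding capacity.
            release = max(total - capacity_factor * frozen, 0.0)
            new_waes.append(total - release)
            wada.append(release)
    return new_waes, wada
```

With the doctest values above (|PWMax| of 2, |NBes| of 1 mm, varied initial snow states), it reproduces waes(0.0, 0.0, 0.0, 2.0, 2.0, 2.0) and wada(1.0, 1.0, 1.0, 0.0, 0.5, 1.0).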
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_evb_v1 | def calc_evb_v1(self):
"""Calculate the actual water release from the snow cover.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|GrasRef_R|
Required state sequence:
|BoWa|
Required flux sequences:
|EvPo|
|EvI|
Calculated flux sequence:
|EvB|
Basic equations:
:math:`temp = exp(-GrasRef_R \\cdot \\frac{BoWa}{NFk})`
:math:`EvB = (EvPo - EvI) \\cdot
\\frac{1 - temp}{1 + temp -2 \\cdot exp(-GrasRef_R)}`
Examples:
Soil evaporation is calculated neither for water nor for sealed
areas (see the first three HRUs of type |FLUSS|, |SEE|, and |VERS|).
All other land use classes are handled in accordance with a
recommendation of the set of codes described in ATV-DVWK-M 504
(arable land |ACKER| has been selected for the last four HRUs
arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> grasref_r(5.0)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0)
>>> fluxes.evpo = 5.0
>>> fluxes.evi = 3.0
>>> states.bowa = 50.0, 50.0, 50.0, 0.0, 0.0, 50.0, 100.0
>>> model.calc_evb_v1()
>>> fluxes.evb
evb(0.0, 0.0, 0.0, 0.0, 0.0, 1.717962, 2.0)
In case usable field capacity (|NFk|) is zero, soil evaporation
(|EvB|) is generally set to zero (see the fourth HRU). The last
three HRUs demonstrate the rise in soil evaporation with increasing
soil moisture, which lessens in the high soil moisture range.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if (con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or (con.nfk[k] <= 0.):
flu.evb[k] = 0.
else:
d_temp = modelutils.exp(-con.grasref_r *
sta.bowa[k]/con.nfk[k])
flu.evb[k] = ((flu.evpo[k]-flu.evi[k]) * (1.-d_temp) /
                          (1.+d_temp-2.*modelutils.exp(-con.grasref_r))) | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L732-L793 | train |
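The evaporation formula of `calc_evb_v1` is easy to check stand-alone. The sketch below mirrors its basic equations with `math.exp`; the scalar `grasref_r` parameter and the string land-use codes are illustrative assumptions:

```python
import math

# Illustrative, simplified re-implementation of the soil evaporation loop.
WATER_OR_SEALED = {"WASSER", "FLUSS", "SEE", "VERS"}

def calc_evb(lnk, bowa, nfk, evpo, evi, grasref_r=5.0):
    """Return the actual soil evaporation (EvB) per HRU."""
    evb = []
    for use, moisture, capacity, pot_evap, intercept_evap in zip(
            lnk, bowa, nfk, evpo, evi):
        if use in WATER_OR_SEALED or capacity <= 0.0:
            evb.append(0.0)   # no soil evaporation for water/sealed areas
        else:
            temp = math.exp(-grasref_r * moisture / capacity)
            evb.append((pot_evap - intercept_evap) * (1.0 - temp) /
                       (1.0 + temp - 2.0 * math.exp(-grasref_r)))
    return evb
```

For a saturated soil (|BoWa| equal to |NFk|), the expression collapses to |EvPo| minus |EvI|, matching the value of 2.0 in the doctest above.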
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qbb_v1 | def calc_qbb_v1(self):
"""Calculate the amount of base flow released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|Beta|
|FBeta|
Required derived parameters:
|WB|
|WZ|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QBB|
Basic equations:
:math:`Beta_{eff} = \\Bigl \\lbrace
{
{Beta \\ | \\ BoWa \\leq WZ}
\\atop
{Beta \\cdot (1+(FBeta-1)\\cdot\\frac{BoWa-WZ}{NFk-WZ}) \\|\\ BoWa > WZ}
}`
:math:`QBB = \\Bigl \\lbrace
{
{0 \\ | \\ BoWa \\leq WB}
\\atop
{Beta_{eff} \\cdot (BoWa - WB) \\|\\ BoWa > WB}
}`
Examples:
For water and sealed areas, no base flow is calculated (see the
first three HRUs of type |VERS|, |FLUSS|, and |SEE|). No principal
distinction is made between the remaining land use classes (arable
land |ACKER| has been selected for the last five HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> beta(0.04)
>>> fbeta(2.0)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 200.0)
>>> derived.wb(10.0)
>>> derived.wz(70.0)
Note the time dependence of parameter |Beta|:
>>> beta
beta(0.04)
>>> beta.values
array([ 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02, 0.02])
In the first example, the actual soil water content |BoWa| is set
to low values. For values below the threshold |WB|, no percolation
occurs. Above |WB| (but below |WZ|), |QBB| increases linearly by
an amount defined by parameter |Beta|:
>>> states.bowa = 20.0, 20.0, 20.0, 0.0, 0.0, 10.0, 20.0, 20.0
>>> model.calc_qbb_v1()
>>> fluxes.qbb
qbb(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.2)
Note that for the last two HRUs the same amount of
base flow generation is determined, in spite of the fact
that both exhibit different relative soil moistures. It is
common to modify this "pure absolute dependency" to a "mixed
absolute/relative dependency" through defining the values of
parameter |WB| indirectly via parameter |RelWB|.
In the second example, the actual soil water content |BoWa| is set
to high values. For values below threshold |WZ|, the discussion above
remains valid. For values above |WZ|, percolation shows a nonlinear
behaviour when factor |FBeta| is set to values larger than one:
>>> nfk(0.0, 0.0, 0.0, 100.0, 100.0, 100.0, 100.0, 200.0)
>>> states.bowa = 0.0, 0.0, 0.0, 60.0, 70.0, 80.0, 100.0, 200.0
>>> model.calc_qbb_v1()
>>> fluxes.qbb
qbb(0.0, 0.0, 0.0, 1.0, 1.2, 1.866667, 3.6, 7.6)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wb[k]) or (con.nfk[k] <= 0.)):
flu.qbb[k] = 0.
elif sta.bowa[k] <= der.wz[k]:
flu.qbb[k] = con.beta[k]*(sta.bowa[k]-der.wb[k])
else:
flu.qbb[k] = (con.beta[k]*(sta.bowa[k]-der.wb[k]) *
(1.+(con.fbeta[k]-1.)*((sta.bowa[k]-der.wz[k]) /
                                                 (con.nfk[k]-der.wz[k])))) | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L796-L896 | train |
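A stand-alone sketch of the base flow logic of `calc_qbb_v1` follows. It uses scalar parameters instead of HRU-wise arrays (an illustrative simplification), and `beta` must already be adjusted to the simulation step, e.g. 0.02 per 12 h for beta(0.04) per day; like the original `else` branch, it assumes |NFk| > |WZ| there:

```python
# Illustrative, simplified re-implementation of the base flow loop.
WATER_OR_SEALED = {"WASSER", "FLUSS", "SEE", "VERS"}

def calc_qbb(lnk, bowa, nfk, beta, fbeta, wb, wz):
    """Return the base flow release (QBB) per HRU (scalar parameters)."""
    qbb = []
    for use, moisture, capacity in zip(lnk, bowa, nfk):
        if use in WATER_OR_SEALED or moisture <= wb or capacity <= 0.0:
            qbb.append(0.0)
        elif moisture <= wz:
            qbb.append(beta * (moisture - wb))   # linear range below WZ
        else:
            qbb.append(beta * (moisture - wb) *  # nonlinear range above WZ
                       (1.0 + (fbeta - 1.0) * (moisture - wz) / (capacity - wz)))
    return qbb
```

With the values of the second doctest example (step-adjusted beta of 0.02, fbeta of 2.0, wb of 10.0, wz of 70.0), it reproduces qbb(0.0, 0.0, 0.0, 1.0, 1.2, 1.866667, 3.6, 7.6).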
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qib1_v1 | def calc_qib1_v1(self):
"""Calculate the first inflow component released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|DMin|
Required derived parameter:
|WB|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QIB1|
Basic equation:
:math:`QIB1 = DMin \\cdot \\frac{BoWa}{NFk}`
Examples:
For water and sealed areas, no interflow is calculated (the first
three HRUs are of type |FLUSS|, |SEE|, and |VERS|, respectively).
No principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> dmax(10.0)
>>> dmin(4.0)
>>> nfk(101.0, 101.0, 101.0, 0.0, 101.0, 101.0, 101.0, 202.0)
>>> derived.wb(10.0)
>>> states.bowa = 10.1, 10.1, 10.1, 0.0, 0.0, 10.0, 10.1, 10.1
Note the time dependence of parameter |DMin|:
>>> dmin
dmin(4.0)
>>> dmin.values
array([ 2., 2., 2., 2., 2., 2., 2., 2.])
Compared to the calculation of |QBB|, the following results show
some relevant differences:
>>> model.calc_qib1_v1()
>>> fluxes.qib1
qib1(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.1)
Firstly, as demonstrated with the help of the seventh and the
eighth HRU, the generation of the first interflow component |QIB1|
depends on relative soil moisture. Secondly, as demonstrated with
the help of the sixth and seventh HRU, it starts abruptly whenever
the slightest exceedance of the threshold parameter |WB| occurs.
Such sharp discontinuities are a potential source of trouble.
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wb[k])):
flu.qib1[k] = 0.
else:
            flu.qib1[k] = con.dmin[k]*(sta.bowa[k]/con.nfk[k]) | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L899-L969 | train |
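The first interflow component of `calc_qib1_v1` reduces to a one-liner per HRU. The sketch below uses a scalar, step-adjusted `dmin` (e.g. 2.0 per 12 h for dmin(4.0) per day) and string land-use codes, both illustrative assumptions:

```python
# Illustrative, simplified re-implementation of the first interflow component.
WATER_OR_SEALED = {"WASSER", "FLUSS", "SEE", "VERS"}

def calc_qib1(lnk, bowa, nfk, dmin, wb):
    """Return the first interflow component (QIB1) per HRU."""
    qib1 = []
    for use, moisture, capacity in zip(lnk, bowa, nfk):
        if use in WATER_OR_SEALED or moisture <= wb:
            qib1.append(0.0)   # also skips the nfk == 0 case of the doctest
        else:
            qib1.append(dmin * moisture / capacity)   # relative soil moisture
    return qib1
```

Note how the last two HRUs of the doctest (equal |BoWa|, doubled |NFk|) yield 0.2 and 0.1, reflecting the dependence on relative rather than absolute soil moisture.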
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qib2_v1 | def calc_qib2_v1(self):
"""Calculate the first inflow component released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|DMin|
|DMax|
Required derived parameter:
|WZ|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QIB2|
Basic equation:
:math:`QIB2 = (DMax-DMin) \\cdot
(\\frac{BoWa-WZ}{NFk-WZ})^\\frac{3}{2}`
Examples:
For water and sealed areas, no interflow is calculated (the first
three HRUs are of type |FLUSS|, |SEE|, and |VERS|, respectively).
No principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last
five HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> dmax(10.0)
>>> dmin(4.0)
>>> nfk(100.0, 100.0, 100.0, 50.0, 100.0, 100.0, 100.0, 200.0)
>>> derived.wz(50.0)
>>> states.bowa = 100.0, 100.0, 100.0, 50.1, 50.0, 75.0, 100.0, 100.0
Note the time dependence of parameters |DMin| (see the example above)
and |DMax|:
>>> dmax
dmax(10.0)
>>> dmax.values
array([ 5., 5., 5., 5., 5., 5., 5., 5.])
The following results show that he calculation of |QIB2| both
resembles those of |QBB| and |QIB1| in some regards:
>>> model.calc_qib2_v1()
>>> fluxes.qib2
qib2(0.0, 0.0, 0.0, 0.0, 0.0, 1.06066, 3.0, 0.57735)
In the given example, the maximum rate of total interflow
generation is 5 mm/12h (parameter |DMax|). For the seventh zone,
which contains a saturated soil, the value calculated for the
second interflow component (|QIB2|) is 3 mm/12h. The "missing"
value of 2 mm/12h is calculated by method |calc_qib1_v1|.
(The fourth zone, which is slightly oversaturated, is only intended
to demonstrate that zero division due to |NFk| = |WZ| is circumvented.)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wz[k]) or (con.nfk[k] <= der.wz[k])):
flu.qib2[k] = 0.
else:
flu.qib2[k] = ((con.dmax[k]-con.dmin[k]) *
((sta.bowa[k]-der.wz[k]) /
(con.nfk[k]-der.wz[k]))**1.5) | python | def calc_qib2_v1(self):
"""Calculate the first inflow component released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|DMin|
|DMax|
Required derived parameter:
|WZ|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QIB2|
Basic equation:
:math:`QIB2 = (DMax-DMin) \\cdot
(\\frac{BoWa-WZ}{NFk-WZ})^\\frac{3}{2}`
Examples:
For water and sealed areas, no interflow is calculated (the first
three HRUs are of type |FLUSS|, |SEE|, and |VERS|, respectively).
No principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last
five HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> dmax(10.0)
>>> dmin(4.0)
>>> nfk(100.0, 100.0, 100.0, 50.0, 100.0, 100.0, 100.0, 200.0)
>>> derived.wz(50.0)
>>> states.bowa = 100.0, 100.0, 100.0, 50.1, 50.0, 75.0, 100.0, 100.0
Note the time dependence of parameters |DMin| (see the example above)
and |DMax|:
>>> dmax
dmax(10.0)
>>> dmax.values
array([ 5., 5., 5., 5., 5., 5., 5., 5.])
The following results show that the calculation of |QIB2|
resembles those of |QBB| and |QIB1| in some regards:
>>> model.calc_qib2_v1()
>>> fluxes.qib2
qib2(0.0, 0.0, 0.0, 0.0, 0.0, 1.06066, 3.0, 0.57735)
In the given example, the maximum rate of total interflow
generation is 5 mm/12h (parameter |DMax|). For the seventh zone,
which contains a saturated soil, the value calculated for the
second interflow component (|QIB2|) is 3 mm/12h. The "missing"
value of 2 mm/12h is calculated by method |calc_qib1_v1|.
(The fourth zone, which is slightly oversaturated, is only intended
to demonstrate that zero division due to |NFk| = |WZ| is circumvented.)
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
for k in range(con.nhru):
if ((con.lnk[k] in (VERS, WASSER, FLUSS, SEE)) or
(sta.bowa[k] <= der.wz[k]) or (con.nfk[k] <= der.wz[k])):
flu.qib2[k] = 0.
else:
flu.qib2[k] = ((con.dmax[k]-con.dmin[k]) *
((sta.bowa[k]-der.wz[k]) /
(con.nfk[k]-der.wz[k]))**1.5) | [
"def",
"calc_qib2_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"der",
"=",
"self",
".",
"parameters",
".",
"derived",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"(",
"(",
"con",
".",
"lnk",
"[",
"k",
"]",
"in",
"(",
"VERS",
",",
"WASSER",
",",
"FLUSS",
",",
"SEE",
")",
")",
"or",
"(",
"sta",
".",
"bowa",
"[",
"k",
"]",
"<=",
"der",
".",
"wz",
"[",
"k",
"]",
")",
"or",
"(",
"con",
".",
"nfk",
"[",
"k",
"]",
"<=",
"der",
".",
"wz",
"[",
"k",
"]",
")",
")",
":",
"flu",
".",
"qib2",
"[",
"k",
"]",
"=",
"0.",
"else",
":",
"flu",
".",
"qib2",
"[",
"k",
"]",
"=",
"(",
"(",
"con",
".",
"dmax",
"[",
"k",
"]",
"-",
"con",
".",
"dmin",
"[",
"k",
"]",
")",
"*",
"(",
"(",
"sta",
".",
"bowa",
"[",
"k",
"]",
"-",
"der",
".",
"wz",
"[",
"k",
"]",
")",
"/",
"(",
"con",
".",
"nfk",
"[",
"k",
"]",
"-",
"der",
".",
"wz",
"[",
"k",
"]",
")",
")",
"**",
"1.5",
")"
] | Calculate the second interflow component released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|DMin|
|DMax|
Required derived parameter:
|WZ|
Required state sequence:
|BoWa|
Calculated flux sequence:
|QIB2|
Basic equation:
:math:`QIB2 = (DMax-DMin) \\cdot
(\\frac{BoWa-WZ}{NFk-WZ})^\\frac{3}{2}`
Examples:
For water and sealed areas, no interflow is calculated (the first
three HRUs are of type |FLUSS|, |SEE|, and |VERS|, respectively).
No principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last
five HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(8)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> dmax(10.0)
>>> dmin(4.0)
>>> nfk(100.0, 100.0, 100.0, 50.0, 100.0, 100.0, 100.0, 200.0)
>>> derived.wz(50.0)
>>> states.bowa = 100.0, 100.0, 100.0, 50.1, 50.0, 75.0, 100.0, 100.0
Note the time dependence of parameters |DMin| (see the example above)
and |DMax|:
>>> dmax
dmax(10.0)
>>> dmax.values
array([ 5., 5., 5., 5., 5., 5., 5., 5.])
The following results show that the calculation of |QIB2|
resembles those of |QBB| and |QIB1| in some regards:
>>> model.calc_qib2_v1()
>>> fluxes.qib2
qib2(0.0, 0.0, 0.0, 0.0, 0.0, 1.06066, 3.0, 0.57735)
In the given example, the maximum rate of total interflow
generation is 5 mm/12h (parameter |DMax|). For the seventh zone,
which contains a saturated soil, the value calculated for the
second interflow component (|QIB2|) is 3 mm/12h. The "missing"
value of 2 mm/12h is calculated by method |calc_qib1_v1|.
(The fourth zone, which is slightly oversaturated, is only intended
to demonstrate that zero division due to |NFk| = |WZ| is circumvented.) | [
"Calculate",
"the",
"first",
"inflow",
"component",
"released",
"from",
"the",
"soil",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L972-L1049 | train |
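The basic equation of `calc_qib2_v1` can be checked in isolation with a small sketch (pure Python, no HydPy required; the numbers come from the doctest above, with the 12h-adjusted values DMax = 5.0 and DMin = 2.0):

```python
def qib2_sketch(bowa, wz, nfk, dmin, dmax):
    """Second interflow component:
    (DMax - DMin) * ((BoWa - WZ) / (NFk - WZ)) ** 1.5,
    set to zero whenever BoWa or NFk does not exceed the threshold WZ
    (which also circumvents zero division for NFk = WZ)."""
    if bowa <= wz or nfk <= wz:
        return 0.0
    return (dmax - dmin) * ((bowa - wz) / (nfk - wz)) ** 1.5

# sixth and seventh HRU of the doctest above:
print(round(qib2_sketch(75.0, 50.0, 100.0, 2.0, 5.0), 5))   # 1.06066
print(round(qib2_sketch(100.0, 50.0, 100.0, 2.0, 5.0), 5))  # 3.0
```

For the saturated seventh HRU, the sketch reproduces the 3 mm/12h mentioned in the docstring; the remaining 2 mm/12h up to DMax is contributed by the first interflow component.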
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qdb_v1 | def calc_qdb_v1(self):
"""Calculate direct runoff released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|BSf|
Required state sequence:
|BoWa|
Required flux sequence:
|WaDa|
Calculated flux sequence:
|QDB|
Basic equations:
:math:`QDB = \\Bigl \\lbrace
{
{max(Exz, 0) \\ | \\ SfA \\leq 0}
\\atop
{max(Exz + NFk \\cdot SfA^{BSf+1}, 0) \\ | \\ SfA > 0}
}`
:math:`SFA = (1 - \\frac{BoWa}{NFk})^\\frac{1}{BSf+1} -
\\frac{WaDa}{(BSf+1) \\cdot NFk}`
:math:`Exz = (BoWa + WaDa) - NFk`
Examples:
For water areas (|FLUSS| and |SEE|), sealed areas (|VERS|), and
areas without any soil storage capacity, all water is completely
routed as direct runoff |QDB| (see the first four HRUs). No
principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(9)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> bsf(0.4)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 100.0, 100.0)
>>> fluxes.wada = 10.0
>>> states.bowa = (
... 100.0, 100.0, 100.0, 0.0, -0.1, 0.0, 50.0, 100.0, 100.1)
>>> model.calc_qdb_v1()
>>> fluxes.qdb
qdb(10.0, 10.0, 10.0, 10.0, 0.142039, 0.144959, 1.993649, 10.0, 10.1)
With the common |BSf| value of 0.4, the discharge coefficient
increases more or less exponentially with soil moisture.
For soil moisture values slightly below zero or above usable
field capacity, plausible amounts of generated direct runoff
are ensured.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
for k in range(con.nhru):
if con.lnk[k] == WASSER:
flu.qdb[k] = 0.
elif ((con.lnk[k] in (VERS, FLUSS, SEE)) or
(con.nfk[k] <= 0.)):
flu.qdb[k] = flu.wada[k]
else:
if sta.bowa[k] < con.nfk[k]:
aid.sfa[k] = (
(1.-sta.bowa[k]/con.nfk[k])**(1./(con.bsf[k]+1.)) -
(flu.wada[k]/((con.bsf[k]+1.)*con.nfk[k])))
else:
aid.sfa[k] = 0.
aid.exz[k] = sta.bowa[k]+flu.wada[k]-con.nfk[k]
flu.qdb[k] = aid.exz[k]
if aid.sfa[k] > 0.:
flu.qdb[k] += aid.sfa[k]**(con.bsf[k]+1.)*con.nfk[k]
flu.qdb[k] = max(flu.qdb[k], 0.) | python | def calc_qdb_v1(self):
"""Calculate direct runoff released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|BSf|
Required state sequence:
|BoWa|
Required flux sequence:
|WaDa|
Calculated flux sequence:
|QDB|
Basic equations:
:math:`QDB = \\Bigl \\lbrace
{
{max(Exz, 0) \\ | \\ SfA \\leq 0}
\\atop
{max(Exz + NFk \\cdot SfA^{BSf+1}, 0) \\ | \\ SfA > 0}
}`
:math:`SFA = (1 - \\frac{BoWa}{NFk})^\\frac{1}{BSf+1} -
\\frac{WaDa}{(BSf+1) \\cdot NFk}`
:math:`Exz = (BoWa + WaDa) - NFk`
Examples:
For water areas (|FLUSS| and |SEE|), sealed areas (|VERS|), and
areas without any soil storage capacity, all water is completely
routed as direct runoff |QDB| (see the first four HRUs). No
principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(9)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> bsf(0.4)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 100.0, 100.0)
>>> fluxes.wada = 10.0
>>> states.bowa = (
... 100.0, 100.0, 100.0, 0.0, -0.1, 0.0, 50.0, 100.0, 100.1)
>>> model.calc_qdb_v1()
>>> fluxes.qdb
qdb(10.0, 10.0, 10.0, 10.0, 0.142039, 0.144959, 1.993649, 10.0, 10.1)
With the common |BSf| value of 0.4, the discharge coefficient
increases more or less exponentially with soil moisture.
For soil moisture values slightly below zero or above usable
field capacity, plausible amounts of generated direct runoff
are ensured.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
for k in range(con.nhru):
if con.lnk[k] == WASSER:
flu.qdb[k] = 0.
elif ((con.lnk[k] in (VERS, FLUSS, SEE)) or
(con.nfk[k] <= 0.)):
flu.qdb[k] = flu.wada[k]
else:
if sta.bowa[k] < con.nfk[k]:
aid.sfa[k] = (
(1.-sta.bowa[k]/con.nfk[k])**(1./(con.bsf[k]+1.)) -
(flu.wada[k]/((con.bsf[k]+1.)*con.nfk[k])))
else:
aid.sfa[k] = 0.
aid.exz[k] = sta.bowa[k]+flu.wada[k]-con.nfk[k]
flu.qdb[k] = aid.exz[k]
if aid.sfa[k] > 0.:
flu.qdb[k] += aid.sfa[k]**(con.bsf[k]+1.)*con.nfk[k]
flu.qdb[k] = max(flu.qdb[k], 0.) | [
"def",
"calc_qdb_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"aid",
"=",
"self",
".",
"sequences",
".",
"aides",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"==",
"WASSER",
":",
"flu",
".",
"qdb",
"[",
"k",
"]",
"=",
"0.",
"elif",
"(",
"(",
"con",
".",
"lnk",
"[",
"k",
"]",
"in",
"(",
"VERS",
",",
"FLUSS",
",",
"SEE",
")",
")",
"or",
"(",
"con",
".",
"nfk",
"[",
"k",
"]",
"<=",
"0.",
")",
")",
":",
"flu",
".",
"qdb",
"[",
"k",
"]",
"=",
"flu",
".",
"wada",
"[",
"k",
"]",
"else",
":",
"if",
"sta",
".",
"bowa",
"[",
"k",
"]",
"<",
"con",
".",
"nfk",
"[",
"k",
"]",
":",
"aid",
".",
"sfa",
"[",
"k",
"]",
"=",
"(",
"(",
"1.",
"-",
"sta",
".",
"bowa",
"[",
"k",
"]",
"/",
"con",
".",
"nfk",
"[",
"k",
"]",
")",
"**",
"(",
"1.",
"/",
"(",
"con",
".",
"bsf",
"[",
"k",
"]",
"+",
"1.",
")",
")",
"-",
"(",
"flu",
".",
"wada",
"[",
"k",
"]",
"/",
"(",
"(",
"con",
".",
"bsf",
"[",
"k",
"]",
"+",
"1.",
")",
"*",
"con",
".",
"nfk",
"[",
"k",
"]",
")",
")",
")",
"else",
":",
"aid",
".",
"sfa",
"[",
"k",
"]",
"=",
"0.",
"aid",
".",
"exz",
"[",
"k",
"]",
"=",
"sta",
".",
"bowa",
"[",
"k",
"]",
"+",
"flu",
".",
"wada",
"[",
"k",
"]",
"-",
"con",
".",
"nfk",
"[",
"k",
"]",
"flu",
".",
"qdb",
"[",
"k",
"]",
"=",
"aid",
".",
"exz",
"[",
"k",
"]",
"if",
"aid",
".",
"sfa",
"[",
"k",
"]",
">",
"0.",
":",
"flu",
".",
"qdb",
"[",
"k",
"]",
"+=",
"aid",
".",
"sfa",
"[",
"k",
"]",
"**",
"(",
"con",
".",
"bsf",
"[",
"k",
"]",
"+",
"1.",
")",
"*",
"con",
".",
"nfk",
"[",
"k",
"]",
"flu",
".",
"qdb",
"[",
"k",
"]",
"=",
"max",
"(",
"flu",
".",
"qdb",
"[",
"k",
"]",
",",
"0.",
")"
] | Calculate direct runoff released from the soil.
Required control parameters:
|NHRU|
|Lnk|
|NFk|
|BSf|
Required state sequence:
|BoWa|
Required flux sequence:
|WaDa|
Calculated flux sequence:
|QDB|
Basic equations:
:math:`QDB = \\Bigl \\lbrace
{
{max(Exz, 0) \\ | \\ SfA \\leq 0}
\\atop
{max(Exz + NFk \\cdot SfA^{BSf+1}, 0) \\ | \\ SfA > 0}
}`
:math:`SFA = (1 - \\frac{BoWa}{NFk})^\\frac{1}{BSf+1} -
\\frac{WaDa}{(BSf+1) \\cdot NFk}`
:math:`Exz = (BoWa + WaDa) - NFk`
Examples:
For water areas (|FLUSS| and |SEE|), sealed areas (|VERS|), and
areas without any soil storage capacity, all water is completely
routed as direct runoff |QDB| (see the first four HRUs). No
principal distinction is made between the remaining land use
classes (arable land |ACKER| has been selected for the last five
HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> nhru(9)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER, ACKER, ACKER)
>>> bsf(0.4)
>>> nfk(100.0, 100.0, 100.0, 0.0, 100.0, 100.0, 100.0, 100.0, 100.0)
>>> fluxes.wada = 10.0
>>> states.bowa = (
... 100.0, 100.0, 100.0, 0.0, -0.1, 0.0, 50.0, 100.0, 100.1)
>>> model.calc_qdb_v1()
>>> fluxes.qdb
qdb(10.0, 10.0, 10.0, 10.0, 0.142039, 0.144959, 1.993649, 10.0, 10.1)
With the common |BSf| value of 0.4, the discharge coefficient
increases more or less exponentially with soil moisture.
For soil moisture values slightly below zero or above usable
field capacity, plausible amounts of generated direct runoff
are ensured. | [
"Calculate",
"direct",
"runoff",
"released",
"from",
"the",
"soil",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1052-L1131 | train |
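The two-case basic equation of `calc_qdb_v1` (saturation-area approach with excess term `Exz` and saturation fraction `SfA`) can be sketched as a scalar function; this is a simplified stand-in for the per-HRU loop above, covering only the soil-containing case plus the `NFk <= 0` shortcut:

```python
def qdb_sketch(bowa, wada, nfk, bsf):
    """Direct runoff from one soil unit (sketch of calc_qdb_v1):
    QDB = max(Exz, 0)                        for SfA <= 0
    QDB = max(Exz + NFk * SfA**(BSf+1), 0)   for SfA > 0
    with Exz = BoWa + WaDa - NFk."""
    if nfk <= 0.0:
        return wada  # no soil storage: route everything directly
    if bowa < nfk:
        sfa = ((1.0 - bowa / nfk) ** (1.0 / (bsf + 1.0))
               - wada / ((bsf + 1.0) * nfk))
    else:
        sfa = 0.0
    exz = bowa + wada - nfk
    qdb = exz
    if sfa > 0.0:
        qdb += sfa ** (bsf + 1.0) * nfk
    return max(qdb, 0.0)

# seventh HRU of the doctest above (half-saturated soil, bsf=0.4):
print(round(qdb_sketch(50.0, 10.0, 100.0, 0.4), 6))
```

The printed value matches the 1.993649 of `fluxes.qdb` in the doctest, and for an empty soil the sketch likewise reproduces the small but positive runoff (about 0.145 mm) shown there.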
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_bowa_v1 | def calc_bowa_v1(self):
"""Update soil moisture and correct fluxes if necessary.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequence:
|WaDa|
Updated state sequence:
|BoWa|
Required (and possibly corrected) flux sequences:
|EvB|
|QBB|
|QIB1|
|QIB2|
|QDB|
Basic equations:
:math:`\\frac{dBoWa}{dt} = WaDa - EvB - QBB - QIB1 - QIB2 - QDB`
:math:`BoWa \\geq 0`
Examples:
For water areas (|FLUSS| and |SEE|) and sealed areas (|VERS|),
soil moisture |BoWa| is simply set to zero and no flux corrections
are performed (see the first three HRUs). No principal distinction
is made between the remaining land use classes (arable land |ACKER|
has been selected for the last four HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> states.bowa = 2.0
>>> fluxes.wada = 1.0
>>> fluxes.evb = 1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.3
>>> fluxes.qbb = 1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.6
>>> fluxes.qib1 = 1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.9
>>> fluxes.qib2 = 1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 1.2
>>> fluxes.qdb = 1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.5
>>> model.calc_bowa_v1()
>>> states.bowa
bowa(0.0, 0.0, 0.0, 3.0, 1.5, 0.0, 0.0)
>>> fluxes.evb
evb(1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.2)
>>> fluxes.qbb
qbb(1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.4)
>>> fluxes.qib1
qib1(1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.6)
>>> fluxes.qib2
qib2(1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 0.8)
>>> fluxes.qdb
qdb(1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.0)
For the seventh HRU, the original total loss terms would result in a
negative soil moisture value. Hence it is reduced to the total loss
term of the sixth HRU, which results exactly in a complete emptying
of the soil storage.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (VERS, WASSER, FLUSS, SEE):
sta.bowa[k] = 0.
else:
aid.bvl[k] = (
flu.evb[k]+flu.qbb[k]+flu.qib1[k]+flu.qib2[k]+flu.qdb[k])
aid.mvl[k] = sta.bowa[k]+flu.wada[k]
if aid.bvl[k] > aid.mvl[k]:
aid.rvl[k] = aid.mvl[k]/aid.bvl[k]
flu.evb[k] *= aid.rvl[k]
flu.qbb[k] *= aid.rvl[k]
flu.qib1[k] *= aid.rvl[k]
flu.qib2[k] *= aid.rvl[k]
flu.qdb[k] *= aid.rvl[k]
sta.bowa[k] = 0.
else:
sta.bowa[k] = aid.mvl[k]-aid.bvl[k] | python | def calc_bowa_v1(self):
"""Update soil moisture and correct fluxes if necessary.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequence:
|WaDa|
Updated state sequence:
|BoWa|
Required (and possibly corrected) flux sequences:
|EvB|
|QBB|
|QIB1|
|QIB2|
|QDB|
Basic equations:
:math:`\\frac{dBoWa}{dt} = WaDa - EvB - QBB - QIB1 - QIB2 - QDB`
:math:`BoWa \\geq 0`
Examples:
For water areas (|FLUSS| and |SEE|) and sealed areas (|VERS|),
soil moisture |BoWa| is simply set to zero and no flux corrections
are performed (see the first three HRUs). No principal distinction
is made between the remaining land use classes (arable land |ACKER|
has been selected for the last four HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> states.bowa = 2.0
>>> fluxes.wada = 1.0
>>> fluxes.evb = 1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.3
>>> fluxes.qbb = 1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.6
>>> fluxes.qib1 = 1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.9
>>> fluxes.qib2 = 1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 1.2
>>> fluxes.qdb = 1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.5
>>> model.calc_bowa_v1()
>>> states.bowa
bowa(0.0, 0.0, 0.0, 3.0, 1.5, 0.0, 0.0)
>>> fluxes.evb
evb(1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.2)
>>> fluxes.qbb
qbb(1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.4)
>>> fluxes.qib1
qib1(1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.6)
>>> fluxes.qib2
qib2(1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 0.8)
>>> fluxes.qdb
qdb(1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.0)
For the seventh HRU, the original total loss terms would result in a
negative soil moisture value. Hence it is reduced to the total loss
term of the sixth HRU, which results exactly in a complete emptying
of the soil storage.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
aid = self.sequences.aides.fastaccess
for k in range(con.nhru):
if con.lnk[k] in (VERS, WASSER, FLUSS, SEE):
sta.bowa[k] = 0.
else:
aid.bvl[k] = (
flu.evb[k]+flu.qbb[k]+flu.qib1[k]+flu.qib2[k]+flu.qdb[k])
aid.mvl[k] = sta.bowa[k]+flu.wada[k]
if aid.bvl[k] > aid.mvl[k]:
aid.rvl[k] = aid.mvl[k]/aid.bvl[k]
flu.evb[k] *= aid.rvl[k]
flu.qbb[k] *= aid.rvl[k]
flu.qib1[k] *= aid.rvl[k]
flu.qib2[k] *= aid.rvl[k]
flu.qdb[k] *= aid.rvl[k]
sta.bowa[k] = 0.
else:
sta.bowa[k] = aid.mvl[k]-aid.bvl[k] | [
"def",
"calc_bowa_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"aid",
"=",
"self",
".",
"sequences",
".",
"aides",
".",
"fastaccess",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"in",
"(",
"VERS",
",",
"WASSER",
",",
"FLUSS",
",",
"SEE",
")",
":",
"sta",
".",
"bowa",
"[",
"k",
"]",
"=",
"0.",
"else",
":",
"aid",
".",
"bvl",
"[",
"k",
"]",
"=",
"(",
"flu",
".",
"evb",
"[",
"k",
"]",
"+",
"flu",
".",
"qbb",
"[",
"k",
"]",
"+",
"flu",
".",
"qib1",
"[",
"k",
"]",
"+",
"flu",
".",
"qib2",
"[",
"k",
"]",
"+",
"flu",
".",
"qdb",
"[",
"k",
"]",
")",
"aid",
".",
"mvl",
"[",
"k",
"]",
"=",
"sta",
".",
"bowa",
"[",
"k",
"]",
"+",
"flu",
".",
"wada",
"[",
"k",
"]",
"if",
"aid",
".",
"bvl",
"[",
"k",
"]",
">",
"aid",
".",
"mvl",
"[",
"k",
"]",
":",
"aid",
".",
"rvl",
"[",
"k",
"]",
"=",
"aid",
".",
"mvl",
"[",
"k",
"]",
"/",
"aid",
".",
"bvl",
"[",
"k",
"]",
"flu",
".",
"evb",
"[",
"k",
"]",
"*=",
"aid",
".",
"rvl",
"[",
"k",
"]",
"flu",
".",
"qbb",
"[",
"k",
"]",
"*=",
"aid",
".",
"rvl",
"[",
"k",
"]",
"flu",
".",
"qib1",
"[",
"k",
"]",
"*=",
"aid",
".",
"rvl",
"[",
"k",
"]",
"flu",
".",
"qib2",
"[",
"k",
"]",
"*=",
"aid",
".",
"rvl",
"[",
"k",
"]",
"flu",
".",
"qdb",
"[",
"k",
"]",
"*=",
"aid",
".",
"rvl",
"[",
"k",
"]",
"sta",
".",
"bowa",
"[",
"k",
"]",
"=",
"0.",
"else",
":",
"sta",
".",
"bowa",
"[",
"k",
"]",
"=",
"aid",
".",
"mvl",
"[",
"k",
"]",
"-",
"aid",
".",
"bvl",
"[",
"k",
"]"
] | Update soil moisture and correct fluxes if necessary.
Required control parameters:
|NHRU|
|Lnk|
Required flux sequence:
|WaDa|
Updated state sequence:
|BoWa|
Required (and possibly corrected) flux sequences:
|EvB|
|QBB|
|QIB1|
|QIB2|
|QDB|
Basic equations:
:math:`\\frac{dBoWa}{dt} = WaDa - EvB - QBB - QIB1 - QIB2 - QDB`
:math:`BoWa \\geq 0`
Examples:
For water areas (|FLUSS| and |SEE|) and sealed areas (|VERS|),
soil moisture |BoWa| is simply set to zero and no flux corrections
are performed (see the first three HRUs). No principal distinction
is made between the remaining land use classes (arable land |ACKER|
has been selected for the last four HRUs arbitrarily):
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> nhru(7)
>>> lnk(FLUSS, SEE, VERS, ACKER, ACKER, ACKER, ACKER)
>>> states.bowa = 2.0
>>> fluxes.wada = 1.0
>>> fluxes.evb = 1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.3
>>> fluxes.qbb = 1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.6
>>> fluxes.qib1 = 1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.9
>>> fluxes.qib2 = 1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 1.2
>>> fluxes.qdb = 1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.5
>>> model.calc_bowa_v1()
>>> states.bowa
bowa(0.0, 0.0, 0.0, 3.0, 1.5, 0.0, 0.0)
>>> fluxes.evb
evb(1.0, 1.0, 1.0, 0.0, 0.1, 0.2, 0.2)
>>> fluxes.qbb
qbb(1.0, 1.0, 1.0, 0.0, 0.2, 0.4, 0.4)
>>> fluxes.qib1
qib1(1.0, 1.0, 1.0, 0.0, 0.3, 0.6, 0.6)
>>> fluxes.qib2
qib2(1.0, 1.0, 1.0, 0.0, 0.4, 0.8, 0.8)
>>> fluxes.qdb
qdb(1.0, 1.0, 1.0, 0.0, 0.5, 1.0, 1.0)
For the seventh HRU, the original total loss terms would result in a
negative soil moisture value. Hence it is reduced to the total loss
term of the sixth HRU, which results exactly in a complete emptying
of the soil storage. | [
"Update",
"soil",
"moisture",
"and",
"correct",
"fluxes",
"if",
"necessary",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1134-L1216 | train |
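The proportional flux correction in `calc_bowa_v1` — reducing all five loss terms by the same factor whenever their sum would drive soil moisture negative — can be demonstrated with a small sketch (a simplified scalar version of the per-HRU loop above):

```python
def update_bowa_sketch(bowa, wada, losses):
    """Water balance of one soil unit: subtract all loss terms from the
    available water (BoWa + WaDa); if the losses exceed what is available,
    scale every loss term by the same factor and empty the storage."""
    available = bowa + wada
    total = sum(losses)
    if total > available:
        factor = available / total
        losses = [loss * factor for loss in losses]
        bowa = 0.0
    else:
        bowa = available - total
    return bowa, losses

# seventh HRU of the doctest above: losses sum to 4.5 mm,
# but only 3.0 mm (bowa=2.0 plus wada=1.0) is available:
bowa, losses = update_bowa_sketch(2.0, 1.0, [0.3, 0.6, 0.9, 1.2, 1.5])
print(bowa)                                 # 0.0
print([round(loss, 6) for loss in losses])  # [0.2, 0.4, 0.6, 0.8, 1.0]
```

The reduction factor 3.0/4.5 = 2/3 yields exactly the corrected `evb`, `qbb`, `qib1`, `qib2`, and `qdb` values shown for the seventh HRU in the doctest.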
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qbgz_v1 | def calc_qbgz_v1(self):
"""Aggregate the amount of base flow released by all "soil type" HRUs
and the "net precipitation" above water areas of type |SEE|.
Water areas of type |SEE| are assumed to be directly connected with
groundwater, but not with the stream network. This is modelled by
adding their (positive or negative) "net input" (|NKor|-|EvI|) to the
"percolation output" of the soil containing HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QBB|
|NKor|
|EvI|
Calculated state sequence:
|QBGZ|
Basic equation:
:math:`QBGZ = \\Sigma(FHRU \\cdot QBB) +
\\Sigma(FHRU \\cdot (NKor_{SEE}-EvI_{SEE}))`
Examples:
The first example shows that |QBGZ| is the area weighted sum of
|QBB| from "soil type" HRUs like arable land (|ACKER|) and of
|NKor|-|EvI| from water areas of type |SEE|. All other water
areas (|WASSER| and |FLUSS|) and also sealed surfaces (|VERS|)
have no impact on |QBGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(6)
>>> lnk(ACKER, ACKER, VERS, WASSER, FLUSS, SEE)
>>> fhru(0.1, 0.2, 0.1, 0.1, 0.1, 0.4)
>>> fluxes.qbb = 2., 4.0, 300.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(5.0)
The second example shows that large evaporation values above a
HRU of type |SEE| can result in negative values of |QBGZ|:
>>> fluxes.evi[5] = 30
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(-3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qbgz = 0.
for k in range(con.nhru):
if con.lnk[k] == SEE:
sta.qbgz += con.fhru[k]*(flu.nkor[k]-flu.evi[k])
elif con.lnk[k] not in (WASSER, FLUSS, VERS):
sta.qbgz += con.fhru[k]*flu.qbb[k] | python | def calc_qbgz_v1(self):
"""Aggregate the amount of base flow released by all "soil type" HRUs
and the "net precipitation" above water areas of type |SEE|.
Water areas of type |SEE| are assumed to be directly connected with
groundwater, but not with the stream network. This is modelled by
adding their (positive or negative) "net input" (|NKor|-|EvI|) to the
"percolation output" of the soil containing HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QBB|
|NKor|
|EvI|
Calculated state sequence:
|QBGZ|
Basic equation:
:math:`QBGZ = \\Sigma(FHRU \\cdot QBB) +
\\Sigma(FHRU \\cdot (NKor_{SEE}-EvI_{SEE}))`
Examples:
The first example shows that |QBGZ| is the area weighted sum of
|QBB| from "soil type" HRUs like arable land (|ACKER|) and of
|NKor|-|EvI| from water areas of type |SEE|. All other water
areas (|WASSER| and |FLUSS|) and also sealed surfaces (|VERS|)
have no impact on |QBGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(6)
>>> lnk(ACKER, ACKER, VERS, WASSER, FLUSS, SEE)
>>> fhru(0.1, 0.2, 0.1, 0.1, 0.1, 0.4)
>>> fluxes.qbb = 2., 4.0, 300.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(5.0)
The second example shows that large evaporation values above a
HRU of type |SEE| can result in negative values of |QBGZ|:
>>> fluxes.evi[5] = 30
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(-3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qbgz = 0.
for k in range(con.nhru):
if con.lnk[k] == SEE:
sta.qbgz += con.fhru[k]*(flu.nkor[k]-flu.evi[k])
elif con.lnk[k] not in (WASSER, FLUSS, VERS):
sta.qbgz += con.fhru[k]*flu.qbb[k] | [
"def",
"calc_qbgz_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"sta",
".",
"qbgz",
"=",
"0.",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"==",
"SEE",
":",
"sta",
".",
"qbgz",
"+=",
"con",
".",
"fhru",
"[",
"k",
"]",
"*",
"(",
"flu",
".",
"nkor",
"[",
"k",
"]",
"-",
"flu",
".",
"evi",
"[",
"k",
"]",
")",
"elif",
"con",
".",
"lnk",
"[",
"k",
"]",
"not",
"in",
"(",
"WASSER",
",",
"FLUSS",
",",
"VERS",
")",
":",
"sta",
".",
"qbgz",
"+=",
"con",
".",
"fhru",
"[",
"k",
"]",
"*",
"flu",
".",
"qbb",
"[",
"k",
"]"
] | Aggregate the amount of base flow released by all "soil type" HRUs
and the "net precipitation" above water areas of type |SEE|.
Water areas of type |SEE| are assumed to be directly connected with
groundwater, but not with the stream network. This is modelled by
adding their (positive or negative) "net input" (|NKor|-|EvI|) to the
"percolation output" of the soil containing HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QBB|
|NKor|
|EvI|
Calculated state sequence:
|QBGZ|
Basic equation:
:math:`QBGZ = \\Sigma(FHRU \\cdot QBB) +
\\Sigma(FHRU \\cdot (NKor_{SEE}-EvI_{SEE}))`
Examples:
The first example shows that |QBGZ| is the area weighted sum of
|QBB| from "soil type" HRUs like arable land (|ACKER|) and of
|NKor|-|EvI| from water areas of type |SEE|. All other water
areas (|WASSER| and |FLUSS|) and also sealed surfaces (|VERS|)
have no impact on |QBGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(6)
>>> lnk(ACKER, ACKER, VERS, WASSER, FLUSS, SEE)
>>> fhru(0.1, 0.2, 0.1, 0.1, 0.1, 0.4)
>>> fluxes.qbb = 2., 4.0, 300.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(5.0)
The second example shows that large evaporation values above a
HRU of type |SEE| can result in negative values of |QBGZ|:
>>> fluxes.evi[5] = 30
>>> model.calc_qbgz_v1()
>>> states.qbgz
qbgz(-3.0) | [
"Aggregate",
"the",
"amount",
"of",
"base",
"flow",
"released",
"by",
"all",
"soil",
"type",
"HRUs",
"and",
"the",
"net",
"precipitation",
"above",
"water",
"areas",
"of",
"type",
"|SEE|",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1219-L1281 | train |
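The area-weighted aggregation of `calc_qbgz_v1` can be reproduced with a short sketch; the land-use constants are placeholders standing in for the |ACKER|, |VERS|, |WASSER|, |FLUSS|, and |SEE| constants of the `lland` model:

```python
# placeholder land-use codes (the real constants come from hydpy.models.lland):
ACKER, VERS, WASSER, FLUSS, SEE = range(5)

def qbgz_sketch(lnk, fhru, qbb, nkor, evi):
    """Base flow input to groundwater: area-weighted QBB of soil HRUs
    plus the net input (NKor - EvI) of SEE water areas; VERS, WASSER,
    and FLUSS HRUs contribute nothing."""
    qbgz = 0.0
    for k, landuse in enumerate(lnk):
        if landuse == SEE:
            qbgz += fhru[k] * (nkor[k] - evi[k])
        elif landuse not in (WASSER, FLUSS, VERS):
            qbgz += fhru[k] * qbb[k]
    return qbgz

# first example of the doctest above:
print(qbgz_sketch(
    [ACKER, ACKER, VERS, WASSER, FLUSS, SEE],
    [0.1, 0.2, 0.1, 0.1, 0.1, 0.4],
    [2.0, 4.0, 300.0, 300.0, 300.0, 300.0],
    [200.0] * 5 + [20.0],
    [100.0] * 5 + [10.0]))  # 5.0
```

Raising `evi` of the |SEE| HRU to 30.0 turns its net input negative and reproduces the -3.0 of the second doctest example.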
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qigz1_v1 | def calc_qigz1_v1(self):
"""Aggregate the amount of the first interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB1|
Calculated state sequence:
|QIGZ1|
Basic equation:
:math:`QIGZ1 = \\Sigma(FHRU \\cdot QIB1)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib1 = 1.0, 5.0
>>> model.calc_qigz1_v1()
>>> states.qigz1
qigz1(2.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qigz1 = 0.
for k in range(con.nhru):
sta.qigz1 += con.fhru[k]*flu.qib1[k] | python | def calc_qigz1_v1(self):
"""Aggregate the amount of the first interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB1|
Calculated state sequence:
|QIGZ1|
Basic equation:
:math:`QIGZ1 = \\Sigma(FHRU \\cdot QIB1)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib1 = 1.0, 5.0
>>> model.calc_qigz1_v1()
>>> states.qigz1
qigz1(2.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qigz1 = 0.
for k in range(con.nhru):
sta.qigz1 += con.fhru[k]*flu.qib1[k] | [
"def",
"calc_qigz1_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"sta",
".",
"qigz1",
"=",
"0.",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"sta",
".",
"qigz1",
"+=",
"con",
".",
"fhru",
"[",
"k",
"]",
"*",
"flu",
".",
"qib1",
"[",
"k",
"]"
] | Aggregate the amount of the first interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB1|
Calculated state sequence:
|QIGZ1|
Basic equation:
:math:`QIGZ1 = \\Sigma(FHRU \\cdot QIB1)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib1 = 1.0, 5.0
>>> model.calc_qigz1_v1()
>>> states.qigz1
qigz1(2.0) | [
"Aggregate",
"the",
"amount",
"of",
"the",
"first",
"interflow",
"component",
"released",
"by",
"all",
"HRUs",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1284-L1317 | train |
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qigz2_v1 | def calc_qigz2_v1(self):
"""Aggregate the amount of the second interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB2|
Calculated state sequence:
|QIGZ2|
Basic equation:
:math:`QIGZ2 = \\Sigma(FHRU \\cdot QIB2)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib2 = 1.0, 5.0
>>> model.calc_qigz2_v1()
>>> states.qigz2
qigz2(2.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qigz2 = 0.
for k in range(con.nhru):
sta.qigz2 += con.fhru[k]*flu.qib2[k] | python | def calc_qigz2_v1(self):
"""Aggregate the amount of the second interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB2|
Calculated state sequence:
|QIGZ2|
Basic equation:
:math:`QIGZ2 = \\Sigma(FHRU \\cdot QIB2)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib2 = 1.0, 5.0
>>> model.calc_qigz2_v1()
>>> states.qigz2
qigz2(2.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
sta.qigz2 = 0.
for k in range(con.nhru):
sta.qigz2 += con.fhru[k]*flu.qib2[k] | [
"def",
"calc_qigz2_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"sta",
".",
"qigz2",
"=",
"0.",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"sta",
".",
"qigz2",
"+=",
"con",
".",
"fhru",
"[",
"k",
"]",
"*",
"flu",
".",
"qib2",
"[",
"k",
"]"
] | Aggregate the amount of the second interflow component released
by all HRUs.
Required control parameters:
|NHRU|
|FHRU|
Required flux sequence:
|QIB2|
Calculated state sequence:
|QIGZ2|
Basic equation:
:math:`QIGZ2 = \\Sigma(FHRU \\cdot QIB2)`
Example:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(2)
>>> fhru(0.75, 0.25)
>>> fluxes.qib2 = 1.0, 5.0
>>> model.calc_qigz2_v1()
>>> states.qigz2
qigz2(2.0) | [
"Aggregate",
"the",
"amount",
"of",
"the",
"second",
"interflow",
"component",
"released",
"by",
"all",
"HRUs",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1320-L1353 | train |
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qdgz_v1 | def calc_qdgz_v1(self):
"""Aggregate the amount of total direct flow released by all HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QDB|
|NKor|
|EvI|
Calculated flux sequence:
|QDGZ|
Basic equation:
:math:`QDGZ = \\Sigma(FHRU \\cdot QDB) +
\\Sigma(FHRU \\cdot (NKor_{FLUSS}-EvI_{FLUSS}))`
Examples:
The first example shows that |QDGZ| is the area weighted sum of
|QDB| from "land type" HRUs like arable land (|ACKER|) and sealed
surfaces (|VERS|) as well as of |NKor|-|EvI| from water areas of
type |FLUSS|. Water areas of type |WASSER| and |SEE| have no
impact on |QDGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(5)
>>> lnk(ACKER, VERS, WASSER, SEE, FLUSS)
>>> fhru(0.1, 0.2, 0.1, 0.2, 0.4)
>>> fluxes.qdb = 2., 4.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(5.0)
The second example shows that large evaporation values above a
HRU of type |FLUSS| can result in negative values of |QDGZ|:
>>> fluxes.evi[4] = 30
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(-3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
flu.qdgz = 0.
for k in range(con.nhru):
if con.lnk[k] == FLUSS:
flu.qdgz += con.fhru[k]*(flu.nkor[k]-flu.evi[k])
elif con.lnk[k] not in (WASSER, SEE):
flu.qdgz += con.fhru[k]*flu.qdb[k] | python | def calc_qdgz_v1(self):
"""Aggregate the amount of total direct flow released by all HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QDB|
|NKor|
|EvI|
Calculated flux sequence:
|QDGZ|
Basic equation:
:math:`QDGZ = \\Sigma(FHRU \\cdot QDB) +
\\Sigma(FHRU \\cdot (NKor_{FLUSS}-EvI_{FLUSS}))`
Examples:
The first example shows that |QDGZ| is the area weighted sum of
|QDB| from "land type" HRUs like arable land (|ACKER|) and sealed
surfaces (|VERS|) as well as of |NKor|-|EvI| from water areas of
type |FLUSS|. Water areas of type |WASSER| and |SEE| have no
impact on |QDGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(5)
>>> lnk(ACKER, VERS, WASSER, SEE, FLUSS)
>>> fhru(0.1, 0.2, 0.1, 0.2, 0.4)
>>> fluxes.qdb = 2., 4.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(5.0)
The second example shows that large evaporation values above a
HRU of type |FLUSS| can result in negative values of |QDGZ|:
>>> fluxes.evi[4] = 30
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(-3.0)
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
flu.qdgz = 0.
for k in range(con.nhru):
if con.lnk[k] == FLUSS:
flu.qdgz += con.fhru[k]*(flu.nkor[k]-flu.evi[k])
elif con.lnk[k] not in (WASSER, SEE):
flu.qdgz += con.fhru[k]*flu.qdb[k] | [
"def",
"calc_qdgz_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"flu",
".",
"qdgz",
"=",
"0.",
"for",
"k",
"in",
"range",
"(",
"con",
".",
"nhru",
")",
":",
"if",
"con",
".",
"lnk",
"[",
"k",
"]",
"==",
"FLUSS",
":",
"flu",
".",
"qdgz",
"+=",
"con",
".",
"fhru",
"[",
"k",
"]",
"*",
"(",
"flu",
".",
"nkor",
"[",
"k",
"]",
"-",
"flu",
".",
"evi",
"[",
"k",
"]",
")",
"elif",
"con",
".",
"lnk",
"[",
"k",
"]",
"not",
"in",
"(",
"WASSER",
",",
"SEE",
")",
":",
"flu",
".",
"qdgz",
"+=",
"con",
".",
"fhru",
"[",
"k",
"]",
"*",
"flu",
".",
"qdb",
"[",
"k",
"]"
] | Aggregate the amount of total direct flow released by all HRUs.
Required control parameters:
|Lnk|
|NHRU|
|FHRU|
Required flux sequences:
|QDB|
|NKor|
|EvI|
Calculated flux sequence:
|QDGZ|
Basic equation:
:math:`QDGZ = \\Sigma(FHRU \\cdot QDB) +
\\Sigma(FHRU \\cdot (NKor_{FLUSS}-EvI_{FLUSS}))`
Examples:
The first example shows that |QDGZ| is the area weighted sum of
|QDB| from "land type" HRUs like arable land (|ACKER|) and sealed
surfaces (|VERS|) as well as of |NKor|-|EvI| from water areas of
type |FLUSS|. Water areas of type |WASSER| and |SEE| have no
impact on |QDGZ|:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> nhru(5)
>>> lnk(ACKER, VERS, WASSER, SEE, FLUSS)
>>> fhru(0.1, 0.2, 0.1, 0.2, 0.4)
>>> fluxes.qdb = 2., 4.0, 300.0, 300.0, 300.0
>>> fluxes.nkor = 200.0, 200.0, 200.0, 200.0, 20.0
>>> fluxes.evi = 100.0, 100.0, 100.0, 100.0, 10.0
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(5.0)
The second example shows that large evaporation values above a
HRU of type |FLUSS| can result in negative values of |QDGZ|:
>>> fluxes.evi[4] = 30
>>> model.calc_qdgz_v1()
>>> fluxes.qdgz
qdgz(-3.0) | [
"Aggregate",
"the",
"amount",
"of",
"total",
"direct",
"flow",
"released",
"by",
"all",
"HRUs",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1356-L1411 | train |
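The land-use branching of `calc_qdgz_v1` can be checked in isolation. In this sketch the land-use codes are plain strings instead of HydPy's module-level constants `FLUSS`, `WASSER`, and `SEE`, and the helper name `aggregate_qdgz` is an assumption for illustration:

```python
def aggregate_qdgz(lnk, fhru, qdb, nkor, evi):
    """Area-weighted total direct flow: FLUSS areas contribute
    nkor - evi, WASSER and SEE areas contribute nothing, and all
    remaining land types contribute their direct runoff qdb."""
    qdgz = 0.0
    for k, land_type in enumerate(lnk):
        if land_type == "FLUSS":
            qdgz += fhru[k] * (nkor[k] - evi[k])
        elif land_type not in ("WASSER", "SEE"):
            qdgz += fhru[k] * qdb[k]
    return qdgz

# First doctest above: 0.1*2.0 + 0.2*4.0 + 0.4*(20.0 - 10.0) = 5.0
lnk = ["ACKER", "VERS", "WASSER", "SEE", "FLUSS"]
fhru = [0.1, 0.2, 0.1, 0.2, 0.4]
qdb = [2.0, 4.0, 300.0, 300.0, 300.0]
nkor = [200.0, 200.0, 200.0, 200.0, 20.0]
evi = [100.0, 100.0, 100.0, 100.0, 10.0]
print(aggregate_qdgz(lnk, fhru, qdb, nkor, evi))  # 5.0
```

Raising the FLUSS evaporation to 30.0, as in the second doctest, turns the FLUSS contribution into 0.4*(20-30) = -4.0 and the total into -3.0.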
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qdgz1_qdgz2_v1 | def calc_qdgz1_qdgz2_v1(self):
"""Seperate total direct flow into a small and a fast component.
Required control parameters:
|A1|
|A2|
Required flux sequence:
|QDGZ|
Calculated state sequences:
|QDGZ1|
|QDGZ2|
Basic equation:
:math:`QDGZ2 = \\frac{(QDGZ-A2)^2}{QDGZ+A1-A2}`
:math:`QDGZ1 = QDGZ - QDGZ2`
Examples:
The formula for calculating the amount of the fast component of
direct flow is borrowed from the famous curve number approach.
Parameter |A2| would be the initial loss and parameter |A1| the
maximum storage, but one should not take this analogy too seriously.
Instead, with the value of parameter |A1| set to zero, parameter
|A2| just defines the maximum amount of "slow" direct runoff per
time step:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> a1(0.0)
Let us set the value of |A2| to 4 mm/d, which is 2 mm/12h with
respect to the selected simulation step size:
>>> a2(4.0)
>>> a2
a2(4.0)
>>> a2.value
2.0
Define a test function and let it calculate |QDGZ1| and |QDGZ2| for
values of |QDGZ| ranging from -10 to 100 mm/12h:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_qdgz1_qdgz2_v1,
... last_example=6,
... parseqs=(fluxes.qdgz,
... states.qdgz1,
... states.qdgz2))
>>> test.nexts.qdgz = -10.0, 0.0, 1.0, 2.0, 3.0, 100.0
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 2.0 | 0.0 |
| 5 | 3.0 | 2.0 | 1.0 |
| 6 | 100.0 | 2.0 | 98.0 |
Setting |A2| to zero and |A1| to 4 mm/d (or 2 mm/12h) results in
a smoother transition:
>>> a2(0.0)
>>> a1(4.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
--------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 0.666667 | 0.333333 |
| 4 | 2.0 | 1.0 | 1.0 |
| 5 | 3.0 | 1.2 | 1.8 |
| 6 | 100.0 | 1.960784 | 98.039216 |
Alternatively, one can mix these two configurations by setting
the values of both parameters to 2 mm/d (or 1 mm/12h):
>>> a2(2.0)
>>> a1(2.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 1.5 | 0.5 |
| 5 | 3.0 | 1.666667 | 1.333333 |
| 6 | 100.0 | 1.99 | 98.01 |
Note the similarity of the results for very high values of total
direct flow |QDGZ| in all three examples, which converge to the sum
of the values of parameter |A1| and |A2|, representing the maximum
value of `slow` direct flow generation per simulation step.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
if flu.qdgz > con.a2:
sta.qdgz2 = (flu.qdgz-con.a2)**2/(flu.qdgz+con.a1-con.a2)
sta.qdgz1 = flu.qdgz-sta.qdgz2
else:
sta.qdgz2 = 0.
sta.qdgz1 = flu.qdgz | python | def calc_qdgz1_qdgz2_v1(self):
"""Seperate total direct flow into a small and a fast component.
Required control parameters:
|A1|
|A2|
Required flux sequence:
|QDGZ|
Calculated state sequences:
|QDGZ1|
|QDGZ2|
Basic equation:
:math:`QDGZ2 = \\frac{(QDGZ-A2)^2}{QDGZ+A1-A2}`
:math:`QDGZ1 = QDGZ - QDGZ2`
Examples:
The formula for calculating the amount of the fast component of
direct flow is borrowed from the famous curve number approach.
Parameter |A2| would be the initial loss and parameter |A1| the
maximum storage, but one should not take this analogy too seriously.
Instead, with the value of parameter |A1| set to zero, parameter
|A2| just defines the maximum amount of "slow" direct runoff per
time step:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> a1(0.0)
Let us set the value of |A2| to 4 mm/d, which is 2 mm/12h with
respect to the selected simulation step size:
>>> a2(4.0)
>>> a2
a2(4.0)
>>> a2.value
2.0
Define a test function and let it calculate |QDGZ1| and |QDGZ2| for
values of |QDGZ| ranging from -10 to 100 mm/12h:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_qdgz1_qdgz2_v1,
... last_example=6,
... parseqs=(fluxes.qdgz,
... states.qdgz1,
... states.qdgz2))
>>> test.nexts.qdgz = -10.0, 0.0, 1.0, 2.0, 3.0, 100.0
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 2.0 | 0.0 |
| 5 | 3.0 | 2.0 | 1.0 |
| 6 | 100.0 | 2.0 | 98.0 |
Setting |A2| to zero and |A1| to 4 mm/d (or 2 mm/12h) results in
a smoother transition:
>>> a2(0.0)
>>> a1(4.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
--------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 0.666667 | 0.333333 |
| 4 | 2.0 | 1.0 | 1.0 |
| 5 | 3.0 | 1.2 | 1.8 |
| 6 | 100.0 | 1.960784 | 98.039216 |
Alternatively, one can mix these two configurations by setting
the values of both parameters to 2 mm/d (or 1 mm/12h):
>>> a2(2.0)
>>> a1(2.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 1.5 | 0.5 |
| 5 | 3.0 | 1.666667 | 1.333333 |
| 6 | 100.0 | 1.99 | 98.01 |
Note the similarity of the results for very high values of total
direct flow |QDGZ| in all three examples, which converge to the sum
of the values of parameter |A1| and |A2|, representing the maximum
value of `slow` direct flow generation per simulation step.
"""
con = self.parameters.control.fastaccess
flu = self.sequences.fluxes.fastaccess
sta = self.sequences.states.fastaccess
if flu.qdgz > con.a2:
sta.qdgz2 = (flu.qdgz-con.a2)**2/(flu.qdgz+con.a1-con.a2)
sta.qdgz1 = flu.qdgz-sta.qdgz2
else:
sta.qdgz2 = 0.
sta.qdgz1 = flu.qdgz | [
"def",
"calc_qdgz1_qdgz2_v1",
"(",
"self",
")",
":",
"con",
"=",
"self",
".",
"parameters",
".",
"control",
".",
"fastaccess",
"flu",
"=",
"self",
".",
"sequences",
".",
"fluxes",
".",
"fastaccess",
"sta",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess",
"if",
"flu",
".",
"qdgz",
">",
"con",
".",
"a2",
":",
"sta",
".",
"qdgz2",
"=",
"(",
"flu",
".",
"qdgz",
"-",
"con",
".",
"a2",
")",
"**",
"2",
"/",
"(",
"flu",
".",
"qdgz",
"+",
"con",
".",
"a1",
"-",
"con",
".",
"a2",
")",
"sta",
".",
"qdgz1",
"=",
"flu",
".",
"qdgz",
"-",
"sta",
".",
"qdgz2",
"else",
":",
"sta",
".",
"qdgz2",
"=",
"0.",
"sta",
".",
"qdgz1",
"=",
"flu",
".",
"qdgz"
] | Separate total direct flow into a slow and a fast component.
Required control parameters:
|A1|
|A2|
Required flux sequence:
|QDGZ|
Calculated state sequences:
|QDGZ1|
|QDGZ2|
Basic equation:
:math:`QDGZ2 = \\frac{(QDGZ-A2)^2}{QDGZ+A1-A2}`
:math:`QDGZ1 = QDGZ - QDGZ2`
Examples:
The formula for calculating the amount of the fast component of
direct flow is borrowed from the famous curve number approach.
Parameter |A2| would be the initial loss and parameter |A1| the
maximum storage, but one should not take this analogy too seriously.
Instead, with the value of parameter |A1| set to zero, parameter
|A2| just defines the maximum amount of "slow" direct runoff per
time step:
>>> from hydpy.models.lland import *
>>> parameterstep('1d')
>>> simulationstep('12h')
>>> a1(0.0)
Let us set the value of |A2| to 4 mm/d, which is 2 mm/12h with
respect to the selected simulation step size:
>>> a2(4.0)
>>> a2
a2(4.0)
>>> a2.value
2.0
Define a test function and let it calculate |QDGZ1| and |QDGZ2| for
values of |QDGZ| ranging from -10 to 100 mm/12h:
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_qdgz1_qdgz2_v1,
... last_example=6,
... parseqs=(fluxes.qdgz,
... states.qdgz1,
... states.qdgz2))
>>> test.nexts.qdgz = -10.0, 0.0, 1.0, 2.0, 3.0, 100.0
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 2.0 | 0.0 |
| 5 | 3.0 | 2.0 | 1.0 |
| 6 | 100.0 | 2.0 | 98.0 |
Setting |A2| to zero and |A1| to 4 mm/d (or 2 mm/12h) results in
a smoother transition:
>>> a2(0.0)
>>> a1(4.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
--------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 0.666667 | 0.333333 |
| 4 | 2.0 | 1.0 | 1.0 |
| 5 | 3.0 | 1.2 | 1.8 |
| 6 | 100.0 | 1.960784 | 98.039216 |
Alternatively, one can mix these two configurations by setting
the values of both parameters to 2 mm/d (or 1 mm/12h):
>>> a2(2.0)
>>> a1(2.0)
>>> test()
| ex. | qdgz | qdgz1 | qdgz2 |
-------------------------------------
| 1 | -10.0 | -10.0 | 0.0 |
| 2 | 0.0 | 0.0 | 0.0 |
| 3 | 1.0 | 1.0 | 0.0 |
| 4 | 2.0 | 1.5 | 0.5 |
| 5 | 3.0 | 1.666667 | 1.333333 |
| 6 | 100.0 | 1.99 | 98.01 |
Note the similarity of the results for very high values of total
direct flow |QDGZ| in all three examples, which converge to the sum
of the values of parameter |A1| and |A2|, representing the maximum
value of `slow` direct flow generation per simulation step. | [
"Seperate",
"total",
"direct",
"flow",
"into",
"a",
"small",
"and",
"a",
"fast",
"component",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1414-L1520 | train |
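The splitting formula of `calc_qdgz1_qdgz2_v1` is easy to verify on its own. In this sketch, `a1` and `a2` are already the per-time-step values (the doctests above show how HydPy converts 4 mm/d into 2 mm/12h), and `separate_direct_flow` is an illustrative name, not a HydPy function:

```python
def separate_direct_flow(qdgz, a1, a2):
    """Split total direct flow qdgz into a slow part (qdgz1) and a
    fast part (qdgz2) using the curve-number-like working equation."""
    if qdgz > a2:
        qdgz2 = (qdgz - a2) ** 2 / (qdgz + a1 - a2)
        qdgz1 = qdgz - qdgz2
    else:
        # At or below the threshold a2, everything is slow direct flow.
        qdgz2 = 0.0
        qdgz1 = qdgz
    return qdgz1, qdgz2

# First doctest table above (a1 = 0, a2 = 2 mm/12h):
for qdgz in (-10.0, 0.0, 1.0, 2.0, 3.0, 100.0):
    print(qdgz, separate_direct_flow(qdgz, 0.0, 2.0))
```

For large `qdgz`, `qdgz1` approaches `a1 + a2`, matching the convergence noted at the end of the docstring.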
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qbga_v1 | def calc_qbga_v1(self):
"""Perform the runoff concentration calculation for base flow.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KB|
Required flux sequence:
|QBGZ|
Calculated state sequence:
|QBGA|
Basic equation:
:math:`QBGA_{neu} = QBGA_{alt} +
(QBGZ_{alt}-QBGA_{alt}) \\cdot (1-exp(-KB^{-1})) +
(QBGZ_{neu}-QBGZ_{alt}) \\cdot (1-KB\\cdot(1-exp(-KB^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kb(0.1)
>>> states.qbgz.old = 2.0
>>> states.qbgz.new = 4.0
>>> states.qbga.old = 3.0
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kb(0.0)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kb(1e500)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.kb <= 0.:
new.qbga = new.qbgz
elif der.kb > 1e200:
new.qbga = old.qbga+new.qbgz-old.qbgz
else:
d_temp = (1.-modelutils.exp(-1./der.kb))
new.qbga = (old.qbga +
(old.qbgz-old.qbga)*d_temp +
(new.qbgz-old.qbgz)*(1.-der.kb*d_temp)) | python | def calc_qbga_v1(self):
"""Perform the runoff concentration calculation for base flow.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KB|
Required flux sequence:
|QBGZ|
Calculated state sequence:
|QBGA|
Basic equation:
:math:`QBGA_{neu} = QBGA_{alt} +
(QBGZ_{alt}-QBGA_{alt}) \\cdot (1-exp(-KB^{-1})) +
(QBGZ_{neu}-QBGZ_{alt}) \\cdot (1-KB\\cdot(1-exp(-KB^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kb(0.1)
>>> states.qbgz.old = 2.0
>>> states.qbgz.new = 4.0
>>> states.qbga.old = 3.0
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kb(0.0)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kb(1e500)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.kb <= 0.:
new.qbga = new.qbgz
elif der.kb > 1e200:
new.qbga = old.qbga+new.qbgz-old.qbgz
else:
d_temp = (1.-modelutils.exp(-1./der.kb))
new.qbga = (old.qbga +
(old.qbgz-old.qbga)*d_temp +
(new.qbgz-old.qbgz)*(1.-der.kb*d_temp)) | [
"def",
"calc_qbga_v1",
"(",
"self",
")",
":",
"der",
"=",
"self",
".",
"parameters",
".",
"derived",
".",
"fastaccess",
"old",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess_old",
"new",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess_new",
"if",
"der",
".",
"kb",
"<=",
"0.",
":",
"new",
".",
"qbga",
"=",
"new",
".",
"qbgz",
"elif",
"der",
".",
"kb",
">",
"1e200",
":",
"new",
".",
"qbga",
"=",
"old",
".",
"qbga",
"+",
"new",
".",
"qbgz",
"-",
"old",
".",
"qbgz",
"else",
":",
"d_temp",
"=",
"(",
"1.",
"-",
"modelutils",
".",
"exp",
"(",
"-",
"1.",
"/",
"der",
".",
"kb",
")",
")",
"new",
".",
"qbga",
"=",
"(",
"old",
".",
"qbga",
"+",
"(",
"old",
".",
"qbgz",
"-",
"old",
".",
"qbga",
")",
"*",
"d_temp",
"+",
"(",
"new",
".",
"qbgz",
"-",
"old",
".",
"qbgz",
")",
"*",
"(",
"1.",
"-",
"der",
".",
"kb",
"*",
"d_temp",
")",
")"
] | Perform the runoff concentration calculation for base flow.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KB|
Required flux sequence:
|QBGZ|
Calculated state sequence:
|QBGA|
Basic equation:
:math:`QBGA_{neu} = QBGA_{alt} +
(QBGZ_{alt}-QBGA_{alt}) \\cdot (1-exp(-KB^{-1})) +
(QBGZ_{neu}-QBGZ_{alt}) \\cdot (1-KB\\cdot(1-exp(-KB^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.kb(0.1)
>>> states.qbgz.old = 2.0
>>> states.qbgz.new = 4.0
>>> states.qbga.old = 3.0
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.kb(0.0)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.kb(1e500)
>>> model.calc_qbga_v1()
>>> states.qbga
qbga(5.0) | [
"Perform",
"the",
"runoff",
"concentration",
"calculation",
"for",
"base",
"flow",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1523-L1583 | train |
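The working equation of `calc_qbga_v1` is shared by the following runoff concentration methods (`calc_qiga1_v1`, `calc_qiga2_v1`, `calc_qdga1_v1`); only the storage constant (|KB|, |KI1|, |KI2|, or |KD1|) differs. A standalone sketch with the same guards against zero division and numerical overflow, using an assumed generic name `linear_storage_step`:

```python
from math import exp

def linear_storage_step(qz_old, qz_new, qa_old, k):
    """One step of the analytically solved linear storage equation,
    assuming the inflow changes linearly during the time step."""
    if k <= 0.0:
        # No storage: outflow equals the new inflow immediately.
        return qz_new
    if k > 1e200:
        # Huge storage constant: avoid overflow in exp(-1/k) terms.
        return qa_old + qz_new - qz_old
    temp = 1.0 - exp(-1.0 / k)
    return (qa_old
            + (qz_old - qa_old) * temp
            + (qz_new - qz_old) * (1.0 - k * temp))

# The three doctest cases above:
print(round(linear_storage_step(2.0, 4.0, 3.0, 0.1), 6))  # 3.800054
print(linear_storage_step(2.0, 4.0, 3.0, 0.0))            # 4.0
print(linear_storage_step(2.0, 4.0, 3.0, 1e300))          # 5.0
```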
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qiga1_v1 | def calc_qiga1_v1(self):
"""Perform the runoff concentration calculation for the first
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI1|
Required state sequence:
|QIGZ1|
Calculated state sequence:
|QIGA1|
Basic equation:
:math:`QIGA1_{neu} = QIGA1_{alt} +
(QIGZ1_{alt}-QIGA1_{alt}) \\cdot (1-exp(-KI1^{-1})) +
(QIGZ1_{neu}-QIGZ1_{alt}) \\cdot (1-KI1\\cdot(1-exp(-KI1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki1(0.1)
>>> states.qigz1.old = 2.0
>>> states.qigz1.new = 4.0
>>> states.qiga1.old = 3.0
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki1(0.0)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki1(1e500)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.ki1 <= 0.:
new.qiga1 = new.qigz1
elif der.ki1 > 1e200:
new.qiga1 = old.qiga1+new.qigz1-old.qigz1
else:
d_temp = (1.-modelutils.exp(-1./der.ki1))
new.qiga1 = (old.qiga1 +
(old.qigz1-old.qiga1)*d_temp +
(new.qigz1-old.qigz1)*(1.-der.ki1*d_temp)) | python | def calc_qiga1_v1(self):
"""Perform the runoff concentration calculation for the first
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI1|
Required state sequence:
|QIGZ1|
Calculated state sequence:
|QIGA1|
Basic equation:
:math:`QIGA1_{neu} = QIGA1_{alt} +
(QIGZ1_{alt}-QIGA1_{alt}) \\cdot (1-exp(-KI1^{-1})) +
(QIGZ1_{neu}-QIGZ1_{alt}) \\cdot (1-KI1\\cdot(1-exp(-KI1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki1(0.1)
>>> states.qigz1.old = 2.0
>>> states.qigz1.new = 4.0
>>> states.qiga1.old = 3.0
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki1(0.0)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki1(1e500)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.ki1 <= 0.:
new.qiga1 = new.qigz1
elif der.ki1 > 1e200:
new.qiga1 = old.qiga1+new.qigz1-old.qigz1
else:
d_temp = (1.-modelutils.exp(-1./der.ki1))
new.qiga1 = (old.qiga1 +
(old.qigz1-old.qiga1)*d_temp +
(new.qigz1-old.qigz1)*(1.-der.ki1*d_temp)) | [
"def",
"calc_qiga1_v1",
"(",
"self",
")",
":",
"der",
"=",
"self",
".",
"parameters",
".",
"derived",
".",
"fastaccess",
"old",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess_old",
"new",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess_new",
"if",
"der",
".",
"ki1",
"<=",
"0.",
":",
"new",
".",
"qiga1",
"=",
"new",
".",
"qigz1",
"elif",
"der",
".",
"ki1",
">",
"1e200",
":",
"new",
".",
"qiga1",
"=",
"old",
".",
"qiga1",
"+",
"new",
".",
"qigz1",
"-",
"old",
".",
"qigz1",
"else",
":",
"d_temp",
"=",
"(",
"1.",
"-",
"modelutils",
".",
"exp",
"(",
"-",
"1.",
"/",
"der",
".",
"ki1",
")",
")",
"new",
".",
"qiga1",
"=",
"(",
"old",
".",
"qiga1",
"+",
"(",
"old",
".",
"qigz1",
"-",
"old",
".",
"qiga1",
")",
"*",
"d_temp",
"+",
"(",
"new",
".",
"qigz1",
"-",
"old",
".",
"qigz1",
")",
"*",
"(",
"1.",
"-",
"der",
".",
"ki1",
"*",
"d_temp",
")",
")"
] | Perform the runoff concentration calculation for the first
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI1|
Required state sequence:
|QIGZ1|
Calculated state sequence:
|QIGA1|
Basic equation:
:math:`QIGA1_{neu} = QIGA1_{alt} +
(QIGZ1_{alt}-QIGA1_{alt}) \\cdot (1-exp(-KI1^{-1})) +
(QIGZ1_{neu}-QIGZ1_{alt}) \\cdot (1-KI1\\cdot(1-exp(-KI1^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki1(0.1)
>>> states.qigz1.old = 2.0
>>> states.qigz1.new = 4.0
>>> states.qiga1.old = 3.0
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki1(0.0)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki1(1e500)
>>> model.calc_qiga1_v1()
>>> states.qiga1
qiga1(5.0) | [
"Perform",
"the",
"runoff",
"concentration",
"calculation",
"for",
"the",
"first",
"interflow",
"component",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1586-L1647 | train |
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qiga2_v1 | def calc_qiga2_v1(self):
"""Perform the runoff concentration calculation for the second
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI2|
Required state sequence:
|QIGZ2|
Calculated state sequence:
|QIGA2|
Basic equation:
:math:`QIGA2_{neu} = QIGA2_{alt} +
(QIGZ2_{alt}-QIGA2_{alt}) \\cdot (1-exp(-KI2^{-1})) +
(QIGZ2_{neu}-QIGZ2_{alt}) \\cdot (1-KI2\\cdot(1-exp(-KI2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki2(0.1)
>>> states.qigz2.old = 2.0
>>> states.qigz2.new = 4.0
>>> states.qiga2.old = 3.0
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki2(0.0)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki2(1e500)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.ki2 <= 0.:
new.qiga2 = new.qigz2
elif der.ki2 > 1e200:
new.qiga2 = old.qiga2+new.qigz2-old.qigz2
else:
d_temp = (1.-modelutils.exp(-1./der.ki2))
new.qiga2 = (old.qiga2 +
(old.qigz2-old.qiga2)*d_temp +
(new.qigz2-old.qigz2)*(1.-der.ki2*d_temp)) | python | def calc_qiga2_v1(self):
"""Perform the runoff concentration calculation for the second
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI2|
Required state sequence:
|QIGZ2|
Calculated state sequence:
|QIGA2|
Basic equation:
:math:`QIGA2_{neu} = QIGA2_{alt} +
(QIGZ2_{alt}-QIGA2_{alt}) \\cdot (1-exp(-KI2^{-1})) +
(QIGZ2_{neu}-QIGZ2_{alt}) \\cdot (1-KI2\\cdot(1-exp(-KI2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki2(0.1)
>>> states.qigz2.old = 2.0
>>> states.qigz2.new = 4.0
>>> states.qiga2.old = 3.0
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki2(0.0)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki2(1e500)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(5.0)
"""
der = self.parameters.derived.fastaccess
old = self.sequences.states.fastaccess_old
new = self.sequences.states.fastaccess_new
if der.ki2 <= 0.:
new.qiga2 = new.qigz2
elif der.ki2 > 1e200:
new.qiga2 = old.qiga2+new.qigz2-old.qigz2
else:
d_temp = (1.-modelutils.exp(-1./der.ki2))
new.qiga2 = (old.qiga2 +
(old.qigz2-old.qiga2)*d_temp +
(new.qigz2-old.qigz2)*(1.-der.ki2*d_temp)) | [
"def",
"calc_qiga2_v1",
"(",
"self",
")",
":",
"der",
"=",
"self",
".",
"parameters",
".",
"derived",
".",
"fastaccess",
"old",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess_old",
"new",
"=",
"self",
".",
"sequences",
".",
"states",
".",
"fastaccess_new",
"if",
"der",
".",
"ki2",
"<=",
"0.",
":",
"new",
".",
"qiga2",
"=",
"new",
".",
"qigz2",
"elif",
"der",
".",
"ki2",
">",
"1e200",
":",
"new",
".",
"qiga2",
"=",
"old",
".",
"qiga2",
"+",
"new",
".",
"qigz2",
"-",
"old",
".",
"qigz2",
"else",
":",
"d_temp",
"=",
"(",
"1.",
"-",
"modelutils",
".",
"exp",
"(",
"-",
"1.",
"/",
"der",
".",
"ki2",
")",
")",
"new",
".",
"qiga2",
"=",
"(",
"old",
".",
"qiga2",
"+",
"(",
"old",
".",
"qigz2",
"-",
"old",
".",
"qiga2",
")",
"*",
"d_temp",
"+",
"(",
"new",
".",
"qigz2",
"-",
"old",
".",
"qigz2",
")",
"*",
"(",
"1.",
"-",
"der",
".",
"ki2",
"*",
"d_temp",
")",
")"
] | Perform the runoff concentration calculation for the second
interflow component.
The working equation is the analytical solution of the linear storage
equation under the assumption of constant change in inflow during
the simulation time step.
Required derived parameter:
|KI2|
Required state sequence:
|QIGZ2|
Calculated state sequence:
|QIGA2|
Basic equation:
:math:`QIGA2_{neu} = QIGA2_{alt} +
(QIGZ2_{alt}-QIGA2_{alt}) \\cdot (1-exp(-KI2^{-1})) +
(QIGZ2_{neu}-QIGZ2_{alt}) \\cdot (1-KI2\\cdot(1-exp(-KI2^{-1})))`
Examples:
A normal test case:
>>> from hydpy.models.lland import *
>>> parameterstep()
>>> derived.ki2(0.1)
>>> states.qigz2.old = 2.0
>>> states.qigz2.new = 4.0
>>> states.qiga2.old = 3.0
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(3.800054)
First extreme test case (zero division is circumvented):
>>> derived.ki2(0.0)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(4.0)
Second extreme test case (numerical overflow is circumvented):
>>> derived.ki2(1e500)
>>> model.calc_qiga2_v1()
>>> states.qiga2
qiga2(5.0) | [
"Perform",
"the",
"runoff",
"concentration",
"calculation",
"for",
"the",
"second",
"interflow",
"component",
"."
] | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1650-L1711 | train |
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qdga1_v1 | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1714-L1774 | train

def calc_qdga1_v1(self):
    """Perform the runoff concentration calculation for "slow" direct runoff.

    The working equation is the analytical solution of the linear storage
    equation under the assumption of constant change in inflow during
    the simulation time step.

    Required derived parameter:
      |KD1|

    Required state sequence:
      |QDGZ1|

    Calculated state sequence:
      |QDGA1|

    Basic equation:
      :math:`QDGA1_{neu} = QDGA1_{alt} +
      (QDGZ1_{alt}-QDGA1_{alt}) \\cdot (1-exp(-KD1^{-1})) +
      (QDGZ1_{neu}-QDGZ1_{alt}) \\cdot (1-KD1\\cdot(1-exp(-KD1^{-1})))`

    Examples:

        A normal test case:

        >>> from hydpy.models.lland import *
        >>> parameterstep()
        >>> derived.kd1(0.1)
        >>> states.qdgz1.old = 2.0
        >>> states.qdgz1.new = 4.0
        >>> states.qdga1.old = 3.0
        >>> model.calc_qdga1_v1()
        >>> states.qdga1
        qdga1(3.800054)

        First extreme test case (zero division is circumvented):

        >>> derived.kd1(0.0)
        >>> model.calc_qdga1_v1()
        >>> states.qdga1
        qdga1(4.0)

        Second extreme test case (numerical overflow is circumvented):

        >>> derived.kd1(1e500)
        >>> model.calc_qdga1_v1()
        >>> states.qdga1
        qdga1(5.0)
    """
    der = self.parameters.derived.fastaccess
    old = self.sequences.states.fastaccess_old
    new = self.sequences.states.fastaccess_new
    if der.kd1 <= 0.:
        new.qdga1 = new.qdgz1
    elif der.kd1 > 1e200:
        new.qdga1 = old.qdga1+new.qdgz1-old.qdgz1
    else:
        d_temp = (1.-modelutils.exp(-1./der.kd1))
        new.qdga1 = (old.qdga1 +
                     (old.qdgz1-old.qdga1)*d_temp +
                     (new.qdgz1-old.qdgz1)*(1.-der.kd1*d_temp))
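The working equation documented above can be checked outside of hydpy with a small standalone sketch. Note that `linear_storage_outflow` is a hypothetical helper name introduced here for illustration only; it is not part of the hydpy API, but it mirrors the guard structure and the analytical solution used by `calc_qdga1_v1` (and its siblings), reproducing the doctest values:

```python
import math

def linear_storage_outflow(k, qz_old, qz_new, qa_old):
    # Analytical solution of the linear storage equation, assuming the
    # inflow changes linearly from qz_old to qz_new over the time step.
    if k <= 0.0:
        return qz_new                    # no retention: outflow equals inflow
    if k > 1e200:
        return qa_old + qz_new - qz_old  # guard against overflow in exp()
    d = 1.0 - math.exp(-1.0 / k)
    return qa_old + (qz_old - qa_old) * d + (qz_new - qz_old) * (1.0 - k * d)

print(round(linear_storage_outflow(0.1, 2.0, 4.0, 3.0), 6))  # 3.800054
```

With `k = 0.1`, `qz_old = 2.0`, `qz_new = 4.0`, and `qa_old = 3.0`, this reproduces the `qdga1(3.800054)` doctest result, and the two guard branches reproduce the `4.0` and `5.0` extreme cases.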
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_qdga2_v1 | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1777-L1837 | train

def calc_qdga2_v1(self):
    """Perform the runoff concentration calculation for "fast" direct runoff.

    The working equation is the analytical solution of the linear storage
    equation under the assumption of constant change in inflow during
    the simulation time step.

    Required derived parameter:
      |KD2|

    Required state sequence:
      |QDGZ2|

    Calculated state sequence:
      |QDGA2|

    Basic equation:
      :math:`QDGA2_{neu} = QDGA2_{alt} +
      (QDGZ2_{alt}-QDGA2_{alt}) \\cdot (1-exp(-KD2^{-1})) +
      (QDGZ2_{neu}-QDGZ2_{alt}) \\cdot (1-KD2\\cdot(1-exp(-KD2^{-1})))`

    Examples:

        A normal test case:

        >>> from hydpy.models.lland import *
        >>> parameterstep()
        >>> derived.kd2(0.1)
        >>> states.qdgz2.old = 2.0
        >>> states.qdgz2.new = 4.0
        >>> states.qdga2.old = 3.0
        >>> model.calc_qdga2_v1()
        >>> states.qdga2
        qdga2(3.800054)

        First extreme test case (zero division is circumvented):

        >>> derived.kd2(0.0)
        >>> model.calc_qdga2_v1()
        >>> states.qdga2
        qdga2(4.0)

        Second extreme test case (numerical overflow is circumvented):

        >>> derived.kd2(1e500)
        >>> model.calc_qdga2_v1()
        >>> states.qdga2
        qdga2(5.0)
    """
    der = self.parameters.derived.fastaccess
    old = self.sequences.states.fastaccess_old
    new = self.sequences.states.fastaccess_new
    if der.kd2 <= 0.:
        new.qdga2 = new.qdgz2
    elif der.kd2 > 1e200:
        new.qdga2 = old.qdga2+new.qdgz2-old.qdgz2
    else:
        d_temp = (1.-modelutils.exp(-1./der.kd2))
        new.qdga2 = (old.qdga2 +
                     (old.qdgz2-old.qdga2)*d_temp +
                     (new.qdgz2-old.qdgz2)*(1.-der.kd2*d_temp))
hydpy-dev/hydpy | hydpy/models/lland/lland_model.py | calc_q_v1 | python | 1bc6a82cf30786521d86b36e27900c6717d3348d | https://github.com/hydpy-dev/hydpy/blob/1bc6a82cf30786521d86b36e27900c6717d3348d/hydpy/models/lland/lland_model.py#L1840-L2002 | train

def calc_q_v1(self):
    """Calculate the final runoff.

    Note that, in case there are water areas, their |NKor| values are
    added and their |EvPo| values are subtracted from the "potential"
    runoff value, if possible. This holds true for |WASSER| only and is
    due to compatibility with the original LARSIM implementation. Using
    land type |WASSER| can result in problematic modifications of
    simulated runoff series. It seems advisable to use land type |FLUSS|
    and/or land type |SEE| instead.

    Required control parameters:
      |NHRU|
      |FHRU|
      |Lnk|
      |NegQ|

    Required flux sequence:
      |NKor|

    Updated flux sequence:
      |EvI|

    Required state sequences:
      |QBGA|
      |QIGA1|
      |QIGA2|
      |QDGA1|
      |QDGA2|

    Calculated flux sequence:
      |lland_fluxes.Q|

    Basic equations:
      :math:`Q = QBGA + QIGA1 + QIGA2 + QDGA1 + QDGA2 +
      NKor_{WASSER} - EvI_{WASSER}`

      :math:`Q \\geq 0`

    Examples:

        When there are no water areas in the respective subbasin (we
        choose arable land |ACKER| arbitrarily), the different runoff
        components are simply summed up:

        >>> from hydpy.models.lland import *
        >>> parameterstep()
        >>> nhru(3)
        >>> lnk(ACKER, ACKER, ACKER)
        >>> fhru(0.5, 0.2, 0.3)
        >>> negq(False)
        >>> states.qbga = 0.1
        >>> states.qiga1 = 0.3
        >>> states.qiga2 = 0.5
        >>> states.qdga1 = 0.7
        >>> states.qdga2 = 0.9
        >>> fluxes.nkor = 10.0
        >>> fluxes.evi = 4.0, 5.0, 3.0
        >>> model.calc_q_v1()
        >>> fluxes.q
        q(2.5)
        >>> fluxes.evi
        evi(4.0, 5.0, 3.0)

        The defined values of interception evaporation do not show any
        impact on the result of the given example; the predefined values
        for sequence |EvI| remain unchanged. But when the first HRU is
        assumed to be a water area (|WASSER|), its adjusted precipitation
        |NKor| value and its interception evaporation |EvI| value are
        added to and subtracted from |lland_fluxes.Q|, respectively:

        >>> control.lnk(WASSER, VERS, NADELW)
        >>> model.calc_q_v1()
        >>> fluxes.q
        q(5.5)
        >>> fluxes.evi
        evi(4.0, 5.0, 3.0)

        Note that only 5 mm are added (instead of the |NKor| value of
        10 mm) and that only 2 mm are subtracted (instead of the |EvI|
        value of 4 mm), as the first HRU's area only accounts for 50 %
        of the subbasin area.

        Also setting the land use class of the second HRU to land type
        |WASSER| and resetting |NKor| to zero would result in overdrying.
        To avoid this, both actual water evaporation values stored in
        sequence |EvI| are reduced by the same factor:

        >>> control.lnk(WASSER, WASSER, NADELW)
        >>> fluxes.nkor = 0.0
        >>> model.calc_q_v1()
        >>> fluxes.q
        q(0.0)
        >>> fluxes.evi
        evi(3.333333, 4.166667, 3.0)

        The handling of water areas of type |FLUSS| and |SEE| differs
        from that of type |WASSER|, as these receive their net input
        before the runoff concentration routines are applied. This
        should be more realistic in most cases (especially for type
        |SEE|, representing lakes not directly connected to the stream
        network). But it could sometimes result in negative outflow
        values. This is avoided by simply setting |lland_fluxes.Q| to
        zero and adding the truncated negative outflow value to the
        |EvI| value of all HRUs of type |FLUSS| and |SEE|:

        >>> control.lnk(FLUSS, SEE, NADELW)
        >>> states.qbga = -1.0
        >>> states.qdga2 = -1.5
        >>> fluxes.evi = 4.0, 5.0, 3.0
        >>> model.calc_q_v1()
        >>> fluxes.q
        q(0.0)
        >>> fluxes.evi
        evi(2.571429, 3.571429, 3.0)

        This adjustment of |EvI| is only correct regarding the total
        water balance. Neither spatial nor temporal consistency of the
        resulting |EvI| values is assured. In the most extreme case,
        even negative |EvI| values might occur. This seems acceptable,
        as long as the adjustment of |EvI| is rarely triggered. When in
        doubt about this, check sequences |EvPo| and |EvI| of HRUs of
        types |FLUSS| and |SEE| for possible discrepancies. Also note
        that unnecessary corrections of |lland_fluxes.Q| might occur in
        case land type |WASSER| is combined with either land type |SEE|
        or |FLUSS|.

        Eventually you might want to avoid correcting |lland_fluxes.Q|.
        This can be achieved by setting parameter |NegQ| to `True`:

        >>> negq(True)
        >>> fluxes.evi = 4.0, 5.0, 3.0
        >>> model.calc_q_v1()
        >>> fluxes.q
        q(-1.0)
        >>> fluxes.evi
        evi(4.0, 5.0, 3.0)
    """
    con = self.parameters.control.fastaccess
    flu = self.sequences.fluxes.fastaccess
    sta = self.sequences.states.fastaccess
    aid = self.sequences.aides.fastaccess
    flu.q = sta.qbga+sta.qiga1+sta.qiga2+sta.qdga1+sta.qdga2
    if (not con.negq) and (flu.q < 0.):
        d_area = 0.
        for k in range(con.nhru):
            if con.lnk[k] in (FLUSS, SEE):
                d_area += con.fhru[k]
        if d_area > 0.:
            for k in range(con.nhru):
                if con.lnk[k] in (FLUSS, SEE):
                    flu.evi[k] += flu.q/d_area
        flu.q = 0.
    aid.epw = 0.
    for k in range(con.nhru):
        if con.lnk[k] == WASSER:
            flu.q += con.fhru[k]*flu.nkor[k]
            aid.epw += con.fhru[k]*flu.evi[k]
    if (flu.q > aid.epw) or con.negq:
        flu.q -= aid.epw
    elif aid.epw > 0.:
        for k in range(con.nhru):
            if con.lnk[k] == WASSER:
                flu.evi[k] *= flu.q/aid.epw
        flu.q = 0.
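The aggregation step of `calc_q_v1` can be illustrated with a simplified standalone sketch. The function name `aggregate_runoff` and its flat argument list are assumptions made for this example (they are not part of hydpy), and the sketch covers only the component summation and the |WASSER| correction, omitting the |FLUSS|/|SEE| negative-outflow handling and the rescaling of `EvI`:

```python
def aggregate_runoff(qbga, qiga1, qiga2, qdga1, qdga2,
                     is_water, fhru, nkor, evi, negq=False):
    """Sum the five runoff concentration components; for water-area HRUs
    add area-weighted adjusted precipitation and subtract area-weighted
    interception evaporation, truncating at zero unless negq is set."""
    q = qbga + qiga1 + qiga2 + qdga1 + qdga2
    epw = 0.0  # area-weighted evaporation demand of all water HRUs
    for k, water in enumerate(is_water):
        if water:
            q += fhru[k] * nkor[k]
            epw += fhru[k] * evi[k]
    if q > epw or negq:
        return q - epw
    return 0.0  # demand exceeds supply: q truncated (hydpy also rescales EvI)
```

With the doctest values above (components 0.1+0.3+0.5+0.7+0.9 and no water HRUs) this yields 2.5; marking the first HRU (area share 0.5, `nkor` 10.0, `evi` 4.0) as water yields 5.5, matching the `q(2.5)` and `q(5.5)` examples.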