Columns: body_hash (64 chars) | body | docstring | path | name | repository_name | repository_stars | lang | body_without_docstring | unified
geemap/common.py | image_min_value | arheem/geemap | 1 | python

def image_min_value(img, region=None, scale=None):
    """Retrieves the minimum value of an image.

    Args:
        img (object): The image to calculate the minimum value.
        region (object, optional): The region over which to reduce data. Defaults to the footprint of the image's first band.
        scale (float, optional): A nominal scale in meters of the projection to work in. Defaults to None.

    Returns:
        object: ee.Number
    """
    if region is None:
        region = img.geometry()
    if scale is None:
        scale = image_scale(img)
    min_value = img.reduceRegion(**{
        'reducer': ee.Reducer.min(),
        'geometry': region,
        'scale': scale,
        'maxPixels': 1e12,
    })
    return min_value
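The wrapper above (and the similar mean/std/sum wrappers in this file) all follow the same reduceRegion pattern: default the region to the image footprint, default the scale, then apply a reducer over the pixels inside the region. Since running the real thing requires an authenticated Earth Engine session, here is a hedged pure-Python analogue of the idea; `reduce_region` and its data shapes are illustrative assumptions, not geemap or EE API:

```python
def reduce_region(pixels, reducer, region=None):
    """Apply `reducer` to the pixel values whose coordinates fall in `region`.

    pixels: dict mapping (x, y) -> value
    region: optional set of (x, y) coordinates; defaults to all pixels,
            mirroring how the wrapper defaults to img.geometry().
    """
    if region is None:
        region = pixels.keys()
    values = [pixels[xy] for xy in region if xy in pixels]
    return reducer(values)

pixels = {(0, 0): 4, (0, 1): 7, (1, 0): 2, (1, 1): 9}
print(reduce_region(pixels, min))                     # -> 2
print(reduce_region(pixels, min, {(0, 0), (0, 1)}))   # -> 4
```

Swapping the reducer (`min`, `max`, `sum`, a mean) is the only difference between the sibling wrappers, just as they differ only in the `ee.Reducer` passed to `reduceRegion`.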
geemap/common.py | image_mean_value | arheem/geemap | 1 | python

def image_mean_value(img, region=None, scale=None):
    """Retrieves the mean value of an image.

    Args:
        img (object): The image to calculate the mean value.
        region (object, optional): The region over which to reduce data. Defaults to the footprint of the image's first band.
        scale (float, optional): A nominal scale in meters of the projection to work in. Defaults to None.

    Returns:
        object: ee.Number
    """
    if region is None:
        region = img.geometry()
    if scale is None:
        scale = image_scale(img)
    mean_value = img.reduceRegion(**{
        'reducer': ee.Reducer.mean(),
        'geometry': region,
        'scale': scale,
        'maxPixels': 1e12,
    })
    return mean_value
geemap/common.py | image_std_value | arheem/geemap | 1 | python

def image_std_value(img, region=None, scale=None):
    """Retrieves the standard deviation of an image.

    Args:
        img (object): The image to calculate the standard deviation.
        region (object, optional): The region over which to reduce data. Defaults to the footprint of the image's first band.
        scale (float, optional): A nominal scale in meters of the projection to work in. Defaults to None.

    Returns:
        object: ee.Number
    """
    if region is None:
        region = img.geometry()
    if scale is None:
        scale = image_scale(img)
    std_value = img.reduceRegion(**{
        'reducer': ee.Reducer.stdDev(),
        'geometry': region,
        'scale': scale,
        'maxPixels': 1e12,
    })
    return std_value
geemap/common.py | image_sum_value | arheem/geemap | 1 | python

def image_sum_value(img, region=None, scale=None):
    """Retrieves the sum of an image.

    Args:
        img (object): The image to calculate the sum.
        region (object, optional): The region over which to reduce data. Defaults to the footprint of the image's first band.
        scale (float, optional): A nominal scale in meters of the projection to work in. Defaults to None.

    Returns:
        object: ee.Number
    """
    if region is None:
        region = img.geometry()
    if scale is None:
        scale = image_scale(img)
    sum_value = img.reduceRegion(**{
        'reducer': ee.Reducer.sum(),
        'geometry': region,
        'scale': scale,
        'maxPixels': 1e12,
    })
    return sum_value
geemap/common.py | extract_values_to_points | arheem/geemap | 1 | python

def extract_values_to_points(in_fc, image, out_fc=None, properties=None, scale=None, projection=None, tile_scale=1, geometries=True):
    """Extracts image values to points.

    Args:
        in_fc (object): ee.FeatureCollection
        image (object): The ee.Image to extract pixel values from.
        out_fc (str, optional): Output file path. If provided, the result is exported with ee_export_vector; otherwise the ee.FeatureCollection is returned. Defaults to None.
        properties (list, optional): The list of properties to copy from each input feature. Defaults to all non-system properties.
        scale (float, optional): A nominal scale in meters of the projection to sample in. If unspecified, the scale of the image's first band is used.
        projection (str, optional): The projection in which to sample. If unspecified, the projection of the image's first band is used. If specified in addition to scale, rescaled to the specified scale.
        tile_scale (float, optional): A scaling factor used to reduce aggregation tile size; using a larger tileScale (e.g. 2 or 4) may enable computations that run out of memory with the default.
        geometries (bool, optional): If true, the results will include a geometry per sampled pixel. Otherwise, geometries will be omitted (saving memory).

    Returns:
        object: ee.FeatureCollection
    """
    if not isinstance(in_fc, ee.FeatureCollection):
        try:
            in_fc = shp_to_ee(in_fc)
        except Exception as e:
            print(e)
            return
    if not isinstance(image, ee.Image):
        print('The image must be an instance of ee.Image.')
        return
    result = image.sampleRegions(**{
        'collection': in_fc,
        'properties': properties,
        'scale': scale,
        'projection': projection,
        'tileScale': tile_scale,
        'geometries': geometries,
    })
    if out_fc is not None:
        ee_export_vector(result, out_fc)
    else:
        return result
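sampleRegions, as wrapped above, samples the image at each feature's location and copies the requested properties onto the output records, dropping points that fall outside the image footprint. A hedged pure-Python analogue of that behavior; `sample_regions` and its data shapes are illustrative assumptions, not the EE API:

```python
def sample_regions(pixels, features, properties=None):
    """For each point feature, look up the pixel value at its location and
    copy the requested properties onto the sampled record.

    pixels: dict mapping (x, y) -> value
    features: list of {'geometry': (x, y), 'properties': {...}} records
    properties: optional list of property names to copy (default: all)
    """
    out = []
    for feat in features:
        xy = feat['geometry']
        if xy not in pixels:
            continue  # points outside the image footprint are dropped
        keys = properties or feat['properties']
        record = {k: feat['properties'][k] for k in keys}
        record['value'] = pixels[xy]
        out.append(record)
    return out

pixels = {(0, 0): 12, (1, 1): 34}
pts = [{'geometry': (0, 0), 'properties': {'id': 'a'}},
       {'geometry': (5, 5), 'properties': {'id': 'b'}}]
print(sample_regions(pixels, pts))  # -> [{'id': 'a', 'value': 12}]
```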
geemap/common.py | image_reclassify | arheem/geemap | 1 | python

def image_reclassify(img, in_list, out_list):
    """Reclassify an image.

    Args:
        img (object): The image to which the remapping is applied.
        in_list (list): The source values (numbers or EEArrays). All values in this list will be mapped to the corresponding value in 'out_list'.
        out_list (list): The destination values (numbers or EEArrays). These are used to replace the corresponding values in 'in_list'. Must have the same number of values as 'in_list'.

    Returns:
        object: ee.Image
    """
    image = img.remap(in_list, out_list)
    return image
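ee.Image.remap, as used above, replaces each listed source value with the destination value at the same index, masking pixels whose value is not listed. A hedged pure-Python sketch of that lookup (the `remap` function here is an illustrative analogue operating on a flat list of values, not EE code):

```python
def remap(values, in_list, out_list):
    """Each value found in in_list is replaced by the out_list entry at the
    same index; values not listed are dropped, mirroring how
    ee.Image.remap masks unmatched pixels by default."""
    table = dict(zip(in_list, out_list))
    return [table[v] for v in values if v in table]

# Reclassify land-cover codes 1/2/3 into two classes 10/20;
# the unknown code 99 is dropped (masked).
print(remap([1, 2, 3, 2, 99], [1, 2, 3], [10, 20, 20]))  # -> [10, 20, 20, 20]
```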
geemap/common.py | image_smoothing | arheem/geemap | 1 | python

def image_smoothing(img, reducer, kernel):
    """Smooths an image.

    Args:
        img (object): The image to be smoothed.
        reducer (object): ee.Reducer
        kernel (object): ee.Kernel

    Returns:
        object: ee.Image
    """
    image = img.reduceNeighborhood(**{'reducer': reducer, 'kernel': kernel})
    return image
geemap/common.py | rename_bands | arheem/geemap | 1 | python

def rename_bands(img, in_band_names, out_band_names):
    """Renames image bands.

    Args:
        img (object): The image to be renamed.
        in_band_names (list): The list of input band names.
        out_band_names (list): The list of output band names.

    Returns:
        object: The output image with the renamed bands.
    """
    return img.select(in_band_names, out_band_names)
geemap/common.py | bands_to_image_collection | arheem/geemap | 1 | python

def bands_to_image_collection(img):
    """Converts all bands in an image to an image collection.

    Args:
        img (object): The image to convert.

    Returns:
        object: ee.ImageCollection
    """
    collection = ee.ImageCollection(img.bandNames().map(lambda b: img.select([b])))
    return collection
geemap/common.py | find_landsat_by_path_row | arheem/geemap | 1 | python

def find_landsat_by_path_row(landsat_col, path_num, row_num):
    """Finds Landsat images by WRS path number and row number.

    Args:
        landsat_col (str): The image collection id of Landsat.
        path_num (int): The WRS path number.
        row_num (int): The WRS row number.

    Returns:
        object: ee.ImageCollection
    """
    try:
        if isinstance(landsat_col, str):
            landsat_col = ee.ImageCollection(landsat_col)
        collection = landsat_col.filter(ee.Filter.eq('WRS_PATH', path_num)).filter(
            ee.Filter.eq('WRS_ROW', row_num))
        return collection
    except Exception as e:
        print(e)
geemap/common.py | str_to_num | arheem/geemap | 1 | python

def str_to_num(in_str):
    """Converts a string to an ee.Number.

    Args:
        in_str (str): The string to convert to a number.

    Returns:
        object: ee.Number
    """
    # The original passed the builtin `str` instead of the argument.
    return ee.Number.parse(in_str)
geemap/common.py | array_sum | arheem/geemap | 1 | python

def array_sum(arr):
    """Accumulates elements of an array along the given axis.

    Args:
        arr (object): Array to accumulate.

    Returns:
        object: ee.Number
    """
    return ee.Array(arr).accum(0).get([-1])
geemap/common.py | array_mean | arheem/geemap | 1 | python

def array_mean(arr):
    """Calculates the mean of an array along the given axis.

    Args:
        arr (object): Array to calculate mean.

    Returns:
        object: ee.Number
    """
    total = ee.Array(arr).accum(0).get([-1])
    size = arr.length()
    return ee.Number(total.divide(size))
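array_sum and array_mean above both compute a total by taking the last element of a running accumulation (`accum(0).get([-1])`) rather than calling a sum reducer directly. A pure-Python sketch of the same idea using `itertools.accumulate`; the `_py` functions are illustrative analogues, not the ee implementations:

```python
from itertools import accumulate

def array_sum_py(arr):
    # Running cumulative sum; the last element is the total,
    # mirroring ee.Array(arr).accum(0).get([-1]).
    return list(accumulate(arr))[-1]

def array_mean_py(arr):
    # Total divided by element count, mirroring total.divide(size).
    return array_sum_py(arr) / len(arr)

print(array_sum_py([3, 1, 4, 1, 5]))   # -> 14
print(array_mean_py([3, 1, 4, 1, 5]))  # -> 2.8
```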
geemap/common.py | get_annual_NAIP | arheem/geemap | 1 | python

def get_annual_NAIP(year, RGBN=True):
    """Filters NAIP ImageCollection by year.

    Args:
        year (int): The year to filter the NAIP ImageCollection.
        RGBN (bool, optional): Whether to retrieve 4-band NAIP imagery only. Defaults to True.

    Returns:
        object: ee.ImageCollection
    """
    try:
        collection = ee.ImageCollection('USDA/NAIP/DOQQ')
        start_date = str(year) + '-01-01'
        end_date = str(year) + '-12-31'
        naip = collection.filterDate(start_date, end_date)
        if RGBN:
            naip = naip.filter(ee.Filter.listContains('system:band_names', 'N'))
        return naip
    except Exception as e:
        print(e)
body_hash: 278f68fde5a23e8fc4f6cc8216c29abd77886577df2f1021c6c3aafe315e7a06
path: geemap/common.py | name: get_all_NAIP | repository: arheem/geemap | stars: 1 | lang: python

def get_all_NAIP(start_year=2009, end_year=2019):
    """Creates annual NAIP imagery mosaic.

    Args:
        start_year (int, optional): The starting year. Defaults to 2009.
        end_year (int, optional): The ending year. Defaults to 2019.

    Returns:
        object: ee.ImageCollection
    """
    try:
        def get_annual_NAIP(year):
            try:
                collection = ee.ImageCollection('USDA/NAIP/DOQQ')
                start_date = ee.Date.fromYMD(year, 1, 1)
                end_date = ee.Date.fromYMD(year, 12, 31)
                naip = collection.filterDate(start_date, end_date).filter(
                    ee.Filter.listContains('system:band_names', 'N'))
                return ee.ImageCollection(naip)
            except Exception as e:
                print(e)

        years = ee.List.sequence(start_year, end_year)
        collection = years.map(get_annual_NAIP)
        return collection
    except Exception as e:
        print(e)
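Note that `ee.List.sequence(start, end)` includes both endpoints, unlike Python's `range`. A client-side equivalent of the year list built in `get_all_NAIP` (illustrative only):

```python
def year_sequence(start_year=2009, end_year=2019):
    # Inclusive year range, mirroring ee.List.sequence(start_year, end_year).
    return list(range(start_year, end_year + 1))

print(year_sequence())  # 2009 through 2019 inclusive, 11 years
```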
body_hash: 387b4b25706aee33b033f070cd6f413ea996820c0fcd0668d2b583caa6ae48a5
path: geemap/common.py | name: annual_NAIP | repository: arheem/geemap | stars: 1 | lang: python

def annual_NAIP(year, region):
    """Create an NAIP mosaic of a specified year for a specified region.

    Args:
        year (int): The specified year to create the mosaic for.
        region (object): ee.Geometry

    Returns:
        object: ee.Image
    """
    start_date = ee.Date.fromYMD(year, 1, 1)
    end_date = ee.Date.fromYMD(year, 12, 31)
    collection = ee.ImageCollection('USDA/NAIP/DOQQ').filterDate(start_date, end_date).filterBounds(region)
    time_start = ee.Date(ee.List(collection.aggregate_array('system:time_start')).sort().get(0))
    time_end = ee.Date(ee.List(collection.aggregate_array('system:time_end')).sort().get(-1))
    image = ee.Image(collection.mosaic().clip(region))
    NDWI = ee.Image(image).normalizedDifference(['G', 'N']).select(['nd'], ['ndwi'])
    NDVI = ee.Image(image).normalizedDifference(['N', 'R']).select(['nd'], ['ndvi'])
    image = image.addBands(NDWI)
    image = image.addBands(NDVI)
    return image.set({'system:time_start': time_start, 'system:time_end': time_end})
body_hash: 76d46ce15b09dc7965b9e63340f5608b341c547ebb99f2e47020848962a6b491
path: geemap/common.py | name: find_NAIP | repository: arheem/geemap | stars: 1 | lang: python

def find_NAIP(region, add_NDVI=True, add_NDWI=True):
    """Create annual NAIP mosaic for a given region.

    Args:
        region (object): ee.Geometry
        add_NDVI (bool, optional): Whether to add the NDVI band. Defaults to True.
        add_NDWI (bool, optional): Whether to add the NDWI band. Defaults to True.

    Returns:
        object: ee.ImageCollection
    """
    init_collection = ee.ImageCollection('USDA/NAIP/DOQQ').filterBounds(region).filterDate(
        '2009-01-01', '2019-12-31').filter(ee.Filter.listContains('system:band_names', 'N'))
    yearList = ee.List(init_collection.distinct(['system:time_start']).aggregate_array('system:time_start'))
    init_years = yearList.map(lambda y: ee.Date(y).get('year'))
    init_years = ee.Dictionary(init_years.reduce(ee.Reducer.frequencyHistogram())).keys()
    years = init_years.map(lambda x: ee.Number.parse(x))

    def NAIPAnnual(year):
        start_date = ee.Date.fromYMD(year, 1, 1)
        end_date = ee.Date.fromYMD(year, 12, 31)
        collection = init_collection.filterDate(start_date, end_date)
        time_start = ee.Date(ee.List(collection.aggregate_array('system:time_start')).sort().get(0)).format('YYYY-MM-dd')
        time_end = ee.Date(ee.List(collection.aggregate_array('system:time_end')).sort().get(-1)).format('YYYY-MM-dd')
        col_size = collection.size()
        image = ee.Image(collection.mosaic().clip(region))
        if add_NDVI:
            NDVI = ee.Image(image).normalizedDifference(['N', 'R']).select(['nd'], ['ndvi'])
            image = image.addBands(NDVI)
        if add_NDWI:
            NDWI = ee.Image(image).normalizedDifference(['G', 'N']).select(['nd'], ['ndwi'])
            image = image.addBands(NDWI)
        return image.set({'system:time_start': time_start, 'system:time_end': time_end, 'tiles': col_size})

    naip = ee.ImageCollection(years.map(NAIPAnnual))
    mean_size = ee.Number(naip.aggregate_mean('tiles'))
    total_sd = ee.Number(naip.aggregate_total_sd('tiles'))
    threshold = mean_size.subtract(total_sd.multiply(1))
    naip = naip.filter(ee.Filter.Or(ee.Filter.gte('tiles', threshold), ee.Filter.gte('tiles', 15)))
    naip = naip.filter(ee.Filter.gte('tiles', 7))
    naip_count = naip.size()
    naip_seq = ee.List.sequence(0, naip_count.subtract(1))

    def set_index(index):
        img = ee.Image(naip.toList(naip_count).get(index))
        return img.set({'system:uid': ee.Number(index).toUint8()})

    naip = naip_seq.map(set_index)
    return ee.ImageCollection(naip)
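The tile-count filter in `find_NAIP` keeps yearly mosaics with at least mean − 1·SD tiles (or at least 15), and in all cases at least 7. A plain-Python analogue of that server-side logic, assuming `aggregate_total_sd` corresponds to the population standard deviation (`statistics.pstdev`); the function name and sample counts are hypothetical:

```python
from statistics import mean, pstdev

def filter_by_tiles(tile_counts):
    # Keep counts >= mean - 1*sd OR >= 15, then enforce the >= 7 floor,
    # mirroring the two ee.Filter.gte('tiles', ...) steps above.
    threshold = mean(tile_counts) - pstdev(tile_counts)
    kept = [t for t in tile_counts if t >= threshold or t >= 15]
    return [t for t in kept if t >= 7]

print(filter_by_tiles([20, 18, 3, 15, 9]))  # [20, 18, 15, 9]
```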
body_hash: 71bbdb6bad713e6ee49024782d7f4d37e4989d86ef2c9781cf8b8b3f91f1fcf1
path: geemap/common.py | name: filter_NWI | repository: arheem/geemap | stars: 1 | lang: python

def filter_NWI(HUC08_Id, region, exclude_riverine=True):
    """Retrieves NWI dataset for a given HUC8 watershed.

    Args:
        HUC08_Id (str): The HUC8 watershed id.
        region (object): ee.Geometry
        exclude_riverine (bool, optional): Whether to exclude riverine wetlands. Defaults to True.

    Returns:
        object: ee.FeatureCollection
    """
    nwi_asset_prefix = 'users/wqs/NWI-HU8/HU8_'
    nwi_asset_suffix = '_Wetlands'
    nwi_asset_path = nwi_asset_prefix + HUC08_Id + nwi_asset_suffix
    nwi_huc = ee.FeatureCollection(nwi_asset_path).filterBounds(region)
    if exclude_riverine:
        nwi_huc = nwi_huc.filter(ee.Filter.notEquals(**{'leftField': 'WETLAND_TY', 'rightValue': 'Riverine'}))
    return nwi_huc
body_hash: c3adb8142c1c58157cefdbb8d2e2c6188a6b1ccb7505a29154fb05b333bb73a9
path: geemap/common.py | name: filter_HUC08 | repository: arheem/geemap | stars: 1 | lang: python

def filter_HUC08(region):
    """Filters HUC08 watersheds intersecting a given region.

    Args:
        region (object): ee.Geometry

    Returns:
        object: ee.FeatureCollection
    """
    USGS_HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
    HUC08 = USGS_HUC08.filterBounds(region)
    return HUC08
body_hash: ae711b15348d7582d625c7d01c3010cd390288610b00e3433ce560b6437efb9e
path: geemap/common.py | name: filter_HUC10 | repository: arheem/geemap | stars: 1 | lang: python

def filter_HUC10(region):
    """Filters HUC10 watersheds intersecting a given region.

    Args:
        region (object): ee.Geometry

    Returns:
        object: ee.FeatureCollection
    """
    USGS_HUC10 = ee.FeatureCollection('USGS/WBD/2017/HUC10')
    HUC10 = USGS_HUC10.filterBounds(region)
    return HUC10
body_hash: 6a2019f3efd4ab49fcbffb46aabcc34a02e22dde69a158c2ddc8eb101479db5a
path: geemap/common.py | name: find_HUC08 | repository: arheem/geemap | stars: 1 | lang: python

def find_HUC08(HUC08_Id):
    """Finds a HUC08 watershed based on a given HUC08 ID.

    Args:
        HUC08_Id (str): The HUC08 ID.

    Returns:
        object: ee.FeatureCollection
    """
    USGS_HUC08 = ee.FeatureCollection('USGS/WBD/2017/HUC08')
    HUC08 = USGS_HUC08.filter(ee.Filter.eq('huc8', HUC08_Id))
    return HUC08
body_hash: cc2eb07a982c145447c221a4f243db412bb6af6f8357a1179c603682a7d618a1
path: geemap/common.py | name: find_HUC10 | repository: arheem/geemap | stars: 1 | lang: python

def find_HUC10(HUC10_Id):
    """Finds a HUC10 watershed based on a given HUC10 ID.

    Args:
        HUC10_Id (str): The HUC10 ID.

    Returns:
        object: ee.FeatureCollection
    """
    USGS_HUC10 = ee.FeatureCollection('USGS/WBD/2017/HUC10')
    HUC10 = USGS_HUC10.filter(ee.Filter.eq('huc10', HUC10_Id))
    return HUC10
body_hash: 7dd588c6e7cdd726b9deb5a2a85ac6acdea307fb3b9d0a7a824761d759be6af4
path: geemap/common.py | name: find_NWI | repository: arheem/geemap | stars: 1 | lang: python

def find_NWI(HUC08_Id, exclude_riverine=True):
    """Finds NWI dataset for a given HUC08 watershed.

    Args:
        HUC08_Id (str): The HUC08 watershed ID.
        exclude_riverine (bool, optional): Whether to exclude riverine wetlands. Defaults to True.

    Returns:
        object: ee.FeatureCollection
    """
    nwi_asset_prefix = 'users/wqs/NWI-HU8/HU8_'
    nwi_asset_suffix = '_Wetlands'
    nwi_asset_path = nwi_asset_prefix + HUC08_Id + nwi_asset_suffix
    nwi_huc = ee.FeatureCollection(nwi_asset_path)
    if exclude_riverine:
        nwi_huc = nwi_huc.filter(ee.Filter.notEquals(**{'leftField': 'WETLAND_TY', 'rightValue': 'Riverine'}))
    return nwi_huc
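Both `filter_NWI` and `find_NWI` assemble the same project-specific asset id from a HUC8 code. A small sketch of that path construction (the HUC8 id '03020201' is just an example value; `nwi_asset_path` is a hypothetical helper name):

```python
def nwi_asset_path(huc8_id):
    # Prefix and suffix copied from the records above; the asset lives
    # under the 'users/wqs' Earth Engine account referenced there.
    return 'users/wqs/NWI-HU8/HU8_' + huc8_id + '_Wetlands'

print(nwi_asset_path('03020201'))  # users/wqs/NWI-HU8/HU8_03020201_Wetlands
```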
body_hash: e40d00d50cda654e5279cbd0cd57d543b249379dfbd01f08ffade9aec8064720
path: geemap/common.py | name: nwi_add_color | repository: arheem/geemap | stars: 1 | lang: python

def nwi_add_color(fc):
    """Converts NWI vector dataset to image and adds color to it.

    Args:
        fc (object): ee.FeatureCollection

    Returns:
        object: ee.Image
    """
    emergent = ee.FeatureCollection(fc.filter(ee.Filter.eq('WETLAND_TY', 'Freshwater Emergent Wetland')))
    emergent = emergent.map(lambda f: f.set('R', 127).set('G', 195).set('B', 28))
    forested = fc.filter(ee.Filter.eq('WETLAND_TY', 'Freshwater Forested/Shrub Wetland'))
    forested = forested.map(lambda f: f.set('R', 0).set('G', 136).set('B', 55))
    pond = fc.filter(ee.Filter.eq('WETLAND_TY', 'Freshwater Pond'))
    pond = pond.map(lambda f: f.set('R', 104).set('G', 140).set('B', 192))
    lake = fc.filter(ee.Filter.eq('WETLAND_TY', 'Lake'))
    lake = lake.map(lambda f: f.set('R', 19).set('G', 0).set('B', 124))
    riverine = fc.filter(ee.Filter.eq('WETLAND_TY', 'Riverine'))
    riverine = riverine.map(lambda f: f.set('R', 1).set('G', 144).set('B', 191))
    fc = ee.FeatureCollection(emergent.merge(forested).merge(pond).merge(lake).merge(riverine))
    base = ee.Image().byte()
    img = base.paint(fc, 'R').addBands(base.paint(fc, 'G').addBands(base.paint(fc, 'B')))
    return img
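The per-class `set('R', ...)` calls in `nwi_add_color` amount to a wetland-type → RGB lookup. The same palette as a plain dictionary, with a small hex renderer (`NWI_PALETTE` and `to_hex` are illustrative names; the color values are copied from the record above):

```python
NWI_PALETTE = {
    'Freshwater Emergent Wetland': (127, 195, 28),
    'Freshwater Forested/Shrub Wetland': (0, 136, 55),
    'Freshwater Pond': (104, 140, 192),
    'Lake': (19, 0, 124),
    'Riverine': (1, 144, 191),
}

def to_hex(wetland_type):
    # Render a wetland class as a #RRGGBB hex string.
    r, g, b = NWI_PALETTE[wetland_type]
    return '#{:02x}{:02x}{:02x}'.format(r, g, b)

print(to_hex('Lake'))  # #13007c
```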
body_hash: ea59c700557441cd92869d29ce05ad438fc451d867ea81d336c43ebd3d7da67b
path: geemap/common.py | name: summarize_by_group | repository: arheem/geemap | stars: 1 | lang: python

def summarize_by_group(collection, column, group, group_name, stats_type, return_dict=True):
    """Calculates summary statistics by group.

    Args:
        collection (object): The input feature collection
        column (str): The value column to calculate summary statistics.
        group (str): The name of the group column.
        group_name (str): The new group name to use.
        stats_type (str): The type of summary statistics.
        return_dict (bool): Whether to return the result as a dictionary.

    Returns:
        object: ee.Dictionary or ee.List
    """
    stats_type = stats_type.lower()
    allowed_stats = ['min', 'max', 'mean', 'median', 'sum', 'stdDev', 'variance']
    if stats_type not in allowed_stats:
        print('The stats type must be one of the following: {}'.format(','.join(allowed_stats)))
        return
    stats_dict = {
        'min': ee.Reducer.min(),
        'max': ee.Reducer.max(),
        'mean': ee.Reducer.mean(),
        'median': ee.Reducer.median(),
        'sum': ee.Reducer.sum(),
        'stdDev': ee.Reducer.stdDev(),
        'variance': ee.Reducer.variance(),
    }
    selectors = [column, group]
    stats = collection.reduceColumns(**{
        'selectors': selectors,
        'reducer': stats_dict[stats_type].group(**{'groupField': 1, 'groupName': group_name}),
    })
    results = ee.List(ee.Dictionary(stats).get('groups'))
    if return_dict:
        keys = results.map(lambda k: ee.Dictionary(k).get(group_name))
        values = results.map(lambda v: ee.Dictionary(v).get(stats_type))
        results = ee.Dictionary.fromLists(keys, values)
    return results
group (str): The name of the group column.
group_name (str): The new group name to use.
stats_type (str): The type of summary statistics.
return_dict (bool): Whether to return the result as a dictionary.
Returns:
object: ee.Dictionary or ee.List<|endoftext|> |
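The grouped reduce above runs server-side in Earth Engine, which can make the data flow hard to see. The same pattern can be sketched in plain Python with hypothetical rows (no `ee` dependency; the function and data names here are made up for illustration):

```python
from collections import defaultdict
import statistics

def summarize_by_group_local(rows, column, group, stats_type):
    # Group the value column by the group column, then reduce each group.
    reducers = {'min': min, 'max': max, 'mean': statistics.mean, 'sum': sum}
    groups = defaultdict(list)
    for row in rows:
        groups[row[group]].append(row[column])
    return {key: reducers[stats_type](vals) for key, vals in groups.items()}

rows = [
    {'landcover': 'forest', 'area': 10},
    {'landcover': 'forest', 'area': 30},
    {'landcover': 'water', 'area': 5},
]
print(summarize_by_group_local(rows, 'area', 'landcover', 'sum'))
# {'forest': 40, 'water': 5}
```

Earth Engine's `Reducer.group` does the same grouping and reduction, except it happens on Google's servers and the result comes back as an `ee.List` of per-group dictionaries.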
body_hash: 97a06f95a77618068c6f5e159f32c54f13c05fec39db2086057842a5ef0d6d03 | path: geemap/common.py | name: summary_stats | repository: arheem/geemap | stars: 1 | language: python

def summary_stats(collection, column):
    """Aggregates over a given property of the objects in a collection,
    calculating the sum, min, max, mean, sample standard deviation, sample
    variance, total standard deviation and total variance of the selected
    property.

    Args:
        collection (FeatureCollection): The input feature collection to calculate summary statistics.
        column (str): The name of the column to calculate summary statistics.

    Returns:
        dict: The dictionary containing information about the summary statistics.
    """
    # getInfo() already returns a plain dict; the original round-tripped it
    # through eval(str(...)), which is redundant (and eval is best avoided).
    return collection.aggregate_stats(column).getInfo()
body_hash: d2608fb9ff8e3681bd6e9888ac65f52137bda38d17bac308cb340a0edf505562 | path: geemap/common.py | name: column_stats | repository: arheem/geemap | stars: 1 | language: python

def column_stats(collection, column, stats_type):
    """Calculates a single summary statistic (min, max, mean, median, sum,
    stdDev, or variance) over a given column of a collection.

    Args:
        collection (FeatureCollection): The input feature collection to calculate statistics.
        column (str): The name of the column to calculate statistics.
        stats_type (str): The type of statistic to calculate.

    Returns:
        object: An ee.Dictionary containing the requested statistic.
    """
    allowed_stats = ['min', 'max', 'mean', 'median', 'sum', 'stdDev', 'variance']
    # Case-insensitive lookup. The original lowercased stats_type before
    # comparing it against this list, so 'stdDev' could never match.
    canonical = {s.lower(): s for s in allowed_stats}
    if stats_type.lower() not in canonical:
        print('The stats type must be one of the following: {}'.format(','.join(allowed_stats)))
        return
    stats_type = canonical[stats_type.lower()]
    stats_dict = {
        'min': ee.Reducer.min(),
        'max': ee.Reducer.max(),
        'mean': ee.Reducer.mean(),
        'median': ee.Reducer.median(),
        'sum': ee.Reducer.sum(),
        'stdDev': ee.Reducer.stdDev(),
        'variance': ee.Reducer.variance(),
    }
    stats = collection.reduceColumns(
        selectors=[column],
        reducer=stats_dict[stats_type],
    )
    return stats
fbf9389c005b3cdd21d5f7e96a3c132275945c948bfc6f78bc4518e4333f7146 | def ee_num_round(num, decimal=2):
'Rounds a number to a specified number of decimal places.\n\n Args:\n num (ee.Number): The number to round.\n decimal (int, optional): The number of decimal places to round. Defaults to 2.\n\n Returns:\n ee.Number: The number with the specified decimal places rounded.\n '
format_str = '%.{}f'.format(decimal)
return ee.Number.parse(ee.Number(num).format(format_str)) | Rounds a number to a specified number of decimal places.
Args:
num (ee.Number): The number to round.
decimal (int, optional): The number of decimal places to round. Defaults to 2.
Returns:
ee.Number: The number with the specified decimal places rounded. | geemap/common.py | ee_num_round | arheem/geemap | 1 | python | def ee_num_round(num, decimal=2):
'Rounds a number to a specified number of decimal places.\n\n Args:\n num (ee.Number): The number to round.\n decimal (int, optional): The number of decimal places to round. Defaults to 2.\n\n Returns:\n ee.Number: The number with the specified decimal places rounded.\n '
format_str = '%.{}f'.format(decimal)
return ee.Number.parse(ee.Number(num).format(format_str)) | def ee_num_round(num, decimal=2):
'Rounds a number to a specified number of decimal places.\n\n Args:\n num (ee.Number): The number to round.\n decimal (int, optional): The number of decimal places to round. Defaults to 2.\n\n Returns:\n ee.Number: The number with the specified decimal places rounded.\n '
format_str = '%.{}f'.format(decimal)
return ee.Number.parse(ee.Number(num).format(format_str))<|docstring|>Rounds a number to a specified number of decimal places.
Args:
num (ee.Number): The number to round.
decimal (int, optional): The number of decimal places to round. Defaults to 2.
Returns:
ee.Number: The number with the specified decimal places rounded.<|endoftext|> |
body_hash: ade869e42b4fd94f59b33e933af05460b1a245be6625dfc8ddce622bb3bc43e2 | path: geemap/common.py | name: num_round | repository: arheem/geemap | stars: 1 | language: python

def num_round(num, decimal=2):
    """Rounds a number to a specified number of decimal places.

    Args:
        num (float): The number to round.
        decimal (int, optional): The number of decimal places to round to. Defaults to 2.

    Returns:
        float: The number rounded to the specified number of decimal places.
    """
    return round(num, decimal)
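Note that Python 3's built-in `round`, which `num_round` wraps, rounds ties to the nearest even value and operates on the stored binary float, not the decimal literal, which occasionally surprises:

```python
# Ties round to the nearest even value ("banker's rounding").
print(round(0.5))   # 0
print(round(1.5))   # 2
print(round(2.5))   # 2

# Rounding to decimals acts on the binary float: 2.675 is actually stored
# as 2.67499999..., so it rounds down rather than up.
print(round(2.675, 2))   # 2.67
```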
body_hash: 5aa5bacd26aa6116e2d2aa09237524ddcd53f473a7f2303cb3b2687bcf218012 | path: google/appengine/runtime/cgi.py | name: HandleRequest | repository: plooploops/rosterrun | stars: 790 | language: python

def HandleRequest(unused_environ, handler_name, unused_url, post_data,
                  unused_error, application_root, python_lib, import_hook=None):
    """Handle a single CGI request.

    Handles a request for handler_name in the form 'path/to/handler.py' with
    the environment contained in environ.

    Args:
        handler_name: A str containing the user-specified handler file to use
            for this request as specified in the script field of a handler in
            app.yaml.
        post_data: A stream containing the post data for this request.
        application_root: A str containing the root path of the application.
        python_lib: A str containing the root of the Python App Engine library.
        import_hook: Optional import hook (PEP 302 style loader).

    Returns:
        A dict containing zero or more of the following:
            error: App Engine error code. 0 for OK, 1 for error. Defaults to OK
                if not set. If set, then the other fields may be missing.
            response_code: HTTP response code.
            headers: A list of tuples (key, value) of HTTP headers.
            body: A str of the body of the response.
    """
    body = cStringIO.StringIO()
    module_name = _FileToModuleName(handler_name)
    parent_module, _, submodule_name = module_name.rpartition('.')
    parent_module = _GetModuleOrNone(parent_module)
    main = None
    if module_name in sys.modules:
        module = sys.modules[module_name]
        main = _GetValidMain(module)
    if not main:
        module = imp.new_module('__main__')
        if import_hook is not None:
            module.__loader__ = import_hook
    saved_streams = (sys.stdin, sys.stdout)
    try:
        sys.modules['__main__'] = module
        module.__dict__['__name__'] = '__main__'
        sys.stdin = post_data
        sys.stdout = body
        if main:
            os.environ['PATH_TRANSLATED'] = module.__file__
            main()
        else:
            filename = _AbsolutePath(handler_name, application_root, python_lib)
            if filename.endswith(os.sep + '__init__.py'):
                module.__path__ = [os.path.dirname(filename)]
            if import_hook is None:
                code, filename = _LoadModuleCode(filename)
            else:
                code = import_hook.get_code(module_name)
            if not code:
                return {'error': 2}
            os.environ['PATH_TRANSLATED'] = filename
            module.__file__ = filename
            try:
                sys.modules[module_name] = module
                # eval of a code object executes it, like exec.
                eval(code, module.__dict__)
            except:
                del sys.modules[module_name]
                if parent_module and submodule_name in parent_module.__dict__:
                    del parent_module.__dict__[submodule_name]
                raise
            else:
                if parent_module:
                    parent_module.__dict__[submodule_name] = module
        return _ParseResponse(body)
    except:
        exception = sys.exc_info()
        message = ''.join(traceback.format_exception(
            exception[0], exception[1], exception[2].tb_next))
        logging.error(message)
        return {'error': 1}
    finally:
        sys.stdin, sys.stdout = saved_streams
        module.__name__ = module_name
        if '__main__' in sys.modules:
            del sys.modules['__main__']
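The heart of `HandleRequest` is: compile the handler file, execute it in a fresh module named `__main__`, and capture whatever it prints as the response body. A minimal Python 3 sketch of that mechanism, stripped of the module-caching and error-handling details above (the helper name is made up):

```python
import io
import sys
import tempfile
import types

def run_script_as_main(path):
    # Execute the file at `path` in a fresh module named __main__, capturing
    # anything it writes to stdout -- the same trick the CGI handler above
    # uses to turn a script's print output into an HTTP response.
    module = types.ModuleType('__main__')
    module.__file__ = path
    with open(path) as f:
        code = compile(f.read(), path, 'exec')
    saved_stdout = sys.stdout
    sys.stdout = captured = io.StringIO()
    try:
        exec(code, module.__dict__)
    finally:
        sys.stdout = saved_stdout
    return captured.getvalue()

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write("print('Status: 200 OK')")
    script = f.name

print(run_script_as_main(script), end='')  # Status: 200 OK
```

Unlike the production code, this sketch does not register the module in `sys.modules`, reuse a cached `main()`, or restore stdin; it only shows the compile/exec/capture core.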
body_hash: a12bc8e55ca767ea41d29e422e184826eaffab11a8853d18d071f17fb208e3c0 | path: google/appengine/runtime/cgi.py | name: _ParseResponse | repository: plooploops/rosterrun | stars: 790 | language: python

def _ParseResponse(response):
    """Parses an HTTP response into a dict.

    Args:
        response: A cStringIO.StringIO (StringO) containing the HTTP response.

    Returns:
        A dict with fields:
            body: A str containing the body.
            headers: A list containing tuples (key, value) of key and value pairs.
            response_code: An int containing the HTTP response code.
    """
    response.reset()
    parser = feedparser.FeedParser()
    parser._set_headersonly()
    while True:
        line = response.readline()
        if not feedparser.headerRE.match(line):
            if not feedparser.NLCRE.match(line):
                parser.feed(line)
            break
        parser.feed(line)
    parsed_response = parser.close()
    if 'Status' in parsed_response:
        status = int(parsed_response['Status'].split(' ', 1)[0])
        del parsed_response['Status']
    else:
        status = 200
    return {'body': parsed_response.get_payload() + response.read(),
            'headers': parsed_response.items(),
            'response_code': status}
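`_ParseResponse` leans on private helpers of the Python 2 `email.feedparser` module (`headerRE`, `NLCRE`). A rough Python 3 equivalent using only the public `email.parser` API might look like this (hypothetical helper; the real code also treats malformed header lines differently):

```python
from email.parser import Parser

def parse_cgi_response(raw):
    # Split a CGI-style response into (status, headers, body). CGI scripts
    # emit RFC 822 style headers, so the stdlib email parser handles them;
    # the optional Status header carries the HTTP response code.
    msg = Parser().parsestr(raw)
    status = 200
    if 'Status' in msg:
        status = int(msg['Status'].split(' ', 1)[0])
        del msg['Status']
    return status, msg.items(), msg.get_payload()

status, headers, body = parse_cgi_response(
    'Status: 404 Not Found\nContent-Type: text/plain\n\nmissing')
print(status, headers, body)
# 404 [('Content-Type', 'text/plain')] missing
```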
body_hash: 1928d7028fa577393f54100a8fb6ecd86efcd839abc9ee8be41bfc945af7c1e4 | path: google/appengine/runtime/cgi.py | name: _ParseHeader | repository: plooploops/rosterrun | stars: 790 | language: python

def _ParseHeader(header):
    """Parses a str header into a (key, value) pair."""
    key, _, value = header.partition(':')
    return (key.strip(), value.strip())
body_hash: 9fe778b07d1a7423e00d1374c5df71dbb594e8c6acee76370246ae0d3ce1ff52 | path: google/appengine/runtime/cgi.py | name: _GetValidMain | repository: plooploops/rosterrun | stars: 790 | language: python

def _GetValidMain(module):
    """Returns a main function in module if it exists and is valid or None.

    A main function is valid if it can be called with no arguments, i.e.
    calling module.main() would be valid.

    Args:
        module: The module in which to search for a main function.

    Returns:
        A function that takes no arguments if found or None otherwise.
    """
    if not hasattr(module, 'main'):
        return None
    main = module.main
    if not hasattr(main, '__call__'):
        return None
    defaults = main.__defaults__
    if defaults:
        default_argcount = len(defaults)
    else:
        default_argcount = 0
    if main.__code__.co_argcount - default_argcount == 0:
        return main
    else:
        return None
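`_GetValidMain` decides callability by comparing `__code__.co_argcount` against the number of defaults. In modern Python the same "callable with no required arguments" test is usually written with `inspect.signature`; a sketch (the helper name is illustrative, not part of the original code):

```python
import inspect

def is_zero_arg_callable(fn):
    # Same idea as _GetValidMain: fn is callable and fn() is a valid call,
    # i.e. every parameter either has a default or is *args/**kwargs.
    if not callable(fn):
        return False
    try:
        sig = inspect.signature(fn)
    except (TypeError, ValueError):  # some builtins expose no signature
        return False
    return all(
        p.default is not inspect.Parameter.empty
        or p.kind in (inspect.Parameter.VAR_POSITIONAL,
                      inspect.Parameter.VAR_KEYWORD)
        for p in sig.parameters.values()
    )

print(is_zero_arg_callable(lambda: 1))       # True
print(is_zero_arg_callable(lambda x: x))     # False
print(is_zero_arg_callable(lambda x=2: x))   # True
```

Unlike the `co_argcount` arithmetic, this also handles keyword-only parameters correctly.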
body_hash: 00f5b18d48b0c27c0a1285c98bc3beaa38db335f06ac39c8daa65cadce6febd3 | path: google/appengine/runtime/cgi.py | name: _FileToModuleName | repository: plooploops/rosterrun | stars: 790 | language: python

def _FileToModuleName(filename):
    """Returns the module name corresponding to a filename."""
    _, lib, suffix = filename.partition('$PYTHON_LIB/')
    if lib:
        module = suffix
    else:
        module = filename
    module = os.path.normpath(module)
    if '.py' in module:
        module = module.rpartition('.py')[0]
    module = module.replace(os.sep, '.')
    module = module.strip('.')
    if module.endswith('.__init__'):
        module = module.rpartition('.__init__')[0]
    return module
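To make the conversion concrete, here is a standalone copy of the logic above with a few example inputs (the sample paths are hypothetical):

```python
import os

def file_to_module_name(filename):
    # Standalone copy of _FileToModuleName for illustration: strip the
    # $PYTHON_LIB/ prefix, drop the .py suffix, and turn path separators
    # into dots, collapsing package __init__ files onto the package name.
    _, lib, suffix = filename.partition('$PYTHON_LIB/')
    module = suffix if lib else filename
    module = os.path.normpath(module)
    if '.py' in module:
        module = module.rpartition('.py')[0]
    module = module.replace(os.sep, '.').strip('.')
    if module.endswith('.__init__'):
        module = module.rpartition('.__init__')[0]
    return module

print(file_to_module_name('pkg/sub/handler.py'))    # pkg.sub.handler
print(file_to_module_name('pkg/sub/__init__.py'))   # pkg.sub
print(file_to_module_name('$PYTHON_LIB/google/appengine/ext/__init__.py'))
# google.appengine.ext
```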
body_hash: bbbe86e5701caf735330ae9754b164f0fe122e5706df6ad6a910de0430ade3e1 | path: google/appengine/runtime/cgi.py | name: _AbsolutePath | repository: plooploops/rosterrun | stars: 790 | language: python

def _AbsolutePath(filename, application_root, python_lib):
    """Returns the absolute path of a Python script file.

    Args:
        filename: A str containing the handler script path.
        application_root: The absolute path of the root of the application.
        python_lib: The absolute path of the Python library.

    Returns:
        The absolute path of the handler script.
    """
    _, lib, suffix = filename.partition('$PYTHON_LIB/')
    if lib:
        filename = os.path.join(python_lib, suffix)
    else:
        filename = os.path.join(application_root, filename)
    if filename.endswith(os.sep) or os.path.isdir(filename):
        filename = os.path.join(filename, '__init__.py')
    return filename
Args:
filename: A str containing the handler script path.
application_root: The absolute path of the root of the application.
python_lib: The absolute path of the Python library.
Returns:
The absolute path of the handler script.<|endoftext|> |
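`_AbsolutePath` is the companion resolution step: `$PYTHON_LIB/` paths anchor to the runtime library, everything else to the application root, and directories resolve to their package `__init__.py`. A sketch of the same logic, assuming POSIX separators and reimplemented only to show the branching:

```python
import os


def absolute_path(filename, application_root, python_lib):
    # $PYTHON_LIB/ paths resolve against the runtime's library directory,
    # everything else against the application root.
    _, lib, suffix = filename.partition('$PYTHON_LIB/')
    if lib:
        filename = os.path.join(python_lib, suffix)
    else:
        filename = os.path.join(application_root, filename)
    # Directories are handled via their package __init__.py.
    if filename.endswith(os.sep) or os.path.isdir(filename):
        filename = os.path.join(filename, '__init__.py')
    return filename
```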
265e5290673da5c6f4a2b87cec51d500d1df8aa8a8d450caaaf77d8406871888 | def _LoadModuleCode(filename):
'Loads the code of a module, using compiled bytecode if available.\n\n Args:\n filename: The Python script filename.\n\n Returns:\n A 2-tuple (code, filename) where:\n code: A code object contained in the file or None if it does not exist.\n filename: The name of the file loaded, either the same as the arg\n filename, or the corresponding .pyc file.\n '
compiled_filename = (filename + 'c')
if os.path.exists(compiled_filename):
with open(compiled_filename, 'r') as f:
magic_numbers = f.read(8)
if ((len(magic_numbers) == 8) and (magic_numbers[:4] == imp.get_magic())):
try:
return (_FixCodeFilename(marshal.load(f), filename), compiled_filename)
except (EOFError, ValueError):
pass
if os.path.exists(filename):
with open(filename, 'r') as f:
code = compile(f.read(), filename, 'exec', 0, True)
return (code, filename)
else:
return (None, filename) | Loads the code of a module, using compiled bytecode if available.
Args:
filename: The Python script filename.
Returns:
A 2-tuple (code, filename) where:
code: A code object contained in the file or None if it does not exist.
filename: The name of the file loaded, either the same as the arg
filename, or the corresponding .pyc file. | google/appengine/runtime/cgi.py | _LoadModuleCode | plooploops/rosterrun | 790 | python | def _LoadModuleCode(filename):
'Loads the code of a module, using compiled bytecode if available.\n\n Args:\n filename: The Python script filename.\n\n Returns:\n A 2-tuple (code, filename) where:\n code: A code object contained in the file or None if it does not exist.\n filename: The name of the file loaded, either the same as the arg\n filename, or the corresponding .pyc file.\n '
compiled_filename = (filename + 'c')
if os.path.exists(compiled_filename):
with open(compiled_filename, 'r') as f:
magic_numbers = f.read(8)
if ((len(magic_numbers) == 8) and (magic_numbers[:4] == imp.get_magic())):
try:
return (_FixCodeFilename(marshal.load(f), filename), compiled_filename)
except (EOFError, ValueError):
pass
if os.path.exists(filename):
with open(filename, 'r') as f:
code = compile(f.read(), filename, 'exec', 0, True)
return (code, filename)
else:
return (None, filename) | def _LoadModuleCode(filename):
'Loads the code of a module, using compiled bytecode if available.\n\n Args:\n filename: The Python script filename.\n\n Returns:\n A 2-tuple (code, filename) where:\n code: A code object contained in the file or None if it does not exist.\n filename: The name of the file loaded, either the same as the arg\n filename, or the corresponding .pyc file.\n '
compiled_filename = (filename + 'c')
if os.path.exists(compiled_filename):
with open(compiled_filename, 'r') as f:
magic_numbers = f.read(8)
if ((len(magic_numbers) == 8) and (magic_numbers[:4] == imp.get_magic())):
try:
return (_FixCodeFilename(marshal.load(f), filename), compiled_filename)
except (EOFError, ValueError):
pass
if os.path.exists(filename):
with open(filename, 'r') as f:
code = compile(f.read(), filename, 'exec', 0, True)
return (code, filename)
else:
return (None, filename)<|docstring|>Loads the code of a module, using compiled bytecode if available.
Args:
filename: The Python script filename.
Returns:
A 2-tuple (code, filename) where:
code: A code object contained in the file or None if it does not exist.
filename: The name of the file loaded, either the same as the arg
filename, or the corresponding .pyc file.<|endoftext|> |
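`_LoadModuleCode` trusts a cached `.pyc` only when its leading bytes match the interpreter's magic number, then `marshal.load()`s the code object that follows the header. The sketch below shows that round trip in modern Python (this is an illustration of the idea, not App Engine's loader; note `imp.get_magic()` has since been replaced by `importlib.util.MAGIC_NUMBER`):

```python
import marshal

# Compiling source yields a code object; .pyc files store that same object
# marshalled after a small header, which is why the loader can
# marshal.load() the remainder of the file once the magic number matches.
code = compile("answer = 6 * 7", "handler.py", "exec")
blob = marshal.dumps(code)          # what a .pyc stores (minus the header)
restored = marshal.loads(blob)      # what marshal.load(f) recovers

namespace = {}
exec(restored, namespace)           # namespace['answer'] is now 42
```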
eab8feb74604f155a9692952bd172df3679e2db480851f8a92f6641f58f0ce91 | def _FixCodeFilename(code, filename):
'Creates a CodeType with co_filename replaced with filename.\n\n Also affects nested code objects in co_consts.\n\n Args:\n code: The code object to be replaced.\n filename: The replacement filename.\n\n Returns:\n A new code object with its co_filename set to the provided filename.\n '
if isinstance(code, types.CodeType):
code = types.CodeType(code.co_argcount, code.co_nlocals, code.co_stacksize, code.co_flags, code.co_code, tuple([_FixCodeFilename(c, filename) for c in code.co_consts]), code.co_names, code.co_varnames, filename, code.co_name, code.co_firstlineno, code.co_lnotab, code.co_freevars, code.co_cellvars)
return code | Creates a CodeType with co_filename replaced with filename.
Also affects nested code objects in co_consts.
Args:
code: The code object to be replaced.
filename: The replacement filename.
Returns:
A new code object with its co_filename set to the provided filename. | google/appengine/runtime/cgi.py | _FixCodeFilename | plooploops/rosterrun | 790 | python | def _FixCodeFilename(code, filename):
'Creates a CodeType with co_filename replaced with filename.\n\n Also affects nested code objects in co_consts.\n\n Args:\n code: The code object to be replaced.\n filename: The replacement filename.\n\n Returns:\n A new code object with its co_filename set to the provided filename.\n '
if isinstance(code, types.CodeType):
code = types.CodeType(code.co_argcount, code.co_nlocals, code.co_stacksize, code.co_flags, code.co_code, tuple([_FixCodeFilename(c, filename) for c in code.co_consts]), code.co_names, code.co_varnames, filename, code.co_name, code.co_firstlineno, code.co_lnotab, code.co_freevars, code.co_cellvars)
return code | def _FixCodeFilename(code, filename):
'Creates a CodeType with co_filename replaced with filename.\n\n Also affects nested code objects in co_consts.\n\n Args:\n code: The code object to be replaced.\n filename: The replacement filename.\n\n Returns:\n A new code object with its co_filename set to the provided filename.\n '
if isinstance(code, types.CodeType):
code = types.CodeType(code.co_argcount, code.co_nlocals, code.co_stacksize, code.co_flags, code.co_code, tuple([_FixCodeFilename(c, filename) for c in code.co_consts]), code.co_names, code.co_varnames, filename, code.co_name, code.co_firstlineno, code.co_lnotab, code.co_freevars, code.co_cellvars)
return code<|docstring|>Creates a CodeType with co_filename replaced with filename.
Also affects nested code objects in co_consts.
Args:
code: The code object to be replaced.
filename: The replacement filename.
Returns:
A new code object with its co_filename set to the provided filename.<|endoftext|> |
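`_FixCodeFilename` rebuilds the `CodeType` by passing every field positionally, which is brittle across interpreter versions (the constructor later grew extra parameters). On Python 3.8+ the same rewrite, including the recursion into `co_consts`, can be sketched with `CodeType.replace`; this is a modern equivalent, not the original module's code:

```python
import types


def fix_code_filename(code, filename):
    # Recursively rewrite co_filename, including code objects nested in
    # co_consts (e.g. function and class bodies).
    if isinstance(code, types.CodeType):
        consts = tuple(fix_code_filename(c, filename) for c in code.co_consts)
        code = code.replace(co_filename=filename, co_consts=consts)
    return code
```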
8a8dd449372d312bb217c6d0acbe05cb912857a0c4d973d8d52218b9aab11066 | def _GetModuleOrNone(module_name):
'Returns a module if it exists or None.'
module = None
if module_name:
try:
module = __import__(module_name)
except ImportError:
pass
else:
for name in module_name.split('.')[1:]:
module = getattr(module, name)
return module | Returns a module if it exists or None. | google/appengine/runtime/cgi.py | _GetModuleOrNone | plooploops/rosterrun | 790 | python | def _GetModuleOrNone(module_name):
module = None
if module_name:
try:
module = __import__(module_name)
except ImportError:
pass
else:
for name in module_name.split('.')[1:]:
module = getattr(module, name)
return module | def _GetModuleOrNone(module_name):
module = None
if module_name:
try:
module = __import__(module_name)
except ImportError:
pass
else:
for name in module_name.split('.')[1:]:
module = getattr(module, name)
return module<|docstring|>Returns a module if it exists or None.<|endoftext|> |
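`_GetModuleOrNone` needs the `getattr` walk because `__import__('a.b')` returns the top-level package `a`. With `importlib` the lookup collapses to a single call; a sketch of that alternative:

```python
import importlib


def get_module_or_none(module_name):
    # import_module returns the leaf module directly, so no getattr walk
    # over the dotted name is needed.
    if not module_name:
        return None
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None
```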
0896bc1088016524de4d545fa58c154b6f162adfb28413a1e7d347137d3b6a0f | def _CreateUnsyncedEvents(self, host_count=1, events_per_host=(- 1)):
'Creates a bunch of _UnsyncedEvents across a number of Windows hosts.\n\n Args:\n host_count: The number of hosts to create _UnsyncedEvents for.\n events_per_host: The number of _UnsyncedEvents to create per host. If set\n to -1 (default), creates a random number of _UnsyncedEvents.\n\n Returns:\n A sorted list of the randomly generated host IDs.\n '
hosts = [bit9_test_utils.CreateComputer(id=host_id) for host_id in xrange(host_count)]
for host in hosts:
if (events_per_host == (- 1)):
events_per_host = random.randint(1, 5)
for _ in xrange(events_per_host):
(event, _) = _CreateEventTuple(computer=host)
sync._UnsyncedEvent.Generate(event, []).put()
return sorted((host.id for host in hosts)) | Creates a bunch of _UnsyncedEvents across a number of Windows hosts.
Args:
host_count: The number of hosts to create _UnsyncedEvents for.
events_per_host: The number of _UnsyncedEvents to create per host. If set
to -1 (default), creates a random number of _UnsyncedEvents.
Returns:
A sorted list of the randomly generated host IDs. | upvote/gae/modules/bit9_api/sync_test.py | _CreateUnsyncedEvents | cclauss/upvote | 0 | python | def _CreateUnsyncedEvents(self, host_count=1, events_per_host=(- 1)):
'Creates a bunch of _UnsyncedEvents across a number of Windows hosts.\n\n Args:\n host_count: The number of hosts to create _UnsyncedEvents for.\n events_per_host: The number of _UnsyncedEvents to create per host. If set\n to -1 (default), creates a random number of _UnsyncedEvents.\n\n Returns:\n A sorted list of the randomly generated host IDs.\n '
hosts = [bit9_test_utils.CreateComputer(id=host_id) for host_id in xrange(host_count)]
for host in hosts:
if (events_per_host == (- 1)):
events_per_host = random.randint(1, 5)
for _ in xrange(events_per_host):
(event, _) = _CreateEventTuple(computer=host)
sync._UnsyncedEvent.Generate(event, []).put()
return sorted((host.id for host in hosts)) | def _CreateUnsyncedEvents(self, host_count=1, events_per_host=(- 1)):
'Creates a bunch of _UnsyncedEvents across a number of Windows hosts.\n\n Args:\n host_count: The number of hosts to create _UnsyncedEvents for.\n events_per_host: The number of _UnsyncedEvents to create per host. If set\n to -1 (default), creates a random number of _UnsyncedEvents.\n\n Returns:\n A sorted list of the randomly generated host IDs.\n '
hosts = [bit9_test_utils.CreateComputer(id=host_id) for host_id in xrange(host_count)]
for host in hosts:
if (events_per_host == (- 1)):
events_per_host = random.randint(1, 5)
for _ in xrange(events_per_host):
(event, _) = _CreateEventTuple(computer=host)
sync._UnsyncedEvent.Generate(event, []).put()
return sorted((host.id for host in hosts))<|docstring|>Creates a bunch of _UnsyncedEvents across a number of Windows hosts.
Args:
host_count: The number of hosts to create _UnsyncedEvents for.
events_per_host: The number of _UnsyncedEvents to create per host. If set
to -1 (default), creates a random number of _UnsyncedEvents.
Returns:
A sorted list of the randomly generated host IDs.<|endoftext|> |
06932b4d51b82e9f8fc3be30ce39d091e615d27762ec2252bebe1549d80286c8 | @abc.abstractmethod
def create_registered_limits(self, registered_limits):
'Create new registered limits.\n\n :param registered_limits: a list of dictionaries representing limits to\n create.\n\n :returns: all the newly created registered limits.\n :raises keystone.exception.Conflict: If a duplicate registered limit\n exists.\n\n '
raise exception.NotImplemented() | Create new registered limits.
:param registered_limits: a list of dictionaries representing limits to
create.
:returns: all the newly created registered limits.
:raises keystone.exception.Conflict: If a duplicate registered limit
exists. | keystone/limit/backends/base.py | create_registered_limits | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def create_registered_limits(self, registered_limits):
'Create new registered limits.\n\n :param registered_limits: a list of dictionaries representing limits to\n create.\n\n :returns: all the newly created registered limits.\n :raises keystone.exception.Conflict: If a duplicate registered limit\n exists.\n\n '
raise exception.NotImplemented() | @abc.abstractmethod
def create_registered_limits(self, registered_limits):
'Create new registered limits.\n\n :param registered_limits: a list of dictionaries representing limits to\n create.\n\n :returns: all the newly created registered limits.\n :raises keystone.exception.Conflict: If a duplicate registered limit\n exists.\n\n '
raise exception.NotImplemented()<|docstring|>Create new registered limits.
:param registered_limits: a list of dictionaries representing limits to
create.
:returns: all the newly created registered limits.
:raises keystone.exception.Conflict: If a duplicate registered limit
exists.<|endoftext|> |
0b228f45c3a0c47c7d8d7470459ffe6acaec11462589e10f136d22c1cafb0506 | @abc.abstractmethod
def update_registered_limits(self, registered_limits):
"Update existing registered limits.\n\n :param registered_limits: a list of dictionaries representing limits to\n update.\n\n :returns: all the registered limits.\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n :raises keystone.exception.Conflict: If update to a duplicate\n registered limit.\n\n "
raise exception.NotImplemented() | Update existing registered limits.
:param registered_limits: a list of dictionaries representing limits to
update.
:returns: all the registered limits.
:raises keystone.exception.RegisteredLimitNotFound: If registered limit
doesn't exist.
:raises keystone.exception.Conflict: If update to a duplicate
registered limit. | keystone/limit/backends/base.py | update_registered_limits | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def update_registered_limits(self, registered_limits):
"Update existing registered limits.\n\n :param registered_limits: a list of dictionaries representing limits to\n update.\n\n :returns: all the registered limits.\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n :raises keystone.exception.Conflict: If update to a duplicate\n registered limit.\n\n "
raise exception.NotImplemented() | @abc.abstractmethod
def update_registered_limits(self, registered_limits):
"Update existing registered limits.\n\n :param registered_limits: a list of dictionaries representing limits to\n update.\n\n :returns: all the registered limits.\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n :raises keystone.exception.Conflict: If update to a duplicate\n registered limit.\n\n "
raise exception.NotImplemented()<|docstring|>Update existing registered limits.
:param registered_limits: a list of dictionaries representing limits to
update.
:returns: all the registered limits.
:raises keystone.exception.RegisteredLimitNotFound: If registered limit
doesn't exist.
:raises keystone.exception.Conflict: If update to a duplicate
registered limit.<|endoftext|> |
e9fa4c875304afd18c781eb1c021bcc3480639fa0f7602c1074cbac5c82ee56c | @abc.abstractmethod
def list_registered_limits(self, hints):
'List all registered limits.\n\n :param hints: contains the list of filters yet to be satisfied.\n Any filters satisfied here will be removed so that\n the caller will know if any filters remain.\n\n :returns: a list of dictionaries or an empty registered limit.\n\n '
raise exception.NotImplemented() | List all registered limits.
:param hints: contains the list of filters yet to be satisfied.
Any filters satisfied here will be removed so that
the caller will know if any filters remain.
:returns: a list of dictionaries or an empty list. | keystone/limit/backends/base.py | list_registered_limits | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def list_registered_limits(self, hints):
'List all registered limits.\n\n :param hints: contains the list of filters yet to be satisfied.\n Any filters satisfied here will be removed so that\n the caller will know if any filters remain.\n\n :returns: a list of dictionaries or an empty registered limit.\n\n '
raise exception.NotImplemented() | @abc.abstractmethod
def list_registered_limits(self, hints):
'List all registered limits.\n\n :param hints: contains the list of filters yet to be satisfied.\n Any filters satisfied here will be removed so that\n the caller will know if any filters remain.\n\n :returns: a list of dictionaries or an empty registered limit.\n\n '
raise exception.NotImplemented()<|docstring|>List all registered limits.
:param hints: contains the list of filters yet to be satisfied.
Any filters satisfied here will be removed so that
the caller will know if any filters remain.
:returns: a list of dictionaries or an empty list.<|endoftext|>
bfa39db1a007baf86ba28b790d0b364856c24fc20cd4277b509bdd0451ebb461 | @abc.abstractmethod
def get_registered_limit(self, registered_limit_id):
"Get a registered limit.\n\n :param registered_limit_id: the registered limit id to get.\n\n :returns: a dictionary representing a registered limit reference.\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n\n "
raise exception.NotImplemented() | Get a registered limit.
:param registered_limit_id: the registered limit id to get.
:returns: a dictionary representing a registered limit reference.
:raises keystone.exception.RegisteredLimitNotFound: If registered limit
doesn't exist. | keystone/limit/backends/base.py | get_registered_limit | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def get_registered_limit(self, registered_limit_id):
"Get a registered limit.\n\n :param registered_limit_id: the registered limit id to get.\n\n :returns: a dictionary representing a registered limit reference.\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n\n "
raise exception.NotImplemented() | @abc.abstractmethod
def get_registered_limit(self, registered_limit_id):
"Get a registered limit.\n\n :param registered_limit_id: the registered limit id to get.\n\n :returns: a dictionary representing a registered limit reference.\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n\n "
raise exception.NotImplemented()<|docstring|>Get a registered limit.
:param registered_limit_id: the registered limit id to get.
:returns: a dictionary representing a registered limit reference.
:raises keystone.exception.RegisteredLimitNotFound: If registered limit
doesn't exist.<|endoftext|> |
063edd65ba733ac55455ad52380da0ae467ac20f0fc4539f2d5160d4b6826a4b | @abc.abstractmethod
def delete_registered_limit(self, registered_limit_id):
"Delete an existing registered limit.\n\n :param registered_limit_id: the registered limit id to delete.\n\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n\n "
raise exception.NotImplemented() | Delete an existing registered limit.
:param registered_limit_id: the registered limit id to delete.
:raises keystone.exception.RegisteredLimitNotFound: If registered limit
doesn't exist. | keystone/limit/backends/base.py | delete_registered_limit | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def delete_registered_limit(self, registered_limit_id):
"Delete an existing registered limit.\n\n :param registered_limit_id: the registered limit id to delete.\n\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n\n "
raise exception.NotImplemented() | @abc.abstractmethod
def delete_registered_limit(self, registered_limit_id):
"Delete an existing registered limit.\n\n :param registered_limit_id: the registered limit id to delete.\n\n :raises keystone.exception.RegisteredLimitNotFound: If registered limit\n doesn't exist.\n\n "
raise exception.NotImplemented()<|docstring|>Delete an existing registered limit.
:param registered_limit_id: the registered limit id to delete.
:raises keystone.exception.RegisteredLimitNotFound: If registered limit
doesn't exist.<|endoftext|> |
6ab3d2d4100bb91704dbf521ef3864a247d937589ad9d2dadef620d6e15b8b7b | @abc.abstractmethod
def create_limits(self, limits):
'Create new limits.\n\n :param limits: a list of dictionaries representing limits to create.\n\n :returns: all the newly created limits.\n :raises keystone.exception.Conflict: If a duplicate limit exists.\n :raises keystone.exception.NoLimitReference: If no reference registered\n limit exists.\n\n '
raise exception.NotImplemented() | Create new limits.
:param limits: a list of dictionaries representing limits to create.
:returns: all the newly created limits.
:raises keystone.exception.Conflict: If a duplicate limit exists.
:raises keystone.exception.NoLimitReference: If no reference registered
limit exists. | keystone/limit/backends/base.py | create_limits | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def create_limits(self, limits):
'Create new limits.\n\n :param limits: a list of dictionaries representing limits to create.\n\n :returns: all the newly created limits.\n :raises keystone.exception.Conflict: If a duplicate limit exists.\n :raises keystone.exception.NoLimitReference: If no reference registered\n limit exists.\n\n '
raise exception.NotImplemented() | @abc.abstractmethod
def create_limits(self, limits):
'Create new limits.\n\n :param limits: a list of dictionaries representing limits to create.\n\n :returns: all the newly created limits.\n :raises keystone.exception.Conflict: If a duplicate limit exists.\n :raises keystone.exception.NoLimitReference: If no reference registered\n limit exists.\n\n '
raise exception.NotImplemented()<|docstring|>Create new limits.
:param limits: a list of dictionaries representing limits to create.
:returns: all the newly created limits.
:raises keystone.exception.Conflict: If a duplicate limit exists.
:raises keystone.exception.NoLimitReference: If no reference registered
limit exists.<|endoftext|> |
f6d849ce7a2a4a8ce84160d87f225da52b0afab4f2d858fff1965fc6f885766a | @abc.abstractmethod
def update_limits(self, limits):
"Update existing limits.\n\n :param limits: a list of dictionaries representing limits to update.\n\n :returns: all the limits.\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n :raises keystone.exception.Conflict: If update to a duplicate limit.\n\n "
raise exception.NotImplemented() | Update existing limits.
:param limits: a list of dictionaries representing limits to update.
:returns: all the limits.
:raises keystone.exception.LimitNotFound: If limit doesn't
exist.
:raises keystone.exception.Conflict: If update to a duplicate limit. | keystone/limit/backends/base.py | update_limits | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def update_limits(self, limits):
"Update existing limits.\n\n :param limits: a list of dictionaries representing limits to update.\n\n :returns: all the limits.\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n :raises keystone.exception.Conflict: If update to a duplicate limit.\n\n "
raise exception.NotImplemented() | @abc.abstractmethod
def update_limits(self, limits):
"Update existing limits.\n\n :param limits: a list of dictionaries representing limits to update.\n\n :returns: all the limits.\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n :raises keystone.exception.Conflict: If update to a duplicate limit.\n\n "
raise exception.NotImplemented()<|docstring|>Update existing limits.
:param limits: a list of dictionaries representing limits to update.
:returns: all the limits.
:raises keystone.exception.LimitNotFound: If limit doesn't
exist.
:raises keystone.exception.Conflict: If update to a duplicate limit.<|endoftext|> |
97570cf2aceb74b7489a2a2a99ee72789c633fe02b514aa5a450ca03499e20a0 | @abc.abstractmethod
def list_limits(self, hints):
'List all limits.\n\n :param hints: contains the list of filters yet to be satisfied.\n Any filters satisfied here will be removed so that\n the caller will know if any filters remain.\n\n :returns: a list of dictionaries or an empty list.\n\n '
raise exception.NotImplemented() | List all limits.
:param hints: contains the list of filters yet to be satisfied.
Any filters satisfied here will be removed so that
the caller will know if any filters remain.
:returns: a list of dictionaries or an empty list. | keystone/limit/backends/base.py | list_limits | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def list_limits(self, hints):
'List all limits.\n\n :param hints: contains the list of filters yet to be satisfied.\n Any filters satisfied here will be removed so that\n the caller will know if any filters remain.\n\n :returns: a list of dictionaries or an empty list.\n\n '
raise exception.NotImplemented() | @abc.abstractmethod
def list_limits(self, hints):
'List all limits.\n\n :param hints: contains the list of filters yet to be satisfied.\n Any filters satisfied here will be removed so that\n the caller will know if any filters remain.\n\n :returns: a list of dictionaries or an empty list.\n\n '
raise exception.NotImplemented()<|docstring|>List all limits.
:param hints: contains the list of filters yet to be satisfied.
Any filters satisfied here will be removed so that
the caller will know if any filters remain.
:returns: a list of dictionaries or an empty list.<|endoftext|> |
49ae235165fa57dcff2b1ce3ae68029e5916875385ae911f33c930a9c348787d | @abc.abstractmethod
def get_limit(self, limit_id):
"Get a limit.\n\n :param limit_id: the limit id to get.\n\n :returns: a dictionary representing a limit reference.\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n\n "
raise exception.NotImplemented() | Get a limit.
:param limit_id: the limit id to get.
:returns: a dictionary representing a limit reference.
:raises keystone.exception.LimitNotFound: If limit doesn't
exist. | keystone/limit/backends/base.py | get_limit | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def get_limit(self, limit_id):
"Get a limit.\n\n :param limit_id: the limit id to get.\n\n :returns: a dictionary representing a limit reference.\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n\n "
raise exception.NotImplemented() | @abc.abstractmethod
def get_limit(self, limit_id):
"Get a limit.\n\n :param limit_id: the limit id to get.\n\n :returns: a dictionary representing a limit reference.\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n\n "
raise exception.NotImplemented()<|docstring|>Get a limit.
:param limit_id: the limit id to get.
:returns: a dictionary representing a limit reference.
:raises keystone.exception.LimitNotFound: If limit doesn't
exist.<|endoftext|> |
4249d2f3c3d5dbf603a532cbcf40a387e8a1ba3ebd08a88d20694d1e0cb5e23d | @abc.abstractmethod
def delete_limit(self, limit_id):
"Delete an existing limit.\n\n :param limit_id: the limit id to delete.\n\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n\n "
raise exception.NotImplemented() | Delete an existing limit.
:param limit_id: the limit id to delete.
:raises keystone.exception.LimitNotFound: If limit doesn't
exist. | keystone/limit/backends/base.py | delete_limit | chengangA/keystone1 | 0 | python | @abc.abstractmethod
def delete_limit(self, limit_id):
"Delete an existing limit.\n\n :param limit_id: the limit id to delete.\n\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n\n "
raise exception.NotImplemented() | @abc.abstractmethod
def delete_limit(self, limit_id):
"Delete an existing limit.\n\n :param limit_id: the limit id to delete.\n\n :raises keystone.exception.LimitNotFound: If limit doesn't\n exist.\n\n "
raise exception.NotImplemented()<|docstring|>Delete an existing limit.
:param limit_id: the limit id to delete.
:raises keystone.exception.LimitNotFound: If limit doesn't
exist.<|endoftext|> |
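The keystone rows above only fix signatures and error contracts for a unified-limit driver. A toy in-memory implementation of the create/get/delete trio makes the NotFound contract concrete; this is a hypothetical sketch (the `LimitNotFound` class here is a stand-in for `keystone.exception.LimitNotFound`, and the `hints` machinery is ignored):

```python
class LimitNotFound(Exception):
    """Stand-in for keystone.exception.LimitNotFound."""


class InMemoryLimits:
    # Minimal sketch: limits keyed by id, mirroring the driver contract
    # that get/delete raise *NotFound for unknown ids.
    def __init__(self):
        self._limits = {}

    def create_limits(self, limits):
        for limit in limits:
            self._limits[limit['id']] = limit
        return list(self._limits.values())

    def get_limit(self, limit_id):
        try:
            return self._limits[limit_id]
        except KeyError:
            raise LimitNotFound(limit_id)

    def delete_limit(self, limit_id):
        if limit_id not in self._limits:
            raise LimitNotFound(limit_id)
        del self._limits[limit_id]
```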
605b44af4b58f2555a7b8a0d7d22d89bc5e44bc05c67b1635dc0ba5413eeccf4 | def check(self, run, data, cnt):
'\n check conditional if cnt is greater than start count\n cnt-start count is greater than 0\n and cnt-start count is divisible by frequency\n\n returns True if check passes. i.e. Write checks to trip on success. To terminate if Ar36 intensity is less than\n x use Ar36<x\n\n :param run: ``AutomatedRun``\n :param data: 2-tuple. (keys, signals) where keys==detector names, signals== measured intensities\n :param cnt: int\n :return: True if check passes. i.e. Write checks to trip on success.\n\n '
if self._should_check(run, data, cnt):
return self._check(run, data, cnt) | check conditional if cnt is greater than start count
cnt-start count is greater than 0
and cnt-start count is divisible by frequency
returns True if check passes. i.e. Write checks to trip on success. To terminate if Ar36 intensity is less than
x use Ar36<x
:param run: ``AutomatedRun``
:param data: 2-tuple. (keys, signals) where keys==detector names, signals== measured intensities
:param cnt: int
:return: True if check passes. i.e. Write checks to trip on success. | pychron/experiment/conditional/conditional.py | check | ASUPychron/pychron | 31 | python | def check(self, run, data, cnt):
'\n check conditional if cnt is greater than start count\n cnt-start count is greater than 0\n and cnt-start count is divisible by frequency\n\n returns True if check passes. i.e. Write checks to trip on success. To terminate if Ar36 intensity is less than\n x use Ar36<x\n\n :param run: ``AutomatedRun``\n :param data: 2-tuple. (keys, signals) where keys==detector names, signals== measured intensities\n :param cnt: int\n :return: True if check passes. i.e. Write checks to trip on success.\n\n '
if self._should_check(run, data, cnt):
return self._check(run, data, cnt) | def check(self, run, data, cnt):
'\n check conditional if cnt is greater than start count\n cnt-start count is greater than 0\n and cnt-start count is divisible by frequency\n\n returns True if check passes. i.e. Write checks to trip on success. To terminate if Ar36 intensity is less than\n x use Ar36<x\n\n :param run: ``AutomatedRun``\n :param data: 2-tuple. (keys, signals) where keys==detector names, signals== measured intensities\n :param cnt: int\n :return: True if check passes. i.e. Write checks to trip on success.\n\n '
if self._should_check(run, data, cnt):
return self._check(run, data, cnt)<|docstring|>check conditional if cnt is greater than start count
cnt-start count is greater than 0
and cnt-start count is divisable by frequency
returns True if check passes. e.i. Write checks to trip on success. To terminate if Ar36 intensity is less than
x use Ar36<x
:param run: ``AutomatedRun``
:param data: 2-tuple. (keys, signals) where keys==detector names, signals== measured intensities
:param cnt: int
:return: True if check passes. e.i. Write checks to trip on success.<|endoftext|> |
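The docstring above describes the gate that `_should_check` applies (count past a start count, offset divisible by a frequency), but the gate itself is not in this record. A minimal sketch of that logic, with hypothetical `start_count`/`frequency` parameters:

```python
def should_check(cnt, start_count, frequency):
    """Return True when cnt is past start_count and the offset
    from start_count lands on a multiple of frequency."""
    offset = cnt - start_count
    return offset > 0 and offset % frequency == 0

# e.g. start checking after count 10, then every 5 counts
trigger_counts = [c for c in range(30) if should_check(c, 10, 5)]  # [15, 20, 25]
```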
54855dd32a29c5dc09b71f3c6978d8acb0ebdff48be6ecb20471871264e25154
def _check(self, run, data, cnt, verbose=False):
    """
    make a teststr and context from the run and data
    evaluate the teststr with the context
    """
    teststr, ctx = self._make_context(run, data)
    self._teststr, self._ctx = teststr, ctx
    self.value_context = vc = pprint.pformat(ctx, width=1)
    self.debug('Count: {} testing {}'.format(cnt, teststr))
    if verbose:
        self.debug('attribute context {}'.format(pprint.pformat(self._attr_dict(), width=1)))
    msg = 'evaluate ot="{}" t="{}", ctx="{}"'.format(self.teststr, teststr, vc)
    self.debug(msg)
    if teststr and ctx:
        if eval(teststr, ctx):
            self.trips += 1
            self.debug('condition {} is true trips={}/{}'.format(teststr, self.trips, self.ntrips))
            if self.trips >= self.ntrips:
                self.tripped = True
                self.message = 'condition {} is True'.format(teststr)
                self.trips = 0
                return True
        else:
            self.trips = 0
pychron/experiment/conditional/conditional.py | _check | ASUPychron/pychron | 31 | python
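`_check` builds a test string and a value context, then trips the conditional once `eval` has returned true `ntrips` times. A stripped-down sketch of that trip-counting evaluation (the class and names here are illustrative, not pychron's API):

```python
class TripCounter:
    """Trip a conditional after a test string evaluates true ntrips times."""

    def __init__(self, teststr, ntrips=1):
        self.teststr = teststr
        self.ntrips = ntrips
        self.trips = 0
        self.tripped = False

    def check(self, ctx):
        # evaluate the test string against measured values, e.g. {'Ar36': 0.5};
        # builtins are stripped so only the context names are visible
        if eval(self.teststr, {'__builtins__': {}}, ctx):
            self.trips += 1
            if self.trips >= self.ntrips:
                self.tripped = True
                self.trips = 0
                return True
        else:
            self.trips = 0
        return False

tc = TripCounter('Ar36 < 1', ntrips=2)
tc.check({'Ar36': 0.5})        # returns False (trip 1 of 2)
hit = tc.check({'Ar36': 0.5})  # returns True, conditional tripped
```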
1c81ccbb28d95766a119f7dfee38695dab17a419e4fc6b88f4fe6cc1f003db07
def perform(self, script):
    """
    perform the specified action.

    use ``MeasurementPyScript.execute_snippet`` to perform desired action

    :param script: MeasurementPyScript
    """
    action = self.action
    if isinstance(action, str):
        try:
            script.execute_snippet(action)
        except BaseException:
            self.warning('Invalid action: "{}"'.format(action))
    elif hasattr(action, '__call__'):
        action()
pychron/experiment/conditional/conditional.py | perform | ASUPychron/pychron | 31 | python
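`perform` accepts either a snippet string or a callable. A minimal sketch of the same string-or-callable dispatch, using plain `exec` as a stand-in for `MeasurementPyScript.execute_snippet`:

```python
results = []

def perform(action):
    """Dispatch an action: exec source strings, invoke callables."""
    if isinstance(action, str):
        try:
            exec(action)  # stand-in for MeasurementPyScript.execute_snippet
        except BaseException:
            print('Invalid action: "{}"'.format(action))
    elif callable(action):
        action()

perform("results.append('snippet ran')")         # string -> executed as a snippet
perform(lambda: results.append('callable ran'))  # callable -> invoked directly
```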
4d9161425890478b565a8cb8dc4e00f297cb71de21cbdd19ea856897bf957e1d
@property
def srcnat(self):
    """Perform NAT if only the source is in the private network.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._srcnat
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | srcnat | guardicore/nitro-python | 0 | python

9aacab5f68d759bf13244f6440cbe8ec9e5dab1fea38f8b5352e9c4d6a4043c1
@srcnat.setter
def srcnat(self, srcnat):
    """Perform NAT if only the source is in the private network.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._srcnat = srcnat
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | srcnat | guardicore/nitro-python | 0 | python
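Every field in this `l3param` resource repeats the same shape: a property that returns `self._name` and a setter that assigns it, each wrapped in a try/except that only re-raises. As a design note, a descriptor could express that pattern once; `NitroField` and `L3ParamSketch` below are illustrative, not part of nitro-python:

```python
class NitroField:
    """Descriptor mirroring the repeated getter/setter boilerplate:
    each public field proxies a private attribute of the same name."""

    def __set_name__(self, owner, name):
        self._attr = '_' + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self._attr, None)

    def __set__(self, obj, value):
        setattr(obj, self._attr, value)

class L3ParamSketch:
    srcnat = NitroField()                # ENABLED/DISABLED flag
    icmpgenratethreshold = NitroField()  # ICMP pkts per 10ms threshold

p = L3ParamSketch()
p.srcnat = 'DISABLED'  # stored on p._srcnat
```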
b75cea9e4c2878a8ae68d3fe3280c915642ddbff839f2d63e435dbc1c666c9c3
@property
def icmpgenratethreshold(self):
    """NS generated ICMP pkts per 10ms rate threshold.<br/>Default value: 100."""
    try:
        return self._icmpgenratethreshold
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | icmpgenratethreshold | guardicore/nitro-python | 0 | python

f6e07bbe3f46219d7b24d8e95b425a913980642b6b3ce3b2922672a8306346f0
@icmpgenratethreshold.setter
def icmpgenratethreshold(self, icmpgenratethreshold):
    """NS generated ICMP pkts per 10ms rate threshold.<br/>Default value: 100"""
    try:
        self._icmpgenratethreshold = icmpgenratethreshold
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | icmpgenratethreshold | guardicore/nitro-python | 0 | python
0dd32f7b86704518a1bca7a316e4665307c5e04b3c69d80899d913feafc7d12d
@property
def overridernat(self):
    """USNIP/USIP settings override RNAT settings for configured service/virtual server traffic.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._overridernat
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | overridernat | guardicore/nitro-python | 0 | python

f959e1d8cb2494d65b59281b74c1e0eb32c3ffde82e87fe83fae2ef5d9d17253
@overridernat.setter
def overridernat(self, overridernat):
    """USNIP/USIP settings override RNAT settings for configured service/virtual server traffic.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._overridernat = overridernat
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | overridernat | guardicore/nitro-python | 0 | python
ecf5bdba98075500588c35944e285b9e605ab67f70e76363c5de4e93e27833bd
@property
def dropdfflag(self):
    """Enable dropping the IP DF flag.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._dropdfflag
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | dropdfflag | guardicore/nitro-python | 0 | python

c64e9e04acbb11a1783830d36729d53079e1ffe1f1ebf77cad5134f957858369
@dropdfflag.setter
def dropdfflag(self, dropdfflag):
    """Enable dropping the IP DF flag.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._dropdfflag = dropdfflag
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | dropdfflag | guardicore/nitro-python | 0 | python

7c62d34fd24c774371e53e6fc1db44523f86a867a51d499a00974eec8490f052
@property
def miproundrobin(self):
    """Enable round robin usage of mapped IPs.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._miproundrobin
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | miproundrobin | guardicore/nitro-python | 0 | python

6646d2399016e62e2aa46699b4ff71d4788d5f196d2640dbce61e8eaeaff1742
@miproundrobin.setter
def miproundrobin(self, miproundrobin):
    """Enable round robin usage of mapped IPs.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._miproundrobin = miproundrobin
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | miproundrobin | guardicore/nitro-python | 0 | python
9e3d7c35a30771f7a0d2172b24434e6dc7eb1b89df1ed9a3f75dbddff8925bb6
@property
def externalloopback(self):
    """Enable external loopback.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._externalloopback
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | externalloopback | guardicore/nitro-python | 0 | python

f3bf054c0ea7b66b1c5c9d16571bc2f7bad40e7ae4a8eaa038d14b19284074c2
@externalloopback.setter
def externalloopback(self, externalloopback):
    """Enable external loopback.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._externalloopback = externalloopback
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | externalloopback | guardicore/nitro-python | 0 | python

d053500e9f45a21ed9c9a51e39f7b1c87854a18d89b9f4cc4589263ae1c37de0
@property
def tnlpmtuwoconn(self):
    """Enable/Disable learning PMTU of IP tunnel when ICMP error does not contain connection information.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._tnlpmtuwoconn
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | tnlpmtuwoconn | guardicore/nitro-python | 0 | python

50e787d46190b9986142b255d8515780c11b0a752a5eaef9258812c531b491f5
@tnlpmtuwoconn.setter
def tnlpmtuwoconn(self, tnlpmtuwoconn):
    """Enable/Disable learning PMTU of IP tunnel when ICMP error does not contain connection information.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._tnlpmtuwoconn = tnlpmtuwoconn
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | tnlpmtuwoconn | guardicore/nitro-python | 0 | python
cafabd878435ac4a5c3f5cdec494f26044e9fe93f36bad04399cd77d22f6c66a
@property
def usipserverstraypkt(self):
    """Enable detection of stray server side pkts in USIP mode.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._usipserverstraypkt
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | usipserverstraypkt | guardicore/nitro-python | 0 | python

203e21ed7156d8c75ee91189ab8b579989286cfbd771ab79fff17cc570fd5fd1
@usipserverstraypkt.setter
def usipserverstraypkt(self, usipserverstraypkt):
    """Enable detection of stray server side pkts in USIP mode.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._usipserverstraypkt = usipserverstraypkt
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | usipserverstraypkt | guardicore/nitro-python | 0 | python

04148975c71d1e8743abd65f5369e5665b46eb53e3fc6191f273475b0984f20b
@property
def forwardicmpfragments(self):
    """Enable forwarding of ICMP fragments.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._forwardicmpfragments
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | forwardicmpfragments | guardicore/nitro-python | 0 | python

fb75c46f674bc7d23f75df0cca5677e92a8f4e44096f9fbdc36d8c6eaa9f889c
@forwardicmpfragments.setter
def forwardicmpfragments(self, forwardicmpfragments):
    """Enable forwarding of ICMP fragments.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._forwardicmpfragments = forwardicmpfragments
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | forwardicmpfragments | guardicore/nitro-python | 0 | python
3767e4f679444e76e5f1ccc700869ac75dd075306b732f7ddd76e86400c3d96e
@property
def dropipfragments(self):
    """Enable dropping of IP fragments.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._dropipfragments
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | dropipfragments | guardicore/nitro-python | 0 | python

6df9122ca5bbe9cc88908aa2d5f9ece4744bccbef1de3cc5726ad94f06e50f22
@dropipfragments.setter
def dropipfragments(self, dropipfragments):
    """Enable dropping of IP fragments.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._dropipfragments = dropipfragments
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | dropipfragments | guardicore/nitro-python | 0 | python

3d633fcd591d9dd3698901167cdad4f4fe37f78cc9c3f183e1f159b407136840
@property
def acllogtime(self):
    """Parameter to tune acl logging time.<br/>Default value: 5000."""
    try:
        return self._acllogtime
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | acllogtime | guardicore/nitro-python | 0 | python

618e7d03cbea57cb7f28fd922ad5f0266b4b2ee3257d2f734ba47da9b4702ffb
@acllogtime.setter
def acllogtime(self, acllogtime):
    """Parameter to tune acl logging time.<br/>Default value: 5000"""
    try:
        self._acllogtime = acllogtime
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | acllogtime | guardicore/nitro-python | 0 | python
e5622852c8b19111c3f5307a8759ae316dc21a49f251cd180bffadbdc579a65e
@property
def implicitaclallow(self):
    """Do not apply ACLs for internal ports.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._implicitaclallow
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | implicitaclallow | guardicore/nitro-python | 0 | python

16cf62292c82034842bc98ddbffcce81bb5d5b3642d19d7e07d71812d8c333f0
@implicitaclallow.setter
def implicitaclallow(self, implicitaclallow):
    """Do not apply ACLs for internal ports.<br/>Default value: ENABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._implicitaclallow = implicitaclallow
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | implicitaclallow | guardicore/nitro-python | 0 | python

0bafa1cd16a8c49a55f321a978fe4bbe9ff5b27817fb73073f138f4281965627
@property
def dynamicrouting(self):
    """Enable/Disable Dynamic routing on partition. This configuration is not applicable to default partition.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._dynamicrouting
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | dynamicrouting | guardicore/nitro-python | 0 | python

7e4470c9aff3a4cd19bbfa0878e8c1990894740c80f901756981558341c78ea7
@dynamicrouting.setter
def dynamicrouting(self, dynamicrouting):
    """Enable/Disable Dynamic routing on partition. This configuration is not applicable to default partition.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._dynamicrouting = dynamicrouting
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | dynamicrouting | guardicore/nitro-python | 0 | python
08bb293e7ce00ea752510b33ef295a7eadd1c7dea3b41294c7dbf66c8dbb47b6
@property
def ipv6dynamicrouting(self):
    """Enable/Disable IPv6 Dynamic routing.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._ipv6dynamicrouting
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | ipv6dynamicrouting | guardicore/nitro-python | 0 | python

52ca4b571344d0e689921e34bc612dc3dc8766429dc72fdba7e70af34a6bf232
@ipv6dynamicrouting.setter
def ipv6dynamicrouting(self, ipv6dynamicrouting):
    """Enable/Disable IPv6 Dynamic routing.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._ipv6dynamicrouting = ipv6dynamicrouting
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | ipv6dynamicrouting | guardicore/nitro-python | 0 | python

092a1f96a9ba2d18e89c6a4b746e87e2d51a6a90f3e571f139222a24a8847da2
@property
def allowclasseipv4(self):
    """Enable/Disable IPv4 Class E address clients.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED."""
    try:
        return self._allowclasseipv4
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | allowclasseipv4 | guardicore/nitro-python | 0 | python

01d333705833f6cff7c95b313caf933279e55fef8ea61e5bcefd61546da0ae6a
@allowclasseipv4.setter
def allowclasseipv4(self, allowclasseipv4):
    """Enable/Disable IPv4 Class E address clients.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED"""
    try:
        self._allowclasseipv4 = allowclasseipv4
    except Exception as e:
        raise e
nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | allowclasseipv4 | guardicore/nitro-python | 0 | python
raise e | @allowclasseipv4.setter
def allowclasseipv4(self, allowclasseipv4):
'\n\t\t'
try:
self._allowclasseipv4 = allowclasseipv4
except Exception as e:
raise e<|docstring|>Enable/Disable IPv4 Class E address clients.<br/>Default value: DISABLED<br/>Possible values = ENABLED, DISABLED<|endoftext|> |
fb6026291e4f308d8f6c3fc05be3d71228c2b7aff433efa9d17d5738138dbc11 | def _get_nitro_response(self, service, response):
' converts nitro response into object and returns the object array in case of get request.\n\t\t'
try:
result = service.payload_formatter.string_to_resource(l3param_response, response, self.__class__.__name__)
if (result.errorcode != 0):
if (result.errorcode == 444):
service.clear_session(self)
if result.severity:
if (result.severity == 'ERROR'):
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
else:
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
return result.l3param
except Exception as e:
raise e | converts nitro response into object and returns the object array in case of get request. | nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | _get_nitro_response | guardicore/nitro-python | 0 | python | def _get_nitro_response(self, service, response):
' \n\t\t'
try:
result = service.payload_formatter.string_to_resource(l3param_response, response, self.__class__.__name__)
if (result.errorcode != 0):
if (result.errorcode == 444):
service.clear_session(self)
if result.severity:
if (result.severity == 'ERROR'):
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
else:
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
return result.l3param
except Exception as e:
raise e | def _get_nitro_response(self, service, response):
' \n\t\t'
try:
result = service.payload_formatter.string_to_resource(l3param_response, response, self.__class__.__name__)
if (result.errorcode != 0):
if (result.errorcode == 444):
service.clear_session(self)
if result.severity:
if (result.severity == 'ERROR'):
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
else:
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
return result.l3param
except Exception as e:
raise e<|docstring|>converts nitro response into object and returns the object array in case of get request.<|endoftext|> |
b804ba6ed8e2d664db8fbce626f3836b21a83d8a847161f6b8a39c56b3beebb2 | def _get_object_name(self):
' Returns the value of object identifier argument\n\t\t'
try:
return 0
except Exception as e:
raise e | Returns the value of object identifier argument | nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | _get_object_name | guardicore/nitro-python | 0 | python | def _get_object_name(self):
' \n\t\t'
try:
return 0
except Exception as e:
raise e | def _get_object_name(self):
' \n\t\t'
try:
return 0
except Exception as e:
raise e<|docstring|>Returns the value of object identifier argument<|endoftext|> |
9511904ac13eaf827a408ce0004d9a763028bf9a008ce935badd72fab321ad5e | @classmethod
def filter_update_parameters(cls, resource):
' Use this function to create a resource with only update operation specific parameters.\n\t\t'
updateresource = l3param()
updateresource.srcnat = resource.srcnat
updateresource.icmpgenratethreshold = resource.icmpgenratethreshold
updateresource.overridernat = resource.overridernat
updateresource.dropdfflag = resource.dropdfflag
updateresource.miproundrobin = resource.miproundrobin
updateresource.externalloopback = resource.externalloopback
updateresource.tnlpmtuwoconn = resource.tnlpmtuwoconn
updateresource.usipserverstraypkt = resource.usipserverstraypkt
updateresource.forwardicmpfragments = resource.forwardicmpfragments
updateresource.dropipfragments = resource.dropipfragments
updateresource.acllogtime = resource.acllogtime
updateresource.implicitaclallow = resource.implicitaclallow
updateresource.dynamicrouting = resource.dynamicrouting
updateresource.ipv6dynamicrouting = resource.ipv6dynamicrouting
updateresource.allowclasseipv4 = resource.allowclasseipv4
return updateresource | Use this function to create a resource with only update operation specific parameters. | nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | filter_update_parameters | guardicore/nitro-python | 0 | python | @classmethod
def filter_update_parameters(cls, resource):
' \n\t\t'
updateresource = l3param()
updateresource.srcnat = resource.srcnat
updateresource.icmpgenratethreshold = resource.icmpgenratethreshold
updateresource.overridernat = resource.overridernat
updateresource.dropdfflag = resource.dropdfflag
updateresource.miproundrobin = resource.miproundrobin
updateresource.externalloopback = resource.externalloopback
updateresource.tnlpmtuwoconn = resource.tnlpmtuwoconn
updateresource.usipserverstraypkt = resource.usipserverstraypkt
updateresource.forwardicmpfragments = resource.forwardicmpfragments
updateresource.dropipfragments = resource.dropipfragments
updateresource.acllogtime = resource.acllogtime
updateresource.implicitaclallow = resource.implicitaclallow
updateresource.dynamicrouting = resource.dynamicrouting
updateresource.ipv6dynamicrouting = resource.ipv6dynamicrouting
updateresource.allowclasseipv4 = resource.allowclasseipv4
return updateresource | @classmethod
def filter_update_parameters(cls, resource):
' \n\t\t'
updateresource = l3param()
updateresource.srcnat = resource.srcnat
updateresource.icmpgenratethreshold = resource.icmpgenratethreshold
updateresource.overridernat = resource.overridernat
updateresource.dropdfflag = resource.dropdfflag
updateresource.miproundrobin = resource.miproundrobin
updateresource.externalloopback = resource.externalloopback
updateresource.tnlpmtuwoconn = resource.tnlpmtuwoconn
updateresource.usipserverstraypkt = resource.usipserverstraypkt
updateresource.forwardicmpfragments = resource.forwardicmpfragments
updateresource.dropipfragments = resource.dropipfragments
updateresource.acllogtime = resource.acllogtime
updateresource.implicitaclallow = resource.implicitaclallow
updateresource.dynamicrouting = resource.dynamicrouting
updateresource.ipv6dynamicrouting = resource.ipv6dynamicrouting
updateresource.allowclasseipv4 = resource.allowclasseipv4
return updateresource<|docstring|>Use this function to create a resource with only update operation specific parameters.<|endoftext|> |
7b135d33444bd8ba80cb7a0f62360f7595b742602dfc3f26f8b18ea3165940d4 | @classmethod
def update(cls, client, resource):
' Use this API to update l3param.\n\t\t'
try:
if (type(resource) is not list):
updateresource = cls.filter_update_parameters(resource)
return updateresource.update_resource(client)
except Exception as e:
raise e | Use this API to update l3param. | nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | update | guardicore/nitro-python | 0 | python | @classmethod
def update(cls, client, resource):
' \n\t\t'
try:
if (type(resource) is not list):
updateresource = cls.filter_update_parameters(resource)
return updateresource.update_resource(client)
except Exception as e:
raise e | @classmethod
def update(cls, client, resource):
' \n\t\t'
try:
if (type(resource) is not list):
updateresource = cls.filter_update_parameters(resource)
return updateresource.update_resource(client)
except Exception as e:
raise e<|docstring|>Use this API to update l3param.<|endoftext|> |
e8f80b2ae59686481d38caf4b8fee38e308fbbb69fff294f16ac3e9130a1df75 | @classmethod
def unset(cls, client, resource, args):
' Use this API to unset the properties of l3param resource.\n\t\tProperties that need to be unset are specified in args array.\n\t\t'
try:
if (type(resource) is not list):
unsetresource = l3param()
return unsetresource.unset_resource(client, args)
except Exception as e:
raise e | Use this API to unset the properties of l3param resource.
Properties that need to be unset are specified in args array. | nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | unset | guardicore/nitro-python | 0 | python | @classmethod
def unset(cls, client, resource, args):
' Use this API to unset the properties of l3param resource.\n\t\tProperties that need to be unset are specified in args array.\n\t\t'
try:
if (type(resource) is not list):
unsetresource = l3param()
return unsetresource.unset_resource(client, args)
except Exception as e:
raise e | @classmethod
def unset(cls, client, resource, args):
' Use this API to unset the properties of l3param resource.\n\t\tProperties that need to be unset are specified in args array.\n\t\t'
try:
if (type(resource) is not list):
unsetresource = l3param()
return unsetresource.unset_resource(client, args)
except Exception as e:
raise e<|docstring|>Use this API to unset the properties of l3param resource.
Properties that need to be unset are specified in args array.<|endoftext|> |
0ee737481160702ec935e2cab42360353e366e05a93dd36f308cf4bcdb462f22 | @classmethod
def get(cls, client, name='', option_=''):
' Use this API to fetch all the l3param resources that are configured on netscaler.\n\t\t'
try:
if (not name):
obj = l3param()
response = obj.get_resources(client, option_)
return response
except Exception as e:
raise e | Use this API to fetch all the l3param resources that are configured on netscaler. | nssrc/com/citrix/netscaler/nitro/resource/config/network/l3param.py | get | guardicore/nitro-python | 0 | python | @classmethod
def get(cls, client, name=, option_=):
' \n\t\t'
try:
if (not name):
obj = l3param()
response = obj.get_resources(client, option_)
return response
except Exception as e:
raise e | @classmethod
def get(cls, client, name=, option_=):
' \n\t\t'
try:
if (not name):
obj = l3param()
response = obj.get_resources(client, option_)
return response
except Exception as e:
raise e<|docstring|>Use this API to fetch all the l3param resources that are configured on netscaler.<|endoftext|> |
b96690546a7726e83dfeb277b8d9cf2d4009500059dc5d070fc20e4298c410ed | def job_list_fixture():
'Job list fixture\n\n :rtype: dict\n '
return [{'description': 'sleeping is what is do', 'historySummary': {'failureCount': 0, 'lastFailureAt': None, 'lastSuccessAt': None, 'successCount': 0}, 'id': 'snorlax', 'labels': {}, 'run': {'artifacts': [], 'cmd': 'sleep 10', 'cpus': 0.01, 'disk': 0, 'env': {}, 'maxLaunchDelay': 3600, 'mem': 32, 'placement': {'constraints': []}, 'restart': {'policy': 'NEVER'}, 'volumes': []}}, {'description': 'electrifying rodent', 'historySummary': {'failureCount': 0, 'lastFailureAt': None, 'lastSuccessAt': '2017-03-31T14:22:01.541+0000', 'successCount': 1}, 'id': 'pikachu', 'labels': {}, 'run': {'artifacts': [], 'cmd': 'sleep 10', 'cpus': 0.01, 'disk': 0, 'env': {}, 'maxLaunchDelay': 3600, 'mem': 32, 'placement': {'constraints': []}, 'restart': {'policy': 'NEVER'}, 'volumes': []}}] | Job list fixture
:rtype: dict | python/lib/dcoscli/tests/fixtures/metronome.py | job_list_fixture | bamarni/dcos-core-cli | 11 | python | def job_list_fixture():
'Job list fixture\n\n :rtype: dict\n '
return [{'description': 'sleeping is what is do', 'historySummary': {'failureCount': 0, 'lastFailureAt': None, 'lastSuccessAt': None, 'successCount': 0}, 'id': 'snorlax', 'labels': {}, 'run': {'artifacts': [], 'cmd': 'sleep 10', 'cpus': 0.01, 'disk': 0, 'env': {}, 'maxLaunchDelay': 3600, 'mem': 32, 'placement': {'constraints': []}, 'restart': {'policy': 'NEVER'}, 'volumes': []}}, {'description': 'electrifying rodent', 'historySummary': {'failureCount': 0, 'lastFailureAt': None, 'lastSuccessAt': '2017-03-31T14:22:01.541+0000', 'successCount': 1}, 'id': 'pikachu', 'labels': {}, 'run': {'artifacts': [], 'cmd': 'sleep 10', 'cpus': 0.01, 'disk': 0, 'env': {}, 'maxLaunchDelay': 3600, 'mem': 32, 'placement': {'constraints': []}, 'restart': {'policy': 'NEVER'}, 'volumes': []}}] | def job_list_fixture():
'Job list fixture\n\n :rtype: dict\n '
return [{'description': 'sleeping is what is do', 'historySummary': {'failureCount': 0, 'lastFailureAt': None, 'lastSuccessAt': None, 'successCount': 0}, 'id': 'snorlax', 'labels': {}, 'run': {'artifacts': [], 'cmd': 'sleep 10', 'cpus': 0.01, 'disk': 0, 'env': {}, 'maxLaunchDelay': 3600, 'mem': 32, 'placement': {'constraints': []}, 'restart': {'policy': 'NEVER'}, 'volumes': []}}, {'description': 'electrifying rodent', 'historySummary': {'failureCount': 0, 'lastFailureAt': None, 'lastSuccessAt': '2017-03-31T14:22:01.541+0000', 'successCount': 1}, 'id': 'pikachu', 'labels': {}, 'run': {'artifacts': [], 'cmd': 'sleep 10', 'cpus': 0.01, 'disk': 0, 'env': {}, 'maxLaunchDelay': 3600, 'mem': 32, 'placement': {'constraints': []}, 'restart': {'policy': 'NEVER'}, 'volumes': []}}]<|docstring|>Job list fixture
:rtype: dict<|endoftext|> |
809a0f40edf3777ed0905dc6b7976a59e1ac51e88d829a2393985790b1866ab3 | def job_run_fixture():
'Job run fixture\n\n :rtype: dict\n '
return [{'completedAt': None, 'createdAt': '2017-03-31T21:05:30.613+0000', 'id': '20170331210530QHpRU', 'jobId': 'pikachu', 'status': 'ACTIVE', 'tasks': [{'id': 'pikachu_20170331210530QHpRU.c5e4b1e7-1655-11e7-8bd5-6ef119b8e20f', 'startedAt': '2017-03-31T21:05:31.499+0000', 'status': 'TASK_RUNNING'}]}, {'completedAt': None, 'createdAt': '2017-03-31T21:05:32.422+0000', 'id': '20170331210532uxgVF', 'jobId': 'pikachu', 'status': 'ACTIVE', 'tasks': [{'id': 'pikachu_20170331210532uxgVF.c8e324d8-1655-11e7-8bd5-6ef119b8e20f', 'startedAt': '2017-03-31T21:05:36.417+0000', 'status': 'TASK_RUNNING'}]}] | Job run fixture
:rtype: dict | python/lib/dcoscli/tests/fixtures/metronome.py | job_run_fixture | bamarni/dcos-core-cli | 11 | python | def job_run_fixture():
'Job run fixture\n\n :rtype: dict\n '
return [{'completedAt': None, 'createdAt': '2017-03-31T21:05:30.613+0000', 'id': '20170331210530QHpRU', 'jobId': 'pikachu', 'status': 'ACTIVE', 'tasks': [{'id': 'pikachu_20170331210530QHpRU.c5e4b1e7-1655-11e7-8bd5-6ef119b8e20f', 'startedAt': '2017-03-31T21:05:31.499+0000', 'status': 'TASK_RUNNING'}]}, {'completedAt': None, 'createdAt': '2017-03-31T21:05:32.422+0000', 'id': '20170331210532uxgVF', 'jobId': 'pikachu', 'status': 'ACTIVE', 'tasks': [{'id': 'pikachu_20170331210532uxgVF.c8e324d8-1655-11e7-8bd5-6ef119b8e20f', 'startedAt': '2017-03-31T21:05:36.417+0000', 'status': 'TASK_RUNNING'}]}] | def job_run_fixture():
'Job run fixture\n\n :rtype: dict\n '
return [{'completedAt': None, 'createdAt': '2017-03-31T21:05:30.613+0000', 'id': '20170331210530QHpRU', 'jobId': 'pikachu', 'status': 'ACTIVE', 'tasks': [{'id': 'pikachu_20170331210530QHpRU.c5e4b1e7-1655-11e7-8bd5-6ef119b8e20f', 'startedAt': '2017-03-31T21:05:31.499+0000', 'status': 'TASK_RUNNING'}]}, {'completedAt': None, 'createdAt': '2017-03-31T21:05:32.422+0000', 'id': '20170331210532uxgVF', 'jobId': 'pikachu', 'status': 'ACTIVE', 'tasks': [{'id': 'pikachu_20170331210532uxgVF.c8e324d8-1655-11e7-8bd5-6ef119b8e20f', 'startedAt': '2017-03-31T21:05:36.417+0000', 'status': 'TASK_RUNNING'}]}]<|docstring|>Job run fixture
:rtype: dict<|endoftext|> |
621d8df2db0a114867b27b3f4ae99a6993047d393be1a3e8bce992f5002524ac | def job_history_fixture():
'Job history fixture\n\n :rtype: dict\n '
return [{'createdAt': '2017-03-31T21:05:32.422+0000', 'finishedAt': '2017-03-31T21:05:46.805+0000', 'id': '20170331210532uxgVF'}, {'createdAt': '2017-03-31T21:05:30.613+0000', 'finishedAt': '2017-03-31T21:05:41.740+0000', 'id': '20170331210530QHpRU'}] | Job history fixture
:rtype: dict | python/lib/dcoscli/tests/fixtures/metronome.py | job_history_fixture | bamarni/dcos-core-cli | 11 | python | def job_history_fixture():
'Job history fixture\n\n :rtype: dict\n '
return [{'createdAt': '2017-03-31T21:05:32.422+0000', 'finishedAt': '2017-03-31T21:05:46.805+0000', 'id': '20170331210532uxgVF'}, {'createdAt': '2017-03-31T21:05:30.613+0000', 'finishedAt': '2017-03-31T21:05:41.740+0000', 'id': '20170331210530QHpRU'}] | def job_history_fixture():
'Job history fixture\n\n :rtype: dict\n '
return [{'createdAt': '2017-03-31T21:05:32.422+0000', 'finishedAt': '2017-03-31T21:05:46.805+0000', 'id': '20170331210532uxgVF'}, {'createdAt': '2017-03-31T21:05:30.613+0000', 'finishedAt': '2017-03-31T21:05:41.740+0000', 'id': '20170331210530QHpRU'}]<|docstring|>Job history fixture
:rtype: dict<|endoftext|> |
fd03a36d10b0e77ad36d29bc63abdca90248108b64a84c4c06f3859d505e6b22 | def job_schedule_fixture():
'Job schedule fixture\n\n :rtype: dict\n '
return [{'concurrencyPolicy': 'ALLOW', 'cron': '20 0 * * *', 'enabled': True, 'id': 'nightly', 'nextRunAt': '2017-04-01T00:20:00.000+0000', 'startingDeadlineSeconds': 900, 'timezone': 'UTC'}] | Job schedule fixture
:rtype: dict | python/lib/dcoscli/tests/fixtures/metronome.py | job_schedule_fixture | bamarni/dcos-core-cli | 11 | python | def job_schedule_fixture():
'Job schedule fixture\n\n :rtype: dict\n '
return [{'concurrencyPolicy': 'ALLOW', 'cron': '20 0 * * *', 'enabled': True, 'id': 'nightly', 'nextRunAt': '2017-04-01T00:20:00.000+0000', 'startingDeadlineSeconds': 900, 'timezone': 'UTC'}] | def job_schedule_fixture():
'Job schedule fixture\n\n :rtype: dict\n '
return [{'concurrencyPolicy': 'ALLOW', 'cron': '20 0 * * *', 'enabled': True, 'id': 'nightly', 'nextRunAt': '2017-04-01T00:20:00.000+0000', 'startingDeadlineSeconds': 900, 'timezone': 'UTC'}]<|docstring|>Job schedule fixture
:rtype: dict<|endoftext|> |
eed0f92c48e67553afcbacd3d39af86f5efd8487fbc50d8bb7b80afbb28ab5cf | def __init__(self, root: TreeNode):
"\n Maintain a dequeue of insertion candidates\n Insertion candidates are non-full nodes (superset of leaf nodes)\n BFS to get the insertion candidates\n\n During insertion, insert the node to the first insertion candidate's\n child. Then, the inserting node is the last in the candidate queue\n "
self.candidates = deque()
self.root = root
q = [root]
while q:
cur_q = []
for e in q:
if e.left:
cur_q.append(e.left)
if e.right:
cur_q.append(e.right)
if ((not e.left) or (not e.right)):
self.candidates.append(e)
q = cur_q | Maintain a dequeue of insertion candidates
Insertion candidates are non-full nodes (superset of leaf nodes)
BFS to get the insertion candidates
During insertion, insert the node to the first insertion candidate's
child. Then, the inserting node is the last in the candidate queue | 919 Complete Binary Tree Inserter.py | __init__ | scorpionpd/LeetCode-all | 872 | python | def __init__(self, root: TreeNode):
"\n Maintain a dequeue of insertion candidates\n Insertion candidates are non-full nodes (superset of leaf nodes)\n BFS to get the insertion candidates\n\n During insertion, insert the node to the first insertion candidate's\n child. Then, the inserting node is the last in the candidate queue\n "
self.candidates = deque()
self.root = root
q = [root]
while q:
cur_q = []
for e in q:
if e.left:
cur_q.append(e.left)
if e.right:
cur_q.append(e.right)
if ((not e.left) or (not e.right)):
self.candidates.append(e)
q = cur_q | def __init__(self, root: TreeNode):
"\n Maintain a dequeue of insertion candidates\n Insertion candidates are non-full nodes (superset of leaf nodes)\n BFS to get the insertion candidates\n\n During insertion, insert the node to the first insertion candidate's\n child. Then, the inserting node is the last in the candidate queue\n "
self.candidates = deque()
self.root = root
q = [root]
while q:
cur_q = []
for e in q:
if e.left:
cur_q.append(e.left)
if e.right:
cur_q.append(e.right)
if ((not e.left) or (not e.right)):
self.candidates.append(e)
q = cur_q<|docstring|>Maintain a dequeue of insertion candidates
Insertion candidates are non-full nodes (superset of leaf nodes)
BFS to get the insertion candidates
During insertion, insert the node to the first insertion candidate's
child. Then, the inserting node is the last in the candidate queue<|endoftext|> |
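The docstring above describes only the constructor's candidate-queue idea; as an illustration, here is a minimal self-contained sketch of how a full inserter built on that idea might look. The `insert` method shown here is an assumption extrapolated from the described invariant (first candidate receives the child; a filled candidate is popped; the new leaf is appended), not code from the repository.

```python
from collections import deque

class TreeNode:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

class CBTInserter:
    """Sketch: keep a deque of non-full nodes in BFS (level) order."""

    def __init__(self, root):
        self.root = root
        self.candidates = deque()
        q = [root]
        while q:
            nxt = []
            for node in q:
                if node.left:
                    nxt.append(node.left)
                if node.right:
                    nxt.append(node.right)
                if not node.left or not node.right:
                    self.candidates.append(node)  # non-full node
            q = nxt

    def insert(self, v):
        node = TreeNode(v)
        parent = self.candidates[0]       # leftmost non-full node
        if not parent.left:
            parent.left = node
        else:
            parent.right = node
            self.candidates.popleft()     # parent is now full
        self.candidates.append(node)      # new leaf is a future candidate
        return parent.val
```

Because candidates are kept in level order, each insertion lands in the leftmost open slot, preserving completeness in O(1) per insert after the O(n) BFS setup.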
c5e165339e7fece8e50c923c22f796e2a3aac8c1f977658277b26bb7718676f8 | def db_write_user_ping(user_id):
'To store the User Ping TimeStamps to DB'
query = "INSERT INTO user_ping(uuid,user_id,timestamp) VALUES('{}','{}','{}')"
try:
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(uuid.uuid1(), user_id, datetime.datetime.now()))
conn.commit()
except:
raise Exception('Unable to Insert User Ping to DB!') | To store the User Ping TimeStamps to DB | db_operations.py | db_write_user_ping | gauravarora011/discord_bot | 0 | python | def db_write_user_ping(user_id):
query = "INSERT INTO user_ping(uuid,user_id,timestamp) VALUES('{}','{}','{}')"
try:
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(uuid.uuid1(), user_id, datetime.datetime.now()))
conn.commit()
except:
raise Exception('Unable to Insert User Ping to DB!') | def db_write_user_ping(user_id):
query = "INSERT INTO user_ping(uuid,user_id,timestamp) VALUES('{}','{}','{}')"
try:
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(uuid.uuid1(), user_id, datetime.datetime.now()))
conn.commit()
except:
raise Exception('Unable to Insert User Ping to DB!')<|docstring|>To store the User Ping TimeStamps to DB<|endoftext|> |
185bfb9341e76e58dabed781c5dedd8ef0c8e52350a95a0c3d9f80f540feb153 | def db_write_user_search(user_id, search_string):
' To write the USER Search History in DB'
query = "INSERT INTO search_history(uuid,user_id,search_string,timestamp) VALUES('{}','{}','{}','{}')"
try:
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(uuid.uuid1(), user_id, search_string, datetime.datetime.now()))
conn.commit()
except:
raise Exception('Unable to Insert User Search History to DB!') | To write the USER Search History in DB | db_operations.py | db_write_user_search | gauravarora011/discord_bot | 0 | python | def db_write_user_search(user_id, search_string):
' '
query = "INSERT INTO search_history(uuid,user_id,search_string,timestamp) VALUES('{}','{}','{}','{}')"
try:
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(uuid.uuid1(), user_id, search_string, datetime.datetime.now()))
conn.commit()
except:
raise Exception('Unable to Insert User Search History to DB!') | def db_write_user_search(user_id, search_string):
' '
query = "INSERT INTO search_history(uuid,user_id,search_string,timestamp) VALUES('{}','{}','{}','{}')"
try:
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(uuid.uuid1(), user_id, search_string, datetime.datetime.now()))
conn.commit()
except:
raise Exception('Unable to Insert User Search History to DB!')<|docstring|>To write the USER Search History in DB<|endoftext|> |
db308345fa7fdee70e836d1f804311fc50439539a1b27726e88f1fb056faaccd | def db_read_user_history(user_id, search_like):
' To read the USER Search History in DB'
query = 'SELECT search_string FROM search_history where user_id = {} and ({})'
try:
result_str = 'Here are your recent related searches : \n\n'
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(user_id, ' OR '.join([f"LOWER(search_string) LIKE '%{y}%'" for y in search_like.lower().split()])))
results = cur.fetchall()
result_str = (result_str + '\n'.join([row[0].strip() for row in results]))
return result_str
except:
raise Exception('Unable to Read User Search History to DB!') | To read the USER Search History in DB | db_operations.py | db_read_user_history | gauravarora011/discord_bot | 0 | python | def db_read_user_history(user_id, search_like):
' '
query = 'SELECT search_string FROM search_history where user_id = {} and ({})'
try:
result_str = 'Here are your recent related searches : \n\n'
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(user_id, ' OR '.join([f"LOWER(search_string) LIKE '%{y}%'" for y in search_like.lower().split()])))
results = cur.fetchall()
result_str = (result_str + '\n'.join([row[0].strip() for row in results]))
return result_str
except:
raise Exception('Unable to Read User Search History to DB!') | def db_read_user_history(user_id, search_like):
' '
query = 'SELECT search_string FROM search_history where user_id = {} and ({})'
try:
result_str = 'Here are your recent related searches : \n\n'
conn = DBStore.getInstance()
cur = conn.cursor()
cur.execute(query.format(user_id, ' OR '.join([f"LOWER(search_string) LIKE '%{y}%'" for y in search_like.lower().split()])))
results = cur.fetchall()
result_str = (result_str + '\n'.join([row[0].strip() for row in results]))
return result_str
except:
raise Exception('Unable to Read User Search History to DB!')<|docstring|>To read the USER Search History in DB<|endoftext|> |
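The three `db_operations` rows above interpolate user input into SQL with `str.format`, which is vulnerable to SQL injection. A hedged alternative sketch, using the stdlib `sqlite3` driver purely for illustration (the original repository's connection type and table schema are assumptions here), builds the same queries with placeholder parameters instead:

```python
import sqlite3
import uuid
import datetime

def write_user_search(conn, user_id, search_string):
    # Placeholders (?) let the driver escape values, unlike str.format
    conn.execute(
        "INSERT INTO search_history(uuid, user_id, search_string, timestamp) "
        "VALUES (?, ?, ?, ?)",
        (str(uuid.uuid1()), user_id, search_string,
         datetime.datetime.now().isoformat()),
    )
    conn.commit()

def read_user_history(conn, user_id, search_like):
    # One LIKE clause per search word, still fully parameterized
    words = search_like.lower().split()
    clauses = " OR ".join("LOWER(search_string) LIKE ?" for _ in words)
    params = [user_id] + [f"%{w}%" for w in words]
    cur = conn.execute(
        f"SELECT search_string FROM search_history "
        f"WHERE user_id = ? AND ({clauses})",
        params,
    )
    return [row[0] for row in cur.fetchall()]
```

Only the clause skeleton is built with string joining; every user-supplied value travels through the parameter tuple, so the word-splitting `LIKE` search from `db_read_user_history` keeps its behavior without the injection risk.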
df7c66f245d955f32813e757dced03a1b73850eba34d87fb71373b8dcc6f66a7 | def __init__(self, n_neurons, eps=1e-05):
'\n Initializes CustomBatchNormAutograd object.\n \n Args:\n n_neurons: int specifying the number of neurons\n eps: small float to be added to the variance for stability\n\n '
super(CustomBatchNormAutograd, self).__init__()
self.n_neurons = n_neurons
self.eps = eps
self.beta = nn.Parameter(torch.zeros(self.n_neurons))
self.gamma = nn.Parameter(torch.ones(self.n_neurons)) | Initializes CustomBatchNormAutograd object.
Args:
n_neurons: int specifying the number of neurons
eps: small float to be added to the variance for stability | assignment_1/custom_batchnorm.py | __init__ | RaymondKoopmanschap/DL_assignment_code | 0 | python | def __init__(self, n_neurons, eps=1e-05):
'\n Initializes CustomBatchNormAutograd object.\n \n Args:\n n_neurons: int specifying the number of neurons\n eps: small float to be added to the variance for stability\n\n '
super(CustomBatchNormAutograd, self).__init__()
self.n_neurons = n_neurons
self.eps = eps
self.beta = nn.Parameter(torch.zeros(self.n_neurons))
self.gamma = nn.Parameter(torch.ones(self.n_neurons)) | def __init__(self, n_neurons, eps=1e-05):
'\n Initializes CustomBatchNormAutograd object.\n \n Args:\n n_neurons: int specifying the number of neurons\n eps: small float to be added to the variance for stability\n\n '
super(CustomBatchNormAutograd, self).__init__()
self.n_neurons = n_neurons
self.eps = eps
self.beta = nn.Parameter(torch.zeros(self.n_neurons))
self.gamma = nn.Parameter(torch.ones(self.n_neurons))<|docstring|>Initializes CustomBatchNormAutograd object.
Args:
n_neurons: int specifying the number of neurons
eps: small float to be added to the variance for stability<|endoftext|> |
fe6ba8a416647d3ccff3bcb05be8faefdb012dca3b5f8bff8b3952a8f567510d | def forward(self, input):
'\n Compute the batch normalization\n\n Args:\n input: input tensor of shape (n_batch, n_neurons)\n Returns:\n out: batch-normalized tensor\n\n '
batch_size = input.shape[0]
assert (input.shape[1] == self.n_neurons), 'Input not in the correct shape'
mean = ((1 / batch_size) * torch.sum(input, dim=0))
var = input.var(dim=0, unbiased=False)
norm = ((input - mean) / torch.sqrt((var + self.eps)))
out = ((self.gamma * norm) + self.beta)
return out | Compute the batch normalization
Args:
input: input tensor of shape (n_batch, n_neurons)
Returns:
out: batch-normalized tensor | assignment_1/custom_batchnorm.py | forward | RaymondKoopmanschap/DL_assignment_code | 0 | python | def forward(self, input):
'\n Compute the batch normalization\n\n Args:\n input: input tensor of shape (n_batch, n_neurons)\n Returns:\n out: batch-normalized tensor\n\n '
batch_size = input.shape[0]
assert (input.shape[1] == self.n_neurons), 'Input not in the correct shape'
mean = ((1 / batch_size) * torch.sum(input, dim=0))
var = input.var(dim=0, unbiased=False)
norm = ((input - mean) / torch.sqrt((var + self.eps)))
out = ((self.gamma * norm) + self.beta)
return out | def forward(self, input):
'\n Compute the batch normalization\n\n Args:\n input: input tensor of shape (n_batch, n_neurons)\n Returns:\n out: batch-normalized tensor\n\n '
batch_size = input.shape[0]
assert (input.shape[1] == self.n_neurons), 'Input not in the correct shape'
mean = ((1 / batch_size) * torch.sum(input, dim=0))
var = input.var(dim=0, unbiased=False)
norm = ((input - mean) / torch.sqrt((var + self.eps)))
out = ((self.gamma * norm) + self.beta)
return out<|docstring|>Compute the batch normalization
Args:
input: input tensor of shape (n_batch, n_neurons)
Returns:
out: batch-normalized tensor<|endoftext|> |
body_hash: 23ffb5d6ab0cb90c075b4def6faa5028642a60fb8e99c3a0964cd0618a397ca6
path: assignment_1/custom_batchnorm.py | name: forward | repository: RaymondKoopmanschap/DL_assignment_code | stars: 0 | lang: python

    @staticmethod
    def forward(ctx, input, gamma, beta, eps=1e-05):
        """
        Compute the batch normalization.

        Args:
            ctx: context object handling storing and retrieval of tensors and
                constants, and specifying whether tensors need gradients in the
                backward pass
            input: input tensor of shape (n_batch, n_neurons)
            gamma: variance scaling tensor, applied per neuron, shape (n_neurons)
            beta: mean bias tensor, applied per neuron, shape (n_neurons)
            eps: small float added to the variance for stability
        Returns:
            out: batch-normalized tensor
        """
        batch_size = input.shape[0]
        mean = (1 / batch_size) * torch.sum(input, dim=0)
        var = input.var(dim=0, unbiased=False)
        norm = (input - mean) / torch.sqrt(var + eps)
        out = gamma * norm + beta
        ctx.save_for_backward(norm, gamma, var)
        ctx.eps = eps
        return out
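The forward pass above is per-neuron standardization (biased batch variance) followed by an affine map. A minimal NumPy sketch of the same computation (my own function and variable names, not code from the repository) shows its defining property: with gamma = 1 and beta = 0, each column comes out zero-mean and unit-variance up to eps.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Per-neuron batch mean and biased (population) variance, as in the original.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    norm = (x - mean) / np.sqrt(var + eps)   # standardized activations
    out = gamma * norm + beta                # per-neuron scale and shift
    return out, norm, var

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))                  # (n_batch, n_neurons)
out, norm, var = batchnorm_forward(x, np.ones(3), np.zeros(3))
# Each column of norm is zero-mean and (nearly) unit-variance.
```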
body_hash: 17c5607e067a84fcf07641d1af9db8b4ba3d346b14d48be0c2d42727e61d5ff7
path: assignment_1/custom_batchnorm.py | name: backward | repository: RaymondKoopmanschap/DL_assignment_code | stars: 0 | lang: python

    @staticmethod
    def backward(ctx, grad_output):
        """
        Compute the backward pass of the batch normalization.

        Args:
            ctx: context object handling storing and retrieval of tensors and
                constants, and specifying whether tensors need gradients in the
                backward pass
            grad_output: gradient of the loss with respect to the forward output
        Returns:
            out: tuple containing gradients for all input arguments
        """
        (normalized, gamma, var) = ctx.saved_tensors
        eps = ctx.eps
        B = grad_output.shape[0]
        grad_gamma = (grad_output * normalized).sum(0)
        grad_beta = torch.sum(grad_output, dim=0)
        grad_input = torch.div(gamma, B * torch.sqrt(var + eps)) * (
            B * grad_output - grad_beta - grad_gamma * normalized
        )
        return (grad_input, grad_gamma, grad_beta, None)
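The closed-form gradient in `backward` can be sanity-checked numerically. The sketch below (a NumPy re-implementation under my own names, not the repository's code) applies the same formula and compares `dx` against a central finite difference of the scalar loss sum(grad_output * out), which is what the analytic grad_input should match.

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    var = x.var(axis=0)                       # biased variance, as in the original
    norm = (x - x.mean(axis=0)) / np.sqrt(var + eps)
    return gamma * norm + beta, norm, var

def bn_backward(dout, norm, gamma, var, eps=1e-5):
    # Same closed form as the torch code above, written in NumPy.
    B = dout.shape[0]
    dgamma = (dout * norm).sum(axis=0)
    dbeta = dout.sum(axis=0)
    dx = gamma / (B * np.sqrt(var + eps)) * (B * dout - dbeta - dgamma * norm)
    return dx, dgamma, dbeta

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 4))
gamma, beta = rng.normal(size=4), rng.normal(size=4)
dout = rng.normal(size=(6, 4))               # upstream gradient

_, norm, var = bn_forward(x, gamma, beta)
dx, dgamma, dbeta = bn_backward(dout, norm, gamma, var)

def loss(x_):
    out, _, _ = bn_forward(x_, gamma, beta)
    return (dout * out).sum()

# Central finite-difference estimate of d(loss)/dx, entry by entry.
num = np.zeros_like(x)
h = 1e-5
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += h
        xm[i, j] -= h
        num[i, j] = (loss(xp) - loss(xm)) / (2 * h)
```

This is the NumPy analogue of running `torch.autograd.gradcheck` on the custom Function; agreement of `dx` with `num` confirms the analytic formula.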