hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5e01beeb6b1ee589e4f759a1186d50a20e1ce5d5 | 367 | py | Python | bawsvis/readers/__init__.py | JohannesSMHI/BAWS-vis | 5563c2bcd7ba057d6ebac150d681aa937efc4835 | [
"MIT"
] | null | null | null | bawsvis/readers/__init__.py | JohannesSMHI/BAWS-vis | 5563c2bcd7ba057d6ebac150d681aa937efc4835 | [
"MIT"
] | null | null | null | bawsvis/readers/__init__.py | JohannesSMHI/BAWS-vis | 5563c2bcd7ba057d6ebac150d681aa937efc4835 | [
"MIT"
] | null | null | null | # Copyright (c) 2020 SMHI, Swedish Meteorological and Hydrological Institute
# License: MIT License (see LICENSE.txt or http://opensource.org/licenses/mit).
from bawsvis.readers.raster import raster_reader
from bawsvis.readers.text import np_txt_reader
from bawsvis.readers.shape import shape_reader
from bawsvis.readers.dictionary import json_reader, pandas_reader
| 45.875 | 79 | 0.833787 | 52 | 367 | 5.769231 | 0.576923 | 0.146667 | 0.24 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012085 | 0.098093 | 367 | 7 | 80 | 52.428571 | 0.89426 | 0.414169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e03345d24807610a367b504e4628968e3ff4a74 | 145 | py | Python | LineageTracker/ProcessData/__init__.py | Shuhua-Group/Y-LineageTracker | 82b14c74be95ef2d4d929ce20bf7436869f163ea | [
"MIT"
] | 2 | 2021-06-11T09:34:52.000Z | 2022-03-08T07:19:05.000Z | LineageTracker/ProcessData/__init__.py | Shuhua-Group/Y-LineageTracker | 82b14c74be95ef2d4d929ce20bf7436869f163ea | [
"MIT"
] | 2 | 2022-01-11T11:19:20.000Z | 2022-03-26T09:01:33.000Z | LineageTracker/Test/__init__.py | Shuhua-Group/Y-LineageTracker | 82b14c74be95ef2d4d929ce20bf7436869f163ea | [
"MIT"
] | 1 | 2022-03-26T09:05:30.000Z | 2022-03-26T09:05:30.000Z | import os
import sys
current_dir = os.path.dirname(os.path.realpath(__file__))
sys.path.append(current_dir)
sys.path.append(os.path.join(current_dir, '..'))
| 20.714286 | 58 | 0.758621 | 24 | 145 | 4.291667 | 0.458333 | 0.291262 | 0.252427 | 0.38835 | 0.446602 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007407 | 0.068966 | 145 | 6 | 59 | 24.166667 | 0.755556 | 0 | 0 | 0 | 0 | 0 | 0.02069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5e2e76ac03961b9b32678fe18e2c767f2a57af31 | 139 | py | Python | tweet/admin.py | Sondosissa18/twitterclone-project | 570709c56f93ebf0ebd7daec06bd7a42a47b81a5 | [
"MIT"
] | null | null | null | tweet/admin.py | Sondosissa18/twitterclone-project | 570709c56f93ebf0ebd7daec06bd7a42a47b81a5 | [
"MIT"
] | null | null | null | tweet/admin.py | Sondosissa18/twitterclone-project | 570709c56f93ebf0ebd7daec06bd7a42a47b81a5 | [
"MIT"
] | null | null | null | from django.contrib import admin
# from django.contrib.auth.admin import UserAdmin
from .models import Tweet
admin.site.register(Tweet)
| 17.375 | 49 | 0.805755 | 20 | 139 | 5.6 | 0.55 | 0.178571 | 0.303571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.122302 | 139 | 7 | 50 | 19.857143 | 0.918033 | 0.33813 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e376287c5c7b3a0b22fd33ca7df0940999d44b6 | 45 | py | Python | lib/__init__.py | kawatapw/kuriso | a24baa12ccfdaab7cd0772985a1ada5a6a09f0d7 | [
"MIT"
] | 6 | 2021-03-07T20:14:29.000Z | 2022-03-10T20:28:20.000Z | lib/__init__.py | kawatapw/kuriso | a24baa12ccfdaab7cd0772985a1ada5a6a09f0d7 | [
"MIT"
] | 3 | 2021-04-20T17:18:58.000Z | 2022-03-28T18:17:35.000Z | lib/__init__.py | kawatapw/kuriso | a24baa12ccfdaab7cd0772985a1ada5a6a09f0d7 | [
"MIT"
] | 4 | 2021-03-30T12:55:07.000Z | 2022-03-10T09:01:16.000Z | from .async_mysql import AsyncSQLPoolWrapper
| 22.5 | 44 | 0.888889 | 5 | 45 | 7.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5e3c42dfa1fdd86441892cfe94e1f5d4962f045a | 102 | py | Python | terrascript/icinga2/__init__.py | amlodzianowski/python-terrascript | 1111affe6cd30d9b8b7bc74ae4e27590f7d4dc49 | [
"BSD-2-Clause"
] | null | null | null | terrascript/icinga2/__init__.py | amlodzianowski/python-terrascript | 1111affe6cd30d9b8b7bc74ae4e27590f7d4dc49 | [
"BSD-2-Clause"
] | null | null | null | terrascript/icinga2/__init__.py | amlodzianowski/python-terrascript | 1111affe6cd30d9b8b7bc74ae4e27590f7d4dc49 | [
"BSD-2-Clause"
] | null | null | null | # terrascript/icinga2/__init__.py
import terrascript
class icinga2(terrascript.Provider):
pass
| 12.75 | 36 | 0.784314 | 11 | 102 | 6.909091 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 0.137255 | 102 | 7 | 37 | 14.571429 | 0.840909 | 0.303922 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5e3c70778e76b846ff7c95075112a4d614ce6aa2 | 1,412 | py | Python | pkg/tests/test_col_name_issues.py | arita37/pyvtreat | c32e7ce6db11a2ccdd63e545b25028cbec03a3ff | [
"BSD-3-Clause"
] | 1 | 2020-08-16T12:07:56.000Z | 2020-08-16T12:07:56.000Z | pkg/tests/test_col_name_issues.py | arita37/pyvtreat | c32e7ce6db11a2ccdd63e545b25028cbec03a3ff | [
"BSD-3-Clause"
] | null | null | null | pkg/tests/test_col_name_issues.py | arita37/pyvtreat | c32e7ce6db11a2ccdd63e545b25028cbec03a3ff | [
"BSD-3-Clause"
] | null | null | null |
import pytest
import pandas
import vtreat
def test_col_dups_1():
d = pandas.DataFrame({'x': [1], 'x2': [2], 'y': [3]})
d.columns = ['x', 'x', 'y']
transform = vtreat.UnsupervisedTreatment(
var_list=['x'],
cols_to_copy=['y']
)
with pytest.raises(ValueError):
transform.fit_transform(d, d["y"])
def test_xgboost_col_name_issue_1():
# https://stackoverflow.com/questions/48645846/pythons-xgoost-valueerrorfeature-names-may-not-contain-or
# ValueError('feature_names may not contain [, ] or <')
    d = pandas.DataFrame({'x': ['[', ']', '<', '>']})
transform = vtreat.UnsupervisedTreatment(
var_list=['x']
)
d_transformed = transform.fit_transform(d, None)
cols = d_transformed.columns
for col in cols:
assert not any(c in col for c in "[]<>")
assert len(set(cols)) == len(cols)
def test_xgboost_col_name_issue_2():
# https://stackoverflow.com/questions/48645846/pythons-xgoost-valueerrorfeature-names-may-not-contain-or
# ValueError('feature_names may not contain [, ] or <')
    d = pandas.DataFrame({'x': ['[', ']', '<', '_lt_']})
transform = vtreat.UnsupervisedTreatment(
var_list=['x']
)
d_transformed = transform.fit_transform(d, None)
cols = d_transformed.columns
for col in cols:
assert not any(c in col for c in "[]<>")
assert len(set(cols)) == len(cols)
| 30.042553 | 108 | 0.628187 | 180 | 1,412 | 4.766667 | 0.305556 | 0.037296 | 0.051282 | 0.083916 | 0.825175 | 0.825175 | 0.713287 | 0.713287 | 0.713287 | 0.713287 | 0 | 0.020481 | 0.204674 | 1,412 | 46 | 109 | 30.695652 | 0.743544 | 0.221671 | 0 | 0.46875 | 0 | 0 | 0.030192 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.09375 | false | 0 | 0.09375 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
eacf7469b118735bbbd2d5f34f2e196fe09e2c6c | 4,210 | py | Python | djsani/medical_history/waivers/models.py | carthage-college/django-djsani | e344fd636695682a0765fc5d5c2137fe3e6b35d1 | [
"BSD-3-Clause"
] | 1 | 2017-04-22T11:08:41.000Z | 2017-04-22T11:08:41.000Z | djsani/medical_history/waivers/models.py | carthage-college/django-djsani | e344fd636695682a0765fc5d5c2137fe3e6b35d1 | [
"BSD-3-Clause"
] | 10 | 2020-06-11T14:22:47.000Z | 2021-07-08T14:05:27.000Z | djsani/medical_history/waivers/models.py | carthage-college/django-djsani | e344fd636695682a0765fc5d5c2137fe3e6b35d1 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""Data models."""
from django.db import models
from djsani.core.models import FILE_VALIDATORS
from djsani.core.models import StudentMedicalManager
from djtools.fields.helpers import upload_to_path
class Sicklecell(models.Model):
"""Sicklecell waiver."""
# core
college_id = models.IntegerField()
manager = models.ForeignKey(
StudentMedicalManager, on_delete=models.CASCADE,
)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
# waiver fields
waive = models.BooleanField()
proof = models.BooleanField()
results = models.CharField(max_length=64)
results_file = models.FileField(
upload_to=upload_to_path,
validators=FILE_VALIDATORS,
max_length=128,
null=True,
blank=True,
)
results_file_status = models.BooleanField()
class Meta:
"""Attributes about the data model and admin options."""
db_table = 'cc_athlete_sicklecell_waiver'
def __repr__(self):
"""Default value for this object."""
return str(self.college_id)
def current(self, day):
"""Determine if this is the current waiver for academic year."""
return self.created_at > day
class Meni(models.Model):
"""Meningitis and Hepatitis B waiver."""
# core
college_id = models.IntegerField()
manager = models.ForeignKey(
StudentMedicalManager, on_delete=models.CASCADE,
)
created_at = models.DateTimeField(auto_now_add=True)
# waiver fields
agree = models.BooleanField()
class Meta:
"""Attributes about the data model and admin options."""
db_table = 'cc_student_meni_waiver'
def __repr__(self):
"""Default value for this object."""
return str(self.college_id)
def current(self, day):
"""Determine if this is the current waiver for academic year."""
return self.created_at > day
class Risk(models.Model):
"""Assumption of Risk Waiver."""
# core
college_id = models.IntegerField()
manager = models.ForeignKey(
StudentMedicalManager, on_delete=models.CASCADE,
)
created_at = models.DateTimeField(auto_now_add=True)
# waiver fields
agree = models.BooleanField()
class Meta:
"""Attributes about the data model and admin options."""
db_table = 'cc_athlete_risk_waiver'
def __repr__(self):
"""Default data for display."""
return str(self.college_id)
def current(self, day):
"""Determine if this is the current waiver for academic year."""
return self.created_at > day
class Reporting(models.Model):
"""CCIW Injury and Illness Reporting Acknowledgement."""
# core
college_id = models.IntegerField()
manager = models.ForeignKey(
StudentMedicalManager, on_delete=models.CASCADE,
)
created_at = models.DateTimeField(auto_now_add=True)
# waiver fields
agree = models.BooleanField()
class Meta:
"""Attributes about the data model and admin options."""
db_table = 'cc_athlete_reporting_waiver'
def __repr__(self):
"""Default data for display."""
return str(self.college_id)
def current(self, day):
"""Determine if this is the current waiver for academic year."""
return self.created_at > day
class Privacy(models.Model):
"""Privacy waiver."""
# core
college_id = models.IntegerField()
manager = models.ForeignKey(
StudentMedicalManager, on_delete=models.CASCADE,
)
created_at = models.DateTimeField(auto_now_add=True)
# waiver fields
news_media = models.BooleanField() # not required
parents_guardians = models.BooleanField() # not required
disclose_records = models.BooleanField()
class Meta:
"""Attributes about the data model and admin options."""
db_table = 'cc_athlete_privacy_waiver'
def __repr__(self):
"""Default data for display."""
return str(self.college_id)
def current(self, day):
"""Determine if this is the current waiver for academic year."""
return self.created_at > day
| 27.697368 | 72 | 0.665558 | 490 | 4,210 | 5.530612 | 0.212245 | 0.03321 | 0.046494 | 0.055351 | 0.75203 | 0.722509 | 0.722509 | 0.722509 | 0.722509 | 0.722509 | 0 | 0.001862 | 0.234679 | 4,210 | 151 | 73 | 27.880795 | 0.83923 | 0.236105 | 0 | 0.585366 | 0 | 0 | 0.040026 | 0.040026 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.04878 | 0 | 0.743902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
eae70afada43c8d67050757aef3da1ef9fa677e7 | 33 | py | Python | data/__all_models.py | Shubarin/psybot | 9766af45b03ab5fb7e53a7c62caee37be4758d68 | [
"BSD-3-Clause"
] | 1 | 2021-05-09T05:10:11.000Z | 2021-05-09T05:10:11.000Z | data/__all_models.py | Shubarin/psybot | 9766af45b03ab5fb7e53a7c62caee37be4758d68 | [
"BSD-3-Clause"
] | null | null | null | data/__all_models.py | Shubarin/psybot | 9766af45b03ab5fb7e53a7c62caee37be4758d68 | [
"BSD-3-Clause"
] | null | null | null | from . import posts, tags, users
| 16.5 | 32 | 0.727273 | 5 | 33 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 33 | 1 | 33 | 33 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
eaeee124c6c09abbccce7946252157a61ff42eb4 | 64 | py | Python | tests/one/__init__.py | misspellted/aoc-two-zero-two-zero | a0c560b309ff960ec3e0b609b21c89fe54b220cd | [
"Unlicense"
] | null | null | null | tests/one/__init__.py | misspellted/aoc-two-zero-two-zero | a0c560b309ff960ec3e0b609b21c89fe54b220cd | [
"Unlicense"
] | null | null | null | tests/one/__init__.py | misspellted/aoc-two-zero-two-zero | a0c560b309ff960ec3e0b609b21c89fe54b220cd | [
"Unlicense"
] | null | null | null |
# from .one import *
from .two import *
from .three import *
| 9.142857 | 20 | 0.640625 | 9 | 64 | 4.555556 | 0.555556 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 64 | 6 | 21 | 10.666667 | 0.854167 | 0.28125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc30925d38d97f30b9a4542a5c5b59c505cb0b8b | 2,897 | py | Python | metrica/metrics_map.py | m-zakeri/TsDD | bcaba955a7eb89865d11591fb812dcb768fd33da | [
"Apache-2.0"
] | null | null | null | metrica/metrics_map.py | m-zakeri/TsDD | bcaba955a7eb89865d11591fb812dcb768fd33da | [
"Apache-2.0"
] | null | null | null | metrica/metrics_map.py | m-zakeri/TsDD | bcaba955a7eb89865d11591fb812dcb768fd33da | [
"Apache-2.0"
] | null | null | null | """
This module provide the dictionary to map full metrics name
to their abbreviation names for different quality attributes
"""
__version__ = '0.1.0'
__author__ = 'Morteza'
# Reference:
reusability_metrics = {
# 1- Cohesion
'Lack of Cohesion in Methods 5': ('LCOM5', 0.87),
# 2- Complexity
'Nesting Level': ('NL', 0.79),
'Nesting Level Else-If': ('NLE', 0.88),
'Weighted Methods per Class': ('WMC', 0.59),
# 3- Coupling
'Coupling Between Object classes': ('CBO', 0.15),
'Coupling Between Object classes Inverse': ('CBOI', 0.58),
'Number of Incoming Invocations': ('NII', 0.93),
'Number of Outgoing Invocations': ('NOI', 0.88),
'Response set For Class': ('RFC', 0.73),
# 4- Documentation
'API Documentation': ('AD', 0.38),
'Comment Density': ('CD', 0.81),
'Total Comment Density': ('TCD', 0.84),
'Comment Lines of Code': ('CLOC', 0.38),
'Total Comment Lines of Code': ('TCLOC', 0.36),
'Documentation Lines of Code': ('DLOC', 0.28),
'Total Documentation Lines of Code': ('PDA', 0.52),
# 5- Inheritance
'Depth of Inheritance Tree': ('DIT', 0.75),
# 6- Size
'Lines of Code': ('LOC', 0.6),
'Logical Lines of Code': ('LLOC', 0.73),
'Total Logical Lines of Code': ('TLLOC', 0.68),
'Total Lines of Code': ('TLOC', 0.68), # Not set in reference paper
'Total Number of Attributes': ('TNA', 0.90),
'Number of Getters': ('NG', 0.79),
'Total Number of Getters': ('TNG', 0.82),
'Total Number of Methods': ('TNM', 0.63),
'Total Number of Statements': ('TNOS', 0.78),
'Total Number of Public Methods': ('TNPM', 0.77),
}
testability_metrics = {
# 1- Cohesion
'Lack of Cohesion in Methods 5': ('LCOM5', 0.87),
# 2- Complexity
'Nesting Level': ('NL', 0.79),
'Nesting Level Else-If': ('NLE', 0.88),
'Weighted Methods per Class': ('WMC', 0.59),
# 3- Coupling
'Coupling Between Object classes': ('CBO', 0.15),
'Coupling Between Object classes Inverse': ('CBOI', 0.58),
'Number of Incoming Invocations': ('NII', 0.93),
'Number of Outgoing Invocations': ('NOI', 0.88),
'Response set For Class': ('RFC', 0.73),
# 4- Documentation
# Documentation metrics are not used in testability measurement
# 5- Inheritance
'Depth of Inheritance Tree': ('DIT', 0.75),
# 6- Size
'Lines of Code': ('LOC', 0.6),
'Logical Lines of Code': ('LLOC', 0.73),
'Total Logical Lines of Code': ('TLLOC', 0.68),
'Total Lines of Code': ('TLOC', 0.68),
'Total Number of Attributes': ('TNA', 0.90),
'Number of Getters': ('NG', 0.79),
'Total Number of Getters': ('TNG', 0.82),
'Total Number of Methods': ('TNM', 0.63),
'Total Number of Statements': ('TNOS', 0.78),
'Total Number of Public Methods': ('TNPM', 0.77),
}
# Test driver
# print(reusability_metrics['Total Number of Public Methods'][1])
| 31.48913 | 72 | 0.599586 | 405 | 2,897 | 4.261728 | 0.308642 | 0.078795 | 0.076477 | 0.06489 | 0.736964 | 0.7219 | 0.7219 | 0.7219 | 0.7219 | 0.7219 | 0 | 0.070137 | 0.217466 | 2,897 | 91 | 73 | 31.835165 | 0.691222 | 0.156369 | 0 | 0.754717 | 0 | 0 | 0.542373 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dc4194e5edf2060c576df1aaad8869f3b45551fc | 31 | py | Python | semversioner/__init__.py | mvanbaak/semversioner | d6b9ec6a204405f0eb877dc98c7b4d0e7052fa5d | [
"MIT"
] | 9 | 2020-10-11T09:57:52.000Z | 2021-12-21T10:10:23.000Z | semversioner/__init__.py | mvanbaak/semversioner | d6b9ec6a204405f0eb877dc98c7b4d0e7052fa5d | [
"MIT"
] | 6 | 2020-11-12T16:36:41.000Z | 2021-11-04T07:35:07.000Z | semversioner/__init__.py | mvanbaak/semversioner | d6b9ec6a204405f0eb877dc98c7b4d0e7052fa5d | [
"MIT"
] | 2 | 2020-11-12T16:37:04.000Z | 2020-12-24T22:53:05.000Z | from semversioner.cli import *
| 15.5 | 30 | 0.806452 | 4 | 31 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc52ad1e8edd06f3acda024cfc6114c1eaa81c48 | 86 | py | Python | code-stubs/python-app/module/tests/test_main.py | mazgi/dockerfiles | bb178ebd9d77b341421e4625e4c332e3bb1c3174 | [
"MIT"
] | null | null | null | code-stubs/python-app/module/tests/test_main.py | mazgi/dockerfiles | bb178ebd9d77b341421e4625e4c332e3bb1c3174 | [
"MIT"
] | null | null | null | code-stubs/python-app/module/tests/test_main.py | mazgi/dockerfiles | bb178ebd9d77b341421e4625e4c332e3bb1c3174 | [
"MIT"
] | null | null | null | import pytest
from ..main import main
def test_main():
assert main() == 'Hello'
| 12.285714 | 28 | 0.662791 | 12 | 86 | 4.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.209302 | 86 | 6 | 29 | 14.333333 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0.05814 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | true | 0 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dc7cc4b877861612a34738bda9d735679cb83c23 | 14,184 | py | Python | slack/tests/test_events.py | drewp/slack-sansio | 3c578bd087073174b1ec31b9a610e889d1fa0449 | [
"MIT"
] | 39 | 2017-08-19T16:58:15.000Z | 2022-03-22T01:00:03.000Z | slack/tests/test_events.py | drewp/slack-sansio | 3c578bd087073174b1ec31b9a610e889d1fa0449 | [
"MIT"
] | 32 | 2017-08-24T18:14:32.000Z | 2019-07-25T16:57:55.000Z | slack/tests/test_events.py | drewp/slack-sansio | 3c578bd087073174b1ec31b9a610e889d1fa0449 | [
"MIT"
] | 10 | 2017-08-09T15:56:56.000Z | 2019-10-31T06:24:46.000Z | import re
import pytest
import slack
from slack.events import Event
from . import data
class TestEvents:
def test_clone_event(self, slack_event):
ev = Event.from_http(slack_event)
clone = ev.clone()
assert clone == ev
def test_modify_clone(self, slack_event):
ev = Event.from_http(slack_event)
clone = ev.clone()
clone["text"] = "aaaaa"
assert clone != ev
def test_parsing(self, slack_event):
http_event = slack.events.Event.from_http(slack_event)
rtm_event = slack.events.Event.from_rtm(slack_event["event"])
assert isinstance(http_event, slack.events.Event)
assert isinstance(rtm_event, slack.events.Event)
assert http_event.event == rtm_event.event == slack_event["event"]
assert rtm_event.metadata is None
assert http_event.metadata == {
"token": "supersecuretoken",
"team_id": "T000AAA0A",
"api_app_id": "A0AAAAAAA",
"event": http_event.event,
"type": "event_callback",
"authed_teams": ["T000AAA0A"],
"event_id": "AAAAAAA",
"event_time": 123456789,
}
def test_parsing_token(self, slack_event):
slack.events.Event.from_http(slack_event, verification_token="supersecuretoken")
def test_parsing_team_id(self, slack_event):
slack.events.Event.from_http(slack_event, team_id="T000AAA0A")
def test_parsing_wrong_token(self, slack_event):
with pytest.raises(slack.exceptions.FailedVerification):
slack.events.Event.from_http(slack_event, verification_token="xxx")
def test_parsing_wrong_team_id(self, slack_event):
with pytest.raises(slack.exceptions.FailedVerification):
slack.events.Event.from_http(slack_event, team_id="xxx")
@pytest.mark.parametrize(
"slack_event",
[
"pin_added",
"reaction_added",
"simple",
"snippet",
"shared",
"threaded",
"bot",
"attachment",
],
indirect=True,
)
def test_mapping_access(self, slack_event):
ev = Event.from_http(slack_event)
assert ev["user"] == "U000AA000"
@pytest.mark.parametrize(
"slack_event",
[
"pin_added",
"reaction_added",
"simple",
"snippet",
"shared",
"threaded",
"bot",
"attachment",
],
indirect=True,
)
def test_mapping_delete(self, slack_event):
ev = Event.from_http(slack_event)
assert ev["user"] == "U000AA000"
del ev["user"]
with pytest.raises(KeyError):
print(ev["user"])
@pytest.mark.parametrize(
"slack_event",
[
"pin_added",
"reaction_added",
"simple",
"snippet",
"shared",
"threaded",
"bot",
"attachment",
],
indirect=True,
)
def test_mapping_set(self, slack_event):
ev = Event.from_http(slack_event)
assert ev["user"] == "U000AA000"
ev["user"] = "foo"
assert ev["user"] == "foo"
class TestMessage:
@pytest.mark.parametrize(
"slack_event", {**data.Messages.__members__}, indirect=True
)
def test_parsing(self, slack_event):
http_event = slack.events.Event.from_http(slack_event)
rtm_event = slack.events.Event.from_rtm(slack_event["event"])
assert isinstance(http_event, slack.events.Message)
assert isinstance(rtm_event, slack.events.Message)
assert http_event.event == rtm_event.event
assert rtm_event.metadata is None
assert http_event.metadata == {
"token": "supersecuretoken",
"team_id": "T000AAA0A",
"api_app_id": "A0AAAAAAA",
"event": http_event.event,
"type": "event_callback",
"authed_teams": ["T000AAA0A"],
"event_id": "AAAAAAA",
"event_time": 123456789,
}
def test_serialize(self):
msg = slack.events.Message()
msg["channel"] = "C00000A00"
msg["text"] = "Hello world"
assert msg.serialize() == {"channel": "C00000A00", "text": "Hello world"}
def test_serialize_attachments(self):
msg = slack.events.Message()
msg["channel"] = "C00000A00"
msg["text"] = "Hello world"
msg["attachments"] = {"hello": "world"}
assert msg.serialize() == {
"channel": "C00000A00",
"text": "Hello world",
"attachments": '{"hello": "world"}',
}
def test_response(self, slack_message):
msg = Event.from_http(slack_message)
rep = msg.response()
assert isinstance(rep, slack.events.Message)
assert rep["channel"] == "C00000A00"
def test_response_not_in_thread(self, slack_message):
msg = Event.from_http(slack_message)
rep = msg.response(in_thread=False)
assert rep == {"channel": "C00000A00"}
def test_response_in_thread(self, slack_message):
msg = Event.from_http(slack_message)
rep = msg.response(in_thread=True)
assert rep == {"channel": "C00000A00", "thread_ts": "123456789.000001"}
def test_response_thread_default(self, slack_message):
msg = Event.from_http(slack_message)
        rep = msg.response()
        if "thread_ts" in msg or "thread_ts" in msg.get("message", {}):
            assert rep == {"channel": "C00000A00", "thread_ts": "123456789.000001"}
        else:
            assert rep == {"channel": "C00000A00"}


class TestEventRouter:
    def test_register(self, event_router):
        def handler():
            pass

        event_router.register("channel_deleted", handler)
        assert len(event_router._routes["channel_deleted"]["*"]["*"]) == 1
        assert event_router._routes["channel_deleted"]["*"]["*"][0] is handler

    def test_register_details(self, event_router):
        def handler():
            pass

        event_router.register("channel_deleted", handler, hello="world")
        assert len(event_router._routes["channel_deleted"]["hello"]["world"]) == 1
        assert event_router._routes["channel_deleted"]["hello"]["world"][0] is handler

    def test_register_two_details(self, event_router):
        def handler():
            pass

        with pytest.raises(ValueError):
            event_router.register("channel_deleted", handler, hello="world", foo="bar")

    def test_multiple_register(self, event_router):
        def handler():
            pass

        def handler_bis():
            pass

        event_router.register("channel_deleted", handler)
        event_router.register("channel_deleted", handler_bis)
        assert len(event_router._routes["channel_deleted"]["*"]["*"]) == 2
        assert event_router._routes["channel_deleted"]["*"]["*"][0] is handler
        assert event_router._routes["channel_deleted"]["*"]["*"][1] is handler_bis

    @pytest.mark.parametrize("slack_event", {**data.Events.__members__}, indirect=True)
    def test_dispatch(self, slack_event, event_router):
        def handler():
            pass

        ev = Event.from_http(slack_event)
        handlers = list()
        event_router.register("channel_deleted", handler)
        event_router.register("pin_added", handler)
        event_router.register("reaction_added", handler)
        event_router.register("message", handler)
        for h in event_router.dispatch(ev):
            handlers.append(h)
        assert len(handlers) == 1
        assert handlers[0] is handler

    @pytest.mark.parametrize("slack_event", {**data.Events.__members__}, indirect=True)
    def test_no_dispatch(self, slack_event, event_router):
        def handler():
            pass

        ev = Event.from_http(slack_event)
        event_router.register("xxx", handler)
        for h in event_router.dispatch(ev):
            assert False

    @pytest.mark.parametrize("slack_event", {**data.Events.__members__}, indirect=True)
    def test_dispatch_details(self, slack_event, event_router):
        def handler():
            pass

        ev = Event.from_http(slack_event)
        handlers = list()
        event_router.register("channel_deleted", handler, channel="C00000A00")
        event_router.register("pin_added", handler, channel="C00000A00")
        event_router.register("reaction_added", handler, reaction="sirbot")
        event_router.register("message", handler, text=None)
        for h in event_router.dispatch(ev):
            handlers.append(h)
        assert len(handlers) == 1
        assert handlers[0] is handler

    @pytest.mark.parametrize("slack_event", {**data.Events.__members__}, indirect=True)
    def test_multiple_dispatch(self, slack_event, event_router):
        def handler():
            pass

        def handler_bis():
            pass

        ev = Event.from_http(slack_event)
        handlers = list()
        event_router.register("channel_deleted", handler)
        event_router.register("pin_added", handler)
        event_router.register("reaction_added", handler)
        event_router.register("channel_deleted", handler_bis)
        event_router.register("pin_added", handler_bis)
        event_router.register("reaction_added", handler_bis)
        event_router.register("message", handler)
        event_router.register("message", handler_bis)
        for h in event_router.dispatch(ev):
            handlers.append(h)
        assert len(handlers) == 2
        assert handlers[0] is handler
        assert handlers[1] is handler_bis


class TestMessageRouter:
    def test_register(self, message_router):
        def handler():
            pass

        message_router.register(".*", handler)
        assert len(message_router._routes["*"][None][re.compile(".*")]) == 1
        assert message_router._routes["*"][None][re.compile(".*")][0] is handler

    def test_register_channel(self, message_router):
        def handler():
            pass

        message_router.register(".*", handler, channel="C00000A00")
        assert len(message_router._routes["C00000A00"][None][re.compile(".*")]) == 1
        assert message_router._routes["C00000A00"][None][re.compile(".*")][0] is handler

    def test_register_subtype(self, message_router):
        def handler():
            pass

        message_router.register(".*", handler, subtype="bot_message")
        assert len(message_router._routes["*"]["bot_message"][re.compile(".*")]) == 1
        assert (
            message_router._routes["*"]["bot_message"][re.compile(".*")][0] is handler
        )

    def test_multiple_register(self, message_router):
        def handler():
            pass

        def handler_bis():
            pass

        message_router.register(".*", handler)
        message_router.register(".*", handler_bis)
        assert len(message_router._routes["*"][None][re.compile(".*")]) == 2
        assert message_router._routes["*"][None][re.compile(".*")][0] is handler
        assert message_router._routes["*"][None][re.compile(".*")][1] is handler_bis

    def test_dispatch(self, message_router, slack_message):
        def handler():
            pass

        msg = Event.from_http(slack_message)
        message_router.register(".*", handler)
        handlers = list()
        for h in message_router.dispatch(msg):
            handlers.append(h)
        assert len(handlers) == 1
        assert handlers[0] is handler

    def test_no_dispatch(self, message_router, slack_message):
        def handler():
            pass

        msg = Event.from_http(slack_message)
        message_router.register("xxx", handler)
        for h in message_router.dispatch(msg):
            assert False

    def test_dispatch_pattern(self, message_router, slack_message):
        def handler():
            pass

        msg = Event.from_http(slack_message)
        message_router.register("hello", handler)
        handlers = list()
        for h in message_router.dispatch(msg):
            handlers.append(h)
        assert len(handlers) == 1
        assert handlers[0] is handler

    def test_multiple_dispatch(self, message_router, slack_message):
        def handler():
            pass

        def handler_bis():
            pass

        msg = Event.from_http(slack_message)
        message_router.register(".*", handler)
        message_router.register(".*", handler_bis)
        handlers = list()
        for h in message_router.dispatch(msg):
            handlers.append(h)
        assert len(handlers) == 2
        assert handlers[0] is handler
        assert handlers[1] is handler_bis

    def test_multiple_dispatch_pattern(self, message_router, slack_message):
        def handler():
            pass

        def handler_bis():
            pass

        msg = Event.from_http(slack_message)
        message_router.register("hello", handler)
        message_router.register("hello", handler_bis)
        handlers = list()
        for h in message_router.dispatch(msg):
            handlers.append(h)
        assert len(handlers) == 2
        assert handlers[0] is handler
        assert handlers[1] is handler_bis

    def test_dispatch_channel(self, message_router, slack_message):
        def handler():
            pass

        msg = Event.from_http(slack_message)
        message_router.register("hello", handler, channel="C00000A00")
        handlers = list()
        for h in message_router.dispatch(msg):
            handlers.append(h)
        assert len(handlers) == 1
        assert handlers[0] is handler

    @pytest.mark.parametrize("slack_message", ("channel_topic",), indirect=True)
    def test_dispatch_subtype(self, message_router, slack_message):
        def handler():
            pass

        msg = Event.from_http(slack_message)
        message_router.register(".*", handler, subtype="channel_topic")
        handlers = list()
        for h in message_router.dispatch(msg):
            handlers.append(h)
        assert len(handlers) == 1
        assert handlers[0] is handler
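The registry shape these tests probe — event type, then detail key, then detail value, then a list of handlers, with `"*"` as the wildcard — can be mimicked with nested `defaultdict`s. This is a toy sketch for illustration only, not the actual sirbot router; it reproduces just the behaviour asserted above (at most one detail pair, `ValueError` on two):

```python
from collections import defaultdict

# event type -> detail key -> detail value -> [handlers]; "*" is the wildcard
routes = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))

def register(event_type, handler, **details):
    if len(details) > 1:
        # mirrors test_register_two_details above
        raise ValueError("at most one detail key/value pair is supported")
    key, value = details.popitem() if details else ("*", "*")
    routes[event_type][key][value].append(handler)

def handler():
    pass

register("channel_deleted", handler)
register("channel_deleted", handler, hello="world")
print(len(routes["channel_deleted"]["*"]["*"]))                    # 1
print(routes["channel_deleted"]["hello"]["world"][0] is handler)   # True
```

Dispatch would then walk both the wildcard bucket and any detail bucket matching the incoming event, which is why `test_dispatch_details` registers with a detail and still expects exactly one handler back.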
# File: src/test/data/pa3/AdditionalTestCase/UnitTest/Stmt_If.py
# From: Leo-Enrique-Wu/chocopy_compiler_code_generation (BSD-2-Clause)
x:int = 3
if x > 0:
    print("positive")
if x < 0:
    print("negative")
if x == 0:
    print("zero")
# File: brain_plasma/__init__.py
# From: HEBOS/brain-plasma (MIT)
from .brain_plasma import Brain
# File: mozbadges/site/__init__.py
# From: Mozilla-GitHub-Standards/32efa1a1fe75882ab357bdb58d92207732a76c86f080bb0f12b4b3357b38899d (BSD-3-Clause)
import helpers
import urls
# File: tests/contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/test_michelson_coding_KT1Jcr.py
# From: juztin/pytezos-1 (MIT)
from unittest import TestCase
from tests import get_data
from pytezos.michelson.micheline import michelson_to_micheline
from pytezos.michelson.formatter import micheline_to_michelson
class MichelsonCodingTestKT1Jcr(TestCase):

    def setUp(self):
        self.maxDiff = None

    def test_michelson_parse_code_KT1Jcr(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/code_KT1Jcr.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/code_KT1Jcr.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_code_KT1Jcr(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/code_KT1Jcr.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/code_KT1Jcr.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_code_KT1Jcr(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/code_KT1Jcr.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_storage_KT1Jcr(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/storage_KT1Jcr.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/storage_KT1Jcr.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_storage_KT1Jcr(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/storage_KT1Jcr.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/storage_KT1Jcr.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_storage_KT1Jcr(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/storage_KT1Jcr.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_opXBki(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opXBki.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opXBki.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_opXBki(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opXBki.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opXBki.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_opXBki(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opXBki.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_oo1W9F(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1W9F.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1W9F.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_oo1W9F(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1W9F.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1W9F.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_oo1W9F(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1W9F.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_oo1FDX(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1FDX.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1FDX.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_oo1FDX(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1FDX.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1FDX.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_oo1FDX(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oo1FDX.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_oohTVV(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oohTVV.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oohTVV.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_oohTVV(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oohTVV.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oohTVV.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_oohTVV(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oohTVV.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_opQZ7h(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opQZ7h.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opQZ7h.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_opQZ7h(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opQZ7h.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opQZ7h.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_opQZ7h(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opQZ7h.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_opAmZ9(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opAmZ9.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opAmZ9.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_opAmZ9(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opAmZ9.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opAmZ9.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_opAmZ9(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_opAmZ9.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)

    def test_michelson_parse_parameter_oodNXi(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oodNXi.json')
        actual = michelson_to_micheline(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oodNXi.tz'))
        self.assertEqual(expected, actual)

    def test_michelson_format_parameter_oodNXi(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oodNXi.tz')
        actual = micheline_to_michelson(get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oodNXi.json'),
            inline=True)
        self.assertEqual(expected, actual)

    def test_michelson_inverse_parameter_oodNXi(self):
        expected = get_data(
            path='contracts/KT1JcrtCT2YLiGXNXMMgR63tHTEtg8WNohx3/parameter_oodNXi.json')
        actual = michelson_to_micheline(micheline_to_michelson(expected))
        self.assertEqual(expected, actual)
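Each `test_michelson_inverse_*` method above checks the same round-trip property: formatting Micheline to Michelson and parsing it back must reproduce the original. The helper below illustrates that pattern in isolation; the toy `json` codec merely stands in for `micheline_to_michelson` / `michelson_to_micheline`, and the helper name is illustrative, not part of the pytezos suite:

```python
import json

def assert_round_trip(value, encode, decode):
    # Decoding an encoded value must reproduce it exactly
    result = decode(encode(value))
    assert result == value, f"round trip changed {value!r} into {result!r}"

# json.dumps / json.loads stand in for the Michelson formatter and parser
assert_round_trip({"prim": "pair", "args": []}, encode=json.dumps, decode=json.loads)
assert_round_trip(42, encode=str, decode=int)
```

Testing the inverse property this way catches formatter/parser asymmetries without needing a second copy of the fixture data, which is why the inverse tests above only load the `.json` file.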
# File: tdameritrade/__init__.py
# From: ka05/tdameritrade (Apache-2.0)
from .client import TDClient  # noqa: F401
from ._version import __version__ # noqa: F401
# File: test_gurbert/test_num.py
# From: Gurbert/test_gurbert (MIT)
class Calculate:
    def __init__(self, num):
        self.num = num

    def add(self):
        return 1 + self.num
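A minimal usage sketch for the `Calculate` class above; note that despite its name, `add()` adds one to the stored number rather than summing two numbers. The instance name and values here are illustrative:

```python
class Calculate:
    def __init__(self, num):
        self.num = num

    def add(self):
        return 1 + self.num

calc = Calculate(5)
print(calc.add())  # 6
```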
# File: tests/lookup_tests/test_lookup_classes.py
# From: michaelhall28/darwinian_shift (MIT)
from darwinian_shift.lookup_classes import *
import os
import numpy as np
from tests.conftest import TEST_DATA_DIR
FILE_DIR = os.path.dirname(os.path.realpath(__file__))
def test_aaindex_lookup1(seq_object):
aaindex_file = os.path.join(FILE_DIR, 'aaindex_sample.txt')
aa = AAindexLookup(aaindex_file, 'alpha-CH chemical shifts (Andersen et al., 1992)', abs_value=False)
res = aa(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_aaindex_results1'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_aaindex_results1.npy'))
np.testing.assert_array_equal(expected, res)
def test_aaindex_lookup2(seq_object):
aaindex_file = os.path.join(FILE_DIR, 'aaindex_sample.txt')
aa = AAindexLookup(aaindex_file, 'Hydrophobicity index (Argos et al., 1982)', abs_value=True)
res = aa(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_aaindex_results2'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_aaindex_results2.npy'))
np.testing.assert_array_equal(expected, res)
def test_bigwig_lookup(seq_object):
bw_file = os.path.join(FILE_DIR, 'phylop_bw_section.bw')
bw = BigwigLookup(bw_file)
res = bw(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_bw_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_bw_results.npy'))
np.testing.assert_array_equal(expected, res)
def test_dummy_lookup(seq_object, project):
np.random.seed(0)
d = DummyValuesPosition(project.data, np.random.random)
res = d(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_dummy_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_dummy_results.npy'))
np.testing.assert_array_equal(expected, res)
def test_foldx_lookup(pdb_seq_object):
f = FoldXLookup(foldx_results_directory=os.path.join(FILE_DIR, 'foldx_data'),
sifts_directory=TEST_DATA_DIR, download_sifts=False)
res = f(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_foldx_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_foldx_results.npy'))
np.testing.assert_array_equal(expected, res)
def test_OR_lookup1(pdb_seq_object):
# Want to use foldx lookup as one of the inputs so can test include_missing_values
bw_file = os.path.join(FILE_DIR, 'phylop_bw_section.bw')
bw = BigwigLookup(bw_file)
f = FoldXLookup(foldx_results_directory=os.path.join(FILE_DIR, 'foldx_data'),
sifts_directory=TEST_DATA_DIR, download_sifts=False)
orl = ORLookup([bw, f], thresholds=[4, 1], directions=[1, -1])
res = orl(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_or_results1'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_or_results1.npy'))
np.testing.assert_array_equal(expected, res)
def test_OR_lookup2(pdb_seq_object):
# Want to use foldx lookup as one of the inputs so can test include_missing_values
bw_file = os.path.join(FILE_DIR, 'phylop_bw_section.bw')
bw = BigwigLookup(bw_file)
f = FoldXLookup(foldx_results_directory=os.path.join(FILE_DIR, 'foldx_data'),
sifts_directory=TEST_DATA_DIR, download_sifts=False)
orl = ORLookup([bw, f], thresholds=[4, 1], directions=[1, -1], include_missing_values=True)
res = orl(pdb_seq_object)
# Save reference results if they have been deliberately changed
#np.save(os.path.join(FILE_DIR, 'reference_or_results2'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_or_results2.npy'))
np.testing.assert_array_equal(expected, res)
def test_AND_lookup1(pdb_seq_object):
# Want to use foldx lookup as one of the inputs so can test include_missing_values
bw_file = os.path.join(FILE_DIR, 'phylop_bw_section.bw')
bw = BigwigLookup(bw_file)
f = FoldXLookup(foldx_results_directory=os.path.join(FILE_DIR, 'foldx_data'),
sifts_directory=TEST_DATA_DIR, download_sifts=False)
andl = ANDLookup([bw, f], thresholds=[4, 1], directions=[1, -1])
res = andl(pdb_seq_object)
# Save reference results if they have been deliberately changed
#np.save(os.path.join(FILE_DIR, 'reference_and_results1'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_and_results1.npy'))
np.testing.assert_array_equal(expected, res)
def test_MutationExclusion_lookup(pdb_seq_object):
# Want to use foldx lookup as one of the inputs so can test include_missing_values
bw_file = os.path.join(FILE_DIR, 'phylop_bw_section.bw')
bw = BigwigLookup(bw_file)
f = FoldXLookup(foldx_results_directory=os.path.join(FILE_DIR, 'foldx_data'),
sifts_directory=TEST_DATA_DIR, download_sifts=False)
ml = MutationExclusionLookup(lookup=bw, exclusion_lookup=f, exclusion_threshold=2)
res = ml(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_mut_exl_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_mut_exl_results.npy'))
np.testing.assert_array_equal(expected, res)
def test_AND_lookup2(pdb_seq_object):
# Want to use foldx lookup as one of the inputs so can test include_missing_values
bw_file = os.path.join(FILE_DIR, 'phylop_bw_section.bw')
bw = BigwigLookup(bw_file)
f = FoldXLookup(foldx_results_directory=os.path.join(FILE_DIR, 'foldx_data'),
sifts_directory=TEST_DATA_DIR, download_sifts=False)
andl = ANDLookup([bw, f], thresholds=[4, 1], directions=[1, -1], include_missing_values=True)
res = andl(pdb_seq_object)
# Save reference results if they have been deliberately changed
#np.save(os.path.join(FILE_DIR, 'reference_and_results2'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_and_results2.npy'))
np.testing.assert_array_equal(expected, res)
def test_iupred2a_lookup(seq_object):
pred = IUPRED2ALookup(iupred_results_dir=FILE_DIR)
seq_object.iupred_file = 'iupred_sample.txt'
res = pred(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_iupred2a_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_iupred2a_results.npy'))
np.testing.assert_array_equal(expected, res)
def test_prody_lookup(pdb_seq_object):
for prody_metric in ProDyLookup._options:
pr = ProDyLookup(metric=prody_metric, exclude_ends=0, pdb_directory=TEST_DATA_DIR,
sifts_directory=TEST_DATA_DIR, dssp_directory=TEST_DATA_DIR,
download_sifts=False, quiet=True)
res = pr(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_{}_results'.format(prody_metric)), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_{}_results.npy'.format(prody_metric)))
np.testing.assert_almost_equal(expected, res)
for prody_metric in ProDyLookup._options:
pr = ProDyLookup(metric=prody_metric, exclude_ends=5, pdb_directory=TEST_DATA_DIR,
sifts_directory=TEST_DATA_DIR, dssp_directory=TEST_DATA_DIR,
download_sifts=False, quiet=True)
res = pr(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_{}_exclude_ends_results'.format(prody_metric)), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_{}_exclude_ends_results.npy'.format(prody_metric)))
np.testing.assert_almost_equal(expected, res)
def test_residue_distance_lookup(pdb_seq_object):
dist_lookup = StructureDistanceLookup(pdb_directory=TEST_DATA_DIR,
sifts_directory=TEST_DATA_DIR, distance_to_alpha_carbons=True)
pdb_seq_object.target_selection = "protein and segid A and resid 1531 1532"
res = dist_lookup(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_residue_distance_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_residue_distance_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_residue_distance_lookup2(pdb_seq_object):
dist_lookup = StructureDistanceLookup(pdb_directory=TEST_DATA_DIR,
sifts_directory=TEST_DATA_DIR, distance_to_alpha_carbons=False)
pdb_seq_object.target_selection = "protein and segid A and resid 1531 1532"
res = dist_lookup(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_residue_distance_results2'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_residue_distance_results2.npy'))
np.testing.assert_almost_equal(expected, res)
def test_residue_distance_lookup_bool(pdb_seq_object):
dist_lookup = StructureDistanceLookup(pdb_directory=TEST_DATA_DIR,
sifts_directory=TEST_DATA_DIR, boolean=True)
pdb_seq_object.target_selection = "protein and segid A and resid 1531 1532"
res = dist_lookup(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_residue_distance_results_bool'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_residue_distance_results_bool.npy'))
np.testing.assert_almost_equal(expected, res)
def test_sequence_distance_lookup(pdb_seq_object):
dist_lookup = SequenceDistanceLookup()
pdb_seq_object.target_selection = np.array([1531, 1532])
res = dist_lookup(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_sequence_distance_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_sequence_distance_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_sequence_distance_lookup_bool(pdb_seq_object):
dist_lookup = SequenceDistanceLookup(boolean=True, target_key='some_test_key')
pdb_seq_object.some_test_key = np.array([1531, 1532])
res = dist_lookup(pdb_seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_sequence_distance_results2'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_sequence_distance_results2.npy'))
np.testing.assert_almost_equal(expected, res)
def test_uniprot_lookup1(seq_object):
# Uses pre-downloaded files
uniprot_lookup = UniprotLookup(uniprot_directory=TEST_DATA_DIR)
res = uniprot_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_uniprot_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_uniprot_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_uniprot_lookup2(seq_object):
# Uses uniprot api
uniprot_lookup = UniprotLookup()
res = uniprot_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_uniprot_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_uniprot_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_uniprot_lookup3(seq_object):
# Uses the uniprot accession
uniprot_lookup = UniprotLookup(uniprot_directory=TEST_DATA_DIR,
transcript_uniprot_mapping={'ENST00000263388': 'Q9UM47'})
res = uniprot_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_uniprot_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_uniprot_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_uniprot_lookup4(seq_object):
# Uses the uniprot accession and api
uniprot_lookup = UniprotLookup(transcript_uniprot_mapping={'ENST00000263388': 'Q9UM47'})
res = uniprot_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_uniprot_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_uniprot_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_clinvar_lookup(seq_object):
    clinvar_lookup = ClinvarLookup(clinvar_variant_summary_file=os.path.join(TEST_DATA_DIR, "clinvar_sample.txt"),
                                   assembly="GRCh37")
res = clinvar_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_clinvar_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_clinvar_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_pdbekb_lookup(seq_object):
# Uses pre-downloaded files
pdbe_lookup = PDBeKBLookup(
pdbekb_dir=TEST_DATA_DIR,
transcript_uniprot_mapping={'ENST00000263388': 'ABC123'} # Map to the fake test data
)
reset_section(seq_object)
seq_object.pdbekb_kwargs = {'uniprot_subsection': 'interface_residues', 'data_accessions': 'DEF456'}
res = pdbe_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_pdbekb_results'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_pdbekb_results.npy'))
np.testing.assert_almost_equal(expected, res)
def test_pdbekb_lookup2(seq_object):
# Uses pre-downloaded files
pdbe_lookup = PDBeKBLookup(
pdbekb_dir=TEST_DATA_DIR,
transcript_uniprot_mapping={'ENST00000263388': 'ABC123'} # Map to the fake test data
)
reset_section(seq_object)
seq_object.pdbekb_kwargs = {'uniprot_subsection': 'annotations', 'data_accessions': 'monomeric_residue_depth',
'score_method': 'mean'}
res2 = pdbe_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_pdbekb_results2'), res2)
expected = np.load(os.path.join(FILE_DIR, 'reference_pdbekb_results2.npy'))
np.testing.assert_almost_equal(expected, res2)
def test_variant_match_lookup_aachange(seq_object):
vm_lookup = VariantMatchLookup(match_column='aachange')
seq_object.match_list = ['C1392W', 'S154N', 'W724*', 'S154W']
res = vm_lookup(seq_object)
# Save reference results if they have been deliberately changed
# np.save(os.path.join(FILE_DIR, 'reference_variant_match_results_aachange'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_variant_match_results_aachange.npy'))
np.testing.assert_almost_equal(expected, res)
def test_variant_match_lookup_ds_id(seq_object):
vm_lookup = VariantMatchLookup(match_column='ds_mut_id')
seq_object.match_list = ['15303191:G>A', '15302430:G>A', '15281362:C>A']
res = vm_lookup(seq_object)
# Save reference results if they have been deliberately changed
    # np.save(os.path.join(FILE_DIR, 'reference_variant_match_results_ds_id'), res)
expected = np.load(os.path.join(FILE_DIR, 'reference_variant_match_results_ds_id.npy'))
np.testing.assert_almost_equal(expected, res)
def reset_section(section):
section.null_mutations = None
section.observed_mutations = None
    section.load_section_mutations()
#!/usr/bin/env python3
providers = [
"0x29e613b04125c16db3f3613563bfdd0ba24cb629", # A
"0x1926b36af775e1312fdebcc46303ecae50d945af", # B
"0x4934a70ba8c1c3acfa72e809118bdd9048563a24", # C
"0x51e2b36469cdbf58863db70cc38652da84d20c67", # D
]
requesters = [ # should not contain providers
"0x378181ce7b07e8dd749c6f42772574441b20e35f",
"0x4cd57387cc4414be8cece4e6ab84a7dd641eab25",
"0x02a07535bc88f180ecf9f7c6fd66c0c8941fd7ab",
"0x90b25485fcbde99f3ca4864792947cfcfb071de6",
"0x449ecd91143d77cfa9fbd4a5ba779a0911d21423",
"0x1397e8bf32d57b46f507ff2e912a29cd9aa78dcd",
"0xdce54cfd06e7ccf5f2e7640e0007ba667190e38e",
"0x5affc0638b7b311be40a0e27ed5cd7c133c16e64",
"0x904c343addd9f21510e711564dbf52d2a0daf7e3",
"0x17b4ec0bcd6a8f386b354becd44b3c4813448184",
"0x6e89235ddcc313a8184ffa4cea496d0f42f1f647",
"0x76f47f566845d7499c058c3a36ccd2fe5695c9f7",
"0xb0c24da05236caae4b2ee964052aa917eb3927ed",
"0x1176e979939e9a0ea65b9ece552fe413747243dc",
"0x24056c57e3b0933d2fa7d83fb9667d0efdfae64d",
"0x5c2719b9f74dba2feaefb872ffaf6a375c8e70f9",
"0x30f02cecf3e824f963cfa05270c8993a49703d55",
"0x44d85663b00117e38a9d6def322fb09dc40b6839",
"0x78bc08e70dce53f7823456e34610bc55828373af",
"0x7293d2089b6f6e814240d21dc869cc88a3471839",
"0x141c01e36d4e908d42437e203442cd3af40b4d79",
"0x782cf10b0c7393c0c98587277bfc26e73d3d0ca2",
"0x66b7bf743b17f05f8a5afff687f085dc88ed3515",
"0xbe2a4e39b6853086aab31d4003e2d1fa67561eae",
"0xbc28a0dab6eaef04574e2116059bb9102aa31e42",
"0x8d9fc9ad8d51647a12b52d74cfab6f5948687084",
"0x67613da8568ae0b0e76796ec8c45b50c469e3f30",
"0x71cef32e92caacee9bb3b020e2c1b2007d360c26",
"0x90018b8b5ccf4c291b64d85fdd3d9e179706da26",
"0xd3ff698232ac001ce663992b316510a50e93e460",
"0x4c2aebf67f8cfce387ad0ee3774bb193c4a62ef6",
"0x6d00e4661e8407f24b2942149ea4a9d445d10038",
"0x92086404471d0e5453f856363358967308b37cd5",
"0xf6261f330f75052c84022bf3c165153c36d0fcdc",
"0xd1e34cbda0d77407bbe76cce19d2e11257d00a1b",
"0x00ddbcfcee1e3222fa8abc2b2b9d319b61993e27",
"0x6f8d2420c6917bb5bb84db7bc615b9893aa30cb3",
"0xa0cadcbabd102b7966b82f3556c436e5d88daf07",
"0xada6c7bc6c6e0004b407733e3993f65397b321ab",
"0x740fcbc6d4e7e5102b8cba29370b93c6de4c786e",
"0x2cc6d89cda9db6592f36d548d5ced9ec27a80d5c",
"0x82b506dee5091b58a85b52bc31681f29d2c55584",
"0x12ba09353d5c8af8cb362d6ff1d782c1e195b571",
"0x53cf1a8d668ef8e0de626ceed58e1766c08bb625",
"0xcf1c9e9f33d3e439e51d065e0ebfccad6850cbd9",
"0x945ec7ca40720991f4a387e1b19217fbff62cbde",
"0xa61bb920ef738eab3d296c0c983a660f6492e1af",
"0xf4c8345baab83d92a88420b093a96dcdb08705de",
"0xd00440d20faba4a08b3f80f1596420ae16a9910b",
"0xe01eda38f7b5146463872f0c769ac14885dbf518",
"0x5ae3a2b79537f849eabb235401645982a2b1d7bd",
"0xc660aba0006e51cd79428f179467a8d7fbcf90f7",
"0x64b570f0e7c019dc750c4a75c33dca55bdc51845",
"0xf6d9f88fa98d4dc427ffdb1bdf01860fd12c98c7",
"0x865fdb0532b1579ee4eebf0096dbde06f1548a36",
"0x676b8d6e031a394c079fc8fee03ad2974ef126f5",
"0x77c0b42b5c358ff7c97e268794b9ff6a278a0f1e",
"0x6579dac76f0a009f996270bd1b7716ed72cdb2ce",
"0x7621e307269e5d9d4d27fd1c24425b210048f486",
"0xe72307313f8b8f96cfcd99ecef0f1ab03d28be5d",
"0xfe0f02d3d387eec745090c756a31a3c3c2bf32cf",
"0x831854a093f30acb032ab9eeaeb715b37ee1bb03",
"0x28fe7b65c3846541d6d17271585792805ae280f7",
"0xfce73328daf1ae024d0a724c595e1d5b2ac8aecb",
"0x805332ee379269b087c8c1b96adb0f398d53e46f",
"0xe953a99ff799e6da23948ca876fce3f264447de8",
"0x55414e26855c90056b3b54f797b5c7e6558146b3",
"0x1099f0f45f5f2040cc408b062557a31bfedd00d6",
"0x7553a853a45358103ac9650d50d4a15ade1038e3",
"0x5170a965650fc8704b748548c97edb80fec5efd3",
"0xc23aae5daa9e66d6db8496ada096b447297cbddd",
"0xc60409093861fe191ae851572a440deb42818a63",
"0xa47a5e1206e3b8c5fca11f5064d4c0d35e2fd240",
"0xf767b0567a2c4c6c4b4d071746b68198dddb7202",
"0x382f1e1fe7680315053ea1c489b4fc003ff9ad64",
"0x3d50f068445c457c0c38d52980a5e6442e780d89",
"0x8b7356a95c8ba1846eb963fd127741730f666ba8",
"0x4d96ba91f490eca2abd5a93f084ad779c54656aa",
"0x1b8a68bc837f5325bdc57804c26922d98a0332ab",
"0x4b419991c9b949b20556ab9ad13c5d54354f601f",
"0x13fe0b65bdd902d252d0b431aec6acf02a0b2f41",
"0x96963310153ec9a47a19312719a98cc08041134d",
"0x2c73a80956516ba9e6005030faed2f8212bc10a3",
"0xb9a498c3a76049bffc96c898605758620f459244",
"0xf8b8cac4f133b039d1990e9d578289c32ff774de",
"0xfb674750e02afa2f47f26a2d59ea1fe20444b250",
"0x6b781119b8ff1c7585f97caf779be6a80a88daf0",
"0xcaa482f269dd0926fdfd5c48e3a22b621f9d1a09",
"0x446aa04436809c130eab8e33ce9f5d3c80564fb7",
"0xd1806d39a8c2cd7469048f6d99c26cb66cd46f83",
"0xee516025bf14f7118aa9ced5eb2adacfd5827d14",
"0x1d16cfb236f5830875716c86f600dd2ee7456515",
"0xe2969f599cb904e9a808ec7218bc14fcfa346965",
"0x0636278cbd420368b1238ab204b1073df9cc1c5c",
"0x72c1a89ff3606aa29686ba8d29e28dccff06430a",
"0x168cb3240a429c899a74aacda4289f4658ec924b",
"0x08b003717bfab7a80b17b51c32223460fe9efe2a",
"0x4aae9220409e1c4d74ac95ba14edb0684a431379",
"0xab608a70d0b4a3a288dd68a1661cdb8b6c742672",
]
extra_requesters = [
"0xe2e146d6b456760150d78819af7d276a1223a6d4",
"0xa9fc23943e48a3efd35bbdd440932f123d05b697",
"0x5b235d87f3fab61f87d238c11f6790dec1cde736",
"0xe03914783115e807d8ea1660dbdcb4f5b2f969c0",
"0x85fa5e6dd9843cce8f67f4797a96a156c3c79c25",
]
for provider in providers:
    if provider in requesters:
        raise Exception(f"Provider {provider} is in the requesters list; fix it")
    if provider in extra_requesters:
        raise Exception(f"Provider {provider} is in the extra_requesters list; fix it")
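The membership loop above can equivalently be phrased as set intersections, which also scales better for long address lists; a hedged sketch (`find_overlap` is illustrative, not part of this module, and the addresses below are shortened placeholders rather than the real ones above):

```python
def find_overlap(providers, *requester_lists):
    """Return sorted addresses that appear both as a provider and a requester."""
    provider_set = set(providers)
    overlap = set()
    for requesters in requester_lists:
        overlap |= provider_set & set(requesters)
    return sorted(overlap)


# Shortened placeholder addresses, not the real ones above.
assert find_overlap(["0xaa", "0xbb"], ["0xcc"], ["0xdd"]) == []
assert find_overlap(["0xaa", "0xbb"], ["0xbb", "0xcc"]) == ["0xbb"]
```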
"""
Copyright (c) 2015 Red Hat, Inc
All rights reserved.
This software may be modified and distributed under the terms
of the BSD license. See the LICENSE file for details.
"""
import copy
import json
import os
import fnmatch
from pkg_resources import parse_version
import shutil
import six
from osbs.build.build_request import BuildRequest
from osbs.constants import (DEFAULT_BUILD_IMAGE, DEFAULT_OUTER_TEMPLATE,
DEFAULT_INNER_TEMPLATE, SECRETS_PATH)
from osbs.exceptions import OsbsValidationException
from osbs import __version__ as expected_version
from flexmock import flexmock
import pytest
from tests.constants import (INPUTS_PATH, TEST_BUILD_CONFIG, TEST_BUILD_JSON,
TEST_COMPONENT, TEST_GIT_BRANCH, TEST_GIT_REF,
TEST_GIT_URI, TEST_GIT_URI_HUMAN_NAME)
class NoSuchPluginException(Exception):
pass
def get_sample_prod_params():
return {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': 'john-foo',
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': 'registry.example.com',
'source_registry_uri': 'registry.example.com',
'openshift_uri': 'http://openshift/',
'builder_openshift_url': 'http://openshift/',
'koji_target': 'koji-target',
'kojiroot': 'http://root/',
'kojihub': 'http://hub/',
'sources_command': 'make',
'vendor': 'Foo Vendor',
'authoritative_registry': 'registry.example.com',
'distribution_scope': 'authoritative-source-only',
'registry_api_versions': ['v1'],
'pdc_url': 'https://pdc.example.com',
'smtp_uri': 'smtp.example.com',
'proxy': 'http://proxy.example.com'
}
def get_plugins_from_build_json(build_json):
env_vars = build_json['spec']['strategy']['customStrategy']['env']
plugins = None
for d in env_vars:
if d['name'] == 'ATOMIC_REACTOR_PLUGINS':
plugins = json.loads(d['value'])
break
assert plugins is not None
return plugins
def get_plugin(plugins, plugin_type, plugin_name):
plugins = plugins[plugin_type]
for plugin in plugins:
if plugin["name"] == plugin_name:
return plugin
else:
raise NoSuchPluginException()
def has_plugin(plugins, plugin_type, plugin_name):
try:
get_plugin(plugins, plugin_type, plugin_name)
except NoSuchPluginException:
return False
return True
def plugin_value_get(plugins, plugin_type, plugin_name, *args):
result = get_plugin(plugins, plugin_type, plugin_name)
for arg in args:
result = result[arg]
return result
def get_secret_mountpath_by_name(build_json, name):
secrets = build_json['spec']['strategy']['customStrategy']['secrets']
named_secrets = [secret for secret in secrets
if secret['secretSource']['name'] == name]
assert len(named_secrets) == 1
secret = named_secrets[0]
assert 'mountPath' in secret
return secret['mountPath']
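The lookup helpers above reduce to a linear scan for a named entry plus a key-path walk through nested dicts; a self-contained sketch of that pattern (`find_by_name` and `dig` are illustrative stand-ins, not module functions):

```python
def find_by_name(entries, name):
    """Linear scan for the dict whose 'name' field matches, as get_plugin does."""
    for entry in entries:
        if entry["name"] == name:
            return entry
    raise KeyError(name)


def dig(obj, *path):
    """Follow a key path into nested dicts, as plugin_value_get does."""
    for key in path:
        obj = obj[key]
    return obj


sample = [{"name": "koji", "args": {"hub": "http://hub/"}}]
assert dig(find_by_name(sample, "koji"), "args", "hub") == "http://hub/"
```

Raising on a missing name (rather than returning None) is what lets the tests assert absence via `pytest.raises(NoSuchPluginException)`.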
class TestBuildRequest(object):
def assert_import_image_plugin(self, plugins, name_label, registry_uri,
openshift_uri, use_auth, insecure_registry):
assert get_plugin(plugins, "postbuild_plugins", "import_image")
assert plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"imagestream") == name_label.replace('/', '-')
expected_repo = os.path.join(registry_uri, name_label)
expected_repo = expected_repo.replace('https://', '')
expected_repo = expected_repo.replace('http://', '')
assert plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"docker_image_repo") == expected_repo
assert plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"url") == openshift_uri
if use_auth is not None:
assert plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"use_auth") == use_auth
else:
with pytest.raises(KeyError):
plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"use_auth")
if insecure_registry:
assert plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"insecure_registry")
else:
with pytest.raises(KeyError):
plugin_value_get(plugins,
"postbuild_plugins", "import_image", "args",
"insecure_registry")
def test_build_request_has_ist_trigger(self):
build_json = copy.deepcopy(TEST_BUILD_JSON)
br = BuildRequest('something')
flexmock(br).should_receive('template').and_return(build_json)
assert br.has_ist_trigger() is True
def test_build_request_isnt_auto_instantiated(self):
build_json = copy.deepcopy(TEST_BUILD_JSON)
build_json['spec']['triggers'] = []
br = BuildRequest('something')
flexmock(br).should_receive('template').and_return(build_json)
assert br.has_ist_trigger() is False
def test_set_label(self):
build_json = copy.deepcopy(TEST_BUILD_JSON)
br = BuildRequest('something')
flexmock(br).should_receive('template').and_return(build_json)
assert br.template['metadata'].get('labels') is None
br.set_label('label-1', 'value-1')
br.set_label('label-2', 'value-2')
br.set_label('label-3', 'value-3')
assert br.template['metadata']['labels'] == {
'label-1': 'value-1',
'label-2': 'value-2',
'label-3': 'value-3',
}
@pytest.mark.parametrize('registry_uris', [
[],
["registry.example.com:5000"],
["registry.example.com:5000", "localhost:6000"],
])
@pytest.mark.parametrize('build_image', [
None,
'fancy_buildroot:latestest'
])
def test_render_simple_request(self, registry_uris, build_image):
build_request = BuildRequest(INPUTS_PATH)
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'user': "john-foo",
'component': TEST_COMPONENT,
'registry_uris': registry_uris,
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'build_image': build_image,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_api_versions': ['v1'],
}
build_request.set_params(**kwargs)
build_json = build_request.render()
assert build_json["metadata"]["name"] is not None
assert "triggers" not in build_json["spec"]
assert build_json["spec"]["source"]["git"]["uri"] == TEST_GIT_URI
assert build_json["spec"]["source"]["git"]["ref"] == TEST_GIT_REF
expected_output = "john-foo/component:none-20"
if registry_uris:
expected_output = registry_uris[0] + "/" + expected_output
assert build_json["spec"]["output"]["to"]["name"].startswith(expected_output)
plugins = get_plugins_from_build_json(build_json)
pull_base_image = get_plugin(plugins, "prebuild_plugins",
"pull_base_image")
assert pull_base_image is not None
assert ('args' not in pull_base_image or
'parent_registry' not in pull_base_image['args'] or
                pull_base_image['args']['parent_registry'] is None)
assert plugin_value_get(plugins, "exit_plugins", "store_metadata_in_osv3", "args", "url") == \
"http://openshift/"
for r in registry_uris:
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push", "args",
"registries", r) == {"insecure": True}
rendered_build_image = build_json["spec"]["strategy"]["customStrategy"]["from"]["name"]
assert rendered_build_image == (build_image if build_image else DEFAULT_BUILD_IMAGE)
@pytest.mark.parametrize('proxy', [
None,
'http://proxy.example.com',
])
@pytest.mark.parametrize('build_image', [
None,
'ultimate-buildroot:v1.0'
])
@pytest.mark.parametrize('build_imagestream', [
None,
'buildroot-stream:v1.0'
])
def test_render_prod_request_with_repo(self, build_image, build_imagestream, proxy):
build_request = BuildRequest(INPUTS_PATH)
name_label = "fedora/resultingimage"
vendor = "Foo Vendor"
authoritative_registry = "registry.example.com"
distribution_scope = "authoritative-source-only"
koji_task_id = 4756
assert isinstance(build_request, BuildRequest)
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uri': "registry.example.com",
'source_registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'koji_task_id': koji_task_id,
'sources_command': "make",
'vendor': vendor,
'authoritative_registry': authoritative_registry,
'distribution_scope': distribution_scope,
'yum_repourls': ["http://example.com/my.repo"],
'registry_api_versions': ['v1'],
'build_image': build_image,
'build_imagestream': build_imagestream,
'proxy': proxy,
}
build_request.set_params(**kwargs)
build_json = build_request.render()
assert fnmatch.fnmatch(build_json["metadata"]["name"], TEST_BUILD_CONFIG)
assert build_json["metadata"]["labels"]["koji-task-id"] == str(koji_task_id)
assert "triggers" not in build_json["spec"]
assert build_json["spec"]["source"]["git"]["uri"] == TEST_GIT_URI
assert build_json["spec"]["source"]["git"]["ref"] == TEST_GIT_REF
assert build_json["spec"]["output"]["to"]["name"].startswith(
"registry.example.com/john-foo/component:"
)
plugins = get_plugins_from_build_json(build_json)
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
assert plugin_value_get(plugins, "prebuild_plugins", "bump_release",
"args", "hub") == "http://hub/"
assert plugin_value_get(plugins, "prebuild_plugins", "distgit_fetch_artefacts",
"args", "command") == "make"
assert plugin_value_get(plugins, "prebuild_plugins", "pull_base_image",
"args", "parent_registry") == "registry.example.com"
assert plugin_value_get(plugins, "exit_plugins", "store_metadata_in_osv3",
"args", "url") == "http://openshift/"
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push", "args",
"registries", "registry.example.com") == {"insecure": True}
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "koji")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_push")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "import_image")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'exit_plugins', 'delete_from_registry')
assert plugin_value_get(plugins, "prebuild_plugins", "add_yum_repo_by_url",
"args", "repourls") == ["http://example.com/my.repo"]
if proxy:
assert plugin_value_get(plugins, "prebuild_plugins", "add_yum_repo_by_url",
"args", "inject_proxy") == proxy
else:
with pytest.raises(KeyError):
plugin_value_get(plugins, "prebuild_plugins", "add_yum_repo_by_url",
"args", "inject_proxy")
labels = plugin_value_get(plugins, "prebuild_plugins", "add_labels_in_dockerfile",
"args", "labels")
assert labels is not None
assert labels['authoritative-source-url'] == authoritative_registry
assert labels['vendor'] == vendor
assert labels['distribution-scope'] == distribution_scope
rendered_build_image = build_json["spec"]["strategy"]["customStrategy"]["from"]["name"]
if not build_imagestream:
assert rendered_build_image == (build_image if build_image else DEFAULT_BUILD_IMAGE)
else:
assert rendered_build_image == build_imagestream
assert build_json["spec"]["strategy"]["customStrategy"]["from"]["kind"] == "ImageStreamTag"
@pytest.mark.parametrize('proxy', [
None,
'http://proxy.example.com',
])
def test_render_prod_request(self, proxy):
build_request = BuildRequest(INPUTS_PATH)
name_label = "fedora/resultingimage"
koji_target = "koji-target"
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uri': "registry.example.com",
'source_registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': koji_target,
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
'pdc_url': 'https://pdc.example.com',
'smtp_uri': 'smtp.example.com',
'proxy': proxy
}
build_request.set_params(**kwargs)
build_json = build_request.render()
assert fnmatch.fnmatch(build_json["metadata"]["name"], TEST_BUILD_CONFIG)
assert "triggers" not in build_json["spec"]
assert build_json["spec"]["source"]["git"]["uri"] == TEST_GIT_URI
assert build_json["spec"]["source"]["git"]["ref"] == TEST_GIT_REF
assert build_json["spec"]["output"]["to"]["name"].startswith(
"registry.example.com/john-foo/component:"
)
assert build_json["metadata"]["labels"]["git-repo-name"] == TEST_GIT_URI_HUMAN_NAME
assert build_json["metadata"]["labels"]["git-branch"] == TEST_GIT_BRANCH
plugins = get_plugins_from_build_json(build_json)
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
assert plugin_value_get(plugins, "prebuild_plugins", "bump_release",
"args", "hub") == "http://hub/"
assert plugin_value_get(plugins, "prebuild_plugins", "distgit_fetch_artefacts",
"args", "command") == "make"
assert plugin_value_get(plugins, "prebuild_plugins", "pull_base_image", "args",
"parent_registry") == "registry.example.com"
assert plugin_value_get(plugins, "exit_plugins", "store_metadata_in_osv3",
"args", "url") == "http://openshift/"
assert plugin_value_get(plugins, "prebuild_plugins", "koji",
"args", "root") == "http://root/"
assert plugin_value_get(plugins, "prebuild_plugins", "koji",
"args", "target") == koji_target
assert plugin_value_get(plugins, "prebuild_plugins", "koji",
"args", "hub") == "http://hub/"
if proxy:
assert plugin_value_get(plugins, "prebuild_plugins", "koji",
"args", "proxy") == proxy
else:
with pytest.raises(KeyError):
plugin_value_get(plugins, "prebuild_plugins", "koji", "args", "proxy")
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push", "args",
"registries", "registry.example.com") == {"insecure": True}
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_push")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "import_image")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'exit_plugins', 'delete_from_registry')
assert get_plugin(plugins, "exit_plugins", "koji_promote")
assert plugin_value_get(plugins, "exit_plugins", "koji_promote", "args",
"target") == koji_target
labels = plugin_value_get(plugins, "prebuild_plugins", "add_labels_in_dockerfile",
"args", "labels")
assert labels is not None
assert labels['authoritative-source-url'] is not None
assert labels['vendor'] is not None
assert labels['distribution-scope'] is not None
def test_render_prod_without_koji_request(self):
build_request = BuildRequest(INPUTS_PATH)
name_label = "fedora/resultingimage"
assert isinstance(build_request, BuildRequest)
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uri': "registry.example.com",
'source_registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
}
build_request.set_params(**kwargs)
build_json = build_request.render()
assert fnmatch.fnmatch(build_json["metadata"]["name"], TEST_BUILD_CONFIG)
assert "triggers" not in build_json["spec"]
assert build_json["spec"]["source"]["git"]["uri"] == TEST_GIT_URI
assert build_json["spec"]["source"]["git"]["ref"] == TEST_GIT_REF
assert build_json["spec"]["output"]["to"]["name"].startswith(
"registry.example.com/john-foo/component:none-"
)
plugins = get_plugins_from_build_json(build_json)
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "bump_release")
assert plugin_value_get(plugins, "prebuild_plugins", "distgit_fetch_artefacts",
"args", "command") == "make"
assert plugin_value_get(plugins, "prebuild_plugins", "pull_base_image", "args",
"parent_registry") == "registry.example.com"
assert plugin_value_get(plugins, "exit_plugins", "store_metadata_in_osv3",
"args", "url") == "http://openshift/"
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push", "args",
"registries", "registry.example.com") == {"insecure": True}
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "koji")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_push")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "import_image")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "koji_promote")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'exit_plugins', 'delete_from_registry')
labels = plugin_value_get(plugins, "prebuild_plugins", "add_labels_in_dockerfile",
"args", "labels")
assert labels is not None
assert labels['authoritative-source-url'] is not None
assert labels['vendor'] is not None
assert labels['distribution-scope'] is not None
def test_render_prod_with_secret_request(self):
build_request = BuildRequest(INPUTS_PATH)
assert isinstance(build_request, BuildRequest)
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': "",
'pulp_registry': "registry.example.com",
'nfs_server_path': "server:path",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
'source_secret': 'mysecret',
}
build_request.set_params(**kwargs)
build_json = build_request.render()
# Check that the secret's mountPath matches the plugin's
# configured path for the secret
mount_path = get_secret_mountpath_by_name(build_json, 'mysecret')
plugins = get_plugins_from_build_json(build_json)
assert get_plugin(plugins, "postbuild_plugins", "pulp_push")
assert plugin_value_get(plugins, 'postbuild_plugins', 'pulp_push',
'args', 'pulp_secret_path') == mount_path
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
assert plugin_value_get(plugins, "prebuild_plugins", "bump_release",
"args", "hub") == "http://hub/"
assert get_plugin(plugins, "prebuild_plugins", "koji")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "import_image")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'exit_plugins', 'delete_from_registry')
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push", "args",
"registries") == {}
@pytest.mark.parametrize('registry_secrets', [None, ['registry-secret']])
@pytest.mark.parametrize('source_registry', [None, 'registry.example.com', 'localhost'])
def test_render_pulp_sync(self, registry_secrets, source_registry):
build_request = BuildRequest(INPUTS_PATH)
pulp_env = 'env'
pulp_secret = 'pulp-secret'
registry_uri = 'https://registry.example.com'
registry_ver = '/v2'
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': registry_uri + registry_ver,
'openshift_uri': "http://openshift/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v2'],
'registry_secrets': registry_secrets,
'pulp_registry': pulp_env,
'pulp_secret': pulp_secret,
}
if source_registry:
kwargs['source_registry_uri'] = source_registry
build_request.set_params(**kwargs)
build_json = build_request.render()
plugins = get_plugins_from_build_json(build_json)
assert get_plugin(plugins, 'postbuild_plugins', 'pulp_sync')
assert plugin_value_get(plugins, 'postbuild_plugins',
'pulp_sync', 'args',
'pulp_registry_name') == pulp_env
assert plugin_value_get(plugins, 'postbuild_plugins',
'pulp_sync', 'args',
'docker_registry') == registry_uri
if source_registry and source_registry in kwargs['registry_uri']:
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'exit_plugins', 'delete_from_registry')
else:
assert get_plugin(plugins, 'exit_plugins', 'delete_from_registry')
assert 'https://registry.example.com' in plugin_value_get(plugins, 'exit_plugins',
'delete_from_registry',
'args', 'registries')
if registry_secrets:
assert plugin_value_get(plugins, 'exit_plugins',
'delete_from_registry', 'args',
'registries', 'https://registry.example.com', 'secret')
else:
assert plugin_value_get(plugins, 'exit_plugins',
'delete_from_registry', 'args',
'registries', 'https://registry.example.com') == {}
if registry_secrets:
mount_path = get_secret_mountpath_by_name(build_json,
registry_secrets[0])
assert plugin_value_get(plugins, 'postbuild_plugins',
'pulp_sync', 'args',
'registry_secret_path') == mount_path
mount_path = get_secret_mountpath_by_name(build_json, pulp_secret)
assert plugin_value_get(plugins, 'postbuild_plugins',
'pulp_sync', 'args',
'pulp_secret_path') == mount_path
def test_render_prod_with_registry_secrets(self):
build_request = BuildRequest(INPUTS_PATH)
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': "registry.example.com",
'nfs_server_path': "server:path",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
'source_secret': 'mysecret',
'registry_secrets': ['registry_secret'],
}
build_request.set_params(**kwargs)
build_json = build_request.render()
mount_path = get_secret_mountpath_by_name(build_json, 'registry_secret')
plugins = get_plugins_from_build_json(build_json)
assert get_plugin(plugins, "postbuild_plugins", "tag_and_push")
assert plugin_value_get(
plugins, "postbuild_plugins", "tag_and_push", "args", "registries",
"registry.example.com", "secret") == mount_path
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
assert get_plugin(plugins, "prebuild_plugins", "bump_release")
assert get_plugin(plugins, "prebuild_plugins", "koji")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "import_image")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
def test_render_prod_request_requires_newer(self):
"""
We should get an OsbsValidationException when trying to use the
sendmail plugin without requiring OpenShift 1.0.6, as
configuring the plugin requires the new-style secrets.
"""
build_request = BuildRequest(INPUTS_PATH)
name_label = "fedora/resultingimage"
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uris': ["registry1.example.com/v1", # first is primary
"registry2.example.com/v2"],
'nfs_server_path': "server:path",
'source_registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'pdc_secret': 'foo',
'pdc_url': 'https://pdc.example.com',
'smtp_uri': 'smtp.example.com',
}
build_request.set_params(**kwargs)
with pytest.raises(OsbsValidationException):
build_request.render()
@pytest.mark.parametrize('registry_api_versions', [
['v1', 'v2'],
['v2'],
])
@pytest.mark.parametrize('scratch', [False, True])
def test_render_prod_request_v1_v2(self, registry_api_versions, scratch):
build_request = BuildRequest(INPUTS_PATH)
name_label = "fedora/resultingimage"
pulp_env = 'v1pulp'
pulp_secret = pulp_env + 'secret'
registry_secret = 'registry_secret'
kwargs = {
'pulp_registry': pulp_env,
'pulp_secret': pulp_secret,
}
kwargs.update({
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uris': [
# first is primary
"http://registry1.example.com:5000/v1",
"http://registry2.example.com:5000/v2"
],
'registry_secrets': [
"",
registry_secret,
],
'nfs_server_path': "server:path",
'source_registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': registry_api_versions,
'scratch': scratch,
})
build_request.set_params(**kwargs)
build_json = build_request.render()
assert fnmatch.fnmatch(build_json["metadata"]["name"], TEST_BUILD_CONFIG)
assert "triggers" not in build_json["spec"]
assert build_json["spec"]["source"]["git"]["uri"] == TEST_GIT_URI
assert build_json["spec"]["source"]["git"]["ref"] == TEST_GIT_REF
# Pulp used, so no direct registry output
assert build_json["spec"]["output"]["to"]["name"].startswith(
"john-foo/component:"
)
plugins = get_plugins_from_build_json(build_json)
# tag_and_push configuration. Must not have the scheme part.
expected_registries = {
'registry2.example.com:5000': {
'insecure': True,
'secret': '/var/run/secrets/atomic-reactor/registry_secret'
},
}
if 'v1' in registry_api_versions:
expected_registries['registry1.example.com:5000'] = {
'insecure': True,
}
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push",
"args", "registries") == expected_registries
secrets = build_json['spec']['strategy']['customStrategy']['secrets']
for version, plugin in [('v1', 'pulp_push'), ('v2', 'pulp_sync')]:
if version not in registry_api_versions:
continue
path = plugin_value_get(plugins, "postbuild_plugins", plugin,
"args", "pulp_secret_path")
mount_path = get_secret_mountpath_by_name(build_json, pulp_secret)
assert mount_path == path
if plugin == 'pulp_sync':
path = plugin_value_get(plugins, "postbuild_plugins", plugin,
"args", "registry_secret_path")
mount_path = get_secret_mountpath_by_name(build_json,
registry_secret)
assert mount_path == path
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
if 'v1' in registry_api_versions:
assert get_plugin(plugins, "postbuild_plugins",
"pulp_push")
assert plugin_value_get(plugins, "postbuild_plugins", "pulp_push",
"args", "pulp_registry_name") == pulp_env
else:
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins",
"pulp_push")
if 'v2' in registry_api_versions:
assert get_plugin(plugins, "postbuild_plugins", "pulp_sync")
env = plugin_value_get(plugins, "postbuild_plugins", "pulp_sync",
"args", "pulp_registry_name")
assert env == pulp_env
            # Presence check only; plugin_value_get raises KeyError if unset.
            # (Avoid rebinding pulp_secret, which shadows the fixture value.)
            plugin_value_get(plugins, "postbuild_plugins",
                             "pulp_sync", "args",
                             "pulp_secret_path")
docker_registry = plugin_value_get(plugins, "postbuild_plugins",
"pulp_sync", "args",
"docker_registry")
# pulp_sync config must have the scheme part to satisfy pulp.
assert docker_registry == 'http://registry2.example.com:5000'
else:
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
if scratch:
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "compress")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "koji_promote")
else:
assert get_plugin(plugins, "postbuild_plugins", "compress")
assert get_plugin(plugins, "exit_plugins", "koji_promote")
def test_render_with_yum_repourls(self):
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
}
build_request = BuildRequest(INPUTS_PATH)
# Test validation for yum_repourls parameter
kwargs['yum_repourls'] = 'should be a list'
with pytest.raises(OsbsValidationException):
build_request.set_params(**kwargs)
# Use a valid yum_repourls parameter and check the result
kwargs['yum_repourls'] = ['http://example.com/repo1.repo', 'http://example.com/repo2.repo']
build_request.set_params(**kwargs)
build_json = build_request.render()
plugins = get_plugins_from_build_json(build_json)
repourls = None
for d in plugins['prebuild_plugins']:
if d['name'] == 'add_yum_repo_by_url':
repourls = d['args']['repourls']
assert repourls is not None
assert len(repourls) == 2
assert 'http://example.com/repo1.repo' in repourls
assert 'http://example.com/repo2.repo' in repourls
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
assert plugin_value_get(plugins, "prebuild_plugins", "bump_release",
"args", "hub") == "http://hub/"
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "koji")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "cp_built_image_to_nfs")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_push")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "pulp_sync")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "postbuild_plugins", "import_image")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
@pytest.mark.parametrize(('hub', 'disabled'), [
('http://hub/', False),
(None, True),
])
def test_render_bump_release(self, hub, disabled):
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
}
if hub:
kwargs['kojihub'] = hub
build_request = BuildRequest(INPUTS_PATH)
build_request.set_params(**kwargs)
build_json = build_request.render()
plugins = get_plugins_from_build_json(build_json)
if disabled:
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "bump_release")
else:
assert plugin_value_get(plugins, "prebuild_plugins", "bump_release",
"args", "hub") == hub
@pytest.mark.parametrize('unique_tag_only', [False, None, True])
def test_render_unique_tag_only(self, unique_tag_only):
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'openshift_uri': "http://openshift/",
'registry_api_versions': ['v1', 'v2'],
}
if unique_tag_only is not None:
kwargs['unique_tag_only'] = unique_tag_only
build_request = BuildRequest(INPUTS_PATH)
build_request.set_params(**kwargs)
build_json = build_request.render()
plugins = get_plugins_from_build_json(build_json)
if unique_tag_only:
assert plugin_value_get(plugins,
'postbuild_plugins',
'tag_by_labels',
'args',
                                'unique_tag_only') is True
assert not has_plugin(plugins, 'postbuild_plugins', 'tag_from_config')
else:
tag_and_push_args = get_plugin(plugins,
'postbuild_plugins',
'tag_and_push')['args']
assert 'unique_tag_only' not in tag_and_push_args
assert has_plugin(plugins, 'postbuild_plugins', 'tag_from_config')
def test_render_prod_with_pulp_no_auth(self):
"""
Rendering should fail if pulp is specified but auth config isn't
"""
build_request = BuildRequest(INPUTS_PATH)
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': 'fedora/resultingimage',
'registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'pulp_registry': "foo",
}
build_request.set_params(**kwargs)
with pytest.raises(OsbsValidationException):
build_request.render()
@staticmethod
def create_image_change_trigger_json(outdir):
"""
Create JSON templates with an image change trigger added.
:param outdir: str, path to store modified templates
"""
# Make temporary copies of the JSON files
for basename in [DEFAULT_OUTER_TEMPLATE, DEFAULT_INNER_TEMPLATE]:
shutil.copy(os.path.join(INPUTS_PATH, basename),
os.path.join(outdir, basename))
# Create a build JSON description with an image change trigger
with open(os.path.join(outdir, DEFAULT_OUTER_TEMPLATE), 'r+') as prod_json:
build_json = json.load(prod_json)
# Add the image change trigger
build_json['spec']['triggers'] = [
{
"type": "ImageChange",
"imageChange": {
"from": {
"kind": "ImageStreamTag",
"name": "{{BASE_IMAGE_STREAM}}"
}
}
}
]
prod_json.seek(0)
json.dump(build_json, prod_json)
prod_json.truncate()
@pytest.mark.parametrize(('registry_uri', 'insecure_registry'), [
("https://registry.example.com", False),
("http://registry.example.com", True),
])
@pytest.mark.parametrize('use_auth', (True, False, None))
def test_render_prod_request_with_trigger(self, tmpdir, registry_uri,
insecure_registry, use_auth):
self.create_image_change_trigger_json(str(tmpdir))
build_request = BuildRequest(str(tmpdir))
name_label = "fedora/resultingimage"
pdc_secret_name = 'foo'
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uri': registry_uri,
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
'pdc_secret': pdc_secret_name,
'pdc_url': 'https://pdc.example.com',
'smtp_uri': 'smtp.example.com',
}
if use_auth is not None:
kwargs['use_auth'] = use_auth
build_request.set_params(**kwargs)
build_json = build_request.render()
assert "triggers" in build_json["spec"]
assert build_json["spec"]["triggers"][0]["imageChange"]["from"]["name"] == 'fedora:latest'
plugins = get_plugins_from_build_json(build_json)
assert get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
assert get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
assert plugin_value_get(plugins, "prebuild_plugins",
"check_and_set_rebuild", "args",
"url") == kwargs["openshift_uri"]
self.assert_import_image_plugin(
plugins=plugins,
name_label=name_label,
registry_uri=kwargs['registry_uri'],
openshift_uri=kwargs['openshift_uri'],
use_auth=use_auth,
insecure_registry=insecure_registry)
assert plugin_value_get(plugins, "postbuild_plugins", "tag_and_push", "args",
"registries", "registry.example.com") == {"insecure": True}
assert get_plugin(plugins, "exit_plugins", "koji_promote")
assert plugin_value_get(plugins, "exit_plugins", "koji_promote",
"args", "kojihub") == kwargs["kojihub"]
assert plugin_value_get(plugins, "exit_plugins", "koji_promote",
"args", "url") == kwargs["openshift_uri"]
with pytest.raises(KeyError):
plugin_value_get(plugins, 'exit_plugins', 'koji_promote',
'args', 'metadata_only') # v1 enabled by default
mount_path = get_secret_mountpath_by_name(build_json, pdc_secret_name)
expected = {'args': {'from_address': 'osbs@example.com',
'url': 'http://openshift/',
'pdc_url': 'https://pdc.example.com',
'pdc_secret_path': mount_path,
'send_on': ['auto_fail', 'auto_success'],
'error_addresses': ['errors@example.com'],
'smtp_uri': 'smtp.example.com',
'submitter': 'john-foo'},
'name': 'sendmail'}
assert get_plugin(plugins, 'exit_plugins', 'sendmail') == expected
@pytest.mark.parametrize(('registry_uri', 'insecure_registry'), [
("https://registry.example.com", False),
("http://registry.example.com", True),
])
@pytest.mark.parametrize('use_auth', (True, False, None))
def test_render_custom_base_image_with_trigger(self, tmpdir, registry_uri,
insecure_registry, use_auth):
name_label = "fedora/resultingimage"
self.create_image_change_trigger_json(str(tmpdir))
build_request = BuildRequest(str(tmpdir))
kwargs = get_sample_prod_params()
kwargs['base_image'] = 'koji/image-build'
kwargs['yum_repourls'] = ["http://example.com/my.repo"]
kwargs['pdc_secret'] = 'foo'
kwargs['pdc_url'] = 'https://pdc.example.com'
kwargs['smtp_uri'] = 'smtp.example.com'
kwargs['registry_uri'] = registry_uri
kwargs['source_registry_uri'] = registry_uri
kwargs['openshift_uri'] = 'http://openshift/'
if use_auth is not None:
kwargs['use_auth'] = use_auth
build_request.set_params(**kwargs)
build_json = build_request.render()
assert build_request.is_custom_base_image() is True
# Verify the triggers are now disabled
assert "triggers" not in build_json["spec"]
# Verify the rebuild plugins are all disabled
plugins = get_plugins_from_build_json(build_json)
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins", "check_and_set_rebuild")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "prebuild_plugins",
"stop_autorebuild_if_disabled")
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, "exit_plugins", "sendmail")
self.assert_import_image_plugin(
plugins=plugins,
name_label=name_label,
registry_uri=kwargs['registry_uri'],
openshift_uri=kwargs['openshift_uri'],
use_auth=use_auth,
insecure_registry=insecure_registry)
def test_render_prod_request_new_secrets(self, tmpdir):
secret_name = 'mysecret'
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': "fedora/resultingimage",
'registry_uri': "registry.example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'sources_command': "make",
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
'pulp_registry': 'foo',
'pulp_secret': secret_name,
}
# Default required version (1.0.6), implicitly and explicitly
for required in (None, parse_version('1.0.6')):
build_request = BuildRequest(INPUTS_PATH)
if required is not None:
build_request.set_openshift_required_version(required)
build_request.set_params(**kwargs)
build_json = build_request.render()
# Not using the sourceSecret scheme
assert 'sourceSecret' not in build_json['spec']['source']
# Check that the secret's mountPath matches the plugin's
# configured path for the secret
mount_path = get_secret_mountpath_by_name(build_json, secret_name)
plugins = get_plugins_from_build_json(build_json)
assert plugin_value_get(plugins, 'postbuild_plugins', 'pulp_push',
'args', 'pulp_secret_path') == mount_path
def test_render_prod_request_with_koji_secret(self, tmpdir):
self.create_image_change_trigger_json(str(tmpdir))
build_request = BuildRequest(str(tmpdir))
name_label = "fedora/resultingimage"
koji_certs_secret_name = 'foobar'
koji_task_id = 1234
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uri': "example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'koji_task_id': koji_task_id,
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
'koji_certs_secret': koji_certs_secret_name,
}
build_request.set_params(**kwargs)
build_json = build_request.render()
assert build_json["metadata"]["labels"]["koji-task-id"] == str(koji_task_id)
plugins = get_plugins_from_build_json(build_json)
assert get_plugin(plugins, "exit_plugins", "koji_promote")
assert plugin_value_get(plugins, "exit_plugins", "koji_promote",
"args", "kojihub") == kwargs["kojihub"]
assert plugin_value_get(plugins, "exit_plugins", "koji_promote",
"args", "url") == kwargs["openshift_uri"]
mount_path = get_secret_mountpath_by_name(build_json,
koji_certs_secret_name)
        assert plugin_value_get(plugins, 'exit_plugins', 'koji_promote',
                                'args', 'koji_ssl_certs') == mount_path
def test_render_prod_request_with_koji_kerberos(self, tmpdir):
self.create_image_change_trigger_json(str(tmpdir))
build_request = BuildRequest(str(tmpdir))
name_label = "fedora/resultingimage"
koji_task_id = 1234
koji_use_kerberos = True
koji_kerberos_keytab = "FILE:/tmp/fakekeytab"
koji_kerberos_principal = "myprincipal@OSBSDOMAIN.COM"
kwargs = {
'git_uri': TEST_GIT_URI,
'git_ref': TEST_GIT_REF,
'git_branch': TEST_GIT_BRANCH,
'user': "john-foo",
'component': TEST_COMPONENT,
'base_image': 'fedora:latest',
'name_label': name_label,
'registry_uri': "example.com",
'openshift_uri': "http://openshift/",
'builder_openshift_url': "http://openshift/",
'koji_target': "koji-target",
'kojiroot': "http://root/",
'kojihub': "http://hub/",
'sources_command': "make",
'koji_task_id': koji_task_id,
'koji_use_kerberos': koji_use_kerberos,
'koji_kerberos_keytab': koji_kerberos_keytab,
'koji_kerberos_principal': koji_kerberos_principal,
'vendor': "Foo Vendor",
'authoritative_registry': "registry.example.com",
'distribution_scope': "authoritative-source-only",
'registry_api_versions': ['v1'],
}
build_request.set_params(**kwargs)
build_json = build_request.render()
assert build_json["metadata"]["labels"]["koji-task-id"] == str(koji_task_id)
plugins = get_plugins_from_build_json(build_json)
assert get_plugin(plugins, "exit_plugins", "koji_promote")
assert plugin_value_get(plugins, "exit_plugins", "koji_promote",
"args", "kojihub") == kwargs["kojihub"]
assert plugin_value_get(plugins, "exit_plugins", "koji_promote",
"args", "url") == kwargs["openshift_uri"]
        assert plugin_value_get(plugins, 'exit_plugins', 'koji_promote',
                                'args', 'koji_principal') == koji_kerberos_principal
        assert plugin_value_get(plugins, 'exit_plugins', 'koji_promote',
                                'args', 'koji_keytab') == koji_kerberos_keytab
@pytest.mark.parametrize(('base_image', 'is_custom'), [
('fedora', False),
('fedora:latest', False),
('koji/image-build', True),
('koji/image-build:spam.conf', True),
])
def test_prod_is_custom_base_image(self, tmpdir, base_image, is_custom):
build_request = BuildRequest(INPUTS_PATH)
# Safe to call prior to build image being set
assert build_request.is_custom_base_image() is False
kwargs = get_sample_prod_params()
kwargs['base_image'] = base_image
build_request.set_params(**kwargs)
        build_request.render()
assert build_request.is_custom_base_image() == is_custom
def test_prod_missing_kojihub__custom_base_image(self, tmpdir):
build_request = BuildRequest(INPUTS_PATH)
kwargs = get_sample_prod_params()
kwargs['base_image'] = 'koji/image-build'
del kwargs['kojihub']
build_request.set_params(**kwargs)
with pytest.raises(OsbsValidationException) as exc:
build_request.render()
assert str(exc.value).startswith(
'Custom base image builds require kojihub')
def test_prod_custom_base_image(self, tmpdir):
build_request = BuildRequest(INPUTS_PATH)
kwargs = get_sample_prod_params()
kwargs['base_image'] = 'koji/image-build'
kwargs['yum_repourls'] = ["http://example.com/my.repo"]
build_request.set_params(**kwargs)
build_json = build_request.render()
assert build_request.is_custom_base_image() is True
plugins = get_plugins_from_build_json(build_json)
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'prebuild_plugins', 'pull_base_image')
add_filesystem_args = plugin_value_get(
plugins, 'prebuild_plugins', 'add_filesystem', 'args')
assert add_filesystem_args['koji_hub'] == kwargs['kojihub']
assert add_filesystem_args['koji_proxyuser'] == kwargs['proxy']
assert add_filesystem_args['repos'] == kwargs['yum_repourls']
def test_prod_non_custom_base_image(self, tmpdir):
build_request = BuildRequest(INPUTS_PATH)
kwargs = get_sample_prod_params()
build_request.set_params(**kwargs)
build_json = build_request.render()
assert build_request.is_custom_base_image() is False
plugins = get_plugins_from_build_json(build_json)
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'prebuild_plugins', 'add_filesystem')
pull_base_image_plugin = get_plugin(
plugins, 'prebuild_plugins', 'pull_base_image')
assert pull_base_image_plugin is not None
def test_render_prod_custom_site_plugin_enable(self):
"""
Test to make sure that when we attempt to enable a plugin, it is
actually enabled in the JSON for the build_request after running
build_request.render()
"""
plugin_type = "exit_plugins"
plugin_name = "testing_exit_plugin"
plugin_args = {"foo": "bar"}
build_request = BuildRequest(INPUTS_PATH)
build_request.customize_conf['enable_plugins'].append(
{
"plugin_type": plugin_type,
"plugin_name": plugin_name,
"plugin_args": plugin_args
}
)
kwargs = get_sample_prod_params()
build_request.set_params(**kwargs)
build_request.render()
assert {
"name": plugin_name,
"args": plugin_args
} in build_request.dj.dock_json[plugin_type]
def test_render_prod_custom_site_plugin_disable(self):
"""
Test to make sure that when we attempt to disable a plugin, it is
actually disabled in the JSON for the build_request after running
build_request.render()
"""
plugin_type = "postbuild_plugins"
plugin_name = "compress"
build_request = BuildRequest(INPUTS_PATH)
build_request.customize_conf['disable_plugins'].append(
{
"plugin_type": plugin_type,
"plugin_name": plugin_name
}
)
kwargs = get_sample_prod_params()
build_request.set_params(**kwargs)
build_request.render()
        plugin_names = [plugin['name']
                        for plugin in build_request.dj.dock_json[plugin_type]]
        assert plugin_name not in plugin_names
def test_render_prod_custom_site_plugin_override(self):
"""
Test to make sure that when we attempt to override a plugin's args,
they are actually overridden in the JSON for the build_request
after running build_request.render()
"""
plugin_type = "postbuild_plugins"
plugin_name = "compress"
plugin_args = {"foo": "bar"}
kwargs = get_sample_prod_params()
unmodified_build_request = BuildRequest(INPUTS_PATH)
unmodified_build_request.set_params(**kwargs)
unmodified_build_request.render()
        for plugin_index, plugin_dict in enumerate(
                unmodified_build_request.dj.dock_json[plugin_type]):
            if plugin_dict['name'] == plugin_name:
                break
build_request = BuildRequest(INPUTS_PATH)
build_request.customize_conf['enable_plugins'].append(
{
"plugin_type": plugin_type,
"plugin_name": plugin_name,
"plugin_args": plugin_args
}
)
build_request.set_params(**kwargs)
build_request.render()
assert {
"name": plugin_name,
"args": plugin_args
} in build_request.dj.dock_json[plugin_type]
assert unmodified_build_request.dj.dock_json[plugin_type][plugin_index]['name'] == plugin_name
assert build_request.dj.dock_json[plugin_type][plugin_index]['name'] == plugin_name
def test_has_version(self):
br = BuildRequest(INPUTS_PATH)
br.render()
assert 'client_version' in br.dj.dock_json
actual_version = br.dj.dock_json['client_version']
assert isinstance(actual_version, six.string_types)
assert expected_version == actual_version
@pytest.mark.parametrize('secret', [None, 'osbsconf'])
def test_reactor_config(self, secret):
br = BuildRequest(INPUTS_PATH)
kwargs = get_sample_prod_params()
kwargs['reactor_config_secret'] = secret
br.set_params(**kwargs)
build_json = br.render()
plugins = get_plugins_from_build_json(build_json)
if secret is None:
with pytest.raises(NoSuchPluginException):
get_plugin(plugins, 'prebuild_plugins', 'reactor_config')
else:
assert get_plugin(plugins, 'prebuild_plugins', 'reactor_config')
assert plugin_value_get(plugins, 'prebuild_plugins',
'reactor_config', 'args',
'config_path').startswith('/')
@pytest.mark.parametrize('secret', [None, 'osbsconf'])
def test_client_config_secret(self, secret):
br = BuildRequest(INPUTS_PATH)
plugin_type = "buildstep_plugins"
plugin_name = "orchestrate_build"
plugin_args = {"foo": "bar"}
kwargs = get_sample_prod_params()
kwargs['client_config_secret'] = secret
br.set_params(**kwargs)
br.dj.dock_json_set_param(plugin_type, [])
br.dj.add_plugin(plugin_type, plugin_name, plugin_args)
build_json = br.render()
plugins = get_plugins_from_build_json(build_json)
if secret is not None:
assert get_secret_mountpath_by_name(build_json, secret) == os.path.join(SECRETS_PATH, secret)
assert get_plugin(plugins, plugin_type, plugin_name)
assert plugin_value_get(plugins, plugin_type, plugin_name,
'args', 'osbs_client_config') == os.path.join(SECRETS_PATH, secret)
else:
with pytest.raises(AssertionError):
get_secret_mountpath_by_name(build_json, secret)
@pytest.mark.parametrize('secret', [
{'secret': None},
{'secret': 'path'},
{'secret1': 'path1',
'secret2': 'path2'
},
{'secret1': 'path1',
'secret2': 'path2',
         'secret3': 'path3'
}
])
def test_token_secrets(self, secret):
br = BuildRequest(INPUTS_PATH)
kwargs = get_sample_prod_params()
kwargs['token_secrets'] = secret
br.set_params(**kwargs)
build_json = br.render()
for (sec, path) in secret.items():
if path:
assert get_secret_mountpath_by_name(build_json, sec) == path
else:
assert get_secret_mountpath_by_name(build_json, sec) == os.path.join(SECRETS_PATH, sec)
| 43.459748 | 119 | 0.592495 | 7,086 | 69,101 | 5.45103 | 0.056449 | 0.031689 | 0.038109 | 0.038057 | 0.804277 | 0.767333 | 0.743437 | 0.721172 | 0.700513 | 0.653291 | 0 | 0.002861 | 0.291747 | 69,101 | 1,589 | 120 | 43.487099 | 0.786375 | 0.026497 | 0 | 0.644776 | 0 | 0 | 0.262125 | 0.048334 | 0 | 0 | 0 | 0 | 0.13209 | 1 | 0.029851 | false | 0.000746 | 0.023134 | 0.000746 | 0.059701 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5e8d5d921b8848770ae58f9dff7d0884f9a6e489 | 108 | py | Python | subcmds/__init__.py | quinoa42/mpv-alfred-workflow | 6542628dfb160a41c762a09b4a0c08afa75001fd | [
"MIT"
] | 4 | 2019-03-22T04:15:38.000Z | 2021-04-27T21:29:06.000Z | subcmds/__init__.py | quinoa42/mpv-alfred-workflow | 6542628dfb160a41c762a09b4a0c08afa75001fd | [
"MIT"
] | null | null | null | subcmds/__init__.py | quinoa42/mpv-alfred-workflow | 6542628dfb160a41c762a09b4a0c08afa75001fd | [
"MIT"
] | null | null | null | from .playlist import GetPlayListSubcommand, SetIndexSubcommand
from .toggle import TogglePlayingSubcommand
0db8af92c756d18d0d13659686194f99a477b276 | 156 | py | Python | yosai_dpcache/cache/backends/__init__.py | YosaiProject/yosai_dpcache | 85d2c2922165a12ea06315bcbb6a4d6f02729793 | [
"Apache-2.0"
] | 6 | 2015-11-23T15:25:35.000Z | 2017-02-08T16:40:22.000Z | yosai_dpcache/cache/backends/__init__.py | YosaiProject/yosai_dpcache | 85d2c2922165a12ea06315bcbb6a4d6f02729793 | [
"Apache-2.0"
] | null | null | null | yosai_dpcache/cache/backends/__init__.py | YosaiProject/yosai_dpcache | 85d2c2922165a12ea06315bcbb6a4d6f02729793 | [
"Apache-2.0"
] | 1 | 2019-07-04T09:38:18.000Z | 2019-07-04T09:38:18.000Z | from yosai_dpcache.cache.region import register_backend
register_backend(
    "yosai_dpcache.redis", "yosai_dpcache.cache.backends.redis", "RedisBackend")
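`register_backend` above associates a short backend name with a module path and a class name so the class can be imported lazily. A minimal registry of the same shape, assuming nothing about the yosai/dogpile internals (the name and module used below are stdlib stand-ins):

```python
import importlib

_backends = {}

def register_backend(name, module_path, class_name):
    # Record only the import location; the class is resolved on first use.
    _backends[name] = (module_path, class_name)

def load_backend(name):
    module_path, class_name = _backends[name]
    return getattr(importlib.import_module(module_path), class_name)

register_backend("stdlib.json", "json.decoder", "JSONDecoder")
assert load_backend("stdlib.json").__name__ == "JSONDecoder"
```

Deferring the import keeps registration cheap and avoids pulling in optional dependencies (such as a Redis client) until a backend is actually selected.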
0dc4ecb60c3ab7118ea0e799da708c3870421586 | 176 | py | Python | tests/unittests/load_functions/submodule/main.py | gohar94/azure-functions-python-worker | 4322e53ddbcc1eea40c1b061b42653336d9003f6 | [
"MIT"
] | 277 | 2018-01-25T23:13:03.000Z | 2022-02-22T06:12:04.000Z | tests/unittests/load_functions/submodule/main.py | gohar94/azure-functions-python-worker | 4322e53ddbcc1eea40c1b061b42653336d9003f6 | [
"MIT"
] | 731 | 2018-01-18T18:54:38.000Z | 2022-03-29T00:01:46.000Z | tests/unittests/load_functions/submodule/main.py | YunchuWang/azure-functions-python-worker | 1f23e038a506c6412e4efbf07eb471a6afab0c2a | [
"MIT"
] | 109 | 2018-01-18T02:22:57.000Z | 2022-02-15T18:59:54.000Z | # Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from .sub_module import module
def main(req) -> str:
    return module.__name__
0dcb6fd3f88625dcafbfc896164db5ff023fc406 | 343 | py | Python | src/genie/libs/parser/ios/tests/ShowInventory/cli/equal/golden_output_6_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/ios/tests/ShowInventory/cli/equal/golden_output_6_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/ios/tests/ShowInventory/cli/equal/golden_output_6_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"slot": {
"1": {
"lc": {
"SM-ES2-16-P": {
"descr": "SM-ES2-16-P",
"name": "1",
"pid": "SM-ES2-16-P",
"sn": "FOC09876NP3",
"vid": "",
}
}
}
}
}
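The `expected_output` above nests inventory entries by slot and line card. A short helper showing how such a parse tree is typically walked to collect serial numbers (field names are taken directly from the dict above; the helper itself is illustrative, not part of genieparser):

```python
def serial_numbers(parsed):
    """Yield (slot, pid, sn) triples from a ShowInventory-style parse tree."""
    for slot, slot_data in parsed["slot"].items():
        # "lc" holds the line cards for the slot; absent key means no cards.
        for card in slot_data.get("lc", {}).values():
            yield slot, card["pid"], card["sn"]

parsed = {
    "slot": {"1": {"lc": {"SM-ES2-16-P": {"pid": "SM-ES2-16-P", "sn": "FOC09876NP3"}}}}
}
assert list(serial_numbers(parsed)) == [("1", "SM-ES2-16-P", "FOC09876NP3")]
```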
0deba4bf934403cf4f957cbf4878a1121b52e8e6 | 198 | py | Python | tests/fixtures/db_driver.py | Sheshtawy/hawkeye | 3f8a6002ec56edc6d60d0fb87aa6b7ee56ccfb14 | [
"MIT"
] | 1 | 2017-08-08T14:30:36.000Z | 2017-08-08T14:30:36.000Z | tests/fixtures/db_driver.py | Sheshtawy/hawkeye | 3f8a6002ec56edc6d60d0fb87aa6b7ee56ccfb14 | [
"MIT"
] | 45 | 2017-08-22T13:01:51.000Z | 2017-12-12T12:19:14.000Z | tests/fixtures/db_driver.py | Sheshtawy/hawkeye | 3f8a6002ec56edc6d60d0fb87aa6b7ee56ccfb14 | [
"MIT"
] | null | null | null | import pytest
from hawkeye.utils.db_drivers.postgres_driver import PostgresDriver
from hawkeye import settings
@pytest.fixture
def db_driver():
    return PostgresDriver(**settings.DB_CREDENTIALS)
df33c2118ddedc9b27e82b53ffdff100105c4e01 | 42 | py | Python | NumericalMethods/boundary_problem/__init__.py | banin-artem/NumericalMethods | 2457b11e08be55cd0baae053e270db9190cff9d5 | [
"MIT"
] | 4 | 2020-09-11T19:08:41.000Z | 2021-05-06T06:43:40.000Z | NumericalMethods/boundary_problem/__init__.py | banin-artem/NumericalMethods | 2457b11e08be55cd0baae053e270db9190cff9d5 | [
"MIT"
] | 1 | 2020-11-27T08:47:35.000Z | 2020-11-27T08:47:35.000Z | NumericalMethods/boundary_problem/__init__.py | banin-artem/NumericalMethods | 2457b11e08be55cd0baae053e270db9190cff9d5 | [
"MIT"
] | 1 | 2021-01-21T16:51:50.000Z | 2021-01-21T16:51:50.000Z | from ._final_diff import final_difference
df658dfde75cc0ff015ba1e52df74671f97b4281 | 2,018 | py | Python | falcon/falcon_resource/templates/resource.py | aeksco/codotype-python-falcon-mongodb-generator | 32680519e249bafe678ee1f6d394893a2e36086c | [
"MIT"
] | null | null | null | falcon/falcon_resource/templates/resource.py | aeksco/codotype-python-falcon-mongodb-generator | 32680519e249bafe678ee1f6d394893a2e36086c | [
"MIT"
] | null | null | null | falcon/falcon_resource/templates/resource.py | aeksco/codotype-python-falcon-mongodb-generator | 32680519e249bafe678ee1f6d394893a2e36086c | [
"MIT"
] | null | null | null | import falcon
import json
# CRUD Resources
class <%- schema.class_name %>CollectionResource(object):
    """Handles GET requests"""

    def on_get(self, req, resp):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from GET /item" })

    def on_post(self, req, resp):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from POST /item" })


class <%- schema.class_name %>ModelResource(object):
    def on_get(self, req, resp, like_id):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from GET /items/:id" })

    def on_put(self, req, resp, like_id):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from PUT /items/:id" })

    def on_delete(self, req, resp, like_id):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from DELETE /items/:id" })

<% for (let attr of schema.attributes) { -%>
<% if (attr.datatype === 'RELATION') { -%>
<% if (attr.datatypeOptions.relationType === 'BELONGS_TO') { -%>
class <%- schema.class_name %>Related<%- attr.datatypeOptions.schema_class_name %>Resource(object):
    def on_get(self, req, resp, like_id):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from GET /items/:id/foo" })
<% } else if (attr.datatypeOptions.relationType === 'HAS_MANY' || attr.datatypeOptions.relationType === 'OWNS_MANY') { -%>
class <%- schema.class_name %>Related<%- attr.datatypeOptions.schema_class_name_plural %>Resource(object):
    def on_get(self, req, resp, like_id):
        resp.status = falcon.HTTP_200  # This is the default status
        resp.body = json.dumps({ "message": "Hi, this is from GET /items/:id/foo" })
<% } -%>
<% } -%>
<% } -%>
10e1fceda37c5a563b6e46b16b6fdbac597843ed | 49 | py | Python | bolt_server/message_dispatcher/__init__.py | project-bolt/bolt-server | 01189c73520da0a95228086a6d095a1b00f93ee3 | [
"MIT"
] | 1 | 2017-10-17T17:23:45.000Z | 2017-10-17T17:23:45.000Z | bolt_server/message_dispatcher/__init__.py | project-bolt/bolt-server | 01189c73520da0a95228086a6d095a1b00f93ee3 | [
"MIT"
] | null | null | null | bolt_server/message_dispatcher/__init__.py | project-bolt/bolt-server | 01189c73520da0a95228086a6d095a1b00f93ee3 | [
"MIT"
] | null | null | null | from message_dispatcher import MessageDispatcher
804381478761dd99705d30a938e4f732289860dc | 851 | py | Python | tests/test_unit/test_test_systems.py | ADicksonLab/openmm_systems | ae6e0acb0d55f93de8b68f48b43d3df40311a2e3 | [
"MIT"
] | 2 | 2020-04-30T19:58:50.000Z | 2021-06-30T05:39:02.000Z | tests/test_unit/test_test_systems.py | ADicksonLab/openmm_systems | ae6e0acb0d55f93de8b68f48b43d3df40311a2e3 | [
"MIT"
] | null | null | null | tests/test_unit/test_test_systems.py | ADicksonLab/openmm_systems | ae6e0acb0d55f93de8b68f48b43d3df40311a2e3 | [
"MIT"
] | null | null | null |
from openmm_systems.test_systems import (
    LennardJonesPair,
    LysozymeImplicit,
)


def test_LennardJonesPair():
    # just touch a bunch of stuff to make sure they work
    testsys = LennardJonesPair()
    assert hasattr(testsys, 'mdtraj_topology')
    assert hasattr(testsys, 'topology')
    assert hasattr(testsys, 'system')
    assert hasattr(testsys, 'positions')
    assert hasattr(testsys, 'receptor_indices')
    assert hasattr(testsys, 'ligand_indices')


def test_Lysozyme():
    testsys = LysozymeImplicit()
    assert hasattr(testsys, 'mdtraj_topology')
    assert hasattr(testsys, 'topology')
    assert hasattr(testsys, 'system')
    assert hasattr(testsys, 'positions')
    # TODO: these really should be here... but they aren't
    # assert hasattr(testsys, 'receptor_indices')
    # assert hasattr(testsys, 'ligand_indices')
339d909cbb2c2f7d8534fec0322e5dea48e9d00b | 182 | py | Python | ask_boyarskikh/app/tests/unit/__init__.py | Nikita-Boyarskikh/Questions-And-Answers | 0feb9c38aab141a5db257748808ec90b851e0a77 | [
"MIT"
] | null | null | null | ask_boyarskikh/app/tests/unit/__init__.py | Nikita-Boyarskikh/Questions-And-Answers | 0feb9c38aab141a5db257748808ec90b851e0a77 | [
"MIT"
] | null | null | null | ask_boyarskikh/app/tests/unit/__init__.py | Nikita-Boyarskikh/Questions-And-Answers | 0feb9c38aab141a5db257748808ec90b851e0a77 | [
"MIT"
] | null | null | null | from .admin import *
from .consumers import *
from .forms import *
from .models import *
from .signals import *
from .templatetags import *
from .utils import *
from .views import *
33bdb93e5776b59ca53b35bdbaba233675cb45a3 | 153 | py | Python | authors/apps/products/admin.py | hoslack/jua-kali_Backend | e0e92aa0287c4a17b303fdde941f457b28c51223 | [
"BSD-3-Clause"
] | null | null | null | authors/apps/products/admin.py | hoslack/jua-kali_Backend | e0e92aa0287c4a17b303fdde941f457b28c51223 | [
"BSD-3-Clause"
] | 3 | 2020-06-05T19:27:45.000Z | 2021-06-10T20:59:41.000Z | authors/apps/products/admin.py | hoslack/jua-kali_Backend | e0e92aa0287c4a17b303fdde941f457b28c51223 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from .models import Product
class AuthorAdmin(admin.ModelAdmin):
    pass
admin.site.register(Product, AuthorAdmin)
33c388ac5076fc2737c0ae62bcdc8c2dd1b63109 | 40 | py | Python | sentipy/model/__init__.py | arthurdjn/sentipy | ac3d25ed04e39cf684a6a6d043c70e1730bd04de | [
"MIT"
] | null | null | null | sentipy/model/__init__.py | arthurdjn/sentipy | ac3d25ed04e39cf684a6a6d043c70e1730bd04de | [
"MIT"
] | null | null | null | sentipy/model/__init__.py | arthurdjn/sentipy | ac3d25ed04e39cf684a6a6d043c70e1730bd04de | [
"MIT"
] | null | null | null | #
from sentipy.model.model import CNN
33fa2e8b6f5a8f5eecf1c8056ec8d8c4043f1b73 | 160 | py | Python | (8 kyu) String repeat/(8 kyu) String repeat.py | suchov/codewars-1 | 25edae81c48c7c9273700a098bc24144f28e31f1 | [
"MIT"
] | 1 | 2020-10-25T17:47:33.000Z | 2020-10-25T17:47:33.000Z | (8 kyu) String repeat/(8 kyu) String repeat.py | novsunheng/codewars | c54b1d822356889b91587b088d02ca0bd3d8dc9e | [
"MIT"
] | null | null | null | (8 kyu) String repeat/(8 kyu) String repeat.py | novsunheng/codewars | c54b1d822356889b91587b088d02ca0bd3d8dc9e | [
"MIT"
] | null | null | null | def repeat_str(repeat, string):
# #1
# str = ""
# for i in range(repeat):
# str += string
# return str
#2
return repeat * string | 20 | 31 | 0.51875 | 20 | 160 | 4.1 | 0.55 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0.35625 | 160 | 8 | 32 | 20 | 0.776699 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
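The commented-out loop and the final one-liner are equivalent; sequence multiplication is the idiomatic form. A quick check of that equivalence:

```python
def repeat_str_loop(repeat, string):
    # Accumulating-loop version of the kata solution.
    out = ""
    for _ in range(repeat):
        out += string
    return out

# str * int repeats the sequence, matching the accumulating loop.
assert repeat_str_loop(3, "ab") == "ab" * 3 == "ababab"
assert repeat_str_loop(0, "ab") == "" == "ab" * 0
```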
1d2ceb8661b26c35cc4605c1a8ad09147a2b781d | 130 | py | Python | anomalytransfer/__init__.py | AnoTransfer/AnoTransfer-code | 4b56ea9181b109bfa332d4fedb65bf3661220253 | [
"MIT"
] | null | null | null | anomalytransfer/__init__.py | AnoTransfer/AnoTransfer-code | 4b56ea9181b109bfa332d4fedb65bf3661220253 | [
"MIT"
] | null | null | null | anomalytransfer/__init__.py | AnoTransfer/AnoTransfer-code | 4b56ea9181b109bfa332d4fedb65bf3661220253 | [
"MIT"
] | null | null | null | import anomalytransfer.transfer as transfer
import anomalytransfer.clustering as clustering
import anomalytransfer.utils as utils
1d37d771e9c8b9bc8e985fd271fd296b67c59252 | 2,044 | py | Python | epytope/Data/pssms/smmpmbec/mat/B_44_02_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 7 | 2021-02-01T18:11:28.000Z | 2022-01-31T19:14:07.000Z | epytope/Data/pssms/smmpmbec/mat/B_44_02_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 22 | 2021-01-02T15:25:23.000Z | 2022-03-14T11:32:53.000Z | epytope/Data/pssms/smmpmbec/mat/B_44_02_8.py | christopher-mohr/epytope | 8ac9fe52c0b263bdb03235a5a6dffcb72012a4fd | [
"BSD-3-Clause"
] | 4 | 2021-05-28T08:50:38.000Z | 2022-03-14T11:45:32.000Z | B_44_02_8 = {0: {'A': -0.613, 'C': -0.025, 'E': 0.121, 'D': 0.431, 'G': 0.027, 'F': 0.001, 'I': -0.588, 'H': 0.18, 'K': 0.01, 'M': 0.03, 'L': -0.102, 'N': 0.304, 'Q': 0.171, 'P': -0.007, 'S': 0.075, 'R': 0.006, 'T': -0.019, 'W': 0.19, 'V': -0.298, 'Y': 0.107}, 1: {'A': 0.032, 'C': -0.123, 'E': -0.69, 'D': -0.471, 'G': -0.033, 'F': 0.339, 'I': 0.167, 'H': -0.019, 'K': 0.137, 'M': 0.094, 'L': 0.24, 'N': -0.15, 'Q': -0.107, 'P': -0.017, 'S': -0.074, 'R': 0.094, 'T': 0.067, 'W': 0.071, 'V': 0.166, 'Y': 0.276}, 2: {'A': -0.002, 'C': -0.013, 'E': -0.002, 'D': 0.011, 'G': 0.016, 'F': -0.091, 'I': -0.052, 'H': 0.009, 'K': 0.034, 'M': -0.058, 'L': -0.066, 'N': 0.007, 'Q': 0.049, 'P': 0.096, 'S': 0.072, 'R': 0.048, 'T': 0.039, 'W': -0.029, 'V': -0.019, 'Y': -0.05}, 3: {'A': 0.006, 'C': -0.0, 'E': -0.006, 'D': -0.004, 'G': 0.005, 'F': -0.003, 'I': -0.002, 'H': 0.001, 'K': 0.005, 'M': -0.002, 'L': -0.009, 'N': -0.002, 'Q': -0.004, 'P': 0.0, 'S': 0.006, 'R': 0.01, 'T': 0.0, 'W': -0.003, 'V': 0.002, 'Y': -0.002}, 4: {'A': -0.129, 'C': 0.075, 'E': 0.103, 'D': 0.088, 'G': -0.047, 'F': 0.2, 'I': 0.199, 'H': -0.105, 'K': -0.167, 'M': 0.097, 'L': 0.091, 'N': 0.062, 'Q': -0.122, 'P': 0.03, 'S': -0.145, 'R': -0.315, 'T': -0.104, 'W': 0.13, 'V': 0.042, 'Y': 0.016}, 5: {'A': -0.029, 'C': -0.003, 'E': -0.006, 'D': -0.003, 'G': 0.003, 'F': -0.033, 'I': -0.027, 'H': 0.016, 'K': 0.023, 'M': 0.007, 'L': -0.009, 'N': 0.008, 'Q': 0.026, 'P': -0.0, 'S': 0.015, 'R': 0.034, 'T': 0.001, 'W': 0.008, 'V': -0.024, 'Y': -0.006}, 6: {'A': -0.101, 'C': -0.203, 'E': -0.047, 'D': -0.026, 'G': -0.021, 'F': -0.201, 'I': 0.017, 'H': 0.097, 'K': 0.086, 'M': 0.095, 'L': -0.076, 'N': 0.058, 'Q': 0.051, 'P': -0.214, 'S': 0.163, 'R': 0.071, 'T': 0.134, 'W': -0.016, 'V': 0.067, 'Y': 0.066}, 7: {'A': 0.031, 'C': -0.009, 'E': 0.003, 'D': 0.003, 'G': 0.009, 'F': -0.071, 'I': -0.01, 'H': -0.013, 'K': 0.008, 'M': -0.005, 'L': 0.003, 'N': 0.001, 'Q': 0.024, 
'P': 0.027, 'S': 0.022, 'R': -0.001, 'T': 0.023, 'W': -0.024, 'V': 0.016, 'Y': -0.037}, -1: {'con': 4.60445}}
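Matrices like `B_44_02_8` map each peptide position (keys `0`-`7` for an 8-mer) to per-residue scores, with a constant stored under key `-1`. A sketch of how such a position-specific scoring matrix is typically applied — the summing convention is an assumption about the SMM/PMBEC predictor, shown here on a tiny toy matrix:

```python
def score_peptide(matrix, peptide):
    # Sum the per-position residue scores, then add the constant under key -1.
    total = sum(matrix[i][aa] for i, aa in enumerate(peptide))
    return total + matrix[-1]["con"]

toy = {0: {"A": -0.5, "E": 0.1}, 1: {"A": 0.2, "E": -0.7}, -1: {"con": 4.6}}
# 'AE' scores -0.5 + (-0.7) + 4.6 = 3.4
assert abs(score_peptide(toy, "AE") - 3.4) < 1e-9
```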
1d715f60c9a2c28fa79144184f9eab0b1149f936 | 6,805 | py | Python | tests/extensions/analytics/test_analytics_extension.py | mirlarof/blip-sdk-python | f958149b2524d4340eeafad8739a33db71df45ed | [
"MIT"
] | 2 | 2021-07-02T20:10:48.000Z | 2021-07-13T20:51:18.000Z | tests/extensions/analytics/test_analytics_extension.py | mirlarof/blip-sdk-python | f958149b2524d4340eeafad8739a33db71df45ed | [
"MIT"
] | 3 | 2021-06-24T13:27:21.000Z | 2021-07-30T15:37:43.000Z | tests/extensions/analytics/test_analytics_extension.py | mirlarof/blip-sdk-python | f958149b2524d4340eeafad8739a33db71df45ed | [
"MIT"
] | 3 | 2021-06-23T19:53:20.000Z | 2022-01-04T17:50:44.000Z | from lime_python import Command
from pytest import fixture, mark
from pytest_mock import MockerFixture
from src import AnalyticsExtension
from ...async_mock import async_return
ANALYTICS_TO = 'postmaster@analytics.msging.net'
class TestAnalyticsExtension:

    @fixture
    def target(self, mocker: MockerFixture) -> AnalyticsExtension:
        yield AnalyticsExtension(mocker.MagicMock(), 'msging.net')

    @mark.asyncio
    async def test_get_events_track_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        expected_command = Command(
            'get',
            '/event-track'
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.get_categories_async()

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)

    @mark.asyncio
    async def test_get_event_track_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        category = 'Saudacao Inicial'
        start_date = '2020-12-17'
        end_date = '2021-12-17'

        expected_command = Command(
            'get',
            '/event-track/Saudacao%20Inicial?startDate=2020-12-17&endDate=2021-12-17&$take=10'  # noqa: E501
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.get_category_actions_counter_async(
            category,
            start_date,
            end_date,
            10
        )

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)

    @mark.asyncio
    async def test_create_event_track_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        category = 'payments'
        action = 'success-order'

        expected_command = Command(
            'set',
            '/event-track',
            'application/vnd.iris.eventTrack+json',
            {
                'category': category,
                'action': action
            }
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.create_event_track_async(
            category,
            action
        )

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)

    @mark.asyncio
    async def test_create_event_track_with_extras_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        category = 'payments'
        action = 'success-order'
        extras = {
            'nome': 'Teste'
        }

        expected_command = Command(
            'set',
            '/event-track',
            'application/vnd.iris.eventTrack+json',
            {
                'category': category,
                'action': action,
                'extras': extras
            }
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.create_event_track_async(
            category,
            action,
            extras=extras
        )

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)

    @mark.asyncio
    async def test_create_event_track_with_identity_and_extras_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        category = 'payments'
        action = 'success-order'
        identity = '123456@messenger.gw.msging.net'
        extras = {
            'nome': 'Teste'
        }

        expected_command = Command(
            'set',
            '/event-track',
            'application/vnd.iris.eventTrack+json',
            {
                'category': category,
                'action': action,
                'contact': {
                    'identity': identity
                },
                'extras': extras
            }
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.create_event_track_async(
            category,
            action,
            identity,
            extras
        )

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)

    @mark.asyncio
    async def test_get_event_track_details_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        category = 'Saudacao Inicial'
        action = 'Contador'
        start_date = '2020-12-17'
        end_date = '2021-12-17'

        expected_command = Command(
            'get',
            '/event-track/Saudacao%20Inicial/Contador?startDate=2020-12-17&endDate=2021-12-17&$take=10'  # noqa: E501
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.get_event_details_async(
            category,
            action,
            start_date,
            end_date,
            10
        )

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)

    @mark.asyncio
    async def test_delete_event_track_async(
        self,
        mocker: MockerFixture,
        target: AnalyticsExtension
    ) -> None:
        # Arrange
        category = 'Saudacao Inicial'

        expected_command = Command(
            'delete',
            '/event-track/Saudacao%20Inicial'
        )
        expected_command.to = ANALYTICS_TO

        mock = mocker.MagicMock(
            return_value=async_return(None)
        )
        target.client.process_command_async = mock

        # Act
        await target.delete_category_async(category)

        # Assert
        expected_command.id = mock.call_args[0][0].id
        mock.assert_called_once_with(expected_command)
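Every test above follows the same Arrange-Act-Assert shape: swap the client's send method for a `MagicMock`, call the extension, then read the captured `Command` back through `call_args`. The capture-and-inspect idiom with stdlib `unittest.mock`, independent of the SDK (the `Client` and `delete_category` below are illustrative stand-ins, not blip-sdk APIs):

```python
from unittest.mock import MagicMock

class Client:
    def process_command(self, command):
        raise NotImplementedError  # real transport; never reached in the test

def delete_category(client, category):
    client.process_command({"method": "delete", "uri": "/event-track/" + category})

client = Client()
client.process_command = MagicMock()  # Arrange: intercept outgoing commands

delete_category(client, "payments")   # Act

# Assert: read back the captured argument, as the tests do with call_args[0][0].
sent = client.process_command.call_args[0][0]
assert sent["uri"] == "/event-track/payments"
client.process_command.assert_called_once()
```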
1d7f04e3b3821fb78b61cb2a95210dc62e8cf8e1 | 39 | py | Python | __init__.py | nherbert25/chess_test | 5cebe4d57c9a1bdc073925079f8530ad900041c9 | [
"MIT"
] | null | null | null | __init__.py | nherbert25/chess_test | 5cebe4d57c9a1bdc073925079f8530ad900041c9 | [
"MIT"
] | null | null | null | __init__.py | nherbert25/chess_test | 5cebe4d57c9a1bdc073925079f8530ad900041c9 | [
"MIT"
] | null | null | null | pass
#testing 11/12/19
#testing 7/19/20
d523685693e91a9c08ee861727d5a462cc3e5b9a | 13 | py | Python | Chapter 02/ch2_19.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 02/ch2_19.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | Chapter 02/ch2_19.py | bpbpublications/TEST-YOUR-SKILLS-IN-PYTHON-LANGUAGE | f6a4194684515495d00aa38347a725dd08f39a0c | [
"MIT"
] | null | null | null | print("\x27") | 13 | 13 | 0.615385 | 2 | 13 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 13 | 1 | 13 | 13 | 0.461538 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d541f14cd467fb55d2886a0a21f264772eed389b | 236 | py | Python | cellrank/tl/estimators/mixins/__init__.py | WeilerP/cellrank | c8c2b9f6bd2448861fb414435aee7620ca5a0bad | [
"BSD-3-Clause"
] | 172 | 2020-03-19T19:50:53.000Z | 2022-03-28T09:36:04.000Z | cellrank/tl/estimators/mixins/__init__.py | WeilerP/cellrank | c8c2b9f6bd2448861fb414435aee7620ca5a0bad | [
"BSD-3-Clause"
] | 702 | 2020-03-19T08:09:04.000Z | 2022-03-30T09:55:14.000Z | cellrank/tl/estimators/mixins/__init__.py | WeilerP/cellrank | c8c2b9f6bd2448861fb414435aee7620ca5a0bad | [
"BSD-3-Clause"
] | 17 | 2020-04-07T03:11:02.000Z | 2022-02-02T20:39:16.000Z | from cellrank.tl.estimators.mixins.decomposition import EigenMixin, SchurMixin
from cellrank.tl.estimators.mixins._lineage_drivers import LinDriversMixin
from cellrank.tl.estimators.mixins._absorption_probabilities import AbsProbsMixin
d596a27433fcba1270127a3de68d443022b13725 | 162 | py | Python | folks/tasks/__init__.py | marinintim/folks | 2dce457c9d57da34626717667b942fa91f62385f | [
"MIT"
] | 4 | 2019-12-02T20:04:55.000Z | 2020-04-30T22:14:30.000Z | folks/tasks/__init__.py | marinintim/folks | 2dce457c9d57da34626717667b942fa91f62385f | [
"MIT"
] | null | null | null | folks/tasks/__init__.py | marinintim/folks | 2dce457c9d57da34626717667b942fa91f62385f | [
"MIT"
] | null | null | null | from .feeds import notify_readers
from .images import rotate_according_to_exif, upload_to_s3, upload_workflow, generate_thumbnails
from .celery import app, logger
634f22e8e66d13c4f3ad3b6ec99dba87e666d096 | 31 | py | Python | caffeination_gui/__init__.py | neisor/caffeination_gui | e0efd58f4f94629749a0c540161eda6740028224 | [
"MIT"
] | null | null | null | caffeination_gui/__init__.py | neisor/caffeination_gui | e0efd58f4f94629749a0c540161eda6740028224 | [
"MIT"
] | null | null | null | caffeination_gui/__init__.py | neisor/caffeination_gui | e0efd58f4f94629749a0c540161eda6740028224 | [
"MIT"
] | null | null | null | from caffeination_gui import *
634fb423dd3094e83249f89e124aa43240cdd8a2 | 110 | py | Python | supervised/jsonable.py | michaelneale/mljar-supervised | 8d1b5fdd56e994a7f13ec5f6d2033830744f3d6f | [
"MIT"
] | 1 | 2020-03-13T09:44:41.000Z | 2020-03-13T09:44:41.000Z | supervised/jsonable.py | wambagilles/mljar-supervised | 3192c91979b31810b249767a63e60ee74068c668 | [
"MIT"
] | null | null | null | supervised/jsonable.py | wambagilles/mljar-supervised | 3192c91979b31810b249767a63e60ee74068c668 | [
"MIT"
] | 1 | 2021-03-12T05:48:45.000Z | 2021-03-12T05:48:45.000Z | class Jsonable(object):
def to_json(self):
pass
def from_json(self, data_json):
pass
| 15.714286 | 35 | 0.6 | 15 | 110 | 4.2 | 0.666667 | 0.253968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.309091 | 110 | 6 | 36 | 18.333333 | 0.828947 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
635ea92757ed7fb8403baa5ac457f9cf13316f2b | 42 | py | Python | __init__.py | jyjokokk/pyImgScraper | e0aa2aa70970a43d091da1364b77ee3a5d77c743 | [
"MIT"
] | null | null | null | __init__.py | jyjokokk/pyImgScraper | e0aa2aa70970a43d091da1364b77ee3a5d77c743 | [
"MIT"
] | null | null | null | __init__.py | jyjokokk/pyImgScraper | e0aa2aa70970a43d091da1364b77ee3a5d77c743 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import pyimgscraper | 21 | 22 | 0.809524 | 6 | 42 | 5.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.071429 | 42 | 2 | 23 | 21 | 0.846154 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63a3db05d97bc97db4bce45ac93658fbcc65ae65 | 9,252 | py | Python | ask-sdk-s3-persistence-adapter/tests/unit/test_adapter.py | timothyaaron/alexa-skills-kit-sdk-for-python | 6b9a45b0c484676e10257c93dc46c9832ea0479d | [
"Apache-2.0"
] | 496 | 2018-09-19T19:50:11.000Z | 2022-03-26T13:32:43.000Z | ask-sdk-s3-persistence-adapter/tests/unit/test_adapter.py | timothyaaron/alexa-skills-kit-sdk-for-python | 6b9a45b0c484676e10257c93dc46c9832ea0479d | [
"Apache-2.0"
] | 112 | 2018-09-18T18:55:43.000Z | 2022-03-31T22:58:40.000Z | ask-sdk-s3-persistence-adapter/tests/unit/test_adapter.py | timothyaaron/alexa-skills-kit-sdk-for-python | 6b9a45b0c484676e10257c93dc46c9832ea0479d | [
"Apache-2.0"
] | 220 | 2018-09-18T18:53:15.000Z | 2022-02-22T22:56:44.000Z | import unittest
import json
import os
from boto3.exceptions import ResourceNotExistsError
from ask_sdk_model import RequestEnvelope
from ask_sdk_core.exceptions import PersistenceException
from ask_sdk_s3.adapter import S3Adapter
try:
import mock
except ImportError:
from unittest import mock
_MOCK_DATA = {"key1": "value1", "key2": "value2", "key3": "1.1"}
class TestS3Adapter(unittest.TestCase):
def setUp(self):
self.s3_client = mock.Mock()
self.object_keygen = mock.Mock()
self.request_envelope = RequestEnvelope()
self.bucket_name = "test_bucket"
self.bucket_key = "test_key"
def test_get_attributes_from_existing_bucket(self):
self.object_keygen.return_value = "test_object_key"
mock_object = {"Body": MockData()}
self.s3_client.get_object = mock.MagicMock(return_value=mock_object)
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
result = test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
        self.assertEqual(_MOCK_DATA, result)
self.s3_client.get_object.assert_called()
def test_get_attributes_from_existing_bucket_no_prefix(self):
self.object_keygen.return_value = "test_object_key"
mock_object = {"Body": MockData()}
self.s3_client.get_object = mock.MagicMock(return_value=mock_object)
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=None,
s3_client=self.s3_client, object_keygen=self.object_keygen)
result = test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
        self.assertEqual(_MOCK_DATA, result)
self.s3_client.get_object.assert_called()
def test_get_attributes_from_existing_bucket_get_object_fails(self):
self.object_keygen.return_value = "test_object_key"
self.s3_client.get_object.read.side_effect = Exception("test exception")
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
def test_get_attributes_resource_not_exist_fails(self):
self.object_keygen.return_value = "test_object_key"
self.s3_client.get_object.side_effect = ResourceNotExistsError("resource does not exist",
self.bucket_key, self.bucket_name)
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
def test_get_attributes_from_existing_bucket_get_object_returns_no_item(self):
self.object_keygen.return_value = "test_object_key"
mock_object = {"Body": {}}
self.s3_client.get_object = mock.MagicMock(return_value=mock_object)
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
result = test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
        self.assertEqual({}, result)
def test_get_attributes_from_existing_bucket_get_object_null_returns_no_item(self):
self.object_keygen.return_value = "test_object_key"
mock_object = {"Body": None}
self.s3_client.get_object = mock.MagicMock(return_value=mock_object)
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
result = test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
        self.assertEqual({}, result)
def test_get_attributes_from_existing_bucket_get_object_invalid_json_fails(self):
self.object_keygen.return_value = "test_object_key"
mock_object = {"Body": MockMalformedData()}
self.s3_client.get_object = mock.MagicMock(return_value=mock_object)
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.get_attributes(request_envelope=self.request_envelope)
def test_save_attributes_to_existing_bucket(self):
self.object_keygen.return_value = "test_object_key"
json_data = json.dumps(_MOCK_DATA)
generated_key = os.path.join("test_key", "test_object_key")
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
test_s3_adapter.save_attributes(request_envelope=self.request_envelope, attributes=_MOCK_DATA)
self.object_keygen.assert_called_once_with(self.request_envelope)
self.s3_client.put_object.assert_called_once_with(Body=json_data,
Bucket=self.bucket_name,
Key=generated_key)
def test_save_attributes_to_existing_bucket_put_item_fails(self):
self.object_keygen.return_value = "test_object_key"
self.s3_client.put_object.side_effect = Exception("test exception")
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.save_attributes(request_envelope=self.request_envelope, attributes=_MOCK_DATA)
def test_save_attributes_fails_with_no_existing_bucket(self):
self.object_keygen.return_value = "test_object_key"
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
self.s3_client.put_object.side_effect = ResourceNotExistsError("resource does not exist",
self.bucket_key, self.bucket_name)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.save_attributes(request_envelope=self.request_envelope, attributes=_MOCK_DATA)
def test_delete_attributes_to_existing_bucket(self):
self.object_keygen.return_value = "test_object_key"
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
test_s3_adapter.delete_attributes(request_envelope=self.request_envelope)
self.object_keygen.assert_called_once_with(self.request_envelope)
self.s3_client.delete_object.assert_called_once_with(
Bucket=self.bucket_name,
Key=os.path.join("test_key", "test_object_key"))
def test_delete_attributes_to_existing_bucket_delete_object_fails(self):
self.object_keygen.return_value = "test_object_key"
self.s3_client.delete_object.side_effect = Exception("test exception")
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.delete_attributes(request_envelope=self.request_envelope)
def test_delete_attributes_fails_with_no_existing_bucket(self):
self.object_keygen.return_value = "test_object_key"
test_s3_adapter = S3Adapter(bucket_name=self.bucket_name, path_prefix=self.bucket_key,
s3_client=self.s3_client, object_keygen=self.object_keygen)
self.s3_client.delete_object.side_effect = ResourceNotExistsError("resource does not exist",
self.bucket_key, self.bucket_name)
with self.assertRaises(PersistenceException) as exc:
test_s3_adapter.delete_attributes(request_envelope=self.request_envelope)
def tearDown(self):
self.s3_client = None
self.object_keygen = None
self.request_envelope = None
self.bucket_name = None
self.bucket_key = None
class MockData:
def read(self):
return json.dumps(_MOCK_DATA)
class MockMalformedData:
def read(self):
        return "malformed json\n" | 50.010811 | 108 | 0.694553 | 1,125 | 9,252 | 5.299556 | 0.078222 | 0.057699 | 0.060382 | 0.04361 | 0.871687 | 0.854244 | 0.845522 | 0.824052 | 0.823381 | 0.80946 | 0 | 0.013065 | 0.230653 | 9,252 | 185 | 109 | 50.010811 | 0.824529 | 0 | 0 | 0.57971 | 0 | 0 | 0.046904 | 0 | 0 | 0 | 0 | 0 | 0.123188 | 1 | 0.123188 | false | 0 | 0.072464 | 0.014493 | 0.231884 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
63d542e9256080e36fb2824b0503eb4d8f79f4db | 62 | py | Python | src/pyqumo/pyqumo/fitting/__init__.py | larioandr/thesis-models | ecbc8c01aaeaa69034d6fe1d8577ab655968ea5f | [
"MIT"
] | 1 | 2021-01-17T15:49:03.000Z | 2021-01-17T15:49:03.000Z | src/pyqumo/pyqumo/fitting/__init__.py | larioandr/thesis-models | ecbc8c01aaeaa69034d6fe1d8577ab655968ea5f | [
"MIT"
] | null | null | null | src/pyqumo/pyqumo/fitting/__init__.py | larioandr/thesis-models | ecbc8c01aaeaa69034d6fe1d8577ab655968ea5f | [
"MIT"
] | 1 | 2021-03-07T15:31:06.000Z | 2021-03-07T15:31:06.000Z | from .acph2 import fit_acph2
from .johnson89 import fit_mern2
| 20.666667 | 32 | 0.83871 | 10 | 62 | 5 | 0.6 | 0.36 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092593 | 0.129032 | 62 | 2 | 33 | 31 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98410299480b4c6fe625f1e484f225897b8b559c | 540 | py | Python | hooks/post_gen_project.py | melMass/pyvfx-boilerplate | a4df88a6e272514205a5bf34a88f4a27daa66f9c | [
"MIT"
] | null | null | null | hooks/post_gen_project.py | melMass/pyvfx-boilerplate | a4df88a6e272514205a5bf34a88f4a27daa66f9c | [
"MIT"
] | null | null | null | hooks/post_gen_project.py | melMass/pyvfx-boilerplate | a4df88a6e272514205a5bf34a88f4a27daa66f9c | [
"MIT"
] | null | null | null | INFO = """
{{cookiecutter.prefix}}_{{cookiecutter.project_slug|lower}} successfully created.
FOR MAYA:
add this to you shell:
import {{cookiecutter.project_slug|lower}}.main as {{cookiecutter.project_slug|lower}}
{{cookiecutter.project_slug|lower}}.run_maya()
FOR NUKE:
simply move the {{cookiecutter.prefix}}_{{cookiecutter.project_slug|lower}} folder to $NUKEPATH
STANDALONE:
import {{cookiecutter.project_slug|lower}}.main as {{cookiecutter.project_slug|lower}}
{{cookiecutter.project_slug|lower}}.run_standalone()
"""
print(INFO) | 27 | 95 | 0.774074 | 67 | 540 | 6.059701 | 0.402985 | 0.374384 | 0.453202 | 0.551724 | 0.714286 | 0.714286 | 0.487685 | 0.487685 | 0.487685 | 0.487685 | 0 | 0 | 0.075926 | 540 | 20 | 96 | 27 | 0.813627 | 0 | 0 | 0.153846 | 0 | 0 | 0.948244 | 0.676525 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9893a1706bca54071f05ecbf51be5893a79985ed | 171,910 | py | Python | operators/portworx-essentials/python/pulumi_pulumi_kubernetes_crds_operators_portworx_essentials/core/v1alpha1/_inputs.py | pulumi/pulumi-kubernetes-crds | 372c4c0182f6b899af82d6edaad521aa14f22150 | [
"Apache-2.0"
] | null | null | null | operators/portworx-essentials/python/pulumi_pulumi_kubernetes_crds_operators_portworx_essentials/core/v1alpha1/_inputs.py | pulumi/pulumi-kubernetes-crds | 372c4c0182f6b899af82d6edaad521aa14f22150 | [
"Apache-2.0"
] | 2 | 2020-09-18T17:12:23.000Z | 2020-12-30T19:40:56.000Z | operators/portworx-essentials/python/pulumi_pulumi_kubernetes_crds_operators_portworx_essentials/core/v1alpha1/_inputs.py | pulumi/pulumi-kubernetes-crds | 372c4c0182f6b899af82d6edaad521aa14f22150 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by crd2pulumi. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
__all__ = [
'StorageClusterSpecArgs',
'StorageClusterSpecAutopilotArgs',
'StorageClusterSpecAutopilotEnvArgs',
'StorageClusterSpecAutopilotEnvValueFromArgs',
'StorageClusterSpecAutopilotEnvValueFromConfigMapKeyRefArgs',
'StorageClusterSpecAutopilotEnvValueFromFieldRefArgs',
'StorageClusterSpecAutopilotEnvValueFromResourceFieldRefArgs',
'StorageClusterSpecAutopilotEnvValueFromSecretKeyRefArgs',
'StorageClusterSpecAutopilotProvidersArgs',
'StorageClusterSpecCloudStorageArgs',
'StorageClusterSpecCloudStorageCapacitySpecsArgs',
'StorageClusterSpecDeleteStrategyArgs',
'StorageClusterSpecEnvArgs',
'StorageClusterSpecEnvValueFromArgs',
'StorageClusterSpecEnvValueFromConfigMapKeyRefArgs',
'StorageClusterSpecEnvValueFromFieldRefArgs',
'StorageClusterSpecEnvValueFromResourceFieldRefArgs',
'StorageClusterSpecEnvValueFromSecretKeyRefArgs',
'StorageClusterSpecKvdbArgs',
'StorageClusterSpecMonitoringArgs',
'StorageClusterSpecMonitoringPrometheusArgs',
'StorageClusterSpecNetworkArgs',
'StorageClusterSpecNodesArgs',
'StorageClusterSpecNodesEnvArgs',
'StorageClusterSpecNodesEnvValueFromArgs',
'StorageClusterSpecNodesEnvValueFromConfigMapKeyRefArgs',
'StorageClusterSpecNodesEnvValueFromFieldRefArgs',
'StorageClusterSpecNodesEnvValueFromResourceFieldRefArgs',
'StorageClusterSpecNodesEnvValueFromSecretKeyRefArgs',
'StorageClusterSpecNodesNetworkArgs',
'StorageClusterSpecNodesSelectorArgs',
'StorageClusterSpecNodesSelectorLabelSelectorArgs',
'StorageClusterSpecNodesSelectorLabelSelectorMatchExpressionsArgs',
'StorageClusterSpecNodesStorageArgs',
'StorageClusterSpecPlacementArgs',
'StorageClusterSpecPlacementNodeAffinityArgs',
'StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs',
'StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceArgs',
'StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchExpressionsArgs',
'StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchFieldsArgs',
'StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs',
'StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsArgs',
'StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchExpressionsArgs',
'StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchFieldsArgs',
'StorageClusterSpecPlacementTolerationsArgs',
'StorageClusterSpecStorageArgs',
'StorageClusterSpecStorkArgs',
'StorageClusterSpecStorkEnvArgs',
'StorageClusterSpecStorkEnvValueFromArgs',
'StorageClusterSpecStorkEnvValueFromConfigMapKeyRefArgs',
'StorageClusterSpecStorkEnvValueFromFieldRefArgs',
'StorageClusterSpecStorkEnvValueFromResourceFieldRefArgs',
'StorageClusterSpecStorkEnvValueFromSecretKeyRefArgs',
'StorageClusterSpecUpdateStrategyArgs',
'StorageClusterSpecUpdateStrategyRollingUpdateArgs',
'StorageClusterSpecUserInterfaceArgs',
'StorageClusterSpecUserInterfaceEnvArgs',
'StorageClusterSpecUserInterfaceEnvValueFromArgs',
'StorageClusterSpecUserInterfaceEnvValueFromConfigMapKeyRefArgs',
'StorageClusterSpecUserInterfaceEnvValueFromFieldRefArgs',
'StorageClusterSpecUserInterfaceEnvValueFromResourceFieldRefArgs',
'StorageClusterSpecUserInterfaceEnvValueFromSecretKeyRefArgs',
'StorageClusterStatusArgs',
'StorageClusterStatusConditionsArgs',
'StorageClusterStatusStorageArgs',
'StorageNodeSpecArgs',
'StorageNodeSpecCloudStorageArgs',
'StorageNodeSpecCloudStorageDriveConfigsArgs',
'StorageNodeStatusArgs',
'StorageNodeStatusConditionsArgs',
'StorageNodeStatusGeographyArgs',
'StorageNodeStatusNetworkArgs',
]
@pulumi.input_type
class StorageClusterSpecArgs:
def __init__(__self__, *,
autopilot: Optional[pulumi.Input['StorageClusterSpecAutopilotArgs']] = None,
cloud_storage: Optional[pulumi.Input['StorageClusterSpecCloudStorageArgs']] = None,
custom_image_registry: Optional[pulumi.Input[str]] = None,
delete_strategy: Optional[pulumi.Input['StorageClusterSpecDeleteStrategyArgs']] = None,
env: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecEnvArgs']]]] = None,
feature_gates: Optional[pulumi.Input[Mapping[str, Any]]] = None,
image: Optional[pulumi.Input[str]] = None,
image_pull_policy: Optional[pulumi.Input[str]] = None,
image_pull_secret: Optional[pulumi.Input[str]] = None,
kvdb: Optional[pulumi.Input['StorageClusterSpecKvdbArgs']] = None,
monitoring: Optional[pulumi.Input['StorageClusterSpecMonitoringArgs']] = None,
network: Optional[pulumi.Input['StorageClusterSpecNetworkArgs']] = None,
nodes: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesArgs']]]] = None,
placement: Optional[pulumi.Input['StorageClusterSpecPlacementArgs']] = None,
revision_history_limit: Optional[pulumi.Input[int]] = None,
runtime_options: Optional[pulumi.Input[Mapping[str, Any]]] = None,
secrets_provider: Optional[pulumi.Input[str]] = None,
start_port: Optional[pulumi.Input[int]] = None,
storage: Optional[pulumi.Input['StorageClusterSpecStorageArgs']] = None,
stork: Optional[pulumi.Input['StorageClusterSpecStorkArgs']] = None,
update_strategy: Optional[pulumi.Input['StorageClusterSpecUpdateStrategyArgs']] = None,
user_interface: Optional[pulumi.Input['StorageClusterSpecUserInterfaceArgs']] = None,
version: Optional[pulumi.Input[str]] = None):
"""
The desired behavior of the storage cluster.
:param pulumi.Input['StorageClusterSpecAutopilotArgs'] autopilot: Contains spec of autopilot component for storage driver.
:param pulumi.Input['StorageClusterSpecCloudStorageArgs'] cloud_storage: Details of storage used in cloud environment.
:param pulumi.Input[str] custom_image_registry: Custom container image registry server that will be used instead of index.docker.io to download Docker images. This may include the repository as well. (Example: myregistry.net:5443 or myregistry.com/myrepository)
:param pulumi.Input['StorageClusterSpecDeleteStrategyArgs'] delete_strategy: Delete strategy to uninstall and wipe the storage cluster.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecEnvArgs']]] env: List of environment variables used by the driver. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
:param pulumi.Input[Mapping[str, Any]] feature_gates: This is a map of feature names to string values.
:param pulumi.Input[str] image: Docker image of the storage driver.
:param pulumi.Input[str] image_pull_policy: Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always.
:param pulumi.Input[str] image_pull_secret: Image pull secret is a reference to secret in the same namespace as the StorageCluster. It is used for pulling all images used by the StorageCluster.
:param pulumi.Input['StorageClusterSpecKvdbArgs'] kvdb: Details of KVDB that the storage driver will use.
:param pulumi.Input['StorageClusterSpecMonitoringArgs'] monitoring: Contains monitoring configuration for the storage cluster.
:param pulumi.Input['StorageClusterSpecNetworkArgs'] network: Contains network information that is needed by the storage driver.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesArgs']]] nodes: Node level configurations that will override the configuration at cluster level. These configurations can be for individual nodes or can be grouped to override configuration of multiple nodes based on label selectors.
:param pulumi.Input['StorageClusterSpecPlacementArgs'] placement: Describes placement configuration for the storage cluster pods.
:param pulumi.Input[int] revision_history_limit: The number of old history to retain to allow rollback. This is a pointer to distinguish between an explicit zero and not specified. Defaults to 10.
        :param pulumi.Input[Mapping[str, Any]] runtime_options: This is a map of any runtime options that need to be sent to the storage driver. The value is a string.
:param pulumi.Input[str] secrets_provider: Secrets provider is the name of secret provider that driver will connect to.
:param pulumi.Input[int] start_port: Start port is the starting port in the range of ports used by the cluster.
:param pulumi.Input['StorageClusterSpecStorageArgs'] storage: Details of the storage used by the storage driver.
:param pulumi.Input['StorageClusterSpecStorkArgs'] stork: Contains STORK related spec.
:param pulumi.Input['StorageClusterSpecUpdateStrategyArgs'] update_strategy: An update strategy to replace existing StorageCluster pods with new pods.
:param pulumi.Input['StorageClusterSpecUserInterfaceArgs'] user_interface: Contains spec of a user interface for the storage driver.
:param pulumi.Input[str] version: Version of the storage driver. This field is read-only.
"""
if autopilot is not None:
pulumi.set(__self__, "autopilot", autopilot)
if cloud_storage is not None:
pulumi.set(__self__, "cloud_storage", cloud_storage)
if custom_image_registry is not None:
pulumi.set(__self__, "custom_image_registry", custom_image_registry)
if delete_strategy is not None:
pulumi.set(__self__, "delete_strategy", delete_strategy)
if env is not None:
pulumi.set(__self__, "env", env)
if feature_gates is not None:
pulumi.set(__self__, "feature_gates", feature_gates)
if image is not None:
pulumi.set(__self__, "image", image)
if image_pull_policy is not None:
pulumi.set(__self__, "image_pull_policy", image_pull_policy)
if image_pull_secret is not None:
pulumi.set(__self__, "image_pull_secret", image_pull_secret)
if kvdb is not None:
pulumi.set(__self__, "kvdb", kvdb)
if monitoring is not None:
pulumi.set(__self__, "monitoring", monitoring)
if network is not None:
pulumi.set(__self__, "network", network)
if nodes is not None:
pulumi.set(__self__, "nodes", nodes)
if placement is not None:
pulumi.set(__self__, "placement", placement)
if revision_history_limit is not None:
pulumi.set(__self__, "revision_history_limit", revision_history_limit)
if runtime_options is not None:
pulumi.set(__self__, "runtime_options", runtime_options)
if secrets_provider is not None:
pulumi.set(__self__, "secrets_provider", secrets_provider)
if start_port is not None:
pulumi.set(__self__, "start_port", start_port)
if storage is not None:
pulumi.set(__self__, "storage", storage)
if stork is not None:
pulumi.set(__self__, "stork", stork)
if update_strategy is not None:
pulumi.set(__self__, "update_strategy", update_strategy)
if user_interface is not None:
pulumi.set(__self__, "user_interface", user_interface)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def autopilot(self) -> Optional[pulumi.Input['StorageClusterSpecAutopilotArgs']]:
"""
Contains spec of autopilot component for storage driver.
"""
return pulumi.get(self, "autopilot")
@autopilot.setter
def autopilot(self, value: Optional[pulumi.Input['StorageClusterSpecAutopilotArgs']]):
pulumi.set(self, "autopilot", value)
@property
@pulumi.getter(name="cloudStorage")
def cloud_storage(self) -> Optional[pulumi.Input['StorageClusterSpecCloudStorageArgs']]:
"""
Details of storage used in cloud environment.
"""
return pulumi.get(self, "cloud_storage")
@cloud_storage.setter
def cloud_storage(self, value: Optional[pulumi.Input['StorageClusterSpecCloudStorageArgs']]):
pulumi.set(self, "cloud_storage", value)
@property
@pulumi.getter(name="customImageRegistry")
def custom_image_registry(self) -> Optional[pulumi.Input[str]]:
"""
Custom container image registry server that will be used instead of index.docker.io to download Docker images. This may include the repository as well. (Example: myregistry.net:5443 or myregistry.com/myrepository)
"""
return pulumi.get(self, "custom_image_registry")
@custom_image_registry.setter
def custom_image_registry(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "custom_image_registry", value)
@property
@pulumi.getter(name="deleteStrategy")
def delete_strategy(self) -> Optional[pulumi.Input['StorageClusterSpecDeleteStrategyArgs']]:
"""
Delete strategy to uninstall and wipe the storage cluster.
"""
return pulumi.get(self, "delete_strategy")
@delete_strategy.setter
def delete_strategy(self, value: Optional[pulumi.Input['StorageClusterSpecDeleteStrategyArgs']]):
pulumi.set(self, "delete_strategy", value)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecEnvArgs']]]]:
"""
List of environment variables used by the driver. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
"""
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter(name="featureGates")
def feature_gates(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
This is a map of feature names to string values.
"""
return pulumi.get(self, "feature_gates")
@feature_gates.setter
def feature_gates(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "feature_gates", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Docker image of the storage driver.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter(name="imagePullPolicy")
def image_pull_policy(self) -> Optional[pulumi.Input[str]]:
"""
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always.
"""
return pulumi.get(self, "image_pull_policy")
@image_pull_policy.setter
def image_pull_policy(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image_pull_policy", value)
@property
@pulumi.getter(name="imagePullSecret")
def image_pull_secret(self) -> Optional[pulumi.Input[str]]:
"""
Image pull secret is a reference to secret in the same namespace as the StorageCluster. It is used for pulling all images used by the StorageCluster.
"""
return pulumi.get(self, "image_pull_secret")
@image_pull_secret.setter
def image_pull_secret(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image_pull_secret", value)
@property
@pulumi.getter
def kvdb(self) -> Optional[pulumi.Input['StorageClusterSpecKvdbArgs']]:
"""
Details of KVDB that the storage driver will use.
"""
return pulumi.get(self, "kvdb")
@kvdb.setter
def kvdb(self, value: Optional[pulumi.Input['StorageClusterSpecKvdbArgs']]):
pulumi.set(self, "kvdb", value)
@property
@pulumi.getter
def monitoring(self) -> Optional[pulumi.Input['StorageClusterSpecMonitoringArgs']]:
"""
Contains monitoring configuration for the storage cluster.
"""
return pulumi.get(self, "monitoring")
@monitoring.setter
def monitoring(self, value: Optional[pulumi.Input['StorageClusterSpecMonitoringArgs']]):
pulumi.set(self, "monitoring", value)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input['StorageClusterSpecNetworkArgs']]:
"""
Contains network information that is needed by the storage driver.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input['StorageClusterSpecNetworkArgs']]):
pulumi.set(self, "network", value)
@property
@pulumi.getter
def nodes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesArgs']]]]:
"""
Node level configurations that will override the configuration at cluster level. These configurations can be for individual nodes or can be grouped to override configuration of multiple nodes based on label selectors.
"""
return pulumi.get(self, "nodes")
@nodes.setter
def nodes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesArgs']]]]):
pulumi.set(self, "nodes", value)
@property
@pulumi.getter
def placement(self) -> Optional[pulumi.Input['StorageClusterSpecPlacementArgs']]:
"""
Describes placement configuration for the storage cluster pods.
"""
return pulumi.get(self, "placement")
@placement.setter
def placement(self, value: Optional[pulumi.Input['StorageClusterSpecPlacementArgs']]):
pulumi.set(self, "placement", value)
@property
@pulumi.getter(name="revisionHistoryLimit")
def revision_history_limit(self) -> Optional[pulumi.Input[int]]:
"""
The number of old history to retain to allow rollback. This is a pointer to distinguish between an explicit zero and not specified. Defaults to 10.
"""
return pulumi.get(self, "revision_history_limit")
@revision_history_limit.setter
def revision_history_limit(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "revision_history_limit", value)
@property
@pulumi.getter(name="runtimeOptions")
def runtime_options(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
        This is a map of any runtime options that need to be sent to the storage driver. The value is a string.
"""
return pulumi.get(self, "runtime_options")
@runtime_options.setter
def runtime_options(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "runtime_options", value)
@property
@pulumi.getter(name="secretsProvider")
def secrets_provider(self) -> Optional[pulumi.Input[str]]:
"""
Secrets provider is the name of secret provider that driver will connect to.
"""
return pulumi.get(self, "secrets_provider")
@secrets_provider.setter
def secrets_provider(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "secrets_provider", value)
@property
@pulumi.getter(name="startPort")
def start_port(self) -> Optional[pulumi.Input[int]]:
"""
Start port is the starting port in the range of ports used by the cluster.
"""
return pulumi.get(self, "start_port")
@start_port.setter
def start_port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "start_port", value)
@property
@pulumi.getter
def storage(self) -> Optional[pulumi.Input['StorageClusterSpecStorageArgs']]:
"""
Details of the storage used by the storage driver.
"""
return pulumi.get(self, "storage")
@storage.setter
def storage(self, value: Optional[pulumi.Input['StorageClusterSpecStorageArgs']]):
pulumi.set(self, "storage", value)
@property
@pulumi.getter
def stork(self) -> Optional[pulumi.Input['StorageClusterSpecStorkArgs']]:
"""
Contains STORK related spec.
"""
return pulumi.get(self, "stork")
@stork.setter
def stork(self, value: Optional[pulumi.Input['StorageClusterSpecStorkArgs']]):
pulumi.set(self, "stork", value)
@property
@pulumi.getter(name="updateStrategy")
def update_strategy(self) -> Optional[pulumi.Input['StorageClusterSpecUpdateStrategyArgs']]:
"""
An update strategy to replace existing StorageCluster pods with new pods.
"""
return pulumi.get(self, "update_strategy")
@update_strategy.setter
def update_strategy(self, value: Optional[pulumi.Input['StorageClusterSpecUpdateStrategyArgs']]):
pulumi.set(self, "update_strategy", value)
@property
@pulumi.getter(name="userInterface")
def user_interface(self) -> Optional[pulumi.Input['StorageClusterSpecUserInterfaceArgs']]:
"""
Contains spec of a user interface for the storage driver.
"""
return pulumi.get(self, "user_interface")
@user_interface.setter
def user_interface(self, value: Optional[pulumi.Input['StorageClusterSpecUserInterfaceArgs']]):
pulumi.set(self, "user_interface", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version of the storage driver. This field is read-only.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class StorageClusterSpecAutopilotArgs:
def __init__(__self__, *,
args: Optional[pulumi.Input[Mapping[str, Any]]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
env: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotEnvArgs']]]] = None,
image: Optional[pulumi.Input[str]] = None,
lock_image: Optional[pulumi.Input[bool]] = None,
providers: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotProvidersArgs']]]] = None):
"""
Contains spec of autopilot component for storage driver.
:param pulumi.Input[Mapping[str, Any]] args: Map of arguments provided to autopilot. Example: log-level: debug
:param pulumi.Input[bool] enabled: Flag indicating whether autopilot needs to be enabled.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotEnvArgs']]] env: List of environment variables used by autopilot. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
:param pulumi.Input[str] image: Docker image of the autopilot container.
:param pulumi.Input[bool] lock_image: Flag indicating if the autopilot image needs to be locked to the given image. If the image is not locked, it can be updated by the storage driver during upgrades.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotProvidersArgs']]] providers: List of input data providers to autopilot.
"""
if args is not None:
pulumi.set(__self__, "args", args)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if env is not None:
pulumi.set(__self__, "env", env)
if image is not None:
pulumi.set(__self__, "image", image)
if lock_image is not None:
pulumi.set(__self__, "lock_image", lock_image)
if providers is not None:
pulumi.set(__self__, "providers", providers)
@property
@pulumi.getter
def args(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Map of arguments provided to autopilot. Example: log-level: debug
"""
return pulumi.get(self, "args")
@args.setter
def args(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "args", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether autopilot needs to be enabled.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotEnvArgs']]]]:
"""
List of environment variables used by autopilot. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
"""
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Docker image of the autopilot container.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter(name="lockImage")
def lock_image(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating if the autopilot image needs to be locked to the given image. If the image is not locked, it can be updated by the storage driver during upgrades.
"""
return pulumi.get(self, "lock_image")
@lock_image.setter
def lock_image(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "lock_image", value)
@property
@pulumi.getter
def providers(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotProvidersArgs']]]]:
"""
List of input data providers to autopilot.
"""
return pulumi.get(self, "providers")
@providers.setter
def providers(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecAutopilotProvidersArgs']]]]):
pulumi.set(self, "providers", value)
@pulumi.input_type
class StorageClusterSpecAutopilotEnvArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
value_from: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromArgs']] = None):
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
if value_from is not None:
pulumi.set(__self__, "value_from", value_from)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@property
@pulumi.getter(name="valueFrom")
def value_from(self) -> Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromArgs']]:
return pulumi.get(self, "value_from")
@value_from.setter
def value_from(self, value: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromArgs']]):
pulumi.set(self, "value_from", value)
@pulumi.input_type
class StorageClusterSpecAutopilotEnvValueFromArgs:
def __init__(__self__, *,
config_map_key_ref: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromConfigMapKeyRefArgs']] = None,
field_ref: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromFieldRefArgs']] = None,
resource_field_ref: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromResourceFieldRefArgs']] = None,
secret_key_ref: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromSecretKeyRefArgs']] = None):
if config_map_key_ref is not None:
pulumi.set(__self__, "config_map_key_ref", config_map_key_ref)
if field_ref is not None:
pulumi.set(__self__, "field_ref", field_ref)
if resource_field_ref is not None:
pulumi.set(__self__, "resource_field_ref", resource_field_ref)
if secret_key_ref is not None:
pulumi.set(__self__, "secret_key_ref", secret_key_ref)
@property
@pulumi.getter(name="configMapKeyRef")
def config_map_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromConfigMapKeyRefArgs']]:
return pulumi.get(self, "config_map_key_ref")
@config_map_key_ref.setter
def config_map_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromConfigMapKeyRefArgs']]):
pulumi.set(self, "config_map_key_ref", value)
@property
@pulumi.getter(name="fieldRef")
def field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromFieldRefArgs']]:
return pulumi.get(self, "field_ref")
@field_ref.setter
def field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromFieldRefArgs']]):
pulumi.set(self, "field_ref", value)
@property
@pulumi.getter(name="resourceFieldRef")
def resource_field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromResourceFieldRefArgs']]:
return pulumi.get(self, "resource_field_ref")
@resource_field_ref.setter
def resource_field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromResourceFieldRefArgs']]):
pulumi.set(self, "resource_field_ref", value)
@property
@pulumi.getter(name="secretKeyRef")
def secret_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromSecretKeyRefArgs']]:
return pulumi.get(self, "secret_key_ref")
@secret_key_ref.setter
def secret_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecAutopilotEnvValueFromSecretKeyRefArgs']]):
pulumi.set(self, "secret_key_ref", value)
@pulumi.input_type
class StorageClusterSpecAutopilotEnvValueFromConfigMapKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecAutopilotEnvValueFromFieldRefArgs:
def __init__(__self__, *,
api_version: Optional[pulumi.Input[str]] = None,
field_path: Optional[pulumi.Input[str]] = None):
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "field_path")
@field_path.setter
def field_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "field_path", value)
@pulumi.input_type
class StorageClusterSpecAutopilotEnvValueFromResourceFieldRefArgs:
def __init__(__self__, *,
container_name: Optional[pulumi.Input[str]] = None,
divisor: Optional[pulumi.Input[str]] = None,
resource: Optional[pulumi.Input[str]] = None):
if container_name is not None:
pulumi.set(__self__, "container_name", container_name)
if divisor is not None:
pulumi.set(__self__, "divisor", divisor)
if resource is not None:
pulumi.set(__self__, "resource", resource)
@property
@pulumi.getter(name="containerName")
def container_name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "container_name")
@container_name.setter
def container_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container_name", value)
@property
@pulumi.getter
def divisor(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "divisor")
@divisor.setter
def divisor(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "divisor", value)
@property
@pulumi.getter
def resource(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "resource")
@resource.setter
def resource(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource", value)
@pulumi.input_type
class StorageClusterSpecAutopilotEnvValueFromSecretKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecAutopilotProvidersArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
params: Optional[pulumi.Input[Mapping[str, Any]]] = None,
type: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] name: Unique name of the data provider.
:param pulumi.Input[Mapping[str, Any]] params: Map of key-value params for the provider.
:param pulumi.Input[str] type: Type of the data provider, for instance: prometheus.
"""
if name is not None:
pulumi.set(__self__, "name", name)
if params is not None:
pulumi.set(__self__, "params", params)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Unique name of the data provider.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def params(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Map of key-value params for the provider.
"""
return pulumi.get(self, "params")
@params.setter
def params(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "params", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Type of the data provider, for instance: prometheus.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
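# --- Illustrative sketch (not part of the generated SDK) ---
# Assuming this provider SDK is installed and these classes are imported, an
# autopilot section wired to a Prometheus data provider could be sketched as:
#
#     autopilot = StorageClusterSpecAutopilotArgs(
#         enabled=True,
#         providers=[StorageClusterSpecAutopilotProvidersArgs(
#             name="default",                            # hypothetical provider name
#             type="prometheus",
#             params={"url": "http://prometheus:9090"},  # hypothetical endpoint
#         )],
#     )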
@pulumi.input_type
class StorageClusterSpecCloudStorageArgs:
def __init__(__self__, *,
capacity_specs: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecCloudStorageCapacitySpecsArgs']]]] = None,
device_specs: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
journal_device_spec: Optional[pulumi.Input[str]] = None,
kvdb_device_spec: Optional[pulumi.Input[str]] = None,
max_storage_nodes: Optional[pulumi.Input[int]] = None,
max_storage_nodes_per_zone: Optional[pulumi.Input[int]] = None,
system_metadata_device_spec: Optional[pulumi.Input[str]] = None):
"""
Details of storage used in cloud environment.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecCloudStorageCapacitySpecsArgs']]] capacity_specs: List of cluster-wide storage types and their capacities. A single capacity spec identifies a storage pool with a set of minimum requested IOPS and size. Based on the cloud provider, the total storage capacity will be divided among the nodes, and the nodes bearing storage will be distributed uniformly across all the zones.
:param pulumi.Input[Sequence[pulumi.Input[str]]] device_specs: List of storage device specs. A cloud storage device will be created for every spec in the list. The specs will be applied to all nodes in the cluster up to spec.cloudStorage.maxStorageNodes or spec.cloudStorage.maxStorageNodesPerZone. This will be ignored if spec.cloudStorage.capacitySpecs is present.
:param pulumi.Input[str] journal_device_spec: Device spec for the journal device.
:param pulumi.Input[str] kvdb_device_spec: Device spec for internal KVDB device.
:param pulumi.Input[int] max_storage_nodes: Maximum nodes that will have storage in the cluster.
:param pulumi.Input[int] max_storage_nodes_per_zone: Maximum nodes in every zone that will have storage in the cluster.
:param pulumi.Input[str] system_metadata_device_spec: Device spec for the metadata device. This device will be used to store system metadata by the driver.
"""
if capacity_specs is not None:
pulumi.set(__self__, "capacity_specs", capacity_specs)
if device_specs is not None:
pulumi.set(__self__, "device_specs", device_specs)
if journal_device_spec is not None:
pulumi.set(__self__, "journal_device_spec", journal_device_spec)
if kvdb_device_spec is not None:
pulumi.set(__self__, "kvdb_device_spec", kvdb_device_spec)
if max_storage_nodes is not None:
pulumi.set(__self__, "max_storage_nodes", max_storage_nodes)
if max_storage_nodes_per_zone is not None:
pulumi.set(__self__, "max_storage_nodes_per_zone", max_storage_nodes_per_zone)
if system_metadata_device_spec is not None:
pulumi.set(__self__, "system_metadata_device_spec", system_metadata_device_spec)
@property
@pulumi.getter(name="capacitySpecs")
def capacity_specs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecCloudStorageCapacitySpecsArgs']]]]:
"""
List of cluster-wide storage types and their capacities. A single capacity spec identifies a storage pool with a set of minimum requested IOPS and size. Based on the cloud provider, the total storage capacity will be divided among the nodes, and the nodes bearing storage will be distributed uniformly across all the zones.
"""
return pulumi.get(self, "capacity_specs")
@capacity_specs.setter
def capacity_specs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecCloudStorageCapacitySpecsArgs']]]]):
pulumi.set(self, "capacity_specs", value)
@property
@pulumi.getter(name="deviceSpecs")
def device_specs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of storage device specs. A cloud storage device will be created for every spec in the list. The specs will be applied to all nodes in the cluster up to spec.cloudStorage.maxStorageNodes or spec.cloudStorage.maxStorageNodesPerZone. This will be ignored if spec.cloudStorage.capacitySpecs is present.
"""
return pulumi.get(self, "device_specs")
@device_specs.setter
def device_specs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "device_specs", value)
@property
@pulumi.getter(name="journalDeviceSpec")
def journal_device_spec(self) -> Optional[pulumi.Input[str]]:
"""
Device spec for the journal device.
"""
return pulumi.get(self, "journal_device_spec")
@journal_device_spec.setter
def journal_device_spec(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "journal_device_spec", value)
@property
@pulumi.getter(name="kvdbDeviceSpec")
def kvdb_device_spec(self) -> Optional[pulumi.Input[str]]:
"""
Device spec for internal KVDB device.
"""
return pulumi.get(self, "kvdb_device_spec")
@kvdb_device_spec.setter
def kvdb_device_spec(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kvdb_device_spec", value)
@property
@pulumi.getter(name="maxStorageNodes")
def max_storage_nodes(self) -> Optional[pulumi.Input[int]]:
"""
Maximum nodes that will have storage in the cluster.
"""
return pulumi.get(self, "max_storage_nodes")
@max_storage_nodes.setter
def max_storage_nodes(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_storage_nodes", value)
@property
@pulumi.getter(name="maxStorageNodesPerZone")
def max_storage_nodes_per_zone(self) -> Optional[pulumi.Input[int]]:
"""
Maximum nodes in every zone that will have storage in the cluster.
"""
return pulumi.get(self, "max_storage_nodes_per_zone")
@max_storage_nodes_per_zone.setter
def max_storage_nodes_per_zone(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_storage_nodes_per_zone", value)
@property
@pulumi.getter(name="systemMetadataDeviceSpec")
def system_metadata_device_spec(self) -> Optional[pulumi.Input[str]]:
"""
Device spec for the metadata device. This device will be used to store system metadata by the driver.
"""
return pulumi.get(self, "system_metadata_device_spec")
@system_metadata_device_spec.setter
def system_metadata_device_spec(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "system_metadata_device_spec", value)
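# --- Illustrative sketch (not part of the generated SDK) ---
# Assuming this provider SDK is installed, a cloud storage section that limits
# storage nodes per zone and uses explicit device specs might look like:
#
#     cloud_storage = StorageClusterSpecCloudStorageArgs(
#         device_specs=["type=gp2,size=150"],  # hypothetical provider-specific spec string
#         kvdb_device_spec="type=gp2,size=32",
#         max_storage_nodes_per_zone=3,
#     )
#
# Per the docstrings above, device_specs is ignored when capacity_specs is present.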
@pulumi.input_type
class StorageClusterSpecCloudStorageCapacitySpecsArgs:
def __init__(__self__, *,
max_capacity_in_gi_b: Optional[pulumi.Input[int]] = None,
min_capacity_in_gi_b: Optional[pulumi.Input[int]] = None,
min_iops: Optional[pulumi.Input[int]] = None,
options: Optional[pulumi.Input[Mapping[str, Any]]] = None):
"""
:param pulumi.Input[int] max_capacity_in_gi_b: Maximum capacity for this storage cluster. The total capacity of devices created by this capacity spec should not be greater than this number for the entire cluster.
:param pulumi.Input[int] min_capacity_in_gi_b: Minimum capacity for this storage cluster. The total capacity of devices created by this capacity spec should not be less than this number for the entire cluster.
:param pulumi.Input[int] min_iops: Minimum IOPS expected from the cloud drive.
:param pulumi.Input[Mapping[str, Any]] options: Additional options required to provision the drive in the cloud.
"""
if max_capacity_in_gi_b is not None:
pulumi.set(__self__, "max_capacity_in_gi_b", max_capacity_in_gi_b)
if min_capacity_in_gi_b is not None:
pulumi.set(__self__, "min_capacity_in_gi_b", min_capacity_in_gi_b)
if min_iops is not None:
pulumi.set(__self__, "min_iops", min_iops)
if options is not None:
pulumi.set(__self__, "options", options)
@property
@pulumi.getter(name="maxCapacityInGiB")
def max_capacity_in_gi_b(self) -> Optional[pulumi.Input[int]]:
"""
Maximum capacity for this storage cluster. The total capacity of devices created by this capacity spec should not be greater than this number for the entire cluster.
"""
return pulumi.get(self, "max_capacity_in_gi_b")
@max_capacity_in_gi_b.setter
def max_capacity_in_gi_b(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_capacity_in_gi_b", value)
@property
@pulumi.getter(name="minCapacityInGiB")
def min_capacity_in_gi_b(self) -> Optional[pulumi.Input[int]]:
"""
Minimum capacity for this storage cluster. The total capacity of devices created by this capacity spec should not be less than this number for the entire cluster.
"""
return pulumi.get(self, "min_capacity_in_gi_b")
@min_capacity_in_gi_b.setter
def min_capacity_in_gi_b(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_capacity_in_gi_b", value)
@property
@pulumi.getter(name="minIOPS")
def min_iops(self) -> Optional[pulumi.Input[int]]:
"""
Minimum IOPS expected from the cloud drive.
"""
return pulumi.get(self, "min_iops")
@min_iops.setter
def min_iops(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_iops", value)
@property
@pulumi.getter
def options(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Additional options required to provision the drive in the cloud.
"""
return pulumi.get(self, "options")
@options.setter
def options(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "options", value)
@pulumi.input_type
class StorageClusterSpecDeleteStrategyArgs:
def __init__(__self__, *,
type: Optional[pulumi.Input[str]] = None):
"""
Delete strategy to uninstall and wipe the storage cluster.
:param pulumi.Input[str] type: Type of storage cluster delete. Can be Uninstall or UninstallAndWipe. There is no default delete strategy. When no delete strategy is specified, only the objects managed by the StorageCluster controller and owned by the StorageCluster object are deleted, and the storage driver is left in a state where it is not managed by any object. The Uninstall strategy ensures that the cluster is completely uninstalled, even from the storage driver's perspective. The UninstallAndWipe strategy additionally wipes the storage devices and metadata for reuse, which may result in data loss.
"""
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Type of storage cluster delete. Can be Uninstall or UninstallAndWipe. There is no default delete strategy. When no delete strategy is specified, only the objects managed by the StorageCluster controller and owned by the StorageCluster object are deleted, and the storage driver is left in a state where it is not managed by any object. The Uninstall strategy ensures that the cluster is completely uninstalled, even from the storage driver's perspective. The UninstallAndWipe strategy additionally wipes the storage devices and metadata for reuse, which may result in data loss.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
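# --- Illustrative sketch (not part of the generated SDK) ---
# Assuming this provider SDK is installed, a delete strategy that also wipes
# the storage devices and metadata on uninstall could be configured as:
#
#     delete_strategy = StorageClusterSpecDeleteStrategyArgs(
#         type="UninstallAndWipe",  # "Uninstall" leaves devices/metadata intact
#     )
#
# As the docstring above notes, UninstallAndWipe may result in data loss.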
@pulumi.input_type
class StorageClusterSpecEnvArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
value_from: Optional[pulumi.Input['StorageClusterSpecEnvValueFromArgs']] = None):
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
if value_from is not None:
pulumi.set(__self__, "value_from", value_from)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@property
@pulumi.getter(name="valueFrom")
def value_from(self) -> Optional[pulumi.Input['StorageClusterSpecEnvValueFromArgs']]:
return pulumi.get(self, "value_from")
@value_from.setter
def value_from(self, value: Optional[pulumi.Input['StorageClusterSpecEnvValueFromArgs']]):
pulumi.set(self, "value_from", value)
@pulumi.input_type
class StorageClusterSpecEnvValueFromArgs:
def __init__(__self__, *,
config_map_key_ref: Optional[pulumi.Input['StorageClusterSpecEnvValueFromConfigMapKeyRefArgs']] = None,
field_ref: Optional[pulumi.Input['StorageClusterSpecEnvValueFromFieldRefArgs']] = None,
resource_field_ref: Optional[pulumi.Input['StorageClusterSpecEnvValueFromResourceFieldRefArgs']] = None,
secret_key_ref: Optional[pulumi.Input['StorageClusterSpecEnvValueFromSecretKeyRefArgs']] = None):
if config_map_key_ref is not None:
pulumi.set(__self__, "config_map_key_ref", config_map_key_ref)
if field_ref is not None:
pulumi.set(__self__, "field_ref", field_ref)
if resource_field_ref is not None:
pulumi.set(__self__, "resource_field_ref", resource_field_ref)
if secret_key_ref is not None:
pulumi.set(__self__, "secret_key_ref", secret_key_ref)
@property
@pulumi.getter(name="configMapKeyRef")
def config_map_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecEnvValueFromConfigMapKeyRefArgs']]:
return pulumi.get(self, "config_map_key_ref")
@config_map_key_ref.setter
def config_map_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecEnvValueFromConfigMapKeyRefArgs']]):
pulumi.set(self, "config_map_key_ref", value)
@property
@pulumi.getter(name="fieldRef")
def field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecEnvValueFromFieldRefArgs']]:
return pulumi.get(self, "field_ref")
@field_ref.setter
def field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecEnvValueFromFieldRefArgs']]):
pulumi.set(self, "field_ref", value)
@property
@pulumi.getter(name="resourceFieldRef")
def resource_field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecEnvValueFromResourceFieldRefArgs']]:
return pulumi.get(self, "resource_field_ref")
@resource_field_ref.setter
def resource_field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecEnvValueFromResourceFieldRefArgs']]):
pulumi.set(self, "resource_field_ref", value)
@property
@pulumi.getter(name="secretKeyRef")
def secret_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecEnvValueFromSecretKeyRefArgs']]:
return pulumi.get(self, "secret_key_ref")
@secret_key_ref.setter
def secret_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecEnvValueFromSecretKeyRefArgs']]):
pulumi.set(self, "secret_key_ref", value)
@pulumi.input_type
class StorageClusterSpecEnvValueFromConfigMapKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecEnvValueFromFieldRefArgs:
def __init__(__self__, *,
api_version: Optional[pulumi.Input[str]] = None,
field_path: Optional[pulumi.Input[str]] = None):
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "field_path")
@field_path.setter
def field_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "field_path", value)
@pulumi.input_type
class StorageClusterSpecEnvValueFromResourceFieldRefArgs:
def __init__(__self__, *,
container_name: Optional[pulumi.Input[str]] = None,
divisor: Optional[pulumi.Input[str]] = None,
resource: Optional[pulumi.Input[str]] = None):
if container_name is not None:
pulumi.set(__self__, "container_name", container_name)
if divisor is not None:
pulumi.set(__self__, "divisor", divisor)
if resource is not None:
pulumi.set(__self__, "resource", resource)
@property
@pulumi.getter(name="containerName")
def container_name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "container_name")
@container_name.setter
def container_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container_name", value)
@property
@pulumi.getter
def divisor(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "divisor")
@divisor.setter
def divisor(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "divisor", value)
@property
@pulumi.getter
def resource(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "resource")
@resource.setter
def resource(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource", value)
@pulumi.input_type
class StorageClusterSpecEnvValueFromSecretKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecKvdbArgs:
def __init__(__self__, *,
auth_secret: Optional[pulumi.Input[str]] = None,
endpoints: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
internal: Optional[pulumi.Input[bool]] = None):
"""
Details of KVDB that the storage driver will use.
:param pulumi.Input[str] auth_secret: Authentication secret is the name of the Kubernetes secret containing the information needed to authenticate with the external KVDB. It may contain a username/password for basic auth, certificate information or an ACL token.
:param pulumi.Input[Sequence[pulumi.Input[str]]] endpoints: If using external KVDB, this is the list of KVDB endpoints.
:param pulumi.Input[bool] internal: Flag indicating whether to use internal KVDB or an external KVDB.
"""
if auth_secret is not None:
pulumi.set(__self__, "auth_secret", auth_secret)
if endpoints is not None:
pulumi.set(__self__, "endpoints", endpoints)
if internal is not None:
pulumi.set(__self__, "internal", internal)
@property
@pulumi.getter(name="authSecret")
def auth_secret(self) -> Optional[pulumi.Input[str]]:
"""
Authentication secret is the name of the Kubernetes secret containing the information needed to authenticate with the external KVDB. It may contain a username/password for basic auth, certificate information or an ACL token.
"""
return pulumi.get(self, "auth_secret")
@auth_secret.setter
def auth_secret(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "auth_secret", value)
@property
@pulumi.getter
def endpoints(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
If using external KVDB, this is the list of KVDB endpoints.
"""
return pulumi.get(self, "endpoints")
@endpoints.setter
def endpoints(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "endpoints", value)
@property
@pulumi.getter
def internal(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether to use internal KVDB or an external KVDB.
"""
return pulumi.get(self, "internal")
@internal.setter
def internal(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "internal", value)
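A minimal sketch, as plain data, of the two kvdb configurations these args describe. Field names follow the camelCase CRD schema shown in the getters above; the endpoint URLs and secret name are hypothetical.

```python
# Internal KVDB: the storage driver manages its own key-value store.
internal_kvdb = {"internal": True}

# External KVDB: list the endpoints and, if needed, an auth secret.
external_kvdb = {
    "internal": False,
    # Only consulted when an external KVDB is used.
    "endpoints": [
        "etcd:http://etcd-1.example.com:2379",
        "etcd:http://etcd-2.example.com:2379",
    ],
    # Kubernetes secret holding a username/password, certificates or an ACL token.
    "authSecret": "px-kvdb-auth",
}
```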
@pulumi.input_type
class StorageClusterSpecMonitoringArgs:
def __init__(__self__, *,
enable_metrics: Optional[pulumi.Input[bool]] = None,
prometheus: Optional[pulumi.Input['StorageClusterSpecMonitoringPrometheusArgs']] = None):
"""
Contains monitoring configuration for the storage cluster.
:param pulumi.Input[bool] enable_metrics: If this flag is enabled, the storage cluster metrics are exposed to external monitoring solutions like Prometheus. DEPRECATED - use prometheus.exportMetrics instead
:param pulumi.Input['StorageClusterSpecMonitoringPrometheusArgs'] prometheus: Contains configuration of Prometheus to monitor the storage cluster.
"""
if enable_metrics is not None:
pulumi.set(__self__, "enable_metrics", enable_metrics)
if prometheus is not None:
pulumi.set(__self__, "prometheus", prometheus)
@property
@pulumi.getter(name="enableMetrics")
def enable_metrics(self) -> Optional[pulumi.Input[bool]]:
"""
If this flag is enabled, the storage cluster metrics are exposed to external monitoring solutions like Prometheus. DEPRECATED - use prometheus.exportMetrics instead
"""
return pulumi.get(self, "enable_metrics")
@enable_metrics.setter
def enable_metrics(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_metrics", value)
@property
@pulumi.getter
def prometheus(self) -> Optional[pulumi.Input['StorageClusterSpecMonitoringPrometheusArgs']]:
"""
Contains configuration of Prometheus to monitor the storage cluster.
"""
return pulumi.get(self, "prometheus")
@prometheus.setter
def prometheus(self, value: Optional[pulumi.Input['StorageClusterSpecMonitoringPrometheusArgs']]):
pulumi.set(self, "prometheus", value)
@pulumi.input_type
class StorageClusterSpecMonitoringPrometheusArgs:
def __init__(__self__, *,
enabled: Optional[pulumi.Input[bool]] = None,
export_metrics: Optional[pulumi.Input[bool]] = None,
remote_write_endpoint: Optional[pulumi.Input[str]] = None):
"""
Contains configuration of Prometheus to monitor the storage cluster.
:param pulumi.Input[bool] enabled: Flag indicating whether the Prometheus stack should be enabled and deployed by the Storage operator.
:param pulumi.Input[bool] export_metrics: If this flag is enabled, the storage cluster metrics are exposed to Prometheus.
:param pulumi.Input[str] remote_write_endpoint: Specifies the remote write endpoint for Prometheus.
"""
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if export_metrics is not None:
pulumi.set(__self__, "export_metrics", export_metrics)
if remote_write_endpoint is not None:
pulumi.set(__self__, "remote_write_endpoint", remote_write_endpoint)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether the Prometheus stack should be enabled and deployed by the Storage operator.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter(name="exportMetrics")
def export_metrics(self) -> Optional[pulumi.Input[bool]]:
"""
If this flag is enabled, the storage cluster metrics are exposed to Prometheus.
"""
return pulumi.get(self, "export_metrics")
@export_metrics.setter
def export_metrics(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "export_metrics", value)
@property
@pulumi.getter(name="remoteWriteEndpoint")
def remote_write_endpoint(self) -> Optional[pulumi.Input[str]]:
"""
Specifies the remote write endpoint for Prometheus.
"""
return pulumi.get(self, "remote_write_endpoint")
@remote_write_endpoint.setter
def remote_write_endpoint(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "remote_write_endpoint", value)
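A sketch, as plain data, of the monitoring section these args map to. It uses the non-deprecated prometheus.exportMetrics flag rather than the cluster-level enableMetrics; the remote-write URL is hypothetical.

```python
# spec.monitoring for a storage cluster, in camelCase CRD form.
monitoring = {
    "prometheus": {
        # Deploy the Prometheus stack via the Storage operator.
        "enabled": True,
        # Expose storage cluster metrics to Prometheus
        # (preferred over the deprecated spec.monitoring.enableMetrics).
        "exportMetrics": True,
        # Optional remote-write target for Prometheus.
        "remoteWriteEndpoint": "http://thanos-receive.example.com:19291/api/v1/receive",
    }
}
```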
@pulumi.input_type
class StorageClusterSpecNetworkArgs:
def __init__(__self__, *,
data_interface: Optional[pulumi.Input[str]] = None,
mgmt_interface: Optional[pulumi.Input[str]] = None):
"""
Contains network information that is needed by the storage driver.
:param pulumi.Input[str] data_interface: Name of the network interface used by the storage driver for data traffic.
:param pulumi.Input[str] mgmt_interface: Name of the network interface used by the storage driver for management traffic.
"""
if data_interface is not None:
pulumi.set(__self__, "data_interface", data_interface)
if mgmt_interface is not None:
pulumi.set(__self__, "mgmt_interface", mgmt_interface)
@property
@pulumi.getter(name="dataInterface")
def data_interface(self) -> Optional[pulumi.Input[str]]:
"""
Name of the network interface used by the storage driver for data traffic.
"""
return pulumi.get(self, "data_interface")
@data_interface.setter
def data_interface(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_interface", value)
@property
@pulumi.getter(name="mgmtInterface")
def mgmt_interface(self) -> Optional[pulumi.Input[str]]:
"""
Name of the network interface used by the storage driver for management traffic.
"""
return pulumi.get(self, "mgmt_interface")
@mgmt_interface.setter
def mgmt_interface(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mgmt_interface", value)
@pulumi.input_type
class StorageClusterSpecNodesArgs:
def __init__(__self__, *,
env: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesEnvArgs']]]] = None,
network: Optional[pulumi.Input['StorageClusterSpecNodesNetworkArgs']] = None,
runtime_options: Optional[pulumi.Input[Mapping[str, Any]]] = None,
selector: Optional[pulumi.Input['StorageClusterSpecNodesSelectorArgs']] = None,
storage: Optional[pulumi.Input['StorageClusterSpecNodesStorageArgs']] = None):
"""
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesEnvArgs']]] env: List of environment variables used by the driver. This is an array of Kubernetes EnvVar where the value can be given directly or from a source such as a field, a config map or a secret. Environment variables specified here at the node level are merged with the ones present in the cluster configuration and sent to the nodes. If there is a duplicate, the node-level value takes precedence.
:param pulumi.Input['StorageClusterSpecNodesNetworkArgs'] network: Contains network information that is needed by the storage driver.
:param pulumi.Input[Mapping[str, Any]] runtime_options: This is a map of runtime options that need to be sent to the storage driver. The values are strings. If runtime options are present here at the node level, they override the ones from the cluster configuration.
:param pulumi.Input['StorageClusterSpecNodesSelectorArgs'] selector: Configuration in this node block is applied to nodes based on this selector. Use either nodeName or labelSelector, not both. If nodeName is used, labelSelector will be ignored.
:param pulumi.Input['StorageClusterSpecNodesStorageArgs'] storage: Details of the storage used by the storage driver.
"""
if env is not None:
pulumi.set(__self__, "env", env)
if network is not None:
pulumi.set(__self__, "network", network)
if runtime_options is not None:
pulumi.set(__self__, "runtime_options", runtime_options)
if selector is not None:
pulumi.set(__self__, "selector", selector)
if storage is not None:
pulumi.set(__self__, "storage", storage)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesEnvArgs']]]]:
"""
List of environment variables used by the driver. This is an array of Kubernetes EnvVar where the value can be given directly or from a source such as a field, a config map or a secret. Environment variables specified here at the node level are merged with the ones present in the cluster configuration and sent to the nodes. If there is a duplicate, the node-level value takes precedence.
"""
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input['StorageClusterSpecNodesNetworkArgs']]:
"""
Contains network information that is needed by the storage driver.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input['StorageClusterSpecNodesNetworkArgs']]):
pulumi.set(self, "network", value)
@property
@pulumi.getter(name="runtimeOptions")
def runtime_options(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
This is a map of runtime options that need to be sent to the storage driver. The values are strings. If runtime options are present here at the node level, they override the ones from the cluster configuration.
"""
return pulumi.get(self, "runtime_options")
@runtime_options.setter
def runtime_options(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "runtime_options", value)
@property
@pulumi.getter
def selector(self) -> Optional[pulumi.Input['StorageClusterSpecNodesSelectorArgs']]:
"""
Configuration in this node block is applied to nodes based on this selector. Use either nodeName or labelSelector, not both. If nodeName is used, labelSelector will be ignored.
"""
return pulumi.get(self, "selector")
@selector.setter
def selector(self, value: Optional[pulumi.Input['StorageClusterSpecNodesSelectorArgs']]):
pulumi.set(self, "selector", value)
@property
@pulumi.getter
def storage(self) -> Optional[pulumi.Input['StorageClusterSpecNodesStorageArgs']]:
"""
Details of the storage used by the storage driver.
"""
return pulumi.get(self, "storage")
@storage.setter
def storage(self, value: Optional[pulumi.Input['StorageClusterSpecNodesStorageArgs']]):
pulumi.set(self, "storage", value)
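A small illustrative sketch (not operator code) of the override semantics the docstrings above describe: on duplicate keys, node-level runtime options win over the cluster-level ones. The option names and values here are hypothetical.

```python
# Cluster-wide runtime options (all values are strings, per the CRD schema).
cluster_runtime_options = {"num_io_threads": "10", "rt_opts_conf_high": "1"}

# Node-level overrides for one node block.
node_runtime_options = {"num_io_threads": "16"}

# Dict unpacking: later entries win, so node-level values take precedence
# over cluster-level values on duplicate keys.
effective_options = {**cluster_runtime_options, **node_runtime_options}
```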
@pulumi.input_type
class StorageClusterSpecNodesEnvArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
value_from: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromArgs']] = None):
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
if value_from is not None:
pulumi.set(__self__, "value_from", value_from)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@property
@pulumi.getter(name="valueFrom")
def value_from(self) -> Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromArgs']]:
return pulumi.get(self, "value_from")
@value_from.setter
def value_from(self, value: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromArgs']]):
pulumi.set(self, "value_from", value)
@pulumi.input_type
class StorageClusterSpecNodesEnvValueFromArgs:
def __init__(__self__, *,
config_map_key_ref: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromConfigMapKeyRefArgs']] = None,
field_ref: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromFieldRefArgs']] = None,
resource_field_ref: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromResourceFieldRefArgs']] = None,
secret_key_ref: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromSecretKeyRefArgs']] = None):
if config_map_key_ref is not None:
pulumi.set(__self__, "config_map_key_ref", config_map_key_ref)
if field_ref is not None:
pulumi.set(__self__, "field_ref", field_ref)
if resource_field_ref is not None:
pulumi.set(__self__, "resource_field_ref", resource_field_ref)
if secret_key_ref is not None:
pulumi.set(__self__, "secret_key_ref", secret_key_ref)
@property
@pulumi.getter(name="configMapKeyRef")
def config_map_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromConfigMapKeyRefArgs']]:
return pulumi.get(self, "config_map_key_ref")
@config_map_key_ref.setter
def config_map_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromConfigMapKeyRefArgs']]):
pulumi.set(self, "config_map_key_ref", value)
@property
@pulumi.getter(name="fieldRef")
def field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromFieldRefArgs']]:
return pulumi.get(self, "field_ref")
@field_ref.setter
def field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromFieldRefArgs']]):
pulumi.set(self, "field_ref", value)
@property
@pulumi.getter(name="resourceFieldRef")
def resource_field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromResourceFieldRefArgs']]:
return pulumi.get(self, "resource_field_ref")
@resource_field_ref.setter
def resource_field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromResourceFieldRefArgs']]):
pulumi.set(self, "resource_field_ref", value)
@property
@pulumi.getter(name="secretKeyRef")
def secret_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromSecretKeyRefArgs']]:
return pulumi.get(self, "secret_key_ref")
@secret_key_ref.setter
def secret_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecNodesEnvValueFromSecretKeyRefArgs']]):
pulumi.set(self, "secret_key_ref", value)
@pulumi.input_type
class StorageClusterSpecNodesEnvValueFromConfigMapKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecNodesEnvValueFromFieldRefArgs:
def __init__(__self__, *,
api_version: Optional[pulumi.Input[str]] = None,
field_path: Optional[pulumi.Input[str]] = None):
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "field_path")
@field_path.setter
def field_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "field_path", value)
@pulumi.input_type
class StorageClusterSpecNodesEnvValueFromResourceFieldRefArgs:
def __init__(__self__, *,
container_name: Optional[pulumi.Input[str]] = None,
divisor: Optional[pulumi.Input[str]] = None,
resource: Optional[pulumi.Input[str]] = None):
if container_name is not None:
pulumi.set(__self__, "container_name", container_name)
if divisor is not None:
pulumi.set(__self__, "divisor", divisor)
if resource is not None:
pulumi.set(__self__, "resource", resource)
@property
@pulumi.getter(name="containerName")
def container_name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "container_name")
@container_name.setter
def container_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container_name", value)
@property
@pulumi.getter
def divisor(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "divisor")
@divisor.setter
def divisor(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "divisor", value)
@property
@pulumi.getter
def resource(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "resource")
@resource.setter
def resource(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource", value)
@pulumi.input_type
class StorageClusterSpecNodesEnvValueFromSecretKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecNodesNetworkArgs:
def __init__(__self__, *,
data_interface: Optional[pulumi.Input[str]] = None,
mgmt_interface: Optional[pulumi.Input[str]] = None):
"""
Contains network information that is needed by the storage driver.
:param pulumi.Input[str] data_interface: Name of the network interface used by the storage driver for data traffic.
:param pulumi.Input[str] mgmt_interface: Name of the network interface used by the storage driver for management traffic.
"""
if data_interface is not None:
pulumi.set(__self__, "data_interface", data_interface)
if mgmt_interface is not None:
pulumi.set(__self__, "mgmt_interface", mgmt_interface)
@property
@pulumi.getter(name="dataInterface")
def data_interface(self) -> Optional[pulumi.Input[str]]:
"""
Name of the network interface used by the storage driver for data traffic.
"""
return pulumi.get(self, "data_interface")
@data_interface.setter
def data_interface(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_interface", value)
@property
@pulumi.getter(name="mgmtInterface")
def mgmt_interface(self) -> Optional[pulumi.Input[str]]:
"""
Name of the network interface used by the storage driver for management traffic.
"""
return pulumi.get(self, "mgmt_interface")
@mgmt_interface.setter
def mgmt_interface(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mgmt_interface", value)
@pulumi.input_type
class StorageClusterSpecNodesSelectorArgs:
def __init__(__self__, *,
label_selector: Optional[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorArgs']] = None,
node_name: Optional[pulumi.Input[str]] = None):
"""
Configuration in this node block is applied to nodes based on this selector. Use either nodeName or labelSelector, not both. If nodeName is used, labelSelector will be ignored.
:param pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorArgs'] label_selector: It is a label query over all the nodes. The result of matchLabels and matchExpressions is ANDed. An empty label selector matches all nodes. A null label selector matches no objects.
:param pulumi.Input[str] node_name: Name of the Kubernetes node that is to be selected. If present, the labelSelector is ignored, even if a node with the given name is absent and the labelSelector matches another node.
"""
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if node_name is not None:
pulumi.set(__self__, "node_name", node_name)
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorArgs']]:
"""
It is a label query over all the nodes. The result of matchLabels and matchExpressions is ANDed. An empty label selector matches all nodes. A null label selector matches no objects.
"""
return pulumi.get(self, "label_selector")
@label_selector.setter
def label_selector(self, value: Optional[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorArgs']]):
pulumi.set(self, "label_selector", value)
@property
@pulumi.getter(name="nodeName")
def node_name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the Kubernetes node that is to be selected. If present, the labelSelector is ignored, even if a node with the given name is absent and the labelSelector matches another node.
"""
return pulumi.get(self, "node_name")
@node_name.setter
def node_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "node_name", value)
@pulumi.input_type
class StorageClusterSpecNodesSelectorLabelSelectorArgs:
def __init__(__self__, *,
match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorMatchExpressionsArgs']]]] = None,
match_labels: Optional[pulumi.Input[Mapping[str, Any]]] = None):
"""
It is a label query over all the nodes. The result of matchLabels and matchExpressions is ANDed. An empty label selector matches all nodes. A null label selector matches no objects.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorMatchExpressionsArgs']]] match_expressions: It is a list of label selector requirements. The requirements are ANDed.
:param pulumi.Input[Mapping[str, Any]] match_labels: It is a map of key-value pairs. A single key-value in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorMatchExpressionsArgs']]]]:
"""
It is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@match_expressions.setter
def match_expressions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecNodesSelectorLabelSelectorMatchExpressionsArgs']]]]):
pulumi.set(self, "match_expressions", value)
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
It is a map of key-value pairs. A single key-value in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
@match_labels.setter
def match_labels(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "match_labels", value)
@pulumi.input_type
class StorageClusterSpecNodesSelectorLabelSelectorMatchExpressionsArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
operator: Optional[pulumi.Input[str]] = None,
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
:param pulumi.Input[str] key: It is the label key that the selector applies to.
:param pulumi.Input[str] operator: It represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param pulumi.Input[Sequence[pulumi.Input[str]]] values: It is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.
"""
if key is not None:
pulumi.set(__self__, "key", key)
if operator is not None:
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
It is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> Optional[pulumi.Input[str]]:
"""
It represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
It is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.
"""
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
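An illustrative sketch (not the operator's implementation) of how a label selector evaluates against a node's labels, following the docstrings above: each matchLabels entry behaves like an In requirement on a single value, and all requirements from matchLabels and matchExpressions are ANDed. The labels and selector values are hypothetical.

```python
def matches(selector, labels):
    """Return True if the node's labels satisfy every requirement (ANDed)."""
    # A matchLabels entry {k: v} is equivalent to
    # {"key": k, "operator": "In", "values": [v]}.
    for key, value in selector.get("matchLabels", {}).items():
        if labels.get(key) != value:
            return False
    for req in selector.get("matchExpressions", []):
        key, op, values = req["key"], req["operator"], req.get("values", [])
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True

selector = {
    "matchLabels": {"px/enabled": "true"},
    "matchExpressions": [
        {"key": "node-role", "operator": "NotIn", "values": ["control-plane"]},
    ],
}
```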
@pulumi.input_type
class StorageClusterSpecNodesStorageArgs:
def __init__(__self__, *,
devices: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
force_use_disks: Optional[pulumi.Input[bool]] = None,
journal_device: Optional[pulumi.Input[str]] = None,
kvdb_device: Optional[pulumi.Input[str]] = None,
system_metadata_device: Optional[pulumi.Input[str]] = None,
use_all: Optional[pulumi.Input[bool]] = None,
use_all_with_partitions: Optional[pulumi.Input[bool]] = None):
"""
Details of the storage used by the storage driver.
:param pulumi.Input[Sequence[pulumi.Input[str]]] devices: List of devices to be used by the storage driver.
:param pulumi.Input[bool] force_use_disks: Flag indicating whether to use the devices even if a file system is present on them. Note that the devices may be wiped before use.
:param pulumi.Input[str] journal_device: Device used for journaling.
:param pulumi.Input[str] kvdb_device: Device used for internal KVDB.
:param pulumi.Input[str] system_metadata_device: Device that will be used to store system metadata by the driver.
:param pulumi.Input[bool] use_all: Use all available, unformatted, unpartitioned devices. This will be ignored if spec.storage.devices is not empty.
:param pulumi.Input[bool] use_all_with_partitions: Use all available unformatted devices. This will be ignored if spec.storage.devices is not empty.
"""
if devices is not None:
pulumi.set(__self__, "devices", devices)
if force_use_disks is not None:
pulumi.set(__self__, "force_use_disks", force_use_disks)
if journal_device is not None:
pulumi.set(__self__, "journal_device", journal_device)
if kvdb_device is not None:
pulumi.set(__self__, "kvdb_device", kvdb_device)
if system_metadata_device is not None:
pulumi.set(__self__, "system_metadata_device", system_metadata_device)
if use_all is not None:
pulumi.set(__self__, "use_all", use_all)
if use_all_with_partitions is not None:
pulumi.set(__self__, "use_all_with_partitions", use_all_with_partitions)
@property
@pulumi.getter
def devices(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of devices to be used by the storage driver.
"""
return pulumi.get(self, "devices")
@devices.setter
def devices(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "devices", value)
@property
@pulumi.getter(name="forceUseDisks")
def force_use_disks(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether to use the devices even if a file system is present on them. Note that the devices may be wiped before use.
"""
return pulumi.get(self, "force_use_disks")
@force_use_disks.setter
def force_use_disks(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_use_disks", value)
@property
@pulumi.getter(name="journalDevice")
def journal_device(self) -> Optional[pulumi.Input[str]]:
"""
Device used for journaling.
"""
return pulumi.get(self, "journal_device")
@journal_device.setter
def journal_device(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "journal_device", value)
@property
@pulumi.getter(name="kvdbDevice")
def kvdb_device(self) -> Optional[pulumi.Input[str]]:
"""
Device used for internal KVDB.
"""
return pulumi.get(self, "kvdb_device")
@kvdb_device.setter
def kvdb_device(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kvdb_device", value)
@property
@pulumi.getter(name="systemMetadataDevice")
def system_metadata_device(self) -> Optional[pulumi.Input[str]]:
"""
Device that will be used to store system metadata by the driver.
"""
return pulumi.get(self, "system_metadata_device")
@system_metadata_device.setter
def system_metadata_device(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "system_metadata_device", value)
@property
@pulumi.getter(name="useAll")
def use_all(self) -> Optional[pulumi.Input[bool]]:
"""
Use all available, unformatted, unpartitioned devices. This will be ignored if spec.storage.devices is not empty.
"""
return pulumi.get(self, "use_all")
@use_all.setter
def use_all(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "use_all", value)
@property
@pulumi.getter(name="useAllWithPartitions")
def use_all_with_partitions(self) -> Optional[pulumi.Input[bool]]:
"""
Use all available unformatted devices. This will be ignored if spec.storage.devices is not empty.
"""
return pulumi.get(self, "use_all_with_partitions")
@use_all_with_partitions.setter
def use_all_with_partitions(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "use_all_with_partitions", value)
@pulumi.input_type
class StorageClusterSpecPlacementArgs:
def __init__(__self__, *,
node_affinity: Optional[pulumi.Input['StorageClusterSpecPlacementNodeAffinityArgs']] = None,
tolerations: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementTolerationsArgs']]]] = None):
"""
Describes placement configuration for the storage cluster pods.
:param pulumi.Input['StorageClusterSpecPlacementNodeAffinityArgs'] node_affinity: Describes node affinity scheduling rules for the storage cluster pods. This is exactly the same object as Kubernetes node affinity for pods.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementTolerationsArgs']]] tolerations: Tolerations for all the pods deployed by the StorageCluster controller. The pod with this toleration attached will tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>.
"""
if node_affinity is not None:
pulumi.set(__self__, "node_affinity", node_affinity)
if tolerations is not None:
pulumi.set(__self__, "tolerations", tolerations)
@property
@pulumi.getter(name="nodeAffinity")
def node_affinity(self) -> Optional[pulumi.Input['StorageClusterSpecPlacementNodeAffinityArgs']]:
"""
Describes node affinity scheduling rules for the storage cluster pods. This is exactly the same object as Kubernetes node affinity for pods.
"""
return pulumi.get(self, "node_affinity")
@node_affinity.setter
def node_affinity(self, value: Optional[pulumi.Input['StorageClusterSpecPlacementNodeAffinityArgs']]):
pulumi.set(self, "node_affinity", value)
@property
@pulumi.getter
def tolerations(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementTolerationsArgs']]]]:
"""
Tolerations for all the pods deployed by the StorageCluster controller. The pod with this toleration attached will tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>.
"""
return pulumi.get(self, "tolerations")
@tolerations.setter
def tolerations(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementTolerationsArgs']]]]):
pulumi.set(self, "tolerations", value)
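The placement inputs above mirror the Kubernetes pod-placement shape. As a minimal sketch, this is the camelCase wire format they serialize to (key names are taken from the `@pulumi.getter` names defined above; the label key and toleration values are hypothetical):

```python
# Hypothetical wire-format equivalent of StorageClusterSpecPlacementArgs;
# camelCase keys follow the @pulumi.getter names defined above.
placement = {
    "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [
                {"matchExpressions": [
                    # Schedule only on nodes not labeled px/enabled=false.
                    {"key": "px/enabled", "operator": "NotIn", "values": ["false"]},
                ]},
            ],
        },
    },
    "tolerations": [
        # Tolerate the control-plane taint so storage pods may run there too.
        {"key": "node-role.kubernetes.io/master", "operator": "Exists",
         "effect": "NoSchedule"},
    ],
}
```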
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityArgs:
def __init__(__self__, *,
preferred_during_scheduling_ignored_during_execution: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]] = None,
required_during_scheduling_ignored_during_execution: Optional[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']] = None):
"""
Describes node affinity scheduling rules for the storage cluster pods. This is exactly the same object as Kubernetes node affinity for pods.
"""
if preferred_during_scheduling_ignored_during_execution is not None:
pulumi.set(__self__, "preferred_during_scheduling_ignored_during_execution", preferred_during_scheduling_ignored_during_execution)
if required_during_scheduling_ignored_during_execution is not None:
pulumi.set(__self__, "required_during_scheduling_ignored_during_execution", required_during_scheduling_ignored_during_execution)
@property
@pulumi.getter(name="preferredDuringSchedulingIgnoredDuringExecution")
def preferred_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]:
return pulumi.get(self, "preferred_during_scheduling_ignored_during_execution")
@preferred_during_scheduling_ignored_during_execution.setter
def preferred_during_scheduling_ignored_during_execution(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs']]]]):
pulumi.set(self, "preferred_during_scheduling_ignored_during_execution", value)
@property
@pulumi.getter(name="requiredDuringSchedulingIgnoredDuringExecution")
def required_during_scheduling_ignored_during_execution(self) -> Optional[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]:
return pulumi.get(self, "required_during_scheduling_ignored_during_execution")
@required_during_scheduling_ignored_during_execution.setter
def required_during_scheduling_ignored_during_execution(self, value: Optional[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs']]):
pulumi.set(self, "required_during_scheduling_ignored_during_execution", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionArgs:
def __init__(__self__, *,
preference: pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceArgs'],
weight: pulumi.Input[int]):
pulumi.set(__self__, "preference", preference)
pulumi.set(__self__, "weight", weight)
@property
@pulumi.getter
def preference(self) -> pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceArgs']:
return pulumi.get(self, "preference")
@preference.setter
def preference(self, value: pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceArgs']):
pulumi.set(self, "preference", value)
@property
@pulumi.getter
def weight(self) -> pulumi.Input[int]:
return pulumi.get(self, "weight")
@weight.setter
def weight(self, value: pulumi.Input[int]):
pulumi.set(self, "weight", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceArgs:
def __init__(__self__, *,
match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchExpressionsArgs']]]] = None,
match_fields: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchFieldsArgs']]]] = None):
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_fields is not None:
pulumi.set(__self__, "match_fields", match_fields)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchExpressionsArgs']]]]:
return pulumi.get(self, "match_expressions")
@match_expressions.setter
def match_expressions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchExpressionsArgs']]]]):
pulumi.set(self, "match_expressions", value)
@property
@pulumi.getter(name="matchFields")
def match_fields(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchFieldsArgs']]]]:
return pulumi.get(self, "match_fields")
@match_fields.setter
def match_fields(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchFieldsArgs']]]]):
pulumi.set(self, "match_fields", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchExpressionsArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
operator: pulumi.Input[str],
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[str]:
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[str]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityPreferredDuringSchedulingIgnoredDuringExecutionPreferenceMatchFieldsArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
operator: pulumi.Input[str],
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[str]:
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[str]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionArgs:
def __init__(__self__, *,
node_selector_terms: pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsArgs']]]):
pulumi.set(__self__, "node_selector_terms", node_selector_terms)
@property
@pulumi.getter(name="nodeSelectorTerms")
def node_selector_terms(self) -> pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsArgs']]]:
return pulumi.get(self, "node_selector_terms")
@node_selector_terms.setter
def node_selector_terms(self, value: pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsArgs']]]):
pulumi.set(self, "node_selector_terms", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsArgs:
def __init__(__self__, *,
match_expressions: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchExpressionsArgs']]]] = None,
match_fields: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchFieldsArgs']]]] = None):
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_fields is not None:
pulumi.set(__self__, "match_fields", match_fields)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchExpressionsArgs']]]]:
return pulumi.get(self, "match_expressions")
@match_expressions.setter
def match_expressions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchExpressionsArgs']]]]):
pulumi.set(self, "match_expressions", value)
@property
@pulumi.getter(name="matchFields")
def match_fields(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchFieldsArgs']]]]:
return pulumi.get(self, "match_fields")
@match_fields.setter
def match_fields(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchFieldsArgs']]]]):
pulumi.set(self, "match_fields", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchExpressionsArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
operator: pulumi.Input[str],
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[str]:
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[str]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class StorageClusterSpecPlacementNodeAffinityRequiredDuringSchedulingIgnoredDuringExecutionNodeSelectorTermsMatchFieldsArgs:
def __init__(__self__, *,
key: pulumi.Input[str],
operator: pulumi.Input[str],
values: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> pulumi.Input[str]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input[str]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> pulumi.Input[str]:
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: pulumi.Input[str]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
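The key/operator/values triples above follow Kubernetes NodeSelectorRequirement semantics. A minimal sketch of how the common operators evaluate against a node's labels (the helper name is ours; Gt/Lt are omitted from this sketch):

```python
# Sketch of node-selector requirement evaluation for the key/operator/values
# triples defined above. In Kubernetes, NotIn and DoesNotExist also match
# nodes that lack the label entirely.
def matches_requirement(labels: dict, key: str, operator: str, values=None) -> bool:
    values = values or []
    if operator == "In":
        return labels.get(key) in values
    if operator == "NotIn":
        return labels.get(key) not in values  # absent key also matches
    if operator == "Exists":
        return key in labels
    if operator == "DoesNotExist":
        return key not in labels
    raise ValueError(f"unsupported operator: {operator}")
```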
@pulumi.input_type
class StorageClusterSpecPlacementTolerationsArgs:
def __init__(__self__, *,
effect: Optional[pulumi.Input[str]] = None,
key: Optional[pulumi.Input[str]] = None,
operator: Optional[pulumi.Input[str]] = None,
toleration_seconds: Optional[pulumi.Input[int]] = None,
value: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] effect: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
:param pulumi.Input[str] key: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
:param pulumi.Input[str] operator: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
:param pulumi.Input[int] toleration_seconds: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
:param pulumi.Input[str] value: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
"""
if effect is not None:
pulumi.set(__self__, "effect", effect)
if key is not None:
pulumi.set(__self__, "key", key)
if operator is not None:
pulumi.set(__self__, "operator", operator)
if toleration_seconds is not None:
pulumi.set(__self__, "toleration_seconds", toleration_seconds)
if value is not None:
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def effect(self) -> Optional[pulumi.Input[str]]:
"""
Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.
"""
return pulumi.get(self, "effect")
@effect.setter
def effect(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "effect", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
"""
Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def operator(self) -> Optional[pulumi.Input[str]]:
"""
Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
"""
return pulumi.get(self, "operator")
@operator.setter
def operator(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "operator", value)
@property
@pulumi.getter(name="tolerationSeconds")
def toleration_seconds(self) -> Optional[pulumi.Input[int]]:
"""
TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
"""
return pulumi.get(self, "toleration_seconds")
@toleration_seconds.setter
def toleration_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "toleration_seconds", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
"""
Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
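The docstrings above describe matching a taint on the <key,value,effect> triple with the toleration's operator. A minimal sketch of that rule (the helper name and dict shapes are ours, not part of this module):

```python
# Sketch of the toleration-matching rule described in the docstrings above:
# an empty key or effect matches all; Equal (the documented default) also
# compares values, while Exists matches any value.
def tolerates(toleration: dict, taint: dict) -> bool:
    op = toleration.get("operator", "Equal")  # Equal is the documented default
    # An empty toleration key matches every taint key.
    if toleration.get("key") and toleration["key"] != taint["key"]:
        return False
    if op == "Equal" and toleration.get("value") != taint.get("value"):
        return False
    # An empty toleration effect matches every taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    return True
```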
@pulumi.input_type
class StorageClusterSpecStorageArgs:
def __init__(__self__, *,
devices: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
force_use_disks: Optional[pulumi.Input[bool]] = None,
journal_device: Optional[pulumi.Input[str]] = None,
kvdb_device: Optional[pulumi.Input[str]] = None,
system_metadata_device: Optional[pulumi.Input[str]] = None,
use_all: Optional[pulumi.Input[bool]] = None,
use_all_with_partitions: Optional[pulumi.Input[bool]] = None):
"""
Details of the storage used by the storage driver.
:param pulumi.Input[Sequence[pulumi.Input[str]]] devices: List of devices to be used by the storage driver.
:param pulumi.Input[bool] force_use_disks: Flag indicating whether to use the devices even if a file system is already present on them. Note that the devices may be wiped before use.
:param pulumi.Input[str] journal_device: Device used for journaling.
:param pulumi.Input[str] kvdb_device: Device used for internal KVDB.
:param pulumi.Input[str] system_metadata_device: Device that will be used to store system metadata by the driver.
:param pulumi.Input[bool] use_all: Use all available, unformatted, unpartitioned devices. This will be ignored if spec.storage.devices is not empty.
:param pulumi.Input[bool] use_all_with_partitions: Use all available unformatted devices, including partitioned ones. This will be ignored if spec.storage.devices is not empty.
"""
if devices is not None:
pulumi.set(__self__, "devices", devices)
if force_use_disks is not None:
pulumi.set(__self__, "force_use_disks", force_use_disks)
if journal_device is not None:
pulumi.set(__self__, "journal_device", journal_device)
if kvdb_device is not None:
pulumi.set(__self__, "kvdb_device", kvdb_device)
if system_metadata_device is not None:
pulumi.set(__self__, "system_metadata_device", system_metadata_device)
if use_all is not None:
pulumi.set(__self__, "use_all", use_all)
if use_all_with_partitions is not None:
pulumi.set(__self__, "use_all_with_partitions", use_all_with_partitions)
@property
@pulumi.getter
def devices(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
List of devices to be used by the storage driver.
"""
return pulumi.get(self, "devices")
@devices.setter
def devices(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "devices", value)
@property
@pulumi.getter(name="forceUseDisks")
def force_use_disks(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether to use the devices even if a file system is already present on them. Note that the devices may be wiped before use.
"""
return pulumi.get(self, "force_use_disks")
@force_use_disks.setter
def force_use_disks(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "force_use_disks", value)
@property
@pulumi.getter(name="journalDevice")
def journal_device(self) -> Optional[pulumi.Input[str]]:
"""
Device used for journaling.
"""
return pulumi.get(self, "journal_device")
@journal_device.setter
def journal_device(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "journal_device", value)
@property
@pulumi.getter(name="kvdbDevice")
def kvdb_device(self) -> Optional[pulumi.Input[str]]:
"""
Device used for internal KVDB.
"""
return pulumi.get(self, "kvdb_device")
@kvdb_device.setter
def kvdb_device(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kvdb_device", value)
@property
@pulumi.getter(name="systemMetadataDevice")
def system_metadata_device(self) -> Optional[pulumi.Input[str]]:
"""
Device that will be used to store system metadata by the driver.
"""
return pulumi.get(self, "system_metadata_device")
@system_metadata_device.setter
def system_metadata_device(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "system_metadata_device", value)
@property
@pulumi.getter(name="useAll")
def use_all(self) -> Optional[pulumi.Input[bool]]:
"""
Use all available, unformatted, unpartitioned devices. This will be ignored if spec.storage.devices is not empty.
"""
return pulumi.get(self, "use_all")
@use_all.setter
def use_all(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "use_all", value)
@property
@pulumi.getter(name="useAllWithPartitions")
def use_all_with_partitions(self) -> Optional[pulumi.Input[bool]]:
"""
Use all available unformatted devices, including partitioned ones. This will be ignored if spec.storage.devices is not empty.
"""
return pulumi.get(self, "use_all_with_partitions")
@use_all_with_partitions.setter
def use_all_with_partitions(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "use_all_with_partitions", value)
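Per the docstrings above, an explicit device list overrides both `useAll` and `useAllWithPartitions`. A minimal sketch of that precedence (the helper name and return labels are ours; the ordering between the two flags when both are set is an assumption):

```python
# Sketch of the device-selection precedence described in the docstrings above:
# spec.storage.devices, when non-empty, causes the useAll* flags to be ignored.
def effective_device_selection(devices=None, use_all=False,
                               use_all_with_partitions=False) -> str:
    if devices:
        return "explicit"            # use only the listed devices
    if use_all_with_partitions:
        return "all-unformatted"     # unformatted devices, partitioned or not
    if use_all:
        return "all-unpartitioned"   # unformatted and unpartitioned only
    return "none"
```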
@pulumi.input_type
class StorageClusterSpecStorkArgs:
def __init__(__self__, *,
args: Optional[pulumi.Input[Mapping[str, Any]]] = None,
enabled: Optional[pulumi.Input[bool]] = None,
env: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecStorkEnvArgs']]]] = None,
image: Optional[pulumi.Input[str]] = None,
lock_image: Optional[pulumi.Input[bool]] = None):
"""
Contains STORK related spec.
:param pulumi.Input[Mapping[str, Any]] args: Map of arguments passed to STORK. Example: driver: pxd
:param pulumi.Input[bool] enabled: Flag indicating whether STORK needs to be enabled.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecStorkEnvArgs']]] env: List of environment variables used by STORK. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
:param pulumi.Input[str] image: Docker image of the STORK container.
:param pulumi.Input[bool] lock_image: Flag indicating whether the STORK image should be locked to the given image. If the image is not locked, the storage driver may update it during upgrades.
"""
if args is not None:
pulumi.set(__self__, "args", args)
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if env is not None:
pulumi.set(__self__, "env", env)
if image is not None:
pulumi.set(__self__, "image", image)
if lock_image is not None:
pulumi.set(__self__, "lock_image", lock_image)
@property
@pulumi.getter
def args(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Map of arguments passed to STORK. Example: driver: pxd
"""
return pulumi.get(self, "args")
@args.setter
def args(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "args", value)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether STORK needs to be enabled.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecStorkEnvArgs']]]]:
"""
List of environment variables used by STORK. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
"""
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecStorkEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Docker image of the STORK container.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter(name="lockImage")
def lock_image(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether the STORK image should be locked to the given image. If the image is not locked, the storage driver may update it during upgrades.
"""
return pulumi.get(self, "lock_image")
@lock_image.setter
def lock_image(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "lock_image", value)
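The `args` map above (e.g. `{"driver": "pxd"}`) carries free-form arguments for STORK. As a hypothetical sketch only, one plausible rendering of that map into container command-line flags (the `--key=value` syntax and helper name are assumptions, not something this module defines):

```python
# Hypothetical sketch: render the STORK "args" map as CLI-style flags.
# The --key=value syntax is an assumption; sorting keeps output deterministic.
def render_stork_args(args: dict) -> list:
    return [f"--{key}={value}" for key, value in sorted(args.items())]
```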
@pulumi.input_type
class StorageClusterSpecStorkEnvArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
value_from: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromArgs']] = None):
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
if value_from is not None:
pulumi.set(__self__, "value_from", value_from)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@property
@pulumi.getter(name="valueFrom")
def value_from(self) -> Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromArgs']]:
return pulumi.get(self, "value_from")
@value_from.setter
def value_from(self, value: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromArgs']]):
pulumi.set(self, "value_from", value)
@pulumi.input_type
class StorageClusterSpecStorkEnvValueFromArgs:
def __init__(__self__, *,
config_map_key_ref: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromConfigMapKeyRefArgs']] = None,
field_ref: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromFieldRefArgs']] = None,
resource_field_ref: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromResourceFieldRefArgs']] = None,
secret_key_ref: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromSecretKeyRefArgs']] = None):
if config_map_key_ref is not None:
pulumi.set(__self__, "config_map_key_ref", config_map_key_ref)
if field_ref is not None:
pulumi.set(__self__, "field_ref", field_ref)
if resource_field_ref is not None:
pulumi.set(__self__, "resource_field_ref", resource_field_ref)
if secret_key_ref is not None:
pulumi.set(__self__, "secret_key_ref", secret_key_ref)
@property
@pulumi.getter(name="configMapKeyRef")
def config_map_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromConfigMapKeyRefArgs']]:
return pulumi.get(self, "config_map_key_ref")
@config_map_key_ref.setter
def config_map_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromConfigMapKeyRefArgs']]):
pulumi.set(self, "config_map_key_ref", value)
@property
@pulumi.getter(name="fieldRef")
def field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromFieldRefArgs']]:
return pulumi.get(self, "field_ref")
@field_ref.setter
def field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromFieldRefArgs']]):
pulumi.set(self, "field_ref", value)
@property
@pulumi.getter(name="resourceFieldRef")
def resource_field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromResourceFieldRefArgs']]:
return pulumi.get(self, "resource_field_ref")
@resource_field_ref.setter
def resource_field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromResourceFieldRefArgs']]):
pulumi.set(self, "resource_field_ref", value)
@property
@pulumi.getter(name="secretKeyRef")
def secret_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromSecretKeyRefArgs']]:
return pulumi.get(self, "secret_key_ref")
@secret_key_ref.setter
def secret_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecStorkEnvValueFromSecretKeyRefArgs']]):
pulumi.set(self, "secret_key_ref", value)
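The ValueFrom classes above mirror the Kubernetes EnvVar sources: a literal value wins, otherwise the value is read from exactly one named source. A minimal sketch of that resolution (the helper name and the in-memory configmap/secret dicts standing in for the cluster are ours; fieldRef and resourceFieldRef are omitted):

```python
# Sketch of EnvVar resolution for the sources above: a literal "value" is
# used directly; otherwise "valueFrom" names one source to read from.
def resolve_env(name, value=None, value_from=None, configmaps=None, secrets=None):
    if value is not None:
        return value
    vf = value_from or {}
    if "configMapKeyRef" in vf:
        ref = vf["configMapKeyRef"]
        return (configmaps or {}).get(ref["name"], {}).get(ref["key"])
    if "secretKeyRef" in vf:
        ref = vf["secretKeyRef"]
        return (secrets or {}).get(ref["name"], {}).get(ref["key"])
    return None  # fieldRef / resourceFieldRef omitted in this sketch
```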
@pulumi.input_type
class StorageClusterSpecStorkEnvValueFromConfigMapKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecStorkEnvValueFromFieldRefArgs:
def __init__(__self__, *,
api_version: Optional[pulumi.Input[str]] = None,
field_path: Optional[pulumi.Input[str]] = None):
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "field_path")
@field_path.setter
def field_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "field_path", value)
@pulumi.input_type
class StorageClusterSpecStorkEnvValueFromResourceFieldRefArgs:
def __init__(__self__, *,
container_name: Optional[pulumi.Input[str]] = None,
divisor: Optional[pulumi.Input[str]] = None,
resource: Optional[pulumi.Input[str]] = None):
if container_name is not None:
pulumi.set(__self__, "container_name", container_name)
if divisor is not None:
pulumi.set(__self__, "divisor", divisor)
if resource is not None:
pulumi.set(__self__, "resource", resource)
@property
@pulumi.getter(name="containerName")
def container_name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "container_name")
@container_name.setter
def container_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container_name", value)
@property
@pulumi.getter
def divisor(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "divisor")
@divisor.setter
def divisor(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "divisor", value)
@property
@pulumi.getter
def resource(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "resource")
@resource.setter
def resource(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource", value)
@pulumi.input_type
class StorageClusterSpecStorkEnvValueFromSecretKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
"""
Selects a key of a secret in the pod's namespace.
:param pulumi.Input[str] key: Key of the secret to select from. Must be a valid secret key.
:param pulumi.Input[str] name: Name of the referenced secret.
:param pulumi.Input[bool] optional: Specify whether the secret or its key must be defined.
"""
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecUpdateStrategyArgs:
def __init__(__self__, *,
rolling_update: Optional[pulumi.Input['StorageClusterSpecUpdateStrategyRollingUpdateArgs']] = None,
type: Optional[pulumi.Input[str]] = None):
"""
An update strategy to replace existing StorageCluster pods with new pods.
:param pulumi.Input['StorageClusterSpecUpdateStrategyRollingUpdateArgs'] rolling_update: Spec to control the desired behavior of storage cluster rolling update.
:param pulumi.Input[str] type: Type of storage cluster update. Can be RollingUpdate or OnDelete. Default is RollingUpdate.
"""
if rolling_update is not None:
pulumi.set(__self__, "rolling_update", rolling_update)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="rollingUpdate")
def rolling_update(self) -> Optional[pulumi.Input['StorageClusterSpecUpdateStrategyRollingUpdateArgs']]:
"""
Spec to control the desired behavior of storage cluster rolling update.
"""
return pulumi.get(self, "rolling_update")
@rolling_update.setter
def rolling_update(self, value: Optional[pulumi.Input['StorageClusterSpecUpdateStrategyRollingUpdateArgs']]):
pulumi.set(self, "rolling_update", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Type of storage cluster update. Can be RollingUpdate or OnDelete. Default is RollingUpdate.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@pulumi.input_type
class StorageClusterSpecUpdateStrategyRollingUpdateArgs:
def __init__(__self__, *,
max_unavailable: Optional[pulumi.Input[Union[int, str]]] = None):
"""
Spec to control the desired behavior of storage cluster rolling update.
:param pulumi.Input[Union[int, str]] max_unavailable: The maximum number of StorageCluster pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of StorageCluster pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the storage pod can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those StorageCluster pods and then brings up new StorageCluster pods in their place. Once the new pods are available, it then proceeds onto other StorageCluster pods, thus ensuring that at least 70% of original number of StorageCluster pods are available at all times during the update.
"""
if max_unavailable is not None:
pulumi.set(__self__, "max_unavailable", max_unavailable)
@property
@pulumi.getter(name="maxUnavailable")
def max_unavailable(self) -> Optional[pulumi.Input[Union[int, str]]]:
"""
The maximum number of StorageCluster pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of StorageCluster pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the storage pod can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those StorageCluster pods and then brings up new StorageCluster pods in their place. Once the new pods are available, it then proceeds onto other StorageCluster pods, thus ensuring that at least 70% of original number of StorageCluster pods are available at all times during the update.
"""
return pulumi.get(self, "max_unavailable")
@max_unavailable.setter
def max_unavailable(self, value: Optional[pulumi.Input[Union[int, str]]]):
pulumi.set(self, "max_unavailable", value)
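As a usage sketch: besides the typed classes above, Pulumi's Python SDK also accepts equivalent snake_case dicts for input types. The dict below (values are illustrative assumptions, not defaults) mirrors `StorageClusterSpecUpdateStrategyArgs` with a rolling update that keeps at least 75% of StorageCluster pods available during an upgrade.

```python
# Illustrative sketch: a dict equivalent of StorageClusterSpecUpdateStrategyArgs.
# The "25%" cap means at most a quarter of the StorageCluster pods are taken
# down at a time; an absolute count (e.g. 2) is also accepted.
update_strategy = {
    "type": "RollingUpdate",          # or "OnDelete"
    "rolling_update": {
        "max_unavailable": "25%",
    },
}
```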
@pulumi.input_type
class StorageClusterSpecUserInterfaceArgs:
def __init__(__self__, *,
enabled: Optional[pulumi.Input[bool]] = None,
env: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecUserInterfaceEnvArgs']]]] = None,
image: Optional[pulumi.Input[str]] = None,
lock_image: Optional[pulumi.Input[bool]] = None):
"""
Contains spec of a user interface for the storage driver.
:param pulumi.Input[bool] enabled: Flag indicating whether the user interface needs to be enabled.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecUserInterfaceEnvArgs']]] env: List of environment variables used by the UI components. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
:param pulumi.Input[str] image: Docker image of the user interface container.
:param pulumi.Input[bool] lock_image: Flag indicating if the user interface image needs to be locked to the given image. If the image is not locked, it can be updated by the storage driver during upgrades.
"""
if enabled is not None:
pulumi.set(__self__, "enabled", enabled)
if env is not None:
pulumi.set(__self__, "env", env)
if image is not None:
pulumi.set(__self__, "image", image)
if lock_image is not None:
pulumi.set(__self__, "lock_image", lock_image)
@property
@pulumi.getter
def enabled(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating whether the user interface needs to be enabled.
"""
return pulumi.get(self, "enabled")
@enabled.setter
def enabled(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enabled", value)
@property
@pulumi.getter
def env(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecUserInterfaceEnvArgs']]]]:
"""
List of environment variables used by the UI components. This is an array of Kubernetes EnvVar where the value can be given directly or from a source like field, config map or secret.
"""
return pulumi.get(self, "env")
@env.setter
def env(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterSpecUserInterfaceEnvArgs']]]]):
pulumi.set(self, "env", value)
@property
@pulumi.getter
def image(self) -> Optional[pulumi.Input[str]]:
"""
Docker image of the user interface container.
"""
return pulumi.get(self, "image")
@image.setter
def image(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "image", value)
@property
@pulumi.getter(name="lockImage")
def lock_image(self) -> Optional[pulumi.Input[bool]]:
"""
Flag indicating if the user interface image needs to be locked to the given image. If the image is not locked, it can be updated by the storage driver during upgrades.
"""
return pulumi.get(self, "lock_image")
@lock_image.setter
def lock_image(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "lock_image", value)
@pulumi.input_type
class StorageClusterSpecUserInterfaceEnvArgs:
def __init__(__self__, *,
name: Optional[pulumi.Input[str]] = None,
value: Optional[pulumi.Input[str]] = None,
value_from: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromArgs']] = None):
"""
An environment variable used by the user interface components.
:param pulumi.Input[str] name: Name of the environment variable.
:param pulumi.Input[str] value: Value of the environment variable.
:param pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromArgs'] value_from: Source for the environment variable's value. Cannot be used if value is not empty.
"""
if name is not None:
pulumi.set(__self__, "name", name)
if value is not None:
pulumi.set(__self__, "value", value)
if value_from is not None:
pulumi.set(__self__, "value_from", value_from)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "value")
@value.setter
def value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "value", value)
@property
@pulumi.getter(name="valueFrom")
def value_from(self) -> Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromArgs']]:
return pulumi.get(self, "value_from")
@value_from.setter
def value_from(self, value: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromArgs']]):
pulumi.set(self, "value_from", value)
@pulumi.input_type
class StorageClusterSpecUserInterfaceEnvValueFromArgs:
def __init__(__self__, *,
config_map_key_ref: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromConfigMapKeyRefArgs']] = None,
field_ref: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromFieldRefArgs']] = None,
resource_field_ref: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromResourceFieldRefArgs']] = None,
secret_key_ref: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromSecretKeyRefArgs']] = None):
"""
Source for the value of an environment variable.
:param pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromConfigMapKeyRefArgs'] config_map_key_ref: Selects a key of a config map.
:param pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromFieldRefArgs'] field_ref: Selects a field of the pod.
:param pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromResourceFieldRefArgs'] resource_field_ref: Selects a resource of the container.
:param pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromSecretKeyRefArgs'] secret_key_ref: Selects a key of a secret in the pod's namespace.
"""
if config_map_key_ref is not None:
pulumi.set(__self__, "config_map_key_ref", config_map_key_ref)
if field_ref is not None:
pulumi.set(__self__, "field_ref", field_ref)
if resource_field_ref is not None:
pulumi.set(__self__, "resource_field_ref", resource_field_ref)
if secret_key_ref is not None:
pulumi.set(__self__, "secret_key_ref", secret_key_ref)
@property
@pulumi.getter(name="configMapKeyRef")
def config_map_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromConfigMapKeyRefArgs']]:
return pulumi.get(self, "config_map_key_ref")
@config_map_key_ref.setter
def config_map_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromConfigMapKeyRefArgs']]):
pulumi.set(self, "config_map_key_ref", value)
@property
@pulumi.getter(name="fieldRef")
def field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromFieldRefArgs']]:
return pulumi.get(self, "field_ref")
@field_ref.setter
def field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromFieldRefArgs']]):
pulumi.set(self, "field_ref", value)
@property
@pulumi.getter(name="resourceFieldRef")
def resource_field_ref(self) -> Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromResourceFieldRefArgs']]:
return pulumi.get(self, "resource_field_ref")
@resource_field_ref.setter
def resource_field_ref(self, value: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromResourceFieldRefArgs']]):
pulumi.set(self, "resource_field_ref", value)
@property
@pulumi.getter(name="secretKeyRef")
def secret_key_ref(self) -> Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromSecretKeyRefArgs']]:
return pulumi.get(self, "secret_key_ref")
@secret_key_ref.setter
def secret_key_ref(self, value: Optional[pulumi.Input['StorageClusterSpecUserInterfaceEnvValueFromSecretKeyRefArgs']]):
pulumi.set(self, "secret_key_ref", value)
@pulumi.input_type
class StorageClusterSpecUserInterfaceEnvValueFromConfigMapKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
"""
Selects a key of a config map.
:param pulumi.Input[str] key: Key to select from the config map.
:param pulumi.Input[str] name: Name of the referenced config map.
:param pulumi.Input[bool] optional: Specify whether the config map or its key must be defined.
"""
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
@pulumi.input_type
class StorageClusterSpecUserInterfaceEnvValueFromFieldRefArgs:
def __init__(__self__, *,
api_version: Optional[pulumi.Input[str]] = None,
field_path: Optional[pulumi.Input[str]] = None):
"""
Selects a field of the pod, such as metadata.name or status.podIP.
:param pulumi.Input[str] api_version: Version of the schema the FieldPath is written in terms of. Defaults to "v1".
:param pulumi.Input[str] field_path: Path of the field to select in the specified API version.
"""
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "api_version")
@api_version.setter
def api_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "api_version", value)
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "field_path")
@field_path.setter
def field_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "field_path", value)
@pulumi.input_type
class StorageClusterSpecUserInterfaceEnvValueFromResourceFieldRefArgs:
def __init__(__self__, *,
container_name: Optional[pulumi.Input[str]] = None,
divisor: Optional[pulumi.Input[str]] = None,
resource: Optional[pulumi.Input[str]] = None):
"""
Selects a resource of the container, such as limits.cpu or requests.memory.
:param pulumi.Input[str] container_name: Name of the container whose resource to select. Required for volumes, optional for environment variables.
:param pulumi.Input[str] divisor: Specifies the output format of the exposed resource. Defaults to "1".
:param pulumi.Input[str] resource: Resource to select.
"""
if container_name is not None:
pulumi.set(__self__, "container_name", container_name)
if divisor is not None:
pulumi.set(__self__, "divisor", divisor)
if resource is not None:
pulumi.set(__self__, "resource", resource)
@property
@pulumi.getter(name="containerName")
def container_name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "container_name")
@container_name.setter
def container_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "container_name", value)
@property
@pulumi.getter
def divisor(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "divisor")
@divisor.setter
def divisor(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "divisor", value)
@property
@pulumi.getter
def resource(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "resource")
@resource.setter
def resource(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource", value)
@pulumi.input_type
class StorageClusterSpecUserInterfaceEnvValueFromSecretKeyRefArgs:
def __init__(__self__, *,
key: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
optional: Optional[pulumi.Input[bool]] = None):
"""
Selects a key of a secret in the pod's namespace.
:param pulumi.Input[str] key: Key of the secret to select from. Must be a valid secret key.
:param pulumi.Input[str] name: Name of the referenced secret.
:param pulumi.Input[bool] optional: Specify whether the secret or its key must be defined.
"""
if key is not None:
pulumi.set(__self__, "key", key)
if name is not None:
pulumi.set(__self__, "name", name)
if optional is not None:
pulumi.set(__self__, "optional", optional)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def optional(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "optional")
@optional.setter
def optional(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "optional", value)
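A usage sketch for the env-var types above (the variable and Secret names here are hypothetical, chosen only for illustration): an entry equivalent to `StorageClusterSpecUserInterfaceEnvArgs` can pull its value from a secret key reference via the nested `value_from` structure.

```python
# Hypothetical example: inject an environment variable into the UI components
# from a Kubernetes Secret. Keys mirror the snake_case fields of
# StorageClusterSpecUserInterfaceEnvArgs / ...EnvValueFromSecretKeyRefArgs;
# the names "px-ui-secret" and "UI_ADMIN_PASSWORD" are illustrative.
ui_env_var = {
    "name": "UI_ADMIN_PASSWORD",
    "value_from": {
        "secret_key_ref": {
            "name": "px-ui-secret",
            "key": "admin-password",
            "optional": False,
        },
    },
}
```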
@pulumi.input_type
class StorageClusterStatusArgs:
def __init__(__self__, *,
cluster_name: Optional[pulumi.Input[str]] = None,
cluster_uid: Optional[pulumi.Input[str]] = None,
collision_count: Optional[pulumi.Input[int]] = None,
conditions: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterStatusConditionsArgs']]]] = None,
phase: Optional[pulumi.Input[str]] = None,
storage: Optional[pulumi.Input['StorageClusterStatusStorageArgs']] = None):
"""
Most recently observed status of the storage cluster. This data may not be up to date.
:param pulumi.Input[str] cluster_name: Name of the storage cluster.
:param pulumi.Input[str] cluster_uid: Unique ID of the storage cluster.
:param pulumi.Input[int] collision_count: Count of hash collisions for the StorageCluster. The StorageCluster controller uses this field as a collision avoidance mechanism when it needs to create the name of the newest ControllerRevision.
:param pulumi.Input[Sequence[pulumi.Input['StorageClusterStatusConditionsArgs']]] conditions: Contains details for the current condition of this cluster.
:param pulumi.Input[str] phase: Phase of the StorageCluster is a simple, high-level summary of where the StorageCluster is in its lifecycle. The condition array contains more detailed information about the state of the cluster.
:param pulumi.Input['StorageClusterStatusStorageArgs'] storage: Contains details of storage in the cluster.
"""
if cluster_name is not None:
pulumi.set(__self__, "cluster_name", cluster_name)
if cluster_uid is not None:
pulumi.set(__self__, "cluster_uid", cluster_uid)
if collision_count is not None:
pulumi.set(__self__, "collision_count", collision_count)
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if phase is not None:
pulumi.set(__self__, "phase", phase)
if storage is not None:
pulumi.set(__self__, "storage", storage)
@property
@pulumi.getter(name="clusterName")
def cluster_name(self) -> Optional[pulumi.Input[str]]:
"""
Name of the storage cluster.
"""
return pulumi.get(self, "cluster_name")
@cluster_name.setter
def cluster_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cluster_name", value)
@property
@pulumi.getter(name="clusterUid")
def cluster_uid(self) -> Optional[pulumi.Input[str]]:
"""
Unique ID of the storage cluster.
"""
return pulumi.get(self, "cluster_uid")
@cluster_uid.setter
def cluster_uid(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cluster_uid", value)
@property
@pulumi.getter(name="collisionCount")
def collision_count(self) -> Optional[pulumi.Input[int]]:
"""
Count of hash collisions for the StorageCluster. The StorageCluster controller uses this field as a collision avoidance mechanism when it needs to create the name of the newest ControllerRevision.
"""
return pulumi.get(self, "collision_count")
@collision_count.setter
def collision_count(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "collision_count", value)
@property
@pulumi.getter
def conditions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterStatusConditionsArgs']]]]:
"""
Contains details for the current condition of this cluster.
"""
return pulumi.get(self, "conditions")
@conditions.setter
def conditions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageClusterStatusConditionsArgs']]]]):
pulumi.set(self, "conditions", value)
@property
@pulumi.getter
def phase(self) -> Optional[pulumi.Input[str]]:
"""
Phase of the StorageCluster is a simple, high-level summary of where the StorageCluster is in its lifecycle. The condition array contains more detailed information about the state of the cluster.
"""
return pulumi.get(self, "phase")
@phase.setter
def phase(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "phase", value)
@property
@pulumi.getter
def storage(self) -> Optional[pulumi.Input['StorageClusterStatusStorageArgs']]:
"""
Contains details of storage in the cluster.
"""
return pulumi.get(self, "storage")
@storage.setter
def storage(self, value: Optional[pulumi.Input['StorageClusterStatusStorageArgs']]):
pulumi.set(self, "storage", value)
@pulumi.input_type
class StorageClusterStatusConditionsArgs:
def __init__(__self__, *,
reason: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] reason: Reason is a human-readable message indicating details about the current state of the cluster.
:param pulumi.Input[str] status: Status of the condition.
:param pulumi.Input[str] type: Type of the condition.
"""
if reason is not None:
pulumi.set(__self__, "reason", reason)
if status is not None:
pulumi.set(__self__, "status", status)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def reason(self) -> Optional[pulumi.Input[str]]:
"""
Reason is a human-readable message indicating details about the current state of the cluster.
"""
return pulumi.get(self, "reason")
@reason.setter
def reason(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "reason", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
Status of the condition.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Type of the condition.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@pulumi.input_type
class StorageClusterStatusStorageArgs:
def __init__(__self__, *,
storage_nodes_per_zone: Optional[pulumi.Input[int]] = None):
"""
Contains details of storage in the cluster.
:param pulumi.Input[int] storage_nodes_per_zone: The number of storage nodes per zone in the cluster.
"""
if storage_nodes_per_zone is not None:
pulumi.set(__self__, "storage_nodes_per_zone", storage_nodes_per_zone)
@property
@pulumi.getter(name="storageNodesPerZone")
def storage_nodes_per_zone(self) -> Optional[pulumi.Input[int]]:
"""
The number of storage nodes per zone in the cluster.
"""
return pulumi.get(self, "storage_nodes_per_zone")
@storage_nodes_per_zone.setter
def storage_nodes_per_zone(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "storage_nodes_per_zone", value)
@pulumi.input_type
class StorageNodeSpecArgs:
def __init__(__self__, *,
cloud_storage: Optional[pulumi.Input['StorageNodeSpecCloudStorageArgs']] = None,
version: Optional[pulumi.Input[str]] = None):
"""
The desired behavior of the storage node. Currently, changing the spec does not affect the actual storage node in the cluster. Eventually, the spec in StorageNode will override the spec from StorageCluster so that configuration can be overridden at the node level.
:param pulumi.Input['StorageNodeSpecCloudStorageArgs'] cloud_storage: Details of storage on the node for cloud environments.
:param pulumi.Input[str] version: Version of the storage driver on the node.
"""
if cloud_storage is not None:
pulumi.set(__self__, "cloud_storage", cloud_storage)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="cloudStorage")
def cloud_storage(self) -> Optional[pulumi.Input['StorageNodeSpecCloudStorageArgs']]:
"""
Details of storage on the node for cloud environments.
"""
return pulumi.get(self, "cloud_storage")
@cloud_storage.setter
def cloud_storage(self, value: Optional[pulumi.Input['StorageNodeSpecCloudStorageArgs']]):
pulumi.set(self, "cloud_storage", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
Version of the storage driver on the node.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
@pulumi.input_type
class StorageNodeSpecCloudStorageArgs:
def __init__(__self__, *,
drive_configs: Optional[pulumi.Input[Sequence[pulumi.Input['StorageNodeSpecCloudStorageDriveConfigsArgs']]]] = None):
"""
Details of storage on the node for cloud environments.
:param pulumi.Input[Sequence[pulumi.Input['StorageNodeSpecCloudStorageDriveConfigsArgs']]] drive_configs: List of cloud drive configs for the storage node.
"""
if drive_configs is not None:
pulumi.set(__self__, "drive_configs", drive_configs)
@property
@pulumi.getter(name="driveConfigs")
def drive_configs(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageNodeSpecCloudStorageDriveConfigsArgs']]]]:
"""
List of cloud drive configs for the storage node.
"""
return pulumi.get(self, "drive_configs")
@drive_configs.setter
def drive_configs(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageNodeSpecCloudStorageDriveConfigsArgs']]]]):
pulumi.set(self, "drive_configs", value)
@pulumi.input_type
class StorageNodeSpecCloudStorageDriveConfigsArgs:
def __init__(__self__, *,
iops: Optional[pulumi.Input[int]] = None,
options: Optional[pulumi.Input[Mapping[str, Any]]] = None,
size_in_gi_b: Optional[pulumi.Input[int]] = None,
type: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[int] iops: IOPS required from the cloud drive.
:param pulumi.Input[Mapping[str, Any]] options: Additional options for the cloud drive.
:param pulumi.Input[int] size_in_gi_b: Size of cloud drive in GiB.
:param pulumi.Input[str] type: Type of cloud drive.
"""
if iops is not None:
pulumi.set(__self__, "iops", iops)
if options is not None:
pulumi.set(__self__, "options", options)
if size_in_gi_b is not None:
pulumi.set(__self__, "size_in_gi_b", size_in_gi_b)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def iops(self) -> Optional[pulumi.Input[int]]:
"""
IOPS required from the cloud drive.
"""
return pulumi.get(self, "iops")
@iops.setter
def iops(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "iops", value)
@property
@pulumi.getter
def options(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Additional options for the cloud drive.
"""
return pulumi.get(self, "options")
@options.setter
def options(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "options", value)
@property
@pulumi.getter(name="sizeInGiB")
def size_in_gi_b(self) -> Optional[pulumi.Input[int]]:
"""
Size of cloud drive in GiB.
"""
return pulumi.get(self, "size_in_gi_b")
@size_in_gi_b.setter
def size_in_gi_b(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size_in_gi_b", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Type of cloud drive.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
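The drive-config class above can likewise be sketched as a plain dict (the drive type, size, and IOPS values are illustrative assumptions, not validated defaults):

```python
# Illustrative cloud drive config: a 150 GiB drive with provisioned IOPS and
# one provider-specific option. Keys mirror the snake_case fields of
# StorageNodeSpecCloudStorageDriveConfigsArgs; "gp3" is a hypothetical type.
drive_config = {
    "type": "gp3",
    "size_in_gi_b": 150,
    "iops": 4500,
    "options": {"encrypted": "true"},
}
```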
@pulumi.input_type
class StorageNodeStatusArgs:
def __init__(__self__, *,
conditions: Optional[pulumi.Input[Sequence[pulumi.Input['StorageNodeStatusConditionsArgs']]]] = None,
geography: Optional[pulumi.Input['StorageNodeStatusGeographyArgs']] = None,
network: Optional[pulumi.Input['StorageNodeStatusNetworkArgs']] = None,
node_uid: Optional[pulumi.Input[str]] = None,
phase: Optional[pulumi.Input[str]] = None):
"""
Most recently observed status of the storage node. The data may not be up to date.
:param pulumi.Input[Sequence[pulumi.Input['StorageNodeStatusConditionsArgs']]] conditions: Contains details for the current condition of this storage node.
:param pulumi.Input['StorageNodeStatusGeographyArgs'] geography: Contains topology information for the storage node.
:param pulumi.Input['StorageNodeStatusNetworkArgs'] network: Contains network information for the storage node.
:param pulumi.Input[str] node_uid: Unique ID of the storage node.
:param pulumi.Input[str] phase: Phase of the StorageNode is a simple, high-level summary of where the StorageNode is in its lifecycle. The condition array contains more detailed information about the state of the node.
"""
if conditions is not None:
pulumi.set(__self__, "conditions", conditions)
if geography is not None:
pulumi.set(__self__, "geography", geography)
if network is not None:
pulumi.set(__self__, "network", network)
if node_uid is not None:
pulumi.set(__self__, "node_uid", node_uid)
if phase is not None:
pulumi.set(__self__, "phase", phase)
@property
@pulumi.getter
def conditions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['StorageNodeStatusConditionsArgs']]]]:
"""
Contains details for the current condition of this storage node.
"""
return pulumi.get(self, "conditions")
@conditions.setter
def conditions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['StorageNodeStatusConditionsArgs']]]]):
pulumi.set(self, "conditions", value)
@property
@pulumi.getter
def geography(self) -> Optional[pulumi.Input['StorageNodeStatusGeographyArgs']]:
"""
Contains topology information for the storage node.
"""
return pulumi.get(self, "geography")
@geography.setter
def geography(self, value: Optional[pulumi.Input['StorageNodeStatusGeographyArgs']]):
pulumi.set(self, "geography", value)
@property
@pulumi.getter
def network(self) -> Optional[pulumi.Input['StorageNodeStatusNetworkArgs']]:
"""
Contains network information for the storage node.
"""
return pulumi.get(self, "network")
@network.setter
def network(self, value: Optional[pulumi.Input['StorageNodeStatusNetworkArgs']]):
pulumi.set(self, "network", value)
@property
@pulumi.getter(name="nodeUid")
def node_uid(self) -> Optional[pulumi.Input[str]]:
"""
Unique ID of the storage node.
"""
return pulumi.get(self, "node_uid")
@node_uid.setter
def node_uid(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "node_uid", value)
@property
@pulumi.getter
def phase(self) -> Optional[pulumi.Input[str]]:
"""
Phase of the StorageNode is a simple, high-level summary of where the StorageNode is in its lifecycle. The condition array contains more detailed information about the state of the node.
"""
return pulumi.get(self, "phase")
@phase.setter
def phase(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "phase", value)
@pulumi.input_type
class StorageNodeStatusConditionsArgs:
def __init__(__self__, *,
reason: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] reason: Reason is a human-readable message indicating details about the current state of the node.
:param pulumi.Input[str] status: Status of the condition.
:param pulumi.Input[str] type: Type of the condition.
"""
if reason is not None:
pulumi.set(__self__, "reason", reason)
if status is not None:
pulumi.set(__self__, "status", status)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def reason(self) -> Optional[pulumi.Input[str]]:
"""
Reason is a human-readable message indicating details about the current state of the node.
"""
return pulumi.get(self, "reason")
@reason.setter
def reason(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "reason", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
"""
Status of the condition.
"""
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Type of the condition.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@pulumi.input_type
class StorageNodeStatusGeographyArgs:
def __init__(__self__, *,
rack: Optional[pulumi.Input[str]] = None,
region: Optional[pulumi.Input[str]] = None,
zone: Optional[pulumi.Input[str]] = None):
"""
Contains topology information for the storage node.
:param pulumi.Input[str] rack: Rack on which the storage node is placed.
:param pulumi.Input[str] region: Region in which the storage node is placed.
:param pulumi.Input[str] zone: Zone in which the storage node is placed.
"""
if rack is not None:
pulumi.set(__self__, "rack", rack)
if region is not None:
pulumi.set(__self__, "region", region)
if zone is not None:
pulumi.set(__self__, "zone", zone)
@property
@pulumi.getter
def rack(self) -> Optional[pulumi.Input[str]]:
"""
Rack on which the storage node is placed.
"""
return pulumi.get(self, "rack")
@rack.setter
def rack(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "rack", value)
@property
@pulumi.getter
def region(self) -> Optional[pulumi.Input[str]]:
"""
Region in which the storage node is placed.
"""
return pulumi.get(self, "region")
@region.setter
def region(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "region", value)
@property
@pulumi.getter
def zone(self) -> Optional[pulumi.Input[str]]:
"""
Zone in which the storage node is placed.
"""
return pulumi.get(self, "zone")
@zone.setter
def zone(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "zone", value)
@pulumi.input_type
class StorageNodeStatusNetworkArgs:
def __init__(__self__, *,
data_ip: Optional[pulumi.Input[str]] = None,
mgmt_ip: Optional[pulumi.Input[str]] = None):
"""
:param pulumi.Input[str] data_ip: IP address used by the storage driver for data traffic.
:param pulumi.Input[str] mgmt_ip: IP address used by the storage driver for management traffic.
"""
if data_ip is not None:
pulumi.set(__self__, "data_ip", data_ip)
if mgmt_ip is not None:
pulumi.set(__self__, "mgmt_ip", mgmt_ip)
@property
@pulumi.getter(name="dataIP")
def data_ip(self) -> Optional[pulumi.Input[str]]:
"""
IP address used by the storage driver for data traffic.
"""
return pulumi.get(self, "data_ip")
@data_ip.setter
def data_ip(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "data_ip", value)
@property
@pulumi.getter(name="mgmtIP")
def mgmt_ip(self) -> Optional[pulumi.Input[str]]:
"""
IP address used by the storage driver for management traffic.
"""
return pulumi.get(self, "mgmt_ip")
@mgmt_ip.setter
def mgmt_ip(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "mgmt_ip", value)
from django.contrib import admin
from src.estate.models import Advertisement, Complex
from src.users.models import Contact

admin.site.register(Advertisement)
admin.site.register(Complex)
class Car:
    def __init__(self, name='Banana', amount=10_000_000):
        self.car_name = name
        self.car_amount = amount

    def car_display_price(self):
        return f'{self.car_amount:,}'
import vtk
import math
import time
import numpy as np
from unittest import TestCase
from ..surface import ListShellNhood, ListSphereNhood
from ..model import ModelCSRV
from ..utils import disperse_io, iso_surface, poly_decimate
from matplotlib import pyplot as plt, rcParams
# Global variables
RAD_RG = np.arange(2, 80, 3)
SHELL_THICKS = (3, 6, 9, 12)
THICK_COLORS = ('b', 'r', 'g', 'c')
CONV_ITER = 1000 # 100 # 1000
MAX_ITER = 100000
N_SIMS = 100 # 1000
PERCENT = 5 # %
BAR_WIDTH = .4
OUT_DIR = './surf/test/out/nhood_vol'
rcParams['axes.labelsize'] = 14
rcParams['xtick.labelsize'] = 14
rcParams['ytick.labelsize'] = 14
rcParams['patch.linewidth'] = 2
class TestListNhoodVol(TestCase):
    # Check precision for Nhoods using VOIs as 3D surfaces and Monte Carlo method
    def test_volume_MCS(self):
        # return

        # VOI Creation
        # cuber = vtk.vtkCubeSource()
        # off = RAD_RG.max()
        # cuber.SetBounds(-off - SHELL_THICKS[3], off + SHELL_THICKS[3], -off - SHELL_THICKS[3], off + SHELL_THICKS[3],
        #                 0, off + SHELL_THICKS[3])
        # cuber.Update()
        # voi = cuber.GetOutput()
        # orienter = vtk.vtkPolyDataNormals()
        # orienter.SetInputData(voi)
        # orienter.AutoOrientNormalsOn()
        # orienter.Update()
        # voi = orienter.GetOutput()
        # disperse_io.save_vtp(voi, OUT_DIR + '/cube_voi.vtp')
        # voi_center = (0, 0, 0)
        r_m_h = 5
        rad_max = RAD_RG.max()
        max_size = int(math.ceil(2 * rad_max + SHELL_THICKS[3]))
        seg = np.ones(shape=(max_size, max_size, max_size), dtype=bool)
        voi_center = np.asarray((.5 * seg.shape[0], .5 * seg.shape[1], .5 * seg.shape[2]))
        X = np.meshgrid(np.arange(seg.shape[0]), np.arange(seg.shape[1]), np.arange(seg.shape[2]))[0]
        seg[X > voi_center[0] + r_m_h] = False
        voi = iso_surface(seg, .5, closed=True, normals='outwards')
        voi = poly_decimate(voi, .9)

        # Computations loop
        sph_errs = np.zeros(shape=(len(RAD_RG), N_SIMS), dtype=np.float32)
        she_errs = np.zeros(shape=(len(RAD_RG), N_SIMS), dtype=np.float32)
        sph_times = np.zeros(shape=N_SIMS, dtype=np.float32)
        she_times = np.zeros(shape=N_SIMS, dtype=np.float32)
        for i in range(N_SIMS):

            # Computations
            hold_time = time.time()
            spheres = ListSphereNhood(center=voi_center, radius_rg=RAD_RG, voi=voi, conv_iter=CONV_ITER,
                                      max_iter=MAX_ITER)
            sph_vol = np.asarray(spheres.get_volumes())
            sph_times[i] = time.time() - hold_time
            hold_time = time.time()
            shells = ListShellNhood(center=voi_center, radius_rg=RAD_RG, voi=voi, conv_iter=CONV_ITER,
                                    max_iter=MAX_ITER, thick=SHELL_THICKS[0])
            she_vol = np.asarray(shells.get_volumes())
            she_times[i] = time.time() - hold_time

            # Computing the errors
            h_arr = RAD_RG - r_m_h
            sph_volr = np.pi * ((4. / 3.) * (RAD_RG ** 3) - RAD_RG * (h_arr ** 2) + (h_arr ** 3) * (1. / 3.))
            rad_rg_1, rad_rg_2 = RAD_RG - .5 * SHELL_THICKS[0], RAD_RG + .5 * SHELL_THICKS[0]
            h_arr_1, h_arr_2 = rad_rg_1 - r_m_h, rad_rg_2 - r_m_h
            she_volr = np.pi * ((4. / 3.) * (rad_rg_2 ** 3) - rad_rg_2 * (h_arr_2 ** 2) + (h_arr_2 ** 3) * (1. / 3.)
                                - (4. / 3.) * (rad_rg_1 ** 3) + rad_rg_1 * (h_arr_1 ** 2) - (h_arr_1 ** 3) * (1. / 3.))
            # sph_volr = (4. / 3.) * np.pi * (RAD_RG ** 3)
            # she_volr = (4. / 3.) * np.pi * \
            #            ((RAD_RG + 0.5 * SHELL_THICKS[0]) ** 3 - (RAD_RG - 0.5 * SHELL_THICKS[0]) ** 3)
            sph_errs[:, i] = 100. * (sph_vol - sph_volr) / sph_volr
            she_errs[:, i] = 100. * (she_vol - she_volr) / she_volr

        # Plotting error
        plt.figure()
        # plt.title('Averaged fractional error for volume estimation (MCS-SPHERE)')
        plt.xlabel('Scale')
        plt.ylabel('E [%]')
        ic_low = np.percentile(sph_errs, PERCENT, axis=1, interpolation='linear')
        ic_med = np.percentile(sph_errs, 50, axis=1, interpolation='linear')
        ic_high = np.percentile(sph_errs, 100 - PERCENT, axis=1, interpolation='linear')
        plt.fill_between(RAD_RG, ic_low, ic_high, alpha=.5, color='gray', edgecolor='w')
        plt.plot(RAD_RG, np.zeros(shape=len(RAD_RG)), 'k--', linewidth=1)
        plt.plot(RAD_RG, ic_low, linewidth=2, color='k', linestyle='--')
        plt.plot(RAD_RG, ic_med, linewidth=2, color='k')
        plt.plot(RAD_RG, ic_high, linewidth=2, color='k', linestyle='--')
        plt.xlim(r_m_h, RAD_RG.max())
        # plt.ylim(-5, 5)
        plt.ylim(-25, 25)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/mcs_sph_err_' + str(CONV_ITER) + '.png')
        plt.close()

        plt.figure()
        # plt.title('Averaged fractional error for volume estimation (MCS-SHELL)')
        plt.xlabel('Scale')
        plt.ylabel('E [%]')
        ic_low = np.percentile(she_errs, PERCENT, axis=1, interpolation='linear')
        ic_med = np.percentile(she_errs, 50, axis=1, interpolation='linear')
        ic_high = np.percentile(she_errs, 100 - PERCENT, axis=1, interpolation='linear')
        plt.fill_between(RAD_RG, ic_low, ic_high, alpha=.5, color='gray', edgecolor='w')
        plt.plot(RAD_RG, np.zeros(shape=len(RAD_RG)), 'k--', linewidth=1)
        plt.plot(RAD_RG, ic_low, linewidth=2, color='k', linestyle='--')
        plt.plot(RAD_RG, ic_med, linewidth=2, color='k')
        plt.plot(RAD_RG, ic_high, linewidth=2, color='k', linestyle='--')
        plt.xlim(r_m_h, RAD_RG.max())
        # plt.ylim(-5, 5)
        plt.ylim(-25, 25)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/mcs_she_err_' + str(CONV_ITER) + '.png')
        plt.close()

        # Plotting times
        plt.figure()
        # plt.title('Averaged time for computing unit')
        plt.ylabel('Time [s]')
        sph_ic_low = np.percentile(sph_times, PERCENT, axis=0, interpolation='linear')
        sph_ic_med = np.percentile(sph_times, 50, axis=0, interpolation='linear')
        sph_ic_high = np.percentile(sph_times, 100 - PERCENT, axis=0, interpolation='linear')
        she_ic_low = np.percentile(she_times, PERCENT, axis=0, interpolation='linear')
        she_ic_med = np.percentile(she_times, 50, axis=0, interpolation='linear')
        she_ic_high = np.percentile(she_times, 100 - PERCENT, axis=0, interpolation='linear')
        plt.bar(1, sph_ic_med, BAR_WIDTH, color='blue', linewidth=2, edgecolor='k')
        plt.errorbar(1, sph_ic_med,
                     yerr=np.asarray([[sph_ic_med - sph_ic_low, sph_ic_high - sph_ic_med], ]).reshape(2, 1),
                     ecolor='k', elinewidth=4, capthick=4, capsize=8)
        plt.bar(2, she_ic_med, BAR_WIDTH, color='blue', linewidth=2, edgecolor='k')
        plt.errorbar(2, she_ic_med,
                     yerr=np.asarray([[she_ic_med - she_ic_low, she_ic_high - she_ic_med], ]).reshape(2, 1),
                     ecolor='k', elinewidth=4, capthick=4, capsize=8)
        plt.xticks((1, 2), ('SPHERE', 'SHELL'))
        plt.xlim(0.5, 2.5)
        plt.ylim(0, 1.8)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/mcs_time_' + str(CONV_ITER) + '.png')
        plt.close()
    # Check precision for Nhoods using VOIs as 3D arrays and Monte Carlo method
    def test_volume_MCA(self):
        # return

        # VOI array creation
        rad_max = RAD_RG.max()
        max_size = int(math.ceil(2 * rad_max + SHELL_THICKS[3]))
        voi = np.ones(shape=(max_size, max_size, max_size), dtype=bool)
        voi_center = (.5 * voi.shape[0], .5 * voi.shape[1], .5 * voi.shape[2])
        X = np.meshgrid(np.arange(voi.shape[0]), np.arange(voi.shape[1]), np.arange(voi.shape[2]))[0]
        voi[X > voi_center[0]] = False

        # Computations loop
        sph_errs = np.zeros(shape=(len(RAD_RG), N_SIMS), dtype=np.float32)
        she_errs = np.zeros(shape=(len(RAD_RG), N_SIMS), dtype=np.float32)
        sph_times = np.zeros(shape=N_SIMS, dtype=np.float32)
        she_times = np.zeros(shape=N_SIMS, dtype=np.float32)
        for i in range(N_SIMS):

            # Computations
            hold_time = time.time()
            spheres = ListSphereNhood(center=voi_center, radius_rg=RAD_RG, voi=voi, conv_iter=CONV_ITER,
                                      max_iter=MAX_ITER)
            sph_vol = np.asarray(spheres.get_volumes())
            sph_times[i] = time.time() - hold_time
            hold_time = time.time()
            shells = ListShellNhood(center=voi_center, radius_rg=RAD_RG, voi=voi, conv_iter=CONV_ITER,
                                    max_iter=MAX_ITER, thick=SHELL_THICKS[0])
            she_vol = np.asarray(shells.get_volumes())
            she_times[i] = time.time() - hold_time

            # Computing the errors
            sph_volr = .5 * (4. / 3.) * np.pi * (RAD_RG ** 3)
            she_volr = .5 * (4. / 3.) * np.pi * \
                       ((RAD_RG + 0.5 * SHELL_THICKS[0]) ** 3 - (RAD_RG - 0.5 * SHELL_THICKS[0]) ** 3)
            sph_errs[:, i] = 100. * (sph_vol - sph_volr) / sph_volr
            she_errs[:, i] = 100. * (she_vol - she_volr) / she_volr

        # Plotting error
        plt.figure()
        # plt.title('Averaged fractional error for volume estimation (MCA-SPHERE)')
        plt.xlabel('Scale')
        plt.ylabel('E [%]')
        ic_low = np.percentile(sph_errs, PERCENT, axis=1, interpolation='linear')
        ic_med = np.percentile(sph_errs, 50, axis=1, interpolation='linear')
        ic_high = np.percentile(sph_errs, 100 - PERCENT, axis=1, interpolation='linear')
        plt.fill_between(RAD_RG, ic_low, ic_high, alpha=.5, color='gray', edgecolor='w')
        plt.plot(RAD_RG, np.zeros(shape=len(RAD_RG)), 'k--', linewidth=1)
        plt.plot(RAD_RG, ic_low, linewidth=2, color='k', linestyle='--')
        plt.plot(RAD_RG, ic_med, linewidth=2, color='k')
        plt.plot(RAD_RG, ic_high, linewidth=2, color='k', linestyle='--')
        plt.xlim(RAD_RG.min(), RAD_RG.max())
        plt.ylim(-100, 100)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/mca_sph_err_' + str(CONV_ITER) + '.png')
        plt.close()

        plt.figure()
        # plt.title('Averaged fractional error for volume estimation (MCA-SHELL)')
        plt.xlabel('Scale')
        plt.ylabel('E [%]')
        ic_low = np.percentile(she_errs, PERCENT, axis=1, interpolation='linear')
        ic_med = np.percentile(she_errs, 50, axis=1, interpolation='linear')
        ic_high = np.percentile(she_errs, 100 - PERCENT, axis=1, interpolation='linear')
        plt.fill_between(RAD_RG, ic_low, ic_high, alpha=.5, color='gray', edgecolor='w')
        plt.plot(RAD_RG, np.zeros(shape=len(RAD_RG)), 'k--', linewidth=1)
        plt.plot(RAD_RG, ic_low, linewidth=2, color='k', linestyle='--')
        plt.plot(RAD_RG, ic_med, linewidth=2, color='k')
        plt.plot(RAD_RG, ic_high, linewidth=2, color='k', linestyle='--')
        plt.xlim(RAD_RG.min(), RAD_RG.max())
        plt.ylim(-100, 100)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/mca_she_err_' + str(CONV_ITER) + '.png')
        plt.close()

        # Plotting times
        plt.figure()
        # plt.title('Averaged time for computing unit')
        plt.ylabel('Time [s]')
        sph_ic_low = np.percentile(sph_times, PERCENT, axis=0, interpolation='linear')
        sph_ic_med = np.percentile(sph_times, 50, axis=0, interpolation='linear')
        sph_ic_high = np.percentile(sph_times, 100 - PERCENT, axis=0, interpolation='linear')
        she_ic_low = np.percentile(she_times, PERCENT, axis=0, interpolation='linear')
        she_ic_med = np.percentile(she_times, 50, axis=0, interpolation='linear')
        she_ic_high = np.percentile(she_times, 100 - PERCENT, axis=0, interpolation='linear')
        plt.bar(1, sph_ic_med, BAR_WIDTH, color='blue', linewidth=2, edgecolor='k')
        plt.errorbar(1, sph_ic_med,
                     yerr=np.asarray([[sph_ic_med - sph_ic_low, sph_ic_high - sph_ic_med], ]).reshape(2, 1),
                     ecolor='k', elinewidth=4, capthick=4, capsize=8)
        plt.bar(2, she_ic_med, BAR_WIDTH, color='blue', linewidth=2, edgecolor='k')
        plt.errorbar(2, she_ic_med,
                     yerr=np.asarray([[she_ic_med - she_ic_low, she_ic_high - she_ic_med], ]).reshape(2, 1),
                     ecolor='k', elinewidth=4, capthick=4, capsize=8)
        plt.xticks((1, 2), ('SPHERE', 'SHELL'))
        plt.ylim(0, 1.8)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/mca_time_' + str(CONV_ITER) + '.png')
        plt.close()
    # Check precision for Nhoods using VOIs as 3D arrays and Direct Sum method
    def test_volume_DSA(self):
        # return

        # VOI array creation
        r_m_h = 5
        rad_max = RAD_RG.max()
        max_size = int(math.ceil(2 * rad_max + SHELL_THICKS[2]))
        voi = np.ones(shape=(max_size, max_size, max_size), dtype=bool)
        voi_center = (.5 * voi.shape[0], .5 * voi.shape[1], .5 * voi.shape[2])
        X = np.meshgrid(np.arange(voi.shape[0]), np.arange(voi.shape[1]), np.arange(voi.shape[2]))[0]
        # voi[X > voi_center[0]] = False
        voi[X > voi_center[0] + r_m_h] = False

        # Computations loop
        # Computations for sphere
        hold_time = time.time()
        spheres = ListSphereNhood(center=voi_center, radius_rg=RAD_RG, voi=voi, conv_iter=None,
                                  max_iter=None)
        sph_vol = np.asarray(spheres.get_volumes())
        sph_times = time.time() - hold_time
        hold_time = time.time()
        # sph_volr = .5 * (4. / 3.) * np.pi * (RAD_RG ** 3)
        h_arr = RAD_RG - r_m_h
        sph_volr = np.pi * ((4. / 3.) * (RAD_RG ** 3) - RAD_RG * (h_arr ** 2) + (h_arr ** 3) * (1. / 3.))
        sph_errs = 100. * (sph_vol - sph_volr) / sph_volr

        # Computations for shell
        she_errs = np.zeros(shape=(len(RAD_RG), len(SHELL_THICKS)), dtype=np.float32)
        she_times = np.zeros(shape=N_SIMS, dtype=np.float32)
        for i, thick in enumerate(SHELL_THICKS):
            shells = ListShellNhood(center=voi_center, radius_rg=RAD_RG, voi=voi, conv_iter=None,
                                    max_iter=None, thick=thick)
            she_vol = np.asarray(shells.get_volumes())
            she_times[i] = time.time() - hold_time
            # she_volr = .5 * (4. / 3.) * np.pi * ((RAD_RG + 0.5 * thick) ** 3 - (RAD_RG - 0.5 * thick) ** 3)
            rad_rg_1, rad_rg_2 = RAD_RG - .5 * thick, RAD_RG + .5 * thick
            h_arr_1, h_arr_2 = rad_rg_1 - r_m_h, rad_rg_2 - r_m_h
            she_volr = np.pi * ((4. / 3.) * (rad_rg_2 ** 3) - rad_rg_2 * (h_arr_2 ** 2) + (h_arr_2 ** 3) * (1. / 3.)
                                - (4. / 3.) * (rad_rg_1 ** 3) + rad_rg_1 * (h_arr_1 ** 2) - (h_arr_1 ** 3) * (1. / 3.))
            she_errs[:, i] = 100. * (she_vol - she_volr) / she_volr

        # Plotting error
        plt.figure()
        # plt.title('Averaged fractional error for volume estimation (DSA-SPHERE)')
        plt.xlabel('Scale')
        plt.ylabel('E [%]')
        plt.plot(RAD_RG, np.zeros(shape=len(RAD_RG)), 'k--', linewidth=1)
        plt.plot(RAD_RG, sph_errs, linewidth=2, color='k')
        plt.xlim(r_m_h, RAD_RG.max())
        plt.ylim(-10, 10)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/dsa_sph_err_' + str(CONV_ITER) + '.png')
        plt.close()

        plt.figure()
        # plt.title('Averaged fractional error for volume estimation (DSA-SHELL)')
        plt.xlabel('Scale')
        plt.ylabel('E [%]')
        plt.plot(RAD_RG, np.zeros(shape=len(RAD_RG)), 'k--', linewidth=1)
        for i, thick in enumerate(SHELL_THICKS):
            plt.plot(RAD_RG, she_errs[:, i], linewidth=2, color=THICK_COLORS[i], label=str(thick))
        plt.xlim(r_m_h, RAD_RG.max())
        plt.ylim(-10, 10)
        plt.legend(title='Thickness', loc=4)
        plt.tight_layout()
        plt.savefig(OUT_DIR + '/dsa_she_err_' + str(CONV_ITER) + '.png')
        # plt.show(block=True)
        plt.close()

        # Plotting times
        plt.figure()
        # plt.title('Averaged time for computing unit')
        plt.ylabel('Time [s]')
        plt.bar(1, sph_times, BAR_WIDTH, color='blue', linewidth=2, edgecolor='k')
        plt.bar(2, she_times.mean(), BAR_WIDTH, color='blue', linewidth=2, edgecolor='k')
        plt.xticks((1, 2), ('SPHERE', 'SHELL'))
        plt.ylim(0, 1.8)
        plt.tight_layout()
        plt.savefig(OUT_DIR + '/dsa_time_' + str(CONV_ITER) + '.png')
        # plt.show(block=True)
        plt.close()
    # Check computation times for Nhoods using VOIs as 3D arrays (Direct Sum) and as 3D surfaces (Monte Carlo)
    def test_times(self):
        # return

        # VOI array creation
        N_SIMS = 1
        rad_max = RAD_RG.max()
        max_size = int(math.ceil(2 * 200 + SHELL_THICKS[2]))
        voi = np.ones(shape=(max_size, max_size, max_size), dtype=bool)
        voi_center = (.5 * voi.shape[0], .5 * voi.shape[1], .5 * voi.shape[2])
        X = np.meshgrid(np.arange(voi.shape[0]), np.arange(voi.shape[1]), np.arange(voi.shape[2]))[0]
        voi[X > voi_center[0]] = False
        voi_surf = iso_surface(voi, .5, closed=True, normals='outwards')
        voi_surf = poly_decimate(voi_surf, .9)

        # Loop for maximum radius
        n_rads_arr = np.arange(15, 200, 5)
        sph_times_dsa = np.zeros(shape=(len(n_rads_arr), N_SIMS), dtype=np.float32)
        she_times_dsa = np.zeros(shape=(len(n_rads_arr), N_SIMS), dtype=np.float32)
        sph_times_mcs = np.zeros(shape=(len(n_rads_arr), N_SIMS), dtype=np.float32)
        she_times_mcs = np.zeros(shape=(len(n_rads_arr), N_SIMS), dtype=np.float32)

        # Loop for the number of particles within the neighborhood
        for i, max_rad in enumerate(n_rads_arr[1:]):
            rad_rg = np.arange(10, max_rad, 5)
            tot_vol, tot_area = 0., 0.
            for rad in rad_rg:
                tot_vol += ((4. / 3.) * np.pi * rad * rad * rad)
                tot_area += (4. * np.pi * rad * rad)

            # Loop for the number of simulations
            for j in range(N_SIMS):

                # Computations for DSA
                hold_time = time.time()
                spheres = ListSphereNhood(center=voi_center, radius_rg=rad_rg, voi=voi, conv_iter=None,
                                          max_iter=None)
                spheres.get_volumes()
                sph_times_dsa[i, j] = time.time() - hold_time
                hold_time = time.time()
                shells = ListShellNhood(center=voi_center, radius_rg=rad_rg, voi=voi, conv_iter=None,
                                        max_iter=None, thick=6)
                shells.get_volumes()
                she_times_dsa[i, j] = time.time() - hold_time

                # Computations for MCS
                hold_time = time.time()
                spheres = ListSphereNhood(center=voi_center, radius_rg=rad_rg, voi=voi_surf, conv_iter=CONV_ITER,
                                          max_iter=MAX_ITER)
                spheres.get_volumes()
                sph_times_mcs[i, j] = time.time() - hold_time
                hold_time = time.time()
                shells = ListShellNhood(center=voi_center, radius_rg=rad_rg, voi=voi_surf, conv_iter=CONV_ITER,
                                        max_iter=MAX_ITER, thick=6)
                shells.get_volumes()
                she_times_mcs[i, j] = time.time() - hold_time
            sph_times_dsa[i, :] /= float(tot_vol)
            she_times_dsa[i, :] /= float(tot_area)
            sph_times_mcs[i, :] /= float(tot_vol)
            she_times_mcs[i, :] /= float(tot_area)

        # Plotting times
        plt.figure()
        plt.title('HALF-SPHERE')
        plt.ylabel('Time/v.u. [s]')
        plt.xlabel('Maximum scale')
        # ic_low = np.percentile(sph_times_dsa, PERCENT, axis=1, interpolation='linear')
        # ic_med = np.percentile(sph_times_dsa, 50, axis=1, interpolation='linear')
        # ic_high = np.percentile(sph_times_dsa, 100 - PERCENT, axis=1, interpolation='linear')
        # plt.plot(n_rads_arr, ic_med, color='blue', linewidth=2, label='3D Array')
        plt.plot(n_rads_arr, sph_times_dsa[:, 0], color='blue', linewidth=2, label='3D Array')
        # plt.fill_between(n_rads_arr, ic_low, ic_high, alpha=0.5, color='blue', edgecolor='w')
        # ic_low = np.percentile(sph_times_mcs, PERCENT, axis=1, interpolation='linear')
        # ic_med = np.percentile(sph_times_mcs, 50, axis=1, interpolation='linear')
        # ic_high = np.percentile(sph_times_mcs, 100 - PERCENT, axis=1, interpolation='linear')
        # plt.plot(n_rads_arr, ic_med, color='red', linewidth=2, linestyle='--', label='MCS')
        plt.plot(n_rads_arr, sph_times_mcs[:, 0], color='red', linewidth=2, label='MCS')
        # plt.fill_between(n_rads_arr, ic_low, ic_high, alpha=0.5, color='red', edgecolor='w')
        plt.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))
        plt.legend(loc=1)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/times_vol_sphere.png')
        plt.close()

        # Plotting times
        plt.figure()
        plt.title('HALF-SHELL')
        plt.ylabel('Time/a.u. [s]')
        plt.xlabel('Maximum scale')
        # ic_low = np.percentile(she_times_dsa, PERCENT, axis=1, interpolation='linear')
        # ic_med = np.percentile(she_times_dsa, 50, axis=1, interpolation='linear')
        # ic_high = np.percentile(she_times_dsa, 100 - PERCENT, axis=1, interpolation='linear')
        # plt.plot(n_rads_arr, ic_med, color='blue', linewidth=2, label='3D Array')
        plt.plot(n_rads_arr, she_times_dsa[:, 0], color='blue', linewidth=2, label='3D Array')
        # plt.fill_between(n_rads_arr, ic_low, ic_high, alpha=0.5, color='blue', edgecolor='w')
        # ic_low = np.percentile(she_times_mcs, PERCENT, axis=1, interpolation='linear')
        # ic_med = np.percentile(she_times_mcs, 50, axis=1, interpolation='linear')
        # ic_high = np.percentile(she_times_mcs, 100 - PERCENT, axis=1, interpolation='linear')
        # plt.plot(n_rads_arr, ic_med, color='red', linewidth=2, linestyle='--', label='MCS')
        plt.plot(n_rads_arr, she_times_mcs[:, 0], color='red', linewidth=2, label='MCS')
        # plt.fill_between(n_rads_arr, ic_low, ic_high, alpha=0.5, color='red', edgecolor='w')
        plt.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))
        plt.legend(loc=1)
        plt.tight_layout()
        # plt.show(block=True)
        plt.savefig(OUT_DIR + '/times_vol_shell.png')
        plt.close()
import sys


def print_d(obj):
    """ Print something to stderr """
    print(obj, file=sys.stderr)
from . costs_workflow import EdgeCostsWorkflow
import os
import tempfile
import unittest
import numpy as np
from keras_pos_embd.backend import keras
from keras_pos_embd import TrigPosEmbedding


class TestSinCosPosEmbd(unittest.TestCase):

    def test_invalid_output_dim(self):
        with self.assertRaises(NotImplementedError):
            TrigPosEmbedding(
                mode=TrigPosEmbedding.MODE_EXPAND,
                output_dim=5,
            )

    def test_missing_output_dim(self):
        with self.assertRaises(NotImplementedError):
            TrigPosEmbedding(
                mode=TrigPosEmbedding.MODE_EXPAND,
            )

    def test_brute(self):
        seq_len = np.random.randint(1, 10)
        embd_dim = np.random.randint(1, 20) * 2
        indices = np.expand_dims(np.arange(seq_len), 0)
        model = keras.models.Sequential()
        model.add(TrigPosEmbedding(
            input_shape=(seq_len,),
            mode=TrigPosEmbedding.MODE_EXPAND,
            output_dim=embd_dim,
            name='Pos-Embd',
        ))
        model.compile('adam', 'mse')
        model_path = os.path.join(tempfile.gettempdir(), 'test_trig_pos_embd_%f.h5' % np.random.random())
        model.save(model_path)
        model = keras.models.load_model(model_path, custom_objects={'TrigPosEmbedding': TrigPosEmbedding})
        model.summary()
        predicts = model.predict(indices)[0].tolist()
        for i in range(seq_len):
            for j in range(embd_dim):
                actual = predicts[i][j]
                if j % 2 == 0:
                    expect = np.sin(i / 10000.0 ** (float(j) / embd_dim))
                else:
                    expect = np.cos(i / 10000.0 ** ((j - 1.0) / embd_dim))
                self.assertAlmostEqual(expect, actual, places=6, msg=(embd_dim, i, j, expect, actual))

    def test_add(self):
        seq_len = np.random.randint(1, 10)
        embed_dim = np.random.randint(1, 20) * 2
        inputs = np.ones((1, seq_len, embed_dim))
        model = keras.models.Sequential()
        model.add(TrigPosEmbedding(
            input_shape=(seq_len, embed_dim),
            mode=TrigPosEmbedding.MODE_ADD,
            name='Pos-Embd',
        ))
        model.compile('adam', 'mse')
        model_path = os.path.join(tempfile.gettempdir(), 'test_trig_pos_embd_%f.h5' % np.random.random())
        model.save(model_path)
        model = keras.models.load_model(model_path, custom_objects={'TrigPosEmbedding': TrigPosEmbedding})
        model.summary()
        predicts = model.predict(inputs)[0].tolist()
        for i in range(seq_len):
            for j in range(embed_dim):
                actual = predicts[i][j]
                if j % 2 == 0:
                    expect = 1.0 + np.sin(i / 10000.0 ** (float(j) / embed_dim))
                else:
                    expect = 1.0 + np.cos(i / 10000.0 ** ((j - 1.0) / embed_dim))
                self.assertAlmostEqual(expect, actual, places=6, msg=(embed_dim, i, j, expect, actual))

    def test_concat(self):
        seq_len = np.random.randint(1, 10)
        feature_dim = np.random.randint(1, 20)
        embed_dim = np.random.randint(1, 20) * 2
        inputs = np.ones((1, seq_len, feature_dim))
        model = keras.models.Sequential()
        model.add(TrigPosEmbedding(
            input_shape=(seq_len, feature_dim),
            output_dim=embed_dim,
            mode=TrigPosEmbedding.MODE_CONCAT,
            name='Pos-Embd',
        ))
        model.compile('adam', 'mse')
        model_path = os.path.join(tempfile.gettempdir(), 'test_trig_pos_embd_%f.h5' % np.random.random())
        model.save(model_path)
        model = keras.models.load_model(model_path, custom_objects={'TrigPosEmbedding': TrigPosEmbedding})
        model.summary()
        predicts = model.predict(inputs)[0].tolist()
        for i in range(seq_len):
            for j in range(embed_dim):
                actual = predicts[i][feature_dim + j]
                if j % 2 == 0:
                    expect = np.sin(i / 10000.0 ** (float(j) / embed_dim))
                else:
                    expect = np.cos(i / 10000.0 ** ((j - 1.0) / embed_dim))
                self.assertAlmostEqual(expect, actual, places=6, msg=(embed_dim, i, j, expect, actual))
# Generated by Django 2.0.13 on 2019-07-12 09:30
from django.db import migrations, models
import django.db.models.deletion
import uuid


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Anime',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=250)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
            options={
                'db_table': 'Animes',
            },
        ),
        migrations.CreateModel(
            name='Episode',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('episode_order', models.IntegerField()),
                ('thumbnail', models.CharField(blank=True, max_length=250, null=True)),
                ('episode_code', models.UUIDField(default=uuid.uuid4, editable=False, unique=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
            options={
                'db_table': 'Episodes',
            },
        ),
        migrations.CreateModel(
            name='EpisodeSubtitle',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('subtitle_code', models.UUIDField(default=uuid.uuid4, editable=False, unique=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('episode_id', models.ForeignKey(db_column='episode_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Episode')),
            ],
            options={
                'db_table': 'EpisodeSubtitles',
            },
        ),
        migrations.CreateModel(
            name='Genre',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=250)),
                ('thumbnail', models.CharField(blank=True, max_length=250, null=True)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
            ],
            options={
                'db_table': 'Genres',
            },
        ),
        migrations.CreateModel(
            name='GenreTitle',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('title', models.CharField(max_length=250)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('genre_id', models.ForeignKey(db_column='genre_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Genre')),
],
options={
'db_table': 'GenreTitles',
},
),
migrations.CreateModel(
name='Language',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=250)),
('iso_code', models.CharField(max_length=250)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
],
options={
'db_table': 'Languages',
},
),
migrations.CreateModel(
name='Release',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('release_order', models.IntegerField()),
('title', models.CharField(max_length=250)),
('started_airing', models.DateTimeField()),
('stopped_airing', models.DateTimeField(blank=True, null=True)),
('poster', models.CharField(blank=True, max_length=250, null=True)),
('background', models.CharField(blank=True, max_length=250, null=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('anime_id', models.ForeignKey(db_column='anime_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Anime')),
],
options={
'db_table': 'Releases',
},
),
migrations.CreateModel(
name='ReleaseAlternativeTitle',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=250)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('release_id', models.ForeignKey(db_column='release_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Release')),
],
options={
'db_table': 'ReleaseAlternativeTitles',
},
),
migrations.CreateModel(
name='ReleaseDescription',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('description', models.TextField()),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('language_id', models.ForeignKey(db_column='language_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Language')),
('release_id', models.ForeignKey(db_column='release_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Release')),
],
options={
'db_table': 'ReleaseDescriptions',
},
),
migrations.CreateModel(
name='ReleaseGenre',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
('genre_id', models.ForeignKey(db_column='genre_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Genre')),
('release_id', models.ForeignKey(db_column='release_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Release')),
],
options={
'db_table': 'ReleaseGenres',
},
),
migrations.CreateModel(
name='ReleaseType',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(max_length=250)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
],
options={
'db_table': 'ReleaseType',
},
),
migrations.AddField(
model_name='release',
name='release_type_id',
field=models.ForeignKey(db_column='release_type_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.ReleaseType'),
),
migrations.AddField(
model_name='GenreTitle',
name='language_id',
field=models.ForeignKey(db_column='language_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Language'),
),
migrations.AddField(
model_name='episodesubtitle',
name='language_id',
field=models.ForeignKey(db_column='language_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Language'),
),
migrations.AddField(
model_name='episode',
name='release_id',
field=models.ForeignKey(db_column='release_id', on_delete=django.db.models.deletion.CASCADE, to='Anime.Release'),
),
]
| 46.497297 | 142 | 0.564171 | 831 | 8,602 | 5.632972 | 0.119134 | 0.097415 | 0.098697 | 0.117496 | 0.775048 | 0.741508 | 0.734672 | 0.725059 | 0.725059 | 0.715659 | 0 | 0.008915 | 0.295861 | 8,602 | 184 | 143 | 46.75 | 0.76391 | 0.005348 | 0 | 0.638418 | 1 | 0 | 0.141454 | 0.005495 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.016949 | 0 | 0.039548 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f6cd808748b77647717ec270f90d6629b1639949 | 1,440 | py | Python | user_service_sdk/client.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | 5 | 2019-07-31T04:11:05.000Z | 2021-01-07T03:23:20.000Z | user_service_sdk/client.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | null | null | null | user_service_sdk/client.py | easyopsapis/easyops-api-python | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import user_service_sdk.api.apikey.apikey_client
import user_service_sdk.api.auth.auth_client
import user_service_sdk.api.gateway.gateway_client
import user_service_sdk.api.invitation_code.invitation_code_client
import user_service_sdk.api.ldap.ldap_client
import user_service_sdk.api.organization.organization_client
import user_service_sdk.api.user_admin.user_admin_client
class Client(object):
def __init__(self, server_ip="", server_port=0, service_name=""):
self.apikey = user_service_sdk.api.apikey.apikey_client.ApikeyClient(server_ip, server_port, service_name)
self.auth = user_service_sdk.api.auth.auth_client.AuthClient(server_ip, server_port, service_name)
self.gateway = user_service_sdk.api.gateway.gateway_client.GatewayClient(server_ip, server_port, service_name)
self.invitation_code = user_service_sdk.api.invitation_code.invitation_code_client.InvitationCodeClient(server_ip, server_port, service_name)
self.ldap = user_service_sdk.api.ldap.ldap_client.LdapClient(server_ip, server_port, service_name)
self.organization = user_service_sdk.api.organization.organization_client.OrganizationClient(server_ip, server_port, service_name)
self.user_admin = user_service_sdk.api.user_admin.user_admin_client.UserAdminClient(server_ip, server_port, service_name)
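The `Client` above is a facade: it fans one set of connection settings out to per-API sub-clients so callers configure the server once. The aggregation pattern, reduced to a stdlib sketch with hypothetical sub-client names:

```python
class _SubClient:
    """Minimal stand-in for a generated per-API client (illustrative only)."""

    def __init__(self, server_ip="", server_port=0, service_name=""):
        self.server_ip = server_ip
        self.server_port = server_port
        self.service_name = service_name


class Client:
    """Facade sharing one set of connection settings across sub-clients."""

    def __init__(self, server_ip="", server_port=0, service_name=""):
        # Every sub-client receives the same connection parameters.
        self.auth = _SubClient(server_ip, server_port, service_name)
        self.gateway = _SubClient(server_ip, server_port, service_name)


c = Client(server_ip="10.0.0.1", server_port=8080, service_name="user")
print(c.auth.server_ip)  # 10.0.0.1
```

The trade-off of this design is verbosity in `__init__`; the benefit is that `client.auth`, `client.gateway`, etc. are plain attributes that IDEs can complete.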
| 40 | 149 | 0.778472 | 194 | 1,440 | 5.365979 | 0.175258 | 0.147935 | 0.188281 | 0.228626 | 0.782901 | 0.777137 | 0.714697 | 0.176753 | 0.176753 | 0 | 0 | 0.001617 | 0.140972 | 1,440 | 35 | 150 | 41.142857 | 0.839935 | 0.014583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.4375 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f6db188eecb91cb2f2977496a867666c9aa528bb | 2,127 | py | Python | terrascript/scaleway/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 507 | 2017-07-26T02:58:38.000Z | 2022-01-21T12:35:13.000Z | terrascript/scaleway/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 135 | 2017-07-20T12:01:59.000Z | 2021-10-04T22:25:40.000Z | terrascript/scaleway/r.py | mjuenema/python-terrascript | 6d8bb0273a14bfeb8ff8e950fe36f97f7c6e7b1d | [
"BSD-2-Clause"
] | 81 | 2018-02-20T17:55:28.000Z | 2022-01-31T07:08:40.000Z | # terrascript/scaleway/r.py
# Automatically generated by tools/makecode.py ()
import warnings
warnings.warn(
"using the 'legacy layout' is deprecated", DeprecationWarning, stacklevel=2
)
import terrascript
class scaleway_account_ssh_key(terrascript.Resource):
pass
class scaleway_apple_silicon_server(terrascript.Resource):
pass
class scaleway_baremetal_server(terrascript.Resource):
pass
class scaleway_instance_ip(terrascript.Resource):
pass
class scaleway_instance_ip_reverse_dns(terrascript.Resource):
pass
class scaleway_instance_placement_group(terrascript.Resource):
pass
class scaleway_instance_private_nic(terrascript.Resource):
pass
class scaleway_instance_security_group(terrascript.Resource):
pass
class scaleway_instance_security_group_rules(terrascript.Resource):
pass
class scaleway_instance_server(terrascript.Resource):
pass
class scaleway_instance_snapshot(terrascript.Resource):
pass
class scaleway_instance_volume(terrascript.Resource):
pass
class scaleway_iot_device(terrascript.Resource):
pass
class scaleway_iot_hub(terrascript.Resource):
pass
class scaleway_iot_network(terrascript.Resource):
pass
class scaleway_iot_route(terrascript.Resource):
pass
class scaleway_k8s_cluster(terrascript.Resource):
pass
class scaleway_k8s_pool(terrascript.Resource):
pass
class scaleway_lb(terrascript.Resource):
pass
class scaleway_lb_backend(terrascript.Resource):
pass
class scaleway_lb_certificate(terrascript.Resource):
pass
class scaleway_lb_frontend(terrascript.Resource):
pass
class scaleway_lb_ip(terrascript.Resource):
pass
class scaleway_object_bucket(terrascript.Resource):
pass
class scaleway_rdb_acl(terrascript.Resource):
pass
class scaleway_rdb_database(terrascript.Resource):
pass
class scaleway_rdb_instance(terrascript.Resource):
pass
class scaleway_rdb_user(terrascript.Resource):
pass
class scaleway_registry_namespace(terrascript.Resource):
pass
class scaleway_vpc_private_network(terrascript.Resource):
pass
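Files like this one are emitted by a generator (`tools/makecode.py`); each resource is an empty subclass whose only payload is its name. The same family of classes can be produced programmatically with `type()`, a sketch under the assumption that only the class name varies:

```python
class Resource:
    """Stand-in for terrascript.Resource (illustrative only)."""


NAMES = ["scaleway_lb", "scaleway_lb_backend", "scaleway_rdb_user"]

# Build one empty Resource subclass per name, as the code generator would.
resources = {name: type(name, (Resource,), {}) for name in NAMES}

lb = resources["scaleway_lb"]()
print(type(lb).__name__)         # scaleway_lb
print(isinstance(lb, Resource))  # True
```

Generating static files instead of `type()` calls keeps the classes importable by name and visible to linters, which is presumably why the project ships them expanded.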
| 16.361538 | 79 | 0.794546 | 244 | 2,127 | 6.643443 | 0.262295 | 0.240592 | 0.425663 | 0.500925 | 0.74707 | 0.680444 | 0.191857 | 0.070327 | 0 | 0 | 0 | 0.00164 | 0.140103 | 2,127 | 129 | 80 | 16.488372 | 0.884636 | 0.034321 | 0 | 0.461538 | 1 | 0 | 0.019015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.461538 | 0.030769 | 0 | 0.492308 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
f6f54ce52a939efa00fc0268e36f6be3dcac9bda | 183 | py | Python | SCRAPE/Lib/site-packages/IPython/utils/tests/test_deprecated.py | Chinmoy-Prasad-Dutta/scrapy_scraper | 09f6abfc3bcf10ee28f486d83b450c89a07e066e | [
"MIT"
] | 2 | 2022-02-26T11:19:40.000Z | 2022-03-28T08:23:25.000Z | SCRAPE/Lib/site-packages/IPython/utils/tests/test_deprecated.py | Chinmoy-Prasad-Dutta/scrapy_scraper | 09f6abfc3bcf10ee28f486d83b450c89a07e066e | [
"MIT"
] | null | null | null | SCRAPE/Lib/site-packages/IPython/utils/tests/test_deprecated.py | Chinmoy-Prasad-Dutta/scrapy_scraper | 09f6abfc3bcf10ee28f486d83b450c89a07e066e | [
"MIT"
] | 1 | 2022-03-28T09:19:34.000Z | 2022-03-28T09:19:34.000Z | from IPython.utils.syspathcontext import appended_to_syspath
import pytest
def test_append_deprecated():
with pytest.warns(DeprecationWarning):
appended_to_syspath(".")
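`pytest.warns(DeprecationWarning)` asserts that the block emits the warning. The same check expressed with only the stdlib `warnings` module, using a hypothetical `legacy()` stand-in for `appended_to_syspath`:

```python
import warnings


def legacy():
    # Stand-in for a deprecated helper that warns on use.
    warnings.warn("this helper is deprecated", DeprecationWarning, stacklevel=2)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure the warning is not filtered out
    legacy()

assert any(issubclass(w.category, DeprecationWarning) for w in caught)
print(len(caught))  # 1
```

`pytest.warns` additionally fails when no warning is raised, which the manual version reproduces with the `assert any(...)` line.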
| 22.875 | 60 | 0.786885 | 21 | 183 | 6.571429 | 0.761905 | 0.144928 | 0.246377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136612 | 183 | 7 | 61 | 26.142857 | 0.873418 | 0 | 0 | 0 | 0 | 0 | 0.005464 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
10199b3a7e90925c25b788c16949570b57c889c3 | 144 | py | Python | core/apps/search_head_api/views/__init__.py | near-feign/pcapdb | f85cbb7bfe8479832dc557368bb8b021945e6601 | [
"BSD-2-Clause"
] | 244 | 2017-01-02T23:07:07.000Z | 2022-03-17T06:13:59.000Z | core/apps/search_head_api/views/__init__.py | near-feign/pcapdb | f85cbb7bfe8479832dc557368bb8b021945e6601 | [
"BSD-2-Clause"
] | 31 | 2016-11-10T15:46:48.000Z | 2020-09-29T03:37:58.000Z | core/apps/search_head_api/views/__init__.py | near-feign/pcapdb | f85cbb7bfe8479832dc557368bb8b021945e6601 | [
"BSD-2-Clause"
] | 54 | 2017-01-05T07:16:51.000Z | 2021-04-24T05:05:19.000Z | from apps.search_head_api.views.base import SearchHeadAPIView
from . import capturenode, search, tests
__all__ = ['capturenode', 'search', 'tests']
| 24 | 61 | 0.805556 | 18 | 144 | 6.111111 | 0.666667 | 0.309091 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118056 | 144 | 5 | 62 | 28.8 | 0.866142 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63ea8cc86d10316befea74b24e11af30cad16e4a | 18 | py | Python | plib/supersecure/__init__.py | hjalti/passwordlib | ab3996e5fc691d8e79da3cd90bd81b84456643ba | [
"MIT"
] | null | null | null | plib/supersecure/__init__.py | hjalti/passwordlib | ab3996e5fc691d8e79da3cd90bd81b84456643ba | [
"MIT"
] | null | null | null | plib/supersecure/__init__.py | hjalti/passwordlib | ab3996e5fc691d8e79da3cd90bd81b84456643ba | [
"MIT"
] | null | null | null | from . import nsa
| 9 | 17 | 0.722222 | 3 | 18 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 18 | 1 | 18 | 18 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
63ee959a17e2f3f908a568fe5ae2cee54e72f21e | 50 | py | Python | sha_rnn/__init__.py | rish-16/SHA-RNN | 08c701396217f0b645de043963ff8ec4bf27e835 | [
"MIT"
] | null | null | null | sha_rnn/__init__.py | rish-16/SHA-RNN | 08c701396217f0b645de043963ff8ec4bf27e835 | [
"MIT"
] | null | null | null | sha_rnn/__init__.py | rish-16/SHA-RNN | 08c701396217f0b645de043963ff8ec4bf27e835 | [
"MIT"
] | null | null | null | from sha_rnn.sha_rnn import EncoderRNN, DecoderRNN | 50 | 50 | 0.88 | 8 | 50 | 5.25 | 0.75 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 50 | 1 | 50 | 50 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
123b19882850821266aa326d100f8f6b17baaf4b | 20,783 | py | Python | Krypton/WebApp/Monitor/__init__.py | BolunHan/Krypton | 8caf8e8efad6172ea0783c777e7df49a2ac512cb | [
"MIT"
] | null | null | null | Krypton/WebApp/Monitor/__init__.py | BolunHan/Krypton | 8caf8e8efad6172ea0783c777e7df49a2ac512cb | [
"MIT"
] | null | null | null | Krypton/WebApp/Monitor/__init__.py | BolunHan/Krypton | 8caf8e8efad6172ea0783c777e7df49a2ac512cb | [
"MIT"
] | null | null | null | import dash
import dash.dependencies
import dash_core_components
import dash_html_components
from flask import Flask
from ... import KryptoniteDataClient
from ...Base import LOGGER, CONFIG
LOGGER = LOGGER.Monitor  # narrow the shared logger to this module's child
CONFIG = CONFIG  # re-exported so sibling modules can import it from here
REDIS_HOST = CONFIG.WebApp.Monitor.REDIS_HOST
REDIS_PORT = CONFIG.WebApp.Monitor.REDIS_PORT
REDIS_AUTH = CONFIG.WebApp.Monitor.REDIS_AUTH
FLASK_APP = Flask('Monitor')
DASH_APP = dash.Dash(name=__name__, server=FLASK_APP, requests_pathname_prefix='/Monitor/', update_title=None, title='Monitor')
from . import Market, Trade
DATA_CLIENT = KryptoniteDataClient.RelayClient(
url=f'redis://{REDIS_HOST}:{REDIS_PORT}',
password=REDIS_AUTH,
on_bar=Market.on_bar,
on_trade=Market.on_trade,
on_orderbook=Market.on_orderbook
)
def init_layout() -> dash_html_components.Div:
    """
    Build the layout for the Dash app.

    :return: the root layout component (an ``html.Div``), ready to be
        assigned to ``DASH_APP.layout``
    """
layout = dash_html_components.Div(
children=[
dash_html_components.Div(
children=[
dash_html_components.Table(
children=[
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
                                            dash_html_components.H2(children=['Kryptonite Monitor']),
],
style={'width': "100%", 'border': 'medium solid'},
colSpan=4
)
],
style={'height': '5%'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.Div(
children=[
dash_html_components.Table(
children=[
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.P(children=["Monitor Ticker: "]),
],
style={'width': "50%", 'border': 'medium solid'},
),
dash_html_components.Td(
children=[
dash_core_components.Dropdown(id='Kryptonite-Monitor-Ticker-Dropdown', options=[{'label': x.upper(), 'value': x} for x in DATA_CLIENT.tickers], style={'width': "100%", 'height': '100%'})
],
style={'width': "50%", 'border': 'medium solid'},
),
],
style={'height': "100%"}
)
],
style={'width': "100%", 'border': 'medium solid'},
)
],
style={'width': "50%"},
),
],
style={'width': "100%", 'border': 'medium solid'},
colSpan=4
)
],
style={'height': '5%'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.Div(id='Kryptonite-Monitor-Text', style={'width': "100%", 'height': '100%'}),
],
style={'width': "25%", 'border': 'medium solid'}
),
dash_html_components.Td(
children=[
dash_html_components.Div(
children=[
dash_html_components.Table(
children=[
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.Button(children=['Wind'], id='Wind-Button', n_clicks=0, style={'width': "100%", 'height': '100%'}),
],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '25%'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.Button(children=['Unwind'], id='Unwind-Button', n_clicks=0, style={'width': "100%", 'height': '100%'}),
],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '25%'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.Button(children=['Cancel'], id='Cancel-Button', n_clicks=0, style={'width': "100%", 'height': '100%'}),
],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '25%'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_html_components.Button(children=['Refresh'], id='Refresh-Button', n_clicks=0, style={'width': "100%", 'height': '100%'}),
],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '25%'}
)
],
style={'width': "100%", 'height': '100%', 'border': 'medium solid'},
)
],
style={'width': "100%", 'height': '100%'}
)
],
style={'width': "25%", 'border': 'medium solid'}
),
dash_html_components.Td(
children=[
dash_html_components.Div(
children=[
dash_html_components.Table(
children=[
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=['Order Book'],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '10%', 'border': 'medium solid'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_core_components.Graph(id='Kryptonite-Monitor-OrderBook', style={'width': "100%", 'height': '100%'}, config={'displayModeBar': False},
figure={'layout': {"xaxis": {"visible": False}, "yaxis": {"visible": False}, "annotations": [{"text": "No matching data found", "xref": "paper", "yref": "paper", "showarrow": False, "font": {"size": 28}}]}}),
],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '90%', 'border': 'medium solid'}
)
],
style={'width': "100%", 'height': '100%', 'border': 'medium solid'}
)
],
style={'width': "100%", 'height': '100%'}
)
],
style={'width': "25%", 'border': 'medium solid'}
),
dash_html_components.Td(
children=[
dash_html_components.Div(
children=[
dash_html_components.Table(
children=[
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=['Balance'],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '10%', 'border': 'medium solid'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_core_components.Graph(id='Kryptonite-Monitor-Balance', style={'width': "100%", 'height': '100%'}, config={'displayModeBar': False},
figure={'layout': {"xaxis": {"visible": False}, "yaxis": {"visible": False}, "annotations": [{"text": "No matching data found", "xref": "paper", "yref": "paper", "showarrow": False, "font": {"size": 28}}]}}),
],
style={'width': "100%", 'border': 'medium solid'}
)
],
style={'height': '90%', 'border': 'medium solid'}
)
],
style={'width': "100%", 'height': '100%', 'border': 'medium solid'}
)
],
style={'width': "100%", 'height': '100%'}
)
],
style={'width': "25%", 'border': 'medium solid'}
)
],
style={'height': '10%'}
),
dash_html_components.Tr(
children=[
dash_html_components.Td(
children=[
dash_core_components.Graph(id='Kryptonite-Monitor-MarketView', style={'width': "100%", 'height': '100%'},
figure={'layout': {"xaxis": {"visible": False}, "yaxis": {"visible": False}, "annotations": [{"text": "No matching data found", "xref": "paper", "yref": "paper", "showarrow": False, "font": {"size": 28}}]}}),
],
style={'width': "100%", 'border': 'medium solid'},
colSpan=4
)
],
style={'height': '80%'}
)
],
style={'width': "100%", 'height': '100vh', 'border': 'medium solid'}
)
],
style={'display': 'grid'}
),
dash_core_components.Interval(
id='interval-component',
interval=1 * 1000, # in milliseconds
n_intervals=0
)
]
)
return layout
def register_callbacks():
# Register Timer Callback: render monitor text Div
DASH_APP.callback(
output=dash.dependencies.Output(component_id='Kryptonite-Monitor-Text', component_property='children'),
inputs=[dash.dependencies.Input(component_id='interval-component', component_property='n_intervals')],
state=[
dash.dependencies.State(component_id='Kryptonite-Monitor-Ticker-Dropdown', component_property='value'),
dash.dependencies.State(component_id='Kryptonite-Monitor-Text', component_property='children')
]
)(lambda _, ticker, status: Market.render_monitor_text(ticker=ticker, status=status))
# Register Timer Callback: render MarketView Graph
DASH_APP.callback(
output=dash.dependencies.Output(component_id='Kryptonite-Monitor-MarketView', component_property='figure'),
inputs=[dash.dependencies.Input(component_id='interval-component', component_property='n_intervals')],
state=[
dash.dependencies.State(component_id='Kryptonite-Monitor-MarketView', component_property='figure'),
dash.dependencies.State(component_id='Kryptonite-Monitor-Ticker-Dropdown', component_property='value')
]
)(lambda _, fig, ticker: Market.render_market_view(fig=fig, ticker=ticker))
# Register Button Callback: Wind function
DASH_APP.callback(
output=dash.dependencies.Output(component_id='Wind-Button', component_property='children'),
inputs=[dash.dependencies.Input(component_id='Wind-Button', component_property='n_clicks')]
)(lambda n_clicks: Trade.wind(clicked=n_clicks))
    # Register Button Callback: Unwind function
DASH_APP.callback(
output=dash.dependencies.Output(component_id='Unwind-Button', component_property='children'),
inputs=[dash.dependencies.Input(component_id='Unwind-Button', component_property='n_clicks')]
)(lambda n_clicks: Trade.unwind(clicked=n_clicks))
    # Register Button Callback: Cancel function
DASH_APP.callback(
output=dash.dependencies.Output(component_id='Cancel-Button', component_property='children'),
inputs=[dash.dependencies.Input(component_id='Cancel-Button', component_property='n_clicks')]
)(lambda n_clicks: Trade.cancel(clicked=n_clicks))
    # Register Button Callback: Refresh function
DASH_APP.callback(
output=dash.dependencies.Output(component_id='Refresh-Button', component_property='children'),
inputs=[dash.dependencies.Input(component_id='Refresh-Button', component_property='n_clicks')]
)(lambda n_clicks: Trade.refresh(clicked=n_clicks))
DASH_APP.layout = init_layout()
register_callbacks()
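`register_callbacks` uses `DASH_APP.callback(...)` as a decorator factory applied immediately to a lambda, instead of the usual `@app.callback` syntax. The registration mechanics behind that idiom, with a hypothetical minimal registry:

```python
class Registry:
    """Toy stand-in for Dash's callback machinery (illustrative only)."""

    def __init__(self):
        self.callbacks = {}

    def callback(self, output):
        # callback(...) is a decorator factory: it returns the real decorator,
        # which records the function under the given output key.
        def register(func):
            self.callbacks[output] = func
            return func
        return register


app = Registry()

# Equivalent in spirit to DASH_APP.callback(output=...)(lambda ...: ...)
app.callback(output="text")(lambda n: f"ticks={n}")

print(app.callbacks["text"](3))  # ticks=3
```

Calling the factory directly on a lambda keeps all registrations in one function, at the cost of losing named functions in tracebacks.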
| 66.399361 | 311 | 0.307752 | 1,115 | 20,783 | 5.55157 | 0.130942 | 0.063328 | 0.142488 | 0.147011 | 0.775767 | 0.766074 | 0.750727 | 0.742165 | 0.715994 | 0.665913 | 0 | 0.023242 | 0.604581 | 20,783 | 312 | 312 | 66.612179 | 0.729983 | 0.019487 | 0 | 0.639576 | 0 | 0 | 0.107185 | 0.016955 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007067 | false | 0.003534 | 0.028269 | 0 | 0.038869 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d629609acfce2d40d75bb97ce5aa0740b7639f33 | 2,435 | py | Python | prepare_model.py | www516717402/Minimal-IK | 62e1ed33a7ef78cbf88ba9f0b6744a745558eb93 | [
"MIT"
] | 213 | 2020-02-13T17:34:08.000Z | 2022-03-29T11:22:14.000Z | prepare_model.py | Lady-Ariel/Minimal-IK | edec4c656d819b4a7a01d541ac349ecf993cc327 | [
"MIT"
] | 13 | 2020-02-14T09:22:24.000Z | 2022-03-30T02:02:53.000Z | prepare_model.py | Lady-Ariel/Minimal-IK | edec4c656d819b4a7a01d541ac349ecf993cc327 | [
"MIT"
] | 32 | 2020-02-29T12:16:57.000Z | 2022-02-25T11:01:26.000Z | from config import *
import pickle
import numpy as np
def prepare_mano_model():
"""
Convert the official MANO model into compatible format with this project.
"""
with open(OFFICIAL_MANO_PATH, 'rb') as f:
data = pickle.load(f, encoding='latin1')
params = {
'pose_pca_basis': np.array(data['hands_components']),
'pose_pca_mean': np.array(data['hands_mean']),
'J_regressor': data['J_regressor'].toarray(),
'skinning_weights': np.array(data['weights']),
# pose blend shape
'mesh_pose_basis': np.array(data['posedirs']),
'mesh_shape_basis': np.array(data['shapedirs']),
'mesh_template': np.array(data['v_template']),
'faces': np.array(data['f']),
'parents': data['kintree_table'][0].tolist(),
}
params['parents'][0] = None
with open(MANO_MODEL_PATH, 'wb') as f:
pickle.dump(params, f)
def prepare_smpl_model():
"""
Convert the official SMPL model into compatible format with this project.
"""
with open(OFFICIAL_SMPL_PATH, 'rb') as f:
data = pickle.load(f, encoding='latin1')
params = {
# SMPL does not provide pose PCA
'pose_pca_basis': np.eye(23 * 3),
'pose_pca_mean': np.zeros(23 * 3),
'J_regressor': data['J_regressor'].toarray(),
'skinning_weights': np.array(data['weights']),
# pose blend shape
'mesh_pose_basis': np.array(data['posedirs']),
'mesh_shape_basis': np.array(data['shapedirs']),
'mesh_template': np.array(data['v_template']),
'faces': np.array(data['f']),
'parents': data['kintree_table'][0].tolist(),
}
params['parents'][0] = None
with open(SMPL_MODEL_PATH, 'wb') as f:
pickle.dump(params, f)
def prepare_smplh_model():
"""
Convert the official SMPLH model into compatible format with this project.
"""
data = np.load(OFFICIAL_SMPLH_PATH)
params = {
        # SMPL+H: no pose PCA is used here, so fall back to an identity basis
'pose_pca_basis': np.eye(51 * 3),
'pose_pca_mean': np.zeros(51 * 3),
'J_regressor': data['J_regressor'],
'skinning_weights': np.array(data['weights']),
# pose blend shape
'mesh_pose_basis': np.array(data['posedirs']),
'mesh_shape_basis': np.array(data['shapedirs']),
'mesh_template': np.array(data['v_template']),
'faces': np.array(data['f']),
'parents': data['kintree_table'][0].tolist(),
}
params['parents'][0] = None
with open(SMPLH_MODEL_PATH, 'wb') as f:
pickle.dump(params, f)
if __name__ == '__main__':
    # Only the SMPL+H model is converted by default; call
    # prepare_mano_model() / prepare_smpl_model() here as needed.
    prepare_smplh_model()
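The three `prepare_*` functions differ mainly in where the pose PCA comes from (model-provided for MANO, identity for SMPL/SMPL+H) and in the I/O paths; the shared shape could be factored into one helper. A sketch with an in-memory pickle round-trip and dummy data (keys and dimensions are illustrative, not the real model files):

```python
import io
import pickle


def convert_params(data, pose_dims=None):
    """Repackage raw model data; fall back to identity pose PCA when absent."""
    if pose_dims is None:  # MANO-style: the model ships its own PCA basis/mean
        pca_basis, pca_mean = data["hands_components"], data["hands_mean"]
    else:  # SMPL-style: no PCA, so use an identity basis and zero mean
        pca_basis = [[float(i == j) for j in range(pose_dims)]
                     for i in range(pose_dims)]
        pca_mean = [0.0] * pose_dims
    return {
        "pose_pca_basis": pca_basis,
        "pose_pca_mean": pca_mean,
        # Root joint has no parent, mirroring params['parents'][0] = None.
        "parents": [None] + data["kintree_table"][0][1:],
    }


raw = {"kintree_table": [[0, 0, 1]]}
params = convert_params(raw, pose_dims=2)

buf = io.BytesIO()
pickle.dump(params, buf)  # same serialization step as the script
buf.seek(0)
print(pickle.load(buf)["parents"])  # [None, 0, 1]
```

Factoring the dict construction this way would leave each `prepare_*` function responsible only for loading its source file and choosing `pose_dims`.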
| 31.217949 | 76 | 0.655031 | 341 | 2,435 | 4.466276 | 0.205279 | 0.078135 | 0.122784 | 0.073539 | 0.80302 | 0.80302 | 0.760998 | 0.734734 | 0.734734 | 0.71438 | 0 | 0.009911 | 0.171253 | 2,435 | 77 | 77 | 31.623377 | 0.744797 | 0.137988 | 0 | 0.563636 | 0 | 0 | 0.283358 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054545 | false | 0 | 0.054545 | 0 | 0.109091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c3f923753274fb06c50f9887db3360f78228409c | 366 | py | Python | get2unix/views/users.py | death-finger/get2unix | 1ff6f729f076040d6493251471cc0ee9cdcdc661 | [
"MIT"
] | null | null | null | get2unix/views/users.py | death-finger/get2unix | 1ff6f729f076040d6493251471cc0ee9cdcdc661 | [
"MIT"
] | null | null | null | get2unix/views/users.py | death-finger/get2unix | 1ff6f729f076040d6493251471cc0ee9cdcdc661 | [
"MIT"
] | null | null | null | from django.http import HttpResponseRedirect, JsonResponse
from django.shortcuts import render
from django.contrib.auth.decorators import login_required, permission_required
from django.contrib.auth.models import User, Group, Permission
from django.contrib.auth.mixins import LoginRequiredMixin
from django.db.models import Q
from django.views.generic import View
| 36.6 | 78 | 0.855191 | 49 | 366 | 6.346939 | 0.489796 | 0.22508 | 0.163987 | 0.202572 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092896 | 366 | 9 | 79 | 40.666667 | 0.936747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f00caad1cc3a84648fcf2ff5b26be65822bc8b0 | 100 | py | Python | back/facades/__init__.py | TLPhong/tomcat-manager | 02a4876b49fff794e6f2045e41e7ee15bb85d2d6 | [
"MIT"
] | null | null | null | back/facades/__init__.py | TLPhong/tomcat-manager | 02a4876b49fff794e6f2045e41e7ee15bb85d2d6 | [
"MIT"
] | 7 | 2021-09-25T22:31:01.000Z | 2021-10-01T10:23:59.000Z | back/facades/__init__.py | TLPhong/tomcat-manager | 02a4876b49fff794e6f2045e41e7ee15bb85d2d6 | [
"MIT"
] | null | null | null | from facades.Users import Users
from facades.UserRepository import UserRepository as _UserRepository | 50 | 68 | 0.89 | 12 | 100 | 7.333333 | 0.5 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09 | 100 | 2 | 68 | 50 | 0.967033 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f014689946a3b4039263773fbde8efe51bae610 | 41 | py | Python | tests/test_appconfig_template.py | erkandem/multi-dump-restore | dd84f7f4b5661f0647d4fd66c6463ea15497595b | [
"MIT"
] | null | null | null | tests/test_appconfig_template.py | erkandem/multi-dump-restore | dd84f7f4b5661f0647d4fd66c6463ea15497595b | [
"MIT"
] | null | null | null | tests/test_appconfig_template.py | erkandem/multi-dump-restore | dd84f7f4b5661f0647d4fd66c6463ea15497595b | [
"MIT"
] | null | null | null | from mdr import appconfig_template as ac
| 20.5 | 40 | 0.853659 | 7 | 41 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146341 | 41 | 1 | 41 | 41 | 0.971429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7f04201021ce6d325ecf1779e40c6287edfd71cf | 14,478 | py | Python | tests/units/fastsync/commons/test_fastsync_target_bigquery.py | jmriego/pipelinewise | 1220164db152803c4537721b4ec078e03ea3474b | [
"Apache-2.0"
] | null | null | null | tests/units/fastsync/commons/test_fastsync_target_bigquery.py | jmriego/pipelinewise | 1220164db152803c4537721b4ec078e03ea3474b | [
"Apache-2.0"
] | 37 | 2021-06-07T07:12:23.000Z | 2022-03-28T23:08:04.000Z | tests/units/fastsync/commons/test_fastsync_target_bigquery.py | jmriego/pipelinewise | 1220164db152803c4537721b4ec078e03ea3474b | [
"Apache-2.0"
] | 1 | 2020-08-03T06:53:35.000Z | 2020-08-03T06:53:35.000Z | import pytest
from unittest.mock import Mock, patch, ANY, mock_open
from google.cloud import bigquery
from pipelinewise.fastsync.commons.target_bigquery import FastSyncTargetBigquery


@pytest.fixture
def query_result():
"""
Mocked Bigquery Run Query Results
"""
qr = Mock()
qr.return_value = []
return qr


@pytest.fixture
def bigquery_job(query_result):
"""
Mocked Bigquery Job Query
"""
qj = Mock()
qj.output_rows = 0
qj.job_id = 1
qj.result().total_rows = 0
return qj


@pytest.fixture
def bigquery_job_config():
"""
Mocked Bigquery Job Config
"""
qc = Mock()
return qc


class FastSyncTargetBigqueryMock(FastSyncTargetBigquery):
"""
Mocked FastSyncTargetBigquery class
"""
def __init__(self, connection_config, transformation_config=None):
super().__init__(connection_config, transformation_config)


# pylint: disable=attribute-defined-outside-init
class TestFastSyncTargetBigquery:
"""
Unit tests for fastsync target bigquery
"""
def setup_method(self):
        """Initialise test FastSyncTargetBigquery object"""
self.bigquery = FastSyncTargetBigqueryMock(connection_config={'project_id': 'dummy-project'},
transformation_config={})

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_create_schema(self, Client, bigquery_job):
        """Validate that create schema queries are generated correctly"""
Client().query.return_value = bigquery_job
self.bigquery.create_schema('new_schema')
Client().create_dataset.assert_called_with('new_schema', exists_ok=True)

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_drop_table(self, Client, bigquery_job):
        """Validate that drop table queries are generated correctly"""
Client().query.return_value = bigquery_job
self.bigquery.drop_table('test_schema', 'test_table')
Client().query.assert_called_with(
'DROP TABLE IF EXISTS test_schema.`test_table`', job_config=ANY)
self.bigquery.drop_table('test_schema', 'test_table', is_temporary=True)
Client().query.assert_called_with(
'DROP TABLE IF EXISTS test_schema.`test_table_temp`', job_config=ANY)
self.bigquery.drop_table('test_schema', 'UPPERCASE_TABLE')
Client().query.assert_called_with(
'DROP TABLE IF EXISTS test_schema.`uppercase_table`', job_config=ANY)
self.bigquery.drop_table('test_schema', 'UPPERCASE_TABLE', is_temporary=True)
Client().query.assert_called_with(
'DROP TABLE IF EXISTS test_schema.`uppercase_table_temp`', job_config=ANY)
self.bigquery.drop_table('test_schema', 'test_table_with_space')
Client().query.assert_called_with(
'DROP TABLE IF EXISTS test_schema.`test_table_with_space`', job_config=ANY)
self.bigquery.drop_table('test_schema', 'test table with space', is_temporary=True)
Client().query.assert_called_with(
'DROP TABLE IF EXISTS test_schema.`test_table_with_space_temp`', job_config=ANY)

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_create_table(self, Client, bigquery_job):
        """Validate that create table queries are generated correctly"""
Client().query.return_value = bigquery_job
# Create table with standard table and column names
self.bigquery.create_table(target_schema='test_schema',
table_name='test_table',
columns=['`id` INTEGER',
'`txt` STRING'])
Client().query.assert_called_with(
'CREATE OR REPLACE TABLE test_schema.`test_table` ('
'`id` integer,`txt` string,'
'_sdc_extracted_at timestamp,'
'_sdc_batched_at timestamp,'
'_sdc_deleted_at timestamp)',
job_config=ANY)
# Create table with reserved words in table and column names
self.bigquery.create_table(target_schema='test_schema',
table_name='order',
columns=['`id` INTEGER',
'`txt` STRING',
'`select` STRING'])
Client().query.assert_called_with(
'CREATE OR REPLACE TABLE test_schema.`order` ('
'`id` integer,`txt` string,`select` string,'
'_sdc_extracted_at timestamp,'
'_sdc_batched_at timestamp,'
'_sdc_deleted_at timestamp)',
job_config=ANY)
# Create table with mixed lower and uppercase and space characters
self.bigquery.create_table(target_schema='test_schema',
table_name='TABLE with SPACE',
columns=['`ID` INTEGER',
'`COLUMN WITH SPACE` STRING'])
Client().query.assert_called_with(
'CREATE OR REPLACE TABLE test_schema.`table_with_space` ('
'`id` integer,`column with space` string,'
'_sdc_extracted_at timestamp,'
'_sdc_batched_at timestamp,'
'_sdc_deleted_at timestamp)',
job_config=ANY)
# Create table with no primary key
self.bigquery.create_table(target_schema='test_schema',
table_name='test_table_no_pk',
columns=['`ID` INTEGER',
'`TXT` STRING'])
Client().query.assert_called_with(
'CREATE OR REPLACE TABLE test_schema.`test_table_no_pk` ('
'`id` integer,`txt` string,'
'_sdc_extracted_at timestamp,'
'_sdc_batched_at timestamp,'
'_sdc_deleted_at timestamp)',
job_config=ANY)

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.LoadJobConfig")
@patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_copy_to_table(self, Client, LoadJobConfig, bigquery_job_config, bigquery_job):
        """Validate that the COPY command is generated correctly"""
# COPY table with standard table and column names
Client().load_table_from_file.return_value = bigquery_job
LoadJobConfig.return_value = bigquery_job_config
m = mock_open()
with patch('pipelinewise.fastsync.commons.target_bigquery.open', m):
self.bigquery.copy_to_table(filepath='/path/to/dummy-file.csv.gz',
target_schema='test_schema',
table_name='test_table',
size_bytes=1000,
is_temporary=False,
skip_csv_header=False)
m.assert_called_with('/path/to/dummy-file.csv.gz', 'rb')
assert bigquery_job_config.source_format == bigquery.SourceFormat.CSV
assert bigquery_job_config.write_disposition == 'WRITE_TRUNCATE'
        assert bigquery_job_config.allow_quoted_newlines is True
assert bigquery_job_config.skip_leading_rows == 0
Client().dataset.assert_called_with('test_schema')
Client().dataset().table.assert_called_with('test_table')
assert Client().load_table_from_file.call_count == 1
# COPY table with reserved word in table and column names in temp table
with patch('pipelinewise.fastsync.commons.target_bigquery.open', m):
self.bigquery.copy_to_table(filepath='/path/to/full-file.csv.gz',
target_schema='test_schema',
table_name='full',
size_bytes=1000,
is_temporary=True,
skip_csv_header=False)
m.assert_called_with('/path/to/full-file.csv.gz', 'rb')
assert bigquery_job_config.source_format == bigquery.SourceFormat.CSV
assert bigquery_job_config.write_disposition == 'WRITE_TRUNCATE'
        assert bigquery_job_config.allow_quoted_newlines is True
assert bigquery_job_config.skip_leading_rows == 0
Client().dataset.assert_called_with('test_schema')
Client().dataset().table.assert_called_with('full_temp')
assert Client().load_table_from_file.call_count == 2
        # COPY table with space and uppercase in table name and file path
with patch('pipelinewise.fastsync.commons.target_bigquery.open', m):
self.bigquery.copy_to_table(filepath='/path/to/file with space.csv.gz',
target_schema='test_schema',
table_name='table with SPACE and UPPERCASE',
size_bytes=1000,
is_temporary=True,
skip_csv_header=False)
m.assert_called_with('/path/to/file with space.csv.gz', 'rb')
assert bigquery_job_config.source_format == bigquery.SourceFormat.CSV
assert bigquery_job_config.write_disposition == 'WRITE_TRUNCATE'
        assert bigquery_job_config.allow_quoted_newlines is True
assert bigquery_job_config.skip_leading_rows == 0
Client().dataset.assert_called_with('test_schema')
Client().dataset().table.assert_called_with('table_with_space_and_uppercase_temp')
assert Client().load_table_from_file.call_count == 3

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_grant_select_on_table(self, Client, bigquery_job):
        """Validate that the GRANT command is generated correctly"""
# GRANT table with standard table and column names
Client().query.return_value = bigquery_job
self.bigquery.grant_select_on_table(target_schema='test_schema',
table_name='test_table',
role='test_role',
is_temporary=False)
Client().query.assert_called_with(
'GRANT SELECT ON test_schema.`test_table` TO ROLE test_role', job_config=ANY)
# GRANT table with reserved word in table and column names in temp table
self.bigquery.grant_select_on_table(target_schema='test_schema',
table_name='full',
role='test_role',
is_temporary=False)
Client().query.assert_called_with(
'GRANT SELECT ON test_schema.`full` TO ROLE test_role', job_config=ANY)
        # GRANT table with space and uppercase in table name
self.bigquery.grant_select_on_table(target_schema='test_schema',
table_name='table with SPACE and UPPERCASE',
role='test_role',
is_temporary=False)
Client().query.assert_called_with(
'GRANT SELECT ON test_schema.`table_with_space_and_uppercase` TO ROLE test_role', job_config=ANY)

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_grant_usage_on_schema(self, Client, bigquery_job):
        """Validate that the GRANT command is generated correctly"""
self.bigquery.grant_usage_on_schema(target_schema='test_schema',
role='test_role')
Client().query.assert_called_with(
'GRANT USAGE ON SCHEMA test_schema TO ROLE test_role', job_config=ANY)

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_grant_select_on_schema(self, Client, bigquery_job):
        """Validate that the GRANT command is generated correctly"""
self.bigquery.grant_select_on_schema(target_schema='test_schema',
role='test_role')
Client().query.assert_called_with(
'GRANT SELECT ON ALL TABLES IN SCHEMA test_schema TO ROLE test_role', job_config=ANY)

    @patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.CopyJobConfig")
@patch("pipelinewise.fastsync.commons.target_bigquery.bigquery.Client")
def test_swap_tables(self, Client, CopyJobConfig, bigquery_job_config, bigquery_job):
        """Validate that swap table commands are generated correctly"""
# Swap tables with standard table and column names
Client().copy_table.return_value = bigquery_job
CopyJobConfig.return_value = bigquery_job_config
self.bigquery.swap_tables(schema='test_schema',
table_name='test_table')
assert bigquery_job_config.write_disposition == 'WRITE_TRUNCATE'
Client().copy_table.assert_called_with(
'dummy-project.test_schema.test_table_temp',
'dummy-project.test_schema.test_table',
job_config=ANY)
Client().delete_table.assert_called_with('dummy-project.test_schema.test_table_temp')
# Swap tables with reserved word in table and column names in temp table
self.bigquery.swap_tables(schema='test_schema',
table_name='full')
assert bigquery_job_config.write_disposition == 'WRITE_TRUNCATE'
Client().copy_table.assert_called_with(
'dummy-project.test_schema.full_temp',
'dummy-project.test_schema.full',
job_config=ANY)
Client().delete_table.assert_called_with('dummy-project.test_schema.full_temp')
        # Swap tables with space and uppercase in table name
self.bigquery.swap_tables(schema='test_schema',
table_name='table with SPACE and UPPERCASE')
assert bigquery_job_config.write_disposition == 'WRITE_TRUNCATE'
Client().copy_table.assert_called_with(
'dummy-project.test_schema.table_with_space_and_uppercase_temp',
'dummy-project.test_schema.table_with_space_and_uppercase',
job_config=ANY)
Client().delete_table.assert_called_with('dummy-project.test_schema.table_with_space_and_uppercase_temp')
| 51.159011 | 113 | 0.624465 | 1,611 | 14,478 | 5.30478 | 0.098076 | 0.056167 | 0.058039 | 0.04037 | 0.846244 | 0.816756 | 0.798502 | 0.758249 | 0.735549 | 0.698923 | 0 | 0.002323 | 0.286366 | 14,478 | 282 | 114 | 51.340426 | 0.824816 | 0.09732 | 0 | 0.537383 | 0 | 0 | 0.271279 | 0.129184 | 0 | 0 | 0 | 0 | 0.228972 | 1 | 0.060748 | false | 0 | 0.018692 | 0 | 0.102804 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7f0f36981cc87801462881e99b6ca74640f51e07 | 39 | py | Python | py/bluemesa/groups/sp500/__init__.py | stormp/bluemesa | 0295bd234d69c4f9cd78e725924d887bc35af508 | [
"MIT"
] | null | null | null | py/bluemesa/groups/sp500/__init__.py | stormp/bluemesa | 0295bd234d69c4f9cd78e725924d887bc35af508 | [
"MIT"
] | 3 | 2020-12-11T19:12:19.000Z | 2021-05-21T01:26:57.000Z | py/bluemesa/groups/sp500/__init__.py | stormp/bluemesa | 0295bd234d69c4f9cd78e725924d887bc35af508 | [
"MIT"
] | 14 | 2020-06-17T15:23:36.000Z | 2022-01-03T03:04:16.000Z | from bluemesa.groups.sp500 import util
| 19.5 | 38 | 0.846154 | 6 | 39 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.102564 | 39 | 1 | 39 | 39 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
615c3021df5c16636231f41678a7d68a8aae79a2 | 4,787 | py | Python | tests/dataframe/test_filters.py | vishalbelsare/boo | 96d08857abd790bc44f48256e7be7da130543a84 | [
"MIT"
] | 14 | 2019-07-06T13:00:54.000Z | 2021-09-14T16:05:23.000Z | tests/dataframe/test_filters.py | vishalbelsare/boo | 96d08857abd790bc44f48256e7be7da130543a84 | [
"MIT"
] | 31 | 2019-07-05T09:31:40.000Z | 2021-08-03T21:16:56.000Z | tests/dataframe/test_filters.py | vishalbelsare/boo | 96d08857abd790bc44f48256e7be7da130543a84 | [
"MIT"
] | 4 | 2019-12-06T21:13:11.000Z | 2021-03-12T10:11:18.000Z | from io import StringIO
import pandas as pd
from boo.dataframe.filter import large_companies, minimal_columns


def dataframe():
doc_ = """inn,title,org,okpo,okopf,okfs,okved,unit,ok1,ok2,ok3,region,of,of_lag,ta_fix,ta_fix_lag,cash,cash_lag,ta_nonfix,ta_nonfix_lag,ta,ta_lag,tp_capital,tp_capital_lag,debt_long,debt_long_lag,tp_long,tp_long_lag,debt_short,debt_short_lag,tp_short,tp_short_lag,tp,tp_lag,sales,sales_lag,profit_oper,profit_oper_lag,exp_interest,exp_interest_lag,profit_before_tax,profit_before_tax_lag,profit_after_tax,profit_after_tax_lag,cf_oper_in,cf_oper_in_sales,cf_oper_out,paid_to_supplier,paid_to_worker,paid_interest,paid_profit_tax,paid_other_costs,cf_oper,cf_inv_in,cf_inv_out,paid_fa_investment,cf_inv,cf_fin_in,cf_fin_out,cf_fin,cf
9723031303,КИРБИ,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,16179225,65,16,45.31,384,45,31,0,97,0,0,0,0,0,0,1190596,0,1190596,0,30,0,0,0,0,0,0,0,1190566,0,1190596,0,1190566,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
5808004284,БЕКОВСКИЙ САХАРНЫЙ ЗАВОД,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,88042176,12300,16,10.81,384,10,81,0,58,838114,974114,850451,1339464,112,1018,615767,2115066,1466219,3454530,1609,78759,674865,765472,1437600,3299526,13861,9353,27010,76245,1466219,3454530,258074,2973297,-27716,115924,23864,34772,4811,52861,2466,35449,2294362,2276899,2532805,2300476,49916,0,182413,0,-238443,368795,19396,0,349399,8361,120223,-111862,-906
5256083213,АВТОКОМПОНЕНТЫ - ГРУППА ГАЗ,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,88605449,12300,16,45.31.1,384,45,31,1,52,1605,1816,937605,17404,60288,76670,3031561,4065688,3969166,4083092,683166,354657,0,0,15735,50887,1675,230365,3270265,3677548,3969166,4083092,20005657,17298393,524309,289159,17110,39215,397456,445147,328509,353518,18245194,18061910,18271154,17865407,193038,21000,130945,60764,-25960,1217077,982770,331,234307,0,224800,-224800,-16453
5050115503,МАЙ,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,45678395,12300,16,82.92,384,82,92,0,50,1491045,2098817,1564484,2311733,21677,508642,630147,1538359,2194631,3850092,1055556,1430621,0,0,0,3423,0,0,1139075,2416048,2194631,3850092,3214040,2912029,1177304,1146590,116603,184009,855226,752861,674935,597651,5400857,5215986,5875113,2983446,751825,0,240438,1899404,-474256,467181,17020,0,450161,0,462870,-462870,-486965
2983009240,ВАРАНДЕЙСКИЙ ТЕРМИНАЛ,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,72357966,12300,16,52.10.21,384,52,10,21,29,15507533,17011233,16380791,17874644,0,0,2269210,19221763,18650001,37096407,15251256,33804444,0,0,2307856,2336906,0,0,1090889,955057,18650001,37096407,18417236,20413127,12997125,14852461,0,0,14301852,15876212,11446812,14772656,19048438,17880058,6566912,2777866,489765,0,2724465,574816,12481526,36104888,18618668,73726,17486220,0,29961000,-29961000,6746
7713725683,ТОРГОВЫЙ ДОМ СВЯЗЬ ИНЖИНИРИНГ,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,90687584,65,16,33.20,384,33,20,0,77,1549,2888,1549,2888,230,195,4879339,2876786,4880888,2879674,434474,172435,0,0,2,2,0,0,4446412,2707237,4880888,2879674,2882993,1449191,288482,208633,0,97190,327861,293963,262039,232421,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
7728694325,СРЕДНЕ-ВОЛЖСКАЯ СУДОХОДНАЯ КОМПАНИЯ,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,60480897,65,16,61.20,384,61,20,0,77,2112941,1565059,2112941,1565059,5333,74709,149938,253104,2262879,1818163,11208,6463,2131406,1600321,2131406,1600321,0,0,120265,211379,2262879,1818163,1065493,727980,79615,181378,110811,131501,5931,3521,4745,2817,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
7805056048,ПАТРИОТ-ДЕВЕЛОПМЕНТ СЕВЕРО-ЗАПАД,ОБЩЕСТВО С ОГРАНИЧЕННОЙ ОТВЕТСТВЕННОСТЬЮ,79749925,12300,16,45.20,384,45,20,0,78,0,206,13626,4978,263,349,1455220,1769657,1468846,1774634,-57294,-20614,0,0,0,48216,14675,13530,1526140,1747032,1468846,1774634,88126,116843,-62835,-21008,1145,825,-45573,-13088,-36680,-10922,986663,37455,1030520,155671,67144,0,0,807705,-43857,56384,12613,0,43771,0,0,0,-86
"""
return pd.read_csv(StringIO(doc_)).set_index("inn")


def reference_dict():
return {
"region": {2983009240: 29, 7713725683: 77, 5256083213: 52},
"ok1": {2983009240: 52, 7713725683: 33, 5256083213: 45},
"title": {
2983009240: "ВАРАНДЕЙСКИЙ ТЕРМИНАЛ",
7713725683: "ТОРГОВЫЙ ДОМ СВЯЗЬ ИНЖИНИРИНГ",
5256083213: "АВТОКОМПОНЕНТЫ - ГРУППА ГАЗ",
},
"ta": {2983009240: 18.7, 7713725683: 4.9, 5256083213: 4.0},
"of": {2983009240: 15.5, 7713725683: 0.0, 5256083213: 0.0},
"sales": {2983009240: 18.4, 7713725683: 2.9, 5256083213: 20.0},
"p": {2983009240: 14.3, 7713725683: 0.3, 5256083213: 0.4},
"cf": {2983009240: 0.0, 7713725683: 0.0, 5256083213: -0.0},
}


def test_large_companies_and_less_columns():
x = minimal_columns(large_companies(dataframe()))
assert x.head(3).to_dict() == reference_dict()
| 113.97619 | 634 | 0.787132 | 799 | 4,787 | 4.607009 | 0.493116 | 0.048356 | 0.05379 | 0.063026 | 0.052431 | 0.032872 | 0.018202 | 0.0163 | 0.0163 | 0.0163 | 0 | 0.527648 | 0.059327 | 4,787 | 41 | 635 | 116.756098 | 0.289807 | 0 | 0 | 0 | 0 | 0.272727 | 0.799248 | 0.714435 | 0 | 0 | 0 | 0 | 0.030303 | 1 | 0.090909 | false | 0 | 0.090909 | 0.030303 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
61afeb69068074cb5956e4d07ed30a7b753a8456 | 26 | py | Python | tests/apps/nlsupporting/components/news/__init__.py | blazelibs/blazeweb | b120a6a2e38c8b53da2b73443ff242e2d1438053 | [
"BSD-3-Clause"
] | null | null | null | tests/apps/nlsupporting/components/news/__init__.py | blazelibs/blazeweb | b120a6a2e38c8b53da2b73443ff242e2d1438053 | [
"BSD-3-Clause"
] | 6 | 2016-11-01T18:42:34.000Z | 2020-11-16T16:52:14.000Z | tests/apps/nlsupporting/components/news/__init__.py | blazelibs/blazeweb | b120a6a2e38c8b53da2b73443ff242e2d1438053 | [
"BSD-3-Clause"
] | 1 | 2020-01-22T18:20:46.000Z | 2020-01-22T18:20:46.000Z |
def somefunc():
pass
| 6.5 | 15 | 0.576923 | 3 | 26 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307692 | 26 | 3 | 16 | 8.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
61c304e198c17582154f329f076a445408cf69a6 | 125 | py | Python | tests/optimizers/__init__.py | kpe/params-flow | 5857dfd67cf15e89803e5987316c81fe6c4cc54d | [
"MIT"
] | 23 | 2019-06-08T17:24:05.000Z | 2022-01-16T17:53:15.000Z | tests/optimizers/__init__.py | kpe/params-flow | 5857dfd67cf15e89803e5987316c81fe6c4cc54d | [
"MIT"
] | null | null | null | tests/optimizers/__init__.py | kpe/params-flow | 5857dfd67cf15e89803e5987316c81fe6c4cc54d | [
"MIT"
] | 4 | 2019-09-29T07:25:24.000Z | 2020-11-25T15:02:15.000Z | # coding=utf-8
#
# created by kpe on 16.Aug.2019 at 14:58
#
from __future__ import absolute_import, division, print_function | 20.833333 | 64 | 0.768 | 21 | 125 | 4.285714 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102804 | 0.144 | 125 | 6 | 64 | 20.833333 | 0.738318 | 0.408 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
f610e1ec52181a82290c1e93ef97aa230bb1f8c8 | 43 | py | Python | asteroid/filterbanks/enc_dec.py | julien-c/asteroid | 77d1b744017408b8bf4f1812e949c3c3aa4b16d3 | [
"MIT"
] | 1 | 2021-02-22T21:55:40.000Z | 2021-02-22T21:55:40.000Z | asteroid/filterbanks/enc_dec.py | julien-c/asteroid | 77d1b744017408b8bf4f1812e949c3c3aa4b16d3 | [
"MIT"
] | null | null | null | asteroid/filterbanks/enc_dec.py | julien-c/asteroid | 77d1b744017408b8bf4f1812e949c3c3aa4b16d3 | [
"MIT"
] | 1 | 2021-05-10T08:58:35.000Z | 2021-05-10T08:58:35.000Z | from asteroid_filterbanks.enc_dec import *
| 21.5 | 42 | 0.860465 | 6 | 43 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f65fc3be6ac9aa0fc54339dc862bd18e12db68ec | 6,264 | py | Python | nca47/api/controllers/v1/firewall/net_service.py | WosunOO/nca_xianshu | bbb548cb67b755a57528796d4c5a66ee68df2678 | [
"Apache-2.0"
] | null | null | null | nca47/api/controllers/v1/firewall/net_service.py | WosunOO/nca_xianshu | bbb548cb67b755a57528796d4c5a66ee68df2678 | [
"Apache-2.0"
] | null | null | null | nca47/api/controllers/v1/firewall/net_service.py | WosunOO/nca_xianshu | bbb548cb67b755a57528796d4c5a66ee68df2678 | [
"Apache-2.0"
] | null | null | null | from nca47.manager.central import CentralManager
from nca47.common.exception import BadRequest
from nca47.common.exception import ParamFormatError
from nca47.common.exception import Nca47Exception
from oslo_log import log
from nca47.common.i18n import _LI, _LE
from oslo_serialization import jsonutils as json
from nca47.api.controllers.v1 import tools
from oslo_messaging import RemoteError
from nca47.api.controllers.v1 import base

LOG = log.getLogger(__name__)


class NetServiceController(base.BaseRestController):
    """Controller for NetService operations"""
def __init__(self):
self.manager = CentralManager.get_instance()
super(NetServiceController, self).__init__()

    def create(self, req, *args, **kwargs):
try:
url = req.url
if len(args) != 1:
raise BadRequest(resource="netservice operation", msg=url)
json_body = req.body
context = req.context
dic = json.loads(json_body)
LOG.info(_LI("create_netservice body is %(json)s,args is "
"%(args)s, kwargs is %(kwargs)s"),
{"json": dic, "args": args, "kwargs": kwargs})
list_ = ['tenant_id', 'dc_name', 'proto',
'network_zone', 'port', 'vfwname']
dic_body = self.firewall_params(dic, list_)
response = self.manager.creat_netservice(context, dic_body)
return response
except Nca47Exception as e:
LOG.error(_LE('Nca47Exception! error info: ' + e.message))
self.response.status = e.code
return tools.ret_info(e.code, e.message)
except RemoteError as e:
self.response.status = 500
return tools.ret_info(self.response.status, e.value)
except Exception as e:
LOG.error(_LE('Exception! error info: ' + e.message))
self.response.status = 500
return tools.ret_info(self.response.status, e.message)

    def remove(self, req, *args, **kwargs):
try:
url = req.url
if len(args) != 1:
raise BadRequest(resource="netservice operation", msg=url)
json_body = req.body
context = req.context
dic = json.loads(json_body)
LOG.info(_LI("del_netservice body is %(json)s, args is %(args)s,"
"kwargs is %(kwargs)s"),
{"json": dic, "args": args, "kwargs": kwargs})
list_ = ['id', 'tenant_id', 'dc_name', 'network_zone']
dic_body = self.firewall_params(dic, list_)
response = self.manager.del_netservice(context, dic_body)
return response
except Nca47Exception as e:
LOG.error(_LE('Nca47Exception! error info: ' + e.message))
self.response.status = e.code
return tools.ret_info(e.code, e.message)
except RemoteError as e:
self.response.status = 500
return tools.ret_info(self.response.status, e.value)
except Exception as e:
LOG.error(_LE('Exception! error info: ' + e.message))
self.response.status = 500
return tools.ret_info(self.response.status, e.message)

    def show(self, req, *args, **kwargs):
try:
url = req.url
if len(args) != 1:
raise BadRequest(resource="netservice operation", msg=url)
json_body = req.body
context = req.context
dic = json.loads(json_body)
            LOG.info(_LI("get_netservice body is %(json)s, args is %(args)s,"
"kwargs is %(kwargs)s"),
{"json": dic, "args": args, "kwargs": kwargs})
list_ = ['id']
dic_body = self.firewall_params(dic, list_)
response = self.manager.get_netservice(context, dic_body)
return response
except Nca47Exception as e:
LOG.error(_LE('Nca47Exception! error info: ' + e.message))
self.response.status = e.code
return tools.ret_info(e.code, e.message)
except RemoteError as e:
self.response.status = 500
return tools.ret_info(self.response.status, e.value)
except Exception as e:
LOG.error(_LE('Exception! error info: ' + e.message))
self.response.status = 500
return tools.ret_info(self.response.status, e.message)

    def list(self, req, *args, **kwargs):
try:
url = req.url
if len(args) != 1:
raise BadRequest(resource="netservice operation", msg=url)
context = req.context
json_body = req.body
dic = json.loads(json_body)
            LOG.info(_LI("get_netservices body is %(json)s, args is %(args)s,"
"kwargs is %(kwargs)s"),
{"json": dic, "args": args, "kwargs": kwargs})
list_ = ['tenant_id', 'dc_name', 'network_zone', 'vfwname']
dic_body = self.firewall_params(dic, list_)
response = self.manager.get_netservices(context, dic_body)
return response
except Nca47Exception as e:
LOG.error(_LE('Nca47Exception! error info: ' + e.message))
self.response.status = e.code
return tools.ret_info(e.code, e.message)
except RemoteError as e:
self.response.status = 500
return tools.ret_info(self.response.status, e.value)
except Exception as e:
LOG.error(_LE('Exception! error info: ' + e.message))
self.response.status = 500
return tools.ret_info(self.response.status, e.message)

    def firewall_params(self, dic, list_):
dic = tools.filter_string_not_null(dic, list_)
dic_key = dic.keys()
for key in dic_key:
val_key = dic[key]
if key == "proto":
if not tools.is_proto_range(val_key):
raise ParamFormatError(param_name=key)
elif key == "port":
if not tools._is_valid_port_range(val_key):
raise ParamFormatError(param_name=key)
else:
continue
return dic
| 42.90411 | 77 | 0.576948 | 739 | 6,264 | 4.745602 | 0.142084 | 0.068435 | 0.102652 | 0.065013 | 0.807528 | 0.781865 | 0.753921 | 0.753921 | 0.728828 | 0.718848 | 0 | 0.014943 | 0.316252 | 6,264 | 145 | 78 | 43.2 | 0.803876 | 0.004949 | 0 | 0.664179 | 0 | 0 | 0.118034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0 | 0.074627 | 0 | 0.253731 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9cd32f9ccc077dcb484715fdcc8d31922ff8b20e | 146 | py | Python | genshin/models/genshin/chronicle/__init__.py | thesadru/genshin.py | 806b8d0dd059a06605e66dead917fdf550a552bc | [
"MIT"
] | 63 | 2021-10-04T19:53:54.000Z | 2022-03-30T07:21:03.000Z | genshin/models/genshin/chronicle/__init__.py | thesadru/genshin.py | 806b8d0dd059a06605e66dead917fdf550a552bc | [
"MIT"
] | 17 | 2021-11-16T20:42:52.000Z | 2022-03-31T10:11:52.000Z | genshin/models/genshin/chronicle/__init__.py | thesadru/genshin.py | 806b8d0dd059a06605e66dead917fdf550a552bc | [
"MIT"
] | 10 | 2021-10-16T22:41:41.000Z | 2022-02-19T17:55:23.000Z | """Battle chronicle models."""
from .abyss import *
from .activities import *
from .characters import *
from .notes import *
from .stats import *
| 20.857143 | 30 | 0.726027 | 18 | 146 | 5.888889 | 0.555556 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157534 | 146 | 6 | 31 | 24.333333 | 0.861789 | 0.164384 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9ce81dba0e41c4b5a99b2d763bbe767c0ee742de | 4,839 | py | Python | tests/test_read_records_pipeline.py | mitcheccles/beam-mysql-connector | f94ad258b66aba4eff4f7d3d276e56864404a777 | [
"MIT"
] | 12 | 2020-06-24T04:16:11.000Z | 2022-01-10T23:51:21.000Z | tests/test_read_records_pipeline.py | mitcheccles/beam-mysql-connector | f94ad258b66aba4eff4f7d3d276e56864404a777 | [
"MIT"
] | 7 | 2020-04-30T11:06:36.000Z | 2021-12-02T15:56:20.000Z | tests/test_read_records_pipeline.py | mitcheccles/beam-mysql-connector | f94ad258b66aba4eff4f7d3d276e56864404a777 | [
"MIT"
] | 10 | 2020-06-24T04:16:14.000Z | 2022-01-10T23:51:12.000Z | """A test of read records pipeline."""
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that
from apache_beam.testing.util import equal_to
from datetime import date
from beam_mysql.connector import splitters
from beam_mysql.connector.io import ReadFromMySQL
from tests.test_base import TestBase


class TestReadRecordsPipeline(TestBase):
def test_pipeline_no_splitter(self):
expected = [
{"id": 1, "name": "test data1", "date": date(2020, 1, 1), "memo": "memo1"},
{"id": 2, "name": "test data2", "date": date(2020, 2, 2), "memo": None},
{"id": 3, "name": "test data3", "date": date(2020, 3, 3), "memo": "memo3"},
{"id": 4, "name": "test data4", "date": date(2020, 4, 4), "memo": None},
{"id": 5, "name": "test data5", "date": date(2020, 5, 5), "memo": None},
]

        with TestPipeline() as p:
# Access to mysql on docker
read_from_mysql = ReadFromMySQL(
query="SELECT * FROM test_db.tests;",
host="0.0.0.0",
database="test_db",
user="root",
password="root",
port=3307,
splitter=splitters.NoSplitter(),
)

            actual = p | read_from_mysql
assert_that(actual, equal_to(expected))

    def test_pipeline_limit_offset_splitter(self):
expected = [
{"id": 1, "name": "test data1", "date": date(2020, 1, 1), "memo": "memo1"},
{"id": 2, "name": "test data2", "date": date(2020, 2, 2), "memo": None},
{"id": 3, "name": "test data3", "date": date(2020, 3, 3), "memo": "memo3"},
{"id": 4, "name": "test data4", "date": date(2020, 4, 4), "memo": None},
{"id": 5, "name": "test data5", "date": date(2020, 5, 5), "memo": None},
]

        with TestPipeline() as p:
# Access to mysql on docker
read_from_mysql = ReadFromMySQL(
query="SELECT * FROM test_db.tests;",
host="0.0.0.0",
database="test_db",
user="root",
password="root",
port=3307,
splitter=splitters.LimitOffsetSplitter(),
)
actual = p | read_from_mysql
assert_that(actual, equal_to(expected))
def test_pipeline_ids_splitter(self):
expected = [
{"id": 1, "name": "test data1", "date": date(2020, 1, 1), "memo": "memo1"},
{"id": 2, "name": "test data2", "date": date(2020, 2, 2), "memo": None},
]
with TestPipeline() as p:
# Connect to the MySQL instance running in Docker
read_from_mysql = ReadFromMySQL(
query="SELECT * FROM test_db.tests WHERE id IN ({ids});",
host="0.0.0.0",
database="test_db",
user="root",
password="root",
port=3307,
splitter=splitters.IdsSplitter(generate_ids_fn=lambda: [1, 2]),
)
actual = p | read_from_mysql
assert_that(actual, equal_to(expected))
def test_pipeline_date_splitter(self):
expected = [
{"id": 1, "name": "test data1", "date": date(2020, 1, 1), "memo": "memo1"},
{"id": 2, "name": "test data2", "date": date(2020, 2, 2), "memo": None},
{"id": 3, "name": "test data3", "date": date(2020, 3, 3), "memo": "memo3"},
]
with TestPipeline() as p:
# Connect to the MySQL instance running in Docker
read_from_mysql = ReadFromMySQL(
query="SELECT * FROM test_db.tests WHERE date BETWEEN '2020-01-01' AND '2020-03-03';",
host="0.0.0.0",
database="test_db",
user="root",
password="root",
port=3307,
splitter=splitters.DateSplitter(),
)
actual = p | read_from_mysql
assert_that(actual, equal_to(expected))
def test_pipeline_partitions_splitter(self):
expected = [
{"id": 2, "name": "test data2", "date": date(2020, 2, 2), "memo": None},
{"id": 3, "name": "test data3", "date": date(2020, 3, 3), "memo": "memo3"},
]
with TestPipeline() as p:
# Connect to the MySQL instance running in Docker
read_from_mysql = ReadFromMySQL(
query="SELECT * FROM test_db.tests PARTITION (p202002,p202003);",
host="0.0.0.0",
database="test_db",
user="root",
password="root",
port=3307,
splitter=splitters.PartitionSplitter(),
)
actual = p | read_from_mysql
assert_that(actual, equal_to(expected))
| 37.223077 | 102 | 0.508576 | 554 | 4,839 | 4.3213 | 0.162455 | 0.056809 | 0.085213 | 0.045948 | 0.800334 | 0.800334 | 0.774436 | 0.774436 | 0.774436 | 0.774436 | 0 | 0.067085 | 0.340773 | 4,839 | 129 | 103 | 37.511628 | 0.683386 | 0.033685 | 0 | 0.69 | 0 | 0.01 | 0.170381 | 0 | 0 | 0 | 0 | 0 | 0.06 | 1 | 0.05 | false | 0.05 | 0.07 | 0 | 0.13 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9cf0d2fd8d2a324d6822b3ab3dd4703306daafd6 | 158 | py | Python | home/views.py | jeanmariepm/vipapi | 417d60677ed6b29515fc9c9f172d66955bd0f5ba | [
"Apache-2.0"
] | null | null | null | home/views.py | jeanmariepm/vipapi | 417d60677ed6b29515fc9c9f172d66955bd0f5ba | [
"Apache-2.0"
] | null | null | null | home/views.py | jeanmariepm/vipapi | 417d60677ed6b29515fc9c9f172d66955bd0f5ba | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
def index(request):
return HttpResponse('Use vipveed.herokuapp.com to access')
| 22.571429 | 62 | 0.797468 | 21 | 158 | 6 | 0.809524 | 0.15873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132911 | 158 | 6 | 63 | 26.333333 | 0.919708 | 0 | 0 | 0 | 0 | 0 | 0.221519 | 0.132911 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
14008fa428a9b66d386222cb82605bc2da0503cb | 159 | py | Python | src/test_vpn_switcher.py | baumartig/vpn_switcher | 0c096159b032fe3a845866dd7bc3dcd40ff7c9f2 | [
"Apache-2.0"
] | null | null | null | src/test_vpn_switcher.py | baumartig/vpn_switcher | 0c096159b032fe3a845866dd7bc3dcd40ff7c9f2 | [
"Apache-2.0"
] | null | null | null | src/test_vpn_switcher.py | baumartig/vpn_switcher | 0c096159b032fe3a845866dd7bc3dcd40ff7c9f2 | [
"Apache-2.0"
] | null | null | null | from vpn_switcher import check_country
print("Test country")
print("result: us is %s" % check_country('us'))
print("result: de is %s" % check_country('de'))
| 26.5 | 48 | 0.710692 | 25 | 159 | 4.36 | 0.52 | 0.330275 | 0.146789 | 0.275229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138365 | 159 | 5 | 49 | 31.8 | 0.79562 | 0 | 0 | 0 | 0 | 0 | 0.301887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.75 | 1 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
148e9ceda6f8e757f7ebace0343b2dbdd7ef6fe7 | 109 | py | Python | tests/utils.py | maxme1/dpipe_transpiler | dcf67a8a2bd1b2a47370b02719f36589e1e6d6d5 | [
"MIT"
] | 8 | 2021-04-03T08:13:12.000Z | 2022-01-17T12:36:46.000Z | tests/utils.py | maxme1/dpipe_transpiler | dcf67a8a2bd1b2a47370b02719f36589e1e6d6d5 | [
"MIT"
] | 41 | 2017-10-24T19:16:14.000Z | 2020-11-16T18:13:21.000Z | tests/utils.py | maxme1/lazycon | 9a898bedeb0e7af506dad1f73a8f68062414b00d | [
"MIT"
] | 1 | 2019-09-27T14:22:09.000Z | 2019-09-27T14:22:09.000Z | from pathlib import Path
import pytest
@pytest.fixture
def tests_path():
return Path(__file__).parent
| 12.111111 | 32 | 0.761468 | 15 | 109 | 5.2 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165138 | 109 | 8 | 33 | 13.625 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
14ad4f16cbfa876994c55a9b6ae02c17ff8c4518 | 22,097 | py | Python | tests/test_fileguard.py | iluxonchik/fileguard | 73854b783faadb8e5370c407b897049131a42e50 | [
"MIT"
] | null | null | null | tests/test_fileguard.py | iluxonchik/fileguard | 73854b783faadb8e5370c407b897049131a42e50 | [
"MIT"
] | 4 | 2019-07-28T13:26:18.000Z | 2019-07-28T14:13:57.000Z | tests/test_fileguard.py | iluxonchik/fileguard | 73854b783faadb8e5370c407b897049131a42e50 | [
"MIT"
] | null | null | null | import unittest
import os
import shutil
import filecmp
from pathlib import Path
from unittest.mock import Mock
from fileguard.fileguard import guard
class TestFileGuardDecorator(unittest.TestCase):
TEST_TEXT_FILE_PATH = './tests/resources/test_text_file.txt'
TEST_FILE_CONTENTS = ['would\n', 'you do it\n', 'if my name was\n', 'dre\n']
def setUp(self):
with open(TestFileGuardDecorator.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(TestFileGuardDecorator.TEST_FILE_CONTENTS)
def tearDown(self):
try:
os.remove(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
except FileNotFoundError:
pass
def _assert_file_content_equals(self, lines):
with open(TestFileGuardDecorator.TEST_TEXT_FILE_PATH, 'r') as file:
file_contents = file.readlines()
self.assertEqual(len(lines), len(file_contents))
for i in range(len(lines)):
self.assertEqual(lines[i], file_contents[i], f'File differs in line {i}')
def test_guard_change_text_file(self):
@guard(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
def function_that_changes_the_file():
lines_to_write = ['of course\n', 'I would\n']
self._assert_file_content_equals(TestFileGuardDecorator.TEST_FILE_CONTENTS)
with open(TestFileGuardDecorator.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(lines_to_write)
function_that_changes_the_file()
self._assert_file_content_equals(TestFileGuardDecorator.TEST_FILE_CONTENTS)
def test_guard_if_function_throws_exception_text_file(self):
@guard(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
def function_that_changes_the_file():
lines_to_write = ['of course\n', 'I would\n']
self._assert_file_content_equals(TestFileGuardDecorator.TEST_FILE_CONTENTS)
with open(TestFileGuardDecorator.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(lines_to_write)
raise Exception('Something happened #Windows10')
with self.assertRaises(Exception):
function_that_changes_the_file()
self._assert_file_content_equals(TestFileGuardDecorator.TEST_FILE_CONTENTS)
def test_guard_if_function_not_changes_text_file(self):
@guard(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
def function_that_changes_the_file():
pass
function_that_changes_the_file()
self._assert_file_content_equals(TestFileGuardDecorator.TEST_FILE_CONTENTS)
def test_guard_deleted_file_restores_contents_test_file(self):
@guard(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
def function_that_deletes_the_file():
os.remove(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
# make sure that the file does not exist after running the function
is_file = path.is_file()
self.assertFalse(is_file, 'File does exist.')
# make sure that the file exists before calling the function
path = Path(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
is_file = path.is_file()
self.assertTrue(is_file, 'File does not exist.')
function_that_deletes_the_file()
self._assert_file_content_equals(TestFileGuardDecorator.TEST_FILE_CONTENTS)
def test_that_decorator_calls_function(self):
value_1 = 'uno'
value_2 = 'dos'
mocked_func = Mock()
pre_decorated = guard(TestFileGuardDecorator.TEST_TEXT_FILE_PATH)
decorated = pre_decorated(mocked_func)
result = decorated(value_1, value_2)
mocked_func.assert_called_once_with(value_1, value_2)
class TestFileGuardContextManager(unittest.TestCase):
TEST_TEXT_FILE_PATH = './tests/resources/test_text_file.txt'
TEST_FILE_CONTENTS = ['would\n', 'you do it\n', 'if my name was\n', 'dre\n']
def setUp(self):
with open(TestFileGuardContextManager.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(TestFileGuardContextManager.TEST_FILE_CONTENTS)
def tearDown(self):
try:
os.remove(TestFileGuardContextManager.TEST_TEXT_FILE_PATH)
except FileNotFoundError:
pass
def _assert_file_content_equals(self, lines):
with open(TestFileGuardContextManager.TEST_TEXT_FILE_PATH, 'r') as file:
file_contents = file.readlines()
self.assertEqual(len(lines), len(file_contents))
for i in range(len(lines)):
self.assertEqual(lines[i], file_contents[i], f'File differs in line {i}')
def test_guard_change_text_file(self):
with guard(TestFileGuardContextManager.TEST_TEXT_FILE_PATH):
lines_to_write = ['of course\n', 'I would\n']
self._assert_file_content_equals(TestFileGuardContextManager.TEST_FILE_CONTENTS)
with open(TestFileGuardContextManager.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(lines_to_write)
self._assert_file_content_equals(TestFileGuardContextManager.TEST_FILE_CONTENTS)
class TestFileGuardClassDecorator(unittest.TestCase):
TEST_TEXT_FILE_PATH = './tests/resources/test_text_file.txt'
TEST_FILE_CONTENTS = ['would\n', 'you do it\n', 'if my name was\n', 'dre\n']
@guard('./tests/resources/test_text_file.txt')
class TheCodeumentary(object):
TEXT_1 = ['The\n', 'Documentary\n']
TEXT_2 = ['The\n', 'Doctor\'s\n', 'Advocate']
TEXT_3 = ['appended\n', 'content\n']
def __init__(_self, value_1, value_2, test_case):
_self._value_1 = value_1
_self._value_2 = value_2
_self._test_case = test_case # allow asserts in methods
@property
def value_1(self):
return self._value_1
def change_the_file(_self):
lines_to_write = TestFileGuardClassDecorator.TheCodeumentary.TEXT_1
with open(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(lines_to_write)
_self._test_case._assert_file_content_equals(TestFileGuardClassDecorator.TheCodeumentary.TEXT_1)
def change_the_file_again(_self):
lines_to_write = TestFileGuardClassDecorator.TheCodeumentary.TEXT_2
with open(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(lines_to_write)
_self._test_case._assert_file_content_equals(TestFileGuardClassDecorator.TheCodeumentary.TEXT_2)
def do_not_change_the_file(_self, value):
_self._value_3 = 'L.A.X.'
_self._value_3 = _self._value_3 + value
def append_text(_self):
lines_to_write = TestFileGuardClassDecorator.TheCodeumentary.TEXT_3
with open(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH, 'a') as file:
file.writelines(lines_to_write)
expected_content = TestFileGuardClassDecorator.TEST_FILE_CONTENTS + lines_to_write + lines_to_write
_self._test_case._assert_file_content_equals(expected_content)
def nested_write_call(_self):
lines_to_write = TestFileGuardClassDecorator.TheCodeumentary.TEXT_3
with open(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH, 'a') as file:
file.writelines(lines_to_write)
expected_content = TestFileGuardClassDecorator.TEST_FILE_CONTENTS + lines_to_write
_self._test_case._assert_file_content_equals(expected_content)
_self.append_text()
_self._test_case._assert_file_content_equals(expected_content)
def setUp(self):
with open(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH, 'w') as file:
file.writelines(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
def tearDown(self):
try:
os.remove(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH)
except FileNotFoundError:
pass
def _assert_file_content_equals(self, lines):
with open(TestFileGuardClassDecorator.TEST_TEXT_FILE_PATH, 'r') as file:
file_contents = file.readlines()
self.assertEqual(len(lines), len(file_contents))
for i in range(len(lines)):
self.assertEqual(lines[i], file_contents[i], f'File differs in line {i}')
def test_decorated_class_restores_file_contents(self):
the_codeumentary = TestFileGuardClassDecorator.TheCodeumentary('value_1', 'value_2', self)
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
the_codeumentary.change_the_file()
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
the_codeumentary.change_the_file_again()
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
the_codeumentary.do_not_change_the_file('hello')
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
the_codeumentary.value_1
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
the_codeumentary._value_2 = 'value_3'
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
def test_fileguarded_callables_calling_fileguarded_callables(self):
"""
Make sure that when a decorated callable calls another decorated callable
within a class, everything works as expected. Internally, a stack is
used.
"""
the_codeumentary = TestFileGuardClassDecorator.TheCodeumentary('value_1', 'value_2', self)
the_codeumentary.nested_write_call()
self._assert_file_content_equals(TestFileGuardClassDecorator.TEST_FILE_CONTENTS)
class TestFileGuardDecoratorWithBinaryFiles(unittest.TestCase):
"""
Make sure that non-text files are guarded as well.
"""
RESOURCES_PATH = './tests/resources/'
FROZEN_RESOURCES_PATH = './tests/frozen_resources/'
EXECUTABLE_BINARY_NAME = 'encrypt_harddrive_with_ransomware'
IMAGE_NAME = 'kitty_cent.jpg'
def setUp(self):
self._files_to_remove = []
# copy files from frozen_resources to resources
binary_src = os.path.join(self.FROZEN_RESOURCES_PATH, self.EXECUTABLE_BINARY_NAME)
binary_dst = os.path.join(self.RESOURCES_PATH, self.EXECUTABLE_BINARY_NAME)
img_src = os.path.join(self.FROZEN_RESOURCES_PATH, self.IMAGE_NAME)
img_dst = os.path.join(self.RESOURCES_PATH, self.IMAGE_NAME)
self._files_to_remove.append(binary_dst)
self._files_to_remove.append(img_dst)
shutil.copy(binary_src, binary_dst)
shutil.copy(img_src, img_dst)
def tearDown(self):
for file in self._files_to_remove:
try:
os.remove(file)
except FileNotFoundError:
pass
def _assert_file_content_equals(self, path, lines):
with open(path, 'r') as file:
file_contents = file.readlines()
self.assertEqual(len(lines), len(file_contents))
for i in range(len(lines)):
self.assertEqual(lines[i], file_contents[i], f'File differs in line {i}')
def _assert_files_equal(self, path_1, path_2):
is_files_equal = filecmp.cmp(path_1, path_2)
self.assertTrue(is_files_equal, f'Files {path_1} and {path_2} are not equal')
def test_binary_file_content_restored_on_change(self):
binary_frozen = os.path.join(self.FROZEN_RESOURCES_PATH, self.EXECUTABLE_BINARY_NAME)
binary_path = os.path.join(self.RESOURCES_PATH, self.EXECUTABLE_BINARY_NAME)
self._assert_files_equal(binary_path, binary_frozen)
@guard(binary_path)
def function_that_changes_the_file():
lines_to_write = ['hello, world\n']
with open(binary_path, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(binary_path, lines_to_write)
function_that_changes_the_file()
self._assert_files_equal(binary_path, binary_frozen)
def test_binary_file_content_restored_on_delete(self):
binary_frozen = os.path.join(self.FROZEN_RESOURCES_PATH, self.EXECUTABLE_BINARY_NAME)
binary_path = os.path.join(self.RESOURCES_PATH, self.EXECUTABLE_BINARY_NAME)
self._assert_files_equal(binary_path, binary_frozen)
@guard(binary_path)
def function_that_removes_the_file():
os.remove(binary_path)
p = Path(binary_path)
self.assertFalse(p.is_file(), 'File was not removed')
function_that_removes_the_file()
self._assert_files_equal(binary_path, binary_frozen)
def test_image_file_content_restored_on_change(self):
image_frozen = os.path.join(self.FROZEN_RESOURCES_PATH, self.IMAGE_NAME)
image_path = os.path.join(self.RESOURCES_PATH, self.IMAGE_NAME)
self._assert_files_equal(image_path, image_frozen)
@guard(image_path)
def function_that_changes_the_file():
lines_to_write = ['hello, world\n']
with open(image_path, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(image_path, lines_to_write)
function_that_changes_the_file()
self._assert_files_equal(image_path, image_frozen)
def test_image_file_content_restored_on_delete(self):
image_frozen = os.path.join(self.FROZEN_RESOURCES_PATH, self.IMAGE_NAME)
image_path = os.path.join(self.RESOURCES_PATH, self.IMAGE_NAME)
self._assert_files_equal(image_path, image_frozen)
@guard(image_path)
def function_that_removes_the_file():
os.remove(image_path)
p = Path(image_path)
self.assertFalse(p.is_file(), 'File was not removed')
function_that_removes_the_file()
self._assert_files_equal(image_path, image_frozen)
class TestFileGuardDecoratorMultipleFiles(unittest.TestCase):
TEST_TEXT_FILE_1_PATH = './tests/resources/test_text_file_1.txt'
TEST_FILE_1_CONTENTS = ['would\n', 'you do it\n', 'if my name was\n', 'dre\n']
TEST_TEXT_FILE_2_PATH = './tests/resources/test_text_file_2.txt'
TEST_FILE_2_CONTENTS = ['throw\n', 'it up\n', 'for the king\n', 'of L.A.\n']
def setUp(self):
with open(self.TEST_TEXT_FILE_1_PATH, 'w') as file:
file.writelines(self.TEST_FILE_1_CONTENTS)
with open(self.TEST_TEXT_FILE_2_PATH, 'w') as file:
file.writelines(self.TEST_FILE_2_CONTENTS)
def tearDown(self):
try:
os.remove(self.TEST_TEXT_FILE_1_PATH)
except FileNotFoundError:
pass
try:
os.remove(self.TEST_TEXT_FILE_2_PATH)
except FileNotFoundError:
pass
def _assert_file_content_equals(self, path, lines):
with open(path, 'r') as file:
file_contents = file.readlines()
self.assertEqual(len(lines), len(file_contents))
for i in range(len(lines)):
self.assertEqual(lines[i], file_contents[i], f'File differs in line {i}')
def test_guard_change_two_text_files(self):
@guard(TestFileGuardDecoratorMultipleFiles.TEST_TEXT_FILE_2_PATH)
@guard(TestFileGuardDecoratorMultipleFiles.TEST_TEXT_FILE_1_PATH)
def function_that_changes_the_file():
lines_to_write = ['of course\n', 'I would\n']
with open(self.TEST_TEXT_FILE_1_PATH, 'w') as file:
file.writelines(lines_to_write)
with open(self.TEST_TEXT_FILE_2_PATH, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, lines_to_write)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, lines_to_write)
function_that_changes_the_file()
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
def test_guard_change_two_text_files_multiple_arguments(self):
@guard(TestFileGuardDecoratorMultipleFiles.TEST_TEXT_FILE_1_PATH, TestFileGuardDecoratorMultipleFiles.TEST_TEXT_FILE_2_PATH)
def function_that_changes_the_file():
lines_to_write = ['of course\n', 'I would\n']
with open(self.TEST_TEXT_FILE_1_PATH, 'w') as file:
file.writelines(lines_to_write)
with open(self.TEST_TEXT_FILE_2_PATH, 'w') as file:
file.writelines(lines_to_write)
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, lines_to_write)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, lines_to_write)
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
function_that_changes_the_file()
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
class TestFileGuardDecoratorDirectory(unittest.TestCase):
DIRECTORY_PATH = './tests/resources/dir_to_guard/'
TEST_TEXT_FILE_1_PATH = os.path.join('./tests/resources/dir_to_guard/', 'test_text_file_1.txt')
TEST_FILE_1_CONTENTS = ['would\n', 'you do it\n', 'if my name was\n', 'dre\n']
TEST_TEXT_FILE_2_PATH = os.path.join('./tests/resources/dir_to_guard/', 'test_text_file_2.txt')
TEST_FILE_2_CONTENTS = ['throw\n', 'it up\n', 'for the king\n', 'of L.A.\n']
TEST_TEXT_FILE_3_PATH = './tests/resources/test_text_file_1.txt'
TEST_FILE_3_CONTENTS = ['still\n', 'dre\n', 'day\n']
def setUp(self):
if not os.path.exists(self.DIRECTORY_PATH):
os.makedirs(self.DIRECTORY_PATH)
with open(self.TEST_TEXT_FILE_1_PATH, 'w') as file:
file.writelines(self.TEST_FILE_1_CONTENTS)
with open(self.TEST_TEXT_FILE_2_PATH, 'w') as file:
file.writelines(self.TEST_FILE_2_CONTENTS)
with open(self.TEST_TEXT_FILE_3_PATH, 'w') as file:
file.writelines(self.TEST_FILE_3_CONTENTS)
def tearDown(self):
shutil.rmtree(self.DIRECTORY_PATH, ignore_errors=True)
try:
os.remove(self.TEST_TEXT_FILE_3_PATH)
except FileNotFoundError:
pass
def _assert_file_content_equals(self, path, lines):
with open(path, 'r') as file:
file_contents = file.readlines()
self.assertEqual(len(lines), len(file_contents))
for i in range(len(lines)):
self.assertEqual(lines[i], file_contents[i], f'File differs in line {i}')
def test_directory_contents_restored_on_file_change(self):
@guard(self.DIRECTORY_PATH)
def change_directory_contents():
lines_to_write = ['of course\n', 'I would\n']
with open(self.TEST_TEXT_FILE_1_PATH, 'w') as file:
file.writelines(lines_to_write)
os.remove(self.TEST_TEXT_FILE_2_PATH)
is_file_exists = os.path.exists(self.TEST_TEXT_FILE_2_PATH)
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, lines_to_write)
self.assertFalse(is_file_exists, 'File was not removed')
change_directory_contents()
dir = Path(self.DIRECTORY_PATH)
self.assertTrue(dir.is_dir(), 'Directory not found')
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
def test_directory_contents_restored_on_directory_delete(self):
@guard(self.DIRECTORY_PATH)
def delete_directory():
shutil.rmtree(self.DIRECTORY_PATH)
dir = Path(self.DIRECTORY_PATH)
self.assertFalse(dir.exists(), 'Path not deleted')
delete_directory()
dir = Path(self.DIRECTORY_PATH)
self.assertTrue(dir.is_dir(), 'Directory not found')
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
def test_directory_and_file_contents_preserved(self):
@guard(self.DIRECTORY_PATH, self.TEST_TEXT_FILE_3_PATH)
def change_directory_contents():
lines_to_write = ['of course\n', 'I would\n']
with open(self.TEST_TEXT_FILE_1_PATH, 'w') as file:
file.writelines(lines_to_write)
os.remove(self.TEST_TEXT_FILE_3_PATH)
os.remove(self.TEST_TEXT_FILE_2_PATH)
is_file_exists = os.path.exists(self.TEST_TEXT_FILE_2_PATH)
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, lines_to_write)
self.assertFalse(is_file_exists, 'File was not removed')
change_directory_contents()
dir = Path(self.DIRECTORY_PATH)
self.assertTrue(dir.is_dir(), 'Directory not found')
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_3_PATH, self.TEST_FILE_3_CONTENTS)
def test_directory_contents_restored_on_directory_unchanged(self):
@guard(self.DIRECTORY_PATH)
def do_nothing():
pass
do_nothing()
dir = Path(self.DIRECTORY_PATH)
self.assertTrue(dir.is_dir(), 'Directory not found')
self._assert_file_content_equals(self.TEST_TEXT_FILE_1_PATH, self.TEST_FILE_1_CONTENTS)
self._assert_file_content_equals(self.TEST_TEXT_FILE_2_PATH, self.TEST_FILE_2_CONTENTS)
| 38.971781 | 132 | 0.704349 | 2,890 | 22,097 | 4.958478 | 0.068858 | 0.050244 | 0.072017 | 0.083461 | 0.840684 | 0.804466 | 0.772156 | 0.734054 | 0.720237 | 0.674948 | 0 | 0.006933 | 0.210164 | 22,097 | 566 | 133 | 39.040636 | 0.814129 | 0.01688 | 0 | 0.647368 | 0 | 0 | 0.068725 | 0.018878 | 0 | 0 | 0 | 0 | 0.228947 | 1 | 0.152632 | false | 0.023684 | 0.018421 | 0.002632 | 0.247368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1af81b158f49dc7f133a039a51757a439af65349 | 148 | py | Python | minimalist_cms/cms_pagetree/admin/__init__.py | wullerot/django-minimalist-cms | bd6795d9647f9db1d98e83398238c0e63aca3c1b | [
"MIT"
] | null | null | null | minimalist_cms/cms_pagetree/admin/__init__.py | wullerot/django-minimalist-cms | bd6795d9647f9db1d98e83398238c0e63aca3c1b | [
"MIT"
] | null | null | null | minimalist_cms/cms_pagetree/admin/__init__.py | wullerot/django-minimalist-cms | bd6795d9647f9db1d98e83398238c0e63aca3c1b | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.contrib import admin
from .page import Page, PageAdmin
admin.site.register(Page, PageAdmin)
| 16.444444 | 39 | 0.817568 | 20 | 148 | 5.8 | 0.6 | 0.224138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128378 | 148 | 8 | 40 | 18.5 | 0.899225 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
211d1b16cbd0564c2ae7d23e0f11f645c3807143 | 158 | py | Python | base_attendance/models/__init__.py | ShaheenHossain/itpp-labs-misc-addons13 | bf62dc5bc1abdc18d78e9560a286babbe1d0e082 | [
"MIT"
] | null | null | null | base_attendance/models/__init__.py | ShaheenHossain/itpp-labs-misc-addons13 | bf62dc5bc1abdc18d78e9560a286babbe1d0e082 | [
"MIT"
] | null | null | null | base_attendance/models/__init__.py | ShaheenHossain/itpp-labs-misc-addons13 | bf62dc5bc1abdc18d78e9560a286babbe1d0e082 | [
"MIT"
] | 3 | 2020-08-25T01:57:59.000Z | 2021-09-11T15:38:02.000Z | # -*- coding: utf-8 -*-
# License MIT (https://opensource.org/licenses/MIT).
from . import res_attendance
from . import res_partner
from . import res_config
| 22.571429 | 52 | 0.721519 | 22 | 158 | 5.045455 | 0.681818 | 0.27027 | 0.351351 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007353 | 0.139241 | 158 | 6 | 53 | 26.333333 | 0.808824 | 0.455696 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
2133f77b30c718b226e5fd80463de915884e26bf | 260 | py | Python | stix_shifter_modules/guardium/stix_translation/data_mapper.py | remkohdev/stix-shifter | 9462f318a1ebf4cb0a560eed19f4d78ccbf09359 | [
"Apache-2.0"
] | null | null | null | stix_shifter_modules/guardium/stix_translation/data_mapper.py | remkohdev/stix-shifter | 9462f318a1ebf4cb0a560eed19f4d78ccbf09359 | [
"Apache-2.0"
] | null | null | null | stix_shifter_modules/guardium/stix_translation/data_mapper.py | remkohdev/stix-shifter | 9462f318a1ebf4cb0a560eed19f4d78ccbf09359 | [
"Apache-2.0"
] | null | null | null | from os import path
import json
from stix_shifter_utils.stix_translation.src.utils.exceptions import DataMappingException
from stix_shifter_utils.modules.base.stix_translation.base_data_mapper import BaseDataMapper
class DataMapper(BaseDataMapper):
pass
| 28.888889 | 92 | 0.869231 | 34 | 260 | 6.411765 | 0.588235 | 0.073395 | 0.137615 | 0.183486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088462 | 260 | 8 | 93 | 32.5 | 0.919831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.166667 | 0.666667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
2143b8e933a346260602ec30056f648eb66d2d86 | 27,997 | py | Python | pytests/gsi/ephemeral_gsi.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 14 | 2015-02-06T02:47:57.000Z | 2020-03-14T15:06:05.000Z | pytests/gsi/ephemeral_gsi.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 3 | 2019-02-27T19:29:11.000Z | 2021-06-02T02:14:27.000Z | pytests/gsi/ephemeral_gsi.py | sumedhpb/testrunner | 9ff887231c75571624abc31a3fb5248110e01203 | [
"Apache-2.0"
] | 108 | 2015-03-26T08:58:49.000Z | 2022-03-21T05:21:39.000Z | """ephemeral_gsi.py: description
o
__author__ = "Hemant Rajput"
__maintainer = "Hemant Rajput"
__email__ = "Hemant.Rajput@couchbase.com"
__git_user__ = "hrajput89"
__created_on__ = "15/07/21 1:43 pm"
"""
from concurrent.futures import ThreadPoolExecutor
from couchbase_helper.documentgenerator import SDKDataLoader
from couchbase_helper.query_definitions import QueryDefinition
from couchbase_helper.stats_tools import StatsCommon
from remote.remote_util import RemoteMachineShellConnection
from .base_gsi import BaseSecondaryIndexingTests
class EphemeralGSI(BaseSecondaryIndexingTests):
def setUp(self):
super(EphemeralGSI, self).setUp()
self.log.info("============== EphemeralGSI setup has started ==============")
self.process_to_kill = self.input.param('process_to_kill', None)
self.partition_fields = self.input.param('partition_fields', None)
self.num_partition = self.input.param('num_partition', 8)
self.rest.delete_all_buckets()
self.bucket_params = self._create_bucket_params(server=self.master, size=self.bucket_size,
replicas=self.num_replicas, bucket_type='ephemeral',
enable_replica_index=self.enable_replica_index,
eviction_policy=self.eviction_policy, lww=self.lww)
self.cluster.create_standard_bucket(name=self.test_bucket, port=11222,
bucket_params=self.bucket_params)
self.buckets = self.rest.get_buckets()
self.log.info("============== EphemeralGSI setup has completed ==============")
def tearDown(self):
self.log.info("============== EphemeralGSI tearDown has started ==============")
super(EphemeralGSI, self).tearDown()
self.log.info("============== EphemeralGSI tearDown has completed ==============")
def suite_tearDown(self):
pass
def suite_setUp(self):
pass
def test_gsi_on_ephemeral_bucket(self):
self.prepare_collection_for_indexing(num_of_docs_per_collection=self.num_of_docs_per_collection)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
index_gen = QueryDefinition(index_name='idx', index_fields=['age', 'country', 'city'])
meta_index_gen = QueryDefinition(index_name='meta_idx', index_fields=['meta().id'])
bucket_params = self._create_bucket_params(server=self.master, size=self.bucket_size,
replicas=self.num_replicas, bucket_type='membase',
enable_replica_index=self.enable_replica_index,
eviction_policy='valueOnly', lww=self.lww)
self.cluster.create_standard_bucket(name='default', port=11222,
bucket_params=bucket_params)
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
query = meta_index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = meta_index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
index_storage_mode = self.index_rest.get_index_storage_mode()
self.assertEqual(index_storage_mode, self.gsi_type)
select_query = f'Select * from {collection_namespace} where age >10 and country like "A%";'
select_meta_id_query = f'Select * from {collection} where meta().id like "doc_%";'
count_query = f'Select count(*) from {collection_namespace} where age >= 0;'
named_collection_query_context = f'default:{bucket}.{scope}'
with ThreadPoolExecutor() as executor:
select_task = executor.submit(self.run_cbq_query, query=select_query)
meta_task = executor.submit(self.run_cbq_query, query=select_meta_id_query,
query_context=named_collection_query_context)
count_task = executor.submit(self.run_cbq_query, query=count_query)
result = select_task.result()['results']
meta_id_result = meta_task.result()['results']
count_result = count_task.result()['results'][0]['$1']
self.assertTrue(len(result) > 0)
self.assertEqual(len(meta_id_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
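The keyspace parsing repeated at the top of each test could be factored into one small helper; this is a hypothetical sketch, assuming the `default:bucket.scope.collection` namespace format that the `split` calls above imply.

```python
# Hypothetical helper (not part of the suite): splits a namespace string of
# the assumed form "default:bucket.scope.collection" into its components.
def parse_namespace(namespace):
    _, keyspace = namespace.split(':')
    bucket, scope, collection = keyspace.split('.')
    return bucket, scope, collection

# Example: parse_namespace('default:test_bucket.scope_1.coll_1')
# returns ('test_bucket', 'scope_1', 'coll_1')
```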
def test_gsi_on_ephemeral_with_partial_KV_node(self):
kv_nodes = self.get_kv_nodes()
index_node = self.get_nodes_from_services_map(service_type="index")
if len(kv_nodes) < 2:
self.fail("This test requires at least 2 KV nodes")
self.prepare_collection_for_indexing(num_of_docs_per_collection=self.num_of_docs_per_collection)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
index_gen = QueryDefinition(index_name='idx', index_fields=['age', 'country', 'city'])
meta_index_gen = QueryDefinition(index_name='meta_idx', index_fields=['meta().id'])
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
query = meta_index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = meta_index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
select_query = f'Select * from {collection_namespace} where age >10 and country like "A%";'
select_meta_id_query = f'Select * from {collection} where meta().id like "doc_%";'
count_query = f'Select count(*) from {collection_namespace} where age >= 0;'
named_collection_query_context = f'default:{bucket}.{scope}'
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
# Blocking communication between one KV node and the index node to check that indexes catch up with mutations from the other KV node
kv_node_a, kv_node_b = kv_nodes
# defined before the try block so the finally clause below can always reference it
new_insert_docs_num = 10 ** 3
try:
self.block_incoming_network_from_node(kv_node_b, index_node)
self.sleep(10)
gen_create = SDKDataLoader(num_ops=new_insert_docs_num, percent_create=100, json_template="Person",
percent_update=0, percent_delete=0, scope=scope,
collection=collection, output=True,
start_seq_num=self.num_of_docs_per_collection + 1)
tasks = self.data_ops_javasdk_loader_in_batches(sdk_data_loader=gen_create, batch_size=10000)
self.sleep(20, "Giving some time for data insertion")
select_result_after_kv_block = self.run_cbq_query(query=select_query)['results']
meta_result_after_kv_block = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result_after_kv_block = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result_after_kv_block) > len(select_result),
"Query result not matching expected value")
self.assertTrue(len(meta_result_after_kv_block) > len(meta_result),
"Query result not matching expected value")
self.assertTrue(count_result_after_kv_block > count_result, "Query result not matching expected value")
for task in tasks:
out = task.result()
self.log.info(out)
except Exception as err:
self.fail(err)
finally:
self.resume_blocked_incoming_network_from_node(kv_node_b, index_node)
self.sleep(10)
# Checking that the indexer continues to process mutations after the KV node failover
failover_task = self.cluster.async_failover(self.servers[:self.nodes_init], failover_nodes=[kv_node_b],
graceful=self.graceful)
failover_task.result()
gen_create = SDKDataLoader(num_ops=new_insert_docs_num, percent_create=0, json_template="Person",
percent_update=0, percent_delete=100, scope=scope,
collection=collection, output=True,
start_seq_num=self.num_of_docs_per_collection + 1)
tasks = self.data_ops_javasdk_loader_in_batches(sdk_data_loader=gen_create, batch_size=10000)
for task in tasks:
out = task.result()
self.log.info(out)
select_result_after_failover = self.run_cbq_query(query=select_query)['results']
meta_result_after_failover = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result_after_failover = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertEqual(len(select_result_after_failover), len(select_result),
"Query result not matching expected value")
self.assertEqual(len(meta_result_after_failover), len(meta_result), "Query result not matching expected value")
self.assertEqual(count_result_after_failover, count_result, "Query result not matching expected value")
def test_gsi_on_ephemeral_with_server_restart(self):
self.prepare_collection_for_indexing(num_of_docs_per_collection=self.num_of_docs_per_collection)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
index_gen = QueryDefinition(index_name='idx', index_fields=['age', 'country', 'city'])
meta_index_gen = QueryDefinition(index_name='meta_idx', index_fields=['meta().id'])
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
query = meta_index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = meta_index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
select_query = f'Select * from {collection_namespace} where age >10 and country like "A%";'
select_meta_id_query = f'Select * from {collection} where meta().id like "doc_%";'
count_query = f'Select count(*) from {collection_namespace} where age >= 0;'
named_collection_query_context = f'default:{bucket}.{scope}'
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
# restarting couchbase services
shell = RemoteMachineShellConnection(self.master)
shell.restart_couchbase()
shell.disconnect()
self.sleep(30, "Waiting for server to be up again after restart")
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertEqual(len(select_result), 0)
self.assertEqual(len(meta_result), 0)
self.assertEqual(count_result, 0)
def test_gsi_on_ephemeral_with_bucket_flush(self):
self.prepare_collection_for_indexing(num_of_docs_per_collection=self.num_of_docs_per_collection)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
index_gen = QueryDefinition(index_name='idx', index_fields=['age', 'country', 'city'])
meta_index_gen = QueryDefinition(index_name='meta_idx', index_fields=['meta().id'])
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
query = meta_index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = meta_index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
select_query = f'Select * from {collection_namespace} where age >10 and country like "A%";'
select_meta_id_query = f'Select * from {collection} where meta().id like "doc_%";'
count_query = f'Select count(*) from {collection_namespace} where age >= 0;'
named_collection_query_context = f'default:{bucket}.{scope}'
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
# flushing the bucket
task = self.cluster.async_bucket_flush(server=self.master, bucket=self.test_bucket)
result = task.result()
self.log.info(result)
self.sleep(30, "Giving some time to indexer to update doc counts")
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertEqual(len(select_result), 0)
self.assertEqual(len(meta_result), 0)
self.assertEqual(count_result, 0)
new_inserts = 1000
gen_create = SDKDataLoader(num_ops=new_inserts, percent_create=100, json_template="Person",
percent_update=0, percent_delete=0, scope=scope,
collection=collection, output=True)
tasks = self.data_ops_javasdk_loader_in_batches(sdk_data_loader=gen_create, batch_size=1000)
for task in tasks:
out = task.result()
self.log.info(out)
self.sleep(30)
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), new_inserts)
self.assertEqual(count_result, new_inserts)
def test_gsi_on_ephemeral_with_process_kill(self):
index_nodes = self.get_nodes_from_services_map(service_type="index", get_all_nodes=True)
if len(index_nodes) < 2:
self.fail("This test requires at least 2 index nodes")
self.prepare_collection_for_indexing(num_of_docs_per_collection=self.num_of_docs_per_collection)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
index_gen = QueryDefinition(index_name='idx', index_fields=['age', 'country', 'city'])
meta_index_gen = QueryDefinition(index_name='meta_idx', index_fields=['meta().id'])
partition_fields = None
if self.partition_fields:
partition_fields = self.partition_fields.split(',')
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
partition_by_fields=partition_fields,
num_partition=self.num_partition,
num_replica=self.num_replicas)
self.run_cbq_query(query)
if self.defer_build:
build_query = index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
self.sleep(10)
query = meta_index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build,
partition_by_fields=partition_fields,
num_partition=self.num_partition,
num_replica=self.num_replicas)
self.run_cbq_query(query)
if self.defer_build:
build_query = meta_index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
select_query = f'Select * from {collection_namespace} where age >10 and country like "A%";'
select_meta_id_query = f'Select * from {collection} where meta().id like "doc_%";'
count_query = f'Select count(*) from {collection_namespace} where age >= 0;'
named_collection_query_context = f'default:{bucket}.{scope}'
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
num_rollbacks_before_kill = self.index_rest.get_num_rollback_stat(bucket=bucket)
# todo: this repeats the same stat call; fetch the rollback-to-zero stat here instead
num_rollbacks_to_zero_before_kill = self.index_rest.get_num_rollback_stat(bucket=bucket)
# todo: killing memcached only on indexer nodes
for server in self.servers:
remote = RemoteMachineShellConnection(server)
self.log.info(f"killing {self.process_to_kill} on {server.ip}")
remote.terminate_process(process_name=self.process_to_kill)
self.sleep(30, f"Sleep after killing {self.process_to_kill}")
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
num_rollbacks_after_kill = self.index_rest.get_num_rollback_stat(bucket=bucket)
num_rollbacks_to_zero_after_kill = self.index_rest.get_num_rollback_stat(bucket=bucket)
if self.process_to_kill == "indexer":
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
# todo: add assertion about num_rollback stats
# self.assertTrue(num_rollbacks_after_kill > num_rollbacks_before_kill)
else:
self.assertEqual(len(select_result), 0)
self.assertEqual(len(meta_result), 0)
self.assertEqual(count_result, 0)
def test_gsi_on_ephemeral_with_eviction_policy(self):
num_of_docs = self.num_of_docs_per_collection
self.prepare_collection_for_indexing(num_of_docs_per_collection=self.num_of_docs_per_collection)
collection_namespace = self.namespaces[0]
_, keyspace = collection_namespace.split(':')
bucket, scope, collection = keyspace.split('.')
index_gen = QueryDefinition(index_name='idx', index_fields=['age', 'country', 'city'])
meta_index_gen = QueryDefinition(index_name='meta_idx', index_fields=['meta().id'])
query = index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
query = meta_index_gen.generate_index_create_query(namespace=collection_namespace, defer_build=self.defer_build)
self.run_cbq_query(query)
if self.defer_build:
build_query = meta_index_gen.generate_build_query(namespace=collection_namespace)
self.run_cbq_query(build_query)
self.wait_until_indexes_online()
select_query = f'Select * from {collection_namespace} where age >10 and country like "A%";'
select_meta_id_query = f'Select meta().id from {collection} where meta().id like "doc_%";'
count_query = f'Select count(*) from {collection_namespace} where age >= 0;'
named_collection_query_context = f'default:{bucket}.{scope}'
select_result = self.run_cbq_query(query=select_query)['results']
meta_result = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
count_result = self.run_cbq_query(query=count_query)['results'][0]['$1']
self.assertTrue(len(select_result) > 0)
self.assertEqual(len(meta_result), self.num_of_docs_per_collection)
self.assertEqual(count_result, self.num_of_docs_per_collection)
new_inserts = 10 ** 4
is_memory_full = False
stats_all_buckets = {}
for bucket in self.buckets:
stats_all_buckets[bucket.name] = StatsCommon()
threshold = 0.93
last_memory_used_val = 0
while not is_memory_full:
gen_create = SDKDataLoader(num_ops=new_inserts, percent_create=100,
percent_update=0, percent_delete=0, scope=scope,
collection=collection, output=True, start_seq_num=num_of_docs+1)
task = self.cluster.async_load_gen_docs(self.master, bucket, gen_create)
task.result()
# Updating the doc counts
num_of_docs = num_of_docs + new_inserts
self.sleep(30)
memory_used = int(stats_all_buckets[bucket.name].get_stats([self.master], bucket, '',
'mem_used')[self.master])
self.log.info(f"Current memory usage: {memory_used}")
if self.eviction_policy == 'noEviction':
# memory is considered full once mem_used crosses the threshold (93%) of the bucket quota
if memory_used > threshold * self.bucket_size * 1000000:
# Just filling the leftover memory to be double sure
gen_create = SDKDataLoader(num_ops=new_inserts, percent_create=100,
percent_update=0, percent_delete=0, scope=scope,
collection=collection, output=True, start_seq_num=num_of_docs + 1)
task = self.cluster.async_load_gen_docs(self.master, bucket, gen_create)
task.result()
num_of_docs = num_of_docs + new_inserts
memory_used = int(stats_all_buckets[bucket.name].get_stats([self.master], bucket, '',
'mem_used')[self.master])
self.log.info(f"Current memory usage: {memory_used}")
is_memory_full = True
else:
if memory_used < last_memory_used_val:
break
last_memory_used_val = memory_used
meta_ids = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
ids_at_threshold = sorted([item['id'] for item in meta_ids])
# Pushing new docs to check the eviction policy
new_inserts = 10 ** 4
gen_create = SDKDataLoader(num_ops=new_inserts, percent_create=100, json_template="Employee",
percent_update=0, percent_delete=0, scope=scope,
collection=collection, output=True, start_seq_num=num_of_docs+1)
tasks = self.data_ops_javasdk_loader_in_batches(sdk_data_loader=gen_create, batch_size=10000)
for task in tasks:
out = task.result()
self.log.info(out)
meta_ids_with_eviction_enforced = self.run_cbq_query(query=select_meta_id_query,
query_context=named_collection_query_context)['results']
ids_after_threshold = sorted([item['id'] for item in meta_ids_with_eviction_enforced])
if self.eviction_policy == 'noEviction':
self.assertEqual(len(meta_ids_with_eviction_enforced), len(meta_ids))
self.assertEqual(ids_at_threshold, ids_after_threshold)
else:
self.assertTrue(len(meta_ids_with_eviction_enforced) != len(meta_ids))
self.assertTrue(ids_after_threshold != ids_at_threshold)
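The eviction-threshold check inside the loop above can be isolated as a small predicate; a sketch under the assumption that `bucket_size` is a quota in MB, hence the 1,000,000 scale factor applied before comparing against `mem_used` in bytes.

```python
# Sketch of the "memory full" predicate used in the noEviction branch above.
# Assumption: bucket_size_mb is the bucket quota in MB, mem_used is in bytes.
def memory_full(mem_used, bucket_size_mb, threshold=0.93):
    return mem_used > threshold * bucket_size_mb * 1_000_000
```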
from . import images
from . import optnets
from porzotokApp import models
from django.views import View
from django.http import HttpResponse, HttpResponseRedirect
from porzotokApp.password import PassWord
from porzotokApp import serializers
from HotelAdmin import serializers as HotelSerializers
from .custom import custom
from datetime import datetime
from rest_framework import viewsets, generics, status, filters
from rest_framework.generics import get_object_or_404
from rest_framework.response import Response
from rest_framework.pagination import PageNumberPagination
from HotelAdmin.pagination import PaginationHandlerMixin
from rest_framework.views import APIView
from django.db.models import Q
# Default pagination class for all views; clients set the page size via the 'size' query parameter
class BasicPagination(PageNumberPagination):
page_size_query_param = 'size'
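What `page_size_query_param = 'size'` enables — a client-chosen page size — can be illustrated with a plain-Python stand-in (hypothetical, not DRF's actual implementation):

```python
# Plain-Python stand-in for page-number pagination with a client-chosen size.
def paginate(items, page, size):
    start = (page - 1) * size
    return items[start:start + size]
```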
class HotelLoginView(View):
def get(self, request):
context = {}
if 'uid' not in request.session or 'uphone' not in request.session\
or 'utype' not in request.session or 'time' not in request.session:
return render(request, 'backEnd/hoteladmin/admin_login.html', context)
else:
return HttpResponseRedirect("/hotel-admin")
def post(self, request):
context = {}
if 'uid' not in request.session or 'uphone' not in request.session\
or 'utype' not in request.session or 'time' not in request.session:
if 'phone' in request.POST and 'password' in request.POST:
phone = request.POST['phone']
password = PassWord(request.POST['password'])
logged = models.HotelUserOwner.objects.filter(hotel_user_phone = phone, hotel_user_password = password)
if len(logged) != 0:
request.session.create()
request.session['uid'] = logged[0].hotel_user_owner_id
request.session['uphone'] = logged[0].hotel_user_phone
request.session['utype'] = logged[0].hotel_user_type_id.hotel_user_type_name
request.session['time'] = str(datetime.now())
custom(request.session.session_key, request.session['uphone'])
return HttpResponseRedirect("/hotel-admin")
else:
context['error'] = True
context['msg'] = "Username and password does not match"
else:
context['error'] = True
context['msg'] = "Data Field missing"
return render(request, 'backEnd/hoteladmin/admin_login.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
class HotelDashboard(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/dashboard.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
class Logout(View):
def get(self, request):
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
del request.session['uid']
del request.session['uphone']
del request.session['utype']
del request.session['time']
return HttpResponseRedirect("/hotel-admin/login/")
else:
return HttpResponseRedirect("/hotel-admin")
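Every view below repeats the same four-key session check; it could be consolidated into one helper. A hypothetical refactor, shown with a plain dict standing in for `request.session`:

```python
# Hypothetical helper consolidating the session check repeated in each view.
SESSION_KEYS = ('uid', 'uphone', 'utype', 'time')

def is_logged_in(session):
    # True only when every required key is present in the session mapping.
    return all(key in session for key in SESSION_KEYS)
```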
# Hotel Admin Add Room
class AdminRoomView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/admin_room.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/admin_room.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Room List
class AdminRoomListView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/room_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/room_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Add Hotel Facilities
class AdminFacilitesView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/hotel_facilites.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/hotel_facilites.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Hotel Facilities List
class AdminFacilitesListView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/hotel_facilites_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/hotel_facilites_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Add Room Facilities
class AdminRoomFacilitesView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/room_facilites.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/room_facilites.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Room Facilities List
class AdminRoomFacilitesListView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
try:
context['user_image'] = user_image[len(user_image) - 1]
except IndexError:
context['user_image'] = 0
return render(request, 'backEnd/hoteladmin/room_facilites_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/room_facilites_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Hotel Discount
class AdminHotelDiscountView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/hotel_discount.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/hotel_discount.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin Hotel Discount List
class AdminHotelDiscountListView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/hotel_discount_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/hotel_discount_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin 24 Hours Room Deals
class TwentyFourHoursDealsHotelAdminView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/hotel_room_24_hours_deals.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/hotel_room_24_hours_deals.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
# Hotel Admin LOGO
class HotelAdminLogoView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/logo.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/logo.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
class HotelAdminBannerView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/banner.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/banner.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
#Start Hotel User Profile
class HotelUserProfileView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/user_profile.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/user_profile.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
#End Hotel User Profile
##Start Book List
class BookingsListView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/room_booking_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/room_booking_list.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
#End Book List
#Start Room Booking List
class RoomsBookingsListView(View):
def get(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/room_booking.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
def post(self, request):
context = {}
if 'uid' in request.session and 'uphone' in request.session\
and 'utype' in request.session and 'time' in request.session:
context['user_details'] = models.HotelUserOwner.objects.get(hotel_user_owner_id=request.session['uid'])
user_image = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=request.session['uid'])
            # use the most recent photo; default to 0 when the user has none
            context['user_image'] = user_image.last() or 0
return render(request, 'backEnd/hoteladmin/room_booking.html', context)
else:
return HttpResponseRedirect("/hotel-admin/login/")
#End Room Booking List
#Hotel Admin Serializer View
# Country View
class CountryAdminAPIView(viewsets.ModelViewSet):
queryset = models.Country.objects.all()
serializer_class = HotelSerializers.CountryAdminSerializer
# State View
class StateAdminAPIView(viewsets.ModelViewSet):
queryset = models.State.objects.all()
serializer_class = HotelSerializers.StateAdminSerializer
# City View
class CityAdminAPIView(viewsets.ModelViewSet):
queryset = models.City.objects.all()
serializer_class = HotelSerializers.CityAdminSerializer
#ImageGalaryDetailsAdmin
class ImageGalaryDetailsAdminAPIView(viewsets.ModelViewSet):
queryset = models.ImageGalaryDetails.objects.all()
serializer_class = HotelSerializers.ImageGalaryDetailsAdminSerializer
#Image
class ImageAdminAPIView(viewsets.ModelViewSet):
queryset = models.Image.objects.all()
def get_serializer_class(self):
return HotelSerializers.ImageAdminSerializer
    def get(self, request):
        return self.list(request)
def list(self, request):
if request.GET.get('galley_id'):
galley_id = request.GET.get('galley_id')
            imagelist = models.Image.objects.filter(image_galary_details_id__image_galary_details_id=int(galley_id))
else:
imagelist = models.Image.objects.all()
serializer = HotelSerializers.ImageAdminSerializer(imagelist, many=True, context={"request": request})
response_dict = {"error": False, "message": "All Image List", "data": serializer.data}
return Response(response_dict)
    def create(self, request):
        try:
            images = request.FILES.getlist('Image')
            for image in images:
                image_detail = {
                    "Image": image,
                    "image_galary_details_id": request.data["image_galary_details_id"],
                }
                serializer = HotelSerializers.ImageAdminSerializer(data=[image_detail], many=True, context={"request": request})
                serializer.is_valid(raise_exception=True)
                serializer.save()
            response_dict = {"error": False, "message": f"Images Uploaded to Gallery ID {request.data['image_galary_details_id']}"}
        except Exception as e:
            print(str(e))
            response_dict = {"error": True, "message": "Image Not Uploaded"}
        return Response(response_dict)
class HotelDetailAdminViewSet(viewsets.ViewSet):
queryset = models.HotelDetails.objects.all()
def get_serializer_class(self):
return HotelSerializers.HotelDetailsAdminSerializer
    def get(self, request):
        return self.list(request)
def list(self, request):
if request.GET.get('user_id'):
param = request.GET.get('user_id')
hotel_details = models.HotelDetails.objects.filter(hotel_user_id__hotel_user_owner_id=int(param))
serializer = HotelSerializers.HotelDetailsAdminSerializer(hotel_details, many=True, context={"request": request})
response_dict = {"error": False, "message": "User Hotel Details List", "data": serializer.data}
return Response(response_dict)
else:
return Response({"error": False, "message": "User Id Not Found"})
def create(self, request):
try:
# save into image gallery details serializer
image_gallery_details_serializer = HotelSerializers.ImageGalaryDetailsAdminSerializer(data=request.data, context={"request":request})
image_gallery_details_serializer.is_valid(raise_exception=True)
image_gallery_details_serializer.save()
image_galary_details_id = image_gallery_details_serializer.data['image_galary_details_id']
# save hotel details info with image gallery details id into hotel details table
hotel_details_list = []
hotel_detail = {}
hotel_detail["hotel_name"] = request.data["hotel_name"]
hotel_detail["slug_name"] = request.data["slug_name"]
hotel_detail["hotel_reg_no"] = request.data["hotel_reg_no"]
hotel_detail["hotel_email"] = request.data["hotel_email"]
hotel_detail["hotel_phone"] = request.data["hotel_phone"]
hotel_detail["Map"] = request.data["Map"]
hotel_detail["short_address"] = request.data["short_address"]
hotel_detail["city_id"] = request.data["city_id"]
hotel_detail["hotel_info"] = request.data["hotel_info"]
hotel_detail["image_galary_details_id"] = image_galary_details_id
hotel_detail["hotel_type"] = request.data["hotel_type"]
hotel_detail["hotel_discount_id"] = request.data["hotel_discount_id"]
hotel_detail["is_active"] = request.data["is_active"]
hotel_details_list.append(hotel_detail)
serializer = HotelSerializers.HotelDetailsAdminSerializer(data=hotel_details_list, many=True, context={"request": request})
serializer.is_valid(raise_exception=True)
serializer.save()
try:
images = request.FILES.getlist('image')
for image in list(images):
image_list = []
image_detail = {}
image_detail["Image"] = image
image_detail["image_galary_details_id"] = image_galary_details_id
image_list.append(image_detail)
serializer = HotelSerializers.ImageAdminSerializer(data=image_list, many=True, context={"request": request})
serializer.is_valid(raise_exception=True)
serializer.save()
except Exception as e:
print(str(e))
            dict_response = {"error": False, "message": "Hotel Information Saved Successfully!"}
except Exception as e:
print(str(e))
            dict_response = {"error": True, "message": "Hotel Information Not Saved."}
return Response(dict_response)
def retrieve(self, request, pk=None):
queryset = models.HotelDetails.objects.all()
single_hotel_data = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.HotelDetailsAdminSerializer(single_hotel_data, context={"request": request})
return Response({"error": False, "message": "Single Data Fetch", "data": serializer.data})
def update(self, request, pk=None):
queryset = models.HotelDetails.objects.all()
single_hotel_data_update = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.HotelDetailsAdminSerializer(single_hotel_data_update, data=request.data, context={"request": request})
        serializer.is_valid(raise_exception=True)
serializer.save()
return Response({"error": False, "message": "Data Has been Updated"})
def destroy(self, request, pk=None):
try:
queryset = models.HotelDetails.objects.all()
hotel_details = get_object_or_404(queryset, pk=pk)
hotel_details.delete()
return Response({"error": False, "message": "Hotel Details deleted successfully!"})
        except Exception:
            return Response({"error": True, "message": "Hotel Details Not Deleted!"})
class RoomListHotelAdminAPIView(APIView, PaginationHandlerMixin):
pagination_class = BasicPagination
serializer_class = HotelSerializers.RoomHotelAdminListSerializer
def get(self, request, format=None, *args, **kwargs):
user_id = self.request.query_params.get('user_id')
instance = models.Room.objects.filter(is_active=True, hotel_id__hotel_user_id=int(user_id)).order_by('-room_id')
page = self.paginate_queryset(instance)
if page is not None:
serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
else:
serializer = self.serializer_class(instance, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
class HotelAdminSearchViewSet(viewsets.ViewSet):
def list(self, request):
if 'user_id' in request.GET and 'key' in request.GET:
user_id = request.GET.get('user_id')
key = request.GET.get('key')
room = models.Room.objects.filter(is_active=True, hotel_id__hotel_user_id=int(user_id))
queryset = room.filter(Q(hotel_id__hotel_name__contains=str(key)) | Q(price_id__price__contains=str(key)) | Q(room_name__contains=str(key)) | Q(room_no__contains=str(key))| Q(floor_id__floor_no__contains=str(key))).order_by('-room_id')
total_count = len(queryset)
results = queryset[:10]
serializer = HotelSerializers.RoomHotelAdminListSerializer(results, many=True, context={"request": request})
            return Response({"error": False, "data": serializer.data, "count": total_count})
        # explicit response when the required query params are missing
        return Response({"error": True, "message": "user_id and key are required"})
class RoomHotelAdminViewSet(viewsets.ViewSet):
    def list(self, request):
        if request.GET.get('hotel_id'):
            hotel_id = request.GET.get('hotel_id')
            rooms = models.Room.objects.filter(hotel_id__hotel_id=int(hotel_id))
            serializer = HotelSerializers.RoomHotelAdminSerializer(rooms, many=True, context={"request": request})
            return Response({"error": False, "message": "Room Data List", "data": serializer.data})
        elif request.GET.get('user_id'):
            user_id = request.GET.get('user_id')
            rooms = models.Room.objects.filter(hotel_id__hotel_user_id=int(user_id))
            serializer = HotelSerializers.RoomHotelAdminSerializer(rooms, many=True, context={"request": request})
            return Response({"error": False, "message": "Room Data List", "data": serializer.data})
        else:
            return Response({"error": False, "message": "Room Not found"})
def create(self, request):
try:
# save into image gallery details serializer
image_gallery_details_serializer = HotelSerializers.ImageGalaryDetailsAdminSerializer(data=request.data, context={"request":request})
image_gallery_details_serializer.is_valid(raise_exception=True)
image_gallery_details_serializer.save()
image_galary_details_id = image_gallery_details_serializer.data['image_galary_details_id']
# save price details into price table
price_detail = {}
price_detail["price"] = request.data['price']
            try:
                price_detail["offer_price"] = request.data['offer_price']
            except KeyError:
                # default the offer price to the base price when none is supplied
                price_detail["offer_price"] = request.data['price']
price_detail["price_entry_date"] = datetime.now().date()
price_detail["price_type"] = request.data['price_type']
price_serializer = HotelSerializers.PriceAdminSerializer(data=price_detail, context={"request": request})
price_serializer.is_valid(raise_exception=True)
price_serializer.save()
price_details_id = price_serializer.data['price_id']
# save room info with image gallery details id into room info table
room_detail = {}
room_detail["room_name"] = request.data["room_name"]
room_detail["room_no"] = request.data["room_no"]
room_detail["floor_id"] = request.data["floor_id"]
room_detail["hotel_id"] = models.HotelDetails.objects.get(hotel_user_id__hotel_user_owner_id=int(request.data["hotel_owner_id"])).hotel_id
room_detail["room_status"] = request.data["room_status"]
room_detail["price_id"] = price_details_id
room_detail["room_description"] = request.data["room_description"]
room_detail["image_galary_details_id"] = image_galary_details_id
room_detail["is_active"] = True
room_detail["is_deals"] = False
# try:
# room_detail["hotel_discount_id"] = request.data["hotel_discount_id"]
# except:
# room_detail["hotel_discount_id"] = None
room_serializer = HotelSerializers.RoomHotelAdminSerializer(data=room_detail, context={"request": request})
room_serializer.is_valid(raise_exception=True)
room_serializer.save()
# save image into image table
try:
images = request.FILES.getlist('image')
for image in list(images):
image_list = []
image_detail = {}
image_detail["Image"] = image
image_detail["image_galary_details_id"] = image_galary_details_id
image_list.append(image_detail)
serializer = HotelSerializers.ImageAdminSerializer(data=image_list, many=True, context={"request": request})
serializer.is_valid(raise_exception=True)
serializer.save()
except Exception as e:
print(str(e))
            dict_response = {"error": False, "message": "Room Information Saved Successfully"}
except Exception as e:
print(str(e))
            dict_response = {"error": True, "message": f"Room Information Not Saved: {str(e)}"}
return Response(dict_response)
def retrieve(self, request, pk=None):
queryset = models.Room.objects.all()
room = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.RoomHotelAdminSerializer(room, context={"request": request})
serializer_data = serializer.data
return Response({"error": False, "message": "Single Data Fetch", "data": serializer_data})
def update(self, request, pk=None):
queryset = models.Room.objects.all()
room = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.RoomHotelAdminSerializerUpdate(room, data=request.data, context={"request": request}, partial=True)
        serializer.is_valid(raise_exception=True)
serializer.save()
return Response({"error": False, "message": "Data Has been Updated"})
def destroy(self, request, pk=None):
try:
queryset = models.Room.objects.all()
room = get_object_or_404(queryset, pk=pk)
room.delete()
return Response({"error": False, "message": "Room deleted successfully!"})
        except Exception:
            return Response({"error": True, "message": "Room Not Deleted!"})
#Create Price ViewSet
class CreatePriceViewSet(viewsets.ViewSet):
def update(self, request, pk=None):
queryset = models.Price.objects.all()
price = get_object_or_404(queryset, pk=pk)
price_serializer = HotelSerializers.PriceAdminSerializer(price, data=request.data, context={"request": request})
price_serializer.is_valid(raise_exception=True)
price_serializer.save()
return Response({"error": False, "message": "Price Updated"})
#Hotel Room Admin Facilites
class HotelRoomAdminFacilitesViewSet(viewsets.ViewSet):
def list(self, request):
facilities = models.Facilites.objects.all()
serializer = HotelSerializers.FacilitesAdminSerializer(facilities, many=True, context={"request": request})
return Response({"error": False, "data": serializer.data})
    def create(self, request):
        try:
            if 'room_id' in request.data and 'facilites_id' in request.data:
                # create one FacilitesGroup entry for every (facility, room) pair
                for facility in request.data['facilites_id'].split(','):
                    for room in request.data['room_id'].split(','):
                        models.FacilitesGroup.objects.create(
                            facilites_id=models.Facilites.objects.get(facilites_id=facility),
                            room_id=models.Room.objects.get(room_id=room),
                        )
                dict_response = {"error": False, "message": "Room Facilites Information Saved Successfully!"}
            else:
                dict_response = {"error": True, "message": "room_id and facilites_id are required"}
        except Exception as e:
            print(str(e))
            dict_response = {"error": True, "message": "Room Facilites Information Not Saved."}
        return Response(dict_response)
def retrieve(self, request, pk=None):
queryset = models.FacilitesGroup.objects.all()
facilites_room = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.HotelRoomAdminFacilitesSerializer(facilites_room, context={"request": request})
return Response({"error": False, "message": "Single Data Fetch", "Data": serializer.data})
def update(self, request, pk=None):
queryset = models.FacilitesGroup.objects.all()
facilites_room = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.HotelRoomAdminFacilitesSerializer(facilites_room, data=request.data, context={"request": request})
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response({"error": False, "message": "Data Has been Updated"})
def destroy(self, request, pk=None):
try:
queryset = models.FacilitesGroup.objects.all()
facilites_room = get_object_or_404(queryset, pk=pk)
facilites_room.delete()
return Response({"error": False, "message": "Room Facilites deleted successfully!"})
        except Exception:
            return Response({"error": True, "message": "Room Facilites Not Deleted!"})
class HotelRoomAdminFacilitesListAPIView(APIView, PaginationHandlerMixin):
pagination_class = BasicPagination
serializer_class = HotelSerializers.HotelRoomAdminFacilitesSerializer
def get(self, request, format=None, *args, **kwargs):
user_id = self.request.query_params.get('user_id')
instance = models.FacilitesGroup.objects.filter(room_id__hotel_id__hotel_user_id=int(user_id))
page = self.paginate_queryset(instance)
if page is not None:
serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
else:
serializer = self.serializer_class(instance, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
class HotelRoomFacilitiesSearchViewSet(viewsets.ViewSet):
def list(self, request):
if 'user_id' in request.GET and 'key' in request.GET:
user_id = request.GET.get('user_id')
key = request.GET.get('key')
facilities = models.FacilitesGroup.objects.filter(room_id__hotel_id__hotel_user_id=int(user_id))
queryset = facilities.filter(Q(room_id__hotel_id__hotel_name__contains=str(key)) | Q(room_id__room_name__contains=str(key)) | Q(room_id__room_no__contains=str(key))| Q(facilites_id__facilites_name__contains=str(key)))
total_count = len(queryset)
results = queryset[:10]
serializer = HotelSerializers.HotelRoomAdminFacilitesSerializer(results, many=True, context={"request": request})
            return Response({"error": False, "data": serializer.data, "count": total_count})
        # explicit response when the required query params are missing
        return Response({"error": True, "message": "user_id and key are required"})
class FacilitesListHotelAdminAPIView(APIView, PaginationHandlerMixin):
pagination_class = BasicPagination
serializer_class = HotelSerializers.HotelAdminFacilitesSerializer
def get(self, request, format=None, *args, **kwargs):
user_id = self.request.query_params.get('user_id')
instance = models.HotelFacilites.objects.filter(hotel_id__hotel_user_id=int(user_id)).order_by('-hotel_facilites_id')
page = self.paginate_queryset(instance)
if page is not None:
serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
else:
serializer = self.serializer_class(instance, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
class HotelFacilitiesSearchViewSet(viewsets.ViewSet):
def list(self, request):
if 'user_id' in request.GET and 'key' in request.GET:
user_id = request.GET.get('user_id')
key = request.GET.get('key')
hotel_facilities = models.HotelFacilites.objects.filter(hotel_id__hotel_user_id=int(user_id))
queryset = hotel_facilities.filter(Q(hotel_id__hotel_name__contains=str(key)) | Q(facilites_id__facilites_name__contains=str(key)) | Q(price_id__price__contains=str(key))).order_by('-hotel_facilites_id')
total_count = len(queryset)
results = queryset[:10]
serializer = HotelSerializers.HotelAdminFacilitesSerializer(results, many=True, context={"request": request})
            return Response({"error": False, "data": serializer.data, "count": total_count})
        # explicit response when the required query params are missing
        return Response({"error": True, "message": "user_id and key are required"})
#Hotel Admin Facilites
class HotelAdminFacilitesViewSet(viewsets.ViewSet):
def list(self, request):
if request.GET.get('user_id'):
user_id = request.GET.get('user_id')
hotel_facilites = models.HotelFacilites.objects.filter(hotel_id__hotel_user_id=int(user_id))
serializer = HotelSerializers.HotelAdminFacilitesSerializer(hotel_facilites, many=True, context={"request": request})
return Response({"error": False, "message": "All Hotel Facilites Data List", "data": serializer.data})
else:
return Response({"error": False, "message": "Hotel Facilites Not found"})
def create(self, request):
try:
# save price details into price table
price_detail = {}
price_detail["price"] = request.data['price']
try:
price_detail["offer_price"] = request.data['offer_price']
except:
price_detail["offer_price"] = request.data['price']
price_detail["price_entry_date"] = datetime.now().date()
price_detail["price_type"] = request.data['price_type']
price_serializer = HotelSerializers.PriceAdminSerializer(data=price_detail, context={"request": request})
price_serializer.is_valid(raise_exception=True)
price_serializer.save()
price_details_id = price_serializer.data['price_id']
# save
hotel_facilites_details = {}
hotel_facilites_details["facilites_id"] = request.data["facilites_id"]
hotel_facilites_details["hotel_id"] = models.HotelDetails.objects.get(hotel_user_id__hotel_user_owner_id=int(request.data["hotel_owner_id"])).hotel_id
hotel_facilites_details["price_id"] = price_details_id
hotel_facilites_serializer = HotelSerializers.HotelAdminFacilitesSerializer(data=hotel_facilites_details, context={"request": request})
hotel_facilites_serializer.is_valid(raise_exception=True)
hotel_facilites_serializer.save()
dict_response = {"error": False, "message": "Hotel Facilites Information Save Successfully!"}
except Exception as e:
dict_response = {"error": True, "message": "Hotel Facilites Information Not Save."}
return Response(dict_response)
def retrieve(self, request, pk=None):
queryset = models.HotelFacilites.objects.all()
hotel_facilites = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.HotelAdminFacilitesSerializer(hotel_facilites, context={"request": request})
return Response({"error": False, "message": "Single Data Fetch","data":serializer.data})
def update(self, request, pk=None):
queryset = models.HotelFacilites.objects.all()
hotel_facilites = get_object_or_404(queryset, pk=pk)
serializer = HotelSerializers.HotelAdminFacilitesSerializer(hotel_facilites, data=request.data, context={"request": request}, partial=True)
serializer.is_valid()
serializer.save()
return Response({"error": False, "message": "Data Hasbeen Updated"})
def destroy(self, request, pk=None):
try:
queryset = models.HotelFacilites.objects.all()
hotel_facilites = get_object_or_404(queryset, pk=pk)
hotel_facilites.delete()
return Response({"error": False, "message": "Hotel Facilites deleted successfully!"})
except:
return Response({"error": True, "message": "Hotel Facilites Not Deleted!"})
# Get State By Country
class GetStateByAdminHotelCountry(generics.ListAPIView):
    serializer_class = HotelSerializers.StateAdminSerializer

    def get_queryset(self):
        c_id = self.kwargs["c_id"]
        return models.State.objects.filter(country_id__country_id=c_id)


# Get City By State
class GetCityByAdminHotelState(generics.ListAPIView):
    serializer_class = HotelSerializers.CityAdminSerializer

    def get_queryset(self):
        s_id = self.kwargs["s_id"]
        return models.City.objects.filter(state_id__state_id=s_id)


# TwentyFourHoursDealsListHotelAdmin
class TwentyFourHoursDealsListHotelAdminViewSet(viewsets.ViewSet):
    def list(self, request):
        if request.GET.get('user_id'):
            user_id = request.GET.get('user_id')
            deals_list = models.TwentyFourHoursDeals.objects.filter(room_id__hotel_id__hotel_user_id=int(user_id))
            serializer = HotelSerializers.TwentyFourHoursDealsListHotelAdminSerializer(deals_list, many=True, context={"request": request})
            return Response({"error": False, "message": "Twenty Four Hours Deals Data List", "data": serializer.data})
        else:
            return Response({"error": False, "message": "Twenty Four Hours Deals Data List Not found"})
# Twenty Four Hours Deals Hotel Admin View
class TwentyFourHoursDealsHotelAdminSerializerViewSet(viewsets.ViewSet):
    def create(self, request):
        try:
            if 'room_id' in request.data:
                for room in request.data['room_id'].split(','):
                    ## Room Update for Deals
                    get_price = models.Room.objects.get(room_id=int(room))
                    get_price.is_deals = True
                    get_price.deal_start_date = datetime.now()
                    get_price.allow_offer_percent = int(request.data['allow_offer'])
                    ## Price Update for Deals
                    get_price.offer_discount_price = float(get_price.price_id.price) - float(get_price.price_id.price * int(request.data['allow_offer'])/100)
                    get_price.save()
            dict_response = {"error": False, "message": "Hotel Twenty Four Hours Deals Information Saved Successfully!"}
        except Exception as e:
            dict_response = {"error": True, "message": "Hotel Twenty Four Hours Deals Information Not Saved."}
        return Response(dict_response)

    def update(self, request, pk=None):
        dict_response = {"error": True, "message": "Deactivation Not Performed"}
        if 'status' in request.data:
            # Room Update for Deals
            if request.data['status'] == 1:
                get_price = models.Room.objects.get(room_id=pk)
                get_price.is_deals = False
                get_price.deal_start_date = None
                get_price.allow_offer_percent = ''
                get_price.offer_discount_price = 0.00
                get_price.save()
                dict_response = {"error": False, "message": "Deactivated"}
        return Response(dict_response)
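The deal activation above derives `offer_discount_price` by subtracting a percentage of the base price. A standalone sketch of the same arithmetic follows; the function name and sample values are illustrative, not from this codebase:

```python
def offer_discount_price(price, allow_offer_percent):
    # Same formula as the deal update above:
    # base price minus allow_offer_percent percent of the base price.
    return float(price) - float(price * allow_offer_percent) / 100

print(offer_discount_price(200.0, 25))  # -> 150.0
print(offer_discount_price(100.0, 0))   # -> 100.0
```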
# TwentyFourHoursDealsListRoomAPIView
class TwentyFourHoursDealsListRoomAPIView(APIView, PaginationHandlerMixin):
    pagination_class = BasicPagination
    serializer_class = HotelSerializers.TwentyFourHoursDealsListRoomSerializer

    def get(self, request, format=None, *args, **kwargs):
        user_id = self.request.query_params.get('user_id')
        instance = models.Room.objects.filter(is_deals=True, is_active=True, hotel_id__hotel_user_id=int(user_id)).order_by('-room_id')
        page = self.paginate_queryset(instance)
        if page is not None:
            serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
        else:
            serializer = self.serializer_class(instance, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


class Hotel24HourDealsSearchViewSet(viewsets.ViewSet):
    def list(self, request):
        if 'user_id' in request.GET and 'key' in request.GET:
            user_id = request.GET.get('user_id')
            key = request.GET.get('key')
            room = models.Room.objects.filter(is_deals=True, is_active=True, hotel_id__hotel_user_id=int(user_id))
            queryset = room.filter(Q(hotel_id__hotel_name__contains=str(key)) | Q(price_id__price__contains=str(key)) | Q(room_name__contains=str(key)) | Q(room_no__contains=str(key)) | Q(offer_discount_price__contains=str(key))).order_by('-room_id')
            total_count = len(queryset)
            results = queryset[:10]
            serializer = HotelSerializers.TwentyFourHoursDealsListRoomSerializer(results, many=True, context={"request": request})
            return Response({"error": False, "data": serializer.data, "count": total_count})
class HotelDiscountAdminViewSet(viewsets.ViewSet):
    def list(self, request):
        if request.GET.get('user_id'):
            user_id = request.GET.get('user_id')
            hotel_discount = models.HotelDiscount.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id))
            serializer = HotelSerializers.HotelDiscountAdminSerializer(hotel_discount, many=True, context={"request": request})
            return Response({"error": False, "message": "Hotel Discount List", "data": serializer.data})
        else:
            return Response({"error": False, "message": "Hotel Discount Not found"})

    def create(self, request):
        try:
            # save Offer Max Amount details into Hotel Discount table
            # offer_max_amount_detail = {}
            # offer_max_amount_detail["max_amount"] = request.data['max_amount']
            # offer_max_amount_detail["min_amount"] = request.data['min_amount']
            # offer_max_amount_serializer = HotelSerializers.OfferMaxAmountAdminSerializer(data=offer_max_amount_detail, context={"request": request})
            # offer_max_amount_serializer.is_valid(raise_exception=True)
            # offer_max_amount_serializer.save()
            # offer_max_amount_details_id = offer_max_amount_serializer.data['offer_max_amount_id']
            # save hotel Discount info.
            hotel_discount_detail = {}
            hotel_discount_detail["hotel_discount_name"] = request.data["hotel_discount_name"]
            hotel_discount_detail["start_time_date"] = request.data["start_time_date"]
            hotel_discount_detail["end_time_date"] = request.data["end_time_date"]
            # hotel_discount_detail["offer_max_amount_id"] = int(offer_max_amount_details_id)
            hotel_discount_detail["hotel_user_owner_id"] = int(request.data["hotel_user_owner_id"])
            hotel_discount_serializer = HotelSerializers.HotelDiscountAdminSerializer(data=hotel_discount_detail, context={"request": request})
            hotel_discount_serializer.is_valid(raise_exception=True)
            hotel_discount_serializer.save()
            hotel_discount_id = hotel_discount_serializer.data['hotel_discount_id']
            # save discount in the hotel rooms
            if 'room_id' in request.data:
                for room in request.data['room_id'].split(','):
                    # Update room discount field
                    _room = models.Room.objects.get(room_id=int(room))
                    _room.allow_offer_percent = int(request.data['allow_offer'])
                    _room.offer_discount_price = float(_room.price_id.price) - float(_room.price_id.price * int(request.data['allow_offer'])/100)
                    _room.hotel_discount_id = models.HotelDiscount.objects.get(hotel_discount_id=int(hotel_discount_id))
                    _room.save()
            dict_response = {"error": False, "message": "Discount Information Added Successfully"}
        except Exception as e:
            dict_response = {"error": True, "message": "Discount Information Not Added."}
        return Response(dict_response)

    def retrieve(self, request, pk=None):
        queryset = models.HotelDiscount.objects.all()
        hotel_discount = get_object_or_404(queryset, pk=pk)
        serializer = HotelSerializers.HotelDiscountAdminSerializer(hotel_discount, context={"request": request})
        serializer_data = serializer.data
        return Response({"error": False, "message": "Single Data Fetch", "data": serializer_data})

    def update(self, request, pk=None):
        queryset = models.HotelDiscount.objects.all()
        hotel_discount = get_object_or_404(queryset, pk=pk)
        serializer = HotelSerializers.HotelDiscountAdminSerializer(hotel_discount, data=request.data, context={"request": request}, partial=True)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response({"error": False, "message": "Data Has been Updated"})

    def destroy(self, request, pk=None):
        try:
            queryset = models.HotelDiscount.objects.all()
            hotel_discount = get_object_or_404(queryset, pk=pk)
            hotel_discount.delete()
            return Response({"error": False, "message": "Hotel Discount deleted successfully!"})
        except Exception:
            return Response({"error": True, "message": "Hotel Discount Not Deleted!"})
class GetHotelRoomsViewSet(viewsets.ViewSet):
    def list(self, request):
        if 'user_id' in request.GET:
            user_id = request.GET.get('user_id')
            rooms = models.Room.objects.filter(hotel_id__hotel_user_id__hotel_user_owner_id=int(user_id), is_active=True, is_deals=False)
            serializer = HotelSerializers.HotelRoomsGetSerializer(rooms, many=True)
            return Response({"error": False, "data": serializer.data})
        else:
            return Response({"error": False, "data": []})


class GetFacilitiesRoomsViewSet(viewsets.ViewSet):
    def list(self, request):
        if 'user_id' in request.GET:
            user_id = request.GET.get('user_id')
            rooms = models.Room.objects.filter(hotel_id__hotel_user_id__hotel_user_owner_id=int(user_id), is_active=True)
            serializer = HotelSerializers.FacilitiesRoomsSerializer(rooms, many=True)
            return Response({"error": False, "data": serializer.data})
        else:
            return Response({"error": False, "data": []})


class CreateOfferMaxAmountViewSet(viewsets.ViewSet):
    def update(self, request, pk=None):
        queryset = models.OfferMaxAmount.objects.all()
        offer_max = get_object_or_404(queryset, pk=pk)
        offer_max_serializer = HotelSerializers.OfferMaxAmountAdminSerializer(offer_max, data=request.data, context={"request": request}, partial=True)
        offer_max_serializer.is_valid(raise_exception=True)
        offer_max_serializer.save()
        return Response({"error": False, "message": "Offer Max Amount Updated"})
# HotelDiscountListAdminAPIView
class HotelDiscountListAdminAPIView(APIView, PaginationHandlerMixin):
    pagination_class = BasicPagination
    serializer_class = HotelSerializers.HotelDiscountListAdminSerializer

    def get(self, request, format=None, *args, **kwargs):
        user_id = self.request.query_params.get('user_id')
        instance = models.HotelDiscount.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id)).order_by('-hotel_discount_id')
        page = self.paginate_queryset(instance)
        if page is not None:
            serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
        else:
            serializer = self.serializer_class(instance, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


class HotelDiscountSearchViewSet(viewsets.ViewSet):
    def list(self, request):
        if 'user_id' in request.GET and 'key' in request.GET:
            user_id = request.GET.get('user_id')
            key = request.GET.get('key')
            hotel_discount = models.HotelDiscount.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id))
            queryset = hotel_discount.filter(Q(hotel_discount_name__contains=str(key)) | Q(start_time_date__contains=str(key)) | Q(end_time_date__contains=str(key)) | Q(offer_max_amount_id__max_amount__contains=str(key)) | Q(offer_max_amount_id__min_amount__contains=str(key))).order_by('-hotel_discount_id')
            total_count = len(queryset)
            results = queryset[:10]
            serializer = HotelSerializers.HotelDiscountListAdminSerializer(results, many=True, context={"request": request})
            return Response({"error": False, "data": serializer.data, "count": total_count})
# HotelUserPhotoAdminSerializer
class HotelUserPhotoAdminViewSet(viewsets.ViewSet):
    def list(self, request):
        if request.GET.get('user_id'):
            user_id = request.GET.get('user_id')
            photo = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id))
            serializer = HotelSerializers.HotelUserPhotoAdminSerializer(photo, many=True, context={"request": request})
            return Response({"error": False, "message": "Images Data List", "data": serializer.data})
        else:
            return Response({"error": False, "message": "Images Not found"})

    def create(self, request):
        try:
            photo_detail = {}
            try:
                photo_detail["logo_photo"] = request.data["logo_photo"]
            except KeyError:
                photo_detail["logo_photo"] = None
            try:
                photo_detail["banner_photo"] = request.data["banner_photo"]
            except KeyError:
                photo_detail["banner_photo"] = None
            photo_detail["hotel_user_owner_id"] = request.data["hotel_user_owner_id"]
            # photo_details_list.append(photo_detail)
            photo_serializer = HotelSerializers.HotelUserPhotoAdminSerializer(data=photo_detail, context={"request": request}, partial=True)
            photo_serializer.is_valid(raise_exception=True)
            photo_serializer.save()
            dict_response = {"error": False, "message": "Photo Information Saved Successfully!"}
        except Exception as e:
            dict_response = {"error": True, "message": "Photo Information Not Saved."}
        return Response(dict_response)

    def retrieve(self, request, pk=None):
        queryset = models.HotelUserPhoto.objects.all()
        photo = get_object_or_404(queryset, pk=pk)
        serializer = HotelSerializers.HotelUserPhotoAdminSerializer(photo, context={"request": request})
        return Response({"error": False, "message": "Single Photo Data", "data": serializer.data})

    def update(self, request, pk=None):
        try:
            queryset = models.HotelUserPhoto.objects.all()
            photo = get_object_or_404(queryset, pk=pk)
            serializer = HotelSerializers.HotelUserPhotoAdminSerializer(photo, data=request.data, context={"request": request})
            if serializer.is_valid():
                serializer.save()
                dict_response = {"error": False, "message": "Photo Updated Successfully!"}
            else:
                dict_response = {"error": True, "message": "Photo Not Updated"}
        except Exception:
            dict_response = {"error": True, "message": "Photo Not Updated"}
        return Response(dict_response)

    def destroy(self, request, pk=None):
        try:
            queryset = models.HotelUserPhoto.objects.all()
            photo = get_object_or_404(queryset, pk=pk)
            photo.delete()
            return Response({"error": False, "message": "Photo deleted successfully!"})
        except Exception:
            return Response({"error": True, "message": "Photo Not Deleted!"})
class HotelUserOwnarAdminViewSet(viewsets.ViewSet):
    def list(self, request):
        if request.GET.get('user_id'):
            user_id = request.GET.get('user_id')
            hotel_user = models.HotelUserOwner.objects.filter(hotel_user_owner_id=int(user_id))
            serializer = HotelSerializers.HotelUserAdminSerializer(hotel_user, many=True, context={"request": request})
            return Response({"error": False, "message": "Hotel User Data List", "data": serializer.data})
        else:
            return Response({"error": False, "message": "Hotel User Not found"})


# Hotel User LOGO List Admin
class HotelUserLogoListAdminAPIView(APIView, PaginationHandlerMixin):
    pagination_class = BasicPagination
    serializer_class = HotelSerializers.HotelUserLogoListAdminSerializer

    def get(self, request, format=None, *args, **kwargs):
        user_id = self.request.query_params.get('user_id')
        instance = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id))
        page = self.paginate_queryset(instance)
        if page is not None:
            serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
        else:
            serializer = self.serializer_class(instance, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


# Hotel User Banner List Admin
class HotelUserBannerListAdminAPIView(APIView, PaginationHandlerMixin):
    pagination_class = BasicPagination
    serializer_class = HotelSerializers.HotelUserBannerListAdminSerializer

    def get(self, request, format=None, *args, **kwargs):
        user_id = self.request.query_params.get('user_id')
        instance = models.HotelUserPhoto.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id))
        page = self.paginate_queryset(instance)
        if page is not None:
            serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
        else:
            serializer = self.serializer_class(instance, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


# Start Hotel User Profile View Set
class HotelUserProfileAdminViewSet(viewsets.ViewSet):
    def retrieve(self, request, pk=None):
        queryset = models.HotelUserOwner.objects.all()
        user = get_object_or_404(queryset, hotel_user_owner_id=pk)
        serializer = HotelSerializers.HotelUserProfileAdminSerializer(user, context={"request": request})
        return Response({"error": False, "message": "Single User Data", "data": serializer.data})

    def update(self, request, pk=None):
        queryset = models.HotelUserOwner.objects.all()
        up_profile = get_object_or_404(queryset, pk=pk)
        serializer = HotelSerializers.HotelUserProfileUpdateAdminSerializer(up_profile, data=request.data, context={"request": request}, partial=True)
        serializer.is_valid(raise_exception=True)
        serializer.save()
        return Response({"error": False, "message": "Data Has been Updated"})
# End Hotel User Profile View Set


# Start Room Booking List
# class RoomBookListAdminAPIView(viewsets.ViewSet):
#     def list(self, request):
#         if request.GET.get('user_id'):
#             user_id = request.GET.get('user_id')
#             room_list = models.RoomCartDetails.objects.filter(room_id__hotel_id__hotel_user_id=int(user_id))
#             serializer = HotelSerializers.HotelRoomCartSerializer(room_list, many=True, context={"request": request})
#             return Response({"error": False, "message": "Room Book Data List", "data": serializer.data})
#         else:
#             return Response({"error": False, "message": "Room Book Not found"})


# RoomBookListAdminAPIView
class RoomBookListAdminAPIView(APIView, PaginationHandlerMixin):
    pagination_class = BasicPagination
    serializer_class = HotelSerializers.HotelRoomCartSerializer

    def get(self, request, format=None, *args, **kwargs):
        user_id = self.request.query_params.get('user_id')
        instance = models.RoomCartDetails.objects.filter(room_id__hotel_id__hotel_user_id=int(user_id)).order_by('-room_id')
        page = self.paginate_queryset(instance)
        if page is not None:
            serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
        else:
            serializer = self.serializer_class(instance, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


# class HotelBookingDetailsSearchViewSet(viewsets.ViewSet):
#     def list(self, request):
#         if 'user_id' in request.GET and 'key' in request.GET:
#             user_id = request.GET.get('user_id')
#             key = request.GET.get('key')
#             booking_datail = models.RoomCartDetails.objects.filter(hotel_user_owner_id__hotel_user_owner_id=int(user_id))
#             queryset = booking_datail.filter(Q(hotel_discount_name__contains=str(key)) | Q(start_time_date__contains=str(key)) | Q(end_time_date__contains=str(key)) | Q(offer_max_amount_id__max_amount__contains=str(key)) | Q(offer_max_amount_id__min_amount__contains=str(key))).order_by('-hotel_discount_id')
#             total_count = len(queryset)
#             results = queryset[:10]
#             serializer = HotelSerializers.HotelRoomCartSerializer(results, many=True, context={"request": request})
#             return Response({"error": False, "data": serializer.data, "count": total_count})


# class RecommendedHotelViewSet(viewsets.ViewSet):
#     def list(self, request):
#         hotel_details = models.HotelDetails.objects.filter(is_recommended=True, is_active=True)
#         serializer = serializers.RecommendedHotelDetailsSerializer(hotel_details, many=True, context={"request": request})
#         if len(serializer.data) != 0:
#             return Response({"error": False, "message": "All Recommended Hotel Details List", "data": serializer.data})
#         else:
#             return Response({"error": False, "message": "No Recommended Hotel Found", "data": []})
# SingleRoomBookingAdminAPIView
class SingleRoomBookingAdminAPIView(APIView, PaginationHandlerMixin):
    pagination_class = BasicPagination
    serializer_class = HotelSerializers.BookingHotelRoomCartSerializer

    def get(self, request, format=None, *args, **kwargs):
        user_id = self.request.query_params.get('user_id')
        custom_serializer = []
        room_cart = models.RoomCartDetails.objects.filter(room_id__hotel_id__hotel_user_id=int(user_id))
        cart_id_list = []
        for each in room_cart:
            if each.cart_id.cart_id not in cart_id_list:
                cart_id_list.append(each.cart_id.cart_id)
                try:
                    book_list = models.ConfirmBooking.objects.get(cart_id=each.cart_id)
                    custom_serializer.append(book_list)
                except Exception as e:
                    print(str(e))
        instance = custom_serializer
        page = self.paginate_queryset(instance)
        if page is not None:
            serializer = self.get_paginated_response(self.serializer_class(page, many=True).data)
        else:
            serializer = self.serializer_class(instance, many=True)
        return Response(serializer.data, status=status.HTTP_200_OK)


class HotelSingleBookingSearchViewSet(viewsets.ViewSet):
    def list(self, request):
        if 'user_id' in request.GET and 'key' in request.GET:
            user_id = request.GET.get('user_id')
            key = request.GET.get('key')
            booking = models.ConfirmBooking.objects.filter(user_id__user_id=int(user_id))
            queryset = booking.filter(Q(booking_id__contains=str(key)) | Q(total_payable_amount__contains=str(key)) | Q(booking_status__contains=str(key)) | Q(user_id__user_name__contains=str(key))).order_by('-booking_id')
            total_count = len(queryset)
            results = queryset[:10]
            serializer = HotelSerializers.BookingHotelRoomCartSerializer(results, many=True, context={"request": request})
            return Response({"error": False, "data": serializer.data, "count": total_count})
# File: helpers/registry/__init__.py (repo: shuklaayush/badger-system, license: MIT)
from .registries import registries, registry
from .artifacts import artifacts
from .bsc_registry import bsc_registry
from .eth_registry import eth_registry
from .WhaleRegistryAction import WhaleRegistryAction
| 34.833333 | 52 | 0.870813 | 25 | 209 | 7.12 | 0.32 | 0.202247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100478 | 209 | 5 | 53 | 41.8 | 0.946809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
# File: kobra/api/v1/tests/test_students.py (repo: karservice/kobra, license: MIT)
# -*- coding: utf-8 -*-
from django.contrib.auth.models import Permission
from rest_framework import status
from rest_framework.reverse import reverse
from rest_framework.test import APITestCase

from .... import factories, models


class StudentApiTests(APITestCase):
    def test_retrieve_unauthenticated(self):
        student = factories.StudentFactory()
        url = reverse('v1:student-detail', kwargs={'pk': student.pk})

        response = self.client.get(url)
        self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)

    def test_retrieve_authenticated(self):
        user = factories.UserFactory()
        student = factories.StudentFactory()
        url = reverse('v1:student-detail', kwargs={'pk': student.pk})

        self.client.force_authenticate(user)
        response = self.client.get(url)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertEqual(response.data['id'], str(student.id))

    def test_update_unauthenticated(self):
        student = factories.StudentFactory()
        url = reverse('v1:student-detail', kwargs={'pk': student.pk})

        new_student = factories.StudentFactory.build()
        # Request with changed name
        request_data = {
            'name': new_student.name
        }

        response = self.client.put(url, data=request_data)
        self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)

    def test_update_authenticated(self):
        user = factories.UserFactory()
        student = factories.StudentFactory()
        url = reverse('v1:student-detail', kwargs={'pk': student.pk})

        new_student = factories.StudentFactory.build()
        # Request with changed name
        request_data = {
            'name': new_student.name
        }

        self.client.force_authenticate(user)
        response = self.client.put(url, data=request_data)
        self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)

    def test_delete_unauthenticated(self):
        student = factories.StudentFactory()
        url = reverse('v1:student-detail', kwargs={'pk': student.pk})

        response = self.client.delete(url)
        self.assertEqual(response.status_code, status.HTTP_401_UNAUTHORIZED)

    def test_delete_authenticated(self):
        user = factories.UserFactory()
        student = factories.StudentFactory()
        url = reverse('v1:student-detail', kwargs={'pk': student.pk})

        self.client.force_authenticate(user)
        response = self.client.delete(url)
        self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
# File: Access Subsystem/AsOpAgent.py (repo: ViniGarcia/HoLMES, formerly EMSPlatform; license: MIT)
import uuid
import json
import yaml
import flask
import sqlite3
import os.path
import importlib
import AsModels
import IrModels
import VibModels
import IrAgent
import VibManager
import AsAuthAgent
'''
CLASS: OperationAgent
AUTHOR: Vinicius Fulber-Garcia
CREATION: 05 Nov. 2020
L. UPDATE: 28 Jul. 2021 (Fulber-Garcia; Update of DummyVnfmDriver instantiation;
Included the VNFM address to SetupDriver method)
DESCRIPTION: Operation agent implementation. This class
has the kernel functionalites of the access
subsystem. It holds the implementation of all
the methods provided for external users. We
have a natural division of standardized me-
thods (Ve-Vnfm-em) and particular methods
of the EMS platform.
CODES: -1 -> Invalid data type of __vibManager
-2 -> Invalid data type of __veVnfmEm
-3 -> SQL error while consulting standard dummy VNFM
-4 -> Standard dummy VNFM is not available
-5 -> SQL error while consulting standard dummy VNFM driver
-6 -> Standard dummy VNFM driver is not available
-7 -> Invalid driver name of __veVnfmEm
-8 -> Invalid class instantiation of __veVnfmEm
-9 -> Invalid data type of aiAs
-10 -> Invalid data type of oaAa
-11 -> Invalid data type of asIr
-12 -> Unavailable authentication attribute
-13 -> Error during request authentication
-14 -> Invalid argument type received
-15 -> Invalid id of VNF provided
-16 -> Invalid id of indicator provided
-17 -> Error during VIB table entry creation
-18 -> Error on database operation
-19 -> Invalid id of subscription provided
-20 -> Error on response creation
'''
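The docstring above encodes every failure of `OperationAgent` as a negative integer. A minimal sketch of translating such codes into human-readable messages follows; the mapping helper is illustrative (not part of the platform), while the codes and texts are copied from the docstring:

```python
# Illustrative helper translating OperationAgent's negative error codes
# (taken from the docstring above) into human-readable messages.
ERROR_MESSAGES = {
    -1: "Invalid data type of __vibManager",
    -2: "Invalid data type of __veVnfmEm",
    -3: "SQL error while consulting standard dummy VNFM",
    -4: "Standard dummy VNFM is not available",
}

def describe(result):
    # Negative ints are error codes; anything else counts as success.
    if isinstance(result, int) and result < 0:
        return ERROR_MESSAGES.get(result, "Unknown error %d" % result)
    return "OK"

print(describe(-3))  # -> SQL error while consulting standard dummy VNFM
print(describe(0))   # -> OK
```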
class OperationAgent:
__vibManager = None
__veVnfmEm = None
__aiAs = None
__oaAa = None
__asIr = None
def __init__(self):
return
def setupAgent(self, vibManager, veVnfmEm, aiAs, oaAa, asIr):
if type(vibManager) != VibManager.VibManager:
return -1
self.__vibManager = vibManager
if type(veVnfmEm) != str:
return -2
vnfm = self.__vibManager.queryVibDatabase("SELECT * FROM VnfmInstance WHERE vnfmId = \"" + veVnfmEm + "\";")
if type(vnfm) == sqlite3.Error:
return -3
if len(vnfm) != 1:
return -4
vnfm = VibModels.VibVnfmInstance().fromSql(vnfm[0])
driver = self.__vibManager.queryVibDatabase("SELECT * FROM VnfmDriverInstance WHERE vnfmId = \"" + vnfm.vnfmDriver + "\";")
if type(driver) == sqlite3.Error:
return -5
if len(driver) != 1:
return -6
driver = VibModels.VibVnfmDriverInstance().fromSql(driver[0])
if not os.path.isfile("Access Subsystem/Ve-Vnfm-em/" + driver.vnfmDriver + ".py"):
return -7
try:
self.__veVnfmEm = getattr(importlib.import_module("Ve-Vnfm-em." + driver.vnfmDriver), driver.vnfmDriver)(vnfm.vnfmId, vnfm.vnfmAddress, vnfm.vnfmCredentials)
except Exception as e:
return -8
if type(aiAs) != flask.Flask:
return -9
self.__aiAs = aiAs
self.__setupAccessInterface()
if type(oaAa) != AsAuthAgent.AuthenticationAgent:
return -10
self.__oaAa = oaAa
if type(asIr) != IrAgent.IrAgent:
return -11
self.__asIr = asIr
return self
def setupDriver(self, vibVnfmInstance):
driver = self.__vibManager.queryVibDatabase("SELECT * FROM VnfmDriverInstance WHERE vnfmId = \"" + vibVnfmInstance.vnfmDriver + "\";")
if type(driver) == sqlite3.Error:
return -5
if len(driver) != 1:
return -6
driver = VibModels.VibVnfmDriverInstance().fromSql(driver[0])
if not os.path.isfile("Access Subsystem/Ve-Vnfm-em/" + driver.vnfmDriver + ".py"):
return -7
try:
self.__veVnfmEm = getattr(importlib.import_module("Ve-Vnfm-em." + driver.vnfmDriver), driver.vnfmDriver)(vibVnfmInstance.vnfmId, vibVnfmInstance.vnfmAddress, vibVnfmInstance.vnfmCredentials)
except Exception as e:
return -8
return 0
def getRunningVnfm(self):
if self.__veVnfmEm != None:
vnfm = self.__vibManager.queryVibDatabase("SELECT * FROM VnfmInstance WHERE vnfmId = \"" + self.__veVnfmEm.vnfmId + "\";")
if type(vnfm) == sqlite3.Error:
return -3
if len(vnfm) != 1:
return -4
return VibModels.VibVnfmInstance().fromSql(vnfm[0]).toDictionary()
return None
	def __authenticateRequest(self, userRequest, vnfRequest):
		if "userAuth" not in flask.request.values:
			return -12
		# authenticationCheck returns an int error code, a bool result, or the authenticated user record
		authResult = self.__oaAa.authenticationCheck(flask.request.values.get("userAuth"))
		if type(authResult) == int:
			return -13
		if type(authResult) == bool:
			return authResult
		if userRequest not in authResult.userPrivileges:
			return False
		if userRequest == "VS" and vnfRequest is not None:
			credentialResult = self.__oaAa.credentialCheck(authResult.userId, vnfRequest)
			if type(credentialResult) == bool:
				return credentialResult
		return True
	def __autheticateUser(self, authentication):
		authResult = self.__oaAa.authenticationCheck(authentication)
		if type(authResult) == int:
			return -13
		return authResult
	def __setupAccessInterface(self):
		# Every endpoint accepts all five HTTP verbs; each view function dispatches on
		# flask.request.method internally. Registering the routes from a single table
		# avoids repeating the add_url_rule call for every rule.
		httpMethods = ["GET", "POST", "PUT", "PATCH", "DELETE"]
		accessRoutes = [
			("/vlmi/vnf_instances", self.vlmi_vnfInstances),
			("/vlmi/vnf_instances/<vnfInstanceId>", self.vlmi_vi_vnfInstanceID),
			("/vlmi/vnf_instances/<vnfInstanceId>/instantiate", self.vlmi_viid_instantiate),
			("/vlmi/vnf_instances/<vnfInstanceId>/scale", self.vlmi_viid_scale),
			("/vlmi/vnf_instances/<vnfInstanceId>/scale_to_level", self.vlmi_viid_scaleToLevel),
			("/vlmi/vnf_instances/<vnfInstanceId>/change_flavour", self.vlmi_viid_changeFlavour),
			("/vlmi/vnf_instances/<vnfInstanceId>/terminate", self.vlmi_viid_terminate),
			("/vlmi/vnf_instances/<vnfInstanceId>/heal", self.vlmi_viid_heal),
			("/vlmi/vnf_instances/<vnfInstanceId>/operate", self.vlmi_viid_operate),
			("/vlmi/vnf_instances/<vnfInstanceId>/changeExtConn", self.vlmi_viid_changeExtConn),
			("/vlmi/vnf_instances/<vnfInstanceId>/changeVnfPkg", self.vlmi_viid_changeVnfPkg),
			("/vlmi/vnf_instances/<vnfInstanceId>/createSnapshot", self.vlmi_viid_createSnapshot),
			("/vlmi/vnf_instances/<vnfInstanceId>/revertToSnapshot", self.vlmi_viid_revertToSnapshot),
			("/vlmi/vnf_lcm_op_occs", self.vlmi_vnfLcmOpOccs),
			("/vlmi/vnf_lcm_op_occs/<vnfLcmOpOccId>", self.vlmi_vloo_vnfOperationID),
			("/vlmi/vnf_lcm_op_occs/<vnfLcmOpOccId>/retry", self.vlmi_vlooid_retry),
			("/vlmi/vnf_lcm_op_occs/<vnfLcmOpOccId>/rollback", self.vlmi_vlooid_rollback),
			("/vlmi/vnf_lcm_op_occs/<vnfLcmOpOccId>/fail", self.vlmi_vlooid_fail),
			("/vlmi/vnf_lcm_op_occs/<vnfLcmOpOccId>/cancel", self.vlmi_vlooid_cancel),
			("/vlmi/vnf_snapshots", self.vlmi_vnfSnapshots),
			("/vlmi/vnf_snapshots/<vnfSnapshotInfoId>", self.vlmi_vs_vnfSnapshotID),
			("/vlmi/subscriptions", self.vlmi_subscriptions),
			("/vlmi/subscriptions/<subscriptionId>", self.vlmi_s_subscriptionID),
			("/vpmi/pm_jobs", self.vpmi_pm_jobs),
			("/vpmi/pm_jobs/<pmJobId>", self.vpmi_pmj_pmJobID),
			("/vpmi/pm_jobs/<pmJobId>/reports/<reportId>", self.vpmi_pmjid_r_reportID),
			("/vpmi/thresholds", self.vpmi_thresholds),
			("/vpmi/thresholds/<thresholdId>", self.vpmi_t_thresholdID),
			("/vfmi/alarms", self.vfmi_alarms),
			("/vfmi/alarms/<alarmId>", self.vfmi_a_alarmID),
			("/vfmi/alarms/<alarmId>/escalate", self.vfmi_aid_escalate),
			("/vfmi/subscriptions", self.vfmi_subscriptions),
			("/vfmi/subscriptions/<subscriptionId>", self.vfmi_s_subscriptionID),
			("/vii/indicators", self.vii_indicators),
			("/vii/indicators/<vnfInstanceId>", self.vii_i_vnfInstanceID),
			("/vii/indicators/<vnfInstanceId>/<indicatorId>", self.vii_iid_indicatorID),
			("/vii/subscriptions", self.vii_subscriptions),
			("/vii/subscriptions/<subscriptionId>", self.vii_s_subscriptionID),
			("/vci/configuration/<vnfId>", self.vci_configuration),
			("/vnf/operation/<vnfId>/<operationId>", self.vnf_operation),
			("/aa/authenticate/<authentication>", self.aa_authenticate),
			("/im/vib/users", self.im_vib_users),
			("/im/vib/users/<userId>", self.im_vib_u_userId),
			("/im/vib/credentials", self.im_vib_credentials),
			("/im/vib/credentials/user/<userId>", self.im_vib_c_userId),
			("/im/vib/credentials/vnf/<vnfId>", self.im_vib_c_vnfId),
			("/im/vib/credentials/<userId>/<vnfId>", self.im_vib_c_credentialId),
			("/im/vib/subscriptions", self.im_vib_subscriptions),
			("/im/vib/subscriptions/<subscriptionId>", self.im_vib_s_subscriptionId),
			("/im/vib/management_agents", self.im_vib_management_agents),
			("/im/vib/management_agents/<agentId>", self.im_vib_ma_agentId),
			("/im/vib/vnf_instances", self.im_vib_vnf_instances),
			("/im/vib/vnf_instances/<vnfId>", self.im_vib_vnfi_vnfId),
			("/im/vib/platforms", self.im_vib_platforms),
			("/im/vib/platforms/<platformId>", self.im_vib_p_platformId),
			("/im/vib/vnf_managers", self.im_vib_vnf_managers),
			("/im/vib/vnf_managers/<managerId>", self.im_vib_vnfm_managerId),
			("/im/vib/vnf_managers_drivers", self.im_vib_vnf_managers_drivers),
			("/im/vib/vnf_managers_drivers/<vnfmId>", self.im_vib_vnfmd_vnfmId),
			("/im/ms/running_subscription", self.im_ms_running_subscription),
			("/im/ms/running_subscription/<subscriptionId>", self.im_msrs_subscriptionId),
			("/im/ms/subscription", self.im_ms_subscription),
			("/im/ms/subscription/<subscriptionId>", self.im_mss_subscriptionId),
			("/im/ms/agent", self.im_ms_agent),
			("/im/ms/agent/<agentId>", self.im_msa_agentId),
			("/im/as/authenticator", self.im_as_authenticator),
			("/im/as/authenticator/<authenticatorId>", self.im_as_a_authenticatorId),
			("/im/as/running_authenticator", self.im_as_running_authenticator),
			("/im/as/running_authenticator/<authenticatorId>", self.im_as_ra_authenticatorId),
			("/im/as/user", self.im_as_user),
			("/im/as/user/<userId>", self.im_as_u_userId),
			("/im/as/credential", self.im_as_credential),
			("/im/as/credential/user/<userId>", self.im_as_c_userId),
			("/im/as/credential/vnf/<vnfId>", self.im_as_c_vnfId),
			("/im/as/credential/<userId>/<vnfId>", self.im_as_c_credentialId),
			("/im/as/vnfm/running_vnfm", self.im_as_vnfm_running_vnfm),
			("/im/as/vnfm/running_vnfm/<vnfmId>", self.im_as_vrv_vnfmId),
			("/im/as/vnfm/instance", self.im_as_vnfm_instance),
			("/im/as/vnfm/instance/<vnfmId>", self.im_as_vi_vnfmId),
			("/im/as/vnfm/driver", self.im_as_vnfm_driver),
			("/im/as/vnfm/driver/<vnfmId>", self.im_as_vd_vnfmId),
			("/im/vs/vnf_instance", self.im_vs_vnf_instance),
			("/im/vs/vnf_instance/<instanceId>", self.im_vs_vnfi_instanceId),
			("/im/vs/running_driver", self.im_vs_running_driver),
			("/im/vs/running_driver/<platformId>", self.im_vs_rs_platformId),
			("/im/vs/driver", self.im_vs_driver),
			("/im/vs/driver/<platformId>", self.im_vsd_platformId),
			("/im/vs/running_driver/operations", self.im_vs_rd_operations),
			("/im/vs/running_driver/operations/monitoring", self.im_vs_rdo_monitoring),
			("/im/vs/running_driver/operations/modification", self.im_vs_rdo_modification),
			("/im/vs/running_driver/operations/other", self.im_vs_rdo_other)
		]
		for routeRule, routeView in accessRoutes:
			self.__aiAs.add_url_rule(routeRule, methods=httpMethods, view_func=routeView)
	# ================================ Ve-Vnfm-em Operations (EMS -> VNFM) ================================
	def vlmi_vnfInstances(self):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vnfInstances()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vnfInstances(flask.request.values.get("createVnfRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vnfInstances()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vnfInstances()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vnfInstances()
	def vlmi_vi_vnfInstanceID(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vi_vnfInstanceID(vnfInstanceId)
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vi_vnfInstanceID()
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vi_vnfInstanceID()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vi_vnfInstanceID(vnfInstanceId, flask.request.values.get("vnfInfoModificationRequest"))
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vi_vnfInstanceID(vnfInstanceId)
	def vlmi_viid_instantiate(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_instantiate()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_instantiate(vnfInstanceId, flask.request.values.get("instantiateVnfRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_instantiate()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_instantiate()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_instantiate()
	def vlmi_viid_scale(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_scale()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_scale(vnfInstanceId, flask.request.values.get("scaleVnfRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_scale()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_scale()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_scale()
	def vlmi_viid_scaleToLevel(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_scaleToLevel()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_scaleToLevel(vnfInstanceId, flask.request.values.get("scaleVnfToLevelRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_scaleToLevel()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_scaleToLevel()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_scaleToLevel()
	def vlmi_viid_changeFlavour(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_changeFlavour()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_changeFlavour(vnfInstanceId, flask.request.values.get("changeVnfFlavourRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_changeFlavour()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_changeFlavour()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_changeFlavour()
	def vlmi_viid_terminate(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_terminate()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_terminate(vnfInstanceId, flask.request.values.get("terminateVnfRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_terminate()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_terminate()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_terminate()
	def vlmi_viid_heal(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_heal()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_heal(vnfInstanceId, flask.request.values.get("healVnfRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_heal()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_heal()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_heal()
	def vlmi_viid_operate(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_operate()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_operate(vnfInstanceId, flask.request.values.get("operateVnfRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_operate()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_operate()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_operate()
	def vlmi_viid_changeExtConn(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_changeExtConn()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_changeExtConn(vnfInstanceId, flask.request.values.get("changeExtVnfConnectivityRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_changeExtConn()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_changeExtConn()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_changeExtConn()
	def vlmi_viid_changeVnfPkg(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_changeVnfPkg()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_changeVnfPkg(vnfInstanceId, flask.request.values.get("changeCurrentVnfPkgRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_changeVnfPkg()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_changeVnfPkg()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_changeVnfPkg()
	def vlmi_viid_createSnapshot(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_createSnapshot()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_createSnapshot(vnfInstanceId, flask.request.values.get("createVnfSnapshotRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_createSnapshot()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_createSnapshot()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_createSnapshot()
	def vlmi_viid_revertToSnapshot(self, vnfInstanceId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_viid_revertToSnapshot()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_viid_revertToSnapshot(vnfInstanceId, flask.request.values.get("revertToVnfSnapshotRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_viid_revertToSnapshot()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_viid_revertToSnapshot()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_viid_revertToSnapshot()
	def vlmi_vnfLcmOpOccs(self):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vnfLcmOpOccs()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vnfLcmOpOccs()
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vnfLcmOpOccs()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vnfLcmOpOccs()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vnfLcmOpOccs()
	def vlmi_vloo_vnfOperationID(self, vnfLcmOpOccId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vloo_vnfOperationID(vnfLcmOpOccId)
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vloo_vnfOperationID()
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vloo_vnfOperationID()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vloo_vnfOperationID()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vloo_vnfOperationID()
	def vlmi_vlooid_retry(self, vnfLcmOpOccId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vlooid_retry()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vlooid_retry(vnfLcmOpOccId)
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vlooid_retry()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vlooid_retry()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vlooid_retry()
	def vlmi_vlooid_rollback(self, vnfLcmOpOccId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vlooid_rollback()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vlooid_rollback(vnfLcmOpOccId)
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vlooid_rollback()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vlooid_rollback()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vlooid_rollback()
	def vlmi_vlooid_fail(self, vnfLcmOpOccId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vlooid_fail()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vlooid_fail(vnfLcmOpOccId)
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vlooid_fail()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vlooid_fail()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vlooid_fail()
	def vlmi_vlooid_cancel(self, vnfLcmOpOccId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vlooid_cancel()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vlooid_cancel(vnfLcmOpOccId, flask.request.values.get("cancelMode"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vlooid_cancel()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vlooid_cancel()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vlooid_cancel()
	def vlmi_vnfSnapshots(self):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vnfSnapshots()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vnfSnapshots(flask.request.values.get("createVnfSnapshotInfoRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vnfSnapshots()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vnfSnapshots()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vnfSnapshots()
	def vlmi_vs_vnfSnapshotID(self, vnfSnapshotInfoId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_vs_vnfSnapshotID(vnfSnapshotInfoId)
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_vs_vnfSnapshotID()
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_vs_vnfSnapshotID()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_vs_vnfSnapshotID()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_vs_vnfSnapshotID(vnfSnapshotInfoId)
	def vlmi_subscriptions(self):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_subscriptions()
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_subscriptions(flask.request.values.get("lccnSubscriptionRequest"))
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_subscriptions()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_subscriptions()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_subscriptions()
	def vlmi_s_subscriptionID(self, subscriptionId):
		if self.__oaAa.getRunningAuthenticator() != "None":
			if self.__authenticateRequest("VLMI", None) != True:
				return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
		if flask.request.method == "GET":
			return self.__veVnfmEm.get_vlmi_s_subscriptionID(subscriptionId)
		elif flask.request.method == "POST":
			return self.__veVnfmEm.post_vlmi_s_subscriptionID(subscriptionId)
		elif flask.request.method == "PUT":
			return self.__veVnfmEm.put_vlmi_s_subscriptionID()
		elif flask.request.method == "PATCH":
			return self.__veVnfmEm.patch_vlmi_s_subscriptionID()
		elif flask.request.method == "DELETE":
			return self.__veVnfmEm.delete_vlmi_s_subscriptionID()
def vpmi_pm_jobs(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VPMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vpmi_pm_jobs()
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vpmi_pm_jobs(flask.request.values.get("createPmJobRequest"))
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vpmi_pm_jobs()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vpmi_pm_jobs()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vpmi_pm_jobs()
def vpmi_pmj_pmJobID(self, pmJobId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VPMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vpmi_pmj_pmJobID(pmJobId)
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vpmi_pmj_pmJobID()
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vpmi_pmj_pmJobID()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vpmi_pmj_pmJobID(pmJobId, flask.request.values.get("pmJobModifications"))
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vpmi_pmj_pmJobID(pmJobId)
def vpmi_pmjid_r_reportID(self, pmJobId, reportId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VPMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vpmi_pmjid_r_reportID(pmJobId, reportId)
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vpmi_pmjid_r_reportID()
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vpmi_pmjid_r_reportID()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vpmi_pmjid_r_reportID()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vpmi_pmjid_r_reportID()
def vpmi_thresholds(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VPMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vpmi_thresholds()
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vpmi_thresholds(flask.request.values.get("createThresholdRequest"))
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vpmi_thresholds()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vpmi_thresholds()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vpmi_thresholds()
def vpmi_t_thresholdID(self, thresholdId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VPMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vpmi_t_thresholdID(thresholdId)
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vpmi_t_thresholdID()
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vpmi_t_thresholdID()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vpmi_t_thresholdID(thresholdId, flask.request.values.get("thresholdModifications"))
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vpmi_t_thresholdID(thresholdId)
# VFMI routes: fault management (alarms, severity escalation, subscriptions)
def vfmi_alarms(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VFMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vfmi_alarms()
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vfmi_alarms()
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vfmi_alarms()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vfmi_alarms()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vfmi_alarms()
def vfmi_a_alarmID(self, alarmId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VFMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vfmi_a_alarmID(alarmId)
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vfmi_a_alarmID()
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vfmi_a_alarmID()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vfmi_a_alarmID(alarmId, flask.request.values.get("alarmModifications"))
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vfmi_a_alarmID()
def vfmi_aid_escalate(self, alarmId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VFMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vfmi_aid_escalate()
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vfmi_aid_escalate(alarmId, flask.request.values.get("perceivedSeverityRequest"))
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vfmi_aid_escalate()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vfmi_aid_escalate()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vfmi_aid_escalate()
def vfmi_subscriptions(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VFMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vfmi_subscriptions()
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vfmi_subscriptions(flask.request.values.get("fmSubscriptionRequest"))
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vfmi_subscriptions()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vfmi_subscriptions()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vfmi_subscriptions()
def vfmi_s_subscriptionID(self, subscriptionId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VFMI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.__veVnfmEm.get_vfmi_s_subscriptionID(subscriptionId)
elif flask.request.method == "POST":
return self.__veVnfmEm.post_vfmi_s_subscriptionID()
elif flask.request.method == "PUT":
return self.__veVnfmEm.put_vfmi_s_subscriptionID()
elif flask.request.method == "PATCH":
return self.__veVnfmEm.patch_vfmi_s_subscriptionID()
elif flask.request.method == "DELETE":
return self.__veVnfmEm.delete_vfmi_s_subscriptionID(subscriptionId)
# VII routes: VNF indicators and indicator subscriptions (note: these handlers live on this class, not on the Ve-Vnfm-em object)
def vii_indicators(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VII", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vii_indicators()
elif flask.request.method == "POST":
return self.post_vii_indicators()
elif flask.request.method == "PUT":
return self.put_vii_indicators()
elif flask.request.method == "PATCH":
return self.patch_vii_indicators()
elif flask.request.method == "DELETE":
return self.delete_vii_indicators()
def vii_i_vnfInstanceID(self, vnfInstanceId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VII", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
result = self.get_vii_i_vnfInstanceID(vnfInstanceId)
# A "204" sentinel from the handler triggers a fallback: the same URL segment
# may carry an indicatorId instead of a vnfInstanceId, so retry it as one.
if result == "204":
return self.vii_i_indicatorID(vnfInstanceId)
return result
elif flask.request.method == "POST":
return self.post_vii_i_vnfInstanceID()
elif flask.request.method == "PUT":
return self.put_vii_i_vnfInstanceID()
elif flask.request.method == "PATCH":
return self.patch_vii_i_vnfInstanceID()
elif flask.request.method == "DELETE":
return self.delete_vii_i_vnfInstanceID()
def vii_iid_indicatorID(self, vnfInstanceId, indicatorId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VII", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vii_iid_indicatorID(vnfInstanceId, indicatorId)
elif flask.request.method == "POST":
return self.post_vii_iid_indicatorID()
elif flask.request.method == "PUT":
return self.put_vii_iid_indicatorID()
elif flask.request.method == "PATCH":
return self.patch_vii_iid_indicatorID()
elif flask.request.method == "DELETE":
return self.delete_vii_iid_indicatorID()
def vii_i_indicatorID(self, indicatorId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VII", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vii_i_indicatorID(indicatorId)
elif flask.request.method == "POST":
return self.post_vii_i_indicatorID()
elif flask.request.method == "PUT":
return self.put_vii_i_indicatorID()
elif flask.request.method == "PATCH":
return self.patch_vii_i_indicatorID()
elif flask.request.method == "DELETE":
return self.delete_vii_i_indicatorID()
def vii_subscriptions(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VII", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vii_subscriptions()
elif flask.request.method == "POST":
return self.post_vii_subscriptions(flask.request.values.get("vnfIndicatorSubscriptionRequest"))
elif flask.request.method == "PUT":
return self.put_vii_subscriptions()
elif flask.request.method == "PATCH":
return self.patch_vii_subscriptions()
elif flask.request.method == "DELETE":
return self.delete_vii_subscriptions()
def vii_s_subscriptionID(self, subscriptionId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VII", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vii_s_subscriptionID(subscriptionId)
elif flask.request.method == "POST":
return self.post_vii_s_subscriptionID()
elif flask.request.method == "PUT":
return self.put_vii_s_subscriptionID()
elif flask.request.method == "PATCH":
return self.patch_vii_s_subscriptionID()
elif flask.request.method == "DELETE":
return self.delete_vii_s_subscriptionID(subscriptionId)
# VCI routes: per-VNF configuration retrieval and modification
def vci_configuration(self, vnfId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VCI", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vci_configuration(vnfId)
elif flask.request.method == "POST":
return self.post_vci_configuration()
elif flask.request.method == "PUT":
return self.put_vci_configuration()
elif flask.request.method == "PATCH":
return self.patch_vci_configuration(vnfId, flask.request.values.get("vnfConfigModifications"))
elif flask.request.method == "DELETE":
return self.delete_vci_configuration()
# VNF-specific operation routing; authentication here is checked per vnfId ("VS") rather than per interface
def vnf_operation(self, vnfId, operationId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", vnfId) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vnf_operation()
elif flask.request.method == "POST":
return self.post_vnf_operation(vnfId, operationId, flask.request.values.get("operationArguments"))
elif flask.request.method == "PUT":
return self.put_vnf_operation()
elif flask.request.method == "PATCH":
return self.patch_vnf_operation()
elif flask.request.method == "DELETE":
return self.delete_vnf_operation()
# Authentication endpoint; unlike the other routes, it performs no authenticator pre-check itself
def aa_authenticate(self, authentication):
if flask.request.method == "GET":
return self.get_aa_authenticate(authentication)
elif flask.request.method == "POST":
return self.post_aa_authenticate()
elif flask.request.method == "PUT":
return self.put_aa_authenticate()
elif flask.request.method == "PATCH":
return self.patch_aa_authenticate()
elif flask.request.method == "DELETE":
return self.delete_aa_authenticate()
# Internal management (im_) routes for the VIB: users, credentials, subscriptions, management agents, VNF instances, platforms, VNF managers and their drivers
def im_vib_users(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_users()
elif flask.request.method == "POST":
return self.post_vib_users(flask.request.values.get("vibUserInstance"))
elif flask.request.method == "PUT":
return self.put_vib_users()
elif flask.request.method == "PATCH":
return self.patch_vib_users()
elif flask.request.method == "DELETE":
return self.delete_vib_users()
def im_vib_u_userId(self, userId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_u_userId(userId)
elif flask.request.method == "POST":
return self.post_vib_u_userId()
elif flask.request.method == "PUT":
return self.put_vib_u_userId()
elif flask.request.method == "PATCH":
return self.patch_vib_u_userId(userId, flask.request.values.get("vibUserInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_u_userId(userId)
def im_vib_credentials(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_credentials()
elif flask.request.method == "POST":
return self.post_vib_credentials(flask.request.values.get("vibCredentialInstance"))
elif flask.request.method == "PUT":
return self.put_vib_credentials()
elif flask.request.method == "PATCH":
return self.patch_vib_credentials()
elif flask.request.method == "DELETE":
return self.delete_vib_credentials()
def im_vib_c_credentialId(self, userId, vnfId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_c_credentialId(userId, vnfId)
elif flask.request.method == "POST":
return self.post_vib_c_credentialId()
elif flask.request.method == "PUT":
return self.put_vib_c_credentialId()
elif flask.request.method == "PATCH":
return self.patch_vib_c_credentialId()
elif flask.request.method == "DELETE":
return self.delete_vib_c_credentialId(userId, vnfId)
def im_vib_c_userId(self, userId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_c_userId(userId)
elif flask.request.method == "POST":
return self.post_vib_c_userId()
elif flask.request.method == "PUT":
return self.put_vib_c_userId()
elif flask.request.method == "PATCH":
return self.patch_vib_c_userId()
elif flask.request.method == "DELETE":
return self.delete_vib_c_userId()
def im_vib_c_vnfId(self, vnfId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_c_vnfId(vnfId)
elif flask.request.method == "POST":
return self.post_vib_c_vnfId()
elif flask.request.method == "PUT":
return self.put_vib_c_vnfId()
elif flask.request.method == "PATCH":
return self.patch_vib_c_vnfId()
elif flask.request.method == "DELETE":
return self.delete_vib_c_vnfId()
def im_vib_subscriptions(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_subscriptions()
elif flask.request.method == "POST":
return self.post_vib_subscriptions(flask.request.values.get("vibSubscriptionInstance"))
elif flask.request.method == "PUT":
return self.put_vib_subscriptions()
elif flask.request.method == "PATCH":
return self.patch_vib_subscriptions()
elif flask.request.method == "DELETE":
return self.delete_vib_subscriptions()
def im_vib_s_subscriptionId(self, subscriptionId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_s_subscriptionId(subscriptionId)
elif flask.request.method == "POST":
return self.post_vib_s_subscriptionId()
elif flask.request.method == "PUT":
return self.put_vib_s_subscriptionId()
elif flask.request.method == "PATCH":
return self.patch_vib_s_subscriptionId(subscriptionId, flask.request.values.get("vibSubscriptionInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_s_subscriptionId(subscriptionId)
def im_vib_management_agents(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_management_agents()
elif flask.request.method == "POST":
return self.post_vib_management_agents(flask.request.values.get("vibMaInstance"))
elif flask.request.method == "PUT":
return self.put_vib_management_agents()
elif flask.request.method == "PATCH":
return self.patch_vib_management_agents()
elif flask.request.method == "DELETE":
return self.delete_vib_management_agents()
def im_vib_ma_agentId(self, agentId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_ma_agentId(agentId)
elif flask.request.method == "POST":
return self.post_vib_ma_agentId()
elif flask.request.method == "PUT":
return self.put_vib_ma_agentId()
elif flask.request.method == "PATCH":
return self.patch_vib_ma_agentId(agentId, flask.request.values.get("vibMaInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_ma_agentId(agentId)
def im_vib_vnf_instances(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_vnf_instances()
elif flask.request.method == "POST":
return self.post_vib_vnf_instances(flask.request.values.get("vibVnfInstance"))
elif flask.request.method == "PUT":
return self.put_vib_vnf_instances()
elif flask.request.method == "PATCH":
return self.patch_vib_vnf_instances()
elif flask.request.method == "DELETE":
return self.delete_vib_vnf_instances()
def im_vib_vnfi_vnfId(self, vnfId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_vnfi_vnfId(vnfId)
elif flask.request.method == "POST":
return self.post_vib_vnfi_vnfId()
elif flask.request.method == "PUT":
return self.put_vib_vnfi_vnfId()
elif flask.request.method == "PATCH":
return self.patch_vib_vnfi_vnfId(vnfId, flask.request.values.get("vibVnfInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_vnfi_vnfId(vnfId)
def im_vib_platforms(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_platforms()
elif flask.request.method == "POST":
return self.post_vib_platforms(flask.request.values.get("vibPlatformInstance"))
elif flask.request.method == "PUT":
return self.put_vib_platforms()
elif flask.request.method == "PATCH":
return self.patch_vib_platforms()
elif flask.request.method == "DELETE":
return self.delete_vib_platforms()
def im_vib_p_platformId(self, platformId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_p_platformId(platformId)
elif flask.request.method == "POST":
return self.post_vib_p_platformId()
elif flask.request.method == "PUT":
return self.put_vib_p_platformId()
elif flask.request.method == "PATCH":
return self.patch_vib_p_platformId(platformId, flask.request.values.get("vibPlatformInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_p_platformId(platformId)
def im_vib_vnf_managers(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_vnf_managers()
elif flask.request.method == "POST":
return self.post_vib_vnf_managers(flask.request.values.get("vibVnfmInstance"))
elif flask.request.method == "PUT":
return self.put_vib_vnf_managers()
elif flask.request.method == "PATCH":
return self.patch_vib_vnf_managers()
elif flask.request.method == "DELETE":
return self.delete_vib_vnf_managers()
def im_vib_vnfm_managerId(self, managerId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_vnfm_managerId(managerId)
elif flask.request.method == "POST":
return self.post_vib_vnfm_managerId()
elif flask.request.method == "PUT":
return self.put_vib_vnfm_managerId()
elif flask.request.method == "PATCH":
return self.patch_vib_vnfm_managerId(managerId, flask.request.values.get("vibVnfmInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_vnfm_managerId(managerId)
def im_vib_vnf_managers_drivers(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_vnf_manager_drivers()
elif flask.request.method == "POST":
return self.post_vib_vnf_manager_drivers(flask.request.values.get("vibVnfmDriverInstance"))
elif flask.request.method == "PUT":
return self.put_vib_vnf_manager_drivers()
elif flask.request.method == "PATCH":
return self.patch_vib_vnf_manager_drivers()
elif flask.request.method == "DELETE":
return self.delete_vib_vnf_manager_drivers()
def im_vib_vnfmd_vnfmId(self, vnfmId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VIB", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vib_vnfmd_vnfmId(vnfmId)
elif flask.request.method == "POST":
return self.post_vib_vnfmd_vnfmId()
elif flask.request.method == "PUT":
return self.put_vib_vnfmd_vnfmId()
elif flask.request.method == "PATCH":
return self.patch_vib_vnfmd_vnfmId(vnfmId, flask.request.values.get("vibVnfmDriverInstance"))
elif flask.request.method == "DELETE":
return self.delete_vib_vnfmd_vnfmId(vnfmId)
# Internal management routes for the monitoring subsystem (MS): running and stored subscriptions, monitoring agents
def im_ms_running_subscription(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("MS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_ms_running_subscription()
elif flask.request.method == "POST":
return self.post_ms_running_subscription()
elif flask.request.method == "PUT":
return self.put_ms_running_subscription()
elif flask.request.method == "PATCH":
return self.patch_ms_running_subscription()
elif flask.request.method == "DELETE":
return self.delete_ms_running_subscription()
def im_msrs_subscriptionId(self, subscriptionId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("MS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_msrs_subscriptionId(subscriptionId)
elif flask.request.method == "POST":
return self.post_msrs_subscriptionId(subscriptionId)
elif flask.request.method == "PUT":
return self.put_msrs_subscriptionId()
elif flask.request.method == "PATCH":
if "agentArguments" in flask.request.values:
return self.patch_msrs_subscriptionId(subscriptionId, flask.request.values.get("agentArguments"))
else:
return self.patch_msrs_subscriptionId(subscriptionId, None)
elif flask.request.method == "DELETE":
return self.delete_msrs_subscriptionId(subscriptionId)
def im_ms_subscription(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("MS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_ms_subscription()
elif flask.request.method == "POST":
return self.post_ms_subscription(flask.request.values.get("vnfIndicatorSubscriptionRequest"))
elif flask.request.method == "PUT":
return self.put_ms_subscription()
elif flask.request.method == "PATCH":
return self.patch_ms_subscription()
elif flask.request.method == "DELETE":
return self.delete_ms_subscription()
def im_mss_subscriptionId(self, subscriptionId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("MS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_mss_subscriptionId(subscriptionId)
elif flask.request.method == "POST":
return self.post_mss_subscriptionId()
elif flask.request.method == "PUT":
return self.put_mss_subscriptionId()
elif flask.request.method == "PATCH":
return self.patch_mss_subscriptionId(subscriptionId, flask.request.values.get("vnfIndicatorSubscription"))
elif flask.request.method == "DELETE":
return self.delete_mss_subscriptionId(subscriptionId)
def im_ms_agent(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("MS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_ms_agent()
elif flask.request.method == "POST":
return self.post_ms_agent(flask.request.values.get("vibMaInstance"))
elif flask.request.method == "PUT":
return self.put_ms_agent()
elif flask.request.method == "PATCH":
return self.patch_ms_agent()
elif flask.request.method == "DELETE":
return self.delete_ms_agent()
def im_msa_agentId(self, agentId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("MS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_msa_agentId(agentId)
elif flask.request.method == "POST":
return self.post_msa_agentId()
elif flask.request.method == "PUT":
return self.put_msa_agentId()
elif flask.request.method == "PATCH":
return self.patch_msa_agentId(agentId, flask.request.values.get("vibMaInstance"))
elif flask.request.method == "DELETE":
return self.delete_msa_agentId(agentId)
# Internal management routes for the authentication subsystem (AS): authenticators, users, credentials, VNFM instances and drivers
def im_as_authenticator(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_authenticator()
elif flask.request.method == "POST":
return self.post_as_authenticator()
elif flask.request.method == "PUT":
return self.put_as_authenticator()
elif flask.request.method == "PATCH":
return self.patch_as_authenticator()
elif flask.request.method == "DELETE":
return self.delete_as_authenticator()
def im_as_a_authenticatorId(self, authenticatorId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_a_authenticatorId(authenticatorId)
elif flask.request.method == "POST":
return self.post_as_a_authenticatorId()
elif flask.request.method == "PUT":
return self.put_as_a_authenticatorId()
elif flask.request.method == "PATCH":
return self.patch_as_a_authenticatorId()
elif flask.request.method == "DELETE":
return self.delete_as_a_authenticatorId()
def im_as_running_authenticator(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_running_authenticator()
elif flask.request.method == "POST":
return self.post_as_running_authenticator()
elif flask.request.method == "PUT":
return self.put_as_running_authenticator()
elif flask.request.method == "PATCH":
return self.patch_as_running_authenticator()
elif flask.request.method == "DELETE":
return self.delete_as_running_authenticator()
def im_as_ra_authenticatorId(self, authenticatorId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_ra_authenticatorId(authenticatorId)
elif flask.request.method == "POST":
return self.post_as_ra_authenticatorId(authenticatorId)
elif flask.request.method == "PUT":
return self.put_as_ra_authenticatorId()
elif flask.request.method == "PATCH":
return self.patch_as_ra_authenticatorId()
elif flask.request.method == "DELETE":
return self.delete_as_ra_authenticatorId()
def im_as_user(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_user()
elif flask.request.method == "POST":
return self.post_as_user(flask.request.values.get("vibUserInstance"))
elif flask.request.method == "PUT":
return self.put_as_user()
elif flask.request.method == "PATCH":
return self.patch_as_user()
elif flask.request.method == "DELETE":
return self.delete_as_user()
def im_as_u_userId(self, userId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_u_userId(userId)
elif flask.request.method == "POST":
return self.post_as_u_userId()
elif flask.request.method == "PUT":
return self.put_as_u_userId()
elif flask.request.method == "PATCH":
return self.patch_as_u_userId(userId, flask.request.values.get("vibUserInstance"))
elif flask.request.method == "DELETE":
return self.delete_as_u_userId(userId)
def im_as_credential(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_credential()
elif flask.request.method == "POST":
return self.post_as_credential(flask.request.values.get("vibCredentialInstance"))
elif flask.request.method == "PUT":
return self.put_as_credential()
elif flask.request.method == "PATCH":
return self.patch_as_credential()
elif flask.request.method == "DELETE":
return self.delete_as_credential()
def im_as_c_credentialId(self, userId, vnfId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_c_credentialId(userId, vnfId)
elif flask.request.method == "POST":
return self.post_as_c_credentialId()
elif flask.request.method == "PUT":
return self.put_as_c_credentialId()
elif flask.request.method == "PATCH":
return self.patch_as_c_credentialId()
elif flask.request.method == "DELETE":
return self.delete_as_c_credentialId(userId, vnfId)
def im_as_c_userId(self, userId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_c_userId(userId)
elif flask.request.method == "POST":
return self.post_as_c_userId()
elif flask.request.method == "PUT":
return self.put_as_c_userId()
elif flask.request.method == "PATCH":
return self.patch_as_c_userId()
elif flask.request.method == "DELETE":
return self.delete_as_c_userId()
def im_as_c_vnfId(self, vnfId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_c_vnfId(vnfId)
elif flask.request.method == "POST":
return self.post_as_c_vnfId()
elif flask.request.method == "PUT":
return self.put_as_c_vnfId()
elif flask.request.method == "PATCH":
return self.patch_as_c_vnfId()
elif flask.request.method == "DELETE":
return self.delete_as_c_vnfId()
def im_as_vnfm_running_vnfm(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_vnfm_running_vnfm()
elif flask.request.method == "POST":
return self.post_as_vnfm_running_vnfm()
elif flask.request.method == "PUT":
return self.put_as_vnfm_running_vnfm()
elif flask.request.method == "PATCH":
return self.patch_as_vnfm_running_vnfm()
elif flask.request.method == "DELETE":
return self.delete_as_vnfm_running_vnfm()
def im_as_vrv_vnfmId(self, vnfmId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_vrv_vnfmId(vnfmId)
elif flask.request.method == "POST":
return self.post_as_vrv_vnfmId(vnfmId)
elif flask.request.method == "PUT":
return self.put_as_vrv_vnfmId()
elif flask.request.method == "PATCH":
return self.patch_as_vrv_vnfmId()
elif flask.request.method == "DELETE":
return self.delete_as_vrv_vnfmId()
def im_as_vnfm_instance(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_vnfm_instance()
elif flask.request.method == "POST":
return self.post_as_vnfm_instance(flask.request.values.get("vibVnfmInstance"))
elif flask.request.method == "PUT":
return self.put_as_vnfm_instance()
elif flask.request.method == "PATCH":
return self.patch_as_vnfm_instance()
elif flask.request.method == "DELETE":
return self.delete_as_vnfm_instance()
def im_as_vi_vnfmId(self, vnfmId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_vi_vnfmId(vnfmId)
elif flask.request.method == "POST":
return self.post_as_vi_vnfmId()
elif flask.request.method == "PUT":
return self.put_as_vi_vnfmId()
elif flask.request.method == "PATCH":
return self.patch_as_vi_vnfmId(vnfmId, flask.request.values.get("vibVnfmInstance"))
elif flask.request.method == "DELETE":
return self.delete_as_vi_vnfmId(vnfmId)
def im_as_vnfm_driver(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_vnfm_driver()
elif flask.request.method == "POST":
return self.post_as_vnfm_driver(flask.request.values.get("vibVnfmDriverInstance"))
elif flask.request.method == "PUT":
return self.put_as_vnfm_driver()
elif flask.request.method == "PATCH":
return self.patch_as_vnfm_driver()
elif flask.request.method == "DELETE":
return self.delete_as_vnfm_driver()
def im_as_vd_vnfmId(self, vnfmId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("AS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_as_vd_vnfmId(vnfmId)
elif flask.request.method == "POST":
return self.post_as_vd_vnfmId()
elif flask.request.method == "PUT":
return self.put_as_vd_vnfmId()
elif flask.request.method == "PATCH":
return self.patch_as_vd_vnfmId(vnfmId, flask.request.values.get("vibVnfmDriverInstance"))
elif flask.request.method == "DELETE":
return self.delete_as_vd_vnfmId(vnfmId)
def im_vs_vnf_instance(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_vnf_instance()
elif flask.request.method == "POST":
return self.post_vs_vnf_instance(flask.request.values.get("vibVnfInstance"))
elif flask.request.method == "PUT":
return self.put_vs_vnf_instance()
elif flask.request.method == "PATCH":
return self.patch_vs_vnf_instance()
elif flask.request.method == "DELETE":
return self.delete_vs_vnf_instance()
def im_vs_vnfi_instanceId(self, instanceId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_vnfi_instanceId(instanceId)
elif flask.request.method == "POST":
return self.post_vs_vnfi_instanceId()
elif flask.request.method == "PUT":
return self.put_vs_vnfi_instanceId()
elif flask.request.method == "PATCH":
return self.patch_vs_vnfi_instanceId(instanceId, flask.request.values.get("vibVnfInstance"))
elif flask.request.method == "DELETE":
return self.delete_vs_vnfi_instanceId(instanceId)
def im_vs_running_driver(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_running_driver()
elif flask.request.method == "POST":
return self.post_vs_running_driver()
elif flask.request.method == "PUT":
return self.put_vs_running_driver()
elif flask.request.method == "PATCH":
return self.patch_vs_running_driver()
elif flask.request.method == "DELETE":
return self.delete_vs_running_driver()
def im_vs_rs_platformId(self, platformId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_rs_platformId(platformId)
elif flask.request.method == "POST":
return self.post_vs_rs_platformId(platformId)
elif flask.request.method == "PUT":
return self.put_vs_rs_platformId()
elif flask.request.method == "PATCH":
return self.patch_vs_rs_platformId()
elif flask.request.method == "DELETE":
return self.delete_vs_rs_platformId()
def im_vs_driver(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_driver()
elif flask.request.method == "POST":
return self.post_vs_driver(flask.request.values.get("vibPlatformInstance"))
elif flask.request.method == "PUT":
return self.put_vs_driver()
elif flask.request.method == "PATCH":
return self.patch_vs_driver()
elif flask.request.method == "DELETE":
return self.delete_vs_driver()
def im_vsd_platformId(self, platformId):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vsd_platformId(platformId)
elif flask.request.method == "POST":
return self.post_vsd_platformId()
elif flask.request.method == "PUT":
return self.put_vsd_platformId()
elif flask.request.method == "PATCH":
return self.patch_vsd_platformId(platformId, flask.request.values.get("vibPlatformInstance"))
elif flask.request.method == "DELETE":
return self.delete_vsd_platformId(platformId)
def im_vs_rd_operations(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_rd_operations()
elif flask.request.method == "POST":
return self.post_vs_rd_operations()
elif flask.request.method == "PUT":
return self.put_vs_rd_operations()
elif flask.request.method == "PATCH":
return self.patch_vs_rd_operations()
elif flask.request.method == "DELETE":
return self.delete_vs_rd_operations()
def im_vs_rdo_monitoring(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_rdo_monitoring()
elif flask.request.method == "POST":
return self.post_vs_rdo_monitoring()
elif flask.request.method == "PUT":
return self.put_vs_rdo_monitoring()
elif flask.request.method == "PATCH":
return self.patch_vs_rdo_monitoring()
elif flask.request.method == "DELETE":
return self.delete_vs_rdo_monitoring()
def im_vs_rdo_modification(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_rdo_modification()
elif flask.request.method == "POST":
return self.post_vs_rdo_modification()
elif flask.request.method == "PUT":
return self.put_vs_rdo_modification()
elif flask.request.method == "PATCH":
return self.patch_vs_rdo_modification()
elif flask.request.method == "DELETE":
return self.delete_vs_rdo_modification()
def im_vs_rdo_other(self):
if self.__oaAa.getRunningAuthenticator() != "None":
if self.__authenticateRequest("VS", None) != True:
return "ERROR CODE #4 (AA): REQUEST COULD NOT BE AUTHENTICATED", 400
if flask.request.method == "GET":
return self.get_vs_rdo_other()
elif flask.request.method == "POST":
return self.post_vs_rdo_other()
elif flask.request.method == "PUT":
return self.put_vs_rdo_other()
elif flask.request.method == "PATCH":
return self.patch_vs_rdo_other()
elif flask.request.method == "DELETE":
return self.delete_vs_rdo_other()
# ================================ Ve-Vnfm-em Operations (VNFM -> EMS) ================================
'''
PATH: /vii/indicators
ACTION: GET
DESCRIPTION: Query multiple VNF indicators. This resource allows querying all
VNF indicators known to the API producer.
ARGUMENT: --
RETURN: - 200 (HTTP) + VnfIndicator (Class) [0..N]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vii_indicators(self):
vnfIndicators = []
vnfPlatforms = {}
vibVnfInstances = [VibModels.VibVnfInstance().fromSql(vvi) for vvi in self.__vibManager.queryVibDatabase("SELECT * FROM VnfInstance;")]
for instance in vibVnfInstances:
if instance.vnfPlatform in vnfPlatforms:
operations = vnfPlatforms[instance.vnfPlatform][0]
else:
platform = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_running_driver", instance.vnfPlatform), "AS", "IM"))
if type(platform.messageData) == tuple:
return platform.messageData[0], 400
operations = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_monitoring", None), "AS", "IM"))
if type(operations.messageData) == tuple:
return operations.messageData[0], 400
vnfPlatforms[instance.vnfPlatform] = (operations.messageData, platform.messageData)
operations = operations.messageData
for indicator in operations:
result = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.VsData().fromData(instance, vnfPlatforms[instance.vnfPlatform][1], indicator, {}), "AS", "VS"))
vnfIndicators.append(AsModels.VnfIndicator().fromData(instance.vnfId + ";"+ indicator, indicator, result.messageData, instance.vnfId, {"self":flask.request.host, "vnfInstance":instance.vnfAddress}).toDictionary())
return json.dumps(vnfIndicators), 200
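# The loop above memoizes, per vnfPlatform, the (operations, platform) pair so
# each platform is resolved through the IM only once. A minimal, self-contained
# sketch of that caching pattern (fetch_operations/fetch_platform stand in for
# the __asIr round-trips and are hypothetical names):

```python
def collect_indicators(instances, fetch_platform, fetch_operations):
	# Cache the (operations, platform) pair per vnfPlatform so each
	# platform is fetched only once, mirroring the vnfPlatforms dict above.
	cache = {}
	indicators = []
	for inst in instances:
		if inst["vnfPlatform"] not in cache:
			cache[inst["vnfPlatform"]] = (fetch_operations(inst["vnfPlatform"]),
										  fetch_platform(inst["vnfPlatform"]))
		operations, _platform = cache[inst["vnfPlatform"]]
		for op in operations:
			indicators.append({"id": inst["vnfId"] + ";" + op, "value": None})
	return indicators
```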
'''
PATH: /vii/indicators
N/A ACTIONS: POST, PUT, PATCH, DELETE
**Do not change these methods**
'''
def post_vii_indicators(self):
return "NOT AVAILABLE", 405
def put_vii_indicators(self):
return "NOT AVAILABLE", 405
def patch_vii_indicators(self):
return "NOT AVAILABLE", 405
def delete_vii_indicators(self):
return "NOT AVAILABLE", 405
'''
PATH: /vii/indicators/{vnfInstanceId}
ACTION: GET
DESCRIPTION: Query multiple VNF indicators related to one VNF instance. This
resource allows querying all VNF indicators known to the API producer.
ARGUMENT: vnfInstanceId (String)
RETURN: - 200 (HTTP) + VnfIndicator (Class) [0..N]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vii_i_vnfInstanceID(self, vnfInstanceId):
if type(vnfInstanceId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
vnfIndicators = []
instance = self.__vibManager.queryVibDatabase("SELECT * FROM VnfInstance WHERE vnfId = \"" + vnfInstanceId + "\";")
if len(instance) == 0:
return "204", 200
instance = VibModels.VibVnfInstance().fromSql(instance[0])
platform = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_running_driver", instance.vnfPlatform), "AS", "IM"))
if type(platform.messageData) == tuple:
return platform.messageData[0], 400
operations = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_monitoring", None), "AS", "IM"))
if type(operations.messageData) == tuple:
return operations.messageData[0], 400
for indicator in operations.messageData:
result = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.VsData().fromData(instance, platform.messageData, indicator, {}), "AS", "VS"))
vnfIndicators.append(AsModels.VnfIndicator().fromData(instance.vnfId + ";"+ indicator, indicator, result.messageData, instance.vnfId, {"self":flask.request.host, "vnfInstance":instance.vnfAddress}).toDictionary())
return json.dumps(vnfIndicators), 200
'''
PATH: /vii/indicators/{vnfInstanceId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
**Do not change these methods**
'''
def post_vii_i_vnfInstanceID(self):
return "NOT AVAILABLE", 405
def put_vii_i_vnfInstanceID(self):
return "NOT AVAILABLE", 405
def patch_vii_i_vnfInstanceID(self):
return "NOT AVAILABLE", 405
def delete_vii_i_vnfInstanceID(self):
return "NOT AVAILABLE", 405
'''
PATH: /vii/indicators/{vnfInstanceId}/{indicatorId}
ACTION: GET
DESCRIPTION: Read an individual VNF indicator of one VNF instance.
ARGUMENT: vnfInstanceId (String), indicatorId (String)
RETURN: - 200 (HTTP) + VnfIndicator (Class) [1]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vii_iid_indicatorID(self, vnfInstanceId, indicatorId):
if type(vnfInstanceId) != str or type(indicatorId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
instance = self.__vibManager.queryVibDatabase("SELECT * FROM VnfInstance WHERE vnfId = \"" + vnfInstanceId + "\";")
if len(instance) == 0:
return "ERROR CODE #1 (AS): INSTANCE ELEMENT NOT FOUND", 400
instance = VibModels.VibVnfInstance().fromSql(instance[0])
platform = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_running_driver", instance.vnfPlatform), "AS", "IM"))
if type(platform.messageData) == tuple:
return platform.messageData[0], 400
operations = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_monitoring", None), "AS", "IM"))
if type(operations.messageData) == tuple:
return operations.messageData[0], 400
if indicatorId not in operations.messageData:
return "ERROR CODE #1 (AS): INDICATOR ELEMENT NOT FOUND FOR INSTANCE", 400
result = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.VsData().fromData(instance, platform.messageData, indicatorId, {}), "AS", "VS"))
return json.dumps(AsModels.VnfIndicator().fromData(instance.vnfId + ";"+ indicatorId, indicatorId, result.messageData, instance.vnfId, {"self":flask.request.host, "vnfInstance":instance.vnfAddress}).toDictionary()), 200
'''
PATH: /vii/indicators/{vnfInstanceId}/{indicatorId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
**Do not change these methods**
'''
def post_vii_iid_indicatorID(self):
return "NOT AVAILABLE", 405
def put_vii_iid_indicatorID(self):
return "NOT AVAILABLE", 405
def patch_vii_iid_indicatorID(self):
return "NOT AVAILABLE", 405
def delete_vii_iid_indicatorID(self):
return "NOT AVAILABLE", 405
'''
PATH: /vii/indicators/{indicatorId}
ACTION: GET
DESCRIPTION: Read an individual VNF indicator across all VNF instances.
ARGUMENT: indicatorId (String)
RETURN: - 200 (HTTP) + VnfIndicator (Class) [1]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vii_i_indicatorID(self, indicatorId):
if type(indicatorId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
vnfIndicators = []
vnfPlatforms = {}
vibVnfInstances = [VibModels.VibVnfInstance().fromSql(vvi) for vvi in self.__vibManager.queryVibDatabase("SELECT * FROM VnfInstance;")]
for instance in vibVnfInstances:
if instance.vnfPlatform in vnfPlatforms:
operations = vnfPlatforms[instance.vnfPlatform][0]
else:
platform = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_running_driver", instance.vnfPlatform), "AS", "IM"))
if type(platform.messageData) == tuple:
return platform.messageData[0], 400
operations = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_monitoring", None), "AS", "IM"))
if type(operations.messageData) == tuple:
return operations.messageData[0], 400
vnfPlatforms[instance.vnfPlatform] = (operations.messageData, platform.messageData)
operations = operations.messageData
if indicatorId in operations:
result = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.VsData().fromData(instance, vnfPlatforms[instance.vnfPlatform][1], indicatorId, {}), "AS", "VS"))
vnfIndicators.append(AsModels.VnfIndicator().fromData(instance.vnfId + ";"+ indicatorId, indicatorId, result.messageData, instance.vnfId, {"self":flask.request.host, "vnfInstance":instance.vnfAddress}).toDictionary())
return json.dumps(vnfIndicators), 200
'''
PATH: /vii/indicators/{indicatorId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
**Do not change these methods**
'''
def post_vii_i_indicatorID(self):
return "NOT AVAILABLE", 405
def put_vii_i_indicatorID(self):
return "NOT AVAILABLE", 405
def patch_vii_i_indicatorID(self):
return "NOT AVAILABLE", 405
def delete_vii_i_indicatorID(self):
return "NOT AVAILABLE", 405
'''
PATH: /vii/subscriptions
ACTION: GET
DESCRIPTION: Query multiple subscriptions of indicators.
ARGUMENT: --
RETURN: - 200 (HTTP) + VnfIndicatorSubscription (Class) [0..N]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vii_subscriptions(self):
subscriptions = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_ms_subscription", None), "AS", "IM"))
if type(subscriptions.messageData) == list:
return json.dumps([AsModels.VnfIndicatorSubscription().fromData(s.visId, s.visFilter, s.visCallback, s.visLinks).toDictionary() for s in subscriptions.messageData]), 200
return subscriptions.messageData[0], 400
'''
PATH: /vii/subscriptions
ACTION: POST
DESCRIPTION: Subscribe to VNF indicator change notifications.
ARGUMENT: VnfIndicatorSubscriptionRequest (Class)
RETURN: - 201 (HTTP) + VnfIndicatorSubscription (Class) [1]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def post_vii_subscriptions(self, vnfIndicatorSubscriptionRequest):
try:
request = AsModels.VnfIndicatorSubscriptionRequest().fromDictionary(json.loads(vnfIndicatorSubscriptionRequest))
except:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
subscription = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "post_ms_subscription", request), "AS", "IM"))
if type(subscription.messageData) == AsModels.VnfIndicatorSubscription:
subscription.messageData.links["self"] = flask.request.host
return json.dumps(subscription.messageData.toDictionary()), 201
return subscription.messageData[0], 400
'''
PATH: /vii/subscriptions
N/A ACTIONS: PUT, PATCH, DELETE
**Do not change these methods**
'''
def put_vii_subscriptions(self):
return "NOT AVAILABLE", 405
def patch_vii_subscriptions(self):
return "NOT AVAILABLE", 405
def delete_vii_subscriptions(self):
return "NOT AVAILABLE", 405
'''
PATH: /vii/subscriptions/{subscriptionId}
ACTION: GET
DESCRIPTION: Read an individual subscription.
ARGUMENT: subscriptionId (String)
RETURN: - 200 (HTTP) + VnfIndicatorSubscription (Class) [1]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vii_s_subscriptionID(self, subscriptionId):
if type(subscriptionId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
subscription = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_mss_subscriptionId", subscriptionId), "AS", "IM"))
if type(subscription.messageData) == VibModels.VibSubscriptionInstance:
return json.dumps(AsModels.VnfIndicatorSubscription().fromData(subscription.messageData.visId, subscription.messageData.visFilter, subscription.messageData.visCallback, subscription.messageData.visLinks).toDictionary()), 200
return subscription.messageData[0], 400
'''
PATH: /vii/subscriptions/{subscriptionId}
ACTION: DELETE
DESCRIPTION: Terminate a subscription.
ARGUMENT: subscriptionId (String)
RETURN: - 204 (HTTP)
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def delete_vii_s_subscriptionID(self, subscriptionId):
if type(subscriptionId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
delete = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "delete_mss_subscriptionId", subscriptionId), "AS", "IM"))
if type(delete.messageData) == VibModels.VibSubscriptionInstance:
return "", 204
return delete.messageData[0], 400
'''
PATH: /vii/subscriptions/{subscriptionId}
N/A ACTIONS: POST, PUT, PATCH
**Do not change these methods**
'''
def post_vii_s_subscriptionID(self):
return "NOT AVAILABLE", 405
def put_vii_s_subscriptionID(self):
return "NOT AVAILABLE", 405
def patch_vii_s_subscriptionID(self):
return "NOT AVAILABLE", 405
'''
PATH: /vci/configuration/{vnfId}
ACTION: GET
DESCRIPTION: Read configuration data of a VNF instance and its VNFC
instances.
ARGUMENT: --
RETURN: - 200 (HTTP) + VnfConfiguration (Class) [1]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def get_vci_configuration(self, vnfId):
if type(vnfId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
instance = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_vnfi_instanceId", vnfId), "AS", "IM"))
if type(instance.messageData) == VibModels.VibVnfInstance:
return json.dumps(AsModels.VnfConfiguration().fromData(AsModels.VnfConfigurationData().fromData([], None, instance.messageData.toDictionary()), []).toDictionary()), 200
return instance.messageData[0], 400
'''
PATH: /vci/configuration/{vnfId}
ACTION: PATCH
DESCRIPTION: Set configuration data of a VNF instance and/or its VNFC
instances.
ARGUMENT: VnfConfigModifications (Class)
RETURN: - 200 (HTTP) + VnfConfigModifications (Class) [1]
- Integer error code (HTTP)
CALL: VNFM -> EM
'''
def patch_vci_configuration(self, vnfId, vnfConfigModifications):
if type(vnfId) != str:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
try:
request = AsModels.VnfConfigModifications().fromDictionary(json.loads(vnfConfigModifications))
except:
return "ERROR CODE #0 (AS): INVALID ARGUMENTS PROVIDED", 400
instance = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_vnfi_instanceId", vnfId), "AS", "IM"))
if type(instance.messageData) != VibModels.VibVnfInstance:
return instance.messageData[0], 400
if request.vnfConfigurationData == None or request.vnfConfigurationData.vnfSpecificData == None:
return "ERROR CODE #2 (AS): MODIFICATIONS NOT SUPPORTED", 400
if "vnfAddress" in request.vnfConfigurationData.vnfSpecificData:
instance.messageData.vnfAddress = request.vnfConfigurationData.vnfSpecificData["vnfAddress"]
if "vnfPlatform" in request.vnfConfigurationData.vnfSpecificData:
instance.messageData.vnfPlatform = request.vnfConfigurationData.vnfSpecificData["vnfPlatform"]
if "vnfExtAgents" in request.vnfConfigurationData.vnfSpecificData:
instance.messageData.vnfExtAgents = request.vnfConfigurationData.vnfSpecificData["vnfExtAgents"]
if "vnfAuth" in request.vnfConfigurationData.vnfSpecificData:
instance.messageData.vnfAuth = request.vnfConfigurationData.vnfSpecificData["vnfAuth"]
instance = self.__asIr.sendMessage(IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "patch_vib_vnfi_vnfId", instance.messageData), "AS", "IM"))
if type(instance.messageData) == VibModels.VibVnfInstance:
return vnfConfigModifications, 200
return instance.messageData[0], 400
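# The PATCH handler above copies each supported key from vnfSpecificData onto
# the instance field by field. Assuming the modifiable fields share their
# attribute names with the dictionary keys, the same update can be sketched
# generically with a whitelist (helper name and defaults are hypothetical):

```python
def apply_modifications(instance, specific_data,
						allowed=("vnfAddress", "vnfPlatform", "vnfExtAgents", "vnfAuth")):
	# Copy only whitelisted keys onto the instance; unknown keys are ignored.
	for field in allowed:
		if field in specific_data:
			setattr(instance, field, specific_data[field])
	return instance
```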
'''
PATH: /vci/configuration
N/A ACTIONS: POST, PUT, DELETE
**Do not change these methods**
'''
def post_vci_configuration(self):
return "NOT AVAILABLE", 405
def put_vci_configuration(self):
return "NOT AVAILABLE", 405
def delete_vci_configuration(self):
return "NOT AVAILABLE", 405
#TO DO FOR ALFA.2: IN/OUT METHODS IN DRIVER TO OPERATIONS FROM VNFM TO EM
# ===================================== VNF Management Operations =====================================
'''
PATH: /vnf/{vnfId}/{operationId}
ACTION: POST
DESCRIPTION: Execute a running VNF operation. It may receive arguments
through a dictionary, which can be empty when there are
no arguments.
ARGUMENT: Dictionary (operationArguments)
RETURN: - 200 (HTTP) + String [1]
- Integer error code (HTTP)
'''
def post_vnf_operation(self, vnfId, operationId, operationArguments):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnfi_vnfId", vnfId), "AS", "IM")
instance = self.__asIr.sendMessage(request)
if type(instance) == tuple:
return instance[0], 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_p_platformId", instance.messageData.vnfPlatform), "AS", "IM")
platform = self.__asIr.sendMessage(request)
if type(platform) == tuple:
return platform[0], 400
try:
operationArguments = json.loads(operationArguments)
except Exception as ej:
try:
operationArguments = yaml.safe_load(operationArguments)
except Exception as ey:
return "ERROR CODE #0 (AS): INVALID OPERATION ARGUMENTS PROVIDED (JSON: " + str(ej) + ") (YAML: " + str(ey) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.VsData().fromData(instance.messageData, platform.messageData, operationId, operationArguments), "AS", "VS")
result = self.__asIr.sendMessage(request)
if type(result) != IrModels.IrMessage:
return "ERROR CODE #3 (AS): VS ERROR DURING VNF OPERATION (returned result message is " + str(type(result)) + ")", 400
return result.messageData, 200
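# post_vnf_operation accepts the operation arguments as JSON and falls back to
# YAML, reporting both parse errors when neither format works. The same
# fallback chain, sketched with stdlib json plus caller-supplied fallback
# parsers (the method above uses yaml.safe_load as its only fallback):

```python
import json

def parse_operation_arguments(raw, fallback_parsers=()):
	# Try JSON first, then each (name, parser) fallback; collect every
	# error message so the caller can report why all formats failed.
	errors = []
	try:
		return json.loads(raw), None
	except Exception as e:
		errors.append("JSON: " + str(e))
	for name, parser in fallback_parsers:
		try:
			return parser(raw), None
		except Exception as e:
			errors.append(name + ": " + str(e))
	return None, "; ".join(errors)
```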
def get_vnf_operation(self):
return "NOT AVAILABLE", 405
def put_vnf_operation(self):
return "NOT AVAILABLE", 405
def patch_vnf_operation(self):
return "NOT AVAILABLE", 405
def delete_vnf_operation(self):
return "NOT AVAILABLE", 405
# ===================================== EMS Management Operations =====================================
'''
PATH: /aa/authenticate/<authentication>
ACTION: GET
DESCRIPTION: Return "True" if the authentication is
valid or "False" if it is not.
ARGUMENT: --
RETURN: - 200 (HTTP) + Boolean [1]
- Integer error code (HTTP)
'''
def get_aa_authenticate(self, authentication):
authResult = self.__autheticateUser(authentication)
if type(authResult) == VibModels.VibUserInstance:
return "True"
else:
return "False"
'''
PATH: /aa/authenticate/<authentication>
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_aa_authenticate(self):
return "NOT AVAILABLE", 405
def put_aa_authenticate(self):
return "NOT AVAILABLE", 405
def patch_aa_authenticate(self):
return "NOT AVAILABLE", 405
def delete_aa_authenticate(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/users
ACTION: GET
DESCRIPTION: Retrieve all the available users of the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibUserInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_users(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_users", None), "AS", "IM")
users = self.__asIr.sendMessage(request)
if type(users.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING USER OPERATION (" + str(users.messageData[1]) + ")", 400
return json.dumps([u.toDictionary() for u in users.messageData]), 200
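# Every IM round-trip in this section follows the same convention: messageData
# is a tuple on failure (error detail at index 1) and a payload object on
# success. A minimal helper capturing that check (name hypothetical; this file
# inlines the check at every call site):

```python
def unwrap_message_data(message_data, error_prefix="ERROR CODE #3 (AS)"):
	# A tuple messageData signals an error; anything else is the payload.
	if isinstance(message_data, tuple):
		return None, (error_prefix + ": " + str(message_data[1]), 400)
	return message_data, None
```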
'''
PATH: /im/vib/users
ACTION: POST
DESCRIPTION: Send a new user to be saved in the VIB
database.
ARGUMENT: VibUserInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def post_vib_users(self, vibUserInstance):
try:
vibUserInstance = VibModels.VibUserInstance().fromDictionary(json.loads(vibUserInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID USER INSTANCE PROVIDED", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_users", vibUserInstance), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING USER OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary()), 200
'''
PATH: /im/vib/users
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_users(self):
return "NOT AVAILABLE", 405
def patch_vib_users(self):
return "NOT AVAILABLE", 405
def delete_vib_users(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/users/{userId}
ACTION: GET
DESCRIPTION: Retrieve a particular user from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def get_vib_u_userId(self, userId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_u_userId", userId), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING USER OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary()), 200
'''
PATH: /im/vib/users/{userId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular user already saved
in the VIB database.
ARGUMENT: VibUserInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_u_userId(self, userId, vibUserInstance):
try:
vibUserInstance = VibModels.VibUserInstance().fromDictionary(json.loads(vibUserInstance))
if userId != vibUserInstance.userId:
return "ERROR CODE #0 (AS): INVALID USER INSTANCE PROVIDED", 400
except:
return "ERROR CODE #0 (AS): INVALID USER INSTANCE PROVIDED", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_u_userId", vibUserInstance), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING USER OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary()), 200
'''
PATH: /im/vib/users/{userId}
ACTION: DELETE
DESCRIPTION: Delete a particular user from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_u_userId(self, userId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_u_userId", userId), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING USER OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary()), 200
'''
PATH: /im/vib/users/{userId}
N/A ACTIONS: POST, PUT
'''
def post_vib_u_userId(self):
return "NOT AVAILABLE", 405
def put_vib_u_userId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/credentials
ACTION: GET
DESCRIPTION: Retrieve all the available credentials of the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_credentials(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_credentials", None), "AS", "IM")
credentials = self.__asIr.sendMessage(request)
if type(credentials.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING CREDENTIAL OPERATION (" + str(credentials.messageData[1]) + ")", 400
return json.dumps([c.toDictionary() for c in credentials.messageData]), 200
'''
PATH: /im/vib/credentials
ACTION: POST
DESCRIPTION: Send a new credential to be saved in the VIB
database.
ARGUMENT: VibCredentialInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibCredentialInstance [1]
- Integer error code (HTTP)
'''
def post_vib_credentials(self, vibCredentialInstance):
try:
vibCredentialInstance = VibModels.VibCredentialInstance().fromDictionary(json.loads(vibCredentialInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID CREDENTIAL INSTANCE PROVIDED", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_credentials", vibCredentialInstance), "AS", "IM")
credential = self.__asIr.sendMessage(request)
if type(credential.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING CREDENTIAL OPERATION (" + str(credential.messageData[1]) + ")", 400
return json.dumps(credential.messageData.toDictionary()), 200
'''
PATH: /im/vib/credentials
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_credentials(self):
return "NOT AVAILABLE", 405
def patch_vib_credentials(self):
return "NOT AVAILABLE", 405
def delete_vib_credentials(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/credentials/{userId}/{vnfId}
ACTION: GET
DESCRIPTION: Retrieve a particular credential from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [1]
- Integer error code (HTTP)
'''
def get_vib_c_credentialId(self, userId, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_c_credentialId", (userId, vnfId)), "AS", "IM")
credential = self.__asIr.sendMessage(request)
if type(credential.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING CREDENTIAL OPERATION (" + str(credential.messageData[1]) + ")", 400
return json.dumps(credential.messageData.toDictionary())
'''
PATH: /im/vib/credentials/{userId}/{vnfId}
ACTION: DELETE
DESCRIPTION: Delete a particular credential from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_c_credentialId(self, userId, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_c_credentialId", (userId, vnfId)), "AS", "IM")
credential = self.__asIr.sendMessage(request)
if type(credential.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING CREDENTIAL OPERATION (" + str(credential.messageData[1]) + ")", 400
return json.dumps(credential.messageData.toDictionary())
'''
PATH: /im/vib/credentials/{userId}/{vnfId}
N/A ACTIONS: POST, PUT, PATCH
'''
def post_vib_c_credentialId(self):
return "NOT AVAILABLE", 405
def put_vib_c_credentialId(self):
return "NOT AVAILABLE", 405
def patch_vib_c_credentialId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/credentials/user/{userId}
ACTION: GET
DESCRIPTION: Retrieve the credentials of a particular
user from the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_c_userId(self, userId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_c_userId", userId), "AS", "IM")
credentials = self.__asIr.sendMessage(request)
if type(credentials.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING CREDENTIAL OPERATION (" + str(credentials.messageData[1]) + ")", 400
return json.dumps([c.toDictionary() for c in credentials.messageData])
'''
PATH: /im/vib/credentials/user/{userId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_vib_c_userId(self):
return "NOT AVAILABLE", 405
def put_vib_c_userId(self):
return "NOT AVAILABLE", 405
def patch_vib_c_userId(self):
return "NOT AVAILABLE", 405
def delete_vib_c_userId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/credentials/vnf/{vnfId}
ACTION: GET
DESCRIPTION: Retrieve the credentials of a particular
VNF from the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_c_vnfId(self, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_c_vnfId", vnfId), "AS", "IM")
credentials = self.__asIr.sendMessage(request)
if type(credentials.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING CREDENTIAL OPERATION (" + str(credentials.messageData[1]) + ")", 400
return json.dumps([c.toDictionary() for c in credentials.messageData])
'''
PATH: /im/vib/credentials/vnf/{vnfId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_vib_c_vnfId(self):
return "NOT AVAILABLE", 405
def put_vib_c_vnfId(self):
return "NOT AVAILABLE", 405
def patch_vib_c_vnfId(self):
return "NOT AVAILABLE", 405
def delete_vib_c_vnfId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/subscriptions
ACTION: GET
DESCRIPTION: Retrieve all the available subscriptions of the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_subscriptions(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_subscriptions", None), "AS", "IM")
subscriptions = self.__asIr.sendMessage(request)
if type(subscriptions.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING SUBSCRIPTION OPERATION (" + str(subscriptions.messageData[1]) + ")", 400
return json.dumps([s.toDictionary() for s in subscriptions.messageData]), 200
'''
PATH: /im/vib/subscriptions
ACTION: POST
DESCRIPTION: Send a new subscription to be saved in the VIB
database.
ARGUMENT: VibSubscriptionInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def post_vib_subscriptions(self, vibSubscriptionInstance):
try:
vibSubscriptionInstance = VibModels.VibSubscriptionInstance().fromDictionary(json.loads(vibSubscriptionInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID SUBSCRIPTION INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_subscriptions", vibSubscriptionInstance), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/vib/subscriptions
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_subscriptions(self):
return "NOT AVAILABLE", 405
def patch_vib_subscriptions(self):
return "NOT AVAILABLE", 405
def delete_vib_subscriptions(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/subscriptions/{subscriptionId}
ACTION: GET
DESCRIPTION: Retrieve a particular subscription from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def get_vib_s_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_s_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/vib/subscriptions/{subscriptionId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular subscription already
saved in the VIB database.
ARGUMENT: VibSubscriptionInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_s_subscriptionId(self, subscriptionId, vibSubscriptionInstance):
try:
vibSubscriptionInstance = VibModels.VibSubscriptionInstance().fromDictionary(json.loads(vibSubscriptionInstance))
if subscriptionId != vibSubscriptionInstance.visId:
return "ERROR CODE #0 (AS): INVALID SUBSCRIPTION INSTANCE PROVIDED (" + str(subscriptionId) + " != " + str(vibSubscriptionInstance.visId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID SUBSCRIPTION INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_s_subscriptionId", vibSubscriptionInstance), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/vib/subscriptions/{subscriptionId}
ACTION: DELETE
DESCRIPTION: Delete a particular subscription from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_s_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_s_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/vib/subscriptions/{subscriptionId}
N/A ACTIONS: POST, PUT
'''
def post_vib_s_subscriptionId(self):
return "NOT AVAILABLE", 405
def put_vib_s_subscriptionId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/management_agents
ACTION: GET
DESCRIPTION: Retrieve all the available management agents of
the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibMaInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_management_agents(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_management_agents", None), "AS", "IM")
agents = self.__asIr.sendMessage(request)
if type(agents.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING MANAGEMENT AGENT OPERATION (" + str(agents.messageData[1]) + ")", 400
return json.dumps([a.toDictionary() for a in agents.messageData]), 200
'''
PATH: /im/vib/management_agents
ACTION: POST
DESCRIPTION: Send a new management agent to be saved in the
VIB database.
ARGUMENT: VibMaInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def post_vib_management_agents(self, vibMaInstance):
try:
vibMaInstance = VibModels.VibMaInstance().fromDictionary(json.loads(vibMaInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID MANAGEMENT AGENT INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_management_agents", vibMaInstance), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING MANAGEMENT AGENT OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/vib/management_agents
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_management_agents(self):
return "NOT AVAILABLE", 405
def patch_vib_management_agents(self):
return "NOT AVAILABLE", 405
def delete_vib_management_agents(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/management_agents/{agentId}
ACTION: GET
DESCRIPTION: Retrieve a particular management agent from the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def get_vib_ma_agentId(self, agentId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_ma_agentId", agentId), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING MANAGEMENT AGENT OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/vib/management_agents/{agentId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular management agent
already saved in the VIB database.
ARGUMENT: VibMaInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_ma_agentId(self, agentId, vibMaInstance):
try:
vibMaInstance = VibModels.VibMaInstance().fromDictionary(json.loads(vibMaInstance))
if agentId != vibMaInstance.maId:
return "ERROR CODE #0 (AS): INVALID MANAGEMENT AGENT INSTANCE PROVIDED (" + str(agentId) + " != " + str(vibMaInstance.maId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID MANAGEMENT AGENT INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_ma_agentId", vibMaInstance), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING MANAGEMENT AGENT OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/vib/management_agents/{agentId}
ACTION: DELETE
DESCRIPTION: Delete a particular management agent from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_ma_agentId(self, agentId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_ma_agentId", agentId), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING MANAGEMENT AGENT OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/vib/management_agents/{agentId}
N/A ACTIONS: POST, PUT
'''
def post_vib_ma_agentId(self):
return "NOT AVAILABLE", 405
def put_vib_ma_agentId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/vnf_instances
ACTION: GET
DESCRIPTION: Retrieve all the available VNF instances of the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_vnf_instances(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnf_instances", None), "AS", "IM")
instances = self.__asIr.sendMessage(request)
if type(instances.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF INSTANCE OPERATION (" + str(instances.messageData[1]) + ")", 400
return json.dumps([i.toDictionary() for i in instances.messageData]), 200
'''
PATH: /im/vib/vnf_instances
ACTION: POST
DESCRIPTION: Send a new VNF instance to be saved in the VIB
database.
ARGUMENT: VibVnfInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibVnfInstance [1]
- Integer error code (HTTP)
'''
def post_vib_vnf_instances(self, vibVnfInstance):
try:
vibVnfInstance = VibModels.VibVnfInstance().fromDictionary(json.loads(vibVnfInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_vnf_instances", vibVnfInstance), "AS", "IM")
instance = self.__asIr.sendMessage(request)
if type(instance.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
return json.dumps(instance.messageData.toDictionary())
'''
PATH: /im/vib/vnf_instances
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_vnf_instances(self):
return "NOT AVAILABLE", 405
def patch_vib_vnf_instances(self):
return "NOT AVAILABLE", 405
def delete_vib_vnf_instances(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/vnf_instances/{vnfId}
ACTION: GET
DESCRIPTION: Retrieve a particular VNF instance from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfInstance [1]
- Integer error code (HTTP)
'''
def get_vib_vnfi_vnfId(self, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnfi_vnfId", vnfId), "AS", "IM")
instance = self.__asIr.sendMessage(request)
if type(instance.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
return json.dumps(instance.messageData.toDictionary())
'''
PATH: /im/vib/vnf_instances/{vnfId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular VNF instance already
saved in the VIB database.
ARGUMENT: VibVnfInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibVnfInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_vnfi_vnfId(self, vnfId, vibVnfInstance):
try:
vibVnfInstance = VibModels.VibVnfInstance().fromDictionary(json.loads(vibVnfInstance))
if vnfId != vibVnfInstance.vnfId:
return "ERROR CODE #0 (AS): INVALID VNF INSTANCE PROVIDED (" + str(vnfId) + " != " + str(vibVnfInstance.vnfId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_vnfi_vnfId", vibVnfInstance), "AS", "IM")
instance = self.__asIr.sendMessage(request)
if type(instance.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
return json.dumps(instance.messageData.toDictionary())
'''
PATH: /im/vib/vnf_instances/{vnfId}
ACTION: DELETE
DESCRIPTION: Delete a particular VNF instance from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_vnfi_vnfId(self, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_vnfi_vnfId", vnfId), "AS", "IM")
instance = self.__asIr.sendMessage(request)
if type(instance.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
return json.dumps(instance.messageData.toDictionary())
'''
PATH: /im/vib/vnf_instances/{vnfId}
N/A ACTIONS: POST, PUT
'''
def post_vib_vnfi_vnfId(self):
return "NOT AVAILABLE", 405
def put_vib_vnfi_vnfId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/platforms
ACTION: GET
DESCRIPTION: Retrieve all the available platforms of the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibPlatformInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_platforms(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_platforms", None), "AS", "IM")
platforms = self.__asIr.sendMessage(request)
if type(platforms.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING PLATFORM OPERATION (" + str(platforms.messageData[1]) + ")", 400
return json.dumps([p.toDictionary() for p in platforms.messageData]), 200
'''
PATH: /im/vib/platforms
ACTION: POST
DESCRIPTION: Send a new platform to be saved in the VIB
database.
ARGUMENT: VibPlatformInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibPlatformInstance [1]
- Integer error code (HTTP)
'''
def post_vib_platforms(self, vibPlatformInstance):
try:
vibPlatformInstance = VibModels.VibPlatformInstance().fromDictionary(json.loads(vibPlatformInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID PLATFORM INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_platforms", vibPlatformInstance), "AS", "IM")
platform = self.__asIr.sendMessage(request)
if type(platform.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING PLATFORM OPERATION (" + str(platform.messageData[1]) + ")", 400
return json.dumps(platform.messageData.toDictionary())
'''
PATH: /im/vib/platforms
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_platforms(self):
return "NOT AVAILABLE", 405
def patch_vib_platforms(self):
return "NOT AVAILABLE", 405
def delete_vib_platforms(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/platforms/{platformId}
ACTION: GET
DESCRIPTION: Retrieve a particular platform from the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibPlatformInstance [1]
- Integer error code (HTTP)
'''
def get_vib_p_platformId(self, platformId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_p_platformId", platformId), "AS", "IM")
platform = self.__asIr.sendMessage(request)
if type(platform.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING PLATFORM OPERATION (" + str(platform.messageData[1]) + ")", 400
return json.dumps(platform.messageData.toDictionary())
'''
PATH: /im/vib/platforms/{platformId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular platform already
saved in the VIB database.
ARGUMENT: VibPlatformInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibPlatformInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_p_platformId(self, platformId, vibPlatformInstance):
try:
vibPlatformInstance = VibModels.VibPlatformInstance().fromDictionary(json.loads(vibPlatformInstance))
if platformId != vibPlatformInstance.platformId:
return "ERROR CODE #0 (AS): INVALID PLATFORM INSTANCE PROVIDED (" + str(platformId) + " != " + str(vibPlatformInstance.platformId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID PLATFORM INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_p_platformId", vibPlatformInstance), "AS", "IM")
platform = self.__asIr.sendMessage(request)
if type(platform.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING PLATFORM INSTANCE OPERATION (" + str(platform.messageData[1]) + ")", 400
return json.dumps(platform.messageData.toDictionary())
'''
PATH: /im/vib/platforms/{platformId}
ACTION: DELETE
DESCRIPTION: Delete a particular platform from the VIB
database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibPlatformInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_p_platformId(self, platformId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_p_platformId", platformId), "AS", "IM")
platform = self.__asIr.sendMessage(request)
if type(platform.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING PLATFORM INSTANCE OPERATION (" + str(platform.messageData[1]) + ")", 400
return json.dumps(platform.messageData.toDictionary())
'''
PATH: /im/vib/platforms/{platformId}
N/A ACTIONS: POST, PUT
'''
def post_vib_p_platformId(self):
return "NOT AVAILABLE", 405
def put_vib_p_platformId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/vnf_managers
ACTION: GET
DESCRIPTION: Retrieve all the available VNF managers of
the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_vnf_managers(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnf_managers", None), "AS", "IM")
managers = self.__asIr.sendMessage(request)
if type(managers.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER OPERATION (" + str(managers.messageData[1]) + ")", 400
return json.dumps([m.toDictionary() for m in managers.messageData]), 200
'''
PATH: /im/vib/vnf_managers
ACTION: POST
DESCRIPTION: Send a new VNF manager to be saved in the
VIB database.
ARGUMENT: VibVnfmInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def post_vib_vnf_managers(self, vibVnfmInstance):
try:
vibVnfmInstance = VibModels.VibVnfmInstance().fromDictionary(json.loads(vibVnfmInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF MANAGER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_vnf_managers", vibVnfmInstance), "AS", "IM")
manager = self.__asIr.sendMessage(request)
if type(manager.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER INSTANCE OPERATION (" + str(manager.messageData[1]) + ")", 400
return json.dumps(manager.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_vnf_managers(self):
return "NOT AVAILABLE", 405
def patch_vib_vnf_managers(self):
return "NOT AVAILABLE", 405
def delete_vib_vnf_managers(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/vnf_managers/{managerId}
ACTION: GET
DESCRIPTION: Retrieve a particular VNF manager from
the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def get_vib_vnfm_managerId(self, managerId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnfm_managerId", managerId), "AS", "IM")
manager = self.__asIr.sendMessage(request)
if type(manager.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER OPERATION (" + str(manager.messageData[1]) + ")", 400
return json.dumps(manager.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers/{managerId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular VNF manager
already saved in the VIB database.
ARGUMENT: VibVnfmInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_vnfm_managerId(self, managerId, vibVnfmInstance):
try:
vibVnfmInstance = VibModels.VibVnfmInstance().fromDictionary(json.loads(vibVnfmInstance))
if managerId != vibVnfmInstance.vnfmId:
return "ERROR CODE #0 (AS): INVALID VNF MANAGER INSTANCE PROVIDED (" + str(managerId) + " != " + str(vibVnfmInstance.vnfmId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF MANAGER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_vnfm_managerId", vibVnfmInstance), "AS", "IM")
manager = self.__asIr.sendMessage(request)
if type(manager.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER INSTANCE OPERATION (" + str(manager.messageData[1]) + ")", 400
return json.dumps(manager.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers/{managerId}
ACTION: DELETE
DESCRIPTION: Delete a particular VNF manager from the
VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_vnfm_managerId(self, managerId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_vnfm_managerId", managerId), "AS", "IM")
manager = self.__asIr.sendMessage(request)
if type(manager.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER OPERATION (" + str(manager.messageData[1]) + ")", 400
return json.dumps(manager.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers/{managerId}
N/A ACTIONS: POST, PUT
'''
def post_vib_vnfm_managerId(self):
return "NOT AVAILABLE", 405
def put_vib_vnfm_managerId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/vnf_managers_drivers
ACTION: GET
DESCRIPTION: Retrieve all the available VNF manager drivers
of the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmDriverInstance [0..N]
- Integer error code (HTTP)
'''
def get_vib_vnf_manager_drivers(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnf_manager_drivers", None), "AS", "IM")
drivers = self.__asIr.sendMessage(request)
if type(drivers.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER DRIVER OPERATION (" + str(drivers.messageData[1]) + ")", 400
return json.dumps([m.toDictionary() for m in drivers.messageData]), 200
'''
PATH: /im/vib/vnf_managers_drivers
ACTION: POST
DESCRIPTION: Send a new VNF manager driver to be saved
in the VIB database.
ARGUMENT: VibVnfmDriverInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibVnfmDriverInstance [1]
- Integer error code (HTTP)
'''
def post_vib_vnf_manager_drivers(self, vibVnfmDriverInstance):
try:
vibVnfmDriverInstance = VibModels.VibVnfmDriverInstance().fromDictionary(json.loads(vibVnfmDriverInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF MANAGER DRIVER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "post_vib_vnf_manager_drivers", vibVnfmDriverInstance), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers_drivers
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_vib_vnf_manager_drivers(self):
return "NOT AVAILABLE", 405
def patch_vib_vnf_manager_drivers(self):
return "NOT AVAILABLE", 405
def delete_vib_vnf_manager_drivers(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/vib/vnf_managers_drivers/{vnfmId}
ACTION: GET
DESCRIPTION: Retrieve a particular VNF manager driver
from the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmDriverInstance [1]
- Integer error code (HTTP)
'''
def get_vib_vnfmd_vnfmId(self, vnfmId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "get_vib_vnfmd_vnfmId", vnfmId), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers_drivers/{vnfmId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular VNF manager
driver already saved in the VIB database.
ARGUMENT: VibVnfmDriverInstance (JSON Dictionary)
RETURN: - 200 (HTTP) + VibVnfmDriverInstance [1]
- Integer error code (HTTP)
'''
def patch_vib_vnfmd_vnfmId(self, vnfmId, vibVnfmDriverInstance):
try:
vibVnfmDriverInstance = VibModels.VibVnfmDriverInstance().fromDictionary(json.loads(vibVnfmDriverInstance))
if vnfmId != vibVnfmDriverInstance.vnfmId:
return "ERROR CODE #0 (AS): INVALID VNF MANAGER DRIVER INSTANCE PROVIDED (" + str(vnfmId) + " != " + str(vibVnfmDriverInstance.vnfmId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF MANAGER DRIVER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "patch_vib_vnfmd_vnfmId", vibVnfmDriverInstance), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers_drivers/{vnfmId}
ACTION: DELETE
DESCRIPTION: Delete a particular VNF manager driver
from the VIB database.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmDriverInstance [1]
- Integer error code (HTTP)
'''
def delete_vib_vnfmd_vnfmId(self, vnfmId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VIB", "delete_vib_vnfmd_vnfmId", vnfmId), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/VIB ERROR DURING VNF MANAGER DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData.toDictionary())
'''
PATH: /im/vib/vnf_managers_drivers/{vnfmId}
N/A ACTIONS: POST, PUT
'''
def post_vib_vnfmd_vnfmId(self):
return "NOT AVAILABLE", 405
def put_vib_vnfmd_vnfmId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/ms/running_subscription
ACTION: GET
DESCRIPTION: Retrieve all the running subscriptions of
the monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + String [0..N]
- Integer error code (HTTP)
'''
def get_ms_running_subscription(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_ms_running_subscription", None), "AS", "IM")
subscriptions = self.__asIr.sendMessage(request)
if type(subscriptions.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION OPERATION (" + str(subscriptions.messageData[1]) + ")", 400
return json.dumps(list(subscriptions.messageData.keys()))
'''
PATH: /im/ms/running_subscription
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_ms_running_subscription(self):
return "NOT AVAILABLE", 405
def put_ms_running_subscription(self):
return "NOT AVAILABLE", 405
def patch_ms_running_subscription(self):
return "NOT AVAILABLE", 405
def delete_ms_running_subscription(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/ms/running_subscription/{subscriptionId}
ACTION: GET
DESCRIPTION: Return "True" if a required subscription
is running, or "False" if it is not.
ARGUMENT: --
RETURN: - 200 (HTTP) + Boolean [1]
- Integer error code (HTTP)
'''
def get_msrs_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_msrs_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData)
'''
PATH: /im/ms/running_subscription/{subscriptionId}
ACTION: POST
DESCRIPTION: Retrieve the required subscription from
the VIB and prepare it to be executed in
the monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def post_msrs_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "post_msrs_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION INSTANCE OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/ms/running_subscription/{subscriptionId}
ACTION: PATCH
DESCRIPTION: Start or stop the execution of the required
running subscription in the monitoring subsystem.
ARGUMENT: Tuple [START: (String, ); STOP: (String, String); 1]
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def patch_msrs_subscriptionId(self, subscriptionId, agentArguments):
if agentArguments is None:
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "patch_msrs_subscriptionId", (subscriptionId, )), "AS", "IM")
else:
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "patch_msrs_subscriptionId", (subscriptionId, json.loads(agentArguments))), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
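The ARGUMENT line above documents a two-shape tuple: START travels as a 1-tuple and STOP as a 2-tuple with JSON-decoded agent arguments. An illustrative standalone sketch of that payload construction (hypothetical helper, not the EMS API):

```python
import json

def build_patch_payload(subscriptionId, agentArguments=None):
    # START: no extra arguments -> (subscriptionId,)
    if agentArguments is None:
        return (subscriptionId,)
    # STOP: JSON-encoded arguments travel as the second element
    return (subscriptionId, json.loads(agentArguments))
```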
'''
PATH: /im/ms/running_subscription/{subscriptionId}
ACTION: DELETE
DESCRIPTION: Delete a particular running subscription from
the monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def delete_msrs_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "delete_msrs_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/ms/running_subscription/{subscriptionId}
N/A ACTIONS: PUT
'''
def put_msrs_subscriptionId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/ms/subscription
ACTION: GET
DESCRIPTION: Retrieve all the available subscriptions in
the monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [0..N]
- Integer error code (HTTP)
'''
def get_ms_subscription(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_ms_subscription", None), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION INSTANCE OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps([s.toDictionary() for s in subscription.messageData])
'''
PATH: /im/ms/subscription
ACTION: POST
DESCRIPTION: Send a new subscription to be saved in the
monitoring subsystem.
ARGUMENT: VnfIndicatorSubscriptionRequest (JSON) [1]
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def post_ms_subscription(self, vnfIndicatorSubscriptionRequest):
try:
vnfIndicatorSubscriptionRequest = AsModels.VnfIndicatorSubscriptionRequest().fromDictionary(json.loads(vnfIndicatorSubscriptionRequest))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF INDICATOR SUBSCRIPTION REQUEST PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "post_ms_subscription", vnfIndicatorSubscriptionRequest), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
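The POST/PATCH handlers all rely on a `fromDictionary`/`toDictionary` round trip on the model classes. A self-contained sketch with a stand-in model (the real `AsModels`/`VibModels` classes carry more fields):

```python
import json

class DemoSubscription:
    # Stand-in for a model class such as VnfIndicatorSubscriptionRequest.
    def fromDictionary(self, d):
        self.id = d["id"]
        self.callbackUri = d.get("callbackUri")
        return self
    def toDictionary(self):
        return {"id": self.id, "callbackUri": self.callbackUri}

def echo_subscription(body):
    # Decode the request body into a model, then serialize it back out.
    model = DemoSubscription().fromDictionary(json.loads(body))
    return json.dumps(model.toDictionary())
```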
'''
PATH: /im/ms/subscription
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_ms_subscription(self):
return "NOT AVAILABLE", 405
def patch_ms_subscription(self):
return "NOT AVAILABLE", 405
def delete_ms_subscription(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/ms/subscription/{subscriptionId}
ACTION: GET
DESCRIPTION: Retrieve a particular subscription from the
monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def get_mss_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_mss_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION INSTANCE OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/ms/subscription/{subscriptionId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular subscription
already saved in the monitoring subsystem.
ARGUMENT: VnfIndicatorSubscription (JSON) [1]
RETURN: - 200 (HTTP) + VnfIndicatorSubscription [1]
- Integer error code (HTTP)
'''
def patch_mss_subscriptionId(self, subscriptionId, vnfIndicatorSubscription):
try:
vnfIndicatorSubscription = AsModels.VnfIndicatorSubscription().fromDictionary(json.loads(vnfIndicatorSubscription))
if subscriptionId != vnfIndicatorSubscription.id:
return "ERROR CODE #0 (AS): INVALID VNF INDICATOR SUBSCRIPTION PROVIDED (" + str(subscriptionId) + " != " + str(vnfIndicatorSubscription.id) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF INDICATOR SUBSCRIPTION INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "patch_mss_subscriptionId", vnfIndicatorSubscription), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION INSTANCE OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/ms/subscription/{subscriptionId}
ACTION: DELETE
DESCRIPTION: Delete a particular subscription from the
monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibSubscriptionInstance [1]
- Integer error code (HTTP)
'''
def delete_mss_subscriptionId(self, subscriptionId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "delete_mss_subscriptionId", subscriptionId), "AS", "IM")
subscription = self.__asIr.sendMessage(request)
if type(subscription.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING SUBSCRIPTION INSTANCE OPERATION (" + str(subscription.messageData[1]) + ")", 400
return json.dumps(subscription.messageData.toDictionary())
'''
PATH: /im/ms/subscription/{subscriptionId}
N/A ACTIONS: POST, PUT
'''
def post_mss_subscriptionId(self):
return "NOT AVAILABLE", 405
def put_mss_subscriptionId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/ms/agent
ACTION: GET
DESCRIPTION: Retrieve all the available monitoring agents
in the monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibMaInstance [0..N]
- Integer error code (HTTP)
'''
def get_ms_agent(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_ms_agent", None), "AS", "IM")
agents = self.__asIr.sendMessage(request)
if type(agents.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING MONITORING AGENT INSTANCE OPERATION (" + str(agents.messageData[1]) + ")", 400
return json.dumps([a.toDictionary() for a in agents.messageData])
'''
PATH: /im/ms/agent
ACTION: POST
DESCRIPTION: Send a new monitoring agent to be saved in
the monitoring subsystem.
ARGUMENT: VibMaInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def post_ms_agent(self, vibMaInstance):
try:
vibMaInstance = VibModels.VibMaInstance().fromDictionary(json.loads(vibMaInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNF MONITORING AGENT PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "post_ms_agent", vibMaInstance), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING MONITORING AGENT OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/ms/agent
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_ms_agent(self):
return "NOT AVAILABLE", 405
def patch_ms_agent(self):
return "NOT AVAILABLE", 405
def delete_ms_agent(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/ms/agent/{agentId}
ACTION: GET
DESCRIPTION: Retrieve a particular monitoring agent from
the monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def get_msa_agentId(self, agentId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "get_msa_agentId", agentId), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING MONITORING AGENT INSTANCE OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/ms/agent/{agentId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular monitoring agent
already saved in the monitoring subsystem.
ARGUMENT: VibMaInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def patch_msa_agentId(self, agentId, vibMaInstance):
try:
vibMaInstance = VibModels.VibMaInstance().fromDictionary(json.loads(vibMaInstance))
if agentId != vibMaInstance.maId:
return "ERROR CODE #0 (AS): INVALID MONITORING AGENT INSTANCE PROVIDED (" + str(agentId) + " != " + str(vibMaInstance.maId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID MONITORING AGENT INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "patch_msa_agentId", vibMaInstance), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING MONITORING AGENT INSTANCE OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/ms/agent/{agentId}
ACTION: DELETE
DESCRIPTION: Delete a particular monitoring agent from the
monitoring subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibMaInstance [1]
- Integer error code (HTTP)
'''
def delete_msa_agentId(self, agentId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("MS", "delete_msa_agentId", agentId), "AS", "IM")
agent = self.__asIr.sendMessage(request)
if type(agent.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/MS ERROR DURING MONITORING AGENT INSTANCE OPERATION (" + str(agent.messageData[1]) + ")", 400
return json.dumps(agent.messageData.toDictionary())
'''
PATH: /im/ms/agent/{agentId}
N/A ACTIONS: POST, PUT
'''
def post_msa_agentId(self):
return "NOT AVAILABLE", 405
def put_msa_agentId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/authenticator
ACTION: GET
DESCRIPTION: Retrieve all the available authenticators of
the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + String [0..N]
- Integer error code (HTTP)
'''
def get_as_authenticator(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_auth", None), "AS", "IM")
authenticators = self.__asIr.sendMessage(request)
if type(authenticators.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING AUTHENTICATOR INSTANCE OPERATION (" + str(authenticators.messageData[1]) + ")", 400
return json.dumps(authenticators.messageData)
'''
PATH: /im/as/authenticator
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_as_authenticator(self):
return "NOT AVAILABLE", 405
def put_as_authenticator(self):
return "NOT AVAILABLE", 405
def patch_as_authenticator(self):
return "NOT AVAILABLE", 405
def delete_as_authenticator(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/authenticator/{authenticatorId}
ACTION: GET
DESCRIPTION: Return "True" if a required authenticator is
available, or "False" if it is not.
ARGUMENT: --
RETURN: - 200 (HTTP) + Boolean [1]
- Integer error code (HTTP)
'''
def get_as_a_authenticatorId(self, authenticatorId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_a_authId", authenticatorId), "AS", "IM")
authenticators = self.__asIr.sendMessage(request)
if type(authenticators.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING AUTHENTICATOR INSTANCE OPERATION (" + str(authenticators.messageData[1]) + ")", 400
return json.dumps(authenticators.messageData)
'''
PATH: /im/as/authenticator/{authenticatorId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_as_a_authenticatorId(self):
return "NOT AVAILABLE", 405
def put_as_a_authenticatorId(self):
return "NOT AVAILABLE", 405
def patch_as_a_authenticatorId(self):
return "NOT AVAILABLE", 405
def delete_as_a_authenticatorId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/running_authenticator
ACTION: GET
DESCRIPTION: Retrieve the currently running authenticator
in the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + String [1]
- Integer error code (HTTP)
'''
def get_as_running_authenticator(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_running_auth", None), "AS", "IM")
authenticators = self.__asIr.sendMessage(request)
if type(authenticators.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING AUTHENTICATOR INSTANCE OPERATION (" + str(authenticators.messageData[1]) + ")", 400
return json.dumps(authenticators.messageData)
'''
PATH: /im/as/running_authenticator
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_as_running_authenticator(self):
return "NOT AVAILABLE", 405
def put_as_running_authenticator(self):
return "NOT AVAILABLE", 405
def patch_as_running_authenticator(self):
return "NOT AVAILABLE", 405
def delete_as_running_authenticator(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/running_authenticator/{authenticatorId}
ACTION: GET
DESCRIPTION: Return "True" if a required authenticator is
running, or "False" if it is not.
ARGUMENT: --
RETURN: - 200 (HTTP) + Boolean [1]
- Integer error code (HTTP)
'''
def get_as_ra_authenticatorId(self, authenticatorId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_ra_authId", authenticatorId), "AS", "IM")
authenticators = self.__asIr.sendMessage(request)
if type(authenticators.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING AUTHENTICATOR INSTANCE OPERATION (" + str(authenticators.messageData[1]) + ")", 400
return json.dumps(authenticators.messageData)
'''
PATH: /im/as/running_authenticator/{authenticatorId}
ACTION: POST
DESCRIPTION: Retrieve the required authenticator and execute
it in the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + String [1]
- Integer error code (HTTP)
'''
def post_as_ra_authenticatorId(self, authenticatorId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "post_as_ra_authId", authenticatorId), "AS", "IM")
authenticators = self.__asIr.sendMessage(request)
if type(authenticators.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING AUTHENTICATOR INSTANCE OPERATION (" + str(authenticators.messageData[1]) + ")", 400
return json.dumps(authenticators.messageData)
'''
PATH: /im/as/running_authenticator/{authenticatorId}
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_as_ra_authenticatorId(self):
return "NOT AVAILABLE", 405
def patch_as_ra_authenticatorId(self):
return "NOT AVAILABLE", 405
def delete_as_ra_authenticatorId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/user
ACTION: GET
DESCRIPTION: Retrieve all the available users in the
access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibUserInstance [0..N]
- Integer error code (HTTP)
'''
def get_as_user(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_user", None), "AS", "IM")
users = self.__asIr.sendMessage(request)
if type(users.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING USER INSTANCE OPERATION (" + str(users.messageData[1]) + ")", 400
return json.dumps([u.toDictionary() for u in users.messageData])
'''
PATH: /im/as/user
ACTION: POST
DESCRIPTION: Send a new user to be saved in the access
subsystem.
ARGUMENT: VibUserInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def post_as_user(self, vibUserInstance):
try:
vibUserInstance = VibModels.VibUserInstance().fromDictionary(json.loads(vibUserInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID USER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "post_as_user", vibUserInstance), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING USER INSTANCE OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary())
'''
PATH: /im/as/user
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_as_user(self):
return "NOT AVAILABLE", 405
def patch_as_user(self):
return "NOT AVAILABLE", 405
def delete_as_user(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/user/{userId}
ACTION: GET
DESCRIPTION: Retrieve a particular user from the access
subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def get_as_u_userId(self, userId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_u_userId", userId), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING USER INSTANCE OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary())
'''
PATH: /im/as/user/{userId}
ACTION: PATCH
DESCRIPTION: Update a particular user from the access
subsystem.
ARGUMENT: VibUserInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def patch_as_u_userId(self, userId, vibUserInstance):
try:
vibUserInstance = VibModels.VibUserInstance().fromDictionary(json.loads(vibUserInstance))
if userId != vibUserInstance.userId:
return "ERROR CODE #0 (AS): INVALID USER INSTANCE PROVIDED (" + str(userId) + " != " + str(vibUserInstance.userId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID USER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "patch_as_u_userId", vibUserInstance), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING USER INSTANCE OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary())
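Every PATCH handler in this class enforces the same invariant: the identifier in the URL path must match the identifier inside the JSON body. A minimal standalone illustration (helper name and key are assumptions, not part of this module):

```python
import json

def check_path_id(pathId, jsonBody, key):
    # Reject bodies whose embedded identifier disagrees with the path segment.
    data = json.loads(jsonBody)
    if pathId != data.get(key):
        return "ID MISMATCH (" + str(pathId) + " != " + str(data.get(key)) + ")", 400
    return data, 200
```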
'''
PATH: /im/as/user/{userId}
ACTION: DELETE
DESCRIPTION: Delete a particular user from the access
subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibUserInstance [1]
- Integer error code (HTTP)
'''
def delete_as_u_userId(self, userId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "delete_as_u_userId", userId), "AS", "IM")
user = self.__asIr.sendMessage(request)
if type(user.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING USER INSTANCE OPERATION (" + str(user.messageData[1]) + ")", 400
return json.dumps(user.messageData.toDictionary())
'''
PATH: /im/as/user/{userId}
N/A ACTIONS: POST, PUT
'''
def post_as_u_userId(self):
return "NOT AVAILABLE", 405
def put_as_u_userId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/credential
ACTION: GET
DESCRIPTION: Retrieve all the available credentials in the
access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [0..N]
- Integer error code (HTTP)
'''
def get_as_credential(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_credential", None), "AS", "IM")
credentials = self.__asIr.sendMessage(request)
if type(credentials.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING CREDENTIAL INSTANCE OPERATION (" + str(credentials.messageData[1]) + ")", 400
return json.dumps([c.toDictionary() for c in credentials.messageData])
'''
PATH: /im/as/credential
ACTION: POST
DESCRIPTION: Send a new credential to be saved in the access
subsystem.
ARGUMENT: VibCredentialInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibCredentialInstance [1]
- Integer error code (HTTP)
'''
def post_as_credential(self, vibCredentialInstance):
try:
vibCredentialInstance = VibModels.VibCredentialInstance().fromDictionary(json.loads(vibCredentialInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID CREDENTIAL INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "post_as_credential", vibCredentialInstance), "AS", "IM")
credential = self.__asIr.sendMessage(request)
if type(credential.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING CREDENTIAL OPERATION (" + str(credential.messageData[1]) + ")", 400
return json.dumps(credential.messageData.toDictionary())
'''
PATH: /im/as/credential
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_as_credential(self):
return "NOT AVAILABLE", 405
def patch_as_credential(self):
return "NOT AVAILABLE", 405
def delete_as_credential(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/credential/{userId}/{vnfId}
ACTION: GET
DESCRIPTION: Retrieve a particular credential from the access
subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [1]
- Integer error code (HTTP)
'''
def get_as_c_credentialId(self, userId, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_c_credentialId", (userId, vnfId)), "AS", "IM")
credential = self.__asIr.sendMessage(request)
if type(credential.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING CREDENTIAL INSTANCE OPERATION (" + str(credential.messageData[1]) + ")", 400
return json.dumps(credential.messageData.toDictionary())
'''
PATH: /im/as/credential/{userId}/{vnfId}
ACTION: DELETE
DESCRIPTION: Delete a particular credential from the access
subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [1]
- Integer error code (HTTP)
'''
def delete_as_c_credentialId(self, userId, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "delete_as_c_credentialId", (userId, vnfId)), "AS", "IM")
credential = self.__asIr.sendMessage(request)
if type(credential.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING CREDENTIAL INSTANCE OPERATION (" + str(credential.messageData[1]) + ")", 400
return json.dumps(credential.messageData.toDictionary())
'''
PATH: /im/as/credential/{userId}/{vnfId}
N/A ACTIONS: POST, PUT, PATCH
'''
def post_as_c_credentialId(self):
return "NOT AVAILABLE", 405
def put_as_c_credentialId(self):
return "NOT AVAILABLE", 405
def patch_as_c_credentialId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/credential/user/{userId}
ACTION: GET
DESCRIPTION: Retrieve the credentials of a particular
user from the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [0..N]
- Integer error code (HTTP)
'''
def get_as_c_userId(self, userId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_c_userId", userId), "AS", "IM")
credentials = self.__asIr.sendMessage(request)
if type(credentials.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING CREDENTIAL INSTANCE OPERATION (" + str(credentials.messageData[1]) + ")", 400
return json.dumps([c.toDictionary() for c in credentials.messageData])
'''
PATH: /im/as/credential/user/{userId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_as_c_userId(self):
return "NOT AVAILABLE", 405
def put_as_c_userId(self):
return "NOT AVAILABLE", 405
def patch_as_c_userId(self):
return "NOT AVAILABLE", 405
def delete_as_c_userId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/credential/vnf/{vnfId}
ACTION: GET
DESCRIPTION: Retrieve the credentials of a particular
VNF from the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibCredentialInstance [0..N]
- Integer error code (HTTP)
'''
def get_as_c_vnfId(self, vnfId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_c_vnfId", vnfId), "AS", "IM")
credentials = self.__asIr.sendMessage(request)
if type(credentials.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING CREDENTIAL INSTANCE OPERATION (" + str(credentials.messageData[1]) + ")", 400
return json.dumps([c.toDictionary() for c in credentials.messageData])
'''
PATH: /im/as/credential/vnf/{vnfId}
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_as_c_vnfId(self):
return "NOT AVAILABLE", 405
def put_as_c_vnfId(self):
return "NOT AVAILABLE", 405
def patch_as_c_vnfId(self):
return "NOT AVAILABLE", 405
def delete_as_c_vnfId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/vnfm/running_vnfm
ACTION: GET
DESCRIPTION: Retrieve the currently running vnfm driver in
the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + String [1]
- Integer error code (HTTP)
'''
def get_as_vnfm_running_vnfm(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_vnfm_running_vnfm", None), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData)
'''
PATH: /im/as/vnfm/running_vnfm
N/A ACTIONS: POST, PUT, PATCH, DELETE
'''
def post_as_vnfm_running_vnfm(self):
return "NOT AVAILABLE", 405
def put_as_vnfm_running_vnfm(self):
return "NOT AVAILABLE", 405
def patch_as_vnfm_running_vnfm(self):
return "NOT AVAILABLE", 405
def delete_as_vnfm_running_vnfm(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/vnfm/running_vnfm/{vnfmId}
ACTION: GET
DESCRIPTION: Return "True" if a required VNF manager is
running, or "False" if it is not.
ARGUMENT: --
RETURN: - 200 (HTTP) + Boolean [1]
- Integer error code (HTTP)
'''
def get_as_vrv_vnfmId(self, vnfmId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_vrv_vnfmId", vnfmId), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData)
'''
PATH: /im/as/vnfm/running_vnfm/{vnfmId}
ACTION: POST
DESCRIPTION: Retrieve the required VNF manager from the
VIB and prepare it to be executed in the
access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def post_as_vrv_vnfmId(self, vnfmId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "post_as_vrv_vnfmId", vnfmId), "AS", "IM")
driver = self.__asIr.sendMessage(request)
if type(driver.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
return json.dumps(driver.messageData.toDictionary())
'''
PATH: /im/as/vnfm/running_vnfm/{vnfmId}
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_as_vrv_vnfmId(self):
return "NOT AVAILABLE", 405
def patch_as_vrv_vnfmId(self):
return "NOT AVAILABLE", 405
def delete_as_vrv_vnfmId(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/vnfm/instance
ACTION: GET
DESCRIPTION: Retrieve all the available VNF managers in
the access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [0..N]
- Integer error code (HTTP)
'''
def get_as_vnfm_instance(self):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_vnfm_instance", None), "AS", "IM")
vnfms = self.__asIr.sendMessage(request)
if type(vnfms.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(vnfms.messageData[1]) + ")", 400
return json.dumps([v.toDictionary() for v in vnfms.messageData])
'''
PATH: /im/as/vnfm/instance
ACTION: POST
DESCRIPTION: Send a new VNF manager to be saved in the
access subsystem.
ARGUMENT: VibVnfmInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def post_as_vnfm_instance(self, vibVnfmInstance):
try:
vibVnfmInstance = VibModels.VibVnfmInstance().fromDictionary(json.loads(vibVnfmInstance))
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNFM DRIVER PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "post_as_vnfm_instance", vibVnfmInstance), "AS", "IM")
vnfm = self.__asIr.sendMessage(request)
if type(vnfm.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER OPERATION (" + str(vnfm.messageData[1]) + ")", 400
return json.dumps(vnfm.messageData.toDictionary())
'''
PATH: /im/as/vnfm/instance
N/A ACTIONS: PUT, PATCH, DELETE
'''
def put_as_vnfm_instance(self):
return "NOT AVAILABLE", 405
def patch_as_vnfm_instance(self):
return "NOT AVAILABLE", 405
def delete_as_vnfm_instance(self):
return "NOT AVAILABLE", 405
'''
PATH: /im/as/vnfm/instance/{vnfmId}
ACTION: GET
DESCRIPTION: Retrieve a particular VNF manager from the
access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def get_as_vi_vnfmId(self, vnfmId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_vi_vnfmId", vnfmId), "AS", "IM")
vnfm = self.__asIr.sendMessage(request)
if type(vnfm.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(vnfm.messageData[1]) + ")", 400
return json.dumps(vnfm.messageData.toDictionary())
'''
PATH: /im/as/vnfm/instance/{vnfmId}
ACTION: PATCH
DESCRIPTION: Send updates to a particular VNF manager
already saved in the access subsystem.
ARGUMENT: VibVnfmInstance (JSON) [1]
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def patch_as_vi_vnfmId(self, vnfmId, vibVnfmInstance):
try:
vibVnfmInstance = VibModels.VibVnfmInstance().fromDictionary(json.loads(vibVnfmInstance))
if vnfmId != vibVnfmInstance.vnfmId:
return "ERROR CODE #0 (AS): INVALID VNFM DRIVER INSTANCE PROVIDED (" + str(vnfmId) + " != " + str(vibVnfmInstance.vnfmId) + ")", 400
except Exception as e:
return "ERROR CODE #0 (AS): INVALID VNFM DRIVER INSTANCE PROVIDED (" + str(e) + ")", 400
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "patch_as_vi_vnfmId", vibVnfmInstance), "AS", "IM")
vnfm = self.__asIr.sendMessage(request)
if type(vnfm.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(vnfm.messageData[1]) + ")", 400
return json.dumps(vnfm.messageData.toDictionary())
'''
PATH: /im/as/vnfm/instance/{vnfmId}
ACTION: DELETE
DESCRIPTION: Delete a particular VNF manager from the
access subsystem.
ARGUMENT: --
RETURN: - 200 (HTTP) + VibVnfmInstance [1]
- Integer error code (HTTP)
'''
def delete_as_vi_vnfmId(self, vnfmId):
request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "delete_as_vi_vnfmId", vnfmId), "AS", "IM")
vnfm = self.__asIr.sendMessage(request)
if type(vnfm.messageData) == tuple:
return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(vnfm.messageData[1]) + ")", 400
return json.dumps(vnfm.messageData.toDictionary())
'''
PATH: /im/as/vnfm/instance/{vnfmId}
N/A ACTIONS: POST, PUT
'''
def post_as_vi_vnfmId(self):
return "NOT AVAILABLE", 405
def put_as_vi_vnfmId(self):
return "NOT AVAILABLE", 405

    '''
    PATH:        /im/as/vnfm/driver
    ACTION:      GET
    DESCRIPTION: Retrieve all the available VNF manager drivers
                 in the access subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfmDriverInstance [0..N]
                 - Integer error code (HTTP)
    '''
    def get_as_vnfm_driver(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_vnfm_driver", None), "AS", "IM")
        drivers = self.__asIr.sendMessage(request)
        if isinstance(drivers.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(drivers.messageData[1]) + ")", 400
        return json.dumps([d.toDictionary() for d in drivers.messageData])

    '''
    PATH:        /im/as/vnfm/driver
    ACTION:      POST
    DESCRIPTION: Send a new VNF manager driver to be saved
                 in the access subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfmDriverInstance [1]
                 - Integer error code (HTTP)
    '''
    def post_as_vnfm_driver(self, vibVnfmDriverInstance):
        try:
            vibVnfmDriverInstance = VibModels.VibVnfmDriverInstance().fromDictionary(json.loads(vibVnfmDriverInstance))
        except Exception:
            return "ERROR CODE #0 (AS): INVALID VNFM DRIVER PROVIDED", 400
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "post_as_vnfm_driver", vibVnfmDriverInstance), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData.toDictionary())

    '''
    PATH:        /im/as/vnfm/driver
    N/A ACTIONS: PUT, PATCH, DELETE
    '''
    def put_as_vnfm_driver(self):
        return "NOT AVAILABLE", 405

    def patch_as_vnfm_driver(self):
        return "NOT AVAILABLE", 405

    def delete_as_vnfm_driver(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/as/vnfm/driver/{vnfmId}
    ACTION:      GET
    DESCRIPTION: Retrieve a particular VNF manager driver
                 from the access subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfmDriverInstance [1]
                 - Integer error code (HTTP)
    '''
    def get_as_vd_vnfmId(self, vnfmId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "get_as_vd_vnfmId", vnfmId), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData.toDictionary())

    '''
    PATH:        /im/as/vnfm/driver/{vnfmId}
    ACTION:      PATCH
    DESCRIPTION: Send updates to a particular VNF manager driver
                 already saved in the access subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfmDriverInstance [1]
                 - Integer error code (HTTP)
    '''
    def patch_as_vd_vnfmId(self, vnfmId, vibVnfmDriverInstance):
        try:
            vibVnfmDriverInstance = VibModels.VibVnfmDriverInstance().fromDictionary(json.loads(vibVnfmDriverInstance))
            if vnfmId != vibVnfmDriverInstance.vnfmId:
                return "ERROR CODE #0 (AS): INVALID VNFM DRIVER INSTANCE PROVIDED", 400
        except Exception:
            return "ERROR CODE #0 (AS): INVALID VNFM DRIVER INSTANCE PROVIDED", 400
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "patch_as_vd_vnfmId", vibVnfmDriverInstance), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData.toDictionary())

    '''
    PATH:        /im/as/vnfm/driver/{vnfmId}
    ACTION:      DELETE
    DESCRIPTION: Delete a particular VNF manager driver from the
                 access subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfmDriverInstance [1]
                 - Integer error code (HTTP)
    '''
    def delete_as_vd_vnfmId(self, vnfmId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("AS", "delete_as_vd_vnfmId", vnfmId), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/AS ERROR DURING VNFM DRIVER INSTANCE OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData.toDictionary())

    '''
    PATH:        /im/as/vnfm/driver/{vnfmId}
    N/A ACTIONS: POST, PUT
    '''
    def post_as_vd_vnfmId(self):
        return "NOT AVAILABLE", 405

    def put_as_vd_vnfmId(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/vnf_instance
    ACTION:      GET
    DESCRIPTION: Retrieve all the available VNF instances
                 in the VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfInstance [0..N]
                 - Integer error code (HTTP)
    '''
    def get_vs_vnf_instance(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_vnf_instance", None), "AS", "IM")
        instance = self.__asIr.sendMessage(request)
        if isinstance(instance.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
        return json.dumps([i.toDictionary() for i in instance.messageData])

    '''
    PATH:        /im/vs/vnf_instance
    ACTION:      POST
    DESCRIPTION: Send a new VNF instance to be saved in
                 the VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfInstance [1]
                 - Integer error code (HTTP)
    '''
    def post_vs_vnf_instance(self, vibVnfInstance):
        try:
            vibVnfInstance = VibModels.VibVnfInstance().fromDictionary(json.loads(vibVnfInstance))
        except Exception:
            return "ERROR CODE #0 (AS): INVALID VNF INSTANCE PROVIDED", 400
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_vnf_instance", vibVnfInstance), "AS", "IM")
        instance = self.__asIr.sendMessage(request)
        if isinstance(instance.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
        return json.dumps(instance.messageData.toDictionary())

    '''
    PATH:        /im/vs/vnf_instance
    N/A ACTIONS: PUT, PATCH, DELETE
    '''
    def put_vs_vnf_instance(self):
        return "NOT AVAILABLE", 405

    def patch_vs_vnf_instance(self):
        return "NOT AVAILABLE", 405

    def delete_vs_vnf_instance(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/vnf_instance/{instanceId}
    ACTION:      GET
    DESCRIPTION: Retrieve a particular VNF instance
                 from the VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfInstance [1]
                 - Integer error code (HTTP)
    '''
    def get_vs_vnfi_instanceId(self, instanceId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_vnfi_instanceId", instanceId), "AS", "IM")
        instance = self.__asIr.sendMessage(request)
        if isinstance(instance.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
        return json.dumps(instance.messageData.toDictionary())

    '''
    PATH:        /im/vs/vnf_instance/{instanceId}
    ACTION:      PATCH
    DESCRIPTION: Send updates to a particular VNF
                 instance already saved in the VNF
                 subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfInstance [1]
                 - Integer error code (HTTP)
    '''
    def patch_vs_vnfi_instanceId(self, instanceId, vibVnfInstance):
        try:
            vibVnfInstance = VibModels.VibVnfInstance().fromDictionary(json.loads(vibVnfInstance))
            if instanceId != vibVnfInstance.vnfId:
                return "ERROR CODE #0 (AS): INVALID VNF INSTANCE PROVIDED", 400
        except Exception:
            return "ERROR CODE #0 (AS): INVALID VNF INSTANCE PROVIDED", 400
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "patch_vs_vnfi_instanceId", vibVnfInstance), "AS", "IM")
        instance = self.__asIr.sendMessage(request)
        if isinstance(instance.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
        return json.dumps(instance.messageData.toDictionary())

    '''
    PATH:        /im/vs/vnf_instance/{instanceId}
    ACTION:      DELETE
    DESCRIPTION: Delete a particular VNF instance
                 from the VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibVnfInstance [1]
                 - Integer error code (HTTP)
    '''
    def delete_vs_vnfi_instanceId(self, instanceId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "delete_vs_vnfi_instanceId", instanceId), "AS", "IM")
        instance = self.__asIr.sendMessage(request)
        if isinstance(instance.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF INSTANCE OPERATION (" + str(instance.messageData[1]) + ")", 400
        return json.dumps(instance.messageData.toDictionary())

    '''
    PATH:        /im/vs/vnf_instance/{instanceId}
    N/A ACTIONS: POST, PUT
    '''
    def post_vs_vnfi_instanceId(self):
        return "NOT AVAILABLE", 405

    def put_vs_vnfi_instanceId(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/running_driver
    ACTION:      GET
    DESCRIPTION: Retrieve the currently running VNF platform driver
                 in the access subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + String [1]
                 - Integer error code (HTTP)
    '''
    def get_vs_running_driver(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_running_driver", None), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData)

    '''
    PATH:        /im/vs/running_driver
    N/A ACTIONS: POST, PUT, PATCH, DELETE
    '''
    def post_vs_running_driver(self):
        return "NOT AVAILABLE", 405

    def put_vs_running_driver(self):
        return "NOT AVAILABLE", 405

    def patch_vs_running_driver(self):
        return "NOT AVAILABLE", 405

    def delete_vs_running_driver(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/running_driver/{platformId}
    ACTION:      GET
    DESCRIPTION: Return "True" if a required VNF platform driver is
                 running, or "False" if it is not.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + Boolean [1]
                 - Integer error code (HTTP)
    '''
    def get_vs_rs_platformId(self, platformId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rs_platformId", platformId), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData)

    '''
    PATH:        /im/vs/running_driver/{platformId}
    ACTION:      POST
    DESCRIPTION: Retrieve the required VNF platform from the VIB and
                 prepare it to be executed in the VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + Boolean [1]
                 - Integer error code (HTTP)
    '''
    def post_vs_rs_platformId(self, platformId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_rs_platformId", platformId), "AS", "IM")
        driver = self.__asIr.sendMessage(request)
        if isinstance(driver.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF DRIVER OPERATION (" + str(driver.messageData[1]) + ")", 400
        return json.dumps(driver.messageData.toDictionary())

    '''
    PATH:        /im/vs/running_driver/{platformId}
    N/A ACTIONS: PUT, PATCH, DELETE
    '''
    def put_vs_rs_platformId(self):
        return "NOT AVAILABLE", 405

    def patch_vs_rs_platformId(self):
        return "NOT AVAILABLE", 405

    def delete_vs_rs_platformId(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/driver
    ACTION:      GET
    DESCRIPTION: Retrieve all the available VNF platforms in the
                 VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibPlatformInstance [0..N]
                 - Integer error code (HTTP)
    '''
    def get_vs_driver(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_driver", None), "AS", "IM")
        platforms = self.__asIr.sendMessage(request)
        if isinstance(platforms.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF PLATFORM OPERATION (" + str(platforms.messageData[1]) + ")", 400
        return json.dumps([p.toDictionary() for p in platforms.messageData])

    '''
    PATH:        /im/vs/driver
    ACTION:      POST
    DESCRIPTION: Send a new VNF platform to be saved in the VNF
                 subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibPlatformInstance [1]
                 - Integer error code (HTTP)
    '''
    def post_vs_driver(self, vibPlatformInstance):
        try:
            vibPlatformInstance = VibModels.VibPlatformInstance().fromDictionary(json.loads(vibPlatformInstance))
        except Exception:
            return "ERROR CODE #0 (AS): INVALID VNF PLATFORM PROVIDED", 400
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "post_vs_driver", vibPlatformInstance), "AS", "IM")
        platform = self.__asIr.sendMessage(request)
        if isinstance(platform.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF PLATFORM OPERATION (" + str(platform.messageData[1]) + ")", 400
        return json.dumps(platform.messageData.toDictionary())

    '''
    PATH:        /im/vs/driver
    N/A ACTIONS: PUT, PATCH, DELETE
    '''
    def put_vs_driver(self):
        return "NOT AVAILABLE", 405

    def patch_vs_driver(self):
        return "NOT AVAILABLE", 405

    def delete_vs_driver(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/driver/{platformId}
    ACTION:      GET
    DESCRIPTION: Retrieve a particular VNF platform from the VNF
                 subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibPlatformInstance [1]
                 - Integer error code (HTTP)
    '''
    def get_vsd_platformId(self, platformId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vsd_platformId", platformId), "AS", "IM")
        platform = self.__asIr.sendMessage(request)
        if isinstance(platform.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF PLATFORM OPERATION (" + str(platform.messageData[1]) + ")", 400
        return json.dumps(platform.messageData.toDictionary())

    '''
    PATH:        /im/vs/driver/{platformId}
    ACTION:      PATCH
    DESCRIPTION: Send updates to a particular VNF platform already
                 saved in the VNF subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibPlatformInstance [1]
                 - Integer error code (HTTP)
    '''
    def patch_vsd_platformId(self, platformId, vibPlatformInstance):
        try:
            vibPlatformInstance = VibModels.VibPlatformInstance().fromDictionary(json.loads(vibPlatformInstance))
            if platformId != vibPlatformInstance.platformId:
                return "ERROR CODE #0 (AS): INVALID VNF PLATFORM PROVIDED", 400
        except Exception:
            return "ERROR CODE #0 (AS): INVALID VNF PLATFORM PROVIDED", 400
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "patch_vsd_platformId", vibPlatformInstance), "AS", "IM")
        platform = self.__asIr.sendMessage(request)
        if isinstance(platform.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF PLATFORM OPERATION (" + str(platform.messageData[1]) + ")", 400
        return json.dumps(platform.messageData.toDictionary())

    '''
    PATH:        /im/vs/driver/{platformId}
    ACTION:      DELETE
    DESCRIPTION: Delete a particular VNF platform from the VNF
                 subsystem.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + VibPlatformInstance [1]
                 - Integer error code (HTTP)
    '''
    def delete_vsd_platformId(self, platformId):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "delete_vsd_platformId", platformId), "AS", "IM")
        platform = self.__asIr.sendMessage(request)
        if isinstance(platform.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF PLATFORM OPERATION (" + str(platform.messageData[1]) + ")", 400
        return json.dumps(platform.messageData.toDictionary())

    '''
    PATH:        /im/vs/driver/{platformId}
    N/A ACTIONS: POST, PUT
    '''
    def post_vsd_platformId(self):
        return "NOT AVAILABLE", 405

    def put_vsd_platformId(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/running_driver/operations
    ACTION:      GET
    DESCRIPTION: Retrieve the available operations of the currently
                 running platform driver.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + String [0..N]
                 - Integer error code (HTTP)
    '''
    def get_vs_rd_operations(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rd_operations", None), "AS", "IM")
        operations = self.__asIr.sendMessage(request)
        if isinstance(operations.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF RUNNING PLATFORM OPERATION (" + str(operations.messageData[1]) + ")", 400
        return json.dumps(operations.messageData)

    '''
    PATH:        /im/vs/running_driver/operations
    N/A ACTIONS: POST, PUT, PATCH, DELETE
    '''
    def post_vs_rd_operations(self):
        return "NOT AVAILABLE", 405

    def put_vs_rd_operations(self):
        return "NOT AVAILABLE", 405

    def patch_vs_rd_operations(self):
        return "NOT AVAILABLE", 405

    def delete_vs_rd_operations(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/running_driver/operations/monitoring
    ACTION:      GET
    DESCRIPTION: Retrieve the available monitoring operations of the
                 currently running platform driver.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + String [0..N]
                 - Integer error code (HTTP)
    '''
    def get_vs_rdo_monitoring(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_monitoring", None), "AS", "IM")
        operations = self.__asIr.sendMessage(request)
        if isinstance(operations.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF RUNNING PLATFORM OPERATION (" + str(operations.messageData[1]) + ")", 400
        return json.dumps(operations.messageData)

    '''
    PATH:        /im/vs/running_driver/operations/monitoring
    N/A ACTIONS: POST, PUT, PATCH, DELETE
    '''
    def post_vs_rdo_monitoring(self):
        return "NOT AVAILABLE", 405

    def put_vs_rdo_monitoring(self):
        return "NOT AVAILABLE", 405

    def patch_vs_rdo_monitoring(self):
        return "NOT AVAILABLE", 405

    def delete_vs_rdo_monitoring(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/running_driver/operations/modification
    ACTION:      GET
    DESCRIPTION: Retrieve the available modification operations of the
                 currently running platform driver.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + String [0..N]
                 - Integer error code (HTTP)
    '''
    def get_vs_rdo_modification(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_modification", None), "AS", "IM")
        operations = self.__asIr.sendMessage(request)
        if isinstance(operations.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF RUNNING PLATFORM OPERATION (" + str(operations.messageData[1]) + ")", 400
        return json.dumps(operations.messageData)

    '''
    PATH:        /im/vs/running_driver/operations/modification
    N/A ACTIONS: POST, PUT, PATCH, DELETE
    '''
    def post_vs_rdo_modification(self):
        return "NOT AVAILABLE", 405

    def put_vs_rdo_modification(self):
        return "NOT AVAILABLE", 405

    def patch_vs_rdo_modification(self):
        return "NOT AVAILABLE", 405

    def delete_vs_rdo_modification(self):
        return "NOT AVAILABLE", 405

    '''
    PATH:        /im/vs/running_driver/operations/other
    ACTION:      GET
    DESCRIPTION: Retrieve the available other operations of the
                 currently running platform driver.
    ARGUMENT:    --
    RETURN:      - 200 (HTTP) + String [0..N]
                 - Integer error code (HTTP)
    '''
    def get_vs_rdo_other(self):
        request = IrModels.IrMessage().fromData(IrModels.IrManagement().fromData("VS", "get_vs_rdo_other", None), "AS", "IM")
        operations = self.__asIr.sendMessage(request)
        if isinstance(operations.messageData, tuple):
            return "ERROR CODE #3 (AS): IM/VS ERROR DURING VNF RUNNING PLATFORM OPERATION (" + str(operations.messageData[1]) + ")", 400
        return json.dumps(operations.messageData)

    '''
    PATH:        /im/vs/running_driver/operations/other
    N/A ACTIONS: POST, PUT, PATCH, DELETE
    '''
    def post_vs_rdo_other(self):
        return "NOT AVAILABLE", 405

    def put_vs_rdo_other(self):
        return "NOT AVAILABLE", 405

    def patch_vs_rdo_other(self):
        return "NOT AVAILABLE", 405

    def delete_vs_rdo_other(self):
        return "NOT AVAILABLE", 405

# File: error_code/__init__.py (repo: airlovelq/flask-frame, license: Apache-2.0)

from .error_define import *

# File: bookworm/__init__.py (repo: zstanecic/bookworm, license: MIT)

# coding: utf-8
from . import boot

# File: oas/python-client/jatdb_client/api/__init__.py (repo: NGenetzky/jatdb, license: MIT)

from __future__ import absolute_import

# flake8: noqa

# import apis into api package
from jatdb_client.api.content_api import ContentApi
from jatdb_client.api.default_api import DefaultApi
from jatdb_client.api.trello_api import TrelloApi

# File: tests/functional/test_utilities_batch.py
# (repo: nayaverdier/aws-lambda-powertools-python, licenses: Apache-2.0, MIT-0)

import json
from random import randint
from typing import Callable, Dict, Optional
from unittest.mock import patch
import pytest
from botocore.config import Config
from botocore.stub import Stubber

from aws_lambda_powertools.utilities.batch import (
    BatchProcessor,
    EventType,
    PartialSQSProcessor,
    batch_processor,
    sqs_batch_processor,
)
from aws_lambda_powertools.utilities.batch.exceptions import BatchProcessingError, SQSBatchProcessingError
from aws_lambda_powertools.utilities.data_classes.dynamo_db_stream_event import DynamoDBRecord
from aws_lambda_powertools.utilities.data_classes.kinesis_stream_event import KinesisStreamRecord
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
from aws_lambda_powertools.utilities.parser import BaseModel, validator
from aws_lambda_powertools.utilities.parser.models import DynamoDBStreamChangedRecordModel, DynamoDBStreamRecordModel
from aws_lambda_powertools.utilities.parser.models import KinesisDataStreamRecord as KinesisDataStreamRecordModel
from aws_lambda_powertools.utilities.parser.models import KinesisDataStreamRecordPayload, SqsRecordModel
from aws_lambda_powertools.utilities.parser.types import Literal
from tests.functional.utils import b64_to_str, str_to_b64

@pytest.fixture(scope="module")
def sqs_event_factory() -> Callable:
    def factory(body: str):
        return {
            "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
            "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a",
            "body": body,
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1545082649183",
                "SenderId": "AIDAIENQZJOLO23YVJ4VO",
                "ApproximateFirstReceiveTimestamp": "1545082649185",
            },
            "messageAttributes": {},
            "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue",
            "awsRegion": "us-east-1",
        }

    return factory

@pytest.fixture(scope="module")
def kinesis_event_factory() -> Callable:
    def factory(body: str):
        seq = "".join(str(randint(0, 9)) for _ in range(52))
        return {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "1",
                "sequenceNumber": seq,
                "data": str_to_b64(body),
                "approximateArrivalTimestamp": 1545084650.987,
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": f"shardId-000000000006:{seq}",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws:iam::123456789012:role/lambda-role",
            "awsRegion": "us-east-2",
            "eventSourceARN": "arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream",
        }

    return factory

@pytest.fixture(scope="module")
def dynamodb_event_factory() -> Callable:
    def factory(body: str):
        seq = "".join(str(randint(0, 9)) for _ in range(10))
        return {
            "eventID": "1",
            "eventVersion": "1.0",
            "dynamodb": {
                "Keys": {"Id": {"N": "101"}},
                "NewImage": {"Message": {"S": body}},
                "StreamViewType": "NEW_AND_OLD_IMAGES",
                "SequenceNumber": seq,
                "SizeBytes": 26,
            },
            "awsRegion": "us-west-2",
            "eventName": "INSERT",
            "eventSourceARN": "eventsource_arn",
            "eventSource": "aws:dynamodb",
        }

    return factory

@pytest.fixture(scope="module")
def record_handler() -> Callable:
    def handler(record):
        body = record["body"]
        if "fail" in body:
            raise Exception("Failed to process record.")
        return body

    return handler


@pytest.fixture(scope="module")
def kinesis_record_handler() -> Callable:
    def handler(record: KinesisStreamRecord):
        body = b64_to_str(record.kinesis.data)
        if "fail" in body:
            raise Exception("Failed to process record.")
        return body

    return handler


@pytest.fixture(scope="module")
def dynamodb_record_handler() -> Callable:
    def handler(record: DynamoDBRecord):
        body = record.dynamodb.new_image.get("Message").get_value
        if "fail" in body:
            raise Exception("Failed to process record.")
        return body

    return handler


@pytest.fixture(scope="module")
def config() -> Config:
    return Config(region_name="us-east-1")


@pytest.fixture(scope="function")
def partial_processor(config) -> PartialSQSProcessor:
    return PartialSQSProcessor(config=config)


@pytest.fixture(scope="function")
def partial_processor_suppressed(config) -> PartialSQSProcessor:
    return PartialSQSProcessor(config=config, suppress_exception=True)


@pytest.fixture(scope="function")
def stubbed_partial_processor(config) -> PartialSQSProcessor:
    processor = PartialSQSProcessor(config=config)
    with Stubber(processor.client) as stubber:
        yield stubber, processor


@pytest.fixture(scope="function")
def stubbed_partial_processor_suppressed(config) -> PartialSQSProcessor:
    processor = PartialSQSProcessor(config=config, suppress_exception=True)
    with Stubber(processor.client) as stubber:
        yield stubber, processor


@pytest.fixture(scope="module")
def order_event_factory() -> Callable:
    def factory(item: Dict) -> str:
        return json.dumps({"item": item})

    return factory

def test_partial_sqs_processor_context_with_failure(sqs_event_factory, record_handler, partial_processor):
    """
    Test processor with one failing record
    """
    fail_record = sqs_event_factory("fail")
    success_record = sqs_event_factory("success")

    records = [fail_record, success_record]

    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}

    with Stubber(partial_processor.client) as stubber:
        stubber.add_response("delete_message_batch", response)
        with pytest.raises(SQSBatchProcessingError) as error:
            with partial_processor(records, record_handler) as ctx:
                ctx.process()

        assert len(error.value.child_exceptions) == 1
        stubber.assert_no_pending_responses()


def test_partial_sqs_processor_context_only_success(sqs_event_factory, record_handler, partial_processor):
    """
    Test processor without failure
    """
    first_record = sqs_event_factory("success")
    second_record = sqs_event_factory("success")

    records = [first_record, second_record]

    with partial_processor(records, record_handler) as ctx:
        result = ctx.process()

    assert result == [
        ("success", first_record["body"], first_record),
        ("success", second_record["body"], second_record),
    ]


def test_partial_sqs_processor_context_multiple_calls(sqs_event_factory, record_handler, partial_processor):
    """
    Test processor without failure
    """
    first_record = sqs_event_factory("success")
    second_record = sqs_event_factory("success")

    records = [first_record, second_record]

    with partial_processor(records, record_handler) as ctx:
        ctx.process()

    with partial_processor([first_record], record_handler) as ctx:
        ctx.process()

    assert partial_processor.success_messages == [first_record]

def test_batch_processor_middleware_with_partial_sqs_processor(sqs_event_factory, record_handler, partial_processor):
    """
    Test middleware's integration with PartialSQSProcessor
    """

    @batch_processor(record_handler=record_handler, processor=partial_processor)
    def lambda_handler(event, context):
        return True

    fail_record = sqs_event_factory("fail")

    event = {"Records": [sqs_event_factory("fail"), sqs_event_factory("fail"), sqs_event_factory("success")]}
    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}

    with Stubber(partial_processor.client) as stubber:
        stubber.add_response("delete_message_batch", response)
        with pytest.raises(SQSBatchProcessingError) as error:
            lambda_handler(event, {})

        assert len(error.value.child_exceptions) == 2
        stubber.assert_no_pending_responses()


@patch("aws_lambda_powertools.utilities.batch.sqs.PartialSQSProcessor")
def test_sqs_batch_processor_middleware(
    patched_sqs_processor, sqs_event_factory, record_handler, stubbed_partial_processor
):
    """
    Test middleware's integration with PartialSQSProcessor
    """

    @sqs_batch_processor(record_handler=record_handler)
    def lambda_handler(event, context):
        return True

    stubber, processor = stubbed_partial_processor
    patched_sqs_processor.return_value = processor

    fail_record = sqs_event_factory("fail")

    event = {"Records": [sqs_event_factory("fail"), sqs_event_factory("success")]}
    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}
    stubber.add_response("delete_message_batch", response)

    with pytest.raises(SQSBatchProcessingError) as error:
        lambda_handler(event, {})

    assert len(error.value.child_exceptions) == 1
    stubber.assert_no_pending_responses()

def test_batch_processor_middleware_with_custom_processor(capsys, sqs_event_factory, record_handler, config):
    """
    Test middlewares' integration with custom batch processor
    """

    class CustomProcessor(PartialSQSProcessor):
        def failure_handler(self, record, exception):
            print("Oh no ! It's a failure.")
            return super().failure_handler(record, exception)

    processor = CustomProcessor(config=config)

    @batch_processor(record_handler=record_handler, processor=processor)
    def lambda_handler(event, context):
        return True

    fail_record = sqs_event_factory("fail")

    event = {"Records": [sqs_event_factory("fail"), sqs_event_factory("success")]}
    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}

    with Stubber(processor.client) as stubber:
        stubber.add_response("delete_message_batch", response)
        with pytest.raises(SQSBatchProcessingError) as error:
            lambda_handler(event, {})

        stubber.assert_no_pending_responses()

    assert len(error.value.child_exceptions) == 1
    assert capsys.readouterr().out == "Oh no ! It's a failure.\n"


def test_batch_processor_middleware_suppressed_exceptions(
    sqs_event_factory, record_handler, partial_processor_suppressed
):
    """
    Test middleware's integration with PartialSQSProcessor when exceptions are suppressed
    """

    @batch_processor(record_handler=record_handler, processor=partial_processor_suppressed)
    def lambda_handler(event, context):
        return True

    fail_record = sqs_event_factory("fail")
    event = {"Records": [sqs_event_factory("fail"), sqs_event_factory("fail"), sqs_event_factory("success")]}
    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}

    with Stubber(partial_processor_suppressed.client) as stubber:
        stubber.add_response("delete_message_batch", response)
        result = lambda_handler(event, {})

        stubber.assert_no_pending_responses()

    assert result is True


def test_partial_sqs_processor_suppressed_exceptions(sqs_event_factory, record_handler, partial_processor_suppressed):
    """
    Test processor with a failing record and exceptions suppressed
    """
    first_record = sqs_event_factory("success")
    second_record = sqs_event_factory("fail")
    records = [first_record, second_record]

    fail_record = sqs_event_factory("fail")
    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}

    with Stubber(partial_processor_suppressed.client) as stubber:
        stubber.add_response("delete_message_batch", response)
        with partial_processor_suppressed(records, record_handler) as ctx:
            ctx.process()

    assert partial_processor_suppressed.success_messages == [first_record]


@patch("aws_lambda_powertools.utilities.batch.sqs.PartialSQSProcessor")
def test_sqs_batch_processor_middleware_suppressed_exception(
    patched_sqs_processor, sqs_event_factory, record_handler, stubbed_partial_processor_suppressed
):
    """
    Test middleware's integration with PartialSQSProcessor when exceptions are suppressed
    """

    @sqs_batch_processor(record_handler=record_handler)
    def lambda_handler(event, context):
        return True

    stubber, processor = stubbed_partial_processor_suppressed
    patched_sqs_processor.return_value = processor

    fail_record = sqs_event_factory("fail")
    event = {"Records": [sqs_event_factory("fail"), sqs_event_factory("success")]}
    response = {"Successful": [{"Id": fail_record["messageId"]}], "Failed": []}
    stubber.add_response("delete_message_batch", response)
    result = lambda_handler(event, {})

    stubber.assert_no_pending_responses()
    assert result is True


def test_partial_sqs_processor_context_only_failure(sqs_event_factory, record_handler, partial_processor):
    """
    Test processor with only failures
    """
    first_record = sqs_event_factory("fail")
    second_record = sqs_event_factory("fail")
    records = [first_record, second_record]

    with pytest.raises(SQSBatchProcessingError) as error:
        with partial_processor(records, record_handler) as ctx:
            ctx.process()

    assert len(error.value.child_exceptions) == 2


def test_batch_processor_middleware_success_only(sqs_event_factory, record_handler):
    # GIVEN
    first_record = SQSRecord(sqs_event_factory("success"))
    second_record = SQSRecord(sqs_event_factory("success"))
    event = {"Records": [first_record.raw_event, second_record.raw_event]}

    processor = BatchProcessor(event_type=EventType.SQS)

    @batch_processor(record_handler=record_handler, processor=processor)
    def lambda_handler(event, context):
        return processor.response()

    # WHEN
    result = lambda_handler(event, {})

    # THEN
    assert result["batchItemFailures"] == []


def test_batch_processor_middleware_with_failure(sqs_event_factory, record_handler):
    # GIVEN
    first_record = SQSRecord(sqs_event_factory("fail"))
    second_record = SQSRecord(sqs_event_factory("success"))
    event = {"Records": [first_record.raw_event, second_record.raw_event]}

    processor = BatchProcessor(event_type=EventType.SQS)

    @batch_processor(record_handler=record_handler, processor=processor)
    def lambda_handler(event, context):
        return processor.response()

    # WHEN
    result = lambda_handler(event, {})

    # THEN
    assert len(result["batchItemFailures"]) == 1


def test_batch_processor_context_success_only(sqs_event_factory, record_handler):
    # GIVEN
    first_record = SQSRecord(sqs_event_factory("success"))
    second_record = SQSRecord(sqs_event_factory("success"))
    records = [first_record.raw_event, second_record.raw_event]
    processor = BatchProcessor(event_type=EventType.SQS)

    # WHEN
    with processor(records, record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    assert processed_messages == [
        ("success", first_record.body, first_record.raw_event),
        ("success", second_record.body, second_record.raw_event),
    ]
    assert batch.response() == {"batchItemFailures": []}


def test_batch_processor_context_with_failure(sqs_event_factory, record_handler):
    # GIVEN
    first_record = SQSRecord(sqs_event_factory("failure"))
    second_record = SQSRecord(sqs_event_factory("success"))
    records = [first_record.raw_event, second_record.raw_event]
    processor = BatchProcessor(event_type=EventType.SQS)

    # WHEN
    with processor(records, record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    assert processed_messages[1] == ("success", second_record.body, second_record.raw_event)
    assert len(batch.fail_messages) == 1
    assert batch.response() == {"batchItemFailures": [{"itemIdentifier": first_record.message_id}]}


def test_batch_processor_kinesis_context_success_only(kinesis_event_factory, kinesis_record_handler):
    # GIVEN
    first_record = KinesisStreamRecord(kinesis_event_factory("success"))
    second_record = KinesisStreamRecord(kinesis_event_factory("success"))
    records = [first_record.raw_event, second_record.raw_event]
    processor = BatchProcessor(event_type=EventType.KinesisDataStreams)

    # WHEN
    with processor(records, kinesis_record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    assert processed_messages == [
        ("success", b64_to_str(first_record.kinesis.data), first_record.raw_event),
        ("success", b64_to_str(second_record.kinesis.data), second_record.raw_event),
    ]
    assert batch.response() == {"batchItemFailures": []}


def test_batch_processor_kinesis_context_with_failure(kinesis_event_factory, kinesis_record_handler):
    # GIVEN
    first_record = KinesisStreamRecord(kinesis_event_factory("failure"))
    second_record = KinesisStreamRecord(kinesis_event_factory("success"))
    records = [first_record.raw_event, second_record.raw_event]
    processor = BatchProcessor(event_type=EventType.KinesisDataStreams)

    # WHEN
    with processor(records, kinesis_record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    assert processed_messages[1] == ("success", b64_to_str(second_record.kinesis.data), second_record.raw_event)
    assert len(batch.fail_messages) == 1
    assert batch.response() == {"batchItemFailures": [{"itemIdentifier": first_record.kinesis.sequence_number}]}


def test_batch_processor_kinesis_middleware_with_failure(kinesis_event_factory, kinesis_record_handler):
    # GIVEN
    first_record = KinesisStreamRecord(kinesis_event_factory("failure"))
    second_record = KinesisStreamRecord(kinesis_event_factory("success"))
    event = {"Records": [first_record.raw_event, second_record.raw_event]}

    processor = BatchProcessor(event_type=EventType.KinesisDataStreams)

    @batch_processor(record_handler=kinesis_record_handler, processor=processor)
    def lambda_handler(event, context):
        return processor.response()

    # WHEN
    result = lambda_handler(event, {})

    # THEN
    assert len(result["batchItemFailures"]) == 1


def test_batch_processor_dynamodb_context_success_only(dynamodb_event_factory, dynamodb_record_handler):
    # GIVEN
    first_record = dynamodb_event_factory("success")
    second_record = dynamodb_event_factory("success")
    records = [first_record, second_record]
    processor = BatchProcessor(event_type=EventType.DynamoDBStreams)

    # WHEN
    with processor(records, dynamodb_record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    assert processed_messages == [
        ("success", first_record["dynamodb"]["NewImage"]["Message"]["S"], first_record),
        ("success", second_record["dynamodb"]["NewImage"]["Message"]["S"], second_record),
    ]
    assert batch.response() == {"batchItemFailures": []}


def test_batch_processor_dynamodb_context_with_failure(dynamodb_event_factory, dynamodb_record_handler):
    # GIVEN
    first_record = dynamodb_event_factory("failure")
    second_record = dynamodb_event_factory("success")
    records = [first_record, second_record]
    processor = BatchProcessor(event_type=EventType.DynamoDBStreams)

    # WHEN
    with processor(records, dynamodb_record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    assert processed_messages[1] == ("success", second_record["dynamodb"]["NewImage"]["Message"]["S"], second_record)
    assert len(batch.fail_messages) == 1
    assert batch.response() == {"batchItemFailures": [{"itemIdentifier": first_record["dynamodb"]["SequenceNumber"]}]}


def test_batch_processor_dynamodb_middleware_with_failure(dynamodb_event_factory, dynamodb_record_handler):
    # GIVEN
    first_record = dynamodb_event_factory("failure")
    second_record = dynamodb_event_factory("success")
    event = {"Records": [first_record, second_record]}

    processor = BatchProcessor(event_type=EventType.DynamoDBStreams)

    @batch_processor(record_handler=dynamodb_record_handler, processor=processor)
    def lambda_handler(event, context):
        return processor.response()

    # WHEN
    result = lambda_handler(event, {})

    # THEN
    assert len(result["batchItemFailures"]) == 1


def test_batch_processor_context_model(sqs_event_factory, order_event_factory):
    # GIVEN
    class Order(BaseModel):
        item: dict

    class OrderSqs(SqsRecordModel):
        body: Order

        # auto transform json string
        # so Pydantic can auto-initialize nested Order model
        @validator("body", pre=True)
        def transform_body_to_dict(cls, value: str):
            return json.loads(value)

    def record_handler(record: OrderSqs):
        return record.body.item

    order_event = order_event_factory({"type": "success"})
    first_record = sqs_event_factory(order_event)
    second_record = sqs_event_factory(order_event)
    records = [first_record, second_record]

    # WHEN
    processor = BatchProcessor(event_type=EventType.SQS, model=OrderSqs)
    with processor(records, record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    order_item = json.loads(order_event)["item"]
    assert processed_messages == [
        ("success", order_item, first_record),
        ("success", order_item, second_record),
    ]
    assert batch.response() == {"batchItemFailures": []}


def test_batch_processor_context_model_with_failure(sqs_event_factory, order_event_factory):
    # GIVEN
    class Order(BaseModel):
        item: dict

    class OrderSqs(SqsRecordModel):
        body: Order

        # auto transform json string
        # so Pydantic can auto-initialize nested Order model
        @validator("body", pre=True)
        def transform_body_to_dict(cls, value: str):
            return json.loads(value)

    def record_handler(record: OrderSqs):
        if "fail" in record.body.item["type"]:
            raise Exception("Failed to process record.")
        return record.body.item

    order_event = order_event_factory({"type": "success"})
    order_event_fail = order_event_factory({"type": "fail"})
    first_record = sqs_event_factory(order_event_fail)
    second_record = sqs_event_factory(order_event)
    records = [first_record, second_record]

    # WHEN
    processor = BatchProcessor(event_type=EventType.SQS, model=OrderSqs)
    with processor(records, record_handler) as batch:
        batch.process()

    # THEN
    assert len(batch.fail_messages) == 1
    assert batch.response() == {"batchItemFailures": [{"itemIdentifier": first_record["messageId"]}]}


def test_batch_processor_dynamodb_context_model(dynamodb_event_factory, order_event_factory):
    # GIVEN
    class Order(BaseModel):
        item: dict

    class OrderDynamoDB(BaseModel):
        Message: Order

        # auto transform json string
        # so Pydantic can auto-initialize nested Order model
        @validator("Message", pre=True)
        def transform_message_to_dict(cls, value: Dict[Literal["S"], str]):
            return json.loads(value["S"])

    class OrderDynamoDBChangeRecord(DynamoDBStreamChangedRecordModel):
        NewImage: Optional[OrderDynamoDB]
        OldImage: Optional[OrderDynamoDB]

    class OrderDynamoDBRecord(DynamoDBStreamRecordModel):
        dynamodb: OrderDynamoDBChangeRecord

    def record_handler(record: OrderDynamoDBRecord):
        return record.dynamodb.NewImage.Message.item

    order_event = order_event_factory({"type": "success"})
    first_record = dynamodb_event_factory(order_event)
    second_record = dynamodb_event_factory(order_event)
    records = [first_record, second_record]

    # WHEN
    processor = BatchProcessor(event_type=EventType.DynamoDBStreams, model=OrderDynamoDBRecord)
    with processor(records, record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    order_item = json.loads(order_event)["item"]
    assert processed_messages == [
        ("success", order_item, first_record),
        ("success", order_item, second_record),
    ]
    assert batch.response() == {"batchItemFailures": []}


def test_batch_processor_dynamodb_context_model_with_failure(dynamodb_event_factory, order_event_factory):
    # GIVEN
    class Order(BaseModel):
        item: dict

    class OrderDynamoDB(BaseModel):
        Message: Order

        # auto transform json string
        # so Pydantic can auto-initialize nested Order model
        @validator("Message", pre=True)
        def transform_message_to_dict(cls, value: Dict[Literal["S"], str]):
            return json.loads(value["S"])

    class OrderDynamoDBChangeRecord(DynamoDBStreamChangedRecordModel):
        NewImage: Optional[OrderDynamoDB]
        OldImage: Optional[OrderDynamoDB]

    class OrderDynamoDBRecord(DynamoDBStreamRecordModel):
        dynamodb: OrderDynamoDBChangeRecord

    def record_handler(record: OrderDynamoDBRecord):
        if "fail" in record.dynamodb.NewImage.Message.item["type"]:
            raise Exception("Failed to process record.")
        return record.dynamodb.NewImage.Message.item

    order_event = order_event_factory({"type": "success"})
    order_event_fail = order_event_factory({"type": "fail"})
    first_record = dynamodb_event_factory(order_event_fail)
    second_record = dynamodb_event_factory(order_event)
    records = [first_record, second_record]

    # WHEN
    processor = BatchProcessor(event_type=EventType.DynamoDBStreams, model=OrderDynamoDBRecord)
    with processor(records, record_handler) as batch:
        batch.process()

    # THEN
    assert len(batch.fail_messages) == 1
    assert batch.response() == {"batchItemFailures": [{"itemIdentifier": first_record["dynamodb"]["SequenceNumber"]}]}


def test_batch_processor_kinesis_context_parser_model(kinesis_event_factory, order_event_factory):
    # GIVEN
    class Order(BaseModel):
        item: dict

    class OrderKinesisPayloadRecord(KinesisDataStreamRecordPayload):
        data: Order

        # auto transform json string
        # so Pydantic can auto-initialize nested Order model
        @validator("data", pre=True)
        def transform_message_to_dict(cls, value: str):
            # Powertools KinesisDataStreamRecordModel already decodes b64 to str here
            return json.loads(value)

    class OrderKinesisRecord(KinesisDataStreamRecordModel):
        kinesis: OrderKinesisPayloadRecord

    def record_handler(record: OrderKinesisRecord):
        return record.kinesis.data.item

    order_event = order_event_factory({"type": "success"})
    first_record = kinesis_event_factory(order_event)
    second_record = kinesis_event_factory(order_event)
    records = [first_record, second_record]

    # WHEN
    processor = BatchProcessor(event_type=EventType.KinesisDataStreams, model=OrderKinesisRecord)
    with processor(records, record_handler) as batch:
        processed_messages = batch.process()

    # THEN
    order_item = json.loads(order_event)["item"]
    assert processed_messages == [
        ("success", order_item, first_record),
        ("success", order_item, second_record),
    ]
    assert batch.response() == {"batchItemFailures": []}


def test_batch_processor_kinesis_context_parser_model_with_failure(kinesis_event_factory, order_event_factory):
    # GIVEN
    class Order(BaseModel):
        item: dict

    class OrderKinesisPayloadRecord(KinesisDataStreamRecordPayload):
        data: Order

        # auto transform json string
        # so Pydantic can auto-initialize nested Order model
        @validator("data", pre=True)
        def transform_message_to_dict(cls, value: str):
            # Powertools KinesisDataStreamRecordModel already decodes b64 to str here
            return json.loads(value)

    class OrderKinesisRecord(KinesisDataStreamRecordModel):
        kinesis: OrderKinesisPayloadRecord

    def record_handler(record: OrderKinesisRecord):
        if "fail" in record.kinesis.data.item["type"]:
            raise Exception("Failed to process record.")
        return record.kinesis.data.item

    order_event = order_event_factory({"type": "success"})
    order_event_fail = order_event_factory({"type": "fail"})
    first_record = kinesis_event_factory(order_event_fail)
    second_record = kinesis_event_factory(order_event)
    records = [first_record, second_record]

    # WHEN
    processor = BatchProcessor(event_type=EventType.KinesisDataStreams, model=OrderKinesisRecord)
    with processor(records, record_handler) as batch:
        batch.process()

    # THEN
    assert len(batch.fail_messages) == 1
    assert batch.response() == {"batchItemFailures": [{"itemIdentifier": first_record["kinesis"]["sequenceNumber"]}]}


def test_batch_processor_error_when_entire_batch_fails(sqs_event_factory, record_handler):
    # GIVEN
    first_record = SQSRecord(sqs_event_factory("fail"))
    second_record = SQSRecord(sqs_event_factory("fail"))
    event = {"Records": [first_record.raw_event, second_record.raw_event]}

    processor = BatchProcessor(event_type=EventType.SQS)

    @batch_processor(record_handler=record_handler, processor=processor)
    def lambda_handler(event, context):
        return processor.response()

    # WHEN/THEN
    with pytest.raises(BatchProcessingError) as e:
        lambda_handler(event, {})

    # assertions placed after the raising call would never run inside the
    # pytest.raises block, so they live outside it
    ret = str(e)
    assert ret is not None
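The suite above leans on pytest fixtures defined in a `conftest.py` that is not part of this chunk (`sqs_event_factory`, `record_handler`, `partial_processor`, `config`, and friends). A minimal sketch of what the two most-used ones might look like — the field values and the bare `Exception` are illustrative assumptions, not the project's actual conftest:

```python
def sqs_event_factory(body: str) -> dict:
    """Hypothetical sketch: build one raw SQS record dict with the given body.

    In the real suite this callable is exposed as a pytest fixture; the
    field values below are illustrative placeholders.
    """
    return {
        "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
        "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a",
        "body": body,
        "attributes": {},
        "messageAttributes": {},
        "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
        "eventSource": "aws:sqs",
        "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:my-queue",
        "awsRegion": "us-east-2",
    }


def record_handler(record) -> str:
    """Hypothetical handler matching the tests' convention: any body that
    contains 'fail' raises, everything else is processed successfully."""
    # accept both the raw dict form and the SQSRecord data-class form
    body = record["body"] if isinstance(record, dict) else record.body
    if "fail" in body:
        raise Exception("Failed to process record.")
    return body
```

In `conftest.py` these would typically be wrapped with `@pytest.fixture` so each test receives them by argument name.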
2eade9187b74847e00dbf24f9be7b172f238871a | 237 | py | Python | evaluations/__init__.py | Drkun/ZSTCI | ec9c031e7649a61ed23316d23d6bd5d5702467c0 | ["Apache-2.0"]

from __future__ import absolute_import

import utils
from .extract_featrure import extract_features, pairwise_distance, pairwise_similarity, extract_features_val, extract_features_transfer, extract_features_all, extract_features_transfer_all
25e0739dbd4976015f1bd9de3a7d16f544615836 | 36 | py | Python | trj2img/__init__.py | MasanoriYamada/trj2img | fce3589c4607572704dda56cff9935a6ddb56331 | ["MIT"]

from trj2img.trj2img import trj2img
d308d12640a0a121c47bc09070a6af9b9996f7e8 | 75 | py | Python | src/mode/modetalkwin3.py | Titooox/monsters | de0b4752c90d0725f4af0c1e6d52514739e98655 | ["MIT"]

from .modeconvo import ModeConvo


class ModeTalkWin3(ModeConvo):
    pass
d31d01f1ef3cd867fb0a2430facdb20d30f0183c | 115 | py | Python | 01/03/0.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | ["CC0-1.0"]

print(())
print((1,))
print((1, 2, 3))
print(tuple())
print(tuple((1,)))
print(tuple((1, 2, 3)))
print(tuple([1, 2, 3]))
6cf31bdff7e501ee82ef53549fb54d6a7bc77495 | 632 | gyp | Python | deps/libgdal/gyp-formats/zlib.gyp | jimgambale/node-gdal | dc5c89fb23f1004732106250c8b7d57f380f9b61 | ["Apache-2.0"]

{
  "includes": [
    "../common.gypi"
  ],
  "targets": [
    {
      "target_name": "libgdal_zlib_frmt",
      "type": "static_library",
      "sources": [
        "../gdal/frmts/zlib/adler32.c",
        "../gdal/frmts/zlib/compress.c",
        "../gdal/frmts/zlib/crc32.c",
        "../gdal/frmts/zlib/deflate.c",
        "../gdal/frmts/zlib/gzio.c",
        "../gdal/frmts/zlib/infback.c",
        "../gdal/frmts/zlib/inffast.c",
        "../gdal/frmts/zlib/inflate.c",
        "../gdal/frmts/zlib/inftrees.c",
        "../gdal/frmts/zlib/trees.c",
        "../gdal/frmts/zlib/uncompr.c",
        "../gdal/frmts/zlib/zutil.c"
      ],
      "include_dirs": [
        "../gdal/frmts/zlib"
      ]
    }
  ]
}
6cfe9f567b4fee190001df1a8bf835850a4107f7 | 349 | py | Python | src/cocoa/toga_cocoa/libs/__init__.py | renantamashiro/toga | 579bd668756f37bcbe69fccdb8375503320fff47 | ["BSD-3-Clause"]

from .appkit import *  # noqa: F401, F403
from .core_graphics import *  # noqa: F401, F403
from .core_text import *  # noqa: F401, F403
from .foundation import *  # noqa: F401, F403
from .webkit import *  # noqa: F401, F403

# macOS always renders at 96dpi. Scaling is handled
# transparently at the level of the screen compositor.
DISPLAY_DPI = 96
6cff0e83be4c2c1ec50c8c7f4fd13c80a2f86378 | 35 | py | Python | authorization_system/__init__.py | daftcode/authorization-system | 9dec0c0717f168bee5b38791b80e4d4579977b18 | ["MIT"]

from . import authorization_system