hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1db44a3f4963fa9622ff63f4ca143896b6396480 | 216 | py | Python | setup.py | h-terao/multiprocess_cli | c18fe87dfe66d03c9754b007c6f43ac406106df4 | [
"MIT"
] | null | null | null | setup.py | h-terao/multiprocess_cli | c18fe87dfe66d03c9754b007c6f43ac406106df4 | [
"MIT"
] | null | null | null | setup.py | h-terao/multiprocess_cli | c18fe87dfe66d03c9754b007c6f43ac406106df4 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
    name="muli",
    version="0.1",
    install_requires=[
        "tqdm",
        "docstring_parser",
    ],
    author="h-terao",
    packages=find_packages(),
)
| 16.615385 | 43 | 0.597222 | 23 | 216 | 5.434783 | 0.826087 | 0.192 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0125 | 0.259259 | 216 | 12 | 44 | 18 | 0.76875 | 0 | 0 | 0 | 0 | 0 | 0.157407 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.090909 | 0 | 0.090909 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1db5a84fc323ed0654110c94b6bd0f56118ca182 | 9,843 | py | Python | src/quantum/azext_quantum/_help.py | jonie001/azure-cli-extensions | 5fee532e35f318c6405659bf79afb5a236552a34 | [
"MIT"
] | null | null | null | src/quantum/azext_quantum/_help.py | jonie001/azure-cli-extensions | 5fee532e35f318c6405659bf79afb5a236552a34 | [
"MIT"
] | null | null | null | src/quantum/azext_quantum/_help.py | jonie001/azure-cli-extensions | 5fee532e35f318c6405659bf79afb5a236552a34 | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from knack.help_files import helps # pylint: disable=unused-import
helps['quantum'] = """
    type: group
    short-summary: Manage Azure Quantum Workspaces and submit jobs to Azure Quantum Providers.
"""
helps['quantum execute'] = """
    type: command
    short-summary: Submit a job to run on Azure Quantum, and wait for the result.
    examples:
      - name: Submit the Q# program from the current folder and wait for the result.
        text: |-
            az quantum execute -g MyResourceGroup -w MyWorkspace -l MyLocation -t MyTarget
      - name: Submit and wait for a Q# program from the current folder with job and program parameters.
        text: |-
            az quantum execute -g MyResourceGroup -w MyWorkspace -l MyLocation -t MyTarget \\
                --job-params key1=value1 key2=value2 -- --n-qubits=3
"""
helps['quantum run'] = """
    type: command
    short-summary: Equivalent to `az quantum execute`.
    examples:
      - name: Submit the Q# program from the current folder and wait for the result.
        text: |-
            az quantum run -g MyResourceGroup -w MyWorkspace -l MyLocation -t MyTarget
      - name: Submit and wait for a Q# program from the current folder with job and program parameters.
        text: |-
            az quantum run -g MyResourceGroup -w MyWorkspace -l MyLocation -t MyTarget \\
                --job-params key1=value1 key2=value2 -- --n-qubits=3
"""
helps['quantum job'] = """
    type: group
    short-summary: Manage jobs for Azure Quantum.
"""
helps['quantum job list'] = """
    type: command
    short-summary: Get the list of jobs in a Quantum Workspace.
    examples:
      - name: Get the list of jobs from an Azure Quantum workspace.
        text: |-
            az quantum job list -g MyResourceGroup -w MyWorkspace -l MyLocation
"""
helps['quantum job output'] = """
    type: command
    short-summary: Get the results of running a Q# job.
    examples:
      - name: Print the results of a successful Azure Quantum job.
        text: |-
            az quantum job output -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                -j yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy -o table
"""
helps['quantum job show'] = """
    type: command
    short-summary: Get the job's status and details.
    examples:
      - name: Get the status of an Azure Quantum job.
        text: |-
            az quantum job show -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                -j yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy --query status
"""
helps['quantum job submit'] = """
    type: command
    short-summary: Submit a Q# project to run on Azure Quantum.
    examples:
      - name: Submit the Q# program from the current folder.
        text: |-
            az quantum job submit -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                --job-name MyJob
      - name: Submit the Q# program from the current folder with job parameters for a target.
        text: |-
            az quantum job submit -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                --job-name MyJob --job-params param1=value1 param2=value2
      - name: Submit the Q# program with program parameters (e.g. n-qubits = 2).
        text: |-
            az quantum job submit -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                --job-name MyJob -- --n-qubits=2
"""
helps['quantum job wait'] = """
    type: command
    short-summary: Place the CLI in a waiting state until the job finishes running.
    examples:
      - name: Wait for completion of a job, check at 60 second intervals.
        text: |-
            az quantum job wait -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                -j yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy --max-poll-wait-secs 60 -o table
"""
helps['quantum job cancel'] = """
    type: command
    short-summary: Request to cancel a job on Azure Quantum if it hasn't completed.
    examples:
      - name: Cancel an Azure Quantum job by id.
        text: |-
            az quantum job cancel -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                -j yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
"""
helps['quantum offerings'] = """
    type: group
    short-summary: Manage provider offerings for Azure Quantum.
"""
helps['quantum offerings list'] = """
    type: command
    short-summary: Get the list of all provider offerings available on the given location.
    examples:
      - name: List offerings available in an Azure location.
        text: |-
            az quantum offerings list -l MyLocation
"""
helps['quantum offerings show-terms'] = """
    type: command
    short-summary: Show the terms of a provider and SKU combination including license URL and acceptance status.
    examples:
      - name: Use a Provider Id and SKU from `az quantum offerings list` to review the terms.
        text: |-
            az quantum offerings show-terms -p MyProviderId -k MySKU -l MyLocation
"""
helps['quantum offerings accept-terms'] = """
    type: command
    short-summary: Accept the terms of a provider and SKU combination to enable it for workspace creation.
    examples:
      - name: Once terms have been reviewed, accept them by invoking this command.
        text: |-
            az quantum offerings accept-terms -p MyProviderId -k MySKU -l MyLocation
"""
helps['quantum target'] = """
    type: group
    short-summary: Manage targets for Azure Quantum workspaces.
"""
helps['quantum target clear'] = """
    type: command
    short-summary: Clear the default target-id.
    examples:
      - name: Clear the default target-id.
        text: |-
            az quantum target clear
"""
helps['quantum target list'] = """
    type: command
    short-summary: Get the list of providers and their targets in an Azure Quantum workspace.
    examples:
      - name: Get the list of targets available in an Azure Quantum workspace.
        text: |-
            az quantum target list -g MyResourceGroup -w MyWorkspace -l MyLocation
"""
helps['quantum target set'] = """
    type: command
    short-summary: Select the default target to use when submitting jobs to Azure Quantum.
    examples:
      - name: Select a default target when submitting jobs to Azure Quantum.
        text: |-
            az quantum target set -t target-id
"""
helps['quantum target show'] = """
    type: command
    short-summary: Get the details of the given (or current) target to use when submitting jobs to Azure Quantum.
    examples:
      - name: Show the currently selected default target.
        text: |-
            az quantum target show
"""
helps['quantum workspace'] = """
    type: group
    short-summary: Manage Azure Quantum workspaces.
"""
helps['quantum workspace clear'] = """
    type: command
    short-summary: Clear the default Azure Quantum workspace.
    examples:
      - name: Clear the default Azure Quantum workspace if previously set.
        text: |-
            az quantum workspace clear
"""
helps['quantum workspace create'] = """
    type: command
    short-summary: Create a new Azure Quantum workspace.
    examples:
      - name: Create a new Azure Quantum workspace with a specific list of providers.
        text: |-
            az quantum workspace create -g MyResourceGroup -w MyWorkspace -l MyLocation \\
                -r "MyProvider1 / MySKU1, MyProvider2 / MySKU2" -a MyStorageAccountName
"""
helps['quantum workspace delete'] = """
    type: command
    short-summary: Delete the given (or current) Azure Quantum workspace.
    examples:
      - name: Delete an Azure Quantum workspace by name and group.
        text: |-
            az quantum workspace delete -g MyResourceGroup -w MyWorkspace
      - name: Delete and clear the default Azure Quantum workspace (if one has been set).
        text: |-
            az quantum workspace delete
"""
helps['quantum workspace list'] = """
    type: command
    short-summary: Get the list of Azure Quantum workspaces available.
    examples:
      - name: Get the list of all Azure Quantum workspaces available.
        text: |-
            az quantum workspace list
      - name: Get the list of Azure Quantum workspaces available in a location.
        text: |-
            az quantum workspace list -l MyLocation
"""
helps['quantum workspace quotas'] = """
    type: command
    short-summary: List the quotas for the given (or current) Azure Quantum workspace.
    examples:
      - name: List the quota information of the default workspace, if set.
        text: |-
            az quantum workspace quotas
      - name: List the quota information of a specified Azure Quantum workspace.
        text: |-
            az quantum workspace quotas -g MyResourceGroup -w MyWorkspace -l MyLocation
"""
helps['quantum workspace set'] = """
    type: command
    short-summary: Select a default Azure Quantum workspace for future commands.
    examples:
      - name: Set the default Azure Quantum workspace.
        text: |-
            az quantum workspace set -g MyResourceGroup -w MyWorkspace -l MyLocation
"""
helps['quantum workspace show'] = """
    type: command
    short-summary: Get the details of the given (or current) Azure Quantum workspace.
    examples:
      - name: Show the currently selected default Azure Quantum workspace.
        text: |-
            az quantum workspace show
      - name: Show the details of a provided Azure Quantum workspace.
        text: |-
            az quantum workspace show -g MyResourceGroup -w MyWorkspace
"""
| 37.003759 | 113 | 0.639033 | 1,215 | 9,843 | 5.176132 | 0.153086 | 0.091588 | 0.062013 | 0.080458 | 0.6346 | 0.519955 | 0.453331 | 0.410558 | 0.332008 | 0.228017 | 0 | 0.003424 | 0.258255 | 9,843 | 265 | 114 | 37.143396 | 0.857965 | 0.038505 | 0 | 0.497836 | 0 | 0.025974 | 0.938353 | 0.015227 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.004329 | 0 | 0.004329 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1dbf73be3a157a1df3da95082819d4d3d2511510 | 149 | py | Python | public/python/GlobalData.py | IzzatHalabi/newpix_prototype | 5d617ef20df59af57c26ca0f7fc8521afd4203f7 | [
"MIT"
] | null | null | null | public/python/GlobalData.py | IzzatHalabi/newpix_prototype | 5d617ef20df59af57c26ca0f7fc8521afd4203f7 | [
"MIT"
] | 4 | 2020-07-28T17:43:16.000Z | 2022-02-27T09:40:50.000Z | public/python/GlobalData.py | IzzatHalabi/newpix_prototype | 5d617ef20df59af57c26ca0f7fc8521afd4203f7 | [
"MIT"
] | null | null | null | students = []
references = []
benefits = []
MODE_SPECIFIC = '1. SPECIFIC'
MODE_HOSTEL = '2. HOSTEL'
MODE_MCDM = '3. MCDM'
MODE_REMAIN = '4. REMAIN'
| 16.555556 | 29 | 0.657718 | 19 | 149 | 4.947368 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03252 | 0.174497 | 149 | 8 | 30 | 18.625 | 0.731707 | 0 | 0 | 0 | 0 | 0 | 0.241611 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1dc6a1b7f0449375a57c1a122a7fd5321b6ee6ee | 428 | py | Python | django_base/bookmanager/book/models.py | qdc520/repository | 15ba0f5cbbe82d0229f52428d53f1bf985e78eff | [
"MIT"
] | null | null | null | django_base/bookmanager/book/models.py | qdc520/repository | 15ba0f5cbbe82d0229f52428d53f1bf985e78eff | [
"MIT"
] | null | null | null | django_base/bookmanager/book/models.py | qdc520/repository | 15ba0f5cbbe82d0229f52428d53f1bf985e78eff | [
"MIT"
] | null | null | null | #coding:utf-8
from django.db import models
# Create your models here.
class Bookinfo(models.Model):
    name = models.CharField(max_length=10)
    def __str__(self):
        return self.name
class Peopleinfo(models.Model):
    name = models.CharField(max_length=10)
    gender = models.BooleanField(default=True)
    book = models.ForeignKey(Bookinfo, on_delete=models.CASCADE)
    def __str__(self):
        return self.name
| 23.777778 | 62 | 0.71729 | 57 | 428 | 5.192982 | 0.578947 | 0.074324 | 0.101351 | 0.141892 | 0.439189 | 0.439189 | 0.277027 | 0.277027 | 0 | 0 | 0 | 0.014245 | 0.179907 | 428 | 17 | 63 | 25.176471 | 0.82906 | 0.086449 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0.181818 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
1dcbd528396a857f738994a2d796255a9431962d | 272 | py | Python | Python-desenvolvimento/ex002.py | MarcosMaciel-MMRS/Desenvolvimento-python | 2b2fc54788da3ca110d495b9e80a494f2b31fb09 | [
"MIT"
] | null | null | null | Python-desenvolvimento/ex002.py | MarcosMaciel-MMRS/Desenvolvimento-python | 2b2fc54788da3ca110d495b9e80a494f2b31fb09 | [
"MIT"
] | null | null | null | Python-desenvolvimento/ex002.py | MarcosMaciel-MMRS/Desenvolvimento-python | 2b2fc54788da3ca110d495b9e80a494f2b31fb09 | [
"MIT"
] | null | null | null | nome = input("Informe Seu Nome: ").lower()
print("É um prazer te conhecer ",nome)
print("Senta o dedo nessa porra")
#essa parte está fora do exercício, aprendi q da para manipular um nome deixando a primeira letra maiúscula.
pl = nome[0:1].upper()
print(pl+ nome[1:]) | 45.333333 | 109 | 0.713235 | 46 | 272 | 4.217391 | 0.782609 | 0.061856 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013158 | 0.161765 | 272 | 6 | 110 | 45.333333 | 0.837719 | 0.393382 | 0 | 0 | 0 | 0 | 0.4125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
1ddd179178c4c49ab71d365528fb314c74c761e9 | 411 | py | Python | ex004.py | Iigorsf/Python | f803332d12db8b472710a02b67b812458dabec4a | [
"MIT"
] | null | null | null | ex004.py | Iigorsf/Python | f803332d12db8b472710a02b67b812458dabec4a | [
"MIT"
] | null | null | null | ex004.py | Iigorsf/Python | f803332d12db8b472710a02b67b812458dabec4a | [
"MIT"
] | null | null | null | #Faça um programa que leia algo pelo teclado e mostre na tela o seu tipo primitivo e todas as informações possíveis sobre ele
var = input('Digite algo: ')
print('O tipo primitivo desse valor é', type(var))
print('Só tem espaços? ', var.isspace())
print('É um número ', var.isnumeric())
print('É alfabetico', var.isalpha())
print('Está em maiúsculas? ', var.isupper())
print('Está em minúsculas', var.islower())
| 45.666667 | 125 | 0.725061 | 65 | 411 | 4.584615 | 0.676923 | 0.087248 | 0.073826 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138686 | 411 | 8 | 126 | 51.375 | 0.841808 | 0.301703 | 0 | 0 | 0 | 0 | 0.423077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.857143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
1de1abcdb99207e37dd1590cf6f0a01dad106801 | 374 | py | Python | crabageprediction/venv/Lib/site-packages/pandas/io/json/__init__.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 28,899 | 2016-10-13T03:32:12.000Z | 2022-03-31T21:39:05.000Z | crabageprediction/venv/Lib/site-packages/pandas/io/json/__init__.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 31,004 | 2016-10-12T23:22:27.000Z | 2022-03-31T23:17:38.000Z | crabageprediction/venv/Lib/site-packages/pandas/io/json/__init__.py | 13rianlucero/CrabAgePrediction | 92bc7fbe1040f49e820473e33cc3902a5a7177c7 | [
"MIT"
] | 15,149 | 2016-10-13T03:21:31.000Z | 2022-03-31T18:46:47.000Z | from pandas.io.json._json import (
    dumps,
    loads,
    read_json,
    to_json,
)
from pandas.io.json._normalize import (
    _json_normalize,
    json_normalize,
)
from pandas.io.json._table_schema import build_table_schema
__all__ = [
    "dumps",
    "loads",
    "read_json",
    "to_json",
    "_json_normalize",
    "json_normalize",
    "build_table_schema",
]
| 17 | 59 | 0.660428 | 46 | 374 | 4.891304 | 0.304348 | 0.288889 | 0.16 | 0.213333 | 0.213333 | 0.213333 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224599 | 374 | 21 | 60 | 17.809524 | 0.775862 | 0 | 0 | 0 | 0 | 0 | 0.195187 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1de9acf0c8997f18da4e6837c58ed5006943ae11 | 2,372 | py | Python | src/waldur_vmware/tests/fixtures.py | geant-multicloud/MCMS-mastermind | 81333180f5e56a0bc88d7dad448505448e01f24e | [
"MIT"
] | 26 | 2017-10-18T13:49:58.000Z | 2021-09-19T04:44:09.000Z | src/waldur_vmware/tests/fixtures.py | geant-multicloud/MCMS-mastermind | 81333180f5e56a0bc88d7dad448505448e01f24e | [
"MIT"
] | 14 | 2018-12-10T14:14:51.000Z | 2021-06-07T10:33:39.000Z | src/waldur_vmware/tests/fixtures.py | geant-multicloud/MCMS-mastermind | 81333180f5e56a0bc88d7dad448505448e01f24e | [
"MIT"
] | 32 | 2017-09-24T03:10:45.000Z | 2021-10-16T16:41:09.000Z | from django.utils.functional import cached_property
from waldur_core.structure.tests.fixtures import ProjectFixture
from . import factories
class VMwareFixture(ProjectFixture):
    def __init__(self):
        super(VMwareFixture, self).__init__()
        self.customer_cluster
        self.customer_network
        self.customer_datastore
        self.customer_folder
    @cached_property
    def settings(self):
        return factories.VMwareServiceSettingsFactory(customer=self.customer)
    @cached_property
    def cluster(self):
        return factories.ClusterFactory(settings=self.settings)
    @cached_property
    def customer_cluster(self):
        return factories.CustomerClusterFactory(
            cluster=self.cluster, customer=self.customer
        )
    @cached_property
    def network(self):
        return factories.NetworkFactory(settings=self.settings)
    @cached_property
    def customer_network(self):
        return factories.CustomerNetworkFactory(
            network=self.network, customer=self.customer
        )
    @cached_property
    def customer_network_pair(self):
        return factories.CustomerNetworkPairFactory(
            network=self.network, customer=self.customer
        )
    @cached_property
    def datastore(self):
        return factories.DatastoreFactory(settings=self.settings)
    @cached_property
    def customer_datastore(self):
        return factories.CustomerDatastoreFactory(
            datastore=self.datastore, customer=self.customer
        )
    @cached_property
    def folder(self):
        return factories.FolderFactory(settings=self.settings)
    @cached_property
    def customer_folder(self):
        return factories.CustomerFolderFactory(
            folder=self.folder, customer=self.customer
        )
    @cached_property
    def template(self):
        return factories.TemplateFactory(settings=self.settings)
    @cached_property
    def virtual_machine(self):
        return factories.VirtualMachineFactory(
            service_settings=self.settings,
            project=self.project,
            template=self.template,
            cluster=self.cluster,
        )
    @cached_property
    def disk(self):
        return factories.DiskFactory(
            vm=self.virtual_machine,
            service_settings=self.settings,
            project=self.project,
        )
| 27.581395 | 77 | 0.68339 | 223 | 2,372 | 7.103139 | 0.215247 | 0.123737 | 0.13952 | 0.098485 | 0.370581 | 0.356692 | 0.239899 | 0.069444 | 0.069444 | 0 | 0 | 0 | 0.244519 | 2,372 | 85 | 78 | 27.905882 | 0.883929 | 0 | 0 | 0.279412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.044118 | 0.191176 | 0.455882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
1deb9767f1349dfb310931aa09d072ac6c4b72b0 | 797 | py | Python | test_muniments/test_integration/test_external/test_couchdb/schedule_8c42557da273420492cd6a16c25a2d74.py | alunduil/muniments | 0e4f0a7cdc0f5ff016dc6e9271631953a0fe4092 | [
"MIT"
] | 1 | 2015-04-30T07:13:43.000Z | 2015-04-30T07:13:43.000Z | test_muniments/test_integration/test_external/test_couchdb/schedule_8c42557da273420492cd6a16c25a2d74.py | alunduil/muniments | 0e4f0a7cdc0f5ff016dc6e9271631953a0fe4092 | [
"MIT"
] | null | null | null | test_muniments/test_integration/test_external/test_couchdb/schedule_8c42557da273420492cd6a16c25a2d74.py | alunduil/muniments | 0e4f0a7cdc0f5ff016dc6e9271631953a0fe4092 | [
"MIT"
] | null | null | null | # Copyright (C) 2015 by Alex Brandt <alunduil@alunduil.com>
#
# muniments is freely distributable under the terms of an MIT-style license.
# See COPYING or http://www.opensource.org/licenses/mit-license.php.
from test_muniments.test_fixtures import register_fixture
from test_muniments.test_integration.test_external.test_couchdb import CouchDbReadFixture
from test_muniments.test_integration.test_external.test_couchdb import CouchDbWriteFixture
from test_muniments.test_unit.test_scheduler.test_models import test_schedule
for module in [ getattr(test_schedule, module_name) for module_name in dir(test_schedule) if module_name.startswith('model_') ]:
    register_fixture(globals(), ( CouchDbReadFixture, ), module.DATA)
    register_fixture(globals(), ( CouchDbWriteFixture, ), module.DATA)
| 56.928571 | 128 | 0.824341 | 106 | 797 | 5.971698 | 0.518868 | 0.050553 | 0.107425 | 0.132701 | 0.192733 | 0.192733 | 0.192733 | 0.192733 | 0.192733 | 0.192733 | 0 | 0.00554 | 0.094103 | 797 | 13 | 129 | 61.307692 | 0.871191 | 0.249686 | 0 | 0 | 0 | 0 | 0.010118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
1deeb20f41ec3e7ccc728da600f515018ab32f12 | 387 | py | Python | studentportal/admin.py | IIIT-Delhi/sg-cw-iiitd | a85e90fccc3f6a11446bfd9200b32444fd0a13d2 | [
"MIT"
] | null | null | null | studentportal/admin.py | IIIT-Delhi/sg-cw-iiitd | a85e90fccc3f6a11446bfd9200b32444fd0a13d2 | [
"MIT"
] | 14 | 2020-12-26T13:18:17.000Z | 2022-01-06T10:49:04.000Z | studentportal/admin.py | IIIT-Delhi/sg-cw-iiitd | a85e90fccc3f6a11446bfd9200b32444fd0a13d2 | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from studentportal.models import Category, NGO, Project, Document, Feedback, Bug, CustomUser
admin.site.register(Category)
admin.site.register(NGO)
admin.site.register(Project)
admin.site.register(Document)
admin.site.register(Feedback)
admin.site.register(Bug)
admin.site.register(CustomUser, UserAdmin)
| 32.25 | 92 | 0.826873 | 52 | 387 | 6.153846 | 0.346154 | 0.196875 | 0.371875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 387 | 11 | 93 | 35.181818 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.3 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1df43a244da570d8ae1f14638aa09e077d95b9ec | 445 | py | Python | src/tense.py | jdw1996/french-conjugator | a0656e081f0b73d2f086f9a85a51dec1fa41ef2d | [
"MIT"
] | 1 | 2020-07-05T20:11:20.000Z | 2020-07-05T20:11:20.000Z | src/tense.py | jdw1996/french-conjugator | a0656e081f0b73d2f086f9a85a51dec1fa41ef2d | [
"MIT"
] | 4 | 2016-12-30T18:42:15.000Z | 2017-01-23T04:23:59.000Z | src/tense.py | jdw1996/french-conjugator | a0656e081f0b73d2f086f9a85a51dec1fa41ef2d | [
"MIT"
] | null | null | null | #*********************************************
# Joseph Winters
# Enum for representing different verb tenses
# Fall 2016
#*********************************************
import enum
@enum.unique
class Tense(enum.Enum):
    """An enum to represent different verb tenses."""
    PASSE_COMPOSE = "passé composé"
    IMPARFAIT = "imparfait"
    PRESENT = "présent"
    FUTUR_PROCHE = "futur proche"
    FUTUR_SIMPLE = "futur simple"
| 22.25 | 53 | 0.530337 | 41 | 445 | 5.682927 | 0.658537 | 0.111588 | 0.16309 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011142 | 0.193258 | 445 | 19 | 54 | 23.421053 | 0.637883 | 0.45618 | 0 | 0 | 0 | 0 | 0.228448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.125 | 0.125 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
1df50e395489fcf914245c7892fa8093c2b1c853 | 1,165 | py | Python | tests/test_parsing.py | krassowski/data-vault | ce8b4b1d88d0f54e12e3dbe53a10344c1158afb7 | [
"MIT"
] | 13 | 2019-12-08T20:07:06.000Z | 2022-02-02T09:52:15.000Z | tests/test_parsing.py | krassowski/data-vault | ce8b4b1d88d0f54e12e3dbe53a10344c1158afb7 | [
"MIT"
] | 17 | 2019-12-08T13:03:06.000Z | 2022-01-05T00:22:17.000Z | tests/test_parsing.py | krassowski/data-vault | ce8b4b1d88d0f54e12e3dbe53a10344c1158afb7 | [
"MIT"
] | 2 | 2021-03-17T20:49:00.000Z | 2021-04-22T08:44:19.000Z | from data_vault import clean_line
from data_vault.parsing import unquote, bool_or_str
def test_clean_line():
    pieces = clean_line('from module import a, b, c')
    assert pieces == ['from', 'module', 'import', 'a,b,c']
    # commas
    pieces = clean_line('from "a/path/with/c,o,m,m,a,s" import a, b, c')
    assert pieces == ['from', '"a/path/with/c,o,m,m,a,s"', 'import', 'a,b,c']
    pieces = clean_line("from 'a/path/with/c,o,m,m,a,s' import a, b, c")
    assert pieces == ['from', "'a/path/with/c,o,m,m,a,s'", 'import', 'a,b,c']
    # escaped paths
    pieces = clean_line(r"from 'an/escaped\'path/' import a, b, c")
    assert pieces == ['from', r"'an/escaped\'path/'", 'import', 'a,b,c']
    # comments
    pieces = clean_line('from module import a, b#, c')
    assert pieces == ['from', 'module', 'import', 'a,b']
def test_unquote():
    assert unquote("'an/escaped\'path/'") == 'an/escaped\'path/'
    assert unquote('"an/escaped\"path/"') == 'an/escaped\"path/'
def test_bool_or_str():
    assert bool_or_str('a') == 'a'
    assert bool_or_str('True') is True
    assert bool_or_str('False') is False
    assert bool_or_str('true') == 'true'
| 33.285714 | 77 | 0.611159 | 195 | 1,165 | 3.528205 | 0.169231 | 0.101744 | 0.116279 | 0.117733 | 0.684593 | 0.62936 | 0.627907 | 0.540698 | 0.427326 | 0.427326 | 0 | 0 | 0.177682 | 1,165 | 34 | 78 | 34.264706 | 0.718163 | 0.024893 | 0 | 0 | 0 | 0.095238 | 0.371908 | 0.088339 | 0 | 0 | 0 | 0 | 0.52381 | 1 | 0.142857 | false | 0 | 0.571429 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
1dfe94c37e75778b15d25eda8d478d46e3e5f84a | 252 | py | Python | spec_classes/collections/__init__.py | matthewwardrop/spec-classes | d50c9ded426b5becd445c255e45b88ccc8961672 | [
"MIT"
] | 1 | 2021-07-10T11:43:16.000Z | 2021-07-10T11:43:16.000Z | spec_classes/collections/__init__.py | matthewwardrop/spec-classes | d50c9ded426b5becd445c255e45b88ccc8961672 | [
"MIT"
] | 1 | 2021-09-02T18:15:10.000Z | 2021-09-02T18:15:10.000Z | spec_classes/collections/__init__.py | matthewwardrop/spec-classes | d50c9ded426b5becd445c255e45b88ccc8961672 | [
"MIT"
] | null | null | null | from .base import CollectionAttrMutator
from .mappings import MappingMutator
from .sequences import SequenceMutator
from .sets import SetMutator
__all__ = (
"CollectionAttrMutator",
"MappingMutator",
"SequenceMutator",
"SetMutator",
)
| 21 | 39 | 0.761905 | 21 | 252 | 8.952381 | 0.52381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162698 | 252 | 11 | 40 | 22.909091 | 0.890995 | 0 | 0 | 0 | 0 | 0 | 0.238095 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
3803230f92b0f541e0f405006d1cc08b2ea30a1f | 403 | py | Python | pyleecan/Methods/Mesh/MeshVTK/get_normals.py | Kelos-Zhu/pyleecan | 368f8379688e31a6c26d2c1cd426f21dfbceff2a | [
"Apache-2.0"
] | null | null | null | pyleecan/Methods/Mesh/MeshVTK/get_normals.py | Kelos-Zhu/pyleecan | 368f8379688e31a6c26d2c1cd426f21dfbceff2a | [
"Apache-2.0"
] | null | null | null | pyleecan/Methods/Mesh/MeshVTK/get_normals.py | Kelos-Zhu/pyleecan | 368f8379688e31a6c26d2c1cd426f21dfbceff2a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
def get_normals(self, indices=[]):
"""Return the array of the normals coordinates.
Parameters
----------
self : MeshVTK
a MeshVTK object
indices : list
list of the points to extract (optional)
Returns
-------
normals: ndarray
Normals coordinates
"""
surf = self.get_surf(indices)
return surf.cell_normals
| 17.521739 | 51 | 0.580645 | 44 | 403 | 5.25 | 0.590909 | 0.112554 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003521 | 0.295285 | 403 | 22 | 52 | 18.318182 | 0.809859 | 0.605459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
380f2ee2a6322d6ac5358848b6792f404069c2ee | 612 | py | Python | play_go_web_app/api/migrations/0010_auto_20210522_0012.py | jacobsomer/AlphaGoLite | c051d5e45d4e0e8595927c5cfeb4ae38727588b3 | [
"BSD-2-Clause"
] | null | null | null | play_go_web_app/api/migrations/0010_auto_20210522_0012.py | jacobsomer/AlphaGoLite | c051d5e45d4e0e8595927c5cfeb4ae38727588b3 | [
"BSD-2-Clause"
] | null | null | null | play_go_web_app/api/migrations/0010_auto_20210522_0012.py | jacobsomer/AlphaGoLite | c051d5e45d4e0e8595927c5cfeb4ae38727588b3 | [
"BSD-2-Clause"
] | null | null | null | # Generated by Django 3.2.3 on 2021-05-22 04:12
import api.models
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0009_alter_room_id'),
]
operations = [
migrations.AlterField(
model_name='room',
name='player1',
field=models.CharField(default=api.models.randomNameP1, max_length=50),
),
migrations.AlterField(
model_name='room',
name='player2',
field=models.CharField(default=api.models.randomNameP2, max_length=50),
),
]
| 24.48 | 83 | 0.602941 | 66 | 612 | 5.484848 | 0.575758 | 0.074586 | 0.138122 | 0.160221 | 0.403315 | 0.403315 | 0 | 0 | 0 | 0 | 0 | 0.061364 | 0.281046 | 612 | 24 | 84 | 25.5 | 0.761364 | 0.073529 | 0 | 0.333333 | 1 | 0 | 0.076106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
381ea01f704d01a859f15c586b338c0400f338ff | 1,411 | py | Python | numpy/test_numpy.py | Akagi201/learning-opencv | 26d208806e76a14325ceb3e365baefbc05c79e49 | [
"MIT"
] | 1 | 2021-07-14T02:57:39.000Z | 2021-07-14T02:57:39.000Z | numpy/test_numpy.py | Akagi201/learning-opencv | 26d208806e76a14325ceb3e365baefbc05c79e49 | [
"MIT"
] | null | null | null | numpy/test_numpy.py | Akagi201/learning-opencv | 26d208806e76a14325ceb3e365baefbc05c79e49 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from numpy import *
# sample
print('#' * 15 + ' sample ' + '#' * 15)
a = arange(15).reshape(3, 5)
print(a)
print(a.shape)
print(a.ndim)
print(a.dtype.name)
print(a.itemsize)
print(a.size)
print(type(a))
b = array([6, 7, 8])
print(b)
print(type(b))
# create ndarray
print('#' * 15 + ' create ndarray ' + '#' * 15)
a = array([2, 3, 4])
print(a)
print(a.dtype)
b = array([1.2, 3.5, 5.1])
print(b.dtype)
c = array([(1.5, 2, 3), (4, 5, 6)])
print(c)
d = array([[1, 2], [3, 4]], dtype=complex)
print(d)
print(zeros((3, 4)))
print(ones((2, 3, 4), dtype=int16))
print(empty((2, 3)))
print(arange(10, 30, 5))
print(arange(0, 2, 0.3))
# print ndarray
# 1d array
a = arange(6)
print(a)
# 2d array
b = arange(12).reshape(4, 3)
print(b)
# 3d array
c = arange(24).reshape(2, 3, 4)
print(c)
# 4d array
d = arange(2*3*4*5).reshape(2, 3, 4, 5)
print(d)
print(arange(10000))
print(arange(10000).reshape(100, 100))
#set_printoptions(threshold='nan')
#print(arange(10000))
# basic operation
print('#' * 15 + ' basic operation ' + '#' * 15)
a = array([20, 30, 40, 50])
b = arange(4)
print(b)
c = a - b
print(c)
print(b**2)
print(10 * sin(a))
print(a < 35)
A = array([[1, 1],
           [0, 1]])
B = array([[2, 0],
           [3, 4]])
# elementwise product
print(A*B)
# matrix product
print(dot(A, B))
a = ones((2, 3), dtype=int)
b = random.random((2, 3))
a *= 3
print(a)
b += a
print(b)
# a += b would raise a TypeError: b is not automatically converted to integer type
# a += b
# print(a)
# upcast
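The trailing `# upcast` comment hints at dtype promotion: when operands of different types are combined, the result takes the "wider" type. A minimal sketch of that rule (using an explicit `np` namespace here rather than the tutorial's star import):

```python
import numpy as np

a = np.ones(3, dtype=int)      # integer ones
b = np.linspace(0, np.pi, 3)   # float64 values

c = a + b                      # result is upcast to the wider dtype
print(c.dtype)                 # float64
```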
| 12.598214 | 49 | 0.603118 | 271 | 1,411 | 3.136531 | 0.265683 | 0.091765 | 0.024706 | 0.014118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112889 | 0.202693 | 1,411 | 111 | 50 | 12.711712 | 0.642667 | 0.165131 | 0 | 0.229508 | 0 | 0 | 0.040413 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.016393 | null | null | 0.622951 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
38237553ab1c536fbc2db7dc1e7508a52181b729 | 691 | py | Python | vae_lm/models/base/flows/iaf.py | Nemexur/nonauto-lm | 6f237e4fc2b3b679cd92126ea5facd58d3cf6e75 | [
"Apache-2.0"
] | 3 | 2021-05-04T09:41:20.000Z | 2021-12-14T07:41:40.000Z | vae_lm/models/base/flows/iaf.py | Nemexur/nonauto-lm | 6f237e4fc2b3b679cd92126ea5facd58d3cf6e75 | [
"Apache-2.0"
] | null | null | null | vae_lm/models/base/flows/iaf.py | Nemexur/nonauto-lm | 6f237e4fc2b3b679cd92126ea5facd58d3cf6e75 | [
"Apache-2.0"
] | null | null | null | from .made import MADE
from .maf import MAFlow
from .flow import Flow
@Flow.register("iaf")
class IAFlow(MAFlow):
"""
An implementation of Inverse Autoregressive Flow for Density Estimation.
IAF is just like MAF but we swap forward and backward.
Parameters
----------
made : `MADE`, required
Instance of MADE (Masked autoencoder for Density estimation).
parity : `bool`, required
        Simply flipping the transformation on the last dimension.
        Works well when we stack IAFs.
"""
def __init__(self, made: MADE, parity: bool) -> None:
super().__init__(made, parity)
self.forward, self.backward = self.backward, self.forward
| 28.791667 | 76 | 0.670043 | 86 | 691 | 5.290698 | 0.581395 | 0.043956 | 0.087912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23589 | 691 | 23 | 77 | 30.043478 | 0.861742 | 0.51809 | 0 | 0 | 0 | 0 | 0.010526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
382a30dc55c7d4bd1668bf7b67414cd718874fa0 | 913 | py | Python | globe/util/id_gen.py | T620/globe | 5033a9750387d169b757538764bdf4fd229b81ae | [
"MIT"
] | null | null | null | globe/util/id_gen.py | T620/globe | 5033a9750387d169b757538764bdf4fd229b81ae | [
"MIT"
] | 14 | 2018-04-06T16:19:38.000Z | 2018-04-09T18:59:08.000Z | globe/util/id_gen.py | T620/globe | 5033a9750387d169b757538764bdf4fd229b81ae | [
"MIT"
] | null | null | null | #this file generates IDs for posts etc
import uuid, random


def postID(size, chars, userID):
    #take the userID and add a random string to it
    return str(userID) + "." + ''.join(random.choice(chars) for _ in range(size))


#this function is also used to generate dispute ids
def booking_id():
    #random string
    return uuid.uuid4().hex


def user_id(size, chars):
    #generate a random string of the requested size
    userID = ''.join(random.choice(chars) for _ in range(size))
    print("gen'd new id: %s" % userID)
    return userID


def username(forename, surname):
    #generates a username by taking the forename and surname and three random numbers
    #i know, i'll fix this mess later
    nums = ''.join(str(random.randint(1, 9)) for _ in range(3))
    return forename + "." + surname + nums
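A hypothetical usage sketch of the `postID` recipe, inlined so it is self-contained (the alphabet below is an assumption — the real callers are not shown in this dump):

```python
import random
import string

# the same recipe postID() uses: "<userID>.<random suffix drawn from chars>"
chars = string.ascii_lowercase + string.digits
suffix = ''.join(random.choice(chars) for _ in range(8))
post_id = str(1234) + "." + suffix

print(post_id)  # e.g. "1234.k3x0q7ab"
```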
| 25.361111 | 82 | 0.711939 | 146 | 913 | 4.424658 | 0.452055 | 0.074303 | 0.074303 | 0.078947 | 0.321981 | 0.23839 | 0.23839 | 0.23839 | 0.23839 | 0.111455 | 0 | 0.017287 | 0.176342 | 913 | 35 | 83 | 26.085714 | 0.841755 | 0.330778 | 0 | 0 | 1 | 0 | 0.029801 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.058824 | null | null | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
38333ea7cb486952aa3b432da9340bf2cf0cc1ea | 334 | py | Python | util/notifier.py | vfcosta/coegan-trained | 44174e68909d9c03bf2e4b7e4c7a48237a560183 | [
"MIT"
] | null | null | null | util/notifier.py | vfcosta/coegan-trained | 44174e68909d9c03bf2e4b7e4c7a48237a560183 | [
"MIT"
] | null | null | null | util/notifier.py | vfcosta/coegan-trained | 44174e68909d9c03bf2e4b7e4c7a48237a560183 | [
"MIT"
] | 1 | 2021-06-11T16:52:55.000Z | 2021-06-11T16:52:55.000Z | import requests
import os
from dotenv import load_dotenv
load_dotenv()
def notify(message, event="coegan_epoch_ended", ifttt_key=os.getenv("IFTTT_WEBHOOK_KEY")):
if ifttt_key is None:
return
report = {'value1': message}
requests.post(f"https://maker.ifttt.com/trigger/{event}/with/key/{ifttt_key}", data=report)
| 27.833333 | 95 | 0.730539 | 49 | 334 | 4.795918 | 0.632653 | 0.102128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003472 | 0.137725 | 334 | 11 | 96 | 30.363636 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0.302395 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
38336754db20382751da00cdc9212e6000075b73 | 734 | py | Python | parser/team23/instruccion/owner_mode.py | strickergt128/tytus | 93216dd9481ea0775da1d2967dc27be66872537f | [
"MIT"
] | null | null | null | parser/team23/instruccion/owner_mode.py | strickergt128/tytus | 93216dd9481ea0775da1d2967dc27be66872537f | [
"MIT"
] | null | null | null | parser/team23/instruccion/owner_mode.py | strickergt128/tytus | 93216dd9481ea0775da1d2967dc27be66872537f | [
"MIT"
] | null | null | null | from abstract.instruccion import *
from tools.tabla_tipos import *
class owner_mode(instruccion):
def __init__(self, owner, dato, line, column, num_nodo):
super().__init__(line, column)
self.owner = owner
self.dato = dato
        # Owner/Mode AST node
if owner:
self.nodo = nodo_AST('OWNER', num_nodo)
self.nodo.hijos.append(nodo_AST('OWNER', num_nodo+1))
else:
self.nodo = nodo_AST('MODE', num_nodo)
self.nodo.hijos.append(nodo_AST('MODE', num_nodo+1))
self.nodo.hijos.append(nodo_AST('=', num_nodo+2))
self.nodo.hijos.append(nodo_AST(dato, num_nodo+3))
def ejecutar(self):
pass | 33.363636 | 77 | 0.588556 | 96 | 734 | 4.260417 | 0.302083 | 0.119804 | 0.127139 | 0.185819 | 0.400978 | 0.288509 | 0.161369 | 0.161369 | 0 | 0 | 0 | 0.007692 | 0.291553 | 734 | 22 | 78 | 33.363636 | 0.778846 | 0.025886 | 0 | 0 | 0 | 0 | 0.026573 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0.058824 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
6982306eacd2e07ce115be7595dededf8ea891b7 | 785 | py | Python | neuronit/neuronit/migrations/0005_carousel.py | neuronit/pfa | 6483f23de3ac43ae1121760ab44a2cae1f2cc901 | [
"MIT"
] | null | null | null | neuronit/neuronit/migrations/0005_carousel.py | neuronit/pfa | 6483f23de3ac43ae1121760ab44a2cae1f2cc901 | [
"MIT"
] | null | null | null | neuronit/neuronit/migrations/0005_carousel.py | neuronit/pfa | 6483f23de3ac43ae1121760ab44a2cae1f2cc901 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.8 on 2017-03-10 12:50
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
('neuronit', '0004_delete_carousel'),
]
operations = [
migrations.CreateModel(
name='Carousel',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('title', models.CharField(max_length=200)),
('intro_text', models.CharField(max_length=500, null=True)),
('link', models.CharField(max_length=200, null=True)),
('link_text', models.CharField(max_length=200, null=True)),
],
),
]
| 28.035714 | 76 | 0.582166 | 83 | 785 | 5.337349 | 0.614458 | 0.13544 | 0.162528 | 0.216704 | 0.291196 | 0.158014 | 0.158014 | 0 | 0 | 0 | 0 | 0.056838 | 0.282803 | 785 | 27 | 77 | 29.074074 | 0.730018 | 0.08535 | 0 | 0 | 1 | 0 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
6986a9a9e7ba69465186751a1944f457ef720585 | 615 | py | Python | chatette_qiu/units/intent/example.py | fanfanfeng/Chatette | dc88ab195876c6d3ed2e02c2e0005f47bee5ff26 | [
"MIT"
] | null | null | null | chatette_qiu/units/intent/example.py | fanfanfeng/Chatette | dc88ab195876c6d3ed2e02c2e0005f47bee5ff26 | [
"MIT"
] | null | null | null | chatette_qiu/units/intent/example.py | fanfanfeng/Chatette | dc88ab195876c6d3ed2e02c2e0005f47bee5ff26 | [
"MIT"
] | null | null | null | from chatette_qiu.units import Example
class IntentExample(Example):
def __init__(self, name, text=None, entities=None):# -> None:
super(IntentExample, self).__init__(text, entities)
self.name = name
@classmethod
def from_example(cls, name, ex):
return cls(name, ex.text, ex.entities)
# def __str__(self):
# return str(self.__dict__)
def __eq__(self, other):
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.name+self.text+str(self.entities))
| 26.73913 | 65 | 0.653659 | 78 | 615 | 4.615385 | 0.346154 | 0.066667 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.229268 | 615 | 22 | 66 | 27.954545 | 0.759494 | 0.092683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.357143 | false | 0 | 0.071429 | 0.285714 | 0.785714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
69ab449447e70697b843239f1b9d48af21824e08 | 58,084 | py | Python | hub/dos_test.py | hungyo/pubsubhubbub | 04c7f05f163e0b127b98befa25819a20390678a3 | [
"Apache-2.0"
] | 4 | 2015-11-17T18:18:39.000Z | 2020-05-29T09:39:43.000Z | hub/dos_test.py | ankurpiyush26/pubsubhubbub | 04c7f05f163e0b127b98befa25819a20390678a3 | [
"Apache-2.0"
] | 1 | 2019-12-16T13:30:11.000Z | 2019-12-16T13:30:11.000Z | hub/dos_test.py | ankurpiyush26/pubsubhubbub | 04c7f05f163e0b127b98befa25819a20390678a3 | [
"Apache-2.0"
] | 1 | 2020-05-29T09:40:20.000Z | 2020-05-29T09:40:20.000Z | #!/usr/bin/env python
#
# Copyright 2009 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Tests for the dos module."""
import cProfile
import gc
import logging
logging.basicConfig(format='%(levelname)-8s %(filename)s] %(message)s')
import os
import random
import sys
import unittest
import testutil
testutil.fix_path()
from google.appengine.api import memcache
from google.appengine.ext import webapp
import dos
################################################################################
class LimitTestBase(testutil.HandlerTestBase):
"""Base class for limit function tests."""
def setUp(self):
"""Sets up the test harness."""
testutil.HandlerTestBase.setUp(self)
self.old_environ = os.environ.copy()
os.environ['PATH_INFO'] = '/foobar_path'
def tearDown(self):
"""Tears down the test hardness."""
testutil.HandlerTestBase.tearDown(self)
os.environ.clear()
os.environ.update(self.old_environ)
class HeaderHandler(webapp.RequestHandler):
# Rate limit by headers
@dos.limit(count=3, period=10)
def get(self):
self.response.out.write('get success')
# Rate limit by custom header
@dos.limit(header='HTTP_FANCY_HEADER', count=3, period=10)
def post(self):
self.response.out.write('post success')
class HeaderTest(LimitTestBase):
"""Tests for limiting by only headers."""
handler_class = HeaderHandler
def testDefaultHeader(self):
"""Tests limits on a default header."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
for i in xrange(3):
self.handle('get')
self.assertEquals(200, self.response_code())
self.assertEquals('get success', self.response_body())
self.handle('get')
self.assertEquals(503, self.response_code())
# Different header value will not be limited.
os.environ['REMOTE_ADDR'] = '10.1.1.4'
self.handle('get')
self.assertEquals(200, self.response_code())
def testCustomHeader(self):
"""Tests limits on a default header."""
header = 'HTTP_FANCY_HEADER'
os.environ[header] = 'my cool header value'
for i in xrange(3):
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
self.handle('post')
self.assertEquals(503, self.response_code())
# Different header value will not be limited.
os.environ['HTTP_FANCY_HEADER'] = 'something else'
self.handle('post')
self.assertEquals(200, self.response_code())
def testHeaderMissing(self):
"""Tests when rate-limiting on a header that's missing."""
# Should not allow more than three requests here, but
# since there is no limit key, we let them all through.
for i in xrange(4):
self.handle('get')
self.assertEquals(200, self.response_code())
self.assertEquals('get success', self.response_body())
class ParamHandler(webapp.RequestHandler):
# Limit by parameter
@dos.limit(param='foo', header=None, count=3, period=10)
def post(self):
self.response.out.write('post success')
class ParamTest(LimitTestBase):
"""Tests for limiting by only parameters."""
handler_class = ParamHandler
def testParam(self):
"""Tests limits on a parameter."""
for i in xrange(3):
self.handle('post', ('foo', 'meep'))
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
self.handle('post', ('foo', 'meep'))
self.assertEquals(503, self.response_code())
# Different parameter value will not be limited.
self.handle('post', ('foo', 'wooh'))
self.assertEquals(200, self.response_code())
def testParamMissing(self):
"""Tests when rate-limiting on a parameter that's missing."""
# Should not allow more than three requests here, but
# since there is no limit key, we let them all through.
for i in xrange(4):
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
class ParamAndHeaderHandler(webapp.RequestHandler):
# Limit by headers and params
@dos.limit(param='foo', count=3, period=10)
def post(self):
self.response.out.write('post success')
class ParamAndHeaderTest(LimitTestBase):
"""Tests for limiting by parameters and headers."""
handler_class = ParamAndHeaderHandler
def testHeaderAndParam(self):
"""Tests when a header and parameter are limited."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
for i in xrange(3):
self.handle('post', ('foo', 'meep'))
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
self.handle('post', ('foo', 'meep'))
self.assertEquals(503, self.response_code())
    # Different header *or* parameter values will not be limited.
self.handle('post', ('foo', 'stuff'))
self.assertEquals(200, self.response_code())
os.environ['REMOTE_ADDR'] = '10.1.1.4'
self.handle('post', ('foo', 'meep'))
self.assertEquals(200, self.response_code())
def testHeaderMissing(self):
"""Tests when the header should be there too but isn't."""
# Should not allow more than three requests here, but
# since there is no limit key, we let them all through.
for i in xrange(4):
self.handle('post', ('foo', 'meep'))
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
def testParamMissing(self):
"""Tests when the parameter should be there too but isn't."""
# Should not allow more than three requests here, but
# since there is no limit key, we let them all through.
os.environ['REMOTE_ADDR'] = '10.1.1.4'
for i in xrange(4):
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
def testBothMissing(self):
"""Tests when the header and parameter are missing."""
# Should not allow more than three requests here, but
# since there is no limit key, we let them all through.
for i in xrange(4):
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
class MethodsAndUrlsHandler(webapp.RequestHandler):
@dos.limit(count=3, period=10)
def get(self):
self.response.out.write('get success')
@dos.limit(count=3, period=10)
def post(self):
self.response.out.write('post success')
class MethodsAndUrlsTest(LimitTestBase):
"""Tests for limiting across various methods and URLs."""
handler_class = MethodsAndUrlsHandler
def testMethods(self):
"""Tests that methods are limited separately."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
for i in xrange(3):
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
self.handle('post')
self.assertEquals(503, self.response_code())
# Different method still works.
self.handle('get')
self.assertEquals(200, self.response_code())
def testUrls(self):
"""Tests that limiting for the same verb on different URLs works."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
for i in xrange(3):
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
self.handle('post')
self.assertEquals(503, self.response_code())
# Different path still works.
os.environ['PATH_INFO'] = '/other_path'
self.handle('post')
self.assertEquals(200, self.response_code())
class ErrorParamsHandler(webapp.RequestHandler):
# Alternate error code
@dos.limit(count=0, period=1, error_code=409, retry_after=99)
def get(self):
self.response.out.write('get success')
# No retry-after time
@dos.limit(count=0, period=1, retry_after=None)
def post(self):
self.response.out.write('post success')
# Defaults
@dos.limit(count=0, period=1)
def put(self):
self.response.out.write('put success')
class ErrorParamsTest(LimitTestBase):
"""Tests the error and retry paramters"""
handler_class = ErrorParamsHandler
def testDefaultRetryAmount(self):
"""Tests the supplied retry amount is valid."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
self.handle('put')
self.assertEquals(503, self.response_code())
self.assertEquals('120', self.response_headers().get('Retry-After'))
def testCustomErrorCode(self):
"""Tests when a custom error code and retry time are specified."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
self.handle('get')
self.assertEquals(409, self.response_code())
self.assertEquals('99', self.response_headers().get('Retry-After'))
def testNoRetryTime(self):
"""Tests when no retry time should be returned."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
self.handle('post')
self.assertEquals(503, self.response_code())
self.assertEquals(None, self.response_headers().get('Retry-After'))
class ConfigErrorTest(unittest.TestCase):
"""Tests various limit configuration errors."""
def testNoKeyError(self):
"""Tests when there is no limiting key to derive."""
self.assertRaises(
dos.ConfigError,
dos.limit,
header=None,
param=None)
def testNegativeCount(self):
"""Tests when the count is less than zero."""
self.assertRaises(
dos.ConfigError,
dos.limit,
count=-1,
period=1)
def testZeroPeriod(self):
"""Tests when the count is less than zero."""
self.assertRaises(
dos.ConfigError,
dos.limit,
count=1,
period=0)
def testParamForDifferentVerb(self):
"""Tests trying to rate limit a ."""
def put():
pass
wrapper = dos.limit(param='okay', count=1, period=1)
self.assertRaises(dos.ConfigError, wrapper, put)
class MemcacheDetailsHandler(webapp.RequestHandler):
@dos.limit(count=0, period=5.234)
def post(self):
self.response.out.write('post success')
class MemcacheDetailTest(LimitTestBase):
"""Tests for various memcache details and failures."""
handler_class = MemcacheDetailsHandler
def setUp(self):
"""Sets up the test harness."""
LimitTestBase.setUp(self)
os.environ['REMOTE_ADDR'] = '10.1.1.4'
self.expected_key = 'POST /foobar_path REMOTE_ADDR=10.1.1.4'
self.expected_incr = []
self.expected_add = []
self.old_incr = dos.memcache.incr
self.old_add = dos.memcache.add
def incr(key):
self.assertEquals(self.expected_key, key)
return self.expected_incr.pop(0)
dos.memcache.incr = incr
def add(key, value, time=None):
self.assertEquals(self.expected_key, key)
self.assertEquals(1, value)
self.assertEquals(5.234, time)
return self.expected_add.pop(0)
dos.memcache.add = add
def tearDown(self):
"""Tears down the test harness."""
LimitTestBase.tearDown(self)
self.assertEquals(0, len(self.expected_incr))
self.assertEquals(0, len(self.expected_add))
dos.memcache.incr = self.old_incr
dos.memcache.add = self.old_add
def testIncrFailure(self):
"""Tests when the initial increment fails."""
self.expected_incr.append(None)
self.expected_add.append(True)
self.handle('post')
self.assertEquals(503, self.response_code())
def testIncrAndAddFailure(self):
"""Tests when the initial increment and the following add fail."""
self.expected_incr.append(None)
self.expected_add.append(False)
self.expected_incr.append(14)
self.handle('post')
self.assertEquals(503, self.response_code())
def testCompleteFailure(self):
"""Tests when all memcache calls fail."""
self.expected_incr.append(None)
self.expected_add.append(False)
self.expected_incr.append(None)
self.handle('post')
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
class WhiteListHandler(webapp.RequestHandler):
@dos.limit(count=0, period=1,
header_whitelist=set(['10.1.1.5', '10.1.1.6']))
def get(self):
self.response.out.write('get success')
@dos.limit(param='foobar', header=None, count=0, period=1,
param_whitelist=set(['meep', 'stuff']))
def post(self):
self.response.out.write('post success')
class WhiteListTest(LimitTestBase):
"""Tests for white-listing."""
handler_class = WhiteListHandler
def testHeaderWhitelist(self):
"""Tests white-lists for headers."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
self.handle('get')
self.assertEquals(503, self.response_code())
for addr in ('10.1.1.5', '10.1.1.6'):
os.environ['REMOTE_ADDR'] = addr
self.handle('get')
self.assertEquals(200, self.response_code())
self.assertEquals('get success', self.response_body())
def testParameterWhitelist(self):
"""Tests white-lists for parameters."""
self.handle('post', ('foobar', 'zebra'))
self.assertEquals(503, self.response_code())
for value in ('meep', 'stuff'):
self.handle('post', ('foobar', value))
self.assertEquals(200, self.response_code())
self.assertEquals('post success', self.response_body())
class LayeredHandler(webapp.RequestHandler):
@dos.limit(count=3, period=1)
@dos.limit(header=None, param='stuff', count=0, period=1)
def get(self):
self.response.out.write('get success')
class LayeringTest(LimitTestBase):
"""Tests that dos limits can be layered."""
handler_class = LayeredHandler
def testLayering(self):
"""Tests basic layering."""
os.environ['REMOTE_ADDR'] = '10.1.1.3'
# First request works normally, limiting by IP.
self.handle('get')
self.assertEquals(200, self.response_code())
self.assertEquals('get success', self.response_body())
# Next request uses param and is blocked.
self.handle('get', ('stuff', 'meep'))
self.assertEquals(503, self.response_code())
# Next request without param is allowed, following one is blocked.
self.handle('get')
self.assertEquals(200, self.response_code())
self.assertEquals('get success', self.response_body())
self.handle('get')
self.assertEquals(503, self.response_code())
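The incr/add sequence stubbed out in `MemcacheDetailTest` suggests the shape of the underlying counter. A self-contained sketch with a plain dict standing in for memcache (the fail-open behavior is inferred from `testCompleteFailure`, not copied from `dos.py`):

```python
CACHE = {}

def incr(key):
    """memcache.incr semantics: only bumps keys that already exist."""
    if key in CACHE:
        CACHE[key] += 1
        return CACHE[key]
    return None

def add(key, value, time=None):
    """memcache.add semantics: only sets keys that do not exist yet."""
    if key in CACHE:
        return False
    CACHE[key] = value          # `time` (the period) would set the expiry
    return True

def allow(key, count):
    n = incr(key)
    if n is None:               # counter missing: try to create it
        n = 1 if add(key, 1) else incr(key)
    if n is None:               # both calls failed: fail open, allow the request
        return True
    return n <= count

results = [allow('POST /foobar_path REMOTE_ADDR=10.1.1.4', 3) for _ in range(4)]
print(results)  # [True, True, True, False]
```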
################################################################################
class GetUrlDomainTest(unittest.TestCase):
"""Tests for the get_url_domain function."""
def testDomain(self):
"""Tests good domain names."""
# No subdomain
self.assertEquals(
'example.com',
dos.get_url_domain('http://example.com/foo/bar?meep=stuff#asdf'))
# One subdomain
self.assertEquals(
'www.example.com',
dos.get_url_domain('http://www.example.com/foo/bar?meep=stuff#asdf'))
# Many subdomains
self.assertEquals(
'1.2.3.many.sub.example.com',
dos.get_url_domain('http://1.2.3.many.sub.example.com/'))
# Domain with no trailing path
self.assertEquals(
'www.example.com',
dos.get_url_domain('http://www.example.com'))
def testDomainExceptions(self):
"""Tests that some URLs may use more than the domain suffix."""
self.assertEquals(
'blogspot.com',
dos.get_url_domain('http://example.blogspot.com/this-is?some=test'))
def testIP(self):
"""Tests IP addresses."""
self.assertEquals(
'192.168.1.1',
dos.get_url_domain('http://192.168.1.1/foo/bar?meep=stuff#asdf'))
# No trailing path
self.assertEquals(
'192.168.1.1',
dos.get_url_domain('http://192.168.1.1'))
def testOther(self):
"""Tests anything that's not IP- or domain-like."""
self.assertEquals(
'localhost',
dos.get_url_domain('http://localhost/foo/bar?meep=stuff#asdf'))
# No trailing path
self.assertEquals(
'localhost',
dos.get_url_domain('http://localhost'))
def testBadUrls(self):
"""Tests URLs that are bad."""
self.assertEquals('bad_url',
dos.get_url_domain('this is bad'))
self.assertEquals('bad_url',
dos.get_url_domain('example.com/foo/bar?meep=stuff#asdf'))
self.assertEquals('bad_url',
dos.get_url_domain('example.com'))
self.assertEquals('bad_url',
dos.get_url_domain('//example.com'))
self.assertEquals('bad_url',
dos.get_url_domain('/myfeed.atom'))
self.assertEquals('bad_url',
dos.get_url_domain('192.168.0.1/foobar'))
self.assertEquals('bad_url',
dos.get_url_domain('192.168.0.1'))
def testCaching(self):
"""Tests that cache eviction works properly."""
dos._DOMAIN_CACHE.clear()
old_size = dos.DOMAIN_CACHE_SIZE
try:
dos.DOMAIN_CACHE_SIZE = 2
dos._DOMAIN_CACHE['http://a.example.com/stuff'] = 'a.example.com'
dos._DOMAIN_CACHE['http://b.example.com/stuff'] = 'b.example.com'
dos._DOMAIN_CACHE['http://c.example.com/stuff'] = 'c.example.com'
self.assertEquals(3, len(dos._DOMAIN_CACHE))
# Old cache entries are hit:
self.assertEquals('c.example.com',
dos.get_url_domain('http://c.example.com/stuff'))
self.assertEquals(3, len(dos._DOMAIN_CACHE))
# New cache entries clear the contents.
self.assertEquals('d.example.com',
dos.get_url_domain('http://d.example.com/stuff'))
self.assertEquals(1, len(dos._DOMAIN_CACHE))
finally:
dos.DOMAIN_CACHE_SIZE = old_size
################################################################################
class OffsetOrAddTest(unittest.TestCase):
"""Tests for the offset_or_add function."""
def setUp(self):
"""Sets up the test harness."""
self.offsets = None
self.offset_multi = lambda *a, **k: self.offsets.next()(*a, **k)
self.adds = None
self.add_multi = lambda *a, **k: self.adds.next()(*a, **k)
def testAlreadyExist(self):
"""Tests when the keys already exist and can just be added to."""
def offset_multi():
yield lambda *a, **k: {'one': 2, 'three': 4}
self.offsets = offset_multi()
self.assertEquals(
{'one': 2, 'three': 4},
dos.offset_or_add({'blue': 15, 'red': 10}, 5,
offset_multi=self.offset_multi,
add_multi=self.add_multi))
def testKeysAdded(self):
"""Tests when some keys need to be re-added."""
def offset_multi():
yield lambda *a, **k: {'one': None, 'three': 4, 'five': None}
self.offsets = offset_multi()
def add_multi():
def run(adds, **kwargs):
self.assertEquals({'one': 5, 'five': 10}, adds)
return []
yield run
self.adds = add_multi()
self.assertEquals(
{'one': 5, 'three': 4, 'five': 10},
dos.offset_or_add({'one': 5, 'three': 0, 'five': 10}, 5,
offset_multi=self.offset_multi,
add_multi=self.add_multi))
def testAddsRace(self):
"""Tests when re-adding keys is a race that is lost."""
def offset_multi():
yield lambda *a, **k: {'one': None, 'three': 4, 'five': None}
yield lambda *a, **k: {'one': 5, 'five': 10}
self.offsets = offset_multi()
def add_multi():
def run(adds, **kwargs):
self.assertEquals({'one': 5, 'five': 10}, adds)
return ['one', 'five']
yield run
self.adds = add_multi()
self.assertEquals(
{'one': 5, 'three': 4, 'five': 10},
dos.offset_or_add({'one': 5, 'three': 0, 'five': 10}, 5,
offset_multi=self.offset_multi,
add_multi=self.add_multi))
def testOffsetsFailAfterRace(self):
"""Tests when the last offset call fails."""
def offset_multi():
yield lambda *a, **k: {'one': None, 'three': 4, 'five': None}
yield lambda *a, **k: {'one': None, 'five': None}
self.offsets = offset_multi()
def add_multi():
def run(adds, **kwargs):
self.assertEquals({'one': 5, 'five': 10}, adds)
return ['one', 'five']
yield run
self.adds = add_multi()
self.assertEquals(
{'one': None, 'three': 4, 'five': None},
dos.offset_or_add({'one': 5, 'three': 0, 'five': 10}, 5,
offset_multi=self.offset_multi,
add_multi=self.add_multi))
################################################################################
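# The OffsetOrAddTest cases above imply the following control flow for
# dos.offset_or_add. This is a reconstruction from the tests, not the real
# implementation: offset_multi returns None for missing counters, add_multi
# returns the keys that lost the add race, and raced keys get one retry.

```python
def sketch_offset_or_add(amounts, timeout, offset_multi, add_multi):
    """Offsets existing counters, adding (then retrying) missing ones."""
    results = offset_multi(amounts)
    missing = dict((k, amounts[k]) for k, v in results.items() if v is None)
    if missing:
        # add_multi returns the keys whose add lost a race to another writer.
        raced = add_multi(missing, time=timeout)
        for key in missing:
            if key not in raced:
                results[key] = missing[key]
        if raced:
            # The raced counters now exist, so offsetting should work; if it
            # still fails, the None results are passed through to the caller.
            results.update(offset_multi(dict((k, amounts[k]) for k in raced)))
    return results
```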
class SamplerTest(unittest.TestCase):
"""Tests for the MultiSampler class."""
def setUp(self):
"""Sets up the test harness."""
testutil.setup_for_testing()
self.domainA = 'mydomain.com'
self.domainB = 'example.com'
self.domainC = 'other.com'
self.domainD = 'meep.com'
self.url1 = 'http://mydomain.com/stuff/meep'
self.url2 = 'http://example.com/some-path?a=b'
self.url3 = 'http://example.com'
self.url4 = 'http://other.com/relative'
self.url5 = 'http://meep.com/another-one'
self.all_urls = [self.url1, self.url2, self.url3, self.url4, self.url5]
self.randrange_results = []
self.fake_randrange = lambda value: self.randrange_results.pop(0)
self.random_results = []
self.fake_random = lambda: self.random_results.pop(0)
self.gettime_results = []
self.fake_gettime = lambda: self.gettime_results.pop(0)
def verify_sample(self,
results,
key,
expected_count,
expected_frequency,
expected_average=1,
expected_min=1,
expected_max=1):
"""Verifies a sample key is present in the results.
Args:
results: SampleResult object.
key: String key of the sample to test.
expected_count: How many samples should be present in the results.
expected_frequency: The frequency of this single key.
expected_average: Expected average value across samples of this key.
expected_min: Expected minimum value across samples of this key.
expected_max: Expected maximum value across samples of this key.
Raises:
AssertionError if any of the expectations are not met.
"""
self.assertEquals(expected_count, results.get_count(key))
self.assertTrue(
-0.001 < (expected_frequency - results.get_frequency(key)) < 0.001,
'Difference %f - %f = %f' % (
expected_frequency, results.get_frequency(key),
expected_frequency - results.get_frequency(key)))
self.assertTrue(
-0.001 < (expected_average - results.get_average(key)) < 0.001,
'Difference %f - %f = %f' % (
expected_average, results.get_average(key),
expected_average - results.get_average(key)))
self.assertEquals(expected_min, results.get_min(key))
self.assertEquals(expected_max, results.get_max(key))
def verify_no_sample(self, results, key):
"""Verifies a sample key is not present in the results.
Args:
results: SampleResult object.
key: String key of the sample to test.
Raises:
AssertionError if the key is present.
"""
self.assertEquals(0, len(results.get_samples(key)))
def testSingleAlways(self):
"""Tests single-config sampling when the sampling rate is 100%."""
config = dos.ReservoirConfig(
'always',
period=300,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(5, results.total_samples)
self.assertEquals(5, results.unique_samples)
self.verify_sample(results, self.domainA, 1, 0.1)
self.verify_sample(results, self.domainB, 2, 0.2)
self.verify_sample(results, self.domainC, 1, 0.1)
self.verify_sample(results, self.domainD, 1, 0.1)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(10, results.total_samples)
self.assertEquals(10, results.unique_samples)
self.verify_sample(results, self.domainA, 2, 0.2)
self.verify_sample(results, self.domainB, 4, 0.4)
self.verify_sample(results, self.domainC, 2, 0.2)
self.verify_sample(results, self.domainD, 2, 0.2)
reporter = dos.Reporter()
reporter.set(self.url1, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(11, results.total_samples)
self.assertEquals(11, results.unique_samples)
self.verify_sample(results, self.domainA, 3, 0.3)
self.verify_sample(results, self.domainB, 4, 0.4)
self.verify_sample(results, self.domainC, 2, 0.2)
self.verify_sample(results, self.domainD, 2, 0.2)
def testSingleOverwrite(self):
"""Tests when the number of slots is lower than the sample count."""
config = dos.ReservoirConfig(
'always',
period=300,
rate=1,
samples=2,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
# Writes samples to indexes 0 and 1, then overwrites index 1 with a
# second URL from the same domain.
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
self.gettime_results.extend([0, 1])
self.randrange_results.extend([1])
sampler.sample(reporter, randrange=self.fake_randrange)
results = sampler.get(config)
self.assertEquals(3, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_sample(results, self.domainA, 1, 1.5)
self.verify_sample(results, self.domainB, 1, 1.5)
# Overwrites the sample at index 0, skewing all results towards the
# domain from index 1.
reporter = dos.Reporter()
reporter.set(self.url3, config)
self.gettime_results.extend([0, 1])
self.randrange_results.extend([0])
sampler.sample(reporter, randrange=self.fake_randrange)
results = sampler.get(config)
self.assertEquals(4, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_sample(results, self.domainB, 2, 4.0)
self.verify_no_sample(results, self.domainA)
# Now a sample outside the range won't replace anything.
self.gettime_results.extend([0, 1])
self.randrange_results.extend([3])
sampler.sample(reporter, randrange=self.fake_randrange)
results = sampler.get(config)
self.assertEquals(5, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_sample(results, self.domainB, 2, 5.0)
self.verify_no_sample(results, self.domainA)
def testSingleSampleRate(self):
"""Tests when the sampling rate is less than 1."""
config = dos.ReservoirConfig(
'always',
period=300,
rate=0.2,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([0, 10])
self.random_results.extend([0.25, 0.199, 0.1, 0, 0.201])
sampler.sample(reporter, getrandom=self.fake_random)
results = sampler.get(config)
self.assertEquals(3, results.total_samples)
self.assertEquals(3, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainB, 2,
(1.0/0.2) * (2.0/3.0) * (3.0/10.0))
self.verify_sample(results, self.domainC, 1,
(1.0/0.2) * (1.0/3.0) * (3.0/10.0))
def testSingleDoubleSampleRemoved(self):
"""Tests when the same sample key is set twice and one is skipped.
Setting the value twice should just overwrite the previous value for a key,
but we store the keys in full order (with dupes) for simpler tests. This
ensures that incorrectly using the sampler with multiple sets won't barf.
"""
config = dos.ReservoirConfig(
'always',
period=300,
rate=0.2,
samples=4,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([0, 10])
self.randrange_results.extend([0])
self.random_results.extend([0.25, 0.199, 0.1, 0, 0.3, 0.3])
sampler.sample(reporter, getrandom=self.fake_random)
results = sampler.get(config)
self.assertEquals(3, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainB, 2,
(1.0/0.2) * (2.0/2.0) * (3.0/10.0))
def testSingleSampleRateReplacement(self):
"""Tests when the sample rate is < 1 and slots are overwritten."""
config = dos.ReservoirConfig(
'always',
period=300,
rate=0.2,
samples=2,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
self.gettime_results.extend([0, 10])
self.randrange_results.extend([1])
self.random_results.extend([0.25, 0.199, 0.1, 0])
sampler.sample(reporter, getrandom=self.fake_random)
results = sampler.get(config)
self.assertEquals(3, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainB, 1,
(1.0/0.2) * (1.0/2.0) * (3.0/10.0))
self.verify_sample(results, self.domainC, 1,
(1.0/0.2) * (1.0/2.0) * (3.0/10.0))
def testSingleSampleValues(self):
"""Tests various samples with expected values."""
config = dos.ReservoirConfig(
'always',
period=300,
rate=0.2,
samples=4,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config, 5)
reporter.set(self.url1, config, 20) # in
reporter.set(self.url2, config, 10) # in
reporter.set(self.url2 + '&more=true', config, 25) # in
reporter.set(self.url3, config, 20) # in
reporter.set(self.url4, config, 40) # in
reporter.set(self.url5, config, 60)
self.gettime_results.extend([0, 10])
self.randrange_results.extend([0])
self.random_results.extend([0.25, 0.199, 0.1, 0, 0, 0.1, 0.3])
sampler.sample(reporter,
randrange=self.fake_randrange,
getrandom=self.fake_random)
results = sampler.get(config)
self.assertEquals(5, results.total_samples)
self.assertEquals(4, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainB, 3,
(1.0/0.2) * (3.0/4.0) * (5.0/10.0),
expected_average=18.333,
expected_min=10,
expected_max=25)
self.verify_sample(results, self.domainC, 1,
(1.0/0.2) * (1.0/4.0) * (5.0/10.0),
expected_average=40,
expected_min=40,
expected_max=40)
def testResetTimestamp(self):
"""Tests resetting the timestamp after the period elapses."""
config = dos.ReservoirConfig(
'always',
period=10,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
self.gettime_results.extend([0, 5])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(1, results.total_samples)
self.assertEquals(1, results.unique_samples)
self.verify_sample(results, self.domainA, 1, 1.0 / 5)
self.verify_no_sample(results, self.domainB)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
reporter = dos.Reporter()
reporter.set(self.url2, config)
self.gettime_results.extend([15, 16])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(1, results.total_samples)
self.assertEquals(1, results.unique_samples)
self.verify_sample(results, self.domainB, 1, 1.0)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
def testSingleUnicodeKey(self):
"""Tests when a sampling key is unicode.
Keys must be UTF-8 encoded ourselves, because otherwise the memcache API
will encode them for us (and break in the process).
"""
config = dos.ReservoirConfig(
'always',
period=300,
samples=10000,
by_url=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
key = u'this-breaks-stuff\u30d6\u30ed\u30b0\u8846'
key_utf8 = key.encode('utf-8')
reporter.set(key, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(1, results.total_samples)
self.assertEquals(1, results.unique_samples)
self.verify_sample(results, key_utf8, 1, 0.1)
def testMultiple(self):
"""Tests multiple configs being applied together."""
config1 = dos.ReservoirConfig(
'first',
period=300,
samples=10000,
by_domain=True)
config2 = dos.ReservoirConfig(
'second',
period=300,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config1, config2], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config1)
reporter.set(self.url2, config1)
reporter.set(self.url3, config1)
reporter.set(self.url4, config1)
reporter.set(self.url5, config1)
reporter.set(self.url1, config2, 5)
reporter.set(self.url2, config2, 5)
reporter.set(self.url3, config2, 5)
reporter.set(self.url4, config2, 5)
reporter.set(self.url5, config2, 5)
self.gettime_results.extend([0, 10, 10])
sampler.sample(reporter)
results1 = sampler.get(config1)
self.assertEquals(5, results1.total_samples)
self.assertEquals(5, results1.unique_samples)
self.verify_sample(results1, self.domainA, 1, 0.1)
self.verify_sample(results1, self.domainB, 2, 0.2)
self.verify_sample(results1, self.domainC, 1, 0.1)
self.verify_sample(results1, self.domainD, 1, 0.1)
results2 = sampler.get(config2)
self.assertEquals(5, results2.total_samples)
self.assertEquals(5, results2.unique_samples)
self.verify_sample(results2, self.domainA, 1, 0.1,
expected_max=5,
expected_min=5,
expected_average=5)
self.verify_sample(results2, self.domainB, 2, 0.2,
expected_max=5,
expected_min=5,
expected_average=5)
self.verify_sample(results2, self.domainC, 1, 0.1,
expected_max=5,
expected_min=5,
expected_average=5)
self.verify_sample(results2, self.domainD, 1, 0.1,
expected_max=5,
expected_min=5,
expected_average=5)
def testGetSingleKey(self):
"""Tests getting the stats for a single key."""
config = dos.ReservoirConfig(
'single-sample',
period=300,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url3 + '&okay=1', config)
reporter.set(self.url3 + '&okay=2', config)
reporter.set(self.url3 + '&okay=3', config)
reporter.set(self.url3 + '&okay=4', config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([0, 10, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(9, results.total_samples)
self.assertEquals(9, results.unique_samples)
self.verify_sample(results, self.domainA, 1, 0.1)
self.verify_sample(results, self.domainB, 6, 0.6)
self.verify_sample(results, self.domainC, 1, 0.1)
self.verify_sample(results, self.domainD, 1, 0.1)
results = sampler.get(config, self.url2)
self.assertEquals(6, results.total_samples)
self.assertEquals(6, results.unique_samples)
self.verify_sample(results, self.domainB, 6, 0.6)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
def testCountLost(self):
"""Tests when the count variable disappears between samples."""
config = dos.ReservoirConfig(
'lost_count',
period=300,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(2, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainA, 1, 0.1)
self.verify_sample(results, self.domainB, 1, 0.1)
memcache.delete('lost_count:by_domain:counter')
reporter = dos.Reporter()
reporter.set(self.url4, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(1, results.total_samples)
# Two samples found because we're still in the same period tolerance.
# Sample at index 0 will be overwritten with the new entry, meaning
# domain A is gone.
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainB, 1, 0.05)
self.verify_sample(results, self.domainC, 1, 0.05)
def testStampLost(self):
"""Tests when the start timestamp is lost between samples."""
config = dos.ReservoirConfig(
'lost_stamp',
period=300,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(2, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainA, 1, 0.1)
self.verify_sample(results, self.domainB, 1, 0.1)
memcache.delete('lost_stamp:by_domain:start_time')
reporter = dos.Reporter()
reporter.set(self.url4, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
results = sampler.get(config)
self.assertEquals(1, results.total_samples)
# Just like losing the count, old samples found because we're still in the
# same period tolerance. Sample at index 0 will be overwritten with the new
# entry, meaning domain A is gone.
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainD)
self.verify_sample(results, self.domainB, 1, 0.05)
self.verify_sample(results, self.domainC, 1, 0.05)
def testSamplesLost(self):
"""Tests when some unique samples were evicted."""
config = dos.ReservoirConfig(
'lost_sample',
period=300,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
memcache.delete_multi([
'lost_sample:by_domain:0',
'lost_sample:by_domain:1',
'lost_sample:by_domain:2',
])
results = sampler.get(config)
self.assertEquals(5, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainB)
self.verify_sample(results, self.domainC, 1, 0.25)
self.verify_sample(results, self.domainD, 1, 0.25)
def testBeforePeriod(self):
"""Tests when the samples retrieved are too old."""
config = dos.ReservoirConfig(
'old_samples',
period=10,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([20, 40])
sampler.sample(reporter)
memcache.set('old_samples:by_domain:start_time', 0)
results = sampler.get(config)
self.assertEquals(5, results.total_samples)
self.assertEquals(0, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainB)
self.verify_no_sample(results, self.domainC)
self.verify_no_sample(results, self.domainD)
def testBadSamples(self):
"""Tests when getting samples with memcache values that are bad."""
config = dos.ReservoirConfig(
'bad_samples',
period=10,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config)
reporter.set(self.url2, config)
reporter.set(self.url3, config)
reporter.set(self.url4, config)
reporter.set(self.url5, config)
self.gettime_results.extend([0, 10])
sampler.sample(reporter)
# Totally bad.
memcache.set('bad_samples:by_domain:0', 'garbage')
# Bad value.
memcache.set('bad_samples:by_domain:1',
'%s:\0\0\0\1:' % self.domainB)
# Bad when.
memcache.set('bad_samples:by_domain:2',
'%s::\0\0\0\1' % self.domainB)
results = sampler.get(config)
self.assertEquals(5, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_no_sample(results, self.domainA)
self.verify_no_sample(results, self.domainB)
self.verify_sample(results, self.domainC, 1, 0.25)
self.verify_sample(results, self.domainD, 1, 0.25)
def testGetChain(self):
"""Tests getting results from multiple configs in a single call."""
config1 = dos.ReservoirConfig(
'first',
period=300,
rate=1,
samples=10000,
by_domain=True)
config2 = dos.ReservoirConfig(
'second',
period=300,
rate=1,
samples=10000,
by_url=True)
sampler = dos.MultiSampler([config1, config2], gettime=self.fake_gettime)
reporter = dos.Reporter()
reporter.set(self.url1, config1)
reporter.set(self.url2, config1)
reporter.set(self.url3, config1)
reporter.set(self.url4, config1)
reporter.set(self.url5, config1)
reporter.set(self.url1, config2)
reporter.set(self.url2, config2)
reporter.set(self.url3, config2)
reporter.set(self.url4, config2)
reporter.set(self.url5, config2)
self.gettime_results.extend([0, 10, 10, 10, 10])
sampler.sample(reporter)
result_iter = sampler.get_chain(config1, config2)
# Results for config1
results = result_iter.next()
self.assertEquals(5, results.total_samples)
self.assertEquals(5, results.unique_samples)
self.verify_sample(results, self.domainA, 1, 0.1)
self.verify_sample(results, self.domainB, 2, 0.2)
self.verify_sample(results, self.domainC, 1, 0.1)
self.verify_sample(results, self.domainD, 1, 0.1)
# Results for config2
results = result_iter.next()
self.assertEquals(5, results.total_samples)
self.assertEquals(5, results.unique_samples)
self.verify_sample(results, self.url1, 1, 0.1)
self.verify_sample(results, self.url2, 1, 0.1)
self.verify_sample(results, self.url3, 1, 0.1)
self.verify_sample(results, self.url4, 1, 0.1)
self.verify_sample(results, self.url5, 1, 0.1)
# Single key test
result_iter = sampler.get_chain(
config1, config2,
single_key=self.url2)
# Results for config1
results = result_iter.next()
self.assertEquals(2, results.total_samples)
self.assertEquals(2, results.unique_samples)
self.verify_sample(results, self.domainB, 2, 0.2)
# Results for config2
results = result_iter.next()
self.assertEquals(1, results.total_samples)
self.assertEquals(1, results.unique_samples)
self.verify_sample(results, self.url2, 1, 0.1)
def testConfig(self):
"""Tests config validation."""
# Bad name.
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'',
period=10,
samples=10,
by_domain=True)
# Bad period.
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=0,
samples=10,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=-1,
samples=10,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period='bad',
samples=10,
by_domain=True)
# Bad samples.
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=0,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=-1,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples='bad',
by_domain=True)
# Bad domain/url combo.
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
by_domain=True,
by_url=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
by_domain=False,
by_url=False)
# Bad rate.
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
rate=-1,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
rate=1.1,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
rate='bad',
by_domain=True)
# Bad tolerance.
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
tolerance=-1,
by_domain=True)
self.assertRaises(
dos.ConfigError,
dos.ReservoirConfig,
'my name',
period=10,
samples=10,
tolerance='bad',
by_domain=True)
def testSampleProfile(self):
"""Profiles the sample method with lots of data."""
print 'Tracked objects start', len(gc.get_objects())
config = dos.ReservoirConfig(
'testing',
period=10,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config])
reporter = dos.Reporter()
fake_urls = ['http://example-%s.com/meep' % i
for i in xrange(100)]
for i in xrange(100000):
reporter.set(random.choice(fake_urls), config, random.randint(0, 10000))
del fake_urls
gc.collect()
dos._DOMAIN_CACHE.clear()
gc.disable()
gc.set_debug(gc.DEBUG_STATS | gc.DEBUG_LEAK)
try:
# Swap the two following lines to profile memory vs. CPU
sampler.sample(reporter)
#cProfile.runctx('sampler.sample(reporter)', globals(), locals())
memcache.flush_all() # Clear the string references
print 'Tracked objects before collection', len(gc.get_objects())
dos._DOMAIN_CACHE.clear()
del reporter
del sampler
finally:
print 'Unreachable', gc.collect()
print 'Tracked objects after collection', len(gc.get_objects())
gc.set_debug(0)
gc.enable()
def testGetProfile(self):
"""Profiles the get method when there's lots of data."""
print 'Tracked objects start', len(gc.get_objects())
config = dos.ReservoirConfig(
'testing',
period=10,
rate=1,
samples=10000,
by_domain=True)
sampler = dos.MultiSampler([config])
reporter = dos.Reporter()
fake_urls = ['http://example-%s.com/meep' % i
for i in xrange(100)]
for i in xrange(100000):
reporter.set(random.choice(fake_urls), config, random.randint(0, 10000))
del fake_urls
dos._DOMAIN_CACHE.clear()
gc.collect()
sampler.sample(reporter)
gc.disable()
gc.set_debug(gc.DEBUG_STATS | gc.DEBUG_LEAK)
try:
# Swap the two following lines to profile memory vs. CPU
result = sampler.get(config)
#cProfile.runctx('result = sampler.get(config)', globals(), locals())
memcache.flush_all() # Clear the string references
print 'Tracked objects before collection', len(gc.get_objects())
try:
del result
except NameError:
# 'result' is unbound when the cProfile variant above was used, since
# runctx binds it in a locals() snapshot rather than this scope.
pass
dos._DOMAIN_CACHE.clear()
del reporter
del sampler
finally:
print 'Unreachable', gc.collect()
print 'Tracked objects after collection', len(gc.get_objects())
gc.set_debug(0)
gc.enable()
################################################################################
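# The expected_frequency values asserted throughout SamplerTest all follow
# one formula: a key's share of the unique reservoir slots, scaled by the
# observed event rate and corrected for the sampling probability. A hedged
# restatement of that arithmetic (the real dos code may organize it
# differently):

```python
def sketch_frequency(key_count, unique_samples, total_samples,
                     elapsed_seconds, rate=1.0):
    """Estimated events/sec for one key from reservoir-sample counts."""
    share = float(key_count) / unique_samples      # fraction of the reservoir
    events_per_sec = float(total_samples) / elapsed_seconds
    return (1.0 / rate) * share * events_per_sec   # undo the sampling rate
```

# This matches, e.g., testSingleSampleRate's expectation of
# (1.0/0.2) * (2.0/3.0) * (3.0/10.0) for domainB.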
class UrlScorerTest(unittest.TestCase):
"""Tests for the UrlScorer class."""
def setUp(self):
"""Sets up the test harness."""
testutil.setup_for_testing()
self.domain1 = 'mydomain.com'
self.domain2 = 'example.com'
self.domain3 = 'other.com'
self.url1 = 'http://mydomain.com/stuff/meep'
self.url2 = 'http://example.com/some-path?a=b'
self.url3 = 'http://example.com'
self.url4 = 'http://other.com/relative'
self.scorer = dos.UrlScorer(
period=60,
min_requests=1,
max_failure_percentage=0.2,
prefix='test')
def testConfig(self):
"""Tests that the config parameters are sanitized."""
# Bad periods
self.assertRaises(dos.ConfigError,
dos.UrlScorer,
period=0,
min_requests=1,
max_failure_percentage=0.2,
prefix='test:')
self.assertRaises(dos.ConfigError,
dos.UrlScorer,
period=-1,
min_requests=1,
max_failure_percentage=0.2,
prefix='test:')
self.assertRaises(dos.ConfigError,
dos.UrlScorer,
period='not an int',
min_requests=1,
max_failure_percentage=0.2,
prefix='test:')
# Bad min_requests
self.assertRaises(dos.ConfigError,
dos.UrlScorer,
period=1,
min_requests='bad',
max_failure_percentage=0.2,
prefix='test:')
self.assertRaises(dos.ConfigError,
dos.UrlScorer,
period=1,
min_requests=-1,
            max_failure_percentage=0.2,
            prefix='test:')
        # Bad max_failure_percentage
        self.assertRaises(dos.ConfigError,
            dos.UrlScorer,
            period=1,
            min_requests=1,
            max_failure_percentage='not a float',
            prefix='test:')
        self.assertRaises(dos.ConfigError,
            dos.UrlScorer,
            period=1,
            min_requests=1,
            max_failure_percentage=2,
            prefix='test:')
        self.assertRaises(dos.ConfigError,
            dos.UrlScorer,
            period=1,
            min_requests=1,
            max_failure_percentage=-1,
            prefix='test:')
        # Bad prefix
        self.assertRaises(dos.ConfigError,
            dos.UrlScorer,
            period=1,
            min_requests=1,
            max_failure_percentage=0.2,
            prefix='')
        self.assertRaises(dos.ConfigError,
            dos.UrlScorer,
            period=1,
            min_requests=1,
            max_failure_percentage=0.2,
            prefix=123)

    def testReport(self):
        """Tests reporting domain status."""
        self.scorer.report(
            [self.url1, self.url2], [self.url3, self.url4])
        self.assertEquals(1, memcache.get('scoring:test:success:' + self.domain1))
        self.assertEquals(0, memcache.get('scoring:test:failure:' + self.domain1))
        self.assertEquals(1, memcache.get('scoring:test:success:' + self.domain2))
        self.assertEquals(1, memcache.get('scoring:test:failure:' + self.domain2))
        self.assertEquals(0, memcache.get('scoring:test:success:' + self.domain3))
        self.assertEquals(1, memcache.get('scoring:test:failure:' + self.domain3))

        self.scorer.report(
            [self.url1, self.url2, self.url3, self.url4], [])
        self.assertEquals(2, memcache.get('scoring:test:success:' + self.domain1))
        self.assertEquals(0, memcache.get('scoring:test:failure:' + self.domain1))
        self.assertEquals(3, memcache.get('scoring:test:success:' + self.domain2))
        self.assertEquals(1, memcache.get('scoring:test:failure:' + self.domain2))
        self.assertEquals(1, memcache.get('scoring:test:success:' + self.domain3))
        self.assertEquals(1, memcache.get('scoring:test:failure:' + self.domain3))

        self.scorer.report(
            [], [self.url1, self.url2, self.url3, self.url4])
        self.assertEquals(2, memcache.get('scoring:test:success:' + self.domain1))
        self.assertEquals(1, memcache.get('scoring:test:failure:' + self.domain1))
        self.assertEquals(3, memcache.get('scoring:test:success:' + self.domain2))
        self.assertEquals(3, memcache.get('scoring:test:failure:' + self.domain2))
        self.assertEquals(1, memcache.get('scoring:test:success:' + self.domain3))
        self.assertEquals(2, memcache.get('scoring:test:failure:' + self.domain3))

    def testBelowMinRequests(self):
        """Tests when there are enough failures but not enough total requests."""
        memcache.set('scoring:test:success:' + self.domain1, 0)
        memcache.set('scoring:test:failure:' + self.domain1, 10)
        self.assertEquals(
            [(True, 1), (True, 0)],
            self.scorer.filter([self.url1, self.url2]))

    def testFailurePercentageTooLow(self):
        """Tests when there are enough requests but too few failures."""
        memcache.set('scoring:test:success:' + self.domain1, 100)
        memcache.set('scoring:test:failure:' + self.domain1, 1)
        self.assertEquals(
            [(True, 1/101.0), (True, 0)],
            self.scorer.filter([self.url1, self.url2]))

    def testNotAllowed(self):
        """Tests when a result is blocked due to overage."""
        memcache.set('scoring:test:success:' + self.domain1, 100)
        memcache.set('scoring:test:failure:' + self.domain1, 30)
        self.assertEquals(
            [(False, 30/130.0), (True, 0)],
            self.scorer.filter([self.url1, self.url2]))

    def testGetScores(self):
        """Tests getting the scores of URLs."""
        memcache.set('scoring:test:success:' + self.domain1, 2)
        memcache.set('scoring:test:failure:' + self.domain1, 1)
        memcache.set('scoring:test:success:' + self.domain2, 3)
        memcache.set('scoring:test:failure:' + self.domain2, 3)
        memcache.set('scoring:test:success:' + self.domain3, 1)
        memcache.set('scoring:test:failure:' + self.domain3, 2)
        self.assertEquals(
            [(2, 1), (3, 3), (1, 2)],
            self.scorer.get_scores([self.url1, self.url2, self.url4]))

    def testBlackhole(self):
        """Tests blackholing a URL."""
        self.assertEquals(
            [(True, 0), (True, 0)],
            self.scorer.filter([self.url1, self.url2]))
        self.scorer.blackhole([self.url1, self.url2])
        self.assertEquals(
            [(False, 1.0), (False, 1.0)],
            self.scorer.filter([self.url1, self.url2]))


################################################################################

if __name__ == '__main__':
    unittest.main()
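The expected `(allowed, score)` tuples in the tests above pin down a simple allow/score rule. The sketch below is a reconstruction from those expected values only — it is not the actual `dos.UrlScorer` implementation, and `min_requests=20` in the assertions is an arbitrary stand-in threshold:

```python
def filter_score(successes, failures, min_requests, max_failure_percentage):
    """Reconstructed rule: block only when there are enough total requests
    AND the failure ratio exceeds the configured maximum."""
    total = successes + failures
    score = failures / float(total) if total else 0.0
    allowed = total < min_requests or score <= max_failure_percentage
    return allowed, score


# Mirrors the percentage checks exercised above:
assert filter_score(100, 1, 20, 0.2) == (True, 1 / 101.0)    # too few failures
assert filter_score(100, 30, 20, 0.2) == (False, 30 / 130.0) # blocked: over 20%
assert filter_score(0, 10, 20, 0.2) == (True, 1.0)           # below min_requests
```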
69b325f341ef62c943068e3e5489628529369275 | 1,178 bytes | Python | swampdragon/permissions.py | h-hirokawa/swampdragon | BSD-3-Clause

def login_required(func):
    def not_logged_in(self, **kwargs):
        self.send_login_required({'signin_required': 'you need to sign in'})
        return

    def check_user(self, **kwargs):
        user = self.connection.user
        if not user:
            return not_logged_in(self, **kwargs)
        return func(self, **kwargs)
    return check_user


class RoutePermission(object):
    def test_permission(self, handler, verb, **kwargs):
        raise NotImplementedError("You need to implement test_permission")

    def permission_failed(self, handler):
        raise NotImplementedError("You need to implement permission_failed")


class LoginRequired(RoutePermission):
    def __init__(self, verbs=None):
        self.test_against_verbs = verbs

    def test_permission(self, handler, verb, **kwargs):
        if not self.test_against_verbs:
            return handler.connection.user is not None
        if verb not in self.test_against_verbs:
            return True
        user = handler.connection.user
        return user is not None

    def permission_failed(self, handler):
        handler.send_login_required()
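The `login_required` decorator only relies on the wrapped object exposing `connection.user` and `send_login_required`, so a tiny stand-in is enough to exercise both branches. `Connection` and `EchoRouter` below are invented test doubles for illustration, not SwampDragon classes; the decorator is restated so the sketch is self-contained:

```python
def login_required(func):
    # Same shape as the decorator above.
    def not_logged_in(self, **kwargs):
        self.send_login_required({'signin_required': 'you need to sign in'})
        return

    def check_user(self, **kwargs):
        if not self.connection.user:
            return not_logged_in(self, **kwargs)
        return func(self, **kwargs)
    return check_user


class Connection:
    def __init__(self, user):
        self.user = user


class EchoRouter:
    """Hypothetical handler: records what send_login_required receives."""
    def __init__(self, user):
        self.connection = Connection(user)
        self.errors = []

    def send_login_required(self, data):
        self.errors.append(data)

    @login_required
    def get_list(self, **kwargs):
        return ['item']


anonymous = EchoRouter(user=None)
assert anonymous.get_list() is None                 # short-circuits
assert anonymous.errors == [{'signin_required': 'you need to sign in'}]

member = EchoRouter(user='alice')
assert member.get_list() == ['item']                # wrapped function runs
assert member.errors == []
```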
69c6c72b517f318abef39f7e0a2123851324a06f | 366 bytes | Python | textattack/__init__.py | cclauss/TextAttack | MIT

name = "textattack"
from . import attack_recipes
from . import attack_results
from . import augmentation
from . import commands
from . import constraints
from . import datasets
from . import goal_functions
from . import goal_function_results
from . import loggers
from . import models
from . import search_methods
from . import shared
from . import transformations
69cd8c422b66b48552fe869a8f829a04bb8582ad | 502 bytes | Python | packages/pyright-internal/src/tests/samples/dataclass2.py | Jasha10/pyright | MIT

# This sample tests the handling of Callable fields within a
# dataclass definition.
# pyright: strict
from dataclasses import dataclass
from typing import Any, Callable, TypeVar
CallableT = TypeVar("CallableT", bound=Callable[..., Any])
def decorate(arg: CallableT) -> CallableT:
    return arg


def f(s: str) -> int:
    return int(s)


@dataclass
class C:
    str_to_int: Callable[[str], int] = f


c = C()
reveal_type(c.str_to_int, expected_text="(str) -> int")
c.str_to_int = decorate(f)
69e4d8b96148c4233d14b9dd358779ec63755ab3 | 5,827 bytes | Python (Jupyter notebook JSON) | .ipynb_checkpoints/climate_app-checkpoint.py | Zone6Mars/sqlalchemy-challenge | ADSL

{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "f9af1484",
"metadata": {},
"outputs": [],
"source": [
"# ---------------------- STEP 2: Climate APP\n",
"\n",
"from flask import Flask, json, jsonify\n",
"import datetime as dt\n",
"\n",
"import sqlalchemy\n",
"from sqlalchemy.ext.automap import automap_base\n",
"from sqlalchemy.orm import Session\n",
"from sqlalchemy import create_engine, func\n",
"from sqlalchemy import inspect\n",
"\n",
"engine = create_engine(\"sqlite:///./Resources/hawaii.sqlite\", connect_args={'check_same_thread': False})\n",
"# reflect an existing database into a new model\n",
"Base = automap_base()\n",
"# reflect the tables\n",
"Base.prepare(engine, reflect=True)\n",
"\n",
"# Save references to each table\n",
"Measurement = Base.classes.measurement\n",
"Station = Base.classes.station\n",
"session = Session(engine)\n",
"\n",
"app = Flask(__name__) # the name of the file & the object (double usage)\n",
"\n",
"# List all routes that are available.\n",
"@app.route(\"/\")\n",
"def home():\n",
" print(\"In & Out of Home section.\")\n",
" return (\n",
" f\"Welcome to the Climate API!<br/>\"\n",
" f\"Available Routes:<br/>\"\n",
" f\"/api/v1.0/precipitation<br/>\"\n",
" f\"/api/v1.0/stations<br/>\"\n",
" f\"/api/v1.0/tobs<br/>\"\n",
" f\"/api/v1.0/2016-01-01/<br/>\"\n",
" f\"/api/v1.0/2016-01-01/2016-12-31/\"\n",
" )\n",
"\n",
"# Return the JSON representation of your dictionary\n",
"@app.route('/api/v1.0/precipitation/')\n",
"def precipitation():\n",
" print(\"In Precipitation section.\")\n",
" \n",
" last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first().date\n",
" last_year = dt.datetime.strptime(last_date, '%Y-%m-%d') - dt.timedelta(days=365)\n",
"\n",
" rain_results = session.query(Measurement.date, Measurement.prcp).\\\n",
" filter(Measurement.date >= last_year).\\\n",
" order_by(Measurement.date).all()\n",
"\n",
" p_dict = dict(rain_results)\n",
" print(f\"Results for Precipitation - {p_dict}\")\n",
" print(\"Out of Precipitation section.\")\n",
" return jsonify(p_dict) \n",
"\n",
"# Return a JSON-list of stations from the dataset.\n",
"@app.route('/api/v1.0/stations/')\n",
"def stations():\n",
" print(\"In station section.\")\n",
" \n",
" station_list = session.query(Station.station)\\\n",
" .order_by(Station.station).all() \n",
" print()\n",
" print(\"Station List:\") \n",
" for row in station_list:\n",
" print (row[0])\n",
" print(\"Out of Station section.\")\n",
" return jsonify(station_list)\n",
"\n",
"# Return a JSON-list of Temperature Observations from the dataset.\n",
"@app.route('/api/v1.0/tobs/')\n",
"def tobs():\n",
" print(\"In TOBS section.\")\n",
" \n",
" last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first().date\n",
" last_year = dt.datetime.strptime(last_date, '%Y-%m-%d') - dt.timedelta(days=365)\n",
"\n",
" temp_obs = session.query(Measurement.date, Measurement.tobs)\\\n",
" .filter(Measurement.date >= last_year)\\\n",
" .order_by(Measurement.date).all()\n",
" print()\n",
" print(\"Temperature Results for All Stations\")\n",
" print(temp_obs)\n",
" print(\"Out of TOBS section.\")\n",
" return jsonify(temp_obs)\n",
"\n",
"# Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start date\n",
"@app.route('/api/v1.0/<start_date>/')\n",
"def calc_temps_start(start_date):\n",
" print(\"In start date section.\")\n",
" print(start_date)\n",
" \n",
" select = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]\n",
" result_temp = session.query(*select).\\\n",
" filter(Measurement.date >= start_date).all()\n",
" print()\n",
" print(f\"Calculated temp for start date {start_date}\")\n",
" print(result_temp)\n",
" print(\"Out of start date section.\")\n",
" return jsonify(result_temp)\n",
"\n",
"# Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start-end range.\n",
"@app.route('/api/v1.0/<start_date>/<end_date>/')\n",
"def calc_temps_start_end(start_date, end_date):\n",
" print(\"In start & end date section.\")\n",
" \n",
" select = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]\n",
" result_temp = session.query(*select).\\\n",
" filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()\n",
" print()\n",
" print(f\"Calculated temp for start date {start_date} & end date {end_date}\")\n",
" print(result_temp)\n",
" print(\"Out of start & end date section.\")\n",
" return jsonify(result_temp)\n",
"\n",
"if __name__ == \"__main__\":\n",
" app.run(debug=True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
69e972e1713882eab8d39d95e971636691fd4ffb | 492 bytes | Python | src/phony/scorer.py | direct-phonology/phoNy | MIT

from typing import Any, Dict, Iterable
from spacy.scorer import Scorer
from spacy.training import Example
from spacy.util import registry
def phoneme_score(examples: Iterable[Example], **kwargs) -> Dict[str, Any]:
    return Scorer.score_token_attr(
        examples,
        attr="phonemes",
        getter=lambda t, attr: t._.get(attr),
        missing_values=set("_"),
        **kwargs,
    )


@registry.scorers("phoneme_scorer.v1")
def make_phoneme_scorer():
    return phoneme_score
69fad693ccb90408ffc7f4fafcc2aaf75316dfd9 | 676 bytes | Python | IR_system/util.py | Abhimanyu210100/CS6310-NLP | MIT

# Add your import statements here
# Add any utility functions here

def rel_docs(query_id, qrels):
    j = [item['id'] for item in qrels if item['query_num'] == str(query_id)]
    return j


def rel_score_as_dict_for_query(query_id, qrels):
    docs_score_dict = {int(item['id']): int(item['position']) for item in qrels if int(item['query_num']) == int(query_id)}
    return docs_score_dict


def rel_score_for_query(query_id, qrels):
    score = [int(item['position']) for item in qrels if int(item['query_num']) == int(query_id)]
    i_d = [int(item['id']) for item in qrels if int(item['query_num']) == int(query_id)]
    score_id = [[x, y] for x, y in zip(score, i_d)]
    return score_id
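These helpers only assume qrels entries shaped like `{'query_num': ..., 'id': ..., 'position': ...}` with string values. A tiny worked example (the qrels records below are made up, and two of the functions are restated so the sketch runs on its own):

```python
def rel_docs(query_id, qrels):
    # IDs of documents judged relevant to the given query.
    return [item['id'] for item in qrels if item['query_num'] == str(query_id)]


def rel_score_for_query(query_id, qrels):
    # [position, id] pairs for the given query, as integers.
    score = [int(item['position']) for item in qrels if int(item['query_num']) == int(query_id)]
    i_d = [int(item['id']) for item in qrels if int(item['query_num']) == int(query_id)]
    return [[x, y] for x, y in zip(score, i_d)]


qrels = [
    {'query_num': '1', 'id': '12', 'position': '3'},
    {'query_num': '1', 'id': '51', 'position': '1'},
    {'query_num': '2', 'id': '12', 'position': '2'},
]
assert rel_docs(1, qrels) == ['12', '51']
assert rel_score_for_query(1, qrels) == [[3, 12], [1, 51]]
```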
384f9543017a695ce524e6a158f9fedcbd2709ee | 1,648 bytes | Python | python_scripts_experiment/textprocess/devanagari_characters.py | erc-dharma/tfd-sanskrit-philology | CC-BY-4.0

def devanagari_characters():
    dic = [
        # space:
        (' ', ' '),
        # comma:
        #(',', ' , '),
        # initial vowels:
        ('A', 'अ'),
        ('Ā', 'आ'),
        ('I', 'इ'),
        ('Ī', 'ई'),
        ('U', 'उ'),
        ('Ū', 'ऊ'),
        ('Ṛ', 'ऋ'),
        ('Ṝ', 'ॠ'),
        ('E', 'ए'),
        ('O', 'ओ'),
        ('Đ', 'ऐ'),
        ('Ő', 'औ'),
        ("'", 'ऽ'),
        ('Ó', 'ॐ'),
        # conjunct vowels: ('ṝ', 'ॄ ' )
        ('a', ''), ('ā', 'ा'), ('i', 'ि'), ('ī', 'ी'), ('u', 'ु'),
        ('ū', 'ू'), ('ṛ', 'ृ'), ('ṝ', 'ॄ'), ('ḷ', 'ॢ '), ('ḹ', 'ॣ'),
        ('e', 'े'), ('o', 'ो'), ('đ', 'ै'), ('ő', 'ौ'), ('Ṃ', 'ं'), ('Ḥ', 'ः'),
        # virāma:
        ('V', '्'),
        # consonants:
        ('k', 'क'), ('K', 'ख'), ('g', 'ग'), ('G', 'घ'), ('ṅ', 'ङ'),
        #
        ('c', 'च'), ('C', 'छ'), ('j', 'ज'), ('J', 'झ'), ('ñ', 'ञ'),
        #
        ('ṭ', 'ट'), ('Ṭ', 'ठ'), ('ḍ', 'ड'), ('Ḍ', 'ढ'), ('ṇ', 'ण'),
        #
        ('t', 'त'), ('T', 'थ'), ('d', 'द'), ('D', 'ध'), ('n', 'न'),
        #
        ('p', 'प'), ('P', 'फ'), ('b', 'ब'), ('B', 'भ'), ('m', 'म'),
        #
        ('y', 'य'), ('r', 'र'), ('l', 'ल'), ('v', 'व'), ('ś', 'श'), ('ṣ', 'ष'),
        ('s', 'स'), ('h', 'ह'), ('0', '०'),
        ('1', '१'), ('2', '२'), ('3', '३'), ('4', '४'), ('5', '५'), ('6', '६'),
        ('7', '७'), ('8', '८'), ('9', '९')]
    # 'ा ), this line is just to correct highlighting in this file
    vowels = ["ṃ", "ḥ", 'a', 'i', 'u', 'ṛ', 'ṝ', 'ḷ', 'ā', 'ī', 'ū', 'ṝ', 'ḹ', 'e', 'ai', 'o', 'au', 'đ', 'ő']
    consonants = ["k", "K", "g", "G", "ṅ", "c", "C", "j", "J", "ñ", "ṭ", "Ṭ", "ḍ", "Ḍ", "ṇ", "t", "T", "d", "D", "n", "p", "P", "b", "B", "m", "y", "r", "l", "v", "ś", "ṣ", "s", "h"]
    return dic, vowels, consonants
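Since every key in the table is a single character, the `(key, glyph)` pairs can drive a straightforward per-character converter. A minimal sketch — `DIC` reproduces only a small subset of the table above, and `transliterate` is an illustrative helper, not part of the original file:

```python
# Subset of the scheme: inherent 'a' maps to the empty string,
# uppercase Ḥ is the visarga sign.
DIC = [('a', ''), ('Ḥ', 'ः'), ('n', 'न'), ('m', 'म'), ('s', 'स'), ('t', 'त')]


def transliterate(text, pairs):
    table = dict(pairs)
    # Characters without a mapping pass through unchanged.
    return ''.join(table.get(ch, ch) for ch in text)


assert transliterate('namaḤ', DIC) == 'नमः'
```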
385bc2e53d267a584e403bd1360b954baa108206 | 325 bytes | Python | __init__.py | ray-hoang30/dragon-maid-skill | MIT

from mycroft import MycroftSkill, intent_file_handler
class DragonMaid(MycroftSkill):
    def __init__(self):
        MycroftSkill.__init__(self)

    @intent_file_handler('maid.dragon.intent')
    def handle_maid_dragon(self, message):
        self.speak_dialog("playing it")


def create_skill():
    return DragonMaid()
38663810c24e704d71cb2fec309fd36e8b4ca380 | 1,671 bytes | Python | clictestclient/v1/shell.py | arulkumarkandasamy/python-clictestclient | Apache-2.0

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import copy
import functools
import os
import six
import sys
from oslo_utils import encodeutils
from oslo_utils import strutils
from clictestclient.common import progressbar
from clictestclient.common import utils
from clictestclient import exc
import clictestclient.v1.objectspy
CONTAINER_FORMATS = 'Acceptable formats: ami, ari, aki, bare, and ovf.'
DISK_FORMATS = ('Acceptable formats: ami, ari, aki, vhd, vmdk, raw, '
                'qcow2, vdi, and iso.')
DATA_FIELDS = ('location', 'copy_from', 'file')

_bool_strict = functools.partial(strutils.bool_from_string, strict=True)


def _objectspy_show(test, human_readable=False, max_column_width=80):
    # Flatten test properties dict for display
    info = copy.deepcopy(test._info)
    if human_readable:
        info['size'] = utils.make_size_human_readable(info['size'])
    for (k, v) in six.iteritems(info.pop('properties')):
        info['Property \'%s\'' % k] = v
    utils.print_dict(info, max_column_width=max_column_width)
38741ba3fa576858d52dc51f67e3d8c0f9493042 | 1,405 bytes | Python | crud-flask-demo/cmd/rest/views/user.py | wencan/crud-flask-demo | BSD-3-Clause

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# User API handlers
#
# wencan
# 2019-04-13

import abc

import attr
import flask
from flask.views import MethodView
from werkzeug.exceptions import BadRequest, NotImplemented

from .... import model
from ..abc_permission import AbstractGuard
from ...abcs import UserAbstractService

__all__ = ("UserHandlers", "UserView")


class UserHandlers:
    '''User API handlers'''

    def __init__(self, guard: AbstractGuard, user_service: UserAbstractService):
        self._user_service = user_service
        self._guard = guard

        # Wrap the handler with the authentication check
        self.get_user = self._guard.authorization_required(self.get_user)

    # @guard.authorization_required
    def get_user(self, user_id: int):
        user = self._user_service.get_user(user_id)
        return flask.jsonify(attr.asdict(user))

    def create_user(self):
        name = flask.request.form.get("name", "")
        phone = flask.request.form.get("phone", "")
        user = self._user_service.create_user(name=name, phone=phone)
        return flask.jsonify(attr.asdict(user))


class UserView(MethodView):
    '''User API view'''

    def __init__(self, handlers: UserHandlers):
        self._handlers = handlers

    def get(self, user_id: int):
        '''Get the specified user's information'''
        return self._handlers.get_user(user_id)

    def post(self):
        '''Create a user'''
        return self._handlers.create_user()
3886b97cb045d5b867b9768fe99ec202414b86e6 | 324 bytes | Python | exercicios/ex016.py | RicardoAugusto-RCD/exercicios_python | MIT
# Write a program that reads any real number from the keyboard and shows its integer part on screen.
from math import trunc
num = float(input("Digite um número qualquer: "))
print('A porção Inteira do número {} é {}.\n'.format(num, trunc(num)))
# or
print('A porção Inteira do número {} é {}.'.format(num, int(num)))
3887be2d14b8aa6bee7bf43b4040e122381c5213 | 93 bytes | Python | yc246/1040.py | c-yan/yukicoder | MIT

N = int(input())
t = N % 360
if t == 90 or t == 270:
    print('Yes')
else:
    print('No')
388e0f5beea1610dc4a1fa902ce0b3583ff711b3 | 1,016 bytes | Python | vigil/migrations/0051_auto_20180504_1442.py | inuitwallet/vigil | MIT

# Generated by Django 2.0.3 on 2018-05-04 14:42
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('vigil', '0050_auto_20180504_1348'),
    ]

    operations = [
        migrations.AlterField(
            model_name='alert',
            name='active',
            field=models.BooleanField(default=False),
        ),
        migrations.AlterField(
            model_name='alert',
            name='message',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='alert',
            name='priority',
            field=models.CharField(blank=True, choices=[('Low', 'LOW'), ('Medium', 'MEDIUM'), ('High', 'HIGH'), ('Urgent', 'URGENT'), ('Emergency', 'EMERGENCY')], max_length=255, null=True),
        ),
        migrations.AlterField(
            model_name='alert',
            name='title',
            field=models.CharField(blank=True, max_length=500, null=True),
        ),
    ]
388fbd2e5dd2ed3192bde76cc69cd53afe3ddcf1 | 437 bytes | Python | utils/runningmeanfast.py | jotaylor/SPAMM | BSD-3-Clause-Clear

#! /usr/bin/env python
import numpy as np


def runningMeanFast(x, N):
    '''
    Calculate the running mean of an array given a window.
    Ref: http://stackoverflow.com/questions/13728392/moving-average-or-running-mean

    Args:
        x (array-like): Data array
        N (int): Window width

    Returns:
        An array of averages over each window.
    '''
    return np.convolve(x, np.ones((N,))/N)[(N-1):]
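Because `np.convolve` defaults to `'full'` mode, the returned array has the same length as the input: the leading entries are true N-wide means, while the trailing `N-1` entries are edge artifacts computed from fewer than N samples. A quick check (restating the one-liner so it runs standalone):

```python
import numpy as np


def runningMeanFast(x, N):
    # 'full' convolution, then drop the leading partial windows.
    return np.convolve(x, np.ones((N,)) / N)[(N - 1):]


means = runningMeanFast([1, 2, 3, 4], 2)
# First three entries are genuine 2-wide means; the last (2.0) is an
# edge artifact where only one sample contributes.
assert np.allclose(means, [1.5, 2.5, 3.5, 2.0])
```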
389bdd9e500f7f458e0a90d8410cc6bf1ee03592 | 854 bytes | Python | pylightcommon/migrations/0003_auto_20180616_1105.py | muma7490/PyLightSupport | MIT

# Generated by Django 2.0.6 on 2018-06-16 11:05
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('pylightcommon', '0002_load_fixtures'),
    ]

    operations = [
        migrations.AlterField(
            model_name='connectedsystem',
            name='lastIP',
            field=models.GenericIPAddressField(verbose_name='Last known IP address of Connection'),
        ),
        migrations.AlterField(
            model_name='connectedsystem',
            name='lastMacAddress',
            field=models.CharField(max_length=255, verbose_name='Mac Address of Connection'),
        ),
        migrations.AlterField(
            model_name='connectedsystem',
            name='name',
            field=models.CharField(max_length=255, verbose_name='Name of Connection'),
        ),
    ]
38aa176e6dd3b36f29c3ce283cfcc4b3fad4ed8a | 83 | py | Python | Testes/teste10.py | JefferMarcelino/Python | bf2ebf4f110b1fa1a6226cb98cd16ce18108eb03 | [
"MIT"
] | 2 | 2021-01-27T19:30:02.000Z | 2022-01-10T20:34:47.000Z | Testes/teste10.py | JefferMarcelino/Python | bf2ebf4f110b1fa1a6226cb98cd16ce18108eb03 | [
"MIT"
] | null | null | null | Testes/teste10.py | JefferMarcelino/Python | bf2ebf4f110b1fa1a6226cb98cd16ce18108eb03 | [
"MIT"
] | null | null | null | from tkinter import *
janela = Tk()
janela.geometry("450x200")
janela.mainloop()
| 11.857143 | 26 | 0.722892 | 10 | 83 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0.13253 | 83 | 6 | 27 | 13.833333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0.084337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
38b10f43de2a7c2ffc5d99afae591cc4820cbc45 | 1,544 | py | Python | microraiden/exceptions.py | andrevmatos/microraiden | 2d51e78afaf3c0a8ddab87e59a5260c0064cdbdd | [
"MIT"
] | 417 | 2017-09-19T19:06:23.000Z | 2021-11-28T05:39:23.000Z | microraiden/exceptions.py | andrevmatos/microraiden | 2d51e78afaf3c0a8ddab87e59a5260c0064cdbdd | [
"MIT"
] | 259 | 2017-09-19T20:42:57.000Z | 2020-11-18T01:31:41.000Z | microraiden/exceptions.py | andrevmatos/microraiden | 2d51e78afaf3c0a8ddab87e59a5260c0064cdbdd | [
"MIT"
] | 126 | 2017-09-19T17:11:39.000Z | 2020-12-17T17:05:27.000Z | class MicroRaidenException(Exception):
"""Base exception for uRaiden"""
pass
class InvalidBalanceAmount(MicroRaidenException):
"""Raised if the payment contains lesser balance than the previous one."""
pass
class InvalidBalanceProof(MicroRaidenException):
"""Balance proof data do not make sense."""
pass
class NoOpenChannel(MicroRaidenException):
"""Attempt to use nonexisting channel."""
pass
class InsufficientConfirmations(MicroRaidenException):
"""uRaiden channel doesn't have enough confirmations."""
pass
class NoBalanceProofReceived(MicroRaidenException):
"""Attempt to close channel with no registered payments."""
pass
class InvalidContractVersion(MicroRaidenException):
"""Library is not compatible with the deployed contract version"""
pass
class StateFileException(MicroRaidenException):
"""Base exception class for state file (database) operations"""
pass
class StateContractAddrMismatch(StateFileException):
"""Stored state contract address doesn't match."""
pass
class StateReceiverAddrMismatch(StateFileException):
"""Stored state receiver address doesn't match."""
pass
class StateFileLocked(StateFileException):
"""Another process is already using the database"""
pass
class InsecureStateFile(StateFileException):
"""Permissions of the state file do not match (0600 is expected)."""
pass
class NetworkIdMismatch(StateFileException):
"""RPC endpoint and database have different network id."""
pass
| 24.125 | 78 | 0.743523 | 151 | 1,544 | 7.602649 | 0.516556 | 0.094077 | 0.050523 | 0.031359 | 0.047038 | 0.047038 | 0 | 0 | 0 | 0 | 0 | 0.003123 | 0.170337 | 1,544 | 63 | 79 | 24.507937 | 0.893052 | 0.417746 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
38b4fd5bb961b76969cad2737b830372be102efb | 203 | py | Python | mysite/Tweet_Generator/urls.py | avinsit123/Tweet-o-Pedia | d36970f1426643da295f6f98e8c5fce9fcd7109c | [
"BSD-3-Clause"
] | 1 | 2019-01-15T17:15:26.000Z | 2019-01-15T17:15:26.000Z | mysite/Tweet_Generator/urls.py | avinsit123/Tweet-o-Pedia | d36970f1426643da295f6f98e8c5fce9fcd7109c | [
"BSD-3-Clause"
] | 18 | 2020-01-28T22:31:06.000Z | 2022-03-11T23:37:23.000Z | mysite/Tweet_Generator/urls.py | avinsit123/Tweet-o-Pedia | d36970f1426643da295f6f98e8c5fce9fcd7109c | [
"BSD-3-Clause"
] | 1 | 2019-01-05T08:41:11.000Z | 2019-01-05T08:41:11.000Z | from django.conf.urls import url
from . import views
urlpatterns=[
url(r'^$',views.Tweet_form,name='Tweetform'),
url('yumyum',views.ProcessForm,name='vahmc')
]
| 18.454545 | 58 | 0.586207 | 23 | 203 | 5.130435 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.270936 | 203 | 10 | 59 | 20.3 | 0.797297 | 0 | 0 | 0 | 0 | 0 | 0.108911 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
38c9f3a028c9a6004cc07cecd78d77f4ae06062f | 445 | py | Python | resolver/BaseResolver.py | mibe/flag-loader | f356eff355ea1341f23b78126d2a56b0c5f6e5c3 | [
"MIT"
] | null | null | null | resolver/BaseResolver.py | mibe/flag-loader | f356eff355ea1341f23b78126d2a56b0c5f6e5c3 | [
"MIT"
] | null | null | null | resolver/BaseResolver.py | mibe/flag-loader | f356eff355ea1341f23b78126d2a56b0c5f6e5c3 | [
"MIT"
] | null | null | null | """Part of the flag-loader project.
Copyright: (C) 2014 Michael Bemmerl
License: MIT License (see LICENSE.txt)
Exported classes: BaseResolver
"""
from abc import ABCMeta, abstractmethod
class BaseResolver(object, metaclass=ABCMeta):
"""Abstract base class for all resolvers."""
@abstractmethod
def get_flag(self, data):
pass
@abstractmethod
def normalize(self, data):
pass
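As a usage sketch (not part of the original file), a concrete subclass only needs to implement both abstract methods; `LowercaseResolver` and its flag-naming behavior are hypothetical:

```python
from abc import ABCMeta, abstractmethod


class BaseResolver(object, metaclass=ABCMeta):
    """Abstract base class for all resolvers."""

    @abstractmethod
    def get_flag(self, data):
        pass

    @abstractmethod
    def normalize(self, data):
        pass


class LowercaseResolver(BaseResolver):
    """Hypothetical resolver mapping a country code to a flag file name."""

    def get_flag(self, data):
        # Build the file name from the normalized code.
        return self.normalize(data) + ".png"

    def normalize(self, data):
        # Strip whitespace and lowercase the raw input.
        return data.strip().lower()


print(LowercaseResolver().get_flag(" DE "))  # de.png
```

Instantiating `BaseResolver` itself raises `TypeError`, which is the point of the `ABCMeta` metaclass here.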
| 20.227273 | 49 | 0.653933 | 49 | 445 | 5.918367 | 0.755102 | 0.117241 | 0.082759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012121 | 0.258427 | 445 | 21 | 50 | 21.190476 | 0.866667 | 0.402247 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.25 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
38d429f4445c4116b9edbdeaefd5513f4ef7731b | 769 | py | Python | Etap 3/Logia14/Zad2.py | aszokalski/Logia | 5e29745b01623df8a2f162f143656a76056af407 | [
"MIT"
] | null | null | null | Etap 3/Logia14/Zad2.py | aszokalski/Logia | 5e29745b01623df8a2f162f143656a76056af407 | [
"MIT"
] | null | null | null | Etap 3/Logia14/Zad2.py | aszokalski/Logia | 5e29745b01623df8a2f162f143656a76056af407 | [
"MIT"
] | null | null | null | def obok(n, kost):
szesc = [[[k * n**2 + j * n + i + 1 for i in range(n)] for j in range(n)] for k in range(n)]
p = 0
r = 0
m = 0
for a in range(n):
for b in range(n):
for c in range(n):
if szesc[a][b][c] == kost:
                    p = a
                    r = b
                    m = c
#print(szesc[p][r][m])
ret = []
for indexy in [
[p + 1, r, m], [p - 1, r, m],
[p, r + 1, m], [p, r - 1, m],
[p, r, m + 1], [p, r, m - 1]
]:
if indexy[0] not in range(n) or indexy[1] not in range(n) or indexy[2] not in range(n):
continue
ret.append(szesc[indexy[0]][indexy[1]][indexy[2]])
ret.sort()
return ret
| 26.517241 | 96 | 0.379714 | 128 | 769 | 2.28125 | 0.234375 | 0.215753 | 0.246575 | 0.150685 | 0.191781 | 0.167808 | 0.037671 | 0 | 0 | 0 | 0 | 0.04038 | 0.452536 | 769 | 28 | 97 | 27.464286 | 0.653207 | 0.027308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
38d4a51d8e4d06efbce08b3caa7247ed32c9a2ae | 497 | py | Python | apps/user/forms.py | jimforit/lagou | 165593a15597012092b5e0ba34158fbc1d1c213d | [
"MIT"
] | 2 | 2019-03-11T03:58:19.000Z | 2020-03-06T06:45:28.000Z | apps/user/forms.py | jimforit/lagou | 165593a15597012092b5e0ba34158fbc1d1c213d | [
"MIT"
] | 5 | 2020-06-05T20:04:20.000Z | 2021-09-08T00:53:52.000Z | apps/user/forms.py | jimforit/lagou | 165593a15597012092b5e0ba34158fbc1d1c213d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding=utf-8 -*-
__author__ = 'jimit'
__CreateAt__ = '2019/2/25 14:48'
from django import forms
from captcha.fields import CaptchaField
import re
class RegisterForm(forms.Form):
email = forms.EmailField(required=True)
    password = forms.CharField(required=True, label='password', widget=forms.PasswordInput())
class LoginForm(forms.Form):
email = forms.EmailField(required=True)
    password = forms.CharField(required=True, label='password', widget=forms.PasswordInput())
38da874cff9c9ce69eccb1fa1417df33d17c6cb2 | 843 | py | Python | tests/base.py | jdufresne/pwned-passwords-django | 664df66b54f662a26d98f34f1713281a15d0eb0b | [
"BSD-3-Clause"
] | 102 | 2018-03-06T11:46:40.000Z | 2022-03-23T17:25:19.000Z | tests/base.py | jdufresne/pwned-passwords-django | 664df66b54f662a26d98f34f1713281a15d0eb0b | [
"BSD-3-Clause"
] | 24 | 2018-03-08T08:19:54.000Z | 2020-11-05T11:09:03.000Z | tests/base.py | jdufresne/pwned-passwords-django | 664df66b54f662a26d98f34f1713281a15d0eb0b | [
"BSD-3-Clause"
] | 6 | 2018-03-07T22:19:48.000Z | 2020-05-05T00:43:52.000Z | """
Base test-case class for pwned-passwords-django.
"""
from unittest import mock
from django.test import TestCase
from pwned_passwords_django import api
class PwnedPasswordsTests(TestCase):
"""
Base test-case class defining some common code.
"""
sample_password = "swordfish"
sample_password_prefix = "4F571"
sample_password_suffix = "81DCAADE980555F2CE6755CA425F00658BE"
user_agent = {"User-Agent": api.USER_AGENT}
def _get_mock(self, response_text=None):
if response_text is None:
response_text = "{}:3".format(self.sample_password_suffix)
requests_get_mock = mock.MagicMock()
requests_get_mock.return_value.text = response_text
return requests_get_mock
def _get_exception_mock(self, exception):
return mock.MagicMock(side_effect=exception)
| 25.545455 | 70 | 0.72242 | 101 | 843 | 5.752475 | 0.445545 | 0.096386 | 0.077453 | 0.05852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038292 | 0.194543 | 843 | 32 | 71 | 26.34375 | 0.817379 | 0.113879 | 0 | 0 | 0 | 0 | 0.087258 | 0.048476 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.375 | 0.1875 | 0.0625 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
38e0ab552de6eae125a5000733f45885f613a6f1 | 604 | py | Python | tests/classes/gm_article.py | Jesse-Yung/jsonclasses | d40c52aec42bcb978a80ceb98b93ab38134dc790 | [
"MIT"
] | 50 | 2021-08-18T08:08:04.000Z | 2022-03-20T07:23:26.000Z | tests/classes/gm_article.py | Jesse-Yung/jsonclasses | d40c52aec42bcb978a80ceb98b93ab38134dc790 | [
"MIT"
] | 1 | 2021-02-21T03:18:09.000Z | 2021-03-08T01:07:52.000Z | tests/classes/gm_article.py | Jesse-Yung/jsonclasses | d40c52aec42bcb978a80ceb98b93ab38134dc790 | [
"MIT"
] | 8 | 2021-07-01T02:39:15.000Z | 2021-12-10T02:20:18.000Z | from __future__ import annotations
from jsonclasses import jsonclass, types
def check_owner(article: GMArticle, operator: GMAuthor) -> bool:
return article.author.id == operator.id
def check_tier(article: GMArticle, operator: GMAuthor) -> bool:
return operator.paid_user
@jsonclass
class GMAuthor:
id: str
name: str
paid_user: bool
articles: list[GMArticle] = types.listof('GMArticle').linkedby('author') \
.required
@jsonclass(can_create=[check_owner, check_tier])
class GMArticle:
name: str
content: str
author: GMAuthor
| 22.37037 | 78 | 0.68543 | 69 | 604 | 5.84058 | 0.463768 | 0.039702 | 0.119107 | 0.158809 | 0.208437 | 0.208437 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22351 | 604 | 26 | 79 | 23.230769 | 0.859275 | 0 | 0 | 0.111111 | 0 | 0 | 0.024834 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0.111111 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
2a0a02b225050550817605657f824d2e3f19de4f | 1,012 | py | Python | classroom/migrations/0011_auto_20190618_2257.py | Abulhusain/E-learing | 65cfe3125f1b6794572ef2daf89917976f0eac09 | [
"MIT"
] | 5 | 2019-06-19T03:47:17.000Z | 2020-06-11T17:46:50.000Z | classroom/migrations/0011_auto_20190618_2257.py | Abulhusain/E-learing | 65cfe3125f1b6794572ef2daf89917976f0eac09 | [
"MIT"
] | 3 | 2019-05-31T03:31:32.000Z | 2019-06-27T11:44:22.000Z | classroom/migrations/0011_auto_20190618_2257.py | seeej/digiwiz | 96ddfc22fe4c815feec3d75c30576fec5f344154 | [
"MIT"
] | 1 | 2021-06-04T05:58:15.000Z | 2021-06-04T05:58:15.000Z | # Generated by Django 2.2.2 on 2019-06-18 14:57
from django.db import migrations, models
import sorl.thumbnail.fields
class Migration(migrations.Migration):
dependencies = [
('classroom', '0010_auto_20190615_2203'),
]
operations = [
migrations.AlterField(
model_name='course',
name='status',
field=models.CharField(default='pending', max_length=10),
),
migrations.AlterField(
model_name='takencourse',
name='status',
field=models.CharField(default='pending', max_length=12),
),
migrations.AlterField(
model_name='takenquiz',
name='status',
field=models.CharField(default='incomplete', max_length=12),
),
migrations.AlterField(
model_name='teacher',
name='image',
field=sorl.thumbnail.fields.ImageField(default='profile_pics/default-user.png', upload_to='profile_pics'),
),
]
| 28.914286 | 118 | 0.593874 | 100 | 1,012 | 5.88 | 0.52 | 0.136054 | 0.170068 | 0.197279 | 0.363946 | 0.363946 | 0.30102 | 0.180272 | 0.180272 | 0 | 0 | 0.051105 | 0.284585 | 1,012 | 34 | 119 | 29.764706 | 0.76105 | 0.044466 | 0 | 0.392857 | 1 | 0 | 0.158549 | 0.053886 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.178571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2a14c939ceaa3fa14f7afae939a50094a3876bf4 | 1,565 | py | Python | flask_app/fh_webhook/result.py | nablabits/fareharbor-webhook | 70e415b9ccd45220693eecb6a668746a1282dd03 | [
"MIT"
] | null | null | null | flask_app/fh_webhook/result.py | nablabits/fareharbor-webhook | 70e415b9ccd45220693eecb6a668746a1282dd03 | [
"MIT"
] | null | null | null | flask_app/fh_webhook/result.py | nablabits/fareharbor-webhook | 70e415b9ccd45220693eecb6a668746a1282dd03 | [
"MIT"
] | null | null | null | """The result library."""
class Result:
"""
Construct Result objects.
Result is the basic answer for services that encapsulates the outcome of the service in a clear
and consistent pattern across services. It was added after creating some services, so there are
a few of them that do not make use of it. Eventually we might want to replace the responses for
those services.
"""
@staticmethod
def from_success(value):
"""Return a successful response to a given service."""
return Success(value)
@staticmethod
def from_failure(errors):
"""Return a failure response for a given service."""
return Failure(errors)
def __repr__(self):
return self.__str__()
class Failure:
"""Construct the Failure object for failed services."""
def __init__(self, errors):
self.success = False
self.failure = True
self.errors = errors
def __str__(self):
return "Failure: `{}`".format(str(self.errors))
def __bool__(self):
return False
def map(self, fn):
return self
class Success:
"""Construct the Success object for successful services."""
def __init__(self, value):
self.success = True
self.failure = False
self.value = value
def __str__(self) -> str:
return "Success: `{}`".format(str(self.value))
def __repr__(self) -> str:
return self.__str__()
def __bool__(self):
return True
def map(self, fn):
return Success(fn(self.value))
| 24.076923 | 99 | 0.632588 | 195 | 1,565 | 4.861538 | 0.353846 | 0.042194 | 0.040084 | 0.040084 | 0.037975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272843 | 1,565 | 64 | 100 | 24.453125 | 0.83304 | 0.351438 | 0 | 0.242424 | 0 | 0 | 0.02714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0 | 0.242424 | 0.757576 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
2a1ee77e08cf7d305e39e9ecb434adc963123995 | 1,272 | py | Python | gdbm/jython/create/gdbm_create.py | ekzemplaro/data_base_language | e77030367ffc595f1fac8583f03f9a3ce5eb1611 | [
"MIT",
"Unlicense"
] | 3 | 2015-05-12T16:44:27.000Z | 2021-02-09T00:39:24.000Z | gdbm/jython/create/gdbm_create.py | ekzemplaro/data_base_language | e77030367ffc595f1fac8583f03f9a3ce5eb1611 | [
"MIT",
"Unlicense"
] | null | null | null | gdbm/jython/create/gdbm_create.py | ekzemplaro/data_base_language | e77030367ffc595f1fac8583f03f9a3ce5eb1611 | [
"MIT",
"Unlicense"
] | null | null | null | #! /usr/bin/python
# -*- coding: utf-8 -*-
#
# gdbm_create.py
#
# Jul/13/2010
import sys
import string
import anydbm
#
sys.path.append ('/var/www/uchida/data_base/common/python_common')
#
from dbm_manipulate import dbm_disp_proc,dbm_update_proc
# -------------------------------------------------------------
print ("*** Start ***")
#
#
db_name = "/var/tmp/gdbm/cities.pag";
dd = anydbm.open (db_name,"c")
#
#
dd["2151"]='{"name": "岐阜","population": 70230,"date_mod": "2003-7-24"}';
dd["2152"]='{"name": "大垣","population": 52070,"date_mod": "2003-8-12"}';
dd["2153"]='{"name": "多治見","population": 420155,"date_mod": "2003-9-14"}';
dd["2154"]='{"name": "各務原","population": 44630,"date_mod": "2003-8-2"}';
dd["2155"]='{"name": "土岐","population": 21204,"date_mod": "2003-5-15"}';
dd["2156"]='{"name": "高山","population": 92130,"date_mod": "2003-10-12"}';
dd["2157"]='{"name": "美濃加茂","population": 82034,"date_mod": "2003-11-21"}';
dd["2158"]='{"name": "恵那","population": 92304,"date_mod": "2003-10-11"}';
dd["2159"]='{"name": "関","population": 926340,"date_mod": "2003-7-25"}';
dd["2160"]='{"name": "中津川","population": 920534,"date_mod": "2003-12-4"}';
#
dbm_disp_proc (dd)
#
dd.close ()
#
print ("*** End ***")
# -------------------------------------------------------------
| 32.615385 | 75 | 0.544811 | 172 | 1,272 | 3.901163 | 0.517442 | 0.104322 | 0.163934 | 0.035768 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146883 | 0.079403 | 1,272 | 38 | 76 | 33.473684 | 0.426132 | 0.152516 | 0 | 0 | 0 | 0 | 0.678605 | 0.065975 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.190476 | 0 | 0.190476 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2a433fc8a51def8c74bd124d9b1522f70acb549c | 46,784 | py | Python | pynet/datasets/euaims.py | neurospin/pynet | 28eb248a04b40d180677e8fa20f2297c6967da0b | [
"CECILL-B"
] | 8 | 2020-06-23T16:30:52.000Z | 2021-07-27T15:07:18.000Z | pynet/datasets/euaims.py | neurospin/pynet | 28eb248a04b40d180677e8fa20f2297c6967da0b | [
"CECILL-B"
] | 8 | 2019-12-18T17:28:47.000Z | 2021-02-12T09:10:58.000Z | pynet/datasets/euaims.py | neurospin/pynet | 28eb248a04b40d180677e8fa20f2297c6967da0b | [
"CECILL-B"
] | 18 | 2019-08-19T14:17:48.000Z | 2021-12-20T03:56:39.000Z | # -*- coding: utf-8 -*-
########################################################################
# NSAp - Copyright (C) CEA, 2021
# Distributed under the terms of the CeCILL-B license, as published by
# the CEA-CNRS-INRIA. Refer to the LICENSE file or to
# http://www.cecill.info/licences/Licence_CeCILL-B_V1-en.html
# for details.
########################################################################
"""
Module provides functions to prepare different datasets from EUAIMS.
"""
# Imports
import os
import json
import time
import urllib
import shutil
import pickle
import requests
import logging
import numpy as np
from collections import namedtuple
import pandas as pd
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler, OneHotEncoder, StandardScaler
from sklearn.linear_model import LinearRegression
from pynet.datasets import Fetchers
from neurocombat_sklearn import CombatModel as fortin_combat
from nibabel.freesurfer.mghformat import load as surface_loader
# Global parameters
Item = namedtuple("Item", ["train_input_path", "test_input_path",
"train_metadata_path", "test_metadata_path"])
COHORT_NAME = "EUAIMS"
FOLDER = "/neurospin/brainomics/2020_deepint/data"
SAVING_FOLDER = "/tmp/EUAIMS"
FILES = {
"stratification": os.path.join(FOLDER, "EUAIMS_stratification.tsv"),
"rois_mapper": os.path.join(FOLDER, "EUAIMS_rois.tsv"),
"surf_stratification": os.path.join(
FOLDER, "EUAIMS_surf_stratification.tsv")
}
DEFAULTS = {
"clinical": {
"test_size": 0.2, "seed": 42,
"return_data": False, "z_score": True,
"drop_cols": ["t1:site", "t1:ageyrs", "t1:sex", "t1:fsiq",
"t1:group", "t1:diagnosis", "mri", "t1:group:name",
"qc", "labels", "subgroups"],
"qc": {"t1:fsiq": {"gte": 70},
"mri": {"eq": 1},
"qc": {"eq": "include"}}
},
"rois": {
"test_size": 0.2, "seed": 42,
"return_data": False, "z_score": True, "adjust_sites": True,
"metrics": ["lgi:avg", "thick:avg", "surf:area"],
"roi_types": ["cortical"],
"residualize_by": {"continuous": ["t1:ageyrs", "t1:fsiq"],
"discrete": ["t1:sex"]},
"qc": {"t1:fsiq": {"gte": 70},
"mri": {"eq": 1},
"qc": {"eq": "include"}}
},
"genetic": {
"test_size": 0.2, "seed": 42,
"return_data": False, "z_score": True, "scores": None,
"qc": {"t1:fsiq": {"gte": 70},
"mri": {"eq": 1},
"qc": {"eq": "include"}}
},
"surface": {
"test_size": 0.2, "seed": 42,
"return_data": False, "z_score": True, "adjust_sites": True,
"metrics": ["pial_lgi", "thickness"],
"residualize_by": {"continuous": ["t1:ageyrs", "t1:fsiq"],
"discrete": ["t1:sex"]},
"qc": {"t1:fsiq": {"gte": 70},
"mri": {"eq": 1},
"qc": {"eq": "include"}}
},
"multiblock": {
"test_size": 0.2, "seed": 42,
"blocks": ["clinical", "surface-lh", "surface-rh", "genetic"],
"qc": {"t1:fsiq": {"gte": 70},
"mri": {"eq": 1},
"qc": {"eq": "include"}}
}
}
logger = logging.getLogger("pynet")
def apply_qc(data, prefix, qc):
    """ Applies quality control to the data
Parameters
----------
data: pandas DataFrame
data for which we control the quality
prefix: string
prefix of the column names
qc: dict
        quality control dict. Keys are the names of the columns
        to control on; values are dicts mapping an order relationship
        (e.g. "gte") to a threshold value
Returns
-------
data: pandas DataFrame
selected data by the quality control
"""
idx_to_keep = pd.Series([True] * len(data))
relation_mapper = {
"gt": lambda x, y: x > y,
"lt": lambda x, y: x < y,
"gte": lambda x, y: x >= y,
"lte": lambda x, y: x <= y,
"eq": lambda x, y: x == y,
}
for name, controls in qc.items():
for relation, value in controls.items():
if relation not in relation_mapper.keys():
                raise ValueError(
                    "The relationship {} provided is not a "
                    "valid one".format(relation))
elif "{}{}".format(prefix, name) in data.columns:
new_idx = relation_mapper[relation](
data["{}{}".format(prefix, name)], value)
idx_to_keep = idx_to_keep & new_idx
return data[idx_to_keep]
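The qc-dict convention consumed by `apply_qc` can be illustrated without pandas; `passes_qc` below is a hypothetical one-row sketch that reuses the same operator names ("gt", "lt", "gte", "lte", "eq"):

```python
# Minimal sketch of the qc-dict convention, assuming the same operator
# names as apply_qc; the rows and qc dict here are hypothetical.
relation_mapper = {
    "gt": lambda x, y: x > y,
    "lt": lambda x, y: x < y,
    "gte": lambda x, y: x >= y,
    "lte": lambda x, y: x <= y,
    "eq": lambda x, y: x == y,
}


def passes_qc(row, qc):
    """Return True if every (column, {relation: value}) constraint holds."""
    return all(
        relation_mapper[rel](row[name], val)
        for name, controls in qc.items()
        for rel, val in controls.items()
    )


qc = {"t1:fsiq": {"gte": 70}, "mri": {"eq": 1}}
print(passes_qc({"t1:fsiq": 95, "mri": 1}, qc))  # True
print(passes_qc({"t1:fsiq": 60, "mri": 1}, qc))  # False
```

`apply_qc` applies the same logic column-wise over a DataFrame and additionally skips constraints whose column is absent.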
def fetch_clinical_wrapper(datasetdir=SAVING_FOLDER, files=FILES,
cohort=COHORT_NAME, defaults=DEFAULTS['clinical']):
""" Fetcher wrapper for clinical data
Parameters
----------
datasetdir: string, default SAVING_FOLDER
path to the folder in which to save the data
files: dict, default FILES
contains the paths to the different files
cohort: string, default COHORT_NAME,
name of the cohort
defaults: dict, default DEFAULTS
default values for the wrapped function
Returns
-------
fetcher: function
corresponding fetcher.
"""
fetcher_name = "fetcher_clinical_{}".format(cohort)
# @Fetchers.register
def fetch_clinical(
test_size=defaults["test_size"], seed=defaults["seed"],
return_data=defaults["return_data"], z_score=defaults["z_score"],
drop_cols=defaults["drop_cols"], qc=defaults["qc"]):
""" Fetches and preprocesses clinical data
Parameters
----------
test_size: float, default 0.2
proportion of the dataset to keep for testing. Preprocessing models
will only be fitted on the training part and applied to the test
set. You can specify not to use a testing set by setting it to 0
seed: int, default 42
random seed to split the data into train / test
return_data: bool, default False
If false, saves the data in the specified folder, and return the
path. Otherwise, returns the preprocessed data and the
corresponding subjects
z_score: bool, default True
            whether or not to transform the data into z-scores, i.e.
            standardize and scale it
drop_cols: list of string, see default
names of the columns to drop before saving the data.
qc: dict, see default
            keys are the names of the features to control on, values are the
requirements on their values (see the function apply_qc)
Returns
-------
item: namedtuple
a named tuple containing 'train_input_path', 'train_metadata_path',
and 'test_input_path', 'test_metadata_path' if test_size > 0
X_train: numpy array,
Training data, if return_data is True
X_test: numpy array,
Test data, if return_data is True and test_size > 0
subj_train: numpy array,
Training subjects, if return_data is True
subj_test: numpy array,
Test subjects, if return_data is True and test_size > 0
"""
clinical_prefix = "bloc-clinical_score-"
subject_column_name = "participant_id"
path = os.path.join(datasetdir, "clinical_X_train.npy")
meta_path = os.path.join(datasetdir, "clinical_X_train.tsv")
path_test = None
meta_path_test = None
if test_size > 0:
path_test = os.path.join(datasetdir, "clinical_X_test.npy")
meta_path_test = os.path.join(datasetdir, "clinical_X_test.tsv")
if not os.path.isfile(path):
data = pd.read_csv(files["stratification"], sep="\t")
clinical_cols = [subject_column_name]
clinical_cols += [col for col in data.columns
if col.startswith(clinical_prefix)]
data = data[clinical_cols]
data_train = apply_qc(data, clinical_prefix, qc).sort_values(
subject_column_name)
data_train.columns = [elem.replace(clinical_prefix, "")
for elem in data_train.columns]
X_train = data_train.drop(columns=drop_cols)
# Splits in train and test and removes nans
X_test, subj_test = (None, None)
if test_size > 0:
X_train, X_test = train_test_split(
X_train, test_size=test_size, random_state=seed)
na_idx_test = (X_test.isna().sum(1) == 0)
X_test = X_test[na_idx_test]
subj_test = X_test[subject_column_name].values
X_test = X_test.drop(columns=[subject_column_name]).values
na_idx_train = (X_train.isna().sum(1) == 0)
X_train = X_train[na_idx_train]
subj_train = X_train[subject_column_name].values
X_train = X_train.drop(columns=[subject_column_name])
cols = X_train.columns
X_train = X_train.values
# Standardizes and scales
if z_score:
scaler = RobustScaler()
X_train = scaler.fit_transform(X_train)
_path = os.path.join(datasetdir, "clinical_scaler.pkl")
with open(_path, "wb") as f:
pickle.dump(scaler, f)
if test_size > 0:
X_test = scaler.transform(X_test)
# Return data and subjects
X_train_df = pd.DataFrame(data=X_train, columns=cols)
X_train_df.insert(0, subject_column_name, subj_train)
X_test_df = None
if test_size > 0:
X_test_df = pd.DataFrame(data=X_test, columns=cols)
X_test_df.insert(0, subject_column_name, subj_test)
# Saving
np.save(path, X_train)
X_train_df.to_csv(meta_path, index=False, sep="\t")
if test_size > 0:
np.save(path_test, X_test)
X_test_df.to_csv(meta_path_test, index=False, sep="\t")
if return_data:
X_train = np.load(path)
subj_train = pd.read_csv(meta_path, sep="\t")[
subject_column_name].values
X_test, subj_test = (None, None)
if test_size > 0:
X_test = np.load(path_test)
subj_test = pd.read_csv(meta_path_test, sep="\t")[
subject_column_name].values
return X_train, X_test, subj_train, subj_test
else:
return Item(train_input_path=path, test_input_path=path_test,
train_metadata_path=meta_path,
test_metadata_path=meta_path_test)
return fetch_clinical
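The wrapper pattern above (a factory binding cohort-specific defaults into a fetcher closure) reduces to a small sketch; `make_fetcher` and the returned dict are hypothetical stand-ins for the real fetcher:

```python
# Toy sketch of the fetcher-factory pattern: the outer function binds
# cohort-specific defaults, the inner closure exposes them as keyword
# defaults that callers may override per call. Names are hypothetical.
def make_fetcher(cohort, defaults):
    def fetch(test_size=defaults["test_size"], seed=defaults["seed"]):
        # A real fetcher would load and preprocess data here.
        return {"cohort": cohort, "test_size": test_size, "seed": seed}
    fetch.__name__ = "fetcher_clinical_{}".format(cohort)
    return fetch


fetch = make_fetcher("EUAIMS", {"test_size": 0.2, "seed": 42})
print(fetch())               # uses the bound defaults
print(fetch(test_size=0.0))  # per-call override, seed stays bound
```

This mirrors how `fetch_clinical_wrapper` produces a cohort-specific `fetch_clinical` whose keyword defaults come from the `DEFAULTS` dict.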
def fetch_rois_wrapper(datasetdir=SAVING_FOLDER, files=FILES,
cohort=COHORT_NAME, site_column_name="t1:site",
defaults=DEFAULTS['rois']):
""" Fetcher wrapper for rois data
Parameters
----------
datasetdir: string, default SAVING_FOLDER
path to the folder in which to save the data
files: dict, default FILES
contains the paths to the different files
cohort: string, default COHORT_NAME,
name of the cohort
    site_column_name: string, default "t1:site"
name of the column containing the site of MRI acquisition
defaults: dict, default DEFAULTS
default values for the wrapped function
Returns
-------
fetcher: function
corresponding fetcher
"""
fetcher_name = "fetcher_rois_{}".format(cohort)
# @Fetchers.register
def fetch_rois(
metrics=defaults["metrics"], roi_types=defaults["roi_types"],
test_size=defaults["test_size"], seed=defaults["seed"],
return_data=defaults["return_data"], z_score=defaults["z_score"],
adjust_sites=defaults["adjust_sites"],
residualize_by=defaults["residualize_by"], qc=defaults["qc"]):
""" Fetches and preprocesses roi data
Parameters
----------
metrics: list of strings, see default
metrics to fetch
roi_types: list of strings, default ["cortical"]
type of rois to fetch. Must be one of "cortical", "subcortical"
and "other"
test_size: float, default 0.2
proportion of the dataset to keep for testing. Preprocessing models
will only be fitted on the training part and applied to the test
set. You can specify not to use a testing set by setting it to 0
seed: int, default 42
random seed to split the data into train / test
return_data: bool, default False
If false, saves the data in the specified folder, and return the
path. Otherwise, returns the preprocessed data and the
corresponding subjects
z_score: bool, default True
            whether or not to transform the data into z-scores, i.e.
            standardize and scale it
adjust_sites: bool, default True
            whether or not to correct site effects via the ComBat algorithm
residualize_by: dict, see default
variables to residualize the data. Two keys, "continuous" and
"discrete", and the values are a list of the variable names
qc: dict, see default
            keys are the names of the features to control on, values are the
requirements on their values (see the function apply_qc)
Returns
-------
item: namedtuple
a named tuple containing 'train_input_path', 'train_metadata_path',
and 'test_input_path', 'test_metadata_path' if test_size > 0
X_train: numpy array,
Training data, if return_data is True
X_test: numpy array,
Test data, if return_data is True and test_size > 0
subj_train: numpy array,
Training subjects, if return_data is True
subj_test: numpy array,
Test subjects, if return_data is True and test_size > 0
"""
clinical_prefix = "bloc-clinical_score-"
roi_prefix = "bloc-t1w_roi-"
subject_column_name = "participant_id"
path = os.path.join(datasetdir, "rois_X_train.npy")
meta_path = os.path.join(datasetdir, "rois_X_train.tsv")
path_test = None
meta_path_test = None
if test_size > 0:
path_test = os.path.join(datasetdir, "rois_X_test.npy")
meta_path_test = os.path.join(datasetdir, "rois_X_test.tsv")
if not os.path.isfile(path):
data = pd.read_csv(files["stratification"], sep="\t")
roi_mapper = pd.read_csv(files["rois_mapper"], sep="\t")
# ROI selection
roi_label_range = pd.Series([False] * len(roi_mapper))
for roi_type in roi_types:
if roi_type == "cortical":
roi_label_range = roi_label_range | (
(roi_mapper["labels"] > 11000) &
(roi_mapper["labels"] < 13000))
elif roi_type == "subcortical":
roi_label_range = roi_label_range | (
roi_mapper["labels"] > 13000)
elif roi_type == "other":
roi_label_range = roi_label_range | (
roi_mapper["labels"] < 11000)
else:
                    raise ValueError(
                        "Roi types must be either 'cortical', "
                        "'subcortical' or 'other'")
roi_labels = roi_mapper.loc[roi_label_range, "labels"]
# Feature selection
features_list = []
for column in data.columns:
if column.startswith(roi_prefix):
roi = int(column.split(":")[1].split("_")[0])
metric = column.split("-")[-1]
if roi in roi_labels.values and metric in metrics:
features_list.append(column.replace(roi_prefix, ""))
data_train = apply_qc(data, clinical_prefix, qc).sort_values(
subject_column_name)
data_train.columns = [elem.replace(roi_prefix, "")
for elem in data_train.columns]
X_train = data_train[features_list].copy()
# Splits in train and test and removes nans
if test_size > 0:
X_train, X_test, data_train, data_test = train_test_split(
X_train, data_train, test_size=test_size,
random_state=seed)
na_idx_test = (X_test.isna().sum(1) == 0)
X_test = X_test[na_idx_test]
data_test = data_test[na_idx_test]
subj_test = data_test[subject_column_name].values
na_idx_train = (X_train.isna().sum(1) == 0)
X_train = X_train[na_idx_train]
data_train = data_train[na_idx_train]
subj_train = data_train[subject_column_name].values
cols = X_train.columns
# Correction for site effects
if adjust_sites:
for metric in metrics:
adjuster = fortin_combat()
features = [feature for feature in features_list
if metric in feature]
X_train[features] = adjuster.fit_transform(
X_train[features],
data_train[["{}{}".format(
clinical_prefix, site_column_name)]],
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]],
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]])
_path = os.path.join(
datasetdir, "rois_combat_{0}.pkl".format(metric))
with open(_path, "wb") as of:
pickle.dump(adjuster, of)
if test_size > 0:
X_test[features] = adjuster.transform(
X_test[features],
data_test[["{}{}".format(
clinical_prefix, site_column_name)]],
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]],
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]])
# Standardizes
if z_score:
scaler = RobustScaler()
X_train = scaler.fit_transform(X_train)
_path = os.path.join(datasetdir, "rois_scaler.pkl")
with open(_path, "wb") as f:
pickle.dump(scaler, f)
if test_size > 0:
X_test = scaler.transform(X_test)
else:
X_train = X_train.values
if test_size > 0:
X_test = X_test.values
# Residualizes and scales
            if residualize_by is not None and len(residualize_by) > 0:
regressor = LinearRegression()
y_train = np.concatenate([
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]].values,
OneHotEncoder(sparse=False).fit_transform(
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]])
], axis=1)
regressor.fit(y_train, X_train)
X_train = X_train - regressor.predict(y_train)
_path = os.path.join(datasetdir, "rois_residualizer.pkl")
with open(_path, "wb") as f:
pickle.dump(regressor, f)
if test_size > 0:
y_test = np.concatenate([
data_test[[
"{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]].values,
OneHotEncoder(sparse=False).fit_transform(
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]])
], axis=1)
X_test = X_test - regressor.predict(y_test)
# Return data and subjects
X_train_df = pd.DataFrame(data=X_train, columns=cols)
X_train_df.insert(0, subject_column_name, subj_train)
X_test_df = None
if test_size > 0:
X_test_df = pd.DataFrame(data=X_test, columns=cols)
X_test_df.insert(0, subject_column_name, subj_test)
# Saving
np.save(path, X_train)
X_train_df.to_csv(meta_path, index=False, sep="\t")
if test_size > 0:
np.save(path_test, X_test)
X_test_df.to_csv(meta_path_test, index=False, sep="\t")
if return_data:
X_train = np.load(path)
subj_train = pd.read_csv(meta_path, sep="\t")[
subject_column_name].values
X_test, subj_test = (None, None)
if test_size > 0:
X_test = np.load(path_test)
subj_test = pd.read_csv(meta_path_test, sep="\t")[
subject_column_name].values
return X_train, X_test, subj_train, subj_test
else:
return Item(train_input_path=path, test_input_path=path_test,
train_metadata_path=meta_path,
test_metadata_path=meta_path_test)
return fetch_rois
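The residualization step inside `fetch_rois` fits a `LinearRegression` from the (one-hot encoded) covariates to the features and subtracts its predictions. A NumPy-only sketch of that idea, with made-up covariates and `np.linalg.lstsq` standing in for scikit-learn (the intercept column plays the role of the regressor's bias):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g. age and a one-hot site indicator) and features.
Y = np.column_stack([rng.normal(size=50), rng.integers(0, 2, size=50)])
X = 0.7 * Y[:, :1] + rng.normal(size=(50, 3))

# Fit the linear model and subtract its predictions, as the fetcher does
# with regressor.fit(y_train, X_train) / regressor.predict(y_train).
Y1 = np.column_stack([np.ones(len(Y)), Y])      # add an intercept column
coef, *_ = np.linalg.lstsq(Y1, X, rcond=None)
X_resid = X - Y1 @ coef
```

By construction the residuals carry no linear information about the covariates: `Y1.T @ X_resid` is numerically zero.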
def fetch_surface_wrapper(hemisphere, datasetdir=SAVING_FOLDER,
files=FILES, cohort=COHORT_NAME,
site_column_name="t1:site",
defaults=DEFAULTS["surface"]):
""" Fetcher wrapper for surface data
Parameters
----------
hemisphere: string
name of the hemisphere data fetcher, one of "rh" or "lh"
datasetdir: string, default SAVING_FOLDER
path to the folder in which to save the data
files: dict, default FILES
contains the paths to the different files
cohort: string, default COHORT_NAME,
name of the cohort
    site_column_name: string, default "t1:site"
name of the column containing the site of MRI acquisition
defaults: dict, default DEFAULTS
default values for the wrapped function
Returns
-------
fetcher: function
corresponding fetcher
"""
assert(hemisphere in ["rh", "lh"])
fetcher_name = "fetcher_surface_{}_{}".format(hemisphere, cohort)
# @Fetchers.register
def fetch_surface(
metrics=defaults["metrics"],
test_size=defaults["test_size"], seed=defaults["seed"],
return_data=defaults["return_data"],
z_score=defaults["z_score"], adjust_sites=defaults["adjust_sites"],
residualize_by=defaults["residualize_by"], qc=defaults["qc"]):
""" Fetches and preprocesses surface data
Parameters
----------
metrics: list of strings, see defaults
metrics to fetch
test_size: float, default 0.2
proportion of the dataset to keep for testing. Preprocessing models
will only be fitted on the training part and applied to the test
set. You can specify not to use a testing set by setting it to 0
seed: int, default 42
random seed to split the data into train / test
        return_data: bool, default False
            If False, saves the data in the specified folder and returns the
            paths. Otherwise, returns the preprocessed data and the
            corresponding subjects
        z_score: bool, default True
            whether or not to transform the data into z-scores, meaning
            standardizing and scaling it
        adjust_sites: bool, default True
            whether or not to correct site effects via the ComBat algorithm
        residualize_by: dict, see default
            variables used to residualize the data. Two keys, "continuous" and
            "discrete", whose values are lists of the variable names
        qc: dict, see default
            keys are the names of the features to control on, values are the
            requirements on their values (see the function apply_qc)
Returns
-------
item: namedtuple
a named tuple containing 'train_input_path', 'train_metadata_path',
and 'test_input_path', 'test_metadata_path' if test_size > 0
X_train: numpy array,
Training data, if return_data is True
X_test: numpy array,
Test data, if return_data is True and test_size > 0
subj_train: numpy array,
Training subjects, if return_data is True
subj_test: numpy array,
Test subjects, if return_data is True and test_size > 0
"""
clinical_prefix = "bloc-clinical_score-"
surf_prefix = "bloc-t1w_hemi-{}_metric".format(hemisphere)
data = pd.read_csv(files["clinical_surface"], sep="\t").drop(
columns=["bloc-t1w_hemi-lh_metric-area",
"bloc-t1w_hemi-rh_metric-area"])
# Feature selection
features_list = []
for metric in metrics:
for column in data.columns:
if column.startswith(surf_prefix):
m = column.split('-')[-1]
if m == metric:
features_list.append(column)
data_train = apply_qc(data, clinical_prefix, qc).sort_values(
"participant_id")
# Loads surface data
n_vertices = len(
surface_loader(data_train[features_list[0]].iloc[0]).get_data())
X_train = np.zeros((len(data_train), n_vertices, len(features_list)))
for i in range(len(data_train)):
for j, feature in enumerate(features_list):
path = data_train[feature].iloc[i]
                if not pd.isnull(path):
X_train[i, :, j] = surface_loader(
path).get_data().squeeze()
# Splits in train and test and removes nans
if test_size > 0:
X_train, X_test, data_train, data_test = train_test_split(
X_train, data_train, test_size=test_size, random_state=seed)
na_idx_test = (np.isnan(X_test).sum((1, 2)) == 0)
X_test = X_test[na_idx_test]
data_test = data_test[na_idx_test]
if return_data:
subj_test = data_test["participant_id"].values
na_idx_train = (np.isnan(X_train).sum((1, 2)) == 0)
X_train = X_train[na_idx_train]
data_train = data_train[na_idx_train]
if return_data:
subj_train = data_train["participant_id"].values
# Applies feature-wise preprocessing
for i, feature in enumerate(features_list):
# Correction for site effects
if adjust_sites:
non_zeros_idx = (X_train[:, :, i] > 0).sum(0) >= 1
adjuster = fortin_combat()
X_train[:, non_zeros_idx, i] = adjuster.fit_transform(
X_train[:, non_zeros_idx, i],
data_train[["{}{}".format(
clinical_prefix, site_column_name)]],
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]],
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]])
path = os.path.join(
datasetdir,
"surface_{}_combat_feature{}.pkl".format(hemisphere, i))
with open(path, "wb") as f:
pickle.dump(adjuster, f)
if test_size > 0:
X_test[:, non_zeros_idx, i] = adjuster.transform(
X_test[:, non_zeros_idx, i],
data_test[["{}{}".format(
clinical_prefix, site_column_name)]],
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]],
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]])
# Standardizes and scales
if z_score:
scaler = RobustScaler()
X_train[:, :, i] = scaler.fit_transform(X_train[:, :, i])
path = os.path.join(
datasetdir,
"surface_{}_scaler_feature{}.pkl".format(hemisphere, i))
with open(path, "wb") as f:
pickle.dump(scaler, f)
if test_size > 0:
X_test[:, :, i] = scaler.transform(X_test[:, :, i])
# Residualizes
            if residualize_by is not None and len(residualize_by) > 0:
regressor = LinearRegression()
y_train = np.concatenate([
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]].values,
OneHotEncoder(sparse=False).fit_transform(
data_train[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]])
], axis=1)
regressor.fit(y_train, X_train[:, :, i])
X_train[:, :, i] = X_train[:, :, i] - regressor.predict(
y_train)
path = os.path.join(
datasetdir,
"surface_{}_residualizer_feature{}.pkl".format(
hemisphere, i))
with open(path, "wb") as f:
pickle.dump(regressor, f)
if test_size > 0:
y_test = np.concatenate([
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["continuous"]]
].values,
OneHotEncoder(sparse=False).fit_transform(
data_test[["{}{}".format(clinical_prefix, f)
for f in residualize_by["discrete"]]])
], axis=1)
X_test[:, :, i] = X_test[:, :, i] - regressor.predict(
y_test)
# Returns data and subjects
        if return_data:
            if test_size > 0:
                return X_train, X_test, subj_train, subj_test
            # return a 4-tuple even without a test split so that callers
            # such as fetch_multiblock can unpack uniformly
            return X_train, None, subj_train, None
# Saving
path = os.path.join(
datasetdir, "surface_{}_X_train.npy".format(hemisphere))
np.save(path, X_train)
if test_size > 0:
path_test = os.path.join(
datasetdir, "surface_{}_X_test.npy".format(hemisphere))
np.save(path_test, X_test)
return path, path_test
return path
return fetch_surface
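`fetch_surface` preprocesses one metric at a time on slices `X[:, :, i]` of a `(subjects, vertices, features)` array. A small synthetic sketch of that slicing pattern with a hand-rolled median/IQR scaler (roughly what `RobustScaler` computes; the real code fits on the train slice only and reapplies the fit to the test slice):

```python
import numpy as np

rng = np.random.default_rng(42)
# subjects x vertices x features, synthetic stand-in for the surface data
X = rng.normal(loc=5.0, scale=2.0, size=(20, 100, 3))

for i in range(X.shape[2]):
    sl = X[:, :, i]
    med = np.median(sl, axis=0)
    iqr = np.percentile(sl, 75, axis=0) - np.percentile(sl, 25, axis=0)
    # guard against constant vertices, then center and scale the slice
    X[:, :, i] = (sl - med) / np.where(iqr == 0, 1.0, iqr)
```

After scaling, every vertex has zero median across subjects.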
def fetch_genetic_wrapper(datasetdir=SAVING_FOLDER, files=FILES,
cohort=COHORT_NAME, defaults=DEFAULTS['genetic']):
""" Fetcher wrapper for genetic data
Parameters
----------
datasetdir: string, default SAVING_FOLDER
path to the folder in which to save the data
files: dict, default FILES
contains the paths to the different files
cohort: string, default COHORT_NAME,
name of the cohort
defaults: dict, default DEFAULTS
default values for the wrapped function
Returns
-------
fetcher: function
corresponding fetcher
"""
fetcher_name = "fetcher_genetic_{}".format(cohort)
# @Fetchers.register
def fetch_genetic(
scores=defaults["scores"], test_size=defaults["test_size"],
seed=defaults["seed"], return_data=defaults["return_data"],
z_score=defaults["z_score"], qc=defaults["qc"]):
""" Fetches and preprocesses genetic data
Parameters
----------
        scores: list of strings, see defaults
            scores to fetch; None means all the available scores are fetched
test_size: float, see defaults
proportion of the dataset to keep for testing. Preprocessing models
will only be fitted on the training part and applied to the test
set. You can specify not to use a testing set by setting it to 0
seed: int, see default
random seed to split the data into train / test
        return_data: bool, default False
            If False, saves the data in the specified folder and returns the
            paths. Otherwise, returns the preprocessed data and the
            corresponding subjects
        z_score: bool, see defaults
            whether or not to transform the data into z-scores, meaning
            standardizing and scaling it
        qc: dict, see defaults
            keys are the names of the features to control on, values are the
            requirements on their values (see the function apply_qc)
Returns
-------
item: namedtuple
a named tuple containing 'train_input_path', 'train_metadata_path',
and 'test_input_path', 'test_metadata_path' if test_size > 0
X_train: numpy array
Training data, if return_data is True
X_test: numpy array
Test data, if return_data is True and test_size > 0
subj_train: numpy array
Training subjects, if return_data is True
subj_test: numpy array
Test subjects, if return_data is True and test_size > 0
"""
clinical_prefix = "bloc-clinical_score-"
genetic_prefix = "bloc-genetic_score-"
subject_column_name = "participant_id"
path = os.path.join(datasetdir, "genetic_X_train.npy")
meta_path = os.path.join(datasetdir, "genetic_X_train.tsv")
path_test = None
meta_path_test = None
if test_size > 0:
path_test = os.path.join(datasetdir, "genetic_X_test.npy")
meta_path_test = os.path.join(datasetdir, "genetic_X_test.tsv")
if not os.path.isfile(path):
data = pd.read_csv(files["stratification"], sep="\t")
# Feature selection
features_list = []
for column in data.columns:
if column.startswith(genetic_prefix):
score = column.split("-")[-1]
if scores is not None and score in scores:
features_list.append(
column.replace(genetic_prefix, ""))
elif scores is None:
features_list.append(
column.replace(genetic_prefix, ""))
data_train = apply_qc(data, clinical_prefix, qc).sort_values(
subject_column_name)
data_train.columns = [elem.replace(genetic_prefix, "")
for elem in data_train.columns]
X_train = data_train[features_list].copy()
# Splits in train and test and removes nans
if test_size > 0:
X_train, X_test, data_train, data_test = train_test_split(
X_train, data_train, test_size=test_size,
random_state=seed)
na_idx_test = (X_test.isna().sum(1) == 0)
X_test = X_test[na_idx_test]
data_test = data_test[na_idx_test]
subj_test = data_test[subject_column_name].values
na_idx_train = (X_train.isna().sum(1) == 0)
X_train = X_train[na_idx_train]
data_train = data_train[na_idx_train]
subj_train = data_train[subject_column_name].values
cols = X_train.columns
# Standardizes and scales
if z_score:
scaler = RobustScaler()
X_train = scaler.fit_transform(X_train)
_path = os.path.join(datasetdir, "genetic_scaler.pkl")
with open(_path, "wb") as f:
pickle.dump(scaler, f)
if test_size > 0:
X_test = scaler.transform(X_test)
else:
X_train = X_train.values
if test_size > 0:
X_test = X_test.values
# Return data and subjects
X_train_df = pd.DataFrame(data=X_train, columns=cols)
X_train_df.insert(0, subject_column_name, subj_train)
X_test_df = None
if test_size > 0:
X_test_df = pd.DataFrame(data=X_test, columns=cols)
X_test_df.insert(0, subject_column_name, subj_test)
# Saving
np.save(path, X_train)
X_train_df.to_csv(meta_path, index=False, sep="\t")
if test_size > 0:
np.save(path_test, X_test)
X_test_df.to_csv(meta_path_test, index=False, sep="\t")
if return_data:
X_train = np.load(path)
subj_train = pd.read_csv(meta_path, sep="\t")[
subject_column_name].values
X_test, subj_test = (None, None)
if test_size > 0:
X_test = np.load(path_test)
subj_test = pd.read_csv(meta_path_test, sep="\t")[
subject_column_name].values
return X_train, X_test, subj_train, subj_test
else:
return Item(train_input_path=path, test_input_path=path_test,
train_metadata_path=meta_path,
test_metadata_path=meta_path_test)
return fetch_genetic
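All three tabular fetchers drop subjects with any missing feature before fitting the preprocessing models (the `isna().sum(1) == 0` masks above). The same filter in plain NumPy, on toy data:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, 5.0]])
subjects = np.array(["sub-01", "sub-02", "sub-03"])

na_idx = np.isnan(X).sum(axis=1) == 0   # True where a row has no NaN
X_clean, subj_clean = X[na_idx], subjects[na_idx]
```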
def make_fetchers(datasetdir=SAVING_FOLDER):
return {
"clinical": fetch_clinical_wrapper(datasetdir=datasetdir),
"rois": fetch_rois_wrapper(datasetdir=datasetdir),
"surface-rh": fetch_surface_wrapper(hemisphere="rh",
datasetdir=datasetdir),
"surface-lh": fetch_surface_wrapper(hemisphere="lh",
datasetdir=datasetdir),
"genetic": fetch_genetic_wrapper(datasetdir=datasetdir),
}
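`make_fetchers` and the `*_wrapper` factories above follow a closure pattern: the wrapper freezes configuration (paths, defaults) and returns the actual fetcher. The pattern in isolation, with illustrative names that are not part of the real module:

```python
def fetch_example_wrapper(datasetdir="/tmp/data", defaults={"test_size": 0.2}):
    # The closure captures datasetdir and defaults for later calls.
    def fetch_example(test_size=defaults["test_size"]):
        return {"datasetdir": datasetdir, "test_size": test_size}
    return fetch_example

FETCHERS_DEMO = {"example": fetch_example_wrapper(datasetdir="/data/euaims")}
with_defaults = FETCHERS_DEMO["example"]()            # use the frozen defaults
overridden = FETCHERS_DEMO["example"](test_size=0.0)  # or override per call
```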
def fetch_multiblock_wrapper(datasetdir=SAVING_FOLDER, files=FILES,
cohort=COHORT_NAME,
subject_column_name="subjects",
defaults=DEFAULTS["multiblock"],
make_fetchers_func=make_fetchers):
""" Fetcher wrapper for multiblock data
Parameters
----------
datasetdir: string, default SAVING_FOLDER
path to the folder in which to save the data
files: dict, default FILES
contains the paths to the different files
cohort: string, default COHORT_NAME,
name of the cohort
    subject_column_name: string, default "subjects"
        name of the column containing the subject ids
defaults: dict, default DEFAULTS
default values for the wrapped function
make_fetchers_func: function, default make_fetchers
function to build the fetchers from their wrappers.
Must return a dict containing as keys the name of the
channels, and values the corresponding fetcher
Returns
-------
fetcher: function
corresponding fetcher
"""
fetcher_name = "fetcher_multiblock_{}".format(cohort)
FETCHERS = make_fetchers_func(datasetdir)
# @Fetchers.register
def fetch_multiblock(
blocks=defaults["blocks"],
test_size=defaults["test_size"], seed=defaults["seed"],
qc=defaults["qc"],
**kwargs):
""" Fetches and preprocesses multi block data
Parameters
----------
blocks: list of strings, see default
blocks of data to fetch, all must be in the key list of FETCHERS
test_size: float, default 0.2
proportion of the dataset to keep for testing. Preprocessing models
will only be fitted on the training part and applied to the test
set. You can specify not to use a testing set by setting it to 0
seed: int, default 42
random seed to split the data into train / test
        qc: dict, see default
            keys are the names of the features to control on, values are the
            requirements on their values (see the function apply_qc)
        kwargs: dict
            additional arguments to be passed to each fetcher individually.
            Keys are the names of the fetchers, and values are a dictionary
            containing arguments and the values for this fetcher
Returns
-------
item: namedtuple
a named tuple containing 'train_input_path', 'train_metadata_path',
and 'test_input_path', 'test_metadata_path' if test_size > 0
"""
path = os.path.join(datasetdir, "multiblock_X_train.npz")
metadata_path = os.path.join(datasetdir, "metadata_train.tsv")
path_test = None
metadata_path_test = None
if test_size > 0:
path_test = os.path.join(datasetdir, "multiblock_X_test.npz")
metadata_path_test = os.path.join(
datasetdir, "metadata_test.tsv")
if not os.path.isfile(path):
X_train = {}
subj_train = {}
if test_size > 0:
X_test = {}
subj_test = {}
for block in blocks:
assert block in FETCHERS.keys()
if block in kwargs.keys():
local_kwargs = kwargs[block]
                    # Impose the same qc steps and train / test split on
                    # every block so that all blocks share the same subjects
                    # (iterate over a copy of the keys: deleting from a dict
                    # while iterating over it raises a RuntimeError)
                    for key in list(local_kwargs.keys()):
                        if key in ["qc", "test_size", "seed"]:
                            del local_kwargs[key]
else:
local_kwargs = {}
new_X_train, new_X_test, new_subj_train, new_subj_test = \
FETCHERS[block](
qc=qc, test_size=test_size, seed=seed,
return_data=True, **local_kwargs)
if test_size > 0:
X_test[block] = new_X_test
subj_test[block] = new_subj_test
X_train[block] = new_X_train
subj_train[block] = new_subj_train
            # Remove subjects that aren't in all the channels
common_subjects_train = list(
set.intersection(*map(set, subj_train.values())))
for block in blocks:
subjects = subj_train[block]
assert(len(subjects) == len(X_train[block]))
idx_to_keep = [
_idx for _idx in range(len(subjects))
if subjects[_idx] in common_subjects_train]
X_train[block] = X_train[block][idx_to_keep]
if test_size > 0:
common_subjects_test = list(
set.intersection(*map(set, subj_test.values())))
for block in blocks:
subjects = subj_test[block]
assert(len(subjects) == len(X_test[block]))
idx_to_keep = [
_idx for _idx in range(len(subjects))
if subjects[_idx] in common_subjects_test]
X_test[block] = X_test[block][idx_to_keep]
# Loads metadata
clinical_prefix = "bloc-clinical_score-"
metadata_cols = ["participant_id", "labels", "subgroups"]
metadata = pd.read_csv(files["stratification"], sep="\t")
clinical_cols = ["participant_id"]
clinical_cols += [col for col in metadata.columns
if col.startswith(clinical_prefix)]
metadata = metadata[clinical_cols]
metadata.columns = [elem.replace(clinical_prefix, "")
for elem in metadata.columns]
metadata = metadata[metadata_cols]
metadata_train = metadata[
metadata[subject_column_name].isin(common_subjects_train)]
if test_size > 0:
metadata_test = metadata[
metadata[subject_column_name].isin(common_subjects_test)]
# Saving
np.savez(path, **X_train)
metadata_train.to_csv(metadata_path, index=False, sep="\t")
if test_size > 0:
np.savez(path_test, **X_test)
metadata_test.to_csv(metadata_path_test, index=False, sep="\t")
return Item(train_input_path=path, test_input_path=path_test,
train_metadata_path=metadata_path,
test_metadata_path=metadata_path_test)
return fetch_multiblock
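`fetch_multiblock` keeps the blocks aligned by intersecting every block's subject list and filtering each data array down to the common subjects. A toy version of that alignment logic:

```python
subj = {
    "clinical": ["s1", "s2", "s3", "s4"],
    "rois": ["s2", "s3", "s4"],
    "genetic": ["s1", "s3", "s4"],
}
# stand-in data: one row index per subject in each block
X = {block: list(range(len(ids))) for block, ids in subj.items()}

common = set.intersection(*map(set, subj.values()))
for block, ids in subj.items():
    idx_to_keep = [i for i, s in enumerate(ids) if s in common]
    X[block] = [X[block][i] for i in idx_to_keep]
```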
WRAPPERS = {
"clinical": fetch_clinical_wrapper,
"rois": fetch_rois_wrapper,
"genetic": fetch_genetic_wrapper,
"surface": fetch_surface_wrapper,
"multiblock": fetch_multiblock_wrapper,
}
def fetch_multiblock_euaims(datasetdir=SAVING_FOLDER, fetchers=make_fetchers,
surface=False):
if surface:
DEFAULTS["multiblock"]["blocks"] = ["clinical", "surface-lh",
"surface-rh", "genetic"]
else:
DEFAULTS["multiblock"]["blocks"] = ["clinical", "rois", "genetic"]
return WRAPPERS["multiblock"](
datasetdir=datasetdir, files=FILES, cohort=COHORT_NAME,
subject_column_name="participant_id", defaults=DEFAULTS["multiblock"],
        make_fetchers_func=fetchers)()
def inverse_normalization(data, scalers):
""" De-normalize a dataset.
"""
for scaler_path in scalers:
with open(scaler_path, "rb") as of:
scaler = pickle.load(of)
data = scaler.inverse_transform(data)
return data
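`inverse_normalization` undoes a chain of pickled scalers; note that the caller is responsible for passing the pickle paths in the inverse order of fitting. A self-contained round trip with plain dicts of fitted parameters standing in for the pickled sklearn objects:

```python
import os
import pickle
import shutil
import tempfile

data = [1.0, 2.0, 3.0]
tmp = tempfile.mkdtemp()
paths = []
for k, (center, scale) in enumerate([(10.0, 2.0), (0.5, 4.0)]):
    data = [(x - center) / scale for x in data]   # forward transform
    path = os.path.join(tmp, "scaler_{0}.pkl".format(k))
    with open(path, "wb") as of:
        pickle.dump({"center": center, "scale": scale}, of)
    paths.append(path)

# Replay the inverse transforms in reverse order of fitting, which is the
# order the scaler paths would be handed to inverse_normalization.
for path in reversed(paths):
    with open(path, "rb") as of:
        params = pickle.load(of)
    data = [x * params["scale"] + params["center"] for x in data]

shutil.rmtree(tmp)
```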
from wrapper import *
import sys
if __name__ == '__main__':
if len(sys.argv) > 1:
pdf_loc = sys.argv[1]
else:
pdf_loc = '/path/to/pdf'
test = to_text(pdf_loc)
    print(test)
"""
A JSON encoder and decoder to simplify working with the RPC's
datatypes.
"""
import json
import calendar
import datetime
class UTC(datetime.tzinfo):
"""UTC"""
def utcoffset(self, dt):
return datetime.timedelta(0)
def tzname(self, dt):
return 'UTC'
def dst(self, dt):
return datetime.timedelta(0)
# UNIX epochs to be turned into UTC datetimes
TIMESTAMP_KEYS = frozenset(
['activityDate',
'addedDate',
'dateCreated',
'doneDate',
'startDate',
'lastAnnounceStartTime',
'lastAnnounceTime',
'lastScrapeStartTime',
'lastScrapeTime',
'nextAnnounceTime',
'nextScrapeTime'])
def epoch_to_datetime(value):
return datetime.datetime.fromtimestamp(value, UTC())
def datetime_to_epoch(value):
if isinstance(value, datetime.datetime):
return calendar.timegm(value.utctimetuple())
elif isinstance(value, datetime.date):
value = datetime.datetime(value.year, value.month, value.day)
return calendar.timegm(value.utctimetuple())
class TransmissionJSONDecoder(json.JSONDecoder):
def __init__(self, **kwargs):
return super(TransmissionJSONDecoder, self).__init__(
object_hook=self.object_hook, **kwargs)
def object_hook(self, obj):
for key, value in obj.items():
if key in TIMESTAMP_KEYS:
value = epoch_to_datetime(value)
obj[key] = value
return obj
class TransmissionJSONEncoder(json.JSONEncoder):
def default(self, value):
# datetime is a subclass of date, so this'll catch both
if isinstance(value, datetime.date):
return datetime_to_epoch(value)
else:
return value
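A condensed round trip showing what the two classes above do: epoch fields become timezone-aware UTC datetimes on decode and turn back into epochs on encode (stdlib only, with a shortened key set for the demo):

```python
import calendar
import datetime
import json

DEMO_TIMESTAMP_KEYS = frozenset(["addedDate", "doneDate"])

class DemoDecoder(json.JSONDecoder):
    def __init__(self, **kwargs):
        super().__init__(object_hook=self.object_hook, **kwargs)

    def object_hook(self, obj):
        # turn known epoch fields into aware UTC datetimes
        return {key: datetime.datetime.fromtimestamp(value, datetime.timezone.utc)
                if key in DEMO_TIMESTAMP_KEYS else value
                for key, value in obj.items()}

class DemoEncoder(json.JSONEncoder):
    def default(self, value):
        # datetime is a subclass of date, so this catches both
        if isinstance(value, datetime.date):
            return calendar.timegm(value.utctimetuple())
        return super().default(value)

doc = '{"name": "ubuntu.iso", "addedDate": 1546300800}'
decoded = json.loads(doc, cls=DemoDecoder)
round_tripped = json.loads(json.dumps(decoded, cls=DemoEncoder), cls=DemoDecoder)
```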
from django.db import models
# Create your models here.
class Image(models.Model):
    image = models.ImageField(upload_to='gallery/')
    name = models.CharField(max_length=30)
    description = models.CharField(max_length=100)
    location = models.ForeignKey('Location', on_delete=models.CASCADE)
    category = models.ForeignKey('Category', on_delete=models.CASCADE)
@classmethod
def images(cls):
images = cls.objects.all()
return images
@classmethod
def search_by_category(cls,search_term):
images = cls.objects.filter(category__name__icontains=search_term)
return images
@classmethod
def filter_by_location(cls,location):
images = cls.objects.filter(location__name__icontains=location)
return images
@classmethod
def get_image_by_id(cls,id):
image_id = cls.objects.get(id = id)
return image_id
def save_image(self):
self.save()
def delete_image(self):
self.delete()
def __str__(self):
return self.name
class Category(models.Model):
    name = models.CharField(max_length=30)
def save_category(self):
self.save()
def __str__(self):
return self.name
class Location(models.Model):
    name = models.CharField(max_length=30)
@classmethod
def location(cls):
location = cls.objects.all()
return location
def save_location(self):
self.save()
def __str__(self):
        return self.name
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

qsn = input("Enter queries separated by ';' ")
listified_qsn = qsn.split(';')
'''
stop_words = set(stopwords.words('english'))
filtered_list = [w for w in word_tokens if not w.lower() in stop_words]
filtered_list = []
# TODO: ask for queries and search to find Keywords
for w in word_tokens:
if w not in stop_words:
if w.isalpha():
filtered_list.append(w)
filtered_list = [i.lower() for i in filtered_list]
filtered_list = [re.sub(" +",' ',data) for data in filtered_list]
filtered_str=""
for i in filtered_list:
filtered_str+=f" {i}"
print(filtered_str,'\n')
'''
# set of documents
# instantiate the vectorizer object
tfidfvectorizer = TfidfVectorizer(analyzer='word',stop_words= 'english')
# convert th documents into a matrix
tfidf_wm = tfidfvectorizer.fit_transform(listified_qsn)
#retrieve the terms found in the corpora
# if we take same parameters on both Classes(CountVectorizer and TfidfVectorizer) , it will give same output of get_feature_names() methods)
#count_tokens = tfidfvectorizer.get_feature_names() # no difference
tfidf_tokens = tfidfvectorizer.get_feature_names()
df_tfidfvect = pd.DataFrame(data = tfidf_wm.toarray(),columns = tfidf_tokens)
print("\nTD-IDF Vectorizer\n")
print(df_tfidfvect)
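To make the `TfidfVectorizer` output above less of a black box, here is a stdlib-only miniature using scikit-learn's smoothed formula idf(t) = ln((1 + n) / (1 + df(t))) + 1 with L2-normalized rows (whitespace tokenization only; the real vectorizer's analyzer and stop-word handling differ):

```python
import math
from collections import Counter

def mini_tfidf(docs):
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency: in how many documents each token occurs
    df = Counter(tok for doc in tokenized for tok in set(doc))
    rows = []
    for doc in tokenized:
        tf = Counter(doc)
        w = {t: tf[t] * (math.log((1 + n) / (1 + df[t])) + 1) for t in tf}
        norm = math.sqrt(sum(v * v for v in w.values()))
        rows.append({t: v / norm for t, v in w.items()})
    return rows

rows = mini_tfidf(["the cat sat", "the dog sat", "the cat ran"])
```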
# This is a template for your project 2 submission.
# Please fill in the get_predictions method to return key-value pairs
# for each parcelid and the predicted log-error.
# Import the libraries and give them abbreviated names:
import pandas as pd
import numpy as np
import statsmodels.api as sm
# load the data, use the directory where you saved the data: (please do not change)
df_properties = pd.read_csv('properties_2017.csv')
df_train = pd.read_csv('train_2017.csv', parse_dates=["transactiondate"])
def get_predictions():
predictions = {}
# write code here
# fill in your algorithm to calculate predicted log-error values here
return predictions
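To fill in the template, `get_predictions` only has to map each `parcelid` to a predicted log-error. A minimal stand-in baseline (fabricated toy rows instead of the real CSVs; it predicts the mean training log-error for every parcel):

```python
import statistics

# hypothetical stand-ins for the Zillow dataframes
train = {"parcelid": [11001, 11002, 11003], "logerror": [0.05, -0.10, 0.11]}
properties = {"parcelid": [11001, 11002, 11003, 11004]}

def get_baseline_predictions():
    baseline = statistics.mean(train["logerror"])
    return {pid: baseline for pid in properties["parcelid"]}

predictions = get_baseline_predictions()
```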
import hashlib
def generate_shear_masking_factor(passphrase):
"""Generate a masking factor by hashing a passphrase.
Code from Joe Zuntz w/ modifications for python 3 by Matt B.
Parameters
----------
passphrase : str
A string.
Returns
-------
factor : float
The masking factor as a float in the range 0.9 to 1.1.
"""
# make hex
m = hashlib.md5(passphrase.encode("utf-8")).hexdigest()
# convert to decimal
s = int(m, 16)
# get last 8 digits
f = s % 100_000_000
# turn 8 digit number into value between 0 and 1
g = f / 1e8
# scale value between 0.9 and 1.1
return 0.9 + 0.2*g
| 23.103448 | 64 | 0.60597 | 103 | 670 | 3.893204 | 0.640777 | 0.097257 | 0.064838 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067653 | 0.29403 | 670 | 28 | 65 | 23.928571 | 0.780127 | 0.571642 | 0 | 0 | 1 | 0 | 0.021186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.285714 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
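A standalone usage sketch of the masking-factor function above (the passphrase is a made-up example; the exact factor depends on its MD5 digest, but it is deterministic and always falls in [0.9, 1.1)):

```python
import hashlib

def generate_shear_masking_factor(passphrase):
    # hash the passphrase -> last 8 decimal digits -> scale into [0.9, 1.1)
    m = hashlib.md5(passphrase.encode("utf-8")).hexdigest()
    g = (int(m, 16) % 100_000_000) / 1e8
    return 0.9 + 0.2 * g

factor = generate_shear_masking_factor("example passphrase")
assert 0.9 <= factor < 1.1
# deterministic: the same passphrase always yields the same factor
assert factor == generate_shear_masking_factor("example passphrase")
```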
2a82844ae264150ac59f336b8f5e87844ad94d8a | 641 | py | Python | posts/views.py | delitamakanda/cautious-journey | a1d58e6ae649fc1ba84dc22ce7be30b4c1712b46 | [
"MIT"
] | 3 | 2018-06-24T16:53:47.000Z | 2021-04-02T03:11:24.000Z | posts/views.py | delitamakanda/cautious-journey | a1d58e6ae649fc1ba84dc22ce7be30b4c1712b46 | [
"MIT"
] | 8 | 2019-03-29T22:58:58.000Z | 2022-03-03T21:59:48.000Z | posts/views.py | delitamakanda/cautious-journey | a1d58e6ae649fc1ba84dc22ce7be30b4c1712b46 | [
"MIT"
] | null | null | null | from django.views.decorators.clickjacking import xframe_options_exempt
from django.utils.decorators import method_decorator
from django.shortcuts import redirect
from django.views.generic.list import ListView
from posts.models import Post, Tag
@method_decorator(xframe_options_exempt, name='dispatch')
class PostsListView(ListView):
queryset = Post.objects.order_by('-created')
model = Post
paginate_by = 4
context_object_name = 'posts'
template_name = 'post/notice.html'
def generate_fake_data(request):
from model_mommy import mommy
mommy.make('posts.Post', _quantity=30)
return redirect('posts/list')
| 29.136364 | 70 | 0.778471 | 84 | 641 | 5.761905 | 0.571429 | 0.082645 | 0.061983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005415 | 0.135725 | 641 | 21 | 71 | 30.52381 | 0.868231 | 0 | 0 | 0 | 0 | 0 | 0.088924 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.375 | 0 | 0.875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
2a840a059e9bd972f74977c71abc41f04ae90a0e | 2,921 | py | Python | logistic_regression.py | wail007/ml_playground | 5a8cd1fc57d3ba32a255e665fc3480f58eb9c3c2 | [
"Apache-2.0"
] | null | null | null | logistic_regression.py | wail007/ml_playground | 5a8cd1fc57d3ba32a255e665fc3480f58eb9c3c2 | [
"Apache-2.0"
] | null | null | null | logistic_regression.py | wail007/ml_playground | 5a8cd1fc57d3ba32a255e665fc3480f58eb9c3c2 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import solver as slvr
from activation import sigmoid, sigmoid_log
class LogisticRegression(object):
def __init__(self, solver='gradient', reg=None, alpha=1e-3, e=1e-3, verbose=False):
self.w = None
self.e = e
self.verbose = verbose
self.alpha = alpha
if solver == 'newton':
self.solver = slvr.Newton(alpha, e, verbose)
else:
self.solver = slvr.GradientDescent(alpha, e, verbose)
if reg:
self.solver = slvr.ValidationSet(self.solver, 0.7, False, e, verbose)
def fit(self, x, t):
self.w = np.zeros(x.shape[1])
self.solver.solve(self, x, t)
def predict(self, x):
return np.rint(self.probability(x))
def probability(self, x):
return sigmoid(np.dot(x, self.w))
def cost(self, x, t):
a = np.dot(x, self.w)
return -np.sum(t * sigmoid_log(a) + (1 - t) * sigmoid_log(-a), axis=0, keepdims=True)
def gradient(self, x, t):
y = sigmoid(np.dot(x, self.w))
return np.dot(np.transpose(x), y - t)
def hessian(self, x, t):
y = sigmoid(np.dot(x, self.w))
return np.dot(np.transpose(x) * (y * (1 - y)), x)
def precision(self, x, t):
y = self.predict(x)
return (1.0 / len(y)) * np.sum(y == t)
def _cost(self, x, t):
a = np.dot(x, self.w)
return -np.sum(t * sigmoid_log(a) + (1 - t) * sigmoid_log(-a), axis=0, keepdims=True)
def _gradient(self, x, t):
y = sigmoid(np.dot(x, self.w))
return np.dot(np.transpose(x), y - t)
def _hessian(self, x, t):
y = sigmoid(np.dot(x, self.w))
return np.dot(np.transpose(x) * (y * (1 - y)), x)
def _cost_L2(self, x, t, reg):
return self._cost(x, t) + reg * np.dot(self.w[1:], self.w[1:])
def _gradient_L2(self, x, t, reg):
g = self._gradient(x, t)
g[1:] += 2.0 * reg * self.w[1:]
return g
class MCLogisticRegression(object):
def __init__(self, solver='gradient', alpha=1e-3, e=1e-3, verbose=False):
self.bin_logits = []
self.e = e
self.verbose = verbose
self.solver = solver
self.alpha = alpha
def fit(self, x, t):
        for i in range(t.shape[1]):
if self.verbose:
print("Class: %d" % i)
            self.bin_logits.append(LogisticRegression(self.solver, alpha=self.alpha, e=self.e, verbose=self.verbose))
self.bin_logits[i].fit(x, t[:,i])
def predict(self, x):
return np.argmax(self.probability(x), axis=1)
def probability(self, x):
p = self.bin_logits[0].probability(x)
        for i in range(1, len(self.bin_logits)):
p = np.vstack([p, self.bin_logits[i].probability(x)])
return np.transpose(p)
def precision(self, x, t):
y = self.predict(x)
return (1.0 / len(y)) * np.sum(y == t) | 29.806122 | 101 | 0.549469 | 448 | 2,921 | 3.520089 | 0.160714 | 0.0539 | 0.049461 | 0.044388 | 0.500317 | 0.471148 | 0.391249 | 0.355739 | 0.355739 | 0.320228 | 0 | 0.015934 | 0.290996 | 2,921 | 98 | 102 | 29.806122 | 0.745534 | 0 | 0 | 0.416667 | 0 | 0 | 0.010609 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.041667 | 0.055556 | 0.513889 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
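The cost/gradient pair used by `LogisticRegression` can be sanity-checked numerically outside the solver. A standalone sketch in plain NumPy (no dependence on the `solver`/`activation` modules; the central-difference check is the standard gradient test):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cost(w, x, t):
    # -sum[ t*log σ(a) + (1-t)*log σ(-a) ], a = x·w, matching the class's cost()
    a = x @ w
    return -np.sum(t * np.log(sigmoid(a)) + (1 - t) * np.log(sigmoid(-a)))

def gradient(w, x, t):
    # X^T (σ(Xw) - t), matching LogisticRegression.gradient()
    return x.T @ (sigmoid(x @ w) - t)

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 3))
t = (rng.random(20) > 0.5).astype(float)
w = rng.normal(size=3)

eps = 1e-6
eye = np.eye(3)
num_grad = np.array([
    (cost(w + eps * eye[i], x, t) - cost(w - eps * eye[i], x, t)) / (2 * eps)
    for i in range(3)
])
assert np.allclose(num_grad, gradient(w, x, t), atol=1e-4)
```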
2a84ec4b695bb6ce86a9335e62050018ca14f35d | 1,223 | py | Python | observer/NewObserver/test_observer.py | eddieir/design_patterns | db2e00761fa218f1144edc4ffe3e5f7a7c28eef9 | [
"MIT"
] | null | null | null | observer/NewObserver/test_observer.py | eddieir/design_patterns | db2e00761fa218f1144edc4ffe3e5f7a7c28eef9 | [
"MIT"
] | null | null | null | observer/NewObserver/test_observer.py | eddieir/design_patterns | db2e00761fa218f1144edc4ffe3e5f7a7c28eef9 | [
"MIT"
] | null | null | null | import logging
import unittest
from observer import Hobbits, Orcs, Weather, WeatherType
class TestObserver(unittest.TestCase):
def test_observable(self):
weather = Weather()
# register observable
orcs, hobbits = Orcs(), Hobbits()
weather.add_observer(orcs)
weather.add_observer(hobbits)
with self.assertLogs('behavioral.observer', level='INFO') as cm:
# trigger event
weather.time_pass()
self.assertEqual(len(cm.output), 3)
self.assertEqual(cm.output, [
"INFO:behavioral.observer:Weather is changed to %s" % (weather.current_weather),
"INFO:behavioral.observer:Orcs is %s" % (weather.current_weather),
"INFO:behavioral.observer:Hobbits is %s" % (weather.current_weather),
])
#
weather.remove_observer(orcs)
with self.assertLogs('behavioral.observer', level='INFO') as cm:
# trigger event
weather.time_pass()
self.assertEqual(cm.output, [
"INFO:behavioral.observer:Weather is changed to %s" % (weather.current_weather),
"INFO:behavioral.observer:Hobbits is %s" % (weather.current_weather),
])
| 33.972222 | 92 | 0.629599 | 130 | 1,223 | 5.838462 | 0.3 | 0.166008 | 0.144928 | 0.144928 | 0.644269 | 0.641634 | 0.641634 | 0.641634 | 0.641634 | 0.641634 | 0 | 0.001103 | 0.258381 | 1,223 | 35 | 93 | 34.942857 | 0.835722 | 0.03843 | 0 | 0.5 | 0 | 0 | 0.217763 | 0.134073 | 0 | 0 | 0 | 0 | 0.208333 | 1 | 0.041667 | false | 0.083333 | 0.125 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
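The `observer` module under test is not included here; below is a minimal sketch consistent with the calls the test makes (`add_observer`, `remove_observer`, `time_pass` driving log lines through the `behavioral.observer` logger). All names are inferred from the test, not taken from the real module.

```python
import logging

logger = logging.getLogger("behavioral.observer")

class Weather:
    def __init__(self):
        self._observers = []
        self.current_weather = "sunny"  # placeholder weather state

    def add_observer(self, observer):
        self._observers.append(observer)

    def remove_observer(self, observer):
        self._observers.remove(observer)

    def time_pass(self):
        # advance time, then notify every registered observer
        logger.info("Weather is changed to %s", self.current_weather)
        for observer in self._observers:
            observer.update(self.current_weather)

class Hobbits:
    def update(self, weather):
        logger.info("Hobbits is %s", weather)
```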
2a8a38edcb393eb14ea5490feadf6fffbac554d3 | 11,589 | py | Python | pycoilib/lib/coil/coil.py | ReciprocalSpace/pycoilib | 423287eaf09203b6aa7df2fb5b47fd0fb203964a | [
"MIT"
] | null | null | null | pycoilib/lib/coil/coil.py | ReciprocalSpace/pycoilib | 423287eaf09203b6aa7df2fb5b47fd0fb203964a | [
"MIT"
] | null | null | null | pycoilib/lib/coil/coil.py | ReciprocalSpace/pycoilib | 423287eaf09203b6aa7df2fb5b47fd0fb203964a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Coil module
Created on Tue Jan 26 08:31:05 2021
@author: Aimé Labbé
"""
from __future__ import annotations
from typing import List
import numpy as np
import os
import matplotlib.pyplot as plt
from ..segment.segment import Segment, Arc, Circle, Line
from ..wire.wire import Wire, WireRect, WireCircular
from ..inductance.inductance import calc_mutual
from ..misc.set_axes_equal import set_axes_equal
from ..misc import geometry as geo
class Coil:
"""General Coil object.
A coil is defined as the combination of a segment array and a wire type.
:param List[segment] segment_array: TODO description or delete
:param function wire: TODO description or delete
:param numpy.ndarray anchor: TODO description or delete
"""
VEC_0 = np.array([0., 0., 0.])
VEC_X = np.array([1., 0., 0.])
VEC_Y = np.array([0., 1., 0.])
VEC_Z = np.array([0., 0., 1.])
def __init__(self, segment_array: List[Segment], wire=Wire(), anchor: np.ndarray = None):
"""the constructor"""
self.segment_array = segment_array
self.wire = wire
self.anchor = self.VEC_0.copy() if anchor is None else anchor.copy()
    @classmethod
    def from_magpylib(cls, magpy_object, wire=Wire(), anchor: np.ndarray = None):
"""Construct a coil from a collection of magpy sources and a Wire object
.. WARNING::
Not implemented
"""
raise NotImplementedError
def to_magpy(self):
"""Return a list of segments as collection of magpy sources
.. WARNING::
Not implemented
"""
raise NotImplementedError
def _magpy2pycoil(self, magpy_object):
raise NotImplementedError
def _pycoil2magpy(self, coil_array):
raise NotImplementedError
def move_to(self, new_position: np.ndarray) -> Coil:
"""Move the coil to a new position.
:param numpy.ndarray new_position: TODO description or delete"""
translation = new_position - self.anchor
for segment in self.segment_array:
segment.translate(translation)
self.anchor = new_position.copy()
return self
def translate(self, translation: np.ndarray):
"""Translate the coil by a specific translation vector.
:param numpy.ndarray translation: TODO description or delete"""
for segment in self.segment_array:
segment.translate(translation)
self.anchor += translation
return self
def rotate(self, angle: float, axis: np.ndarray = None):
"""Rotate the coil around an axis by a specific angle.
:param float angle: TODO description or delete
:param numpy.ndarray axis: TODO description or delete"""
axis = self.VEC_Z if axis is None else axis
for segment in self.segment_array:
segment.rotate(angle, axis, self.anchor)
return self
def draw(self, draw_current=True, savefig=False):
"""Draw the coil in a 3D plot.
:param bool draw_current: TODO description or delete
:param bool savefig: TODO description or delete"""
fig = plt.figure(figsize=(7.5/2.4, 7.5/2.4), dpi=300,)
ax = fig.add_subplot(111, projection='3d')
for shape in self.segment_array:
shape.draw(ax, draw_current)
set_axes_equal(ax)
ax.set_xlabel("x [mm]")
ax.set_ylabel("y [mm]")
ax.set_zlabel("z [mm]")
if savefig:
i = 0
while True:
path = "Fig_"+str(i)+".png"
if os.path.exists(path):
i += 1
else:
break
plt.savefig(path, dpi=300, transparent=True)
plt.show()
def get_inductance(self):
"""Compute the coil self-inductance.
:returns: self inductance
:rtype: float"""
inductance = 0
n = len(self.segment_array)
# Mutual between segment pairs
for i, segment_i in enumerate(self.segment_array[:-1]):
for j, segment_j in enumerate(self.segment_array[i + 1:]):
res = calc_mutual(segment_i, segment_j)
inductance += 2*res[0]
# Self of segments
for i, segment_i in enumerate(self.segment_array):
res = self.wire.self_inductance(segment_i)
inductance += res
return inductance
class Loop(Coil):
"""Loop class TODO describe it
:param float radius: TODO description or delete
:param numpy.ndarray position: TODO description or delete
:param numpy.ndarray axis: TODO description or delete
:param float angle: TODO description or delete
"""
def __init__(self, radius: float, position: np.ndarray = None, axis: np.ndarray = None, angle: float = 0.,
wire=Wire()):
"""The constructor"""
position = self.VEC_0 if position is None else position
axis = self.VEC_Z if axis is None else axis
circle = Circle.from_rot(radius, position, axis, angle)
super().__init__([circle], wire)
    @classmethod
    def from_normal(cls, radius: float, position: np.ndarray = None, normal: np.ndarray = None, wire=Wire()):
        """ TODO describe method
:param float radius: TODO description or delete
:param numpy.ndarray position: TODO description or delete
:param numpy.ndarray normal: TODO description or delete
:param function wire: TODO description or delete
:returns: TODO description or delete
:rtype: TODO description or delete
"""
position = cls.VEC_0 if position is None else position
normal = cls.VEC_Y if normal is None else normal
axis, angle = geo.get_rotation(cls.VEC_Z, normal)
return cls(radius, position, axis, angle, wire)
class Solenoid(Coil):
"""Solenoid class TODO describe it
:param float radius: TODO description or delete
:param float length: TODO description or delete
:param int n_turns: TODO description or delete
:param numpy.ndarray position: TODO description or delete
:param numpy.ndarray axis: TODO description or delete
:param float angle: TODO description or delete
:param function wire: TODO description or delete
"""
def __init__(self, radius: float, length: float, n_turns: int,
position: np.ndarray = None, axis: np.ndarray = None, angle: float = 0.,
wire=Wire()):
"""constructor"""
segments = [Circle(radius, np.array([0., 0., z])) for z in np.linspace(-length/2, length/2, n_turns)]
super().__init__(segments, wire)
position = self.VEC_0 if position is None else position
axis = self.VEC_Z if axis is None else axis
self.move_to(position)
        self.rotate(angle, axis)
    @classmethod
    def from_normal(cls, radius, length, n_turns, position, normal, wire=Wire()):
        """ TODO describe method
:param function cls: TODO description or delete
:param float radius: TODO description or delete
:param float length: TODO description or delete
:param int n_turns: TODO description or delete
:param numpy.ndarray position: TODO description or delete
:param numpy.ndarray normal: TODO description or delete
:param function wire: TODO description or delete
:returns: TODO description or delete
:rtype: TODO description or delete
"""
axis, angle = geo.get_rotation(cls.VEC_Z, normal)
return cls(radius, length, n_turns, position, axis, angle, wire)
class Polygon(Coil):
"""Polygon class TODO describe it
:param TYPE polygon: TODO description or delete + type
:param function wire: TODO description or delete
"""
def __init__(self, polygon, wire):
"""the constructor"""
lines = []
for p0, p1 in zip(polygon[:-1], polygon[1:]):
lines.append(Line(p0, p1))
super().__init__(lines, wire)
class Helmholtz(Coil):
"""Helmholtz class TODO describe it
:param float radius: TODO description or delete
:param numpy.ndarray position: TODO description or delete
:param numpy.ndarray axis: TODO description or delete
:param float angle: TODO description or delete
:param function wire: TODO description or delete
"""
    def __init__(self, radius: float, position: np.ndarray = None, axis: np.ndarray = None, angle: float = 0.,
wire=Wire()):
segments = [Circle(radius, np.array([0, 0, -radius/2])),
Circle(radius, np.array([0, 0, radius/2]))]
super().__init__(segments, wire)
position = self.VEC_0 if position is None else position
axis = self.VEC_Z if axis is None else axis
self.move_to(position)
        self.rotate(angle, axis)
# class Birdcage(Coil):
# def __init__(self,
# radius, length, nwires, position=_vec_0, axis=_vec_z, angle=0,
# wire=Wire() ):
# segments = []
# θ_0 = 2*π/(nwires-1)/2 # Angular position of the first wire
# Θ = np.linspace(θ_0, 2*π-θ_0, nwires) # Vector of angular positions
# # Linear segments
# p0, p1 = _vec_0, np.array([0,0,length] )
# positions = np.array( [radius*cos(Θ), radius*sin(Θ), -length/2 ] )
# currents = cos(Θ) # Current in each segment
# for curr, pos in zip(currents, positions):
# segments.append( segment.Line(p0+pos, p1+pos, curr))
# # Arc segments
# integral_matrix = np.zeros( (nwires, nwires) )
# for i, line in enumerate(integral_matrix.T):
# line[i:] = 1
# currents = integral_matrix @ segments_current
# currents -= np.sum(arcs_currents)
# #arcs_pos # to be implemeted
# #arcs_angle # to be implemented
# magpy_collection = magpy.collection(sources)
# angle, axis = geo.get_rotation(geo.z_vector, normal)
# magpy_collection.rotate(angle*180/π, axis)
# magpy_collection.move(position)
# vmax = norm(magpy_collection.getB(position))*1.2
# super().__init__(magpy_collection, position, vmax)
class MTLR(Coil):
"""MTLR class TODO describe it
:param float inner_radius: TODO description or delete
:param float delta_radius: TODO description or delete
:param float line_width: TODO description or delete
:param int n_turns: TODO description or delete
:param float dielectric_thickness: TODO description or delete
:param numpy.ndarray anchor: TODO description or delete
:param numpy.ndarray axis: TODO description or delete
:param float angle: TODO description or delete
"""
def __init__(self, inner_radius: float, delta_radius: float, line_width: float, n_turns,
dielectric_thickness: float,
anchor: np.ndarray = None, axis: np.ndarray = None, angle: float = 0.):
"""the constructor"""
radii = np.array([inner_radius + n * delta_radius for n in range(n_turns)])
segments = []
for radius in radii:
segments.append(Circle.from_normal(radius))
segments.append(Circle.from_normal(radius, position=np.array([0., 0., -dielectric_thickness])))
wire = WireRect(line_width, )
super().__init__(segments, wire)
        if anchor is not None:
            self.translate(anchor)
        if axis is not None:
            self.rotate(angle, axis)
| 35.224924 | 110 | 0.623522 | 1,478 | 11,589 | 4.771989 | 0.14479 | 0.106338 | 0.120516 | 0.163051 | 0.504324 | 0.478378 | 0.415284 | 0.394017 | 0.379413 | 0.368354 | 0 | 0.012457 | 0.279575 | 11,589 | 328 | 111 | 35.332317 | 0.832315 | 0.412805 | 0 | 0.255814 | 0 | 0 | 0.004489 | 0 | 0 | 0 | 0 | 0.17378 | 0 | 1 | 0.131783 | false | 0 | 0.077519 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
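`Coil.get_inductance` above counts each unordered segment pair once and doubles it (since M_ij = M_ji), then adds one self-inductance term per segment. The bookkeeping can be illustrated standalone with stub values (the unit self-inductances and 0.1 mutual are arbitrary, chosen only to make the sum easy to check):

```python
def total_inductance(self_terms, mutual):
    """Sum self terms plus 2*M_ij over unordered pairs i < j."""
    total = sum(self_terms)
    n = len(self_terms)
    for i in range(n - 1):
        for j in range(i + 1, n):
            total += 2 * mutual(i, j)  # M_ij == M_ji, so each pair counts twice
    return total

# three segments, unit self-inductance, mutual 0.1 for every pair
L = total_inductance([1.0, 1.0, 1.0], lambda i, j: 0.1)
assert abs(L - 3.6) < 1e-9  # 3*1.0 + 2*0.1 for each of the 3 pairs
```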
2a94b2f6205bf9243a775bc0a747e64d843e675d | 1,243 | py | Python | setup.py | tipevo/webstruct | 17c8254898368ead7448a9145e98a125f8891558 | [
"MIT"
] | 210 | 2015-01-06T01:37:28.000Z | 2022-03-30T10:44:31.000Z | setup.py | tipevo/webstruct | 17c8254898368ead7448a9145e98a125f8891558 | [
"MIT"
] | 43 | 2015-02-05T05:49:53.000Z | 2021-09-02T10:27:24.000Z | setup.py | tipevo/webstruct | 17c8254898368ead7448a9145e98a125f8891558 | [
"MIT"
] | 60 | 2015-02-13T10:15:58.000Z | 2022-02-26T07:54:03.000Z | #!/usr/bin/env python
from setuptools import setup, find_packages
version = '0.6'
setup(
name='webstruct',
version=version,
description="A library for creating statistical NER systems that work on HTML data",
long_description=open('README.rst').read(),
author='Mikhail Korobov, Terry Peng',
author_email='kmike84@gmail.com, pengtaoo@gmail.com',
url='https://github.com/scrapinghub/webstruct',
packages=find_packages(),
license='MIT',
classifiers=[
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Scientific/Engineering :: Information Analysis",
"Topic :: Text Processing :: Linguistic",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
],
install_requires=['six', 'lxml', 'scikit-learn', 'tldextract', 'requests'],
)
| 37.666667 | 88 | 0.633146 | 129 | 1,243 | 6.062016 | 0.666667 | 0.14578 | 0.191816 | 0.132992 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015496 | 0.221239 | 1,243 | 32 | 89 | 38.84375 | 0.792355 | 0.01609 | 0 | 0 | 0 | 0 | 0.60802 | 0.018003 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.034483 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
aa55bf55c0f9402b5a56ad41dcd06e5f2fd4cd3e | 134 | py | Python | test/asynciotest.py | 943470549/Torando-sqlalchemy | 2088000a2885fcc638f92d473e2c6a8fe0eee573 | [
"Apache-2.0"
] | null | null | null | test/asynciotest.py | 943470549/Torando-sqlalchemy | 2088000a2885fcc638f92d473e2c6a8fe0eee573 | [
"Apache-2.0"
] | null | null | null | test/asynciotest.py | 943470549/Torando-sqlalchemy | 2088000a2885fcc638f92d473e2c6a8fe0eee573 | [
"Apache-2.0"
] | null | null | null | import asyncio
@asyncio.coroutine
def wget(host):
    print('wget %s...' % host)
    connect = asyncio.open_connection(host, 80)
r=yield from connect | 22.333333 | 46 | 0.671642 | 19 | 134 | 4.684211 | 0.736842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018519 | 0.19403 | 134 | 6 | 47 | 22.333333 | 0.805556 | 0 | 0 | 0 | 0 | 0 | 0.074074 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
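The `yield from` style above predates `async`/`await`. An equivalent modern sketch is shown below; the HTTP request line is illustrative only and needs a live network to actually run:

```python
import asyncio

async def wget(host):
    # open a TCP connection and issue a minimal HTTP/1.0 request
    reader, writer = await asyncio.open_connection(host, 80)
    writer.write(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    status_line = await reader.readline()
    writer.close()
    return status_line
```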
aa737c948e5b8683c4cb0820dd195df067f82809 | 1,950 | py | Python | raxcli/apps/monitoring/resources.py | racker/python-raxcli | c59d7ef9abca0a7cea56882113bd71feb6c5c6ef | [
"Apache-2.0"
] | 1 | 2020-01-16T09:45:28.000Z | 2020-01-16T09:45:28.000Z | raxcli/apps/monitoring/resources.py | racker/python-raxcli | c59d7ef9abca0a7cea56882113bd71feb6c5c6ef | [
"Apache-2.0"
] | null | null | null | raxcli/apps/monitoring/resources.py | racker/python-raxcli | c59d7ef9abca0a7cea56882113bd71feb6c5c6ef | [
"Apache-2.0"
] | null | null | null | # Copyright 2013 Rackspace
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from raxcli.models import Attribute, Model
class Check(Model):
"""
Check resource.
"""
id = Attribute()
label = Attribute()
type = Attribute()
entity_id = Attribute(view_list=False)
monitoring_zones = Attribute(view_list=False)
period = Attribute(view_list=False)
timeout = Attribute(view_list=False)
target_alias = Attribute(view_list=False)
target_resolver = Attribute(view_list=False)
disabled = Attribute(view_list=False)
details = Attribute(view_list=False)
class Entity(Model):
"""
Entity resource.
"""
id = Attribute()
label = Attribute()
uri = Attribute()
extra = Attribute(view_list=False)
agent_id = Attribute(view_list=False)
ip_addresses = Attribute(view_list=False)
class AgentToken(Model):
"""
Agent token resource.
"""
id = Attribute()
label = Attribute()
token = Attribute()
class Alarm(Model):
"""
Alarm resource.
"""
id = Attribute()
label = Attribute()
check_id = Attribute()
criteria = Attribute(view_list=False)
notification_plan_id = Attribute(view_list=False)
| 28.676471 | 74 | 0.706154 | 248 | 1,950 | 5.46371 | 0.443548 | 0.124723 | 0.1631 | 0.21107 | 0.231734 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005158 | 0.204615 | 1,950 | 67 | 75 | 29.104478 | 0.868472 | 0.435385 | 0 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
aa75bf1278203a261ca9adc736951d341c410334 | 232 | py | Python | crud-flask-demo/service/test/exception.py | wencan/crud-flask-demo | 1aa585761be2c7dfde334fe9cfb1658ebdfdc7d5 | [
"BSD-3-Clause"
] | null | null | null | crud-flask-demo/service/test/exception.py | wencan/crud-flask-demo | 1aa585761be2c7dfde334fe9cfb1658ebdfdc7d5 | [
"BSD-3-Clause"
] | null | null | null | crud-flask-demo/service/test/exception.py | wencan/crud-flask-demo | 1aa585761be2c7dfde334fe9cfb1658ebdfdc7d5 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Exceptions needed by the unit tests
# wencan
# 2019-04-23
from ..abcs import NoRowsAbstractException
__all__ = ("NoRowsForTest",)
class NoRowsForTest(NoRowsAbstractException):
'''not found'''
pass | 14.5 | 45 | 0.685345 | 24 | 232 | 6.458333 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051546 | 0.163793 | 232 | 16 | 46 | 14.5 | 0.747423 | 0.349138 | 0 | 0 | 0 | 0 | 0.091549 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.25 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
aa7a6c35125d4260733eadfef34a0412de81a4fc | 3,806 | py | Python | tests/unit/data/test_query.py | thiagortz/justice-league | 386429862f31ee54a60ce0c6f8c855d48c1c1865 | [
"MIT"
] | 2 | 2018-05-07T03:39:38.000Z | 2018-05-24T22:27:52.000Z | tests/unit/data/test_query.py | thiagortz/justice-league | 386429862f31ee54a60ce0c6f8c855d48c1c1865 | [
"MIT"
] | null | null | null | tests/unit/data/test_query.py | thiagortz/justice-league | 386429862f31ee54a60ce0c6f8c855d48c1c1865 | [
"MIT"
] | null | null | null | from unittest import TestCase
from unittest.mock import patch
from app.data.query import Query
class TestSolomonGrundy(TestCase):
@classmethod
def setUpClass(cls):
pass
@classmethod
def tearDownClass(cls):
pass
def setUp(self):
pass
def tearDown(self):
pass
@patch("app.service.business_service.SuperMan._request_get")
def test_resolve_score(self, mock_superman_request_get):
mock_superman_request_get.return_value.ok = True
mock_superman_request_get.return_value.text = '{"id": "15","pointing": 68.6,"last_query": "2017-11-22T21:13:28Z"}'
query = Query()
actual = query.resolve_score({'teste': 'teste'}, '10387595612')
mock_superman_request_get.assert_called_once_with(resource='5af099f2310000610096c6ee',
params={'CPF': '10387595612'})
self.assertIsNotNone(actual)
self.assertIsInstance(actual._fields, tuple)
@patch("app.service.business_service.SuperMan._request_get")
@patch("app.service.business_service.TheFlash._request_get")
@patch("app.service.business_service.SolomonGrundy._request_get")
def test_resolve_client(self, mock_sg_request_get, mock_tf_request_get, mock_sm_request_get):
mock_sg_request_get.return_value.ok = True
mock_sg_request_get.return_value.text = '{"id": "1","name": "Thiago Dias", "CPF":"59865377691", ' \
'"birth": "1991-05-21"}'
mock_tf_request_get.return_value.ok = True
mock_tf_request_get.return_value.text = '{"id": "15","financial_movement": 68.6,' \
'"last_query": "2017-11-22T21:13:28Z",' \
'"last_buy":{"id": "15","value": 60.6,"credit_card":{"id": 21,' \
'"number": "5547-7221-7804-6353","validity":"07/11/2019"}}}'
mock_sm_request_get.return_value.ok = True
mock_sm_request_get.return_value.text = '{"id": "15","pointing": 68.6,"last_query": "2017-11-22T21:13:28Z"}'
query = Query()
actual = query.resolve_client({'teste': 'teste'}, '10387595612')
mock_sm_request_get.assert_called_once_with(resource='5af099f2310000610096c6ee',
params={'CPF': '10387595612'})
mock_tf_request_get.assert_called_once_with(resource='5af0bb973100004a0096c74d',
params={'CPF': '10387595612'})
mock_sg_request_get.assert_called_once_with(resource='5af076bc3100004d0096c66b',
params={'CPF': '10387595612'})
self.assertIsNotNone(actual)
self.assertIsInstance(actual._fields, tuple)
@patch("app.service.business_service.TheFlash._request_get")
def test_resolve_event(self, mock_tf_request_get):
mock_tf_request_get.return_value.ok = True
mock_tf_request_get.return_value.text = '{"id": "15","financial_movement": 68.6,' \
'"last_query": "2017-11-22T21:13:28Z",' \
'"last_buy":{"id": "15","value": 60.6,"credit_card":{"id": 21,' \
'"number": "5547-7221-7804-6353","validity":"07/11/2019"}}}'
query = Query()
actual = query.resolve_event({'teste': 'teste'}, '10387595612')
mock_tf_request_get.assert_called_once_with(resource='5af0bb973100004a0096c74d',
params={'CPF': '10387595612'})
self.assertIsNotNone(actual)
self.assertIsInstance(actual._fields, tuple)
| 43.747126 | 122 | 0.584341 | 403 | 3,806 | 5.215881 | 0.225806 | 0.118934 | 0.076118 | 0.099905 | 0.786394 | 0.743578 | 0.69648 | 0.619886 | 0.557088 | 0.557088 | 0 | 0.122742 | 0.287178 | 3,806 | 86 | 123 | 44.255814 | 0.652046 | 0 | 0 | 0.590164 | 0 | 0.032787 | 0.290857 | 0.174724 | 0 | 0 | 0 | 0 | 0.180328 | 1 | 0.114754 | false | 0.065574 | 0.04918 | 0 | 0.180328 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
aa7dc6406df915278d88a2e31b1b3d6a2cc2c2c1 | 653 | py | Python | iiif_validator/tests/size_nofull.py | zimeon/iiif-image-validator | e38d37539a071aec8342ae2d3c656c772d3178cb | [
"Apache-2.0"
] | null | null | null | iiif_validator/tests/size_nofull.py | zimeon/iiif-image-validator | e38d37539a071aec8342ae2d3c656c772d3178cb | [
"Apache-2.0"
] | null | null | null | iiif_validator/tests/size_nofull.py | zimeon/iiif-image-validator | e38d37539a071aec8342ae2d3c656c772d3178cb | [
"Apache-2.0"
] | null | null | null | from .test import BaseTest, ValidatorError
import random
class Test_No_Size_Up(BaseTest):
label = 'Size greater than 100% should only work with the ^ notation'
level = 0
category = 3
versions = [u'3.0']
validationInfo = None
def run(self, result):
params = {'size': 'full'}
try:
img = result.get_image(params)
        except Exception:
pass
# should this be a warning as size extension called full could be allowed
self.validationInfo.check('size', result.last_status != 200, True, result, "Version 3.0 has replaced the size full with max.", warning=True)
return result
| 29.681818 | 148 | 0.63706 | 86 | 653 | 4.77907 | 0.686047 | 0.009732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02537 | 0.275651 | 653 | 21 | 149 | 31.095238 | 0.843552 | 0.108729 | 0 | 0 | 0 | 0 | 0.210708 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
aa8b27e74797e5f42765c11941b9342095b4d971 | 3,131 | py | Python | web/system/migrations/0004_auto_20190214_1506.py | hannesb0/MSWH | ce214f26369106c124052638e93cc38fbd58cc91 | [
"BSD-3-Clause-LBNL"
] | 5 | 2019-05-23T00:54:33.000Z | 2021-06-01T18:06:49.000Z | web/system/migrations/0004_auto_20190214_1506.py | hannesb0/MSWH | ce214f26369106c124052638e93cc38fbd58cc91 | [
"BSD-3-Clause-LBNL"
] | 36 | 2019-05-22T23:02:35.000Z | 2021-04-04T21:24:17.000Z | web/system/migrations/0004_auto_20190214_1506.py | hannesb0/MSWH | ce214f26369106c124052638e93cc38fbd58cc91 | [
"BSD-3-Clause-LBNL"
] | 14 | 2019-08-25T01:27:40.000Z | 2021-11-17T19:25:02.000Z | # Generated by Django 2.1.2 on 2019-02-14 23:06
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("system", "0003_configuration_type"),
]
operations = [
migrations.CreateModel(
name="Climate",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
(
"name",
models.CharField(default="undefined", max_length=255),
),
(
"climate_zone",
models.CharField(default="undefined", max_length=255),
),
(
"data_source",
models.CharField(default="undefined", max_length=255),
),
("data", models.TextField(default="undefined")),
],
),
migrations.AlterField(
model_name="component",
name="type",
field=models.CharField(
choices=[
[
"converter",
[
["hp", "heat pump"],
["pv", "photovoltaic"],
["sol_col", "solar collector"],
["el_res", "electric resistance"],
["gas_burn", "gas burner"],
],
],
[
"storage",
[
["hp_tank", "heat pump tank"],
["sol_tank", "solar storage tank"],
],
],
[
"distribution",
[
["inv", "inverter"],
["dist_pump", "circulator pump"],
["sol_pump", "solar pump"],
],
],
],
default="undefined",
max_length=255,
),
),
migrations.AlterField(
model_name="configuration",
name="type",
field=models.CharField(
choices=[
["solar_electric", "solar electric"],
["solar_thermal_gas_backup", "solar thermal gas backup"],
],
default="undefined",
max_length=255,
),
),
migrations.AddField(
model_name="configuration",
name="climate",
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="system.Climate",
),
),
]
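The `type` field above uses Django's grouped-choices form, `[group_label, [[value, label], ...]]`. As a quick stdlib sketch (the helper name and the trimmed data are illustrative, not part of the migration), the values actually stored in the database are the flattened inner pairs:

```python
# Flatten Django-style grouped choices into the list of valid stored values.
def valid_values(choices):
    values = []
    for _group_label, members in choices:
        for value, _human_label in members:
            values.append(value)
    return values

# Trimmed-down copy of the structure used in the migration above.
choices = [
    ["converter", [["hp", "heat pump"], ["pv", "photovoltaic"]]],
    ["storage", [["hp_tank", "heat pump tank"]]],
]

print(valid_values(choices))  # ['hp', 'pv', 'hp_tank']
```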
| 31.626263 | 77 | 0.358352 | 192 | 3,131 | 5.697917 | 0.453125 | 0.087751 | 0.086837 | 0.11426 | 0.258684 | 0.258684 | 0.125229 | 0.085923 | 0 | 0 | 0 | 0.023464 | 0.537209 | 3,131 | 98 | 78 | 31.94898 | 0.731539 | 0.014372 | 0 | 0.413043 | 1 | 0 | 0.156615 | 0.01524 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021739 | 0 | 0.054348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
aa99e65bd1d2319e6725de26483b6495646cf65d | 5,135 | py | Python | Movie recommendation/Movie_recommendation.py | ky13-troj/data-science-projects | c5b3a406e3d1fa7c9c532ee5b1142fd2f25f9c8d | [
"MIT"
] | null | null | null | Movie recommendation/Movie_recommendation.py | ky13-troj/data-science-projects | c5b3a406e3d1fa7c9c532ee5b1142fd2f25f9c8d | [
"MIT"
] | null | null | null | Movie recommendation/Movie_recommendation.py | ky13-troj/data-science-projects | c5b3a406e3d1fa7c9c532ee5b1142fd2f25f9c8d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Updated movie recommendation.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1hURZRYyKGXs0DRedpgAx73zKrjMeo5ia
Importing dependencies
"""
import numpy as np
import pandas as pd
import difflib # Users may type a movie name slightly differently from its real title; difflib finds the closest matches
from sklearn.feature_extraction.text import TfidfVectorizer # Converts text into TF-IDF feature vectors (tokenization + weighting)
from sklearn.metrics.pairwise import cosine_similarity # For checking the similarity between movies
"""Data Pre-Processing"""
# Loading the dataset:
movies_dataset = pd.read_csv('/content/drive/MyDrive/PROJECTS/Netflix Movie Recommendation/Datasets/updated dataset/movies.csv')
movies_dataset.head(5)
movies_dataset.shape
# Feature Selection
selected_features = ['genres', 'keywords', 'tagline', 'cast', 'director']
# Replacing the missing values with NULL Strings
for feature in selected_features:
movies_dataset[feature] = movies_dataset[feature].fillna('')
# Combining all the 5 Features:
combine_params = movies_dataset['genres']+' '+movies_dataset['keywords'] + ' ' + movies_dataset['tagline'] + ' '+movies_dataset['cast']+' '+movies_dataset['director']
combine_params.shape
combine_params.iloc[69]
# Converting text to feature vector
vectorizer = TfidfVectorizer() # Create a TfidfVectorizer object that turns the combined text into TF-IDF feature vectors
# Transforming text data in feature vectors
feature_vectors = vectorizer.fit_transform(combine_params)
print(feature_vectors)
# Cosine Similarity
similarity = cosine_similarity(feature_vectors)
print(similarity)
# Asking user for movie name
movie_name = input("Enter the Movie you've watched : ")
list_of_all_titles = movies_dataset['title'].tolist() # Converting into a list
# Quick check that the selected movie is actually present in the database
if 'The Notebook' in list_of_all_titles:
print("present")
movie_name = input("Enter the Movie you've watched : ")
# Finding closest match for the user input movie
close_matched_movie = difflib.get_close_matches(movie_name, list_of_all_titles)
print(close_matched_movie)
"""Now we can see it's returning 3 possible matches 1st being the closest match lets choose the 1st element pf this list as the closest matched movie
"""
close_match = close_matched_movie[0]
print(close_match)
"""Now, we've the proper movie name that is present in the database. Let's find it's index"""
index_of_the_movie = movies_dataset[movies_dataset['title'] == close_match]['index']
print(index_of_the_movie)
"""Yields two indexes. So, let's select one of them"""
index_of_the_movie = index_of_the_movie.values[0]
print(index_of_the_movie)
# Finding the similarity confidence of every movie with the user input movie
similar_movies = list(enumerate(similarity[index_of_the_movie]))
print(similar_movies)
"""It basically yields similarity between the movie with all the other movies . Movies having Confidence value close to 1 will be much similar ."""
# sorting the movies based on their similarity confidence
sorted_similar_movie = sorted(similar_movies, key = lambda x:x[1], reverse=True)
# The key selects the 2nd value of each tuple in similar_movies, i.e. the confidence value
# We want a descending sort, so reverse is set to True
print(sorted_similar_movie)
"""Now we can see movie having highest confidence value is shown at first and least at last So, our sorting is done.
From here we can easily say that movies that are close to the first element of the list will have much more similarity.
And by accessing the 1st element of the tuple we cam easily access the index of the similar movies.
Now, let's print the similar movie names
"""
print('Movies suggested for you : \n')
i = 1
for movie in sorted_similar_movie:
ind = movie[0]
title_from_index = movies_dataset[movies_dataset.index==ind]['title'].values[0]
if(i <= 69):
print(i,'.',title_from_index)
i += 1
print("I will give you results even if you slightly mistype the movie name so Take A CHILL PILL.")
movie_name = input('Enter your favourite movie name : ')
list_of_all_titles = movies_dataset['title'].tolist()
find_close_match = difflib.get_close_matches(movie_name, list_of_all_titles,1)
close_match = find_close_match[0]
print(f"Finding Similar movies for : {close_match}")
print("Don't Give 1 !!!")
choice = int(input("How many Similar movies do you want ? : "))
index_of_the_movie = movies_dataset[movies_dataset.title == close_match]['index'].values[0]
similar_movies = list(enumerate(similarity[index_of_the_movie]))
sorted_similar_movies = sorted(similar_movies, key = lambda x:x[1], reverse=True)
print('Movies suggested for you : \n')
i = 1
for movie in sorted_similar_movies:
ind = movie[0]
title_from_index = movies_dataset[movies_dataset.index==ind]['title'].values[0]
if(i <= choice):
if i == 1:
print(f"{i}.{title_from_index} (You can Re-watch it too 😉)")
else:
print(f"{i}.{title_from_index}")
i += 1
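The fuzzy lookup above rests entirely on `difflib.get_close_matches`; a minimal self-contained illustration of why a mistyped title still resolves:

```python
import difflib

titles = ["The Notebook", "The Dark Knight", "Notting Hill"]

# A slightly mistyped query still resolves to the intended title;
# matches are ordered best-first, which is why the script takes [0].
matches = difflib.get_close_matches("the notebok", titles, n=3)
print(matches[0])  # The Notebook
```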
| 34.695946 | 166 | 0.766894 | 784 | 5,135 | 4.867347 | 0.315051 | 0.068134 | 0.023585 | 0.031447 | 0.265985 | 0.229036 | 0.210692 | 0.210692 | 0.190252 | 0.143082 | 0 | 0.008167 | 0.141577 | 5,135 | 147 | 167 | 34.931973 | 0.857305 | 0.19591 | 0 | 0.272727 | 1 | 0.015152 | 0.211583 | 0.037512 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075758 | null | null | 0.257576 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
aaa440a2b82419b26a895541ab2442dc76193e1e | 732 | py | Python | features_fixer/reducer/low_variance.py | LudwikBielczynski/features_fixer | 43114e3d986265a1e6e34644d3734a361d3fa926 | [
"MIT"
] | null | null | null | features_fixer/reducer/low_variance.py | LudwikBielczynski/features_fixer | 43114e3d986265a1e6e34644d3734a361d3fa926 | [
"MIT"
] | null | null | null | features_fixer/reducer/low_variance.py | LudwikBielczynski/features_fixer | 43114e3d986265a1e6e34644d3734a361d3fa926 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from typing import Union
import pandas as pd
from sklearn.feature_selection import VarianceThreshold
from .abstract import ReducerAbstract
class LowVariance(ReducerAbstract):
reducer: VarianceThreshold
def transform(self,
df: pd.DataFrame,
threshold: int = 0,
) -> pd.DataFrame:
        '''
        Removes every feature whose variance does not exceed the given threshold.
        With the default threshold of 0, only constant features are dropped.
        '''
self.reducer = VarianceThreshold(threshold)
self.reducer.fit(X=df)
df_reduced = self.reducer.transform(df)
return df_reduced
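Conceptually, the sklearn `VarianceThreshold` used above keeps a column only when its variance is strictly greater than the threshold. A pure-Python sketch of that rule (names hypothetical, not the sklearn API):

```python
# Pure-Python sketch of variance-threshold feature selection, mirroring
# what the reducer above delegates to sklearn's VarianceThreshold.
def variance(col):
    mean = sum(col) / len(col)
    return sum((x - mean) ** 2 for x in col) / len(col)

def low_variance_filter(rows, threshold=0.0):
    cols = list(zip(*rows))
    keep = [i for i, col in enumerate(cols) if variance(col) > threshold]
    return [[row[i] for i in keep] for row in rows]

data = [
    [1.0, 5.0, 0.0],
    [1.0, 6.0, 0.0],
    [1.0, 7.0, 0.0],
]
# Columns 0 and 2 are constant (variance 0) and get dropped.
print(low_variance_filter(data))  # [[5.0], [6.0], [7.0]]
```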
| 29.28 | 96 | 0.665301 | 78 | 732 | 6.205128 | 0.602564 | 0.068182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009399 | 0.273224 | 732 | 24 | 97 | 30.5 | 0.900376 | 0.20765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.333333 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
aaa875a2365e0c7a1a87e81840260b89466dd120 | 2,000 | py | Python | fitness/private/pptx/shapes/group.py | ovitrac/fitness2 | 18e6715629ebd38ef1bdd88d07f43e9958bb6be5 | [
"MIT"
] | null | null | null | fitness/private/pptx/shapes/group.py | ovitrac/fitness2 | 18e6715629ebd38ef1bdd88d07f43e9958bb6be5 | [
"MIT"
] | null | null | null | fitness/private/pptx/shapes/group.py | ovitrac/fitness2 | 18e6715629ebd38ef1bdd88d07f43e9958bb6be5 | [
"MIT"
] | null | null | null | # encoding: utf-8
"""GroupShape and related objects."""
from __future__ import absolute_import, division, print_function, unicode_literals
from fitness.private.pptx.dml.effect import ShadowFormat
from fitness.private.pptx.enum.shapes import MSO_SHAPE_TYPE
from fitness.private.pptx.shapes.base import BaseShape
from fitness.private.pptx.util import lazyproperty
class GroupShape(BaseShape):
"""A shape that acts as a container for other shapes."""
@property
def click_action(self):
"""Unconditionally raises `TypeError`.
A group shape cannot have a click action or hover action.
"""
raise TypeError("a group shape cannot have a click action")
@property
def has_text_frame(self):
"""Unconditionally |False|.
A group shape does not have a textframe and cannot itself contain
text. This does not impact the ability of shapes contained by the
group to each have their own text.
"""
return False
@lazyproperty
def shadow(self):
"""|ShadowFormat| object representing shadow effect for this group.
A |ShadowFormat| object is always returned, even when no shadow is
explicitly defined on this group shape (i.e. when the group inherits
its shadow behavior).
"""
return ShadowFormat(self._element.grpSpPr)
@property
def shape_type(self):
"""Member of :ref:`MsoShapeType` identifying the type of this shape.
Unconditionally `MSO_SHAPE_TYPE.GROUP` in this case
"""
return MSO_SHAPE_TYPE.GROUP
@lazyproperty
def shapes(self):
"""|GroupShapes| object for this group.
The |GroupShapes| object provides access to the group's member shapes
and provides methods for adding new ones.
"""
from fitness.private.pptx.shapes.shapetree import GroupShapes
return GroupShapes(self._element, self)
| 32.258065 | 83 | 0.6655 | 244 | 2,000 | 5.377049 | 0.442623 | 0.041921 | 0.068598 | 0.083841 | 0.106707 | 0.064024 | 0.064024 | 0.064024 | 0.064024 | 0 | 0 | 0.00068 | 0.2645 | 2,000 | 61 | 84 | 32.786885 | 0.89123 | 0.44 | 0 | 0.227273 | 0 | 0 | 0.044543 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.227273 | false | 0 | 0.272727 | 0 | 0.727273 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
aacf978a93f5864a265e03825ef8307fbd6c8419 | 223 | py | Python | cursoemvideoPy/Mundo1/ex005.py | BrCarlini/exPython | 3bed986e5bfa5eae191b6b18306448926aed48fd | [
"MIT"
] | null | null | null | cursoemvideoPy/Mundo1/ex005.py | BrCarlini/exPython | 3bed986e5bfa5eae191b6b18306448926aed48fd | [
"MIT"
] | null | null | null | cursoemvideoPy/Mundo1/ex005.py | BrCarlini/exPython | 3bed986e5bfa5eae191b6b18306448926aed48fd | [
"MIT"
] | null | null | null | print('=========== SUCESSOR E ANTECESSOR ===========')
n = int(input('Digite um numero: '))
sucessor = n + 1
antecessor = n - 1
print(f'O número escolhido é {n}\nSeu sucessor é {sucessor}\nE seu antecessor é {antecessor}')
| 37.166667 | 94 | 0.627803 | 32 | 223 | 4.375 | 0.59375 | 0.157143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.147982 | 223 | 5 | 95 | 44.6 | 0.726316 | 0 | 0 | 0 | 0 | 0 | 0.659193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
aad267a73f68e4c9045a02dfe83c4b923158485e | 2,018 | py | Python | python/layers/fullyconnected.py | fierval/EigenSiNN | 4ed01b47d4b13b9c9e29622475d821868499942d | [
"MIT"
] | null | null | null | python/layers/fullyconnected.py | fierval/EigenSiNN | 4ed01b47d4b13b9c9e29622475d821868499942d | [
"MIT"
] | null | null | null | python/layers/fullyconnected.py | fierval/EigenSiNN | 4ed01b47d4b13b9c9e29622475d821868499942d | [
"MIT"
] | null | null | null | import tstcommon.commondata2d as cd
import torch
import torch.nn as nn
from utils import to_cpp
in_feat = 8
out_feat = 4
fc = nn.Linear(in_feat, out_feat, bias=False)
fc.weight.data = cd.weights
output = fc(cd.inp.reshape((3,8)))
fakeloss = torch.tensor([[0.13770211, 0.28582627, 0.86899745, 0.27578735],
[0.04713255, 0.51820499, 0.27709258, 0.74432141],
[0.47782332, 0.82197350, 0.52797425, 0.03082085]], requires_grad=True)
output.backward(fakeloss)
fakeloss_cpp = to_cpp(fakeloss)
inp_cpp = to_cpp(cd.inp.reshape((3,8)))
inp_grad_cpp = to_cpp(cd.inp.grad)
output_cpp = to_cpp(output)
weights_cpp = to_cpp(fc.weight.transpose(1, 0))
dweights_cpp = to_cpp(fc.weight.grad.transpose(1, 0))
print(f"fakeloss: {fakeloss_cpp}")
print(f"input: {cd.inp.reshape((3,8))}")
print(f"dL/dX: {inp_grad_cpp}")
print(f"output: {output_cpp}")
print(f"weights: {weights_cpp}")
print(f"dweights: {dweights_cpp}")
print("#########################################################")
######## With bias #########
cd.inp.grad.zero_()
fc = nn.Linear(in_feat, out_feat, bias=True)
fc.weight.data = torch.tensor([[ 0.30841491, 0.16301581, 0.05912393, 0.32572004, -0.00591815,
-0.07333553, 0.16375038, -0.35274175],
[ 0.19089887, -0.24521475, 0.27066174, -0.00526837, -0.18401390,
-0.20650741, -0.28048125, 0.29642352],
[-0.15496132, 0.15089461, 0.16939566, -0.25025240, -0.18078347,
-0.07853529, -0.32877934, 0.19627282],
[-0.28125578, -0.15781732, -0.32488498, -0.08520141, -0.27685770,
-0.02988693, 0.18739149, 0.32216403]], requires_grad=True)
fc.bias.data = torch.tensor([0.87039185, 0.08955163, 0.14195210, 0.51105964], requires_grad=True)
output = fc(cd.inp.reshape((3,8)))
output.backward(fakeloss)
output_cpp = to_cpp(output)
dweights_cpp = to_cpp(fc.weight.grad.transpose(1, 0))
bias_cpp = to_cpp(fc.bias)
dbias_cpp = to_cpp(fc.bias.grad)
print(f"output: {output_cpp}")
print(f"bias {bias_cpp}")
print(f"dL/db {dbias_cpp}")
| 32.031746 | 97 | 0.662537 | 316 | 2,018 | 4.101266 | 0.297468 | 0.042438 | 0.061728 | 0.03858 | 0.280093 | 0.177469 | 0.177469 | 0.101852 | 0.060185 | 0.060185 | 0 | 0.256425 | 0.132309 | 2,018 | 62 | 98 | 32.548387 | 0.483724 | 0.00446 | 0 | 0.217391 | 0 | 0 | 0.125628 | 0.040201 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.086957 | 0 | 0.086957 | 0.217391 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
aad90765e0099f16eff02d3d101dba4f2ce0032a | 451 | py | Python | locust/test/runtests.py | echonest/locust | ba9ec8cf235203f6b970593bd98426f8064c17e0 | [
"MIT"
] | null | null | null | locust/test/runtests.py | echonest/locust | ba9ec8cf235203f6b970593bd98426f8064c17e0 | [
"MIT"
] | null | null | null | locust/test/runtests.py | echonest/locust | ba9ec8cf235203f6b970593bd98426f8064c17e0 | [
"MIT"
] | 1 | 2019-04-16T19:31:42.000Z | 2019-04-16T19:31:42.000Z | from gevent import monkey
monkey.patch_all(thread=False)
import unittest
from test_locust_class import TestTaskSet, TestWebLocustClass, TestCatchResponse
from test_stats import TestRequestStats, TestRequestStatsWithWebserver, TestInspectLocust
from test_runners import TestMasterRunner, TestMessageSerializing
from test_taskratio import TestTaskRatio
from test_client import TestHttpSession
if __name__ == '__main__':
unittest.main()
| 32.214286 | 90 | 0.842572 | 47 | 451 | 7.765957 | 0.617021 | 0.109589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119734 | 451 | 13 | 91 | 34.692308 | 0.919395 | 0 | 0 | 0 | 0 | 0 | 0.018265 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.7 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
aad9e8c584de6806bfa542479b2cdfcfa8959bf8 | 37,379 | py | Python | DaisyXMusic/modules/play.py | xylanzii/DaisyXMusic | 0845ce61f3ed035c2be8d80b1455f327ce2a6c31 | [
"MIT"
] | null | null | null | DaisyXMusic/modules/play.py | xylanzii/DaisyXMusic | 0845ce61f3ed035c2be8d80b1455f327ce2a6c31 | [
"MIT"
] | null | null | null | DaisyXMusic/modules/play.py | xylanzii/DaisyXMusic | 0845ce61f3ed035c2be8d80b1455f327ce2a6c31 | [
"MIT"
] | null | null | null | import os
from asyncio import QueueEmpty
from os import path
from typing import Callable
import aiofiles
import aiohttp
import requests
from PIL import Image, ImageDraw, ImageFont
from pyrogram import Client, filters
from pyrogram.errors import UserAlreadyParticipant
from pyrogram.types import InlineKeyboardButton, InlineKeyboardMarkup, Message
from pytgcalls import StreamType
from pytgcalls.types.input_stream import AudioPiped
from youtube_search import YoutubeSearch
from DaisyXMusic.config import DURATION_LIMIT, que
from DaisyXMusic.function.admins import admins as a
from DaisyXMusic.helpers.admins import get_administrators
from DaisyXMusic.helpers.channelmusic import get_chat_id
from DaisyXMusic.helpers.decorators import authorized_users_only
from DaisyXMusic.helpers.filters import command, other_filters
from DaisyXMusic.helpers.gets import get_file_name
from DaisyXMusic.services.pytgcalls import pytgcalls
from DaisyXMusic.services.pytgcalls.pytgcalls import client as USER
from DaisyXMusic.services.queues import queues
from DaisyXMusic.services.youtube.youtube import get_audio
chat_id = None
DISABLED_GROUPS = []
useer = "NaN"
ACTV_CALLS = []
def cb_admin_check(func: Callable) -> Callable:
async def decorator(client, cb):
        admemes = a.get(cb.message.chat.id) or []
if cb.from_user.id in admemes:
return await func(client, cb)
else:
await cb.answer("You ain't allowed!", show_alert=True)
return
return decorator
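`cb_admin_check` follows the usual async gate-decorator pattern: run the wrapped handler only when the caller passes a check, otherwise answer and bail out. A generic, runnable sketch of the same pattern (names hypothetical):

```python
import asyncio

# Gate decorator: the inner wrapper invokes the handler only when the
# caller satisfies the check, mirroring cb_admin_check above.
def gated(check):
    def wrap(func):
        async def inner(user, *args):
            if check(user):
                return await func(user, *args)
            return "denied"  # stands in for cb.answer(...)
        return inner
    return wrap

@gated(lambda user: user == "admin")
async def handler(user):
    return "ok"

print(asyncio.run(handler("admin")))  # ok
print(asyncio.run(handler("guest")))  # denied
```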
# Convert seconds to mm:ss
def convert_seconds(seconds):
seconds = seconds % (24 * 3600)
seconds %= 3600
minutes = seconds // 60
seconds %= 60
return "%02d:%02d" % (minutes, seconds)
# Convert hh:mm:ss to seconds
def time_to_seconds(time):
stringt = str(time)
return sum(int(x) * 60**i for i, x in enumerate(reversed(stringt.split(":"))))
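The two duration helpers above are near-inverses over the mm:ss range; restated here as a self-contained sanity check:

```python
# Self-contained restatement of the two duration helpers above.
def convert_seconds(seconds):
    seconds = seconds % (24 * 3600)
    seconds %= 3600
    minutes = seconds // 60
    seconds %= 60
    return "%02d:%02d" % (minutes, seconds)

def time_to_seconds(time):
    stringt = str(time)
    return sum(int(x) * 60**i for i, x in enumerate(reversed(stringt.split(":"))))

print(convert_seconds(185))        # 03:05
print(time_to_seconds("03:05"))    # 185
print(time_to_seconds("1:02:03"))  # 3723
```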
# Change image size
def changeImageSize(maxWidth, maxHeight, image):
widthRatio = maxWidth / image.size[0]
heightRatio = maxHeight / image.size[1]
newWidth = int(widthRatio * image.size[0])
newHeight = int(heightRatio * image.size[1])
newImage = image.resize((newWidth, newHeight))
return newImage
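Note that `changeImageSize` scales each axis by its own ratio, so the ratios cancel and the image is stretched to exactly the target box (up to float truncation) rather than letterboxed. The arithmetic in isolation, without PIL:

```python
# The size arithmetic from changeImageSize above, PIL-free: each axis is
# scaled independently, so the result is simply the target box and the
# aspect ratio of the source is not preserved.
def change_size(max_w, max_h, w, h):
    width_ratio = max_w / w
    height_ratio = max_h / h
    return int(width_ratio * w), int(height_ratio * h)

print(change_size(1280, 720, 640, 480))  # (1280, 720): a 4:3 image is stretched to 16:9
```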
async def generate_cover(requested_by, title, views, duration, thumbnail):
async with aiohttp.ClientSession() as session:
async with session.get(thumbnail) as resp:
if resp.status == 200:
f = await aiofiles.open("background.png", mode="wb")
await f.write(await resp.read())
await f.close()
image1 = Image.open("./background.png")
image2 = Image.open("./etc/foreground.png")
image3 = changeImageSize(1280, 720, image1)
image4 = changeImageSize(1280, 720, image2)
image5 = image3.convert("RGBA")
image6 = image4.convert("RGBA")
Image.alpha_composite(image5, image6).save("temp.png")
img = Image.open("temp.png")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("etc/font.otf", 32)
draw.text((205, 550), f"Title: {title}", (51, 215, 255), font=font)
draw.text((205, 590), f"Duration: {duration}", (255, 255, 255), font=font)
draw.text((205, 630), f"Views: {views}", (255, 255, 255), font=font)
draw.text(
(205, 670),
f"Added By: {requested_by}",
(255, 255, 255),
font=font,
)
img.save("final.png")
os.remove("temp.png")
os.remove("background.png")
@Client.on_message(filters.command("playlist") & filters.group & ~filters.edited)
async def playlist(client, message):
global que
if message.chat.id in DISABLED_GROUPS:
return
queue = que.get(message.chat.id)
    if not queue:
        await message.reply_text("Player is idle")
        return
temp = []
for t in queue:
temp.append(t)
now_playing = temp[0][0]
by = temp[0][1].mention(style="md")
msg = "**Now Playing** in {}".format(message.chat.title)
msg += "\n- " + now_playing
msg += "\n- Req by " + by
temp.pop(0)
if temp:
msg += "\n\n"
msg += "**Queue**"
for song in temp:
name = song[0]
usr = song[1].mention(style="md")
msg += f"\n- {name}"
msg += f"\n- Req by {usr}\n"
await message.reply_text(msg)
# ============================= Settings =========================================
def updated_stats(chat, queue, vol=100):  # sync: every call site uses it without await
if chat.id in pytgcalls.active_calls:
# if chat.id in active_calls:
stats = "Settings of **{}**".format(chat.title)
        if len(queue) > 0:
            stats += "\n\n"
            stats += "Volume : {}%\n".format(vol)
            stats += "Songs in queue : `{}`\n".format(len(queue))
stats += "Now Playing : **{}**\n".format(queue[0][0])
stats += "Requested by : {}".format(queue[0][1].mention)
else:
stats = None
return stats
def r_ply(type_):
    # type_ ("play"/"pause") is accepted for call-site symmetry;
    # the same markup is returned either way
mar = InlineKeyboardMarkup(
[
[
InlineKeyboardButton("⏹", "leave"),
InlineKeyboardButton("⏸", "puse"),
InlineKeyboardButton("▶️", "resume"),
InlineKeyboardButton("⏭", "skip"),
],
[
InlineKeyboardButton("📬Channel ", url=f"https://t.me/Vylanesu")
],
[InlineKeyboardButton("❌ Close", "cls")],
]
)
return mar
@Client.on_message(filters.command("current") & filters.group & ~filters.edited)
async def ee(client, message):
if message.chat.id in DISABLED_GROUPS:
return
queue = que.get(message.chat.id)
stats = updated_stats(message.chat, queue)
if stats:
await message.reply(stats)
else:
await message.reply("No VC instances running in this chat")
@Client.on_message(filters.command("player") & filters.group & ~filters.edited)
@authorized_users_only
async def settings(client, message):
if message.chat.id in DISABLED_GROUPS:
await message.reply("Music Player is Disabled")
return
playing = None
chat_id = get_chat_id(message.chat)
if chat_id in pytgcalls.active_chats:
playing = True
queue = que.get(chat_id)
stats = updated_stats(message.chat, queue)
if stats:
if playing:
await message.reply(stats, reply_markup=r_ply("pause"))
else:
await message.reply(stats, reply_markup=r_ply("play"))
else:
await message.reply("No VC instances running in this chat")
@Client.on_message(
filters.command("musicplayer") & ~filters.edited & ~filters.bot & ~filters.private
)
@authorized_users_only
async def hfmm(_, message):
global DISABLED_GROUPS
    try:
        message.from_user.id
    except AttributeError:
        # anonymous admins and channel posts have no from_user
        return
if len(message.command) != 2:
        await message.reply_text(
            "I only recognize `/musicplayer on` and `/musicplayer off`"
        )
return
    status = message.text.split(None, 1)[1]
    if status.lower() == "on":
        lel = await message.reply("`Processing...`")
        if message.chat.id not in DISABLED_GROUPS:
await lel.edit("Music Player Already Activated In This Chat")
return
DISABLED_GROUPS.remove(message.chat.id)
await lel.edit(
f"Music Player Successfully Enabled For Users In The Chat {message.chat.id}"
)
elif status == "OFF" or status == "off" or status == "Off":
lel = await message.reply("`Processing...`")
if message.chat.id in DISABLED_GROUPS:
await lel.edit("Music Player Already turned off In This Chat")
return
DISABLED_GROUPS.append(message.chat.id)
await lel.edit(
f"Music Player Successfully Deactivated For Users In The Chat {message.chat.id}"
)
else:
        await message.reply_text(
            "I only recognize `/musicplayer on` and `/musicplayer off`"
        )
@Client.on_callback_query(filters.regex(pattern=r"^(playlist)$"))
async def p_cb(b, cb):
global que
que.get(cb.message.chat.id)
type_ = cb.matches[0].group(1)
cb.message.chat.id
cb.message.chat
cb.message.reply_markup.inline_keyboard[1][0].callback_data
if type_ == "playlist":
queue = que.get(cb.message.chat.id)
        if not queue:
            await cb.message.edit("Player is idle")
            return
temp = []
for t in queue:
temp.append(t)
now_playing = temp[0][0]
by = temp[0][1].mention(style="md")
msg = "<b>Now Playing</b> in {}".format(cb.message.chat.title)
msg += "\n- " + now_playing
msg += "\n- Req by " + by
temp.pop(0)
if temp:
msg += "\n\n"
msg += "**Queue**"
for song in temp:
name = song[0]
usr = song[1].mention(style="md")
msg += f"\n- {name}"
msg += f"\n- Req by {usr}\n"
await cb.message.edit(msg)
@Client.on_callback_query(
filters.regex(pattern=r"^(play|pause|skip|leave|puse|resume|menu|cls)$")
)
@cb_admin_check
async def m_cb(b, cb):
    global que
    chat = cb.message.chat
    if chat.title.startswith("Channel Music: ") and chat.title[15:].isnumeric():
        chat_id = int(chat.title[15:])
    else:
        chat_id = chat.id
    qeue = que.get(chat_id)
type_ = cb.matches[0].group(1)
cb.message.chat.id
m_chat = cb.message.chat
the_data = cb.message.reply_markup.inline_keyboard[1][0].callback_data
if type_ == "pause":
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) not in ACTV_CALLS:
await cb.answer("Chat is not connected!", show_alert=True)
else:
await pytgcalls.pause_stream(chat_id)
await cb.answer("Music Paused!")
await cb.message.edit(
updated_stats(m_chat, qeue), reply_markup=r_ply("play")
)
elif type_ == "resume":
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) not in ACTV_CALLS:
await cb.answer("Chat is not connected!", show_alert=True)
else:
await pytgcalls.resume_stream(chat_id)
await cb.answer("Music Resumed!")
await cb.message.edit(
updated_stats(m_chat, qeue), reply_markup=r_ply("pause")
)
elif type_ == "playlist":
queue = que.get(cb.message.chat.id)
        if not queue:
            await cb.message.edit("Player is idle")
            return
temp = []
for t in queue:
temp.append(t)
now_playing = temp[0][0]
by = temp[0][1].mention(style="md")
msg = "**Now Playing** in {}".format(cb.message.chat.title)
msg += "\n- " + now_playing
msg += "\n- Req by " + by
temp.pop(0)
if temp:
msg += "\n\n"
msg += "**Queue**"
for song in temp:
name = song[0]
usr = song[1].mention(style="md")
msg += f"\n- {name}"
msg += f"\n- Req by {usr}\n"
await cb.message.edit(msg)
elif type_ == "resume":
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) not in ACTV_CALLS:
await cb.answer("Chat is not connected or already playng", show_alert=True)
else:
await pytgcalls.resume_stream(chat_id)
await cb.answer("Music Resumed!")
elif type_ == "puse":
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) not in ACTV_CALLS:
await cb.answer("Chat is not connected or already paused", show_alert=True)
else:
await pytgcalls.pause_stream(chat_id)
await cb.answer("Music Paused!")
elif type_ == "cls":
await cb.answer("Closed menu")
await cb.message.delete()
elif type_ == "menu":
stats = updated_stats(cb.message.chat, qeue)
await cb.answer("Menu opened")
marr = InlineKeyboardMarkup(
[
[
InlineKeyboardButton("⏹", "leave"),
InlineKeyboardButton("⏸", "puse"),
InlineKeyboardButton("▶️", "resume"),
InlineKeyboardButton("⏭", "skip"),
],
[
InlineKeyboardButton("📬Channel ", url=f"https://t.me/Vylanesu")
],
[InlineKeyboardButton("❌ Close", "cls")],
]
)
await cb.message.edit(stats, reply_markup=marr)
elif type_ == "skip":
if qeue:
qeue.pop(0)
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) not in ACTV_CALLS:
await cb.answer("Chat is not connected!", show_alert=True)
else:
queues.task_done(chat_id)
if queues.is_empty(chat_id):
await pytgcalls.leave_group_call(chat_id)
await cb.message.edit("- No More Playlist..\n- Leaving VC!")
else:
await pytgcalls.change_stream(
chat_id,
AudioPiped(
queues.get(chat_id)["file"],
),
)
await cb.answer("Skipped")
                await cb.message.edit(updated_stats(m_chat, qeue), reply_markup=r_ply(the_data))
await cb.message.reply_text(
f"- Skipped track\n- Now Playing **{qeue[0][0]}**"
)
else:
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) in ACTV_CALLS:
try:
queues.clear(chat_id)
except QueueEmpty:
pass
await pytgcalls.leave_group_call(chat_id)
await cb.message.edit("Successfully Left the Chat!")
else:
await cb.answer("Chat is not connected!", show_alert=True)
@Client.on_message(command("play") & other_filters)
async def play(_, message: Message):
global que
global useer
if message.chat.id in DISABLED_GROUPS:
return
lel = await message.reply("🔄 <b>Processing</b>")
administrators = await get_administrators(message.chat)
chid = message.chat.id
    try:
        user = await USER.get_me()
    except Exception:
        # fall back to a stub so the name/id lookups below still work
        user = type("User", (), {"first_name": "helper", "id": 0})()
    usar = user
    wew = usar.id
try:
# chatdetails = await USER.get_chat(chid)
await _.get_chat_member(chid, wew)
except:
for administrator in administrators:
if administrator == message.from_user.id:
if message.chat.title.startswith("Channel Music: "):
await lel.edit(
"<b>Remember to add helper to your channel</b>",
)
try:
invitelink = await _.export_chat_invite_link(chid)
if invitelink.startswith("https://t.me/+"):
invitelink = invitelink.replace(
"https://t.me/+", "https://t.me/joinchat/"
)
except:
await lel.edit(
"<b>Add me as admin of yor group first</b>",
)
return
try:
await USER.join_chat(invitelink)
await USER.send_message(
message.chat.id, "I joined this group for playing music in VC"
)
await lel.edit(
"<b>helper userbot joined your chat</b>",
)
except UserAlreadyParticipant:
pass
except Exception:
# print(e)
await lel.edit(
f"<b>🔴 Flood Wait Error 🔴 \nUser {user.first_name} couldn't join your group due to heavy requests for userbot! Make sure user is not banned in group."
"\n\nOr manually add assistant to your Group and try again</b>",
)
try:
await USER.get_chat(chid)
# lmoa = await client.get_chat_member(chid,wew)
except:
await lel.edit(
f"<i> {user.first_name} Userbot not in this chat, Ask admin to send /play command for first time or add {user.first_name} manually</i>"
)
return
text_links = None
await lel.edit("🔎 <b>Finding</b>")
if message.reply_to_message:
if message.reply_to_message.audio:
pass
entities = []
if message.entities:
        entities += message.entities
elif message.caption_entities:
entities += message.caption_entities
if message.reply_to_message:
        toxt = message.reply_to_message.text or message.reply_to_message.caption
if message.reply_to_message.entities:
entities = message.reply_to_message.entities + entities
elif message.reply_to_message.caption_entities:
entities = message.reply_to_message.entities + entities
else:
message.text or message.caption
urls = [entity for entity in entities if entity.type == "url"]
text_links = [entity for entity in entities if entity.type == "text_link"]
else:
urls = None
if text_links:
urls = True
user_id = message.from_user.id
user_name = message.from_user.first_name
rpk = "[" + user_name + "](tg://user?id=" + str(user_id) + ")"
audio = (
(message.reply_to_message.audio or message.reply_to_message.voice)
if message.reply_to_message
else None
)
if audio:
if round(audio.duration / 60) > DURATION_LIMIT:
await lel.edit(
f"❌ Videos longer than {DURATION_LIMIT} minute(s) aren't allowed to play!"
)
return
keyboard = InlineKeyboardMarkup(
[
[
                    InlineKeyboardButton("Channel 📬", url="https://t.me/Vylanesu"),
                    InlineKeyboardButton("Menu ⏯", callback_data="menu"),
],
[InlineKeyboardButton(text="❌ Close", callback_data="cls")],
]
)
file_name = get_file_name(audio)
title = file_name
thumb_name = "https://telegra.ph/file/f6086f8909fbfeb0844f2.png"
thumbnail = thumb_name
duration = round(audio.duration / 60)
views = "Locally added"
requested_by = message.from_user.first_name
await generate_cover(requested_by, title, views, duration, thumbnail)
file = await convert(
(await message.reply_to_message.download(file_name))
if not path.isfile(path.join("downloads", file_name))
else file_name
)
elif urls:
query = toxt
await lel.edit("🎵 <b>Processing</b>")
ydl_opts = {"format": "bestaudio/best"}
try:
results = YoutubeSearch(query, max_results=1).to_dict()
url = f"https://youtube.com{results[0]['url_suffix']}"
# print(results)
title = results[0]["title"][:40]
thumbnail = results[0]["thumbnails"][0]
thumb_name = f"thumb{title}.jpg"
thumb = requests.get(thumbnail, allow_redirects=True)
open(thumb_name, "wb").write(thumb.content)
            duration = results[0]["duration"]
            views = results[0]["views"]
except Exception as e:
            await lel.edit(
                "Song not found. Try another song, or check the spelling."
            )
print(str(e))
return
try:
secmul, dur, dur_arr = 1, 0, duration.split(":")
for i in range(len(dur_arr) - 1, -1, -1):
dur += int(dur_arr[i]) * secmul
secmul *= 60
if (dur / 60) > DURATION_LIMIT:
await lel.edit(
f"❌ Videos longer than {DURATION_LIMIT} minutes aren't allowed to play!"
)
return
except:
pass
dlurl = url
dlurl = dlurl.replace("youtube", "youtubepp")
keyboard = InlineKeyboardMarkup(
[
[
                    InlineKeyboardButton("Channel 📬", url="https://t.me/Vylanesu"),
                    InlineKeyboardButton("Menu ⏯", callback_data="menu"),
],
[
InlineKeyboardButton(text="🎬 YouTube", url=f"{url}"),
InlineKeyboardButton(text="Download 📥", url=f"{dlurl}"),
],
[InlineKeyboardButton(text="❌ Close", callback_data="cls")],
]
)
requested_by = message.from_user.first_name
await generate_cover(requested_by, title, views, duration, thumbnail)
        file = await get_audio(url)
else:
query = ""
for i in message.command[1:]:
query += " " + str(i)
print(query)
await lel.edit("🎵 **Processing**")
ydl_opts = {"format": "bestaudio/best"}
try:
results = YoutubeSearch(query, max_results=5).to_dict()
        except Exception:
            await lel.edit("Give me something to play")
            return
try:
toxxt = "**Select the song you want to play**\n\n"
j = 0
useer = user_name
emojilist = [
"1️⃣",
"2️⃣",
"3️⃣",
"4️⃣",
"5️⃣",
]
while j < 5:
toxxt += f"{emojilist[j]} <b>Title - [{results[j]['title']}](https://youtube.com{results[j]['url_suffix']})</b>\n"
toxxt += f" ╚ <b>Duration</b> - {results[j]['duration']}\n"
toxxt += f" ╚ <b>Views</b> - {results[j]['views']}\n"
toxxt += f" ╚ <b>Channel</b> - {results[j]['channel']}\n\n"
j += 1
koyboard = InlineKeyboardMarkup(
[
[
InlineKeyboardButton(
"1️⃣", callback_data=f"plll 0|{query}|{user_id}"
),
InlineKeyboardButton(
"2️⃣", callback_data=f"plll 1|{query}|{user_id}"
),
InlineKeyboardButton(
"3️⃣", callback_data=f"plll 2|{query}|{user_id}"
),
],
[
InlineKeyboardButton(
"4️⃣", callback_data=f"plll 3|{query}|{user_id}"
),
InlineKeyboardButton(
"5️⃣", callback_data=f"plll 4|{query}|{user_id}"
),
],
[InlineKeyboardButton(text="❌ Close", callback_data="cls")],
]
)
await lel.edit(toxxt, reply_markup=koyboard, disable_web_page_preview=True)
            return
        except Exception:
            await lel.edit("Not enough results to choose from. Starting direct play...")
# print(results)
try:
url = f"https://youtube.com{results[0]['url_suffix']}"
title = results[0]["title"][:40]
thumbnail = results[0]["thumbnails"][0]
thumb_name = f"thumb{title}.jpg"
thumb = requests.get(thumbnail, allow_redirects=True)
open(thumb_name, "wb").write(thumb.content)
            duration = results[0]["duration"]
            views = results[0]["views"]
except Exception as e:
            await lel.edit(
                "Song not found. Try another song, or check the spelling."
            )
print(str(e))
return
try:
secmul, dur, dur_arr = 1, 0, duration.split(":")
for i in range(len(dur_arr) - 1, -1, -1):
dur += int(dur_arr[i]) * secmul
secmul *= 60
if (dur / 60) > DURATION_LIMIT:
await lel.edit(
f"❌ Videos longer than {DURATION_LIMIT} minutes aren't allowed to play!"
)
return
except:
pass
dlurl = url
dlurl = dlurl.replace("youtube", "youtubepp")
keyboard = InlineKeyboardMarkup(
[
[
                    InlineKeyboardButton("Channel 📬", url="https://t.me/Vylanesu"),
                    InlineKeyboardButton("Menu ⏯", callback_data="menu"),
],
[
InlineKeyboardButton(text="🎬 YouTube", url=f"{url}"),
InlineKeyboardButton(text="Download 📥", url=f"{dlurl}"),
],
[InlineKeyboardButton(text="❌ Close", callback_data="cls")],
]
)
requested_by = message.from_user.first_name
await generate_cover(requested_by, title, views, duration, thumbnail)
        file = await get_audio(url)
chat_id = get_chat_id(message.chat)
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) in ACTV_CALLS:
position = await queues.put(chat_id, file=file)
qeue = que.get(chat_id)
s_name = title
r_by = message.from_user
loc = file
appendable = [s_name, r_by, loc]
qeue.append(appendable)
await message.reply_photo(
photo="final.png",
caption=f"#⃣ Your requested song <b>queued</b> at position {position}!",
reply_markup=keyboard,
)
os.remove("final.png")
return await lel.delete()
else:
chat_id = get_chat_id(message.chat)
que[chat_id] = []
qeue = que.get(chat_id)
s_name = title
r_by = message.from_user
loc = file
appendable = [s_name, r_by, loc]
qeue.append(appendable)
try:
await pytgcalls.join_group_call(
chat_id,
AudioPiped(
file,
),
stream_type=StreamType().local_stream,
)
        except Exception:
            await message.reply("Group call is not connected or I can't join it")
            return
await message.reply_photo(
photo="final.png",
reply_markup=keyboard,
            caption="▶️ <b>Playing</b> the song requested by {} via YouTube Music 😎".format(
                message.from_user.mention()
            ),
)
os.remove("final.png")
return await lel.delete()
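The colon-separated duration parse above ("M:SS" / "H:MM:SS") is repeated in several handlers; it could be factored into one helper. A minimal sketch (the `parse_duration_seconds` name is ours, not from this codebase):

```python
def parse_duration_seconds(duration: str) -> int:
    """Convert a YouTube-style "H:MM:SS" or "M:SS" string to total seconds."""
    secmul, total = 1, 0
    for part in reversed(duration.split(":")):
        total += int(part) * secmul
        secmul *= 60
    return total
```

The handlers could then compare `parse_duration_seconds(duration) / 60` against `DURATION_LIMIT` in a single place instead of re-running the loop.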
@Client.on_message(filters.command("ytplay") & filters.group & ~filters.edited)
async def ytplay(_, message: Message):
global que
if message.chat.id in DISABLED_GROUPS:
return
lel = await message.reply("🔄 <b>Processing</b>")
administrators = await get_administrators(message.chat)
chid = message.chat.id
try:
user = await USER.get_me()
    except Exception:
        # get_me() can fail if the assistant session is unavailable; fall back
        # to a placeholder identity so user.first_name and user.id stay defined.
        from types import SimpleNamespace
        user = SimpleNamespace(first_name="helper", id=0)
    usar = user
    wew = usar.id
try:
# chatdetails = await USER.get_chat(chid)
await _.get_chat_member(chid, wew)
except:
for administrator in administrators:
if administrator == message.from_user.id:
if message.chat.title.startswith("Channel Music: "):
await lel.edit(
"<b>Remember to add helper to your channel</b>",
)
try:
invitelink = await _.export_chat_invite_link(chid)
if invitelink.startswith("https://t.me/+"):
invitelink = invitelink.replace(
"https://t.me/+", "https://t.me/joinchat/"
)
                except Exception:
                    await lel.edit(
                        "<b>Add me as an admin of your group first</b>",
                    )
                    return
try:
await USER.join_chat(invitelink)
await USER.send_message(
message.chat.id, "I joined this group for playing music in VC"
)
await lel.edit(
"<b>helper userbot joined your chat</b>",
)
except UserAlreadyParticipant:
pass
except Exception:
# print(e)
                    await lel.edit(
                        f"<b>🔴 Flood Wait Error 🔴 \nUser {user.first_name} couldn't join your group due to heavy requests for the userbot! Make sure the user is not banned in the group."
                        "\n\nOr manually add the assistant to your group and try again</b>",
                    )
try:
await USER.get_chat(chid)
# lmoa = await client.get_chat_member(chid,wew)
except:
        await lel.edit(
            f"<i>{user.first_name} userbot is not in this chat. Ask an admin to send /play for the first time, or add {user.first_name} manually</i>"
        )
return
await lel.edit("🔎 <b>Finding</b>")
query = ""
for i in message.command[1:]:
query += " " + str(i)
print(query)
await lel.edit("🎵 <b>Processing</b>")
ydl_opts = {"format": "bestaudio/best"}
try:
results = YoutubeSearch(query, max_results=1).to_dict()
url = f"https://youtube.com{results[0]['url_suffix']}"
# print(results)
title = results[0]["title"][:40]
thumbnail = results[0]["thumbnails"][0]
thumb_name = f"thumb{title}.jpg"
thumb = requests.get(thumbnail, allow_redirects=True)
open(thumb_name, "wb").write(thumb.content)
        duration = results[0]["duration"]
        views = results[0]["views"]
except Exception as e:
        await lel.edit("Song not found. Try another song, or check the spelling.")
print(str(e))
return
try:
secmul, dur, dur_arr = 1, 0, duration.split(":")
for i in range(len(dur_arr) - 1, -1, -1):
dur += int(dur_arr[i]) * secmul
secmul *= 60
if (dur / 60) > DURATION_LIMIT:
await lel.edit(
f"❌ Videos longer than {DURATION_LIMIT} minutes aren't allowed to play!"
)
return
except:
pass
dlurl = url
dlurl = dlurl.replace("youtube", "youtubepp")
keyboard = InlineKeyboardMarkup(
[
[
                InlineKeyboardButton("Channel 📬", url="https://t.me/Vylanesu"),
                InlineKeyboardButton("Menu ⏯", callback_data="menu"),
],
[
InlineKeyboardButton(text="🎬 YouTube", url=f"{url}"),
InlineKeyboardButton(text="Download 📥", url=f"{dlurl}"),
],
[InlineKeyboardButton(text="❌ Close", callback_data="cls")],
]
)
requested_by = message.from_user.first_name
await generate_cover(requested_by, title, views, duration, thumbnail)
    file = await get_audio(url)
chat_id = get_chat_id(message.chat)
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) in ACTV_CALLS:
position = await queues.put(chat_id, file=file)
qeue = que.get(chat_id)
s_name = title
r_by = message.from_user
loc = file
appendable = [s_name, r_by, loc]
qeue.append(appendable)
await message.reply_photo(
photo="final.png",
caption=f"#⃣ Your requested song <b>queued</b> at position {position}!",
reply_markup=keyboard,
)
os.remove("final.png")
return await lel.delete()
else:
chat_id = get_chat_id(message.chat)
que[chat_id] = []
qeue = que.get(chat_id)
s_name = title
r_by = message.from_user
loc = file
appendable = [s_name, r_by, loc]
qeue.append(appendable)
try:
await pytgcalls.join_group_call(
chat_id,
AudioPiped(
file,
),
stream_type=StreamType().local_stream,
)
        except Exception:
            await message.reply("Group call is not connected or I can't join it")
            return
await message.reply_photo(
photo="final.png",
reply_markup=keyboard,
            caption="▶️ <b>Playing</b> the song requested by {} via YouTube Music 😎".format(
                message.from_user.mention()
            ),
)
os.remove("final.png")
return await lel.delete()
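The song-selection buttons pack their state into `callback_data` as `plll <index>|<query>|<user_id>`, and the `plll` callback below splits it back apart. A standalone sketch of that round trip (helper names are illustrative); note the scheme breaks if the query itself contains a `|`:

```python
def pack_choice(index: int, query: str, user_id: int) -> str:
    return f"plll {index}|{query}|{user_id}"

def unpack_choice(data: str):
    payload = data.strip().split(None, 1)[1]  # drop the "plll" prefix
    index, query, user_id = payload.split("|")
    return int(index), query, int(user_id)
```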
@Client.on_callback_query(filters.regex(pattern=r"plll"))
async def lol_cb(b, cb):
global que
cbd = cb.data.strip()
chat_id = cb.message.chat.id
typed_ = cbd.split(None, 1)[1]
# useer_id = cb.message.reply_to_message.from_user.id
try:
x, query, useer_id = typed_.split("|")
except:
await cb.message.edit("Song Not Found")
return
useer_id = int(useer_id)
if cb.from_user.id != useer_id:
        await cb.answer(
            "You aren't the person who requested this song!", show_alert=True
        )
return
await cb.message.edit("Hang On... Player Starting")
x = int(x)
try:
useer_name = cb.message.reply_to_message.from_user.first_name
except:
useer_name = cb.message.from_user.first_name
results = YoutubeSearch(query, max_results=5).to_dict()
resultss = results[x]["url_suffix"]
title = results[x]["title"][:40]
thumbnail = results[x]["thumbnails"][0]
duration = results[x]["duration"]
views = results[x]["views"]
url = f"https://youtube.com{resultss}"
try:
secmul, dur, dur_arr = 1, 0, duration.split(":")
for i in range(len(dur_arr) - 1, -1, -1):
dur += int(dur_arr[i]) * secmul
secmul *= 60
if (dur / 60) > DURATION_LIMIT:
            await cb.message.edit(
                f"Music longer than {DURATION_LIMIT} minutes isn't allowed to play"
            )
return
except:
pass
try:
thumb_name = f"thumb{title}.jpg"
thumb = requests.get(thumbnail, allow_redirects=True)
open(thumb_name, "wb").write(thumb.content)
except Exception as e:
print(e)
return
dlurl = url
dlurl = dlurl.replace("youtube", "youtubepp")
keyboard = InlineKeyboardMarkup(
[
[
                InlineKeyboardButton("Channel 📬", url="https://t.me/Vylanesu"),
                InlineKeyboardButton("Menu ⏯", callback_data="menu"),
],
[
InlineKeyboardButton(text="🎬 YouTube", url=f"{url}"),
InlineKeyboardButton(text="Download 📥", url=f"{dlurl}"),
],
[InlineKeyboardButton(text="❌ Close", callback_data="cls")],
]
)
requested_by = useer_name
await generate_cover(requested_by, title, views, duration, thumbnail)
    file = await get_audio(url)
for x in pytgcalls.active_calls:
ACTV_CALLS.append(int(x.chat_id))
if int(chat_id) in ACTV_CALLS:
position = await queues.put(chat_id, file=file)
qeue = que.get(chat_id)
s_name = title
try:
r_by = cb.message.reply_to_message.from_user
except:
r_by = cb.message.from_user
loc = file
appendable = [s_name, r_by, loc]
qeue.append(appendable)
await cb.message.delete()
await b.send_photo(
chat_id,
photo="final.png",
caption=f"#⃣ Song requested by {r_by.mention()} <b>queued</b> at position {position}!",
reply_markup=keyboard,
)
os.remove("final.png")
else:
que[chat_id] = []
qeue = que.get(chat_id)
s_name = title
try:
r_by = cb.message.reply_to_message.from_user
except:
r_by = cb.message.from_user
loc = file
appendable = [s_name, r_by, loc]
qeue.append(appendable)
await pytgcalls.join_group_call(
chat_id,
AudioPiped(
file,
),
stream_type=StreamType().local_stream,
)
await cb.message.delete()
await b.send_photo(
chat_id,
photo="final.png",
reply_markup=keyboard,
caption=f"▶️ <b>Playing</b> here the song requested by {r_by.mention()} via Youtube Music 😎",
)
os.remove("final.png")
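Both handlers and the callback keep per-chat queues of positional `[song_name, requested_by, location]` lists. The same bookkeeping, reduced to a standalone sketch (function name and the explicit dict argument are ours):

```python
def enqueue(que, chat_id, song_name, requested_by, location):
    """Append a queue entry for this chat and return its 1-based position."""
    entries = que.setdefault(chat_id, [])
    entries.append([song_name, requested_by, location])
    return len(entries)
```

Using `setdefault` avoids the `que[chat_id] = []` / `que.get(chat_id)` dance repeated above.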
# Source: taiwanpeaks/routes/migrations/0003_auto_20201018_0743.py from bhomnick/taiwanpeaks (MIT)
# Generated by Django 3.1.1 on 2020-10-18 07:43
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('routes', '0002_auto_20201018_0727'),
]
operations = [
migrations.AlterField(
model_name='cabin',
name='latitude',
field=models.DecimalField(decimal_places=5, max_digits=8),
),
migrations.AlterField(
model_name='cabin',
name='longitude',
field=models.DecimalField(decimal_places=5, max_digits=8),
),
]
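`DecimalField(decimal_places=5, max_digits=8)` leaves three integer digits, which covers latitude (±90) and longitude (±180) at roughly metre-level precision. A quick way to check whether a raw value fits such a column (a sketch using the stdlib `decimal` module, not Django's own validation):

```python
from decimal import Decimal

def fits_decimal_field(value: str, max_digits: int = 8, decimal_places: int = 5) -> bool:
    """True if the value fits a column with the given digit limits."""
    sign, digits, exponent = Decimal(value).as_tuple()
    return len(digits) <= max_digits and -exponent <= decimal_places
```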
# Source: src/game/processors/script.py from ShoaibSyed1/project-pokemon (MIT)
from esper import Processor
from game.components import ScriptComponent
class ScriptProcessor(Processor):
def __init__(self):
pass
def process(self, delta):
script_list = list(self.world.get_component(ScriptComponent))
for ent, script_comp in script_list:
if not script_comp.started:
script_comp.script.entity = ent
script_comp.script.world = self.world
script_comp.script.start()
script_comp.started = True
script_comp.script.update(delta)
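`ScriptProcessor` runs each script's `start()` exactly once, then `update(delta)` every tick. The control flow can be exercised without the game package; the stub classes below are stand-ins, not names from the repo:

```python
class StubScript:
    def __init__(self):
        self.starts, self.updates = 0, 0
        self.entity = None
        self.world = None

    def start(self):
        self.starts += 1

    def update(self, delta):
        self.updates += 1

class StubComponent:
    def __init__(self, script):
        self.script = script
        self.started = False

def process(components, delta):
    # Same control flow as ScriptProcessor.process.
    for ent, comp in components:
        if not comp.started:
            comp.script.entity = ent
            comp.script.start()
            comp.started = True
        comp.script.update(delta)

comp = StubComponent(StubScript())
process([(1, comp)], 0.016)
process([(1, comp)], 0.016)
```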
# Source: bluebottle/initiatives/migrations/0011_auto_20190522_0931.py from terrameijar/bluebottle (BSD-3-Clause)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.15 on 2019-05-22 07:31
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('initiatives', '0010_auto_20190521_0954'),
]
operations = [
migrations.RemoveField(
model_name='initiativeplatformsettings',
name='facebook_at_work_url',
),
migrations.RemoveField(
model_name='initiativeplatformsettings',
name='share_options',
),
]
# Source: Dataset/Leetcode/train/20/644.py from kkcookies99/UAST (MIT)
class Solution:
    def XXX(self, s: str) -> bool:
        # Valid-parentheses check: push openers, pop when the matching closer arrives.
        self.stack = []
        dic = {'(': ')', '[': ']', '{': '}'}
for i in s:
if i in ('(', '[', '{'):
self.stack.append(i)
else:
if self.stack and i == dic[self.stack[-1]]:
self.stack.pop()
else:
return False
return not self.stack
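`XXX` is the dataset's anonymized method name; the path suggests LeetCode problem 20 (Valid Parentheses), whose official signature is likely `isValid`. The same stack algorithm, restated standalone for a quick check:

```python
def is_balanced(s: str) -> bool:
    """Return True if every bracket in s is closed in the right order."""
    pairs = {'(': ')', '[': ']', '{': '}'}
    stack = []
    for ch in s:
        if ch in pairs:
            stack.append(ch)
        elif not stack or ch != pairs[stack.pop()]:
            return False
    return not stack
```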
# Source: sparksampling/tests/test_evaluation.py from Wh1isper/pyspark-sampling (Apache-2.0)
from sparksampling.app import evaluation_app
from sparksampling.tests.base_test_module import BaseTestModule
class BaseTestEvaluationModule(BaseTestModule):
def get_app(self):
return evaluation_app()
def test_json_decode_error_code(self):
super(BaseTestEvaluationModule, self).test_json_decode_error_code()
class TestBasicStatistics(BaseTestEvaluationModule):
test_url = '/v1/evaluation/statistics/'
def test_basic_statistics(self):
response = self._post_data_from_file('statistics-job.json')
self._check_code(response, 0, 'Basic Statistics Test By Job')
def test_basic_statistics_by_path(self):
response = self._post_data_from_file('statistics-path.json')
self._check_code(response, 0, 'Basic Statistics Test By Path')
class TestEvaluation(BaseTestEvaluationModule):
test_url = '/v1/evaluation/job/'
def test_compare_evaluation(self):
response = self._post_data_from_file('evaluation-compare.json')
self._check_code(response, 0, 'Compare Evaluation Test')
def test_kmeans_evaluation(self):
response = self._post_data_from_file('evaluation-kmeans.json')
self._check_code(response, 0, 'Kmeans Evaluation Test')
# Source: pyvivintsky/vivint_panel.py from 7ooL/pyvivintsky (MIT)
import asyncio
from homeauto.api_vivint.pyvivintsky.vivint_device import VivintDevice
from homeauto.api_vivint.pyvivintsky.vivint_api import VivintAPI
from homeauto.api_vivint.pyvivintsky.vivint_wireless_sensor import VivintWirelessSensor
from homeauto.api_vivint.pyvivintsky.vivint_door_lock import VivintDoorLock
from homeauto.api_vivint.pyvivintsky.vivint_unknown_device import VivintUnknownDevice
from homeauto.house import register_security_event
import logging
# This retrieves a Python logging instance (or creates it)
logger = logging.getLogger(__name__)
class VivintPanel(VivintDevice):
    """
    Represents the main Vivint panel device.

    Arm states 1 and 2 come from the panels.
    """
ARM_STATES = {0: "disarmed", 1: "armed_away", 2: "armed_stay", 3: "armed_stay", 4: "armed_away"}
def __init__(self, vivintapi: VivintAPI, descriptor: dict, panel: dict):
self.__vivintapi: VivintAPI = vivintapi
self.__descriptor = descriptor
self.__panel = panel
self.__child_devices = self.__init_devices()
def __init_devices(self):
"""
Initialize the devices
"""
devices = {}
for device in self.__panel[u"system"][u"par"][0][u"d"]:
devices[str(device[u"_id"])] = self.get_device_class(device[u"t"])(
device, self
)
return devices
def id(self):
return str(self.__panel[u"system"][u"panid"])
def get_armed_state(self):
"""Return panels armed state."""
return self.ARM_STATES[self.__descriptor[u"par"][0][u"s"]]
def street(self):
"""Return the panels street address."""
return self.__panel[u"system"][u"add"]
def zip_code(self):
"""Return the panels zip code."""
return self.__panel[u"system"][u"poc"]
def city(self):
"""Return the panels city."""
return self.__panel[u"system"][u"cit"]
def climate_state(self):
"""Return the climate state"""
return self.__panel[u"system"][u"csce"]
async def poll_devices(self):
"""
Poll all devices attached to this panel.
"""
self.__panel = await self.__vivintapi.get_system_info(self.id())
def get_devices(self):
"""
Returns the current list of devices attached to the panel.
"""
return self.__child_devices
def get_device(self, id):
return self.__child_devices[id]
def update_device(self, id, updates):
self.__child_devices[id].update_device(updates)
def handle_message(self, message):
if u"d" in message[u"da"].keys():
for msg_device in message[u"da"][u"d"]:
self.update_device(str(msg_device[u"_id"]), msg_device)
def handle_armed_message(self, message):
logger.debug(message[u"da"][u"seca"][u"n"]+" set system "+self.ARM_STATES[message[u"da"][u"seca"][u"s"]])
register_security_event(message[u"da"][u"seca"][u"n"],self.ARM_STATES[message[u"da"][u"seca"][u"s"]])
def handle_disarmed_message(self, message):
logger.debug(message[u"da"][u"secd"][u"n"]+" set system "+self.ARM_STATES[message[u"da"][u"secd"][u"s"]])
register_security_event(message[u"da"][u"secd"][u"n"],self.ARM_STATES[message[u"da"][u"secd"][u"s"]])
@staticmethod
def get_device_class(type_string):
mapping = {
VivintDevice.DEVICE_TYPE_WIRELESS_SENSOR: VivintWirelessSensor,
VivintDevice.DEVICE_TYPE_DOOR_LOCK: VivintDoorLock
}
return mapping.get(type_string, VivintUnknownDevice)
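`handle_message` fans incoming updates out to child devices keyed by the stringified `_id`. The dispatch shape, isolated with a stub device (all names below are stand-ins, not the pyvivintsky API):

```python
ARM_STATES = {0: "disarmed", 1: "armed_away", 2: "armed_stay", 3: "armed_stay", 4: "armed_away"}

class StubDevice:
    def __init__(self):
        self.updates = []

    def update_device(self, updates):
        self.updates.append(updates)

devices = {"42": StubDevice()}

def handle_message(message):
    # Mirrors VivintPanel.handle_message: route each update by device id.
    if "d" in message["da"].keys():
        for msg_device in message["da"]["d"]:
            devices[str(msg_device["_id"])].update_device(msg_device)

handle_message({"da": {"d": [{"_id": 42, "s": 1}]}})
```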
# Source: test_turn.py from samdrivendevelopment/matching_monkey (BSD-2-Clause)
from card_matching import CardMatchingGame
class BetterTestUI(object):
def read(self):
fake_user_input = self.input_list[self.index]
self.index += 1
return fake_user_input
def write(self, text):
pass
def make_better_test_game(input_list):
state = {
'board': ['_', '_', '_', '_'],
'len_board': ['0', '1', '2', '3'],
'key': ['a', 'a', 'b', 'b'],
'matches': 0,
}
game = CardMatchingGame()
game.state = state
game.ui = BetterTestUI()
game.ui.index = 0
game.ui.input_list = input_list
return game
def test_should_st_quit():
game = make_better_test_game(['q'])
result = game.turn()
return result == True
def test_should_st_not_quit():
game = make_better_test_game(['1', '2'])
result = game.turn()
return result == False
def test_should_nd_quit():
game = make_better_test_game(['1', 'q'])
result = game.turn()
return result == True
def test_should_nd_not_quit():
game = make_better_test_game(['1', '2'])
result = game.turn()
return result == False
def test_is_st_valid():
game = make_better_test_game(['1', '2'])
result = game.turn()
same_board = game.state['board'] == ['_', '_', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_is_st_invalid():
game = make_better_test_game(['t'])
result = game.turn()
same_board = game.state['board'] == ['_', '_', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_is_nd_valid():
game = make_better_test_game(['1', '2'])
result = game.turn()
same_board = game.state['board'] == ['_', '_', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_is_nd_invalid():
    game = make_better_test_game(['1', 't'])
    result = game.turn()
    same_board = game.state['board'] == ['_', '_', '_', '_']
    same_matches = game.state['matches'] == 0
    return (result == False) and same_board and same_matches
def test_is_st_overlapping():
game = make_better_test_game(['1'])
game.state['board'][1] = 'a'
result = game.turn()
same_board = game.state['board'] == ['_', 'a', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_is_st_not_overlapping():
game = make_better_test_game(['1', '2'])
result = game.turn()
same_board = game.state['board'] == ['_', '_', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_is_nd_overlapping():
game = make_better_test_game(['1', '2'])
game.state['board'][2] = 'b'
result = game.turn()
same_board = game.state['board'] == ['_', '_', 'b', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_is_nd_not_overlapping():
game = make_better_test_game(['1', '2'])
result = game.turn()
same_board = game.state['board'] == ['_', '_', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_reset():
game = make_better_test_game(['1', '2'])
result = game.turn()
same_board = game.state['board'] == ['_', '_', '_', '_']
same_matches = game.state['matches'] == 0
return (result == False) and same_board and same_matches
def test_match():
game = make_better_test_game(['0', '1'])
result = game.turn()
board_change = game.state['board'] == ['a', 'a', '_', '_']
matches_change = game.state['matches'] == 1
return (result == False) and board_change and matches_change
def test_has_won():
game = make_better_test_game(['0', '1'])
game.state['board'] = ['_', '_', 'b', 'b']
result = game.turn()
return result == True
def test_has_not_won():
game = make_better_test_game(['1', '2'])
result = game.turn()
return result == False
def main():
    print('start test')
    if not test_should_st_quit():
        print('turn did not detect the first quit.')
    if not test_should_st_not_quit():
        print('turn detected a first quit when there was none.')
    if not test_should_nd_quit():
        print('turn did not detect the second quit.')
    if not test_should_nd_not_quit():
        print('turn detected a second quit when there was none.')
    if not test_is_st_valid():
        print('turn detected a first invalid char when there was none.')
    if not test_is_st_invalid():
        print('turn did not detect the first invalid char.')
    if not test_is_nd_valid():
        print('turn detected a second invalid char when there was none.')
    if not test_is_nd_invalid():
        print('turn did not detect the second invalid char.')
    if not test_is_st_overlapping():
        print('turn did not detect the first overlap.')
    if not test_is_st_not_overlapping():
        print('turn detected a first overlap when there was none.')
    if not test_is_nd_overlapping():
        print('turn did not detect the second overlap.')
    if not test_is_nd_not_overlapping():
        print('turn detected a second overlap when there was none.')
    if not test_reset():
        print('turn did not detect the reset.')
    if not test_match():
        print('turn did not detect the match.')
    if not test_has_won():
        print('turn did not detect the win.')
    if not test_has_not_won():
        print('turn detected a win when there was none.')
    print('end test')
if __name__ == '__main__':
main()
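The `main()` driver above repeats the same `if not test(): print message` block sixteen times. A table-driven variant collapses that into one loop; the sketch below uses hypothetical stand-in tests purely for illustration:

```python
# A table-driven variant of main() above: pair each test with its failure
# message and loop, instead of repeating the if/print block per test.
def run_tests(cases):
    failures = []
    for test, message in cases:
        if not test():
            failures.append(message)
    return failures

# Hypothetical stand-in tests for illustration:
cases = [
    (lambda: True, 'always-passing check failed.'),
    (lambda: False, 'always-failing check failed.'),
]
print(run_tests(cases))  # ['always-failing check failed.']
```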
| 30.452128 | 72 | 0.626026 | 797 | 5,725 | 4.18946 | 0.099122 | 0.06469 | 0.071279 | 0.091644 | 0.784966 | 0.720276 | 0.628032 | 0.530997 | 0.465708 | 0.44714 | 0 | 0.009923 | 0.225502 | 5,725 | 187 | 73 | 30.614973 | 0.743121 | 0 | 0 | 0.40411 | 0 | 0 | 0.164424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.006849 | 0.006849 | null | null | 0.123288 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2acb7080e09962a3a0d53b43c1e67bb17f6b930f | 481 | py | Python | tests/test_predict_pipeline.py | KatyKasilina/StumbleUpon-Evergreen-DataMining | de8824bb85f00aef5b9ad57690191dbc984b9384 | [
"MIT"
] | null | null | null | tests/test_predict_pipeline.py | KatyKasilina/StumbleUpon-Evergreen-DataMining | de8824bb85f00aef5b9ad57690191dbc984b9384 | [
"MIT"
] | null | null | null | tests/test_predict_pipeline.py | KatyKasilina/StumbleUpon-Evergreen-DataMining | de8824bb85f00aef5b9ad57690191dbc984b9384 | [
"MIT"
] | null | null | null | import os
from src.entities import PredictingPipelineParams
from src.predict_pipeline import predict_pipeline
def test_eval_pipeline(predict_pipeline_params: PredictingPipelineParams,
output_predictions_path: str,
train_synthetic):
predictions = predict_pipeline(predict_pipeline_params)
assert os.path.exists(output_predictions_path)
assert 200 == predictions.shape[0]
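The three assertions above check that the output file exists, that the row count matches, and that the predictions are strictly binary. The same last two invariants on a plain list (hypothetical data, no pandas or pytest required):

```python
# Hypothetical prediction vector standing in for predictions.iloc[:, 0].
predictions = [0, 1, 1, 0, 1]

assert len(predictions) == 5           # expected number of rows
assert {0, 1} == set(predictions)      # only the two class labels appear
```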
assert {0, 1} == set(predictions.iloc[:, 0]) | 32.066667 | 73 | 0.72973 | 53 | 481 | 6.358491 | 0.490566 | 0.222552 | 0.136499 | 0.172107 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018229 | 0.201663 | 481 | 15 | 74 | 32.066667 | 0.859375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2accc7f990d30860f5517ab54f9380af40bb94c1 | 13,490 | py | Python | pysnmp-with-texts/ADTRAN-ATLAS-MODULE-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 8 | 2019-05-09T17:04:00.000Z | 2021-06-09T06:50:51.000Z | pysnmp-with-texts/ADTRAN-ATLAS-MODULE-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 4 | 2019-05-31T16:42:59.000Z | 2020-01-31T21:57:17.000Z | pysnmp-with-texts/ADTRAN-ATLAS-MODULE-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module ADTRAN-ATLAS-MODULE-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/ADTRAN-ATLAS-MODULE-MIB
# Produced by pysmi-0.3.4 at Wed May 1 11:14:30 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
adATLASUnitFPStatus, adATLASUnitSlotAddress, adATLASUnitPortAddress = mibBuilder.importSymbols("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus", "adATLASUnitSlotAddress", "adATLASUnitPortAddress")
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, ConstraintsIntersection, ValueSizeConstraint, ValueRangeConstraint, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "ConstraintsIntersection", "ValueSizeConstraint", "ValueRangeConstraint", "SingleValueConstraint")
ifIndex, = mibBuilder.importSymbols("IF-MIB", "ifIndex")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
NotificationType, enterprises, Counter64, Gauge32, NotificationType, Bits, TimeTicks, ObjectIdentity, ModuleIdentity, IpAddress, Integer32, Unsigned32, MibIdentifier, iso, MibScalar, MibTable, MibTableRow, MibTableColumn, Counter32 = mibBuilder.importSymbols("SNMPv2-SMI", "NotificationType", "enterprises", "Counter64", "Gauge32", "NotificationType", "Bits", "TimeTicks", "ObjectIdentity", "ModuleIdentity", "IpAddress", "Integer32", "Unsigned32", "MibIdentifier", "iso", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Counter32")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
adtran = MibIdentifier((1, 3, 6, 1, 4, 1, 664))
adMgmt = MibIdentifier((1, 3, 6, 1, 4, 1, 664, 2))
adATLASmg = MibIdentifier((1, 3, 6, 1, 4, 1, 664, 2, 154))
adGenATLASmg = MibIdentifier((1, 3, 6, 1, 4, 1, 664, 2, 154, 1))
adATLASModulemg = MibIdentifier((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6))
adATLASModuleInfoNumber = MibScalar((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoNumber.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoNumber.setDescription('This value indicates the number of entries found in the Atlas Module Information Table and corresponds to the number of physical slots in the particular Atlas product.')
adATLASModuleInfoTable = MibTable((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2), )
if mibBuilder.loadTexts: adATLASModuleInfoTable.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoTable.setDescription('The Atlas Module Information Table')
adATLASModuleInfoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1), ).setIndexNames((0, "ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoIndex"))
if mibBuilder.loadTexts: adATLASModuleInfoEntry.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoEntry.setDescription('An entry in the Atlas Module Information Table')
adATLASModuleInfoIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoIndex.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoIndex.setDescription("An index into the Atlas Module Information Table. This variable corresponds to the module's slot number.")
adATLASModuleInfoNumIfs = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoNumIfs.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoNumIfs.setDescription('The number of physical interfaces (i.e. ports) on the module.')
adATLASModuleInfoNumRsrcs = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoNumRsrcs.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoNumRsrcs.setDescription('The total number of resources (e.g. number of bonding sessions in an IMUX module) on the module.')
adATLASModuleInfoOID = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 4), ObjectIdentifier()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoOID.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoOID.setDescription('The OID that uniquely identifies the specific module.')
adATLASModuleInfoPartNum = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 5), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoPartNum.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoPartNum.setDescription('The ADTRAN part number of the module.')
adATLASModuleInfoSerialNum = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 6), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoSerialNum.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoSerialNum.setDescription('The serial number of the module.')
adATLASModuleInfoHardwareRev = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 7), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoHardwareRev.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoHardwareRev.setDescription('The hardware revision of the module.')
adATLASModuleInfoFirmwareRev = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 8), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoFirmwareRev.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoFirmwareRev.setDescription('The firmware revision of the module.')
adATLASModuleInfoState = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("online", 1), ("offline", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: adATLASModuleInfoState.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoState.setDescription('The operational state of the module. It can be set to either Online or Offline. When a module is taken Offline, it is no longer considered to be an available resource. This setting may be useful in system troubleshooting.')
adATLASModuleInfoStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 10), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9))).clone(namedValues=NamedValues(("online", 1), ("offline", 2), ("noResponse", 3), ("unResponsiveOffline", 4), ("notReady", 5), ("restarting", 6), ("notSupported", 7), ("standby", 8), ("empty", 9)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoStatus.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoStatus.setDescription('The hardware status of the module.')
adATLASModuleInfoFPStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 664, 2, 154, 1, 6, 2, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: adATLASModuleInfoFPStatus.setStatus('mandatory')
if mibBuilder.loadTexts: adATLASModuleInfoFPStatus.setDescription("A bit-encoded variable that indicates the front panel status of the module. It is encoded as follows: OFF 0x00 OK 0x01 ONLINE 0x02 TESTING 0x04 FLASH DOWNLOAD 0x08 ERROR 0x10 ALARM 0x20 STANDBY 0x40 WARN 0x80 Note: Multiple bits may be set concurrently, based on the module's current state.")
adATLASModuleOffline = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400600)).setObjects(("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoIndex"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASModuleOffline.setDescription('This trap indicates a module is offline.')
adATLASModuleOnline = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400601)).setObjects(("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoIndex"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASModuleOnline.setDescription('This trap indicates a module is online.')
adATLASCbuBackupAttempt = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400602)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuBackupAttempt.setDescription('This trap indicates an endpoint has detected a failure and is attempting a backup call.')
adATLASCbuBackupAttemptFailed = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400603)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuBackupAttemptFailed.setDescription('This trap indicates a backup call has failed.')
adATLASCbuBackupActive = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400604)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuBackupActive.setDescription('This trap indicates a backup call has connected.')
adATLASCbuPrimaryRestored = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400605)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuPrimaryRestored.setDescription('This trap indicates an endpoint has come out of backup.')
adATLASCbuTestCallOriginated = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400606)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuTestCallOriginated.setDescription('This trap indicates an endpoint has originated a test call.')
adATLASCbuTestCallConnected = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400607)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuTestCallConnected.setDescription("This trap indicates an endpoint's test call has connected.")
adATLASCbuTestCallPassed = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400608)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuTestCallPassed.setDescription("This trap indicates an endpoint's test call has passed.")
adATLASCbuTestCallFailed = NotificationType((1, 3, 6, 1, 4, 1, 664, 2, 154) + (0,15400609)).setObjects(("IF-MIB", "ifIndex"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitSlotAddress"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitPortAddress"), ("ADTRAN-ATLAS-MODULE-MIB", "adATLASModuleInfoFPStatus"), ("ADTRAN-ATLAS-UNIT-MIB", "adATLASUnitFPStatus"))
if mibBuilder.loadTexts: adATLASCbuTestCallFailed.setDescription("This trap indicates an endpoint's test call has failed.")
mibBuilder.exportSymbols("ADTRAN-ATLAS-MODULE-MIB", adATLASModulemg=adATLASModulemg, adATLASCbuPrimaryRestored=adATLASCbuPrimaryRestored, adATLASModuleInfoNumRsrcs=adATLASModuleInfoNumRsrcs, adATLASModuleInfoState=adATLASModuleInfoState, adATLASCbuTestCallPassed=adATLASCbuTestCallPassed, adATLASModuleInfoNumber=adATLASModuleInfoNumber, adATLASModuleInfoStatus=adATLASModuleInfoStatus, adATLASCbuBackupAttemptFailed=adATLASCbuBackupAttemptFailed, adATLASModuleInfoPartNum=adATLASModuleInfoPartNum, adATLASModuleInfoSerialNum=adATLASModuleInfoSerialNum, adATLASModuleInfoOID=adATLASModuleInfoOID, adATLASCbuBackupAttempt=adATLASCbuBackupAttempt, adATLASModuleInfoTable=adATLASModuleInfoTable, adATLASModuleInfoFPStatus=adATLASModuleInfoFPStatus, adtran=adtran, adATLASModuleInfoEntry=adATLASModuleInfoEntry, adMgmt=adMgmt, adATLASCbuBackupActive=adATLASCbuBackupActive, adATLASModuleInfoHardwareRev=adATLASModuleInfoHardwareRev, adGenATLASmg=adGenATLASmg, adATLASmg=adATLASmg, adATLASCbuTestCallConnected=adATLASCbuTestCallConnected, adATLASModuleInfoIndex=adATLASModuleInfoIndex, adATLASModuleInfoNumIfs=adATLASModuleInfoNumIfs, adATLASCbuTestCallOriginated=adATLASCbuTestCallOriginated, adATLASModuleInfoFirmwareRev=adATLASModuleInfoFirmwareRev, adATLASModuleOnline=adATLASModuleOnline, adATLASCbuTestCallFailed=adATLASCbuTestCallFailed, adATLASModuleOffline=adATLASModuleOffline)
| 160.595238 | 1,382 | 0.783543 | 1,473 | 13,490 | 7.175832 | 0.184657 | 0.044749 | 0.075497 | 0.010974 | 0.503311 | 0.408136 | 0.372185 | 0.322044 | 0.311447 | 0.309272 | 0 | 0.053697 | 0.080578 | 13,490 | 83 | 1,383 | 162.53012 | 0.798516 | 0.025204 | 0 | 0 | 0 | 0.052632 | 0.342592 | 0.125181 | 0 | 0 | 0.00274 | 0 | 0 | 1 | 0 | false | 0.039474 | 0.105263 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2ad28890a419721b32ae86ff409bb2473b379f2a | 308 | py | Python | rp_recruit/forms.py | praekeltfoundation/rp-sidekick | 01f2d1ced8caefb39c93112f74baac70dbe943bc | [
"BSD-3-Clause"
] | 1 | 2018-10-05T21:47:43.000Z | 2018-10-05T21:47:43.000Z | rp_recruit/forms.py | praekeltfoundation/rp-sidekick | 01f2d1ced8caefb39c93112f74baac70dbe943bc | [
"BSD-3-Clause"
] | 114 | 2018-08-14T14:37:20.000Z | 2020-07-31T15:56:51.000Z | rp_recruit/forms.py | praekeltfoundation/rp-sidekick | 01f2d1ced8caefb39c93112f74baac70dbe943bc | [
"BSD-3-Clause"
] | null | null | null | from django import forms
from django.forms import widgets
from phonenumber_field.formfields import PhoneNumberField
class SignupForm(forms.Form):
name = forms.CharField(required=True)
msisdn = PhoneNumberField(
required=True, widget=widgets.TextInput(attrs={"class": "form-control"})
)
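`PhoneNumberField` rejects strings that are not plausible phone numbers before the form validates. A rough standalone stand-in for that check (a hypothetical regex approximating E.164, not the library's actual validation logic):

```python
import re

# Hypothetical stand-in for PhoneNumberField validation: an optional
# leading '+', a non-zero first digit, and 8-15 digits total (roughly E.164).
MSISDN_RE = re.compile(r"^\+?[1-9]\d{7,14}$")

def is_valid_msisdn(raw):
    """Return True if the string looks like an international phone number."""
    return bool(MSISDN_RE.match(raw.strip()))

print(is_valid_msisdn("+27821234567"))  # True
print(is_valid_msisdn("abc"))           # False
```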
| 28 | 80 | 0.756494 | 35 | 308 | 6.628571 | 0.6 | 0.086207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149351 | 308 | 10 | 81 | 30.8 | 0.885496 | 0 | 0 | 0 | 0 | 0 | 0.055195 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
2ad75cc096fdd5b70b691977905a9d31888b4cf4 | 995 | py | Python | easy/python3/c0025_110_balanced-binary-tree/00_leetcode_0025.py | drunkwater/leetcode | 8cc4a07763e71efbaedb523015f0c1eff2927f60 | [
"Ruby"
] | null | null | null | easy/python3/c0025_110_balanced-binary-tree/00_leetcode_0025.py | drunkwater/leetcode | 8cc4a07763e71efbaedb523015f0c1eff2927f60 | [
"Ruby"
] | null | null | null | easy/python3/c0025_110_balanced-binary-tree/00_leetcode_0025.py | drunkwater/leetcode | 8cc4a07763e71efbaedb523015f0c1eff2927f60 | [
"Ruby"
] | 3 | 2018-02-09T02:46:48.000Z | 2021-02-20T08:32:03.000Z | # DRUNKWATER TEMPLATE(add description and prototypes)
# Question Title and Description on leetcode.com
# Function Declaration and Function Prototypes on leetcode.com
#110. Balanced Binary Tree
#Given a binary tree, determine if it is height-balanced.
#For this problem, a height-balanced binary tree is defined as:
#a binary tree in which the depth of the two subtrees of every node never differ by more than 1.
#Example 1:
#Given the following tree [3,9,20,null,null,15,7]:
# 3
# / \
# 9 20
# / \
# 15 7
#Return true.
#Example 2:
#Given the following tree [1,2,2,3,3,null,null,4,4]:
# 1
# / \
# 2 2
# / \
# 3 3
# / \
# 4 4
#Return false.
## Definition for a binary tree node.
## class TreeNode:
## def __init__(self, x):
## self.val = x
## self.left = None
## self.right = None
#class Solution:
# def isBalanced(self, root):
# """
# :type root: TreeNode
# :rtype: bool
# """
# Time Is Money | 24.268293 | 96 | 0.615075 | 145 | 995 | 4.193103 | 0.524138 | 0.082237 | 0.054276 | 0.069079 | 0.016447 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046639 | 0.267337 | 995 | 41 | 97 | 24.268293 | 0.78738 | 0.914573 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2ad903dc29ad06a226a2ad0a6e4aea3625b02e97 | 543 | py | Python | notifier/apps.py | hwvn/django-notifier | 56ea70929b96fd09e578f7c933dac0e0e3844b31 | [
"BSD-2-Clause"
] | null | null | null | notifier/apps.py | hwvn/django-notifier | 56ea70929b96fd09e578f7c933dac0e0e3844b31 | [
"BSD-2-Clause"
] | null | null | null | notifier/apps.py | hwvn/django-notifier | 56ea70929b96fd09e578f7c933dac0e0e3844b31 | [
"BSD-2-Clause"
] | null | null | null | # on each start up, create notification emails
# TODO do this via migrations?? So no stress starting up the app
from django.apps import AppConfig
from django.db.models.signals import post_migrate
def post_migration_callback(sender, **kwargs):
from notifier.management import create_notifications, create_backends
create_backends(app='notifier')
create_notifications(app='notifier')
class NotifierConfig(AppConfig):
name = 'notifier'
def ready(self):
post_migrate.connect(post_migration_callback, sender=self)
| 28.578947 | 73 | 0.773481 | 70 | 543 | 5.857143 | 0.614286 | 0.04878 | 0.102439 | 0.131707 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151013 | 543 | 18 | 74 | 30.166667 | 0.889371 | 0.197053 | 0 | 0 | 0 | 0 | 0.055427 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 1 | 0.2 | false | 0 | 0.3 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
2adec7eab4e4eccb570d4d778d09ade349f6486f | 5,561 | py | Python | python/pointcloud/lidaroverview.py | NLeSC/pointcloud-benchmark | 860cb91d5a0b984d9f82401e97b6840bd3f8aa3a | [
"Apache-2.0"
] | 9 | 2015-03-02T23:17:01.000Z | 2022-03-01T01:52:50.000Z | python/pointcloud/lidaroverview.py | NLeSC/pointcloud-benchmark | 860cb91d5a0b984d9f82401e97b6840bd3f8aa3a | [
"Apache-2.0"
] | 2 | 2015-02-20T10:21:18.000Z | 2016-05-20T10:46:55.000Z | python/pointcloud/lidaroverview.py | NLeSC/pointcloud-benchmark | 860cb91d5a0b984d9f82401e97b6840bd3f8aa3a | [
"Apache-2.0"
] | 8 | 2015-10-20T12:00:16.000Z | 2022-03-01T01:52:51.000Z | #!/usr/bin/env python
################################################################################
# Created by Oscar Martinez #
# o.rubi@esciencecenter.nl #
################################################################################
import os, optparse, psycopg2, multiprocessing, logging
from pointcloud import utils, postgresops, lasops
def runChild(childId, childrenQueue, connectionString, dbtable, srid):
kill_received = False
connection = psycopg2.connect(connectionString)
cursor = connection.cursor()
while not kill_received:
job = None
try:
# This call will patiently wait until new job is available
job = childrenQueue.get()
except:
# if there is an error we will quit the loop
kill_received = True
if job == None:
kill_received = True
else:
[identifier, inputFile,] = job
(_, count, minX, minY, minZ, maxX, maxY, maxZ, scaleX, scaleY, scaleZ, offsetX, offsetY, offsetZ) = lasops.getPCFileDetails(inputFile)
insertStatement = """INSERT INTO """ + dbtable + """(id,filepath,num,scalex,scaley,scalez,offsetx,offsety,offsetz,geom) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, ST_MakeEnvelope(%s, %s, %s, %s, %s));"""
insertArgs = [identifier, inputFile, int(count), float(scaleX), float(scaleY), float(scaleZ), float(offsetX), float(offsetY), float(offsetZ), float(minX), float(minY), float(maxX), float(maxY), int(srid)]
logging.info(cursor.mogrify(insertStatement, insertArgs))
cursor.execute(insertStatement, insertArgs)
connection.commit()
cursor.close()
connection.close()
def run(inputFolder, numcores, dbname, dbuser, dbpass, dbhost, dbport, createdb, dbtable, srid):
opts = 0
childrenQueue = multiprocessing.Queue()
ifiles = utils.getFiles(inputFolder)
for i in range(len(ifiles)):
childrenQueue.put([i, ifiles[i]])
for i in range(int(numcores)): #we add as many None jobs as numWorkers to tell them to terminate (queue is FIFO)
childrenQueue.put(None)
clineConString = postgresops.getConnectString(dbname, dbuser, dbpass, dbhost, dbport, True)
psycopgConString = postgresops.getConnectString(dbname, dbuser, dbpass, dbhost, dbport, False)
if createdb:
os.system('dropdb ' + clineConString)
os.system('createdb ' + clineConString)
connection = psycopg2.connect(psycopgConString)
cursor = connection.cursor()
if createdb:
cursor.execute('CREATE EXTENSION postgis')
connection.commit()
q = """
CREATE TABLE """ + dbtable + """ (
id integer,
filepath text,
num integer,
scalex double precision,
scaley double precision,
scalez double precision,
offsetx double precision,
offsety double precision,
offsetz double precision,
geom public.geometry(Geometry,""" + str(srid) + """)
)"""
logging.info(cursor.mogrify(q))
cursor.execute(q)
connection.commit()
# q = "select addgeometrycolumn('" + dbtable + "','geom',28992,'POLYGON',2)"
# logging.info(cursor.mogrify(q))
# cursor.execute(q)
# connection.commit()
print 'numcores',numcores
children = []
# We start numcores children processes
for i in range(int(numcores)):
children.append(multiprocessing.Process(target=runChild,
args=(i, childrenQueue, psycopgConString, dbtable, srid)))
children[-1].start()
# wait for all children to finish their execution
for i in range(int(numcores)):
children[i].join()
q = "create index ON " + dbtable + " using GIST (geom)"
logging.info(cursor.mogrify(q))
cursor.execute(q)
connection.commit()
old_isolation_level = connection.isolation_level
connection.set_isolation_level(0)
q = "VACUUM FULL ANALYZE " + dbtable
logging.info(cursor.mogrify(q))
cursor.execute(q)
connection.commit()
connection.set_isolation_level(old_isolation_level)
cursor.close()
def main(opts):
run(opts.input, opts.cores, opts.dbname, opts.dbuser, opts.dbpass, opts.dbhost, opts.dbport, opts.create, opts.dbtable, opts.srid)
if __name__ == "__main__":
usage = 'Usage: %prog [options]'
description = "Creates a table with geometries describing the areas of which the LAS files contain points."
op = optparse.OptionParser(usage=usage, description=description)
op.add_option('-i','--input',default='',help='Input folder where to find the LAS files',type='string')
op.add_option('-s','--srid',default='',help='SRID',type='string')
op.add_option('-x','--create',default=False,help='Creates the database',action='store_true')
op.add_option('-n','--dbname',default='',help='Postgres DB name where to store the geometries',type='string')
op.add_option('-u','--dbuser',default='',help='DB user',type='string')
op.add_option('-p','--dbpass',default='',help='DB pass',type='string')
op.add_option('-m','--dbhost',default='',help='DB host',type='string')
op.add_option('-r','--dbport',default='',help='DB port',type='string')
op.add_option('-t','--dbtable',default='',help='DB table',type='string')
op.add_option('-c','--cores',default='',help='Number of used processes',type='string')
(opts, args) = op.parse_args()
main(opts)
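`run()` seeds the queue with one `None` sentinel per worker, so each child's blocking `get()` eventually returns a terminator once the real jobs are drained (the queue is FIFO). A minimal sketch of that sentinel pattern, using threads instead of processes and a doubling step instead of the real per-file work:

```python
import queue
import threading

def worker(jobs, results):
    # Loop until the None sentinel arrives, mirroring runChild's loop.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.append(job * 2)  # stand-in for the real per-file work

jobs = queue.Queue()
results = []
for i in range(4):            # enqueue the real jobs first (FIFO)
    jobs.put(i)
num_workers = 2
for _ in range(num_workers):  # then one sentinel per worker
    jobs.put(None)

threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(num_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 2, 4, 6]
```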
| 45.211382 | 216 | 0.618414 | 625 | 5,561 | 5.4464 | 0.3584 | 0.007051 | 0.008813 | 0.009401 | 0.216804 | 0.147767 | 0.11839 | 0.067274 | 0.06463 | 0.06463 | 0 | 0.002761 | 0.218306 | 5,561 | 122 | 217 | 45.581967 | 0.780308 | 0.107535 | 0 | 0.229167 | 0 | 0.010417 | 0.219461 | 0.019211 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.052083 | 0.020833 | null | null | 0.010417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
2af9217c8968d6bddb3f0c2fa82a54ed2ec20772 | 1,924 | py | Python | resnetMnist.py | AndreaCeccarelli/gpu-monitor | aad4dc88387a69235e9c370cb08da1f16ba4aa96 | [
"MIT"
] | 4 | 2021-11-02T08:06:22.000Z | 2022-02-08T13:18:51.000Z | resnetMnist.py | AndreaCeccarelli/gpu-monitor | aad4dc88387a69235e9c370cb08da1f16ba4aa96 | [
"MIT"
] | null | null | null | resnetMnist.py | AndreaCeccarelli/gpu-monitor | aad4dc88387a69235e9c370cb08da1f16ba4aa96 | [
"MIT"
] | null | null | null |
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from art.utils import load_dataset
import art.attacks
from art.classifiers import PyTorchClassifier
from art.utils import load_mnist
from art.utils import load_cifar10
import torchvision
import torchvision.transforms as transforms
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import datasetSTL10
from datasetSTL10 import stl10
from sklearn.utils import shuffle
def tensor_map(x_train,y_train,x_val,y_val): return map(torch.tensor,(x_train,y_train,x_val,y_val))
def preprocess(x):
return x.view(-1, 1, 28, 28)
def conv(in_size, out_size, pad=1):
return nn.Conv2d(in_size, out_size, kernel_size=3, stride=2, padding=pad)
class ResBlock(nn.Module):
def __init__(self, in_size:int, hidden_size:int, out_size:int, pad:int):
super().__init__()
self.conv1 = conv(in_size, hidden_size, pad)
self.conv2 = conv(hidden_size, out_size, pad)
self.batchnorm1 = nn.BatchNorm2d(hidden_size)
self.batchnorm2 = nn.BatchNorm2d(out_size)
def convblock(self, x):
x = F.relu(self.batchnorm1(self.conv1(x)))
x = F.relu(self.batchnorm2(self.conv2(x)))
return x
def forward(self, x): return x + self.convblock(x) # skip connection
class ResNet(nn.Module):
def __init__(self, n_classes=10):
super().__init__()
self.res1 = ResBlock(1, 8, 16, 15)
self.res2 = ResBlock(16, 32, 16, 15)
self.conv = conv(16, n_classes)
self.batchnorm = nn.BatchNorm2d(n_classes)
self.maxpool = nn.AdaptiveMaxPool2d(1)
def forward(self, x):
x = preprocess(x)
x = self.res1(x)
x = self.res2(x)
x = self.maxpool(self.batchnorm(self.conv(x)))
return x.view(x.size(0), -1)
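The unusually large `pad=15` in each `ResBlock` is what lets a kernel-3, stride-2 convolution preserve the 28x28 MNIST spatial size, so the skip connection `x + self.convblock(x)` has matching shapes. A quick check of the Conv2d size formula (pure Python, no torch needed):

```python
def conv2d_out(size, kernel=3, stride=2, pad=1):
    """Spatial output size of a Conv2d: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

print(conv2d_out(28, pad=15))  # 28: the ResBlock convs keep 28x28
print(conv2d_out(28))          # 14: the pad=1 head conv halves the size
```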
| 31.540984 | 99 | 0.700624 | 297 | 1,924 | 4.37037 | 0.299663 | 0.009245 | 0.024653 | 0.041602 | 0.127889 | 0.030817 | 0.030817 | 0.030817 | 0 | 0 | 0 | 0.035393 | 0.192308 | 1,924 | 60 | 100 | 32.066667 | 0.799871 | 0.007796 | 0 | 0.040816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.163265 | false | 0 | 0.367347 | 0.081633 | 0.653061 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
2afdfc8e4280b526146ac2a7108015dcd3d8577b | 209 | py | Python | gans/utils/config.py | tlatkowski/gans-2.0 | 974efc5bbcea39c0a7dec9405ba4514ada6dc39c | [
"MIT"
] | 78 | 2019-09-25T15:09:18.000Z | 2022-02-09T09:56:15.000Z | gans/utils/config.py | tlatkowski/gans-2.0 | 974efc5bbcea39c0a7dec9405ba4514ada6dc39c | [
"MIT"
] | 23 | 2019-10-09T21:24:39.000Z | 2022-03-12T00:00:53.000Z | gans/utils/config.py | tlatkowski/gans-2.0 | 974efc5bbcea39c0a7dec9405ba4514ada6dc39c | [
"MIT"
] | 18 | 2020-01-24T13:13:57.000Z | 2022-02-15T18:58:12.000Z | import yaml
from easydict import EasyDict as edict
def read_config(problem_type):
with open('config/{}.yml'.format(problem_type.lower())) as f:
config = edict(yaml.load(f))
return config
| 23.222222 | 65 | 0.688995 | 30 | 209 | 4.7 | 0.633333 | 0.156028 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196172 | 209 | 8 | 66 | 26.125 | 0.839286 | 0 | 0 | 0 | 0 | 0 | 0.062201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
6302385d4820e9fc34f7dc47149c7e2988b88b19 | 880 | py | Python | mangopaysdk/configuration.py | bearstech/mangopay2-python-sdk | c01ff0bd55c0b2d6e53a81097d028fb1fa28fb1e | [
"MIT"
] | null | null | null | mangopaysdk/configuration.py | bearstech/mangopay2-python-sdk | c01ff0bd55c0b2d6e53a81097d028fb1fa28fb1e | [
"MIT"
] | null | null | null | mangopaysdk/configuration.py | bearstech/mangopay2-python-sdk | c01ff0bd55c0b2d6e53a81097d028fb1fa28fb1e | [
"MIT"
] | 1 | 2017-09-22T13:29:53.000Z | 2017-09-22T13:29:53.000Z | from mangopaysdk.tools import enums
import logging
class Configuration:
"""Configuration class for MangoPay API SDK.
All fields are required.
"""
# Setting for client: client Id and client password
ClientID = ''
ClientPassword = ''
# Base URL to MangoPay API
BaseUrl = 'https://api.sandbox.mangopay.com'
# path to temp - required to cache auth tokens
    TempPath = "c:\\Temp\\"
# Constant to switch debug mode (0/1) - display all request and response data
DebugMode = 0
# SSL verification (False (no verification) or path to the cacert.pem file)
SSLVerification = False
# RestTool class
# NB: you can swap this class for one of ours that implement some custom logic
RestToolClass = None
# we use DEBUG level for internal debugging
if Configuration.DebugMode:
logging.basicConfig(level=logging.DEBUG)
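One subtlety with class-level configuration like the above: while a class body is executing, the class name itself is not yet bound, so attributes defined earlier in the body must be referenced by their bare names. A minimal sketch (all names are illustrative):

```python
class Settings:
    DebugMode = 1
    # Inside the body, the name "Settings" does not exist yet; writing
    # Settings.DebugMode here would raise NameError. The bare name works
    # because the class body executes like a regular namespace.
    Level = "DEBUG" if DebugMode else "WARNING"

# Once the body has finished, attribute access via the class name is fine.
level = Settings.Level
```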
| 25.882353 | 82 | 0.696591 | 114 | 880 | 5.377193 | 0.710526 | 0.026101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004431 | 0.230682 | 880 | 33 | 83 | 26.666667 | 0.901034 | 0.535227 | 0 | 0 | 0 | 0 | 0.105943 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.083333 | 0.166667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
632680393c1fd780b9c0c165bffcc2d58cfe55d8 | 13,711 | py | Python | Rover/install_isolated/lib/python2.7/dist-packages/cartographer_ros_msgs/msg/_TrajectoryOptions.py | Rose-Hulman-Rover-Team/Rover-2019-2020 | d75a9086fa733f8a8b5240005bee058737ad82c7 | [
"MIT"
] | 1 | 2018-10-04T14:37:00.000Z | 2018-10-04T14:37:00.000Z | TrekBot_WS/install_isolated/lib/python2.7/dist-packages/cartographer_ros_msgs/msg/_TrajectoryOptions.py | Rafcin/TrekBot | d3dc63e6c16a040b16170f143556ef358018b7da | [
"Unlicense"
] | null | null | null | TrekBot_WS/install_isolated/lib/python2.7/dist-packages/cartographer_ros_msgs/msg/_TrajectoryOptions.py | Rafcin/TrekBot | d3dc63e6c16a040b16170f143556ef358018b7da | [
"Unlicense"
] | null | null | null | # This Python file uses the following encoding: utf-8
"""autogenerated by genpy from cartographer_ros_msgs/TrajectoryOptions.msg. Do not edit."""
import sys
python3 = True if sys.hexversion > 0x03000000 else False
import genpy
import struct
class TrajectoryOptions(genpy.Message):
_md5sum = "7eda9b62c16c18fa1563587e73501e47"
_type = "cartographer_ros_msgs/TrajectoryOptions"
_has_header = False #flag to mark the presence of a Header object
_full_text = """# Copyright 2016 The Cartographer Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
string tracking_frame
string published_frame
string odom_frame
bool provide_odom_frame
bool use_odometry
bool use_nav_sat
bool use_landmarks
bool publish_frame_projected_to_2d
int32 num_laser_scans
int32 num_multi_echo_laser_scans
int32 num_subdivisions_per_laser_scan
int32 num_point_clouds
float64 rangefinder_sampling_ratio
float64 odometry_sampling_ratio
float64 fixed_frame_pose_sampling_ratio
float64 imu_sampling_ratio
float64 landmarks_sampling_ratio
# This is a binary-encoded
# 'cartographer.mapping.proto.TrajectoryBuilderOptions' proto.
string trajectory_builder_options_proto
"""
__slots__ = ['tracking_frame','published_frame','odom_frame','provide_odom_frame','use_odometry','use_nav_sat','use_landmarks','publish_frame_projected_to_2d','num_laser_scans','num_multi_echo_laser_scans','num_subdivisions_per_laser_scan','num_point_clouds','rangefinder_sampling_ratio','odometry_sampling_ratio','fixed_frame_pose_sampling_ratio','imu_sampling_ratio','landmarks_sampling_ratio','trajectory_builder_options_proto']
_slot_types = ['string','string','string','bool','bool','bool','bool','bool','int32','int32','int32','int32','float64','float64','float64','float64','float64','string']
def __init__(self, *args, **kwds):
"""
Constructor. Any message fields that are implicitly/explicitly
set to None will be assigned a default value. The recommend
use is keyword arguments as this is more robust to future message
changes. You cannot mix in-order arguments and keyword arguments.
The available fields are:
tracking_frame,published_frame,odom_frame,provide_odom_frame,use_odometry,use_nav_sat,use_landmarks,publish_frame_projected_to_2d,num_laser_scans,num_multi_echo_laser_scans,num_subdivisions_per_laser_scan,num_point_clouds,rangefinder_sampling_ratio,odometry_sampling_ratio,fixed_frame_pose_sampling_ratio,imu_sampling_ratio,landmarks_sampling_ratio,trajectory_builder_options_proto
:param args: complete set of field values, in .msg order
:param kwds: use keyword arguments corresponding to message field names
to set specific fields.
"""
if args or kwds:
super(TrajectoryOptions, self).__init__(*args, **kwds)
#message fields cannot be None, assign default values for those that are
if self.tracking_frame is None:
self.tracking_frame = ''
if self.published_frame is None:
self.published_frame = ''
if self.odom_frame is None:
self.odom_frame = ''
if self.provide_odom_frame is None:
self.provide_odom_frame = False
if self.use_odometry is None:
self.use_odometry = False
if self.use_nav_sat is None:
self.use_nav_sat = False
if self.use_landmarks is None:
self.use_landmarks = False
if self.publish_frame_projected_to_2d is None:
self.publish_frame_projected_to_2d = False
if self.num_laser_scans is None:
self.num_laser_scans = 0
if self.num_multi_echo_laser_scans is None:
self.num_multi_echo_laser_scans = 0
if self.num_subdivisions_per_laser_scan is None:
self.num_subdivisions_per_laser_scan = 0
if self.num_point_clouds is None:
self.num_point_clouds = 0
if self.rangefinder_sampling_ratio is None:
self.rangefinder_sampling_ratio = 0.
if self.odometry_sampling_ratio is None:
self.odometry_sampling_ratio = 0.
if self.fixed_frame_pose_sampling_ratio is None:
self.fixed_frame_pose_sampling_ratio = 0.
if self.imu_sampling_ratio is None:
self.imu_sampling_ratio = 0.
if self.landmarks_sampling_ratio is None:
self.landmarks_sampling_ratio = 0.
if self.trajectory_builder_options_proto is None:
self.trajectory_builder_options_proto = ''
else:
self.tracking_frame = ''
self.published_frame = ''
self.odom_frame = ''
self.provide_odom_frame = False
self.use_odometry = False
self.use_nav_sat = False
self.use_landmarks = False
self.publish_frame_projected_to_2d = False
self.num_laser_scans = 0
self.num_multi_echo_laser_scans = 0
self.num_subdivisions_per_laser_scan = 0
self.num_point_clouds = 0
self.rangefinder_sampling_ratio = 0.
self.odometry_sampling_ratio = 0.
self.fixed_frame_pose_sampling_ratio = 0.
self.imu_sampling_ratio = 0.
self.landmarks_sampling_ratio = 0.
self.trajectory_builder_options_proto = ''
def _get_types(self):
"""
internal API method
"""
return self._slot_types
def serialize(self, buff):
"""
serialize message into buffer
:param buff: buffer, ``StringIO``
"""
try:
_x = self.tracking_frame
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.published_frame
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.odom_frame
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self
buff.write(_get_struct_5B4i5d().pack(_x.provide_odom_frame, _x.use_odometry, _x.use_nav_sat, _x.use_landmarks, _x.publish_frame_projected_to_2d, _x.num_laser_scans, _x.num_multi_echo_laser_scans, _x.num_subdivisions_per_laser_scan, _x.num_point_clouds, _x.rangefinder_sampling_ratio, _x.odometry_sampling_ratio, _x.fixed_frame_pose_sampling_ratio, _x.imu_sampling_ratio, _x.landmarks_sampling_ratio))
_x = self.trajectory_builder_options_proto
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(locals().get('_x', self)))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(locals().get('_x', self)))))
def deserialize(self, str):
"""
unpack serialized message in str into this message instance
:param str: byte array of serialized message, ``str``
"""
try:
end = 0
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.tracking_frame = str[start:end].decode('utf-8')
else:
self.tracking_frame = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.published_frame = str[start:end].decode('utf-8')
else:
self.published_frame = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.odom_frame = str[start:end].decode('utf-8')
else:
self.odom_frame = str[start:end]
_x = self
start = end
end += 61
(_x.provide_odom_frame, _x.use_odometry, _x.use_nav_sat, _x.use_landmarks, _x.publish_frame_projected_to_2d, _x.num_laser_scans, _x.num_multi_echo_laser_scans, _x.num_subdivisions_per_laser_scan, _x.num_point_clouds, _x.rangefinder_sampling_ratio, _x.odometry_sampling_ratio, _x.fixed_frame_pose_sampling_ratio, _x.imu_sampling_ratio, _x.landmarks_sampling_ratio,) = _get_struct_5B4i5d().unpack(str[start:end])
self.provide_odom_frame = bool(self.provide_odom_frame)
self.use_odometry = bool(self.use_odometry)
self.use_nav_sat = bool(self.use_nav_sat)
self.use_landmarks = bool(self.use_landmarks)
self.publish_frame_projected_to_2d = bool(self.publish_frame_projected_to_2d)
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.trajectory_builder_options_proto = str[start:end].decode('utf-8')
else:
self.trajectory_builder_options_proto = str[start:end]
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
def serialize_numpy(self, buff, numpy):
"""
serialize message with numpy array types into buffer
:param buff: buffer, ``StringIO``
:param numpy: numpy python module
"""
try:
_x = self.tracking_frame
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.published_frame
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self.odom_frame
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
_x = self
buff.write(_get_struct_5B4i5d().pack(_x.provide_odom_frame, _x.use_odometry, _x.use_nav_sat, _x.use_landmarks, _x.publish_frame_projected_to_2d, _x.num_laser_scans, _x.num_multi_echo_laser_scans, _x.num_subdivisions_per_laser_scan, _x.num_point_clouds, _x.rangefinder_sampling_ratio, _x.odometry_sampling_ratio, _x.fixed_frame_pose_sampling_ratio, _x.imu_sampling_ratio, _x.landmarks_sampling_ratio))
_x = self.trajectory_builder_options_proto
length = len(_x)
if python3 or type(_x) == unicode:
_x = _x.encode('utf-8')
length = len(_x)
buff.write(struct.pack('<I%ss'%length, length, _x))
except struct.error as se: self._check_types(struct.error("%s: '%s' when writing '%s'" % (type(se), str(se), str(locals().get('_x', self)))))
except TypeError as te: self._check_types(ValueError("%s: '%s' when writing '%s'" % (type(te), str(te), str(locals().get('_x', self)))))
def deserialize_numpy(self, str, numpy):
"""
unpack serialized message in str into this message instance using numpy for array types
:param str: byte array of serialized message, ``str``
:param numpy: numpy python module
"""
try:
end = 0
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.tracking_frame = str[start:end].decode('utf-8')
else:
self.tracking_frame = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.published_frame = str[start:end].decode('utf-8')
else:
self.published_frame = str[start:end]
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.odom_frame = str[start:end].decode('utf-8')
else:
self.odom_frame = str[start:end]
_x = self
start = end
end += 61
(_x.provide_odom_frame, _x.use_odometry, _x.use_nav_sat, _x.use_landmarks, _x.publish_frame_projected_to_2d, _x.num_laser_scans, _x.num_multi_echo_laser_scans, _x.num_subdivisions_per_laser_scan, _x.num_point_clouds, _x.rangefinder_sampling_ratio, _x.odometry_sampling_ratio, _x.fixed_frame_pose_sampling_ratio, _x.imu_sampling_ratio, _x.landmarks_sampling_ratio,) = _get_struct_5B4i5d().unpack(str[start:end])
self.provide_odom_frame = bool(self.provide_odom_frame)
self.use_odometry = bool(self.use_odometry)
self.use_nav_sat = bool(self.use_nav_sat)
self.use_landmarks = bool(self.use_landmarks)
self.publish_frame_projected_to_2d = bool(self.publish_frame_projected_to_2d)
start = end
end += 4
(length,) = _struct_I.unpack(str[start:end])
start = end
end += length
if python3:
self.trajectory_builder_options_proto = str[start:end].decode('utf-8')
else:
self.trajectory_builder_options_proto = str[start:end]
return self
except struct.error as e:
raise genpy.DeserializationError(e) #most likely buffer underfill
_struct_I = genpy.struct_I
def _get_struct_I():
global _struct_I
return _struct_I
_struct_5B4i5d = None
def _get_struct_5B4i5d():
global _struct_5B4i5d
if _struct_5B4i5d is None:
_struct_5B4i5d = struct.Struct("<5B4i5d")
return _struct_5B4i5d
| 42.058282 | 433 | 0.698271 | 1,945 | 13,711 | 4.582519 | 0.120308 | 0.072927 | 0.032088 | 0.036127 | 0.709862 | 0.632896 | 0.601369 | 0.57332 | 0.564569 | 0.553125 | 0 | 0.017232 | 0.200058 | 13,711 | 325 | 434 | 42.187692 | 0.795405 | 0.118591 | 0 | 0.706522 | 1 | 0 | 0.163719 | 0.047615 | 0 | 0 | 0.000838 | 0 | 0 | 1 | 0.028986 | false | 0 | 0.01087 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
632d19e08cd85b2118d08298c02d7c4c43785f49 | 5,759 | py | Python | geo/src/ptg/bspline_basis_manual.py | sandialabs/sibl | 010cbc3fbbd14cdfa3742ec4c95100f05146dce8 | [
"MIT"
] | 3 | 2020-07-08T13:30:24.000Z | 2021-11-03T22:43:40.000Z | geo/src/ptg/bspline_basis_manual.py | sandialabs/sibl | 010cbc3fbbd14cdfa3742ec4c95100f05146dce8 | [
"MIT"
] | 31 | 2020-02-03T15:32:43.000Z | 2022-03-07T14:51:17.000Z | geo/src/ptg/bspline_basis_manual.py | sandialabs/sibl | 010cbc3fbbd14cdfa3742ec4c95100f05146dce8 | [
"MIT"
] | null | null | null | from typing import Union
import numpy as np
def bspline_basis_manual(
knot_vector_t: Union[list, tuple],
knot_i: int = 0,
p: int = 0,
nti: int = 1,
verbose: bool = False,
):
    """Computes the B-spline polynomial basis,
    currently limited to constant, linear, or quadratic degree.
Args:
knot_vector_t (float array): [t0, t1, t2, ... tI]
len(knot_vector_t) = (I + 1)
(I + 1) knots with (I) knot spans
must have length of two or more
must be a non-decreasing sequence
knot_i (int): index in the list of
possible knot_index values = [0, 1, 2, ... I]
p (int): polynomial degree
(p=0: constant, p=1: linear, p=2: quadratic, p=3: cubic, etc.),
currently limited to p = [0, 1, 2].
nti (int): number of time intervals for t in per-knot-span
interval [t_i, t_{i+1}],
nti = 1 is default
verbose (bool): prints polynomial or error checking
Returns:
tuple: arrays of (t, f(t)) as time t and polynomial evaluated at t; or,
AssertionError: if input is out of range
"""
num_knots = len(knot_vector_t)
MAX_DEGREE = 2
try:
assert (
len(knot_vector_t) >= 2
), "Error: knot vector length must be two or larger."
assert knot_i >= 0, "Error: knot index knot_i must be non-negative."
assert p >= 0, "Error: polynomial degree p must be non-negative."
assert (
p <= MAX_DEGREE
), f"Error: polynomial degree p exceeds maximum of {MAX_DEGREE}"
assert nti >= 1, "Error: number of time intervals nti must be 1 or greater."
assert knot_i <= (
num_knots - 1
), "Error: knot index knot_i exceeds knot vector length minus 1."
num_knots_i_to_end = len(knot_vector_t[knot_i:])
assert (
num_knots_i_to_end >= p + 1
), "Error: insufficient remaining knots for local support."
except AssertionError as error:
if verbose:
print(error)
return error
knots_lhs = knot_vector_t[0:-1] # left-hand-side knot values
knots_rhs = knot_vector_t[1:] # right-hand-side knot values
knot_spans = np.array(knots_rhs) - np.array(knots_lhs)
dt = knot_spans / nti
# assert all([dti >= 0 for dti in dt]), "Error: knot vector is decreasing."
if not all([dti >= 0 for dti in dt]):
raise ValueError("Error: knot vector is decreasing.")
# improve index notation
# t = [knots_lhs[i] + k * dt[i] for i in np.arange(num_knots-1) for k in np.arange(nti)]
t = [
knots_lhs[k] + j * dt[k]
for k in np.arange(num_knots - 1)
for j in np.arange(nti)
]
t.append(knot_vector_t[-1])
t = np.array(t)
# y = np.zeros((num_knots - 1) * nti + 1)
# y = np.zeros(len(t))
f_of_t = np.zeros(len(t))
if verbose:
print(f"Knot vector: {knot_vector_t}")
print(f"Number of knots = {num_knots}")
print(f"Knot index: {knot_i}")
print(f"Left-hand-side knot vector values: {knots_lhs}")
print(f"Right-hand-side knot vector values: {knots_rhs}")
print(f"Knot spans: {knot_spans}")
print(f"Number of time intervals per knot span: {nti}")
print(f"Knot span deltas: {dt}")
if p == 0:
f_of_t[knot_i * nti : knot_i * nti + nti] = 1.0
if verbose:
print(f"t = {t}")
print(f"f(t) = {f_of_t}")
if p == 1:
for (eix, te) in enumerate(t): # e for evaluations, ix for index
if te >= knot_vector_t[knot_i] and te < knot_vector_t[knot_i + 1]:
f_of_t[eix] = (te - knot_vector_t[knot_i]) / (
knot_vector_t[knot_i + 1] - knot_vector_t[knot_i]
)
elif te >= knot_vector_t[knot_i + 1] and te < knot_vector_t[knot_i + 2]:
f_of_t[eix] = (knot_vector_t[knot_i + 2] - te) / (
knot_vector_t[knot_i + 2] - knot_vector_t[knot_i + 1]
)
if p == 2:
for (eix, te) in enumerate(t): # e for evaluations, ix for index
if te >= knot_vector_t[knot_i] and te < knot_vector_t[knot_i + 1]:
a_1 = (te - knot_vector_t[knot_i]) / (
knot_vector_t[knot_i + 2] - knot_vector_t[knot_i]
)
a_2 = (te - knot_vector_t[knot_i]) / (
knot_vector_t[knot_i + 1] - knot_vector_t[knot_i]
)
f_of_t[eix] = a_1 * a_2
elif te >= knot_vector_t[knot_i + 1] and te < knot_vector_t[knot_i + 2]:
b_1 = (te - knot_vector_t[knot_i]) / (
knot_vector_t[knot_i + 2] - knot_vector_t[knot_i]
)
b_2 = (knot_vector_t[knot_i + 2] - te) / (
knot_vector_t[knot_i + 2] - knot_vector_t[knot_i + 1]
)
b_3 = (knot_vector_t[knot_i + 3] - te) / (
knot_vector_t[knot_i + 3] - knot_vector_t[knot_i + 1]
)
b_4 = (te - knot_vector_t[knot_i + 1]) / (
knot_vector_t[knot_i + 2] - knot_vector_t[knot_i + 1]
)
f_of_t[eix] = (b_1 * b_2) + (b_3 * b_4)
elif te >= knot_vector_t[knot_i + 2] and te < knot_vector_t[knot_i + 3]:
c_1 = (knot_vector_t[knot_i + 3] - te) / (
knot_vector_t[knot_i + 3] - knot_vector_t[knot_i + 1]
)
c_2 = (knot_vector_t[knot_i + 3] - te) / (
knot_vector_t[knot_i + 3] - knot_vector_t[knot_i + 2]
)
f_of_t[eix] = c_1 * c_2
return t, f_of_t
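The p=1 branch above evaluates the familiar "hat" function. A standalone sketch of that single piecewise-linear basis, pure Python with no numpy (`hat_basis` is an illustrative helper, not part of the module):

```python
def hat_basis(t, t0, t1, t2):
    # Linear (p=1) B-spline basis on knots [t0, t1, t2]: ramps 0 -> 1
    # over [t0, t1), then 1 -> 0 over [t1, t2); zero elsewhere.
    if t0 <= t < t1:
        return (t - t0) / (t1 - t0)
    if t1 <= t < t2:
        return (t2 - t) / (t2 - t1)
    return 0.0
```

This mirrors the two interior branches of the `p == 1` loop, with the half-open intervals matching the `>=` / `<` comparisons used there.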
| 36.681529 | 92 | 0.537593 | 883 | 5,759 | 3.260476 | 0.157418 | 0.197985 | 0.191039 | 0.213616 | 0.424106 | 0.365405 | 0.327197 | 0.282737 | 0.282737 | 0.282737 | 0 | 0.023974 | 0.348151 | 5,759 | 156 | 93 | 36.916667 | 0.742941 | 0.215315 | 0 | 0.203884 | 0 | 0 | 0.155289 | 0 | 0 | 0 | 0 | 0 | 0.07767 | 1 | 0.009709 | false | 0 | 0.019417 | 0 | 0.048544 | 0.106796 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2d5a2ad67f9fb9299929b6233ce0d9e1c67f07c8 | 576 | py | Python | apps/exams/admin.py | alfarhanzahedi/edumate | 76ced0063d25431098babb1d163c95c9ddaf3307 | [
"MIT"
] | 1 | 2021-11-28T14:18:16.000Z | 2021-11-28T14:18:16.000Z | apps/exams/admin.py | alfarhanzahedi/edumate | 76ced0063d25431098babb1d163c95c9ddaf3307 | [
"MIT"
] | 1 | 2022-02-10T10:53:12.000Z | 2022-02-10T10:53:12.000Z | apps/exams/admin.py | alfarhanzahedi/edumate | 76ced0063d25431098babb1d163c95c9ddaf3307 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Exam
from .models import Question
from .models import Option
from .models import Submission
from .models import Answer
@admin.register(Exam)
class ExamAdmin(admin.ModelAdmin):
pass
@admin.register(Question)
class QuestionAdmin(admin.ModelAdmin):
pass
@admin.register(Option)
class OptionAdmin(admin.ModelAdmin):
pass
@admin.register(Submission)
class SubmissionAdmin(admin.ModelAdmin):
readonly_fields = ('started_at', 'ended_at')
@admin.register(Answer)
class AnswerAdmin(admin.ModelAdmin):
pass
| 20.571429 | 48 | 0.779514 | 70 | 576 | 6.371429 | 0.357143 | 0.112108 | 0.179372 | 0.161435 | 0.215247 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.126736 | 576 | 27 | 49 | 21.333333 | 0.88668 | 0 | 0 | 0.190476 | 0 | 0 | 0.03125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.190476 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
2d6b343e199581568140014db5745455184b5bbf | 287 | py | Python | ex13.py | nopythoner/python | 7d39eb361f3d3dd78d61c92740897ab04c80b195 | [
"MIT"
] | null | null | null | ex13.py | nopythoner/python | 7d39eb361f3d3dd78d61c92740897ab04c80b195 | [
"MIT"
] | null | null | null | ex13.py | nopythoner/python | 7d39eb361f3d3dd78d61c92740897ab04c80b195 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
# coding:utf-8
from sys import argv
script, first, second, third = argv  # run in a terminal: python ex13.py first 2nd 3rd
print("The script is called:", script)
print("Your first variable is:", first)
print("Your second variable is:", second)
print("Your third variable is:", third)
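Unpacking `argv` like this raises `ValueError` when the argument count differs from what the script expects. A guarded sketch (`parse_three` is a hypothetical helper, not part of the exercise):

```python
def parse_three(argv):
    # argv[0] is the script name; exactly three user arguments are expected.
    if len(argv) != 4:
        raise SystemExit("usage: {} first second third".format(argv[0]))
    return argv[1], argv[2], argv[3]
```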
| 22.076923 | 68 | 0.745645 | 46 | 287 | 4.652174 | 0.565217 | 0.126168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020492 | 0.149826 | 287 | 12 | 69 | 23.916667 | 0.856557 | 0.236934 | 0 | 0 | 0 | 0 | 0.427907 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.666667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
2d7c50e9327f14b8692db7ba9e8efdce28dfe332 | 1,007 | py | Python | conftest.py | vavalomi/ssaw | 30172f22e8703f29b1abc159e52e4090960207be | [
"MIT"
] | 9 | 2019-04-06T09:36:20.000Z | 2022-01-18T18:25:37.000Z | conftest.py | vavalomi/ssaw | 30172f22e8703f29b1abc159e52e4090960207be | [
"MIT"
] | 4 | 2020-06-15T01:36:37.000Z | 2021-12-02T06:51:37.000Z | conftest.py | vavalomi/ssaw | 30172f22e8703f29b1abc159e52e4090960207be | [
"MIT"
] | 3 | 2018-04-09T18:17:54.000Z | 2022-01-14T08:38:02.000Z | import json
import os
from dotenv import load_dotenv
import pytest
from ssaw import Client
@pytest.fixture(scope="session", autouse=True)
def load_env_vars(request):
curr_path = os.path.dirname(os.path.realpath(__file__))
env_path = os.path.join(curr_path, "tests/env_vars.sh")
load_dotenv(dotenv_path=env_path)
env_path = os.path.join(curr_path, "tests/env_vars_override.sh")
if os.path.isfile(env_path):
load_dotenv(dotenv_path=env_path, override=True)
@pytest.fixture(scope="session")
def session():
return Client(
os.environ.get("base_url"),
os.environ.get("SOLUTIONS_API_USER", ""),
os.environ.get("SOLUTIONS_API_PASSWORD", ""))
@pytest.fixture(scope="session")
def admin_session():
return Client(
os.environ.get("base_url"),
os.environ.get("admin_username", ""),
os.environ.get("admin_password", ""))
@pytest.fixture(scope="session")
def params():
return json.load(open("tests/params.json", mode="r"))
| 25.175 | 68 | 0.690169 | 142 | 1,007 | 4.676056 | 0.309859 | 0.081325 | 0.108434 | 0.150602 | 0.548193 | 0.451807 | 0.262048 | 0.262048 | 0.262048 | 0.262048 | 0 | 0 | 0.157895 | 1,007 | 39 | 69 | 25.820513 | 0.783019 | 0 | 0 | 0.25 | 0 | 0 | 0.171797 | 0.047666 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.071429 | 0.178571 | 0.107143 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 2 |
2d8672b519630d870ea9c4139ee7ef11673ec564 | 892 | py | Python | src/devices/esp32-test02/main.py | hwinther/lanot | f6700cacb3946535081624467b746fdfd38e021d | [
"Apache-2.0"
] | null | null | null | src/devices/esp32-test02/main.py | hwinther/lanot | f6700cacb3946535081624467b746fdfd38e021d | [
"Apache-2.0"
] | null | null | null | src/devices/esp32-test02/main.py | hwinther/lanot | f6700cacb3946535081624467b746fdfd38e021d | [
"Apache-2.0"
] | null | null | null | import test02
import prometheus.pgc as gc
import prometheus.server.multiserver
import prometheus.server.socketserver.udp
import prometheus.server.socketserver.tcp
import prometheus.server.socketserver.jsonrest
import prometheus.tftpd
import prometheus.logging as logging
gc.collect()
def td():
prometheus.tftpd.tftpd()
node = test02.Test02()
gc.collect()
logging.debug(gc.mem_free())
# multiserver = prometheus.server.multiserver.MultiServer()
udpserver = prometheus.server.socketserver.udp.UdpSocketServer(node)
# multiserver.add(udpserver)
gc.collect()
# tcpserver = prometheus.server.socketserver.tcp.TcpSocketServer(node)
# multiserver.add(tcpserver)
# gc.collect()
#
# jsonrestserver = prometheus.server.socketserver.jsonrest.JsonRestServer(node, loop_tick_delay=0.1)
# multiserver.add(jsonrestserver, bind_port=8080)
# gc.collect()
logging.boot(udpserver)
udpserver.start()
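A desktop-Python sketch of the UDP request/response exchange a `UdpSocketServer`-style loop presumably performs (stdlib `socket` only; the MicroPython-specific prometheus wiring is omitted, and `udp_round_trip` is an illustrative helper):

```python
import socket

def udp_round_trip(payload=b"ping"):
    # One-shot UDP exchange on loopback: the server echoes with a prefix,
    # standing in for whatever command handling the node would do.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))  # ephemeral port
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(payload, srv.getsockname())
    data, peer = srv.recvfrom(64)
    srv.sendto(b"ack:" + data, peer)
    reply, _ = cli.recvfrom(64)
    cli.close()
    srv.close()
    return reply
```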
| 24.777778 | 100 | 0.804933 | 105 | 892 | 6.8 | 0.352381 | 0.179272 | 0.235294 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014634 | 0.080717 | 892 | 35 | 101 | 25.485714 | 0.856098 | 0.39574 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.444444 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
2dac3026a24dba28c7853550faf33bad3ef07350 | 42,183 | py | Python | pysnmp/FN100-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/FN100-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/FN100-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module FN100-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/FN100-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 19:00:25 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, OctetString, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "Integer", "OctetString", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ValueRangeConstraint, ValueSizeConstraint, ConstraintsUnion, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ValueRangeConstraint", "ValueSizeConstraint", "ConstraintsUnion", "ConstraintsIntersection")
ModuleCompliance, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "NotificationGroup")
Gauge32, Bits, TimeTicks, NotificationType, mgmt, Counter64, Unsigned32, Counter32, enterprises, ObjectIdentity, MibIdentifier, ModuleIdentity, Integer32, IpAddress, MibScalar, MibTable, MibTableRow, MibTableColumn, iso = mibBuilder.importSymbols("SNMPv2-SMI", "Gauge32", "Bits", "TimeTicks", "NotificationType", "mgmt", "Counter64", "Unsigned32", "Counter32", "enterprises", "ObjectIdentity", "MibIdentifier", "ModuleIdentity", "Integer32", "IpAddress", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "iso")
DisplayString, TextualConvention, PhysAddress = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention", "PhysAddress")
cmu = MibIdentifier((1, 3, 6, 1, 4, 1, 3))
sigma = MibIdentifier((1, 3, 6, 1, 4, 1, 97))
sys = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 1))
platform = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5))
es_1fe = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3)).setLabel("es-1fe")
sfhw = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 1))
sfsw = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 2))
sfadmin = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 3))
sfswdis = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 4))
sfaddr = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 5))
sfif = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 6))
sfuart = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 7))
sfdebug = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 8))
sfproto = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 9))
sftrunk = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 10))
sfworkGroup = MibIdentifier((1, 3, 6, 1, 4, 1, 97, 5, 3, 11))
systems = MibIdentifier((1, 3, 6, 1, 4, 1, 3, 1))
mibs = MibIdentifier((1, 3, 6, 1, 4, 1, 3, 2))
cmuSNMP = MibIdentifier((1, 3, 6, 1, 4, 1, 3, 1, 1))
cmuKip = MibIdentifier((1, 3, 6, 1, 4, 1, 3, 1, 2))
cmuRouter = MibIdentifier((1, 3, 6, 1, 4, 1, 3, 1, 3))
sysID = MibScalar((1, 3, 6, 1, 4, 1, 97, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(2))).clone(namedValues=NamedValues(("es-1fe-bridge", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sysID.setStatus('mandatory')
sysReset = MibScalar((1, 3, 6, 1, 4, 1, 97, 1, 2), TimeTicks()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sysReset.setStatus('mandatory')
sysTrapAck = MibScalar((1, 3, 6, 1, 4, 1, 97, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("traps-need-acks", 1), ("traps-not-acked", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sysTrapAck.setStatus('mandatory')
sysTrapTime = MibScalar((1, 3, 6, 1, 4, 1, 97, 1, 4), TimeTicks()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sysTrapTime.setStatus('mandatory')
sysTrapRetry = MibScalar((1, 3, 6, 1, 4, 1, 97, 1, 5), TimeTicks()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sysTrapRetry.setStatus('mandatory')
sysTrapPort = MibScalar((1, 3, 6, 1, 4, 1, 97, 1, 6), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sysTrapPort.setStatus('mandatory')
sfhwDiagCode = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 1), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwDiagCode.setStatus('mandatory')
sfhwManufData = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwManufData.setStatus('mandatory')
sfhwPortCount = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwPortCount.setStatus('mandatory')
sfhwPortTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4), )
if mibBuilder.loadTexts: sfhwPortTable.setStatus('mandatory')
sfhwPortEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4, 1), ).setIndexNames((0, "FN100-MIB", "sfhwPortIndex"))
if mibBuilder.loadTexts: sfhwPortEntry.setStatus('mandatory')
sfhwPortIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwPortIndex.setStatus('mandatory')
sfhwPortType = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 6, 255))).clone(namedValues=NamedValues(("port-csma", 1), ("port-uart", 6), ("port-none", 255)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwPortType.setStatus('mandatory')
sfhwPortSubType = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(10, 13, 16, 80, 255))).clone(namedValues=NamedValues(("csmacd-fx", 10), ("csmacd-tpx", 13), ("csmacd-tpx-fx", 16), ("uart-female-9pin", 80), ("no-information", 255)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwPortSubType.setStatus('mandatory')
sfhwPortDiagPassed = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("diag-passed", 1), ("diag-failed", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwPortDiagPassed.setStatus('mandatory')
sfhwAddr = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 1, 4, 1, 5), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfhwAddr.setStatus('mandatory')

# Software fileset group (sfsw, 1.3.6.1.4.1.97.5.3.2)
sfswNumber = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswNumber.setStatus('mandatory')
sfswFilesetTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2))
if mibBuilder.loadTexts: sfswFilesetTable.setStatus('mandatory')
sfswFileset = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1)).setIndexNames((0, "FN100-MIB", "sfswIndex"))
if mibBuilder.loadTexts: sfswFileset.setStatus('mandatory')
sfswIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("currently-executing", 1), ("next-boot", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswIndex.setStatus('mandatory')
sfswDesc = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 2), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswDesc.setStatus('mandatory')
sfswCount = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswCount.setStatus('mandatory')
sfswType = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 4), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswType.setStatus('mandatory')
sfswSizes = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 5), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswSizes.setStatus('mandatory')
sfswStarts = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 6), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswStarts.setStatus('mandatory')
sfswBases = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 7), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswBases.setStatus('mandatory')
sfswFlashBank = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 2, 2, 1, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("first-bank", 1), ("second-bank", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswFlashBank.setStatus('mandatory')

# Administration group (sfadmin, 1.3.6.1.4.1.97.5.3.3)
sfadminFatalErr = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 1), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminFatalErr.setStatus('mandatory')
sfadminAnyPass = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 2), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminAnyPass.setStatus('mandatory')
sfadminGetPass = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 3), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminGetPass.setStatus('mandatory')
sfadminNMSIPAddr = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 4), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminNMSIPAddr.setStatus('mandatory')
sfadminAlarmDynamic = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminAlarmDynamic.setStatus('mandatory')
sfadminAlarmAddressChange = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminAlarmAddressChange.setStatus('mandatory')
sfadminStorageFailure = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminStorageFailure.setStatus('mandatory')
sfadminAuthenticationFailure = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 8), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminAuthenticationFailure.setStatus('mandatory')
sfadminMPReceiveCongests = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminMPReceiveCongests.setStatus('mandatory')
sfadminArpEntries = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminArpEntries.setStatus('mandatory')
sfadminArpStatics = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 11), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminArpStatics.setStatus('mandatory')
sfadminArpOverflows = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminArpOverflows.setStatus('mandatory')
sfadminIpEntries = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 13), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminIpEntries.setStatus('mandatory')
sfadminIpStatics = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminIpStatics.setStatus('mandatory')
sfadminStaticPreference = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 15), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminStaticPreference.setStatus('mandatory')
sfadminRipPreference = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 16), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminRipPreference.setStatus('mandatory')
sfadminRipRouteDiscards = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 17), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminRipRouteDiscards.setStatus('mandatory')
sfadminRebootConfig = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 18), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("no-change", 1), ("tftp-config", 2), ("revert-to-defaults", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminRebootConfig.setStatus('mandatory')
sfadminTempOK = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 19), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("temperature-normal", 1), ("temperature-too-hot", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfadminTempOK.setStatus('mandatory')
sfadminDisableButton = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 20), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminDisableButton.setStatus('mandatory')
sfadminButtonSelection = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 21), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("led-any-activity", 1), ("led-rx-activity", 2), ("led-tx-activity", 3), ("led-any-collision", 4), ("led-programmed", 5), ("led-speed", 6)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminButtonSelection.setStatus('mandatory')
sfadminLEDProgramOption = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 22), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1))).clone(namedValues=NamedValues(("program-led-any-error", 1)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminLEDProgramOption.setStatus('mandatory')
sfadminVirtualSwitch1 = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 23), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminVirtualSwitch1.setStatus('mandatory')
sfadminVirtualSwitch2 = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 24), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminVirtualSwitch2.setStatus('mandatory')
sfadminVirtualSwitch3 = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 25), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminVirtualSwitch3.setStatus('mandatory')
sfadminVirtualSwitch4 = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 26), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminVirtualSwitch4.setStatus('mandatory')
sfadminDefaultVirtualSwitch = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 3, 27), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("virtual-switch-1", 1), ("virtual-switch-2", 2), ("virtual-switch-3", 3), ("virtual-switch-4", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfadminDefaultVirtualSwitch.setStatus('mandatory')

# Software distribution group (sfswdis, 1.3.6.1.4.1.97.5.3.4)
sfswdisDesc = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 4, 1), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswdisDesc.setStatus('mandatory')
sfswdisAccess = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 4, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("protected", 1), ("any-software", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfswdisAccess.setStatus('mandatory')
sfswdisWriteStatus = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 4, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5))).clone(namedValues=NamedValues(("in-progress", 1), ("success", 2), ("config-error", 3), ("flash-error", 4), ("config-and-flash-errors", 5)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfswdisWriteStatus.setStatus('mandatory')
sfswdisConfigIp = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 4, 4), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfswdisConfigIp.setStatus('mandatory')
sfswdisConfigRetryTime = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 4, 5), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfswdisConfigRetryTime.setStatus('mandatory')
sfswdisConfigTotalTimeout = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 4, 6), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfswdisConfigTotalTimeout.setStatus('mandatory')

# Address database group (sfaddr, 1.3.6.1.4.1.97.5.3.5)
sfaddrDynamics = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDynamics.setStatus('mandatory')
sfaddrDynamicMax = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 2), Gauge32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrDynamicMax.setStatus('mandatory')
sfaddrFlags = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 3), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrFlags.setStatus('mandatory')
sfaddrMAC = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 4), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrMAC.setStatus('mandatory')
sfaddrPort = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 5), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrPort.setStatus('mandatory')
sfaddrOperation = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("read-random", 1), ("read-next", 2), ("reserved", 3), ("update", 4), ("delete", 5), ("read-block", 6)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrOperation.setStatus('mandatory')
sfaddrIndex = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 7), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrIndex.setStatus('mandatory')
sfaddrNext = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 8), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrNext.setStatus('mandatory')
sfaddrBlockSize = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 9), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrBlockSize.setStatus('mandatory')
sfaddrBlock = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 10), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrBlock.setStatus('mandatory')
sfaddrAlarmMAC = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 11), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrAlarmMAC.setStatus('mandatory')
sfaddrDbFullBuckets = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 12), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbFullBuckets.setStatus('mandatory')
sfaddrDbMaxFullBuckets = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 13), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrDbMaxFullBuckets.setStatus('mandatory')
sfaddrDbMaxSize = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 14), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbMaxSize.setStatus('mandatory')
sfaddrDbBuckets = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 15), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbBuckets.setStatus('mandatory')
sfaddrDbSearchDepth = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 16), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfaddrDbSearchDepth.setStatus('mandatory')
sfaddrDbDistribution = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 17), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbDistribution.setStatus('mandatory')
sfaddrDbTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 18))
if mibBuilder.loadTexts: sfaddrDbTable.setStatus('mandatory')
sfaddrDbEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 18, 1)).setIndexNames((0, "FN100-MIB", "sfaddrDbBucketAddress"))
if mibBuilder.loadTexts: sfaddrDbEntry.setStatus('mandatory')
sfaddrDbBucketAddress = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 18, 1, 1), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 4))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbBucketAddress.setStatus('mandatory')
sfaddrDbBucketEntCnt = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 18, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbBucketEntCnt.setStatus('mandatory')
sfaddrDbBucketEntries = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 5, 18, 1, 3), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfaddrDbBucketEntries.setStatus('mandatory')

# Interface table (sfif, 1.3.6.1.4.1.97.5.3.6)
sfifTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1))
if mibBuilder.loadTexts: sfifTable.setStatus('mandatory')
sfifEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1)).setIndexNames((0, "FN100-MIB", "sfifIndex"))
if mibBuilder.loadTexts: sfifEntry.setStatus('mandatory')
sfifIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifIndex.setStatus('mandatory')
sfifRxCnt = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifRxCnt.setStatus('mandatory')
sfifTxCnt = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifTxCnt.setStatus('mandatory')
sfifTxStormCnt = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 4), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifTxStormCnt.setStatus('mandatory')
sfifTxStormTime = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 5), TimeTicks()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifTxStormTime.setStatus('mandatory')
sfifFilterFloodSourceSame = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifFilterFloodSourceSame.setStatus('mandatory')
sfifFunction = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 7), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifFunction.setStatus('mandatory')
sfifRxPacket = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 8), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifRxPacket.setStatus('mandatory')
sfifRxHwFCSs = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifRxHwFCSs.setStatus('mandatory')
sfifRxQueues = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifRxQueues.setStatus('mandatory')
sfifTxPacket = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 11), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifTxPacket.setStatus('mandatory')
sfifTxStorms = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifTxStorms.setStatus('mandatory')
sfifStatisticsTime = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 13), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifStatisticsTime.setStatus('mandatory')
sfifIpAddr = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 14), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifIpAddr.setStatus('mandatory')
sfifIpGroupAddr = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 15), IpAddress()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifIpGroupAddr.setStatus('mandatory')
sfifRxForwardChars = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 16), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifRxForwardChars.setStatus('mandatory')
sfifRxFilteredChars = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 17), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifRxFilteredChars.setStatus('mandatory')
sfifSpeed = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 18), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifSpeed.setStatus('mandatory')
sfifMgntRxQueueSize = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 19), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifMgntRxQueueSize.setStatus('mandatory')
sfifVirtualSwitchID = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 20), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifVirtualSwitchID.setStatus('mandatory')
sfifTPLinkOK = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 21), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifTPLinkOK.setStatus('mandatory')
sfifLedOn = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 22), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("led-on", 1), ("led-off", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifLedOn.setStatus('mandatory')
sfifTxCollisions = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 23), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifTxCollisions.setStatus('mandatory')
sfifFuseOkay = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 24), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifFuseOkay.setStatus('mandatory')
sfifCrashEvents = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 25), Counter32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifCrashEvents.setStatus('mandatory')
sfifCrashTime = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 26), TimeTicks()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifCrashTime.setStatus('mandatory')
sfifMinimumUpTime = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 27), TimeTicks()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifMinimumUpTime.setStatus('mandatory')
sfifDMAFlowControlEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 28), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifDMAFlowControlEnable.setStatus('mandatory')
sfifDMARetryCount = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 29), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifDMARetryCount.setStatus('mandatory')
sfifDMARetryBufferCount = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 30), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifDMARetryBufferCount.setStatus('mandatory')
sfifDMAPeakRetries = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 31), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifDMAPeakRetries.setStatus('mandatory')
sfifDMATotalRetries = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 32), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifDMATotalRetries.setStatus('mandatory')
sfifDMAPackets = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 33), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifDMAPackets.setStatus('mandatory')
sfifDMADroppedPackets = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 34), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifDMADroppedPackets.setStatus('mandatory')
sfifDescr = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 35), DisplayString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifDescr.setStatus('mandatory')
sfifMgtDroppedPackets = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 36), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifMgtDroppedPackets.setStatus('mandatory')
sfifLinkStatusOutages = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 37), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfifLinkStatusOutages.setStatus('mandatory')
sfifLocalFilter = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 6, 1, 1, 38), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("hardware", 1), ("software", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfifLocalFilter.setStatus('mandatory')

# UART port table (sfuart, 1.3.6.1.4.1.97.5.3.7)
sfuartTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 7, 1))
if mibBuilder.loadTexts: sfuartTable.setStatus('mandatory')
sfuartEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 7, 1, 1)).setIndexNames((0, "FN100-MIB", "sfuartIndex"))
if mibBuilder.loadTexts: sfuartEntry.setStatus('mandatory')
sfuartIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 7, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfuartIndex.setStatus('mandatory')
sfuartBaud = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 7, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11))).clone(namedValues=NamedValues(("external-clock", 1), ("b1200-baud", 2), ("b2400-baud", 3), ("b4800-baud", 4), ("b9600-baud", 5), ("b19200-baud", 6), ("b38400-baud", 7), ("b56-kilobits", 8), ("b1544-kilobits", 9), ("b2048-kilobits", 10), ("b45-megabits", 11)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfuartBaud.setStatus('mandatory')
sfuartAlignmentErrors = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 7, 1, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfuartAlignmentErrors.setStatus('mandatory')
sfuartOverrunErrors = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 7, 1, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfuartOverrunErrors.setStatus('mandatory')

# Debug group (sfdebug, 1.3.6.1.4.1.97.5.3.8)
sfdebugStringID = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfdebugStringID.setStatus('mandatory')
sfdebugString = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 2), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfdebugString.setStatus('mandatory')
sfdebugTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3))
if mibBuilder.loadTexts: sfdebugTable.setStatus('mandatory')
sfdebugEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3, 1)).setIndexNames((0, "FN100-MIB", "sfdebugIndex"))
if mibBuilder.loadTexts: sfdebugEntry.setStatus('mandatory')
sfdebugIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3, 1, 1), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 100))).clone(namedValues=NamedValues(("debug-port1", 1), ("debug-port2", 2), ("debug-port3", 3), ("debug-port4", 4), ("debug-port5", 5), ("debug-port6", 6), ("debug-port7", 7), ("debug-port8", 8), ("debug-port9", 9), ("debug-port10", 10), ("debug-port11", 11), ("debug-port12", 12), ("debug-mp", 100)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfdebugIndex.setStatus('mandatory')
sfdebugOperation = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("examine", 1), ("modify", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfdebugOperation.setStatus('mandatory')
sfdebugBase = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3, 1, 3), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfdebugBase.setStatus('mandatory')
sfdebugLength = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3, 1, 4), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfdebugLength.setStatus('mandatory')
sfdebugData = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 8, 3, 1, 5), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfdebugData.setStatus('mandatory')

# Per-interface protocol settings (sfproto, 1.3.6.1.4.1.97.5.3.9)
sfprotoTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1))
if mibBuilder.loadTexts: sfprotoTable.setStatus('mandatory')
sfprotoEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1, 1)).setIndexNames((0, "FN100-MIB", "sfprotoIfIndex"))
if mibBuilder.loadTexts: sfprotoEntry.setStatus('mandatory')
sfprotoIfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfprotoIfIndex.setStatus('mandatory')
sfprotoBridge = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 4))).clone(namedValues=NamedValues(("transparent", 1), ("none", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfprotoBridge.setStatus('mandatory')
sfprotoSuppressBpdu = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("normal", 1), ("suppressed", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfprotoSuppressBpdu.setStatus('mandatory')
sfprotoRipListen = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfprotoRipListen.setStatus('mandatory')
sfprotoTrunking = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 9, 1, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enabled", 1), ("disabled", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfprotoTrunking.setStatus('mandatory')

# Trunk link table (sftrunk, 1.3.6.1.4.1.97.5.3.10)
sftrunkTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1))
if mibBuilder.loadTexts: sftrunkTable.setStatus('mandatory')
sftrunkEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1)).setIndexNames((0, "FN100-MIB", "sftrunkIfIndex"))
if mibBuilder.loadTexts: sftrunkEntry.setStatus('mandatory')
sftrunkIfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkIfIndex.setStatus('mandatory')
sftrunkState = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("closed", 1), ("oneway", 2), ("joined", 3), ("helddown", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkState.setStatus('mandatory')
sftrunkRemoteBridgeId = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 3), OctetString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkRemoteBridgeId.setStatus('mandatory')
sftrunkRemoteIp = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 4), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkRemoteIp.setStatus('mandatory')
sftrunkLastError = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("none", 1), ("in-bpdu", 2), ("multiple-bridges", 3), ("no-ack", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkLastError.setStatus('mandatory')
sftrunkLinkOrdinal = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 6), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkLinkOrdinal.setStatus('mandatory')
sftrunkLinkCount = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 7), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkLinkCount.setStatus('mandatory')
sftrunkLastChange = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 10, 1, 1, 8), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sftrunkLastChange.setStatus('mandatory')

# Workgroup group (sfworkGroup, 1.3.6.1.4.1.97.5.3.11)
sfworkGroupNextIndex = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfworkGroupNextIndex.setStatus('mandatory')
sfworkGroupCurrentCounts = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 2), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfworkGroupCurrentCounts.setStatus('mandatory')
sfworkGroupMaxCount = MibScalar((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 3), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfworkGroupMaxCount.setStatus('mandatory')
sfworkGroupTable = MibTable((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 4))
if mibBuilder.loadTexts: sfworkGroupTable.setStatus('mandatory')
sfworkGroupEntry = MibTableRow((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 4, 1)).setIndexNames((0, "FN100-MIB", "sfworkGroupIndex"))
if mibBuilder.loadTexts: sfworkGroupEntry.setStatus('mandatory')
sfworkGroupIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 4, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sfworkGroupIndex.setStatus('mandatory')
sfworkGroupName = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 4, 1, 2), DisplayString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfworkGroupName.setStatus('mandatory')
sfworkGroupType = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 4, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("workgroup-all", 1), ("workgroup-multicast", 2), ("workgroup-unicast", 3), ("workgroup-invalid", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfworkGroupType.setStatus('mandatory')
sfworkGroupPort = MibTableColumn((1, 3, 6, 1, 4, 1, 97, 5, 3, 11, 4, 1, 4), OctetString()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sfworkGroupPort.setStatus('mandatory')
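
# Illustrative usage (comments only, not part of the generated module): once
# this compiled module sits on a MibBuilder search path, its symbols can be
# resolved by name with pysnmp, e.g.:
#
#   from pysnmp.smi import builder, view
#   from pysnmp.smi.rfc1902 import ObjectIdentity
#
#   mibBuilder = builder.MibBuilder()   # add this module's directory as a MIB source
#   mibView = view.MibViewController(mibBuilder)
#   objId = ObjectIdentity('FN100-MIB', 'sysTrapAck').resolveWithMib(mibView)
#   # objId.getOid() resolves to 1.3.6.1.4.1.97.1.3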
mibBuilder.exportSymbols("FN100-MIB", sfadminVirtualSwitch4=sfadminVirtualSwitch4, sfifTxStormCnt=sfifTxStormCnt, sfadminArpEntries=sfadminArpEntries, sfifDMATotalRetries=sfifDMATotalRetries, sfaddrDbFullBuckets=sfaddrDbFullBuckets, sfifTxStormTime=sfifTxStormTime, sfif=sfif, sfaddr=sfaddr, sfhwPortCount=sfhwPortCount, sfadminVirtualSwitch2=sfadminVirtualSwitch2, sfadminAlarmAddressChange=sfadminAlarmAddressChange, sftrunkIfIndex=sftrunkIfIndex, sfworkGroupType=sfworkGroupType, sfaddrDbEntry=sfaddrDbEntry, platform=platform, sfadmin=sfadmin, cmuKip=cmuKip, sfswSizes=sfswSizes, sfprotoIfIndex=sfprotoIfIndex, sfswDesc=sfswDesc, sfifMgntRxQueueSize=sfifMgntRxQueueSize, sfdebugTable=sfdebugTable, sfuartOverrunErrors=sfuartOverrunErrors, sftrunkLinkCount=sftrunkLinkCount, sfifMgtDroppedPackets=sfifMgtDroppedPackets, sfworkGroupName=sfworkGroupName, sfifFilterFloodSourceSame=sfifFilterFloodSourceSame, sfhwPortTable=sfhwPortTable, sfadminLEDProgramOption=sfadminLEDProgramOption, sfifCrashTime=sfifCrashTime, sfworkGroup=sfworkGroup, sftrunkTable=sftrunkTable, sfifRxForwardChars=sfifRxForwardChars, sfprotoTrunking=sfprotoTrunking, sfadminArpStatics=sfadminArpStatics, sfifSpeed=sfifSpeed, es_1fe=es_1fe, sfifIpAddr=sfifIpAddr, sfadminVirtualSwitch3=sfadminVirtualSwitch3, sfifDMAPeakRetries=sfifDMAPeakRetries, sfaddrDbMaxFullBuckets=sfaddrDbMaxFullBuckets, sfadminRipPreference=sfadminRipPreference, sfifIndex=sfifIndex, sfworkGroupTable=sfworkGroupTable, sfifRxQueues=sfifRxQueues, sfifLedOn=sfifLedOn, sftrunkState=sftrunkState, sfprotoSuppressBpdu=sfprotoSuppressBpdu, sfadminAnyPass=sfadminAnyPass, sftrunkLastError=sftrunkLastError, cmu=cmu, sfdebug=sfdebug, sfaddrOperation=sfaddrOperation, sftrunkRemoteIp=sftrunkRemoteIp, sfproto=sfproto, sfswdisWriteStatus=sfswdisWriteStatus, sfaddrDbBucketEntries=sfaddrDbBucketEntries, sfsw=sfsw, sfswFilesetTable=sfswFilesetTable, sfadminTempOK=sfadminTempOK, sfadminIpEntries=sfadminIpEntries, sfifDescr=sfifDescr, sfdebugEntry=sfdebugEntry, 
sfswBases=sfswBases, sfhwPortSubType=sfhwPortSubType, sfifDMARetryCount=sfifDMARetryCount, sysTrapRetry=sysTrapRetry, sfhwManufData=sfhwManufData, sfadminArpOverflows=sfadminArpOverflows, sysID=sysID, sfhwAddr=sfhwAddr, sfprotoEntry=sfprotoEntry, sfhwPortIndex=sfhwPortIndex, sfuartAlignmentErrors=sfuartAlignmentErrors, sfswIndex=sfswIndex, sfaddrFlags=sfaddrFlags, sfuartEntry=sfuartEntry, sfifRxCnt=sfifRxCnt, sfifVirtualSwitchID=sfifVirtualSwitchID, sysTrapAck=sysTrapAck, sysTrapTime=sysTrapTime, sfhwPortDiagPassed=sfhwPortDiagPassed, sfadminGetPass=sfadminGetPass, sfhwPortEntry=sfhwPortEntry, sfdebugString=sfdebugString, sfaddrDbBucketEntCnt=sfaddrDbBucketEntCnt, sfadminStaticPreference=sfadminStaticPreference, sfifTable=sfifTable, sfadminRebootConfig=sfadminRebootConfig, sfswdis=sfswdis, sfswCount=sfswCount, sfswType=sfswType, sftrunkEntry=sftrunkEntry, sfaddrAlarmMAC=sfaddrAlarmMAC, sfadminAuthenticationFailure=sfadminAuthenticationFailure, sfdebugData=sfdebugData, sftrunkLastChange=sftrunkLastChange, sfifFuseOkay=sfifFuseOkay, sfworkGroupPort=sfworkGroupPort, sfaddrDbSearchDepth=sfaddrDbSearchDepth, sfswdisConfigRetryTime=sfswdisConfigRetryTime, sfdebugBase=sfdebugBase, sfaddrDbDistribution=sfaddrDbDistribution, sfadminVirtualSwitch1=sfadminVirtualSwitch1, sfprotoRipListen=sfprotoRipListen, sfadminIpStatics=sfadminIpStatics, sfaddrPort=sfaddrPort, sfadminDefaultVirtualSwitch=sfadminDefaultVirtualSwitch, sfifStatisticsTime=sfifStatisticsTime, sfuartIndex=sfuartIndex, sfprotoBridge=sfprotoBridge, sfworkGroupEntry=sfworkGroupEntry, sfuartBaud=sfuartBaud, sfdebugOperation=sfdebugOperation, sfadminButtonSelection=sfadminButtonSelection, sfworkGroupCurrentCounts=sfworkGroupCurrentCounts, sfadminAlarmDynamic=sfadminAlarmDynamic, sfdebugStringID=sfdebugStringID, sfworkGroupMaxCount=sfworkGroupMaxCount, sfifRxPacket=sfifRxPacket, sfprotoTable=sfprotoTable, sfaddrDbMaxSize=sfaddrDbMaxSize, sfaddrDynamicMax=sfaddrDynamicMax, cmuRouter=cmuRouter, sfhwDiagCode=sfhwDiagCode, 
systems=systems, sfswdisConfigTotalTimeout=sfswdisConfigTotalTimeout, sftrunk=sftrunk, sfuartTable=sfuartTable, sfadminRipRouteDiscards=sfadminRipRouteDiscards, sysReset=sysReset, sftrunkRemoteBridgeId=sftrunkRemoteBridgeId, sysTrapPort=sysTrapPort, sfadminDisableButton=sfadminDisableButton, sftrunkLinkOrdinal=sftrunkLinkOrdinal, sfaddrDynamics=sfaddrDynamics, sfifTPLinkOK=sfifTPLinkOK, sfifRxHwFCSs=sfifRxHwFCSs, sfhw=sfhw, sys=sys, sfuart=sfuart, sfifIpGroupAddr=sfifIpGroupAddr, sfswFileset=sfswFileset, sfifDMAPackets=sfifDMAPackets, sfifLinkStatusOutages=sfifLinkStatusOutages, sfhwPortType=sfhwPortType, sfadminStorageFailure=sfadminStorageFailure, sfifCrashEvents=sfifCrashEvents, sfworkGroupIndex=sfworkGroupIndex, sfdebugLength=sfdebugLength, sfaddrMAC=sfaddrMAC, sfswFlashBank=sfswFlashBank, sfswNumber=sfswNumber, sfadminMPReceiveCongests=sfadminMPReceiveCongests, sfswdisAccess=sfswdisAccess, sfswStarts=sfswStarts, sfifDMAFlowControlEnable=sfifDMAFlowControlEnable, sfdebugIndex=sfdebugIndex, sfifRxFilteredChars=sfifRxFilteredChars, sfaddrDbTable=sfaddrDbTable, sfifEntry=sfifEntry, sfaddrIndex=sfaddrIndex, sfifDMADroppedPackets=sfifDMADroppedPackets, sfadminNMSIPAddr=sfadminNMSIPAddr, cmuSNMP=cmuSNMP, sfswdisConfigIp=sfswdisConfigIp, sfifTxCollisions=sfifTxCollisions, sigma=sigma, sfifLocalFilter=sfifLocalFilter, sfworkGroupNextIndex=sfworkGroupNextIndex, sfifTxCnt=sfifTxCnt, sfifMinimumUpTime=sfifMinimumUpTime, sfaddrDbBuckets=sfaddrDbBuckets, mibs=mibs, sfswdisDesc=sfswdisDesc, sfaddrDbBucketAddress=sfaddrDbBucketAddress, sfifTxPacket=sfifTxPacket, sfaddrNext=sfaddrNext, sfaddrBlock=sfaddrBlock, sfaddrBlockSize=sfaddrBlockSize, sfifDMARetryBufferCount=sfifDMARetryBufferCount, sfifTxStorms=sfifTxStorms, sfifFunction=sfifFunction, sfadminFatalErr=sfadminFatalErr)
| 116.527624 | 5,793 | 0.740986 | 5,164 | 42,183 | 6.052285 | 0.083462 | 0.014462 | 0.021501 | 0.023549 | 0.491009 | 0.471908 | 0.353427 | 0.29577 | 0.261631 | 0.239585 | 0 | 0.079359 | 0.088898 | 42,183 | 361 | 5,794 | 116.850416 | 0.733849 | 0.007396 | 0 | 0 | 0 | 0 | 0.108045 | 0.002604 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.019774 | 0.016949 | 0 | 0.016949 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
93026b98d3fba06a6da7f6c252116e277bbb44a1 | 388 | py | Python | scripts/buildAndCopyWebApp.py | stanbar/stasbar-app | 3869ac8147d854b9fde6442398d89cfc51f5b401 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | scripts/buildAndCopyWebApp.py | stanbar/stasbar-app | 3869ac8147d854b9fde6442398d89cfc51f5b401 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | scripts/buildAndCopyWebApp.py | stanbar/stasbar-app | 3869ac8147d854b9fde6442398d89cfc51f5b401 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-02-11T08:33:38.000Z | 2020-02-11T08:33:38.000Z | #!/usr/local/bin/python3
import os
import shutil
import subprocess  # required for the subprocess.call below; missing in the original

while not os.getcwd().lower().endswith("stasbar-app"):
    os.chdir("..")
os.chdir("frontend")
subprocess.call(['npm', 'run', 'build'])
os.chdir("..")
shutil.rmtree("backend/src/main/resources/assets/static/js")
os.system("cp -rf frontend/build/ backend/src/main/resources/assets/")
os.system("git add backend/src/main/resources/assets")
| 25.866667 | 70 | 0.713918 | 57 | 388 | 4.859649 | 0.596491 | 0.075812 | 0.151625 | 0.249097 | 0.314079 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00277 | 0.069588 | 388 | 14 | 71 | 27.714286 | 0.764543 | 0.059278 | 0 | 0.2 | 0 | 0 | 0.480769 | 0.302198 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
930b9eb6e046a663541704ee491d286a4493f0d9 | 574 | py | Python | src/pyq/tests/test_version.py | kmr0877/pyq | dcad1f5d52f9b0df4a77f2918af4fdd5c00a5d80 | [
"Apache-2.0"
] | 149 | 2017-11-04T17:11:22.000Z | 2022-03-12T16:27:28.000Z | src/pyq/tests/test_version.py | kmr0877/pyq | dcad1f5d52f9b0df4a77f2918af4fdd5c00a5d80 | [
"Apache-2.0"
] | 111 | 2017-11-08T03:06:20.000Z | 2021-10-06T17:16:11.000Z | src/pyq/tests/test_version.py | kmr0877/pyq | dcad1f5d52f9b0df4a77f2918af4fdd5c00a5d80 | [
"Apache-2.0"
] | 35 | 2017-11-05T05:37:38.000Z | 2021-08-03T07:59:42.000Z | from __future__ import absolute_import
import sys
import pytest
import pkg_resources
def test_version():
    from pyq import __version__ as v
    pv = pkg_resources.parse_version(v)
    assert pv
    assert v != 'unknown'


@pytest.fixture
def no_version_pyq(monkeypatch):
    monkeypatch.setitem(sys.modules, 'pyq.version', None)
    monkeypatch.delitem(sys.modules, 'pyq', raising=False)
    monkeypatch.delitem(sys.modules, 'pyq._k', raising=False)
    import pyq
    return pyq


def test_no_version(no_version_pyq):
    assert no_version_pyq.__version__ == 'unknown'
| 22.96 | 61 | 0.743902 | 78 | 574 | 5.141026 | 0.371795 | 0.089776 | 0.089776 | 0.139651 | 0.154613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165505 | 574 | 24 | 62 | 23.916667 | 0.837161 | 0 | 0 | 0 | 0 | 0 | 0.059233 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
932a56e1b94ec5a38659a225ab80367b5c170783 | 1,969 | py | Python | sachima/config.py | DessertsLab/Sachima | 5ddf2c8afd493b593e36703dbb09b000b08eeede | [
"MIT"
] | 4 | 2019-01-25T01:44:36.000Z | 2020-06-28T00:44:43.000Z | sachima/config.py | DessertsLab/Sachima | 5ddf2c8afd493b593e36703dbb09b000b08eeede | [
"MIT"
] | 154 | 2019-01-28T03:35:34.000Z | 2022-03-24T03:04:25.000Z | sachima/config.py | DessertsLab/Sachima | 5ddf2c8afd493b593e36703dbb09b000b08eeede | [
"MIT"
] | 1 | 2019-02-18T06:10:55.000Z | 2019-02-18T06:10:55.000Z | import sys
import os
import logging
sys.path.insert(0, os.getcwd())
# from sachima.log import logger
BASE_DIR = os.path.abspath(os.path.dirname(__file__))
PROJ_DIR = "./"
# ---------------------------------------------------------
# Superset API
# ---------------------------------------------------------
SUPERSET_WEBSERVER_ADDRESS = "http://0.0.0.0/"
SUPERSET_WEBSERVER_PORT = 8088
SUPERSET_USERNAME = "admin"
SUPERSET_PASSWORD = "general"
SUPERSET_API_TABLE_BP = "/sachima/v1/save_or_overwrite_slice/"
# ---------------------------------------------------------
# mail config
# ---------------------------------------------------------
MAIL_HOST = ""
MAIL_ADD = ""
MAIL_USER = ""
MAIL_PASS = ""
MAIL_SENDER = "MAIL_SENDER"
# ---------------------------------------------------------
# sns config
# ---------------------------------------------------------
SNS_DINGDING_ERROR_GRP_TOKEN = ""
SNS_DINGDING_INFO_GRP_TOKEN = ""
SNS_DINGDING_SENDING_STR = ""
SNS_DINGDING_ERRSENT_STR = ""
# ---------------------------------------------------------
# db config
# ---------------------------------------------------------
DB_SUPERSET = "~/.superset/superset.db"
# ---------------------------------------------------------
# logging config
# ---------------------------------------------------------
LOG_LEVEL = logging.DEBUG
LOG_DIR = PROJ_DIR + "/logs"
# ---------------------------------------------------------
# Custom config
# ---------------------------------------------------------
try:
    from sachima_config import *  # noqa
    import sachima_config

    print("Loading local config file: [{}]".format(sachima_config.__file__))
except ImportError as error:
    print("[There is no sachima_config.py. You're not in sachima project.]")

try:
    from cache_list import CACHE_LIST  # noqa
    import cache_list

    print("Loading cache_list file: [{}]".format(cache_list.__file__))
except ImportError as error:
    print("[There is no cache_list.py.]")
| 28.955882 | 76 | 0.465719 | 178 | 1,969 | 4.814607 | 0.432584 | 0.063011 | 0.007001 | 0.044341 | 0.098016 | 0.098016 | 0.098016 | 0.098016 | 0.098016 | 0 | 0 | 0.005682 | 0.106145 | 1,969 | 67 | 77 | 29.38806 | 0.48125 | 0.411884 | 0 | 0.114286 | 0 | 0 | 0.224472 | 0.051937 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.057143 | 0.257143 | 0 | 0.257143 | 0.114286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |