hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d9d08f09796f0ebab0d3e9eaa58ab68dad4a5334 | 218 | py | Python | home/urls.py | gitkp11/myportfolio | 0e208413d7a2497e3c11b9dbe6be50314f5269b6 | [
"MIT"
] | null | null | null | home/urls.py | gitkp11/myportfolio | 0e208413d7a2497e3c11b9dbe6be50314f5269b6 | [
"MIT"
] | null | null | null | home/urls.py | gitkp11/myportfolio | 0e208413d7a2497e3c11b9dbe6be50314f5269b6 | [
"MIT"
] | null | null | null | from django.urls import path
from .views import HomePageView, SingleBlogView

urlpatterns = [
    path('', HomePageView.as_view(), name='home'),
    path('blogsingle/', SingleBlogView.as_view(), name='blogsingle'),
]
| 24.222222 | 69 | 0.715596 | 24 | 218 | 6.416667 | 0.583333 | 0.077922 | 0.12987 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133028 | 218 | 8 | 70 | 27.25 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.114679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
d9e3787be6fb41bfb50d9aef076291f7b58ee091 | 202 | py | Python | airtable.py | jtviolet/airtable-auto-asignee | defb900911d694e1eb4e7d046a84d1d37f0f6a62 | [
"MIT"
] | 2 | 2019-04-27T17:09:59.000Z | 2019-04-27T18:23:30.000Z | airtable.py | jtviolet/airtable-auto-asignee | defb900911d694e1eb4e7d046a84d1d37f0f6a62 | [
"MIT"
] | null | null | null | airtable.py | jtviolet/airtable-auto-asignee | defb900911d694e1eb4e7d046a84d1d37f0f6a62 | [
"MIT"
] | null | null | null | class AirTable:
    # AirTable configuration variables
    API_KEY = ''
    BASE = ''
    PROJECT_TABLE_NAME = ''
    PROJECT_PHASE_OWNERS_TABLE = ''
    PROJECT_PHASE_FIELD = ''
    ASSIGNEE_FIELD = '' | 25.25 | 38 | 0.648515 | 20 | 202 | 6.1 | 0.7 | 0.196721 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252475 | 202 | 8 | 39 | 25.25 | 0.807947 | 0.158416 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8a00daf261c3ebe2300b2bb5d4a023e692f15ea3 | 2,223 | py | Python | our_scripts/run_scripts/run_ideal_some.py | shrivats-pu/Prescient | 3d4238e98ddd767e2b81adc4091bb723dbf563d3 | [
"BSD-3-Clause"
] | 1 | 2021-10-14T20:39:50.000Z | 2021-10-14T20:39:50.000Z | our_scripts/run_scripts/run_ideal_some.py | shrivats-pu/Prescient | 3d4238e98ddd767e2b81adc4091bb723dbf563d3 | [
"BSD-3-Clause"
] | null | null | null | our_scripts/run_scripts/run_ideal_some.py | shrivats-pu/Prescient | 3d4238e98ddd767e2b81adc4091bb723dbf563d3 | [
"BSD-3-Clause"
] | null | null | null | # run_ideal_all.py: run scenarios for zone 1 solar assets where everything is stochastic except each asset.
# requirements: proper install of Prescient and download of rts-gmlc data. saves outputs in non-collated form in downloads folder.
# intended system: Tiger
# dependencies: run_helpers.py
# author: Ethan Reese
# email: ereese@princeton.edu
# Created: June 16, 2021
import os
import prescient_helpers.run_helpers as rh
import numpy as np
import pandas as pd
import sys
path_template = "./scenario_ideal_"
solar_path = "./solar_quotients.csv"
no_solar_path = "./no_solar_quotients.csv"
runs = 100
#deterministic_assets = sys.argv[1]
def run(i, det_assets):
    rh.copy_directory(i, path_template)
    os.chdir(path_template+'%03d'%i)
    rh.perturb_data(rh.file_paths_combined, solar_path, no_solar_path, deterministic_assets=det_assets)
    rh.run_prescient(i, True)
    os.chdir("..")
# program body
os.chdir("..")
os.chdir("..")
os.chdir("./downloads")
assets = ['./timeseries_data_files/101_PV_1_forecasts_actuals.csv','./timeseries_data_files/101_PV_2_forecasts_actuals.csv',
'./timeseries_data_files/101_PV_3_forecasts_actuals.csv','./timeseries_data_files/101_PV_4_forecasts_actuals.csv',
'./timeseries_data_files/102_PV_1_forecasts_actuals.csv','./timeseries_data_files/102_PV_2_forecasts_actuals.csv',
'./timeseries_data_files/103_PV_1_forecasts_actuals.csv','./timeseries_data_files/104_PV_1_forecasts_actuals.csv',
'./timeseries_data_files/113_PV_1_forecasts_actuals.csv','./timeseries_data_files/118_RTPV_1_forecasts_actuals.csv',
'./timeseries_data_files/118_RTPV_2_forecasts_actuals.csv','./timeseries_data_files/118_RTPV_3_forecasts_actuals.csv',
'./timeseries_data_files/118_RTPV_4_forecasts_actuals.csv','./timeseries_data_files/118_RTPV_5_forecasts_actuals.csv',
'./timeseries_data_files/119_PV_1_forecasts_actuals.csv', './timeseries_data_files/215_PV_1_forecasts_actuals.csv',
]
for deterministic_assets in assets:
    path_template = "id_" + deterministic_assets[24:-4] + "_"
    for j in range(runs):
        run(j, [deterministic_assets])
| 43.588235 | 132 | 0.745839 | 313 | 2,223 | 4.881789 | 0.338658 | 0.146597 | 0.198953 | 0.284686 | 0.456806 | 0.42801 | 0.403141 | 0.399869 | 0.060209 | 0 | 0 | 0.04215 | 0.146199 | 2,223 | 50 | 133 | 44.46 | 0.762908 | 0.181736 | 0 | 0.096774 | 0 | 0 | 0.531527 | 0.508296 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.16129 | 0 | 0.193548 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a030e591a118e260d13e034ce7d2adb2fe77959 | 4,856 | py | Python | sdk/storage/azure-storage-file-share/azure/storage/fileshare/_generated/models/__init__.py | JianpingChen/azure-sdk-for-python | 3072fc8c0366287fbaea1b02493a50259c3248a2 | [
"MIT"
] | 3 | 2020-06-23T02:25:27.000Z | 2021-09-07T18:48:11.000Z | sdk/storage/azure-storage-file-share/azure/storage/fileshare/_generated/models/__init__.py | JianpingChen/azure-sdk-for-python | 3072fc8c0366287fbaea1b02493a50259c3248a2 | [
"MIT"
] | 510 | 2019-07-17T16:11:19.000Z | 2021-08-02T08:38:32.000Z | sdk/storage/azure-storage-file-share/azure/storage/fileshare/_generated/models/__init__.py | JianpingChen/azure-sdk-for-python | 3072fc8c0366287fbaea1b02493a50259c3248a2 | [
"MIT"
] | 15 | 2017-10-02T18:48:20.000Z | 2022-03-03T14:03:49.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
try:
    from ._models_py3 import AccessPolicy
    from ._models_py3 import ClearRange
    from ._models_py3 import CopyFileSmbInfo
    from ._models_py3 import CorsRule
    from ._models_py3 import DirectoryItem
    from ._models_py3 import FileHTTPHeaders
    from ._models_py3 import FileItem
    from ._models_py3 import FileProperty
    from ._models_py3 import FileRange
    from ._models_py3 import FilesAndDirectoriesListSegment
    from ._models_py3 import HandleItem
    from ._models_py3 import LeaseAccessConditions
    from ._models_py3 import ListFilesAndDirectoriesSegmentResponse
    from ._models_py3 import ListHandlesResponse
    from ._models_py3 import ListSharesResponse
    from ._models_py3 import Metrics
    from ._models_py3 import RetentionPolicy
    from ._models_py3 import ShareFileRangeList
    from ._models_py3 import ShareItemInternal
    from ._models_py3 import SharePermission
    from ._models_py3 import SharePropertiesInternal
    from ._models_py3 import ShareProtocolSettings
    from ._models_py3 import ShareSmbSettings
    from ._models_py3 import ShareStats
    from ._models_py3 import SignedIdentifier
    from ._models_py3 import SmbMultichannel
    from ._models_py3 import SourceModifiedAccessConditions
    from ._models_py3 import StorageError
    from ._models_py3 import StorageServiceProperties
except (SyntaxError, ImportError):
    from ._models import AccessPolicy  # type: ignore
    from ._models import ClearRange  # type: ignore
    from ._models import CopyFileSmbInfo  # type: ignore
    from ._models import CorsRule  # type: ignore
    from ._models import DirectoryItem  # type: ignore
    from ._models import FileHTTPHeaders  # type: ignore
    from ._models import FileItem  # type: ignore
    from ._models import FileProperty  # type: ignore
    from ._models import FileRange  # type: ignore
    from ._models import FilesAndDirectoriesListSegment  # type: ignore
    from ._models import HandleItem  # type: ignore
    from ._models import LeaseAccessConditions  # type: ignore
    from ._models import ListFilesAndDirectoriesSegmentResponse  # type: ignore
    from ._models import ListHandlesResponse  # type: ignore
    from ._models import ListSharesResponse  # type: ignore
    from ._models import Metrics  # type: ignore
    from ._models import RetentionPolicy  # type: ignore
    from ._models import ShareFileRangeList  # type: ignore
    from ._models import ShareItemInternal  # type: ignore
    from ._models import SharePermission  # type: ignore
    from ._models import SharePropertiesInternal  # type: ignore
    from ._models import ShareProtocolSettings  # type: ignore
    from ._models import ShareSmbSettings  # type: ignore
    from ._models import ShareStats  # type: ignore
    from ._models import SignedIdentifier  # type: ignore
    from ._models import SmbMultichannel  # type: ignore
    from ._models import SourceModifiedAccessConditions  # type: ignore
    from ._models import StorageError  # type: ignore
    from ._models import StorageServiceProperties  # type: ignore

from ._azure_file_storage_enums import (
    CopyStatusType,
    DeleteSnapshotsOptionType,
    FileRangeWriteType,
    LeaseDurationType,
    LeaseStateType,
    LeaseStatusType,
    ListSharesIncludeType,
    PermissionCopyModeType,
    ShareAccessTier,
    ShareRootSquash,
    StorageErrorCode,
)

__all__ = [
    'AccessPolicy',
    'ClearRange',
    'CopyFileSmbInfo',
    'CorsRule',
    'DirectoryItem',
    'FileHTTPHeaders',
    'FileItem',
    'FileProperty',
    'FileRange',
    'FilesAndDirectoriesListSegment',
    'HandleItem',
    'LeaseAccessConditions',
    'ListFilesAndDirectoriesSegmentResponse',
    'ListHandlesResponse',
    'ListSharesResponse',
    'Metrics',
    'RetentionPolicy',
    'ShareFileRangeList',
    'ShareItemInternal',
    'SharePermission',
    'SharePropertiesInternal',
    'ShareProtocolSettings',
    'ShareSmbSettings',
    'ShareStats',
    'SignedIdentifier',
    'SmbMultichannel',
    'SourceModifiedAccessConditions',
    'StorageError',
    'StorageServiceProperties',
    'CopyStatusType',
    'DeleteSnapshotsOptionType',
    'FileRangeWriteType',
    'LeaseDurationType',
    'LeaseStateType',
    'LeaseStatusType',
    'ListSharesIncludeType',
    'PermissionCopyModeType',
    'ShareAccessTier',
    'ShareRootSquash',
    'StorageErrorCode',
]
| 38.539683 | 94 | 0.726112 | 428 | 4,856 | 8.016355 | 0.219626 | 0.169047 | 0.109881 | 0.160595 | 0.324104 | 0.111921 | 0.111921 | 0.111921 | 0.111921 | 0.111921 | 0 | 0.007591 | 0.186161 | 4,856 | 125 | 95 | 38.848 | 0.860577 | 0.170717 | 0 | 0 | 0 | 0 | 0.167669 | 0.06391 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.521739 | 0 | 0.521739 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8a0a666f33d95c1caea03d08066a6a106c4794af | 247 | py | Python | setup.py | namiwa/scrapyd-authenticated | 0b21334bf3e2f5b17af163741054b8d129f05853 | [
"MIT"
] | 1 | 2022-01-06T17:01:29.000Z | 2022-01-06T17:01:29.000Z | setup.py | namiwa/scrapyd-authenticated | 0b21334bf3e2f5b17af163741054b8d129f05853 | [
"MIT"
] | null | null | null | setup.py | namiwa/scrapyd-authenticated | 0b21334bf3e2f5b17af163741054b8d129f05853 | [
"MIT"
] | 2 | 2021-10-01T14:37:19.000Z | 2022-01-06T17:05:59.000Z | from setuptools import find_packages, setup
# running the egg https://stackoverflow.com/a/37800297
setup(
    name="default",
    version="1.0",
    packages=find_packages(),
    entry_points={"scrapy": ["settings = default.settings"]},
)
| 27.444444 | 62 | 0.680162 | 29 | 247 | 5.689655 | 0.793103 | 0.145455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04902 | 0.174089 | 247 | 8 | 63 | 30.875 | 0.759804 | 0.210526 | 0 | 0 | 0 | 0 | 0.232432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a0dd5aff2ba52d3b365409a547cab4e8641aa59 | 596 | py | Python | metarecord/migrations/0022_remove_name.py | kerkkoheiskanen/helerm | bdaf801a940d42325a1076b42bb0edef831fbac9 | [
"MIT"
] | 2 | 2017-04-21T15:36:23.000Z | 2020-12-04T09:32:39.000Z | metarecord/migrations/0022_remove_name.py | kerkkoheiskanen/helerm | bdaf801a940d42325a1076b42bb0edef831fbac9 | [
"MIT"
] | 168 | 2016-10-05T12:58:41.000Z | 2021-08-31T14:29:56.000Z | metarecord/migrations/0022_remove_name.py | kerkkoheiskanen/helerm | bdaf801a940d42325a1076b42bb0edef831fbac9 | [
"MIT"
] | 7 | 2016-10-13T12:51:36.000Z | 2021-01-21T13:05:04.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.9.5 on 2017-04-11 19:20
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('metarecord', '0021_add_validation_dates'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='action',
            name='name',
        ),
        migrations.RemoveField(
            model_name='phase',
            name='name',
        ),
        migrations.RemoveField(
            model_name='record',
            name='name',
        ),
    ]
| 21.285714 | 52 | 0.557047 | 57 | 596 | 5.631579 | 0.649123 | 0.196262 | 0.242991 | 0.280374 | 0.23676 | 0.23676 | 0 | 0 | 0 | 0 | 0 | 0.049751 | 0.325503 | 596 | 27 | 53 | 22.074074 | 0.748756 | 0.112416 | 0 | 0.45 | 1 | 0 | 0.121673 | 0.047529 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a16d7ac97397da918c0a05a80759b5a5e68daf4 | 713 | py | Python | ElevatorBot/static/destinyDates.py | LukasSchmid97/destinyBloodoakStats | 1420802ce01c3435ad5c283f44eb4531d9b22c38 | [
"MIT"
] | 3 | 2019-10-19T11:24:50.000Z | 2021-01-29T12:02:17.000Z | ElevatorBot/static/destinyDates.py | LukasSchmid97/destinyBloodoakStats | 1420802ce01c3435ad5c283f44eb4531d9b22c38 | [
"MIT"
] | 29 | 2019-10-14T12:26:10.000Z | 2021-07-28T20:50:29.000Z | ElevatorBot/static/destinyDates.py | LukasSchmid97/destinyBloodoakStats | 1420802ce01c3435ad5c283f44eb4531d9b22c38 | [
"MIT"
] | 2 | 2019-10-13T17:11:09.000Z | 2020-05-13T15:29:04.000Z | expansion_dates = [
["2017-09-06", "D2 Vanilla"],
["2018-09-04", "Forsaken"],
["2019-10-01", "Shadowkeep"],
["2020-11-10", "Beyond Light"],
["2022-02-22", "Witch Queen"],
]
season_dates = [
["2017-12-05", "Curse of Osiris"],
["2018-05-08", "Warmind"],
["2018-12-04", "Season of the Forge"],
["2019-03-05", "Season of the Drifter"],
["2019-06-04", "Season of Opulence"],
["2019-12-10", "Season of Dawn"],
["2020-03-10", "Season of the Worthy"],
["2020-06-09", "Season of Arrivals"],
["2021-02-09", "Season of the Chosen"],
["2021-05-11", "Season of the Splicer"],
["2021-08-24", "Season of the Lost"],
["2021-12-07", "30th Anniversary Pack"],
]
| 31 | 44 | 0.553997 | 103 | 713 | 3.815534 | 0.466019 | 0.183206 | 0.167939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.243433 | 0.199158 | 713 | 22 | 45 | 32.409091 | 0.444834 | 0 | 0 | 0 | 0 | 0 | 0.607293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a1d2704d30c9a69c4eb6f80013acd3872091f35 | 585 | py | Python | python/fusion_engine_client/utils/trace.py | jimenezjose/fusion-engine-client | 2de4dbccfb6b9a0746b7f3ef5a170f1332f93bea | [
"MIT"
] | 8 | 2020-08-29T22:03:37.000Z | 2022-01-31T00:54:56.000Z | python/fusion_engine_client/utils/trace.py | jimenezjose/fusion-engine-client | 2de4dbccfb6b9a0746b7f3ef5a170f1332f93bea | [
"MIT"
] | 8 | 2020-09-06T05:32:18.000Z | 2022-01-16T20:34:21.000Z | python/fusion_engine_client/utils/trace.py | jimenezjose/fusion-engine-client | 2de4dbccfb6b9a0746b7f3ef5a170f1332f93bea | [
"MIT"
] | 8 | 2020-09-18T19:05:58.000Z | 2021-12-29T20:55:36.000Z | import logging
import sys
__all__ = []
# Define Logger TRACE level and associated trace() function if it doesn't exist.
if not hasattr(logging, 'TRACE'):
    logging.TRACE = logging.DEBUG - 1
    if sys.version_info.major == 2:
        logging._levelNames['TRACE'] = logging.TRACE
        logging._levelNames[logging.TRACE] = 'TRACE'
    else:
        logging._nameToLevel['TRACE'] = logging.TRACE
        logging._levelToName[logging.TRACE] = 'TRACE'

    def trace(self, msg, *args, **kwargs):
        self.log(logging.TRACE, msg, *args, **kwargs)

    logging.Logger.trace = trace
| 30.789474 | 80 | 0.666667 | 72 | 585 | 5.291667 | 0.472222 | 0.220472 | 0.199475 | 0.188976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00431 | 0.206838 | 585 | 18 | 81 | 32.5 | 0.81681 | 0.133333 | 0 | 0 | 0 | 0 | 0.049505 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a29a3e451b2197f601cf5d34997f56afea4fe65 | 969 | py | Python | scan_models/proconos/scan.py | ssdemajia/ids-backend | 188af247befa44596f62c660c24b05474d1ba29f | [
"MIT"
] | 1 | 2020-05-22T09:52:33.000Z | 2020-05-22T09:52:33.000Z | scan_models/proconos/scan.py | ssdemajia/ids-backend | 188af247befa44596f62c660c24b05474d1ba29f | [
"MIT"
] | 8 | 2021-03-18T21:22:40.000Z | 2022-03-11T23:32:48.000Z | scan_models/proconos/scan.py | ssdemajia/ids-backend | 188af247befa44596f62c660c24b05474d1ba29f | [
"MIT"
] | null | null | null | from pymongo import MongoClient
from core.utils import convert
module_type_to_key = {
    'proconos': 'proconos'
}
def proconos_resolve(protocol_element):
    info = dict()
    info['固件版本'] = protocol_element.get('Fireware Version', '')
    info['固件日期'] = protocol_element.get('Fireware Date', '')
    info['固件时间'] = protocol_element.get('Fireware Time', '')
    info['设备序列号'] = protocol_element.get('Model Number', '')
    info['PLC 型号'] = protocol_element.get('PLC Type', '')
    info['profile'] = 'ProConOS ' + protocol_element.get('Fireware Version', '')
    info['key'] = {
        'Model': protocol_element.get('PLC Type', ''),
    }
    return info
def proconos_scan(keys):
    mongo = MongoClient()
    db = mongo.ids
    vul = db.vulnerability
    result = []
    keys = [convert(module_type_to_key, key) for key in keys]
    keys = ' '.join(keys)
    result.extend(vul.find({'$text': {'$search': keys}}))
    return result
| 29.363636 | 80 | 0.631579 | 115 | 969 | 5.182609 | 0.417391 | 0.201342 | 0.211409 | 0.174497 | 0.281879 | 0.124161 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196078 | 969 | 32 | 81 | 30.28125 | 0.765083 | 0 | 0 | 0 | 0 | 0 | 0.178535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.074074 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8a2be33505f3c89f8884d95a4e6ac50881e67f8e | 288 | py | Python | api/core/schemas/__init__.py | D0rs4n/api | 530a62fae664475e8e6c6caf1a92dc198d8623ea | [
"MIT"
] | null | null | null | api/core/schemas/__init__.py | D0rs4n/api | 530a62fae664475e8e6c6caf1a92dc198d8623ea | [
"MIT"
] | 1 | 2021-06-14T19:41:21.000Z | 2021-06-14T19:41:21.000Z | api/core/schemas/__init__.py | D0rs4n/api | 530a62fae664475e8e6c6caf1a92dc198d8623ea | [
"MIT"
] | null | null | null | """
Schemas used by the Python Discord API.
This package contains the schemas used by the various
endpoints of the API. Schemas are represented by pydantic
models, which simplifies data coercion and validation.
"""
from .errors import ErrorMessage
from .health_check import HealthCheck
| 26.181818 | 57 | 0.805556 | 41 | 288 | 5.634146 | 0.731707 | 0.095238 | 0.112554 | 0.138528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152778 | 288 | 10 | 58 | 28.8 | 0.946721 | 0.71875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8a2df23f7eb67e983726931148ef9feff595de97 | 16,067 | py | Python | support.py | FrBonnefoy/pointage | 036f868e273ab50003a1b8c45cbb940a5a2b8d23 | [
"MIT"
] | 1 | 2021-07-12T06:14:30.000Z | 2021-07-12T06:14:30.000Z | support.py | FrBonnefoy/pointage | 036f868e273ab50003a1b8c45cbb940a5a2b8d23 | [
"MIT"
] | null | null | null | support.py | FrBonnefoy/pointage | 036f868e273ab50003a1b8c45cbb940a5a2b8d23 | [
"MIT"
] | null | null | null | from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as soup
import time
import requests
import warnings
warnings.filterwarnings('ignore', message='Unverified HTTPS request')
from IPython.display import Image
import os
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import glob
import csv
from xlsxwriter.workbook import Workbook
import pandas as pd
from IPython.display import display
import re
from urllib.parse import quote
import random
import gzip
import sys
current_path=os.getcwd()
# List of user agents (for req2); fall back to an empty list if the
# bundled archive cannot be located on sys.path.
user_agents = []
for x in sys.path:
    try:
        with gzip.open(x+'/pointage/user_agents.txt.gz', 'rb') as f:
            user_agents = f.readlines()
        break
    except OSError:
        pass
user_agents=[x.decode('utf-8').strip() for x in user_agents]
#Define proxies
http_proxy = "http://127.0.0.1:24000"
https_proxy = "https://127.0.0.1:24000"
ftp_proxy = "ftp://127.0.0.1:24000"
proxyDict = {
    "http": http_proxy,
    "https": https_proxy,
    "ftp": ftp_proxy,
}
#Define requests function
def req(x):
    global page
    page = requests.get(x, proxies=proxyDict, verify=False)
#Define requests function
def req2(x):
    global page
    default_user = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
    try:
        user_agent = random.choice(user_agents)
    except IndexError:
        # user_agents is empty; fall back to the default user agent
        user_agent = default_user
    headers = {'User-Agent': user_agent}
    page = requests.get(x, proxies=proxyDict, verify=False, headers=headers)
def help():
    print('''
    open_session() : Opens a new Chrome session with a new IP address.
    close_session() : Closes the session before opening a new one with a new IP address (saves resources).
    change(x) : Navigates the open Chrome session to site x. Ex: change('https://www.google.com/') to go to Google.
    data() : Reads the HTML code as it is rendered in the virtual browser.
    scroll() : Scrolls to the bottom of the page.
    screenshot(x) : Saves a screenshot of the browser as image x. Ex: screenshot('test.png')
    screen() : Saves a screenshot of the browser under the name 'browser.png'.
    scrape(x,y) : Finds all elements matching the HTML identifiers x and y. Ex: identifier h2 class="mb0" -> x='h2'; y={'class':'mb0'}. The scrape runs when the .now() function is called.
    printext(x) : Prints the extracted text to the console.
    geturls(x) : Collects the URLs inside the HTML container specified by scrape into a list called urls.
    printhtml(x) : Prints the HTML code of every container matching the definition given by scrape.
    ''')
def image(x):
    Image(filename=x)
def open_session_firefox_no_proxy():
    global browser
    options = FirefoxOptions()
    options.add_argument("--headless")
    options.add_argument("--window-size=1280x720")
    #options.add_argument('start-maximized')
    profile = webdriver.FirefoxProfile()
    #profile.add_extension(current_path+"/disable_webrtc-1.0.23-an+fx.xpi")
    #profile.add_extension(current_path+"/adblock_for_firefox-4.24.1-fx.xpi")
    #profile.add_extension(current_path+"/image_block-5.0-fx.xpi")
    #profile.add_extension(current_path+"/ublock_origin-1.31.0-an+fx.xpi")
    profile.DEFAULT_PREFERENCES['frozen']["media.peerconnection.enabled"] = False
    profile.set_preference("media.peerconnection.enabled", False)
    profile.set_preference("permissions.default.image", 2)
    profile.update_preferences()
    browser = webdriver.Firefox(profile, options=options)
    #browser.install_addon(current_path+"/disable_webrtc-1.0.23-an+fx.xpi", temporary=True)
    #browser.install_addon(current_path+"/image_block-5.0-fx.xpi", temporary=True)
    #browser.install_addon(current_path+"/ublock_origin-1.31.0-an+fx.xpi", temporary=True)
def open_session_firefox():
    global browser
    PROXY = "127.0.0.1:24001"
    webdriver.DesiredCapabilities.FIREFOX['proxy'] = {
        "httpProxy": PROXY,
        "ftpProxy": PROXY,
        "sslProxy": PROXY,
        "proxyType": "MANUAL",
    }
    options = FirefoxOptions()
    options.add_argument('--proxy-server=%s' % PROXY)
    options.add_argument("--headless")
    options.add_argument("--window-size=1024x5000")
    #options.add_argument('start-maximized')
    profile = webdriver.FirefoxProfile()
    #profile.add_extension(current_path+"/disable_webrtc-1.0.23-an+fx.xpi")
    #profile.add_extension(current_path+"/adblock_for_firefox-4.24.1-fx.xpi")
    #profile.add_extension(current_path+"/image_block-5.0-fx.xpi")
    #profile.add_extension(current_path+"/ublock_origin-1.31.0-an+fx.xpi")
    profile.DEFAULT_PREFERENCES['frozen']["media.peerconnection.enabled"] = False
    profile.set_preference("media.peerconnection.enabled", False)
    #profile.set_preference("permissions.default.image", 2)
    profile.update_preferences()
    browser = webdriver.Firefox(profile, options=options)
    #browser.install_addon(current_path+"/disable_webrtc-1.0.23-an+fx.xpi", temporary=True)
    #browser.install_addon(current_path+"/image_block-5.0-fx.xpi", temporary=True)
    #browser.install_addon(current_path+"/ublock_origin-1.31.0-an+fx.xpi", temporary=True)
def open_session_firefox2():
    global browser
    PROXY = "127.0.0.1:24002"
    webdriver.DesiredCapabilities.FIREFOX['proxy'] = {
        "httpProxy": PROXY,
        "ftpProxy": PROXY,
        "sslProxy": PROXY,
        "proxyType": "MANUAL",
    }
    options = FirefoxOptions()
    options.add_argument('--proxy-server=%s' % PROXY)
    options.add_argument("--headless")
    options.add_argument("--window-size=1024x5000")
    options.add_argument("--private")
    #options.add_argument("user-agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36")
    #options.add_argument('start-maximized')
    profile = webdriver.FirefoxProfile()
    #profile.add_extension(current_path+"/disable_webrtc-1.0.23-an+fx.xpi")
    #profile.add_extension(current_path+"/adblock_for_firefox-4.24.1-fx.xpi")
    #profile.add_extension(current_path+"/image_block-5.0-fx.xpi")
    #profile.add_extension(current_path+"/ublock_origin-1.31.0-an+fx.xpi")
    profile.DEFAULT_PREFERENCES['frozen']["media.peerconnection.enabled"] = False
    profile.set_preference("media.peerconnection.enabled", False)
    profile.set_preference("browser.privatebrowsing.autostart", True)
    #profile.set_preference("general.useragent.override", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36")
    #profile.set_preference("permissions.default.image", 2)
    profile.update_preferences()
    browser = webdriver.Firefox(profile, options=options)
    #browser.install_addon(current_path+"/disable_webrtc-1.0.23-an+fx.xpi", temporary=True)
    #browser.install_addon(current_path+"/image_block-5.0-fx.xpi", temporary=True)
    #browser.install_addon(current_path+"/ublock_origin-1.31.0-an+fx.xpi", temporary=True)
def open_session():
    global browser
    PROXY = "127.0.0.1:24001"
    chrome_options = Options()
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument("--headless")
    chrome_options.add_argument('--disable-dev-shm-usage')
    chrome_options.add_argument('--proxy-server=%s' % PROXY)
    chrome_options.add_argument("--window-size=1920x4080")
    chrome_options.add_argument('start-maximized')
    chrome_options.add_argument('disable-infobars')
    # "~" is not expanded by the driver, so resolve it before passing the path
    chrome_options.add_extension(os.path.expanduser('~/webrtc.crx'))
    '''
    preferences = {
        "webrtc.ip_handling_policy": "disable_non_proxied_udp",
        "webrtc.multiple_routes_enabled": False,
        "webrtc.nonproxied_udp_enabled": False,
        'profile.managed_default_content_settings.javascript': 2,
        "enforce-webrtc-ip-permission-check": True
    }
    chrome_options.add_experimental_option("prefs", preferences)
    chrome_options.add_argument('--force-webrtc-ip-handling-policy')
    '''
    browser = webdriver.Chrome(options=chrome_options)
def open_session2():
    global browser
    PROXY = "127.0.0.1:24002"
    chrome_options = Options()
    chrome_options.add_argument('--no-sandbox')
    chrome_options.add_argument("--headless")
    chrome_options.add_argument('--disable-dev-shm-usage')
    chrome_options.add_argument('--proxy-server=%s' % PROXY)
    chrome_options.add_argument("--window-size=1920x10080")
    chrome_options.add_argument('start-maximized')
    chrome_options.add_argument('disable-infobars')
    user_agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.50 Safari/537.36'
    chrome_options.add_argument('user-agent={0}'.format(user_agent))
    #chrome_options.add_extension('~/webrtc.crx')
    '''
    preferences = {
        "webrtc.ip_handling_policy": "disable_non_proxied_udp",
        "webrtc.multiple_routes_enabled": False,
        "webrtc.nonproxied_udp_enabled": False,
        'profile.managed_default_content_settings.javascript': 2,
        "enforce-webrtc-ip-permission-check": True
    }
    chrome_options.add_experimental_option("prefs", preferences)
    chrome_options.add_argument('--force-webrtc-ip-handling-policy')
    '''
    browser = webdriver.Chrome(options=chrome_options)
def screenshot(x):
    browser.save_screenshot(x)

def screen():
    browser.save_screenshot('browser.png')

def close_session():
    browser.close()

def data():
    global content
    content = browser.page_source
    global sopa
    sopa = soup(content, 'html.parser')

def scroll():
    # Scroll until the document height stops changing. Plain DOM properties are
    # used instead of "$(document).height()" so pages without jQuery also work.
    height = 0
    height2 = 1
    while height != height2:
        height = browser.execute_script("return document.body.scrollHeight")
        browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)
        height2 = browser.execute_script("return document.body.scrollHeight")
    browser.save_screenshot("endscroll.png")

def change(x):
    browser.get(x)
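The `scroll()` helper above loops until the reported page height stops changing. A standalone sketch of that loop, with the browser calls swapped for plain callables so it runs without Selenium (all names here are illustrative, not part of the original script):

```python
def scroll_to_end(get_height, scroll_once, max_rounds=50):
    """Scroll until the reported page height stops changing (or a round cap)."""
    height, height2 = 0, 1
    rounds = 0
    while height != height2 and rounds < max_rounds:
        height = get_height()       # height before scrolling
        scroll_once()               # jump to the current bottom
        height2 = get_height()      # height after lazy content may have loaded
        rounds += 1
    return height2

# Fake page: grows by 100px per scroll until it settles at 300px.
state = {"h": 100}
def fake_height():
    return state["h"]
def fake_scroll():
    state["h"] = min(state["h"] + 100, 300)

final = scroll_to_end(fake_height, fake_scroll)  # settles at 300
```

The `max_rounds` cap is an addition not present in the original; it guards against pages whose height never stabilises (infinite feeds).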
# Proxies for requests; the keys must be the scheme names ("http", "https",
# "ftp"), otherwise requests silently ignores the entries.
http_proxy2 = "http://127.0.0.1:24003"
https_proxy2 = "https://127.0.0.1:24003"
ftp_proxy2 = "ftp://127.0.0.1:24003"
proxyDict2 = {
    "http": http_proxy2,
    "https": https_proxy2,
    "ftp": ftp_proxy2
}
class google_search_site:
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.url = 'https://www.bing.com/search?q=' + quote(self.x) + quote(' ') + quote(self.y)

    def request(self):
        global page
        global google_url
        headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36", 'referer': 'https://www.google.com/'}
        http_proxy = "http://127.0.0.1:24004"
        https_proxy = "https://127.0.0.1:24004"
        ftp_proxy = "ftp://127.0.0.1:24004"
        proxyDict = {
            "http": http_proxy,
            "https": https_proxy,
            "ftp": ftp_proxy
        }
        page = requests.get(self.url, proxies=proxyDict, verify=False, headers=headers)
        description = scrape_light('li', {'class': 'b_algo'})
        lecture = description.now()
        tempurls = []
        for link in lecture:
            try:
                tempurls.append(link.h2.a['href'])
            except (AttributeError, KeyError, TypeError):
                pass
        final_url = [x for x in tempurls if '//fr' in x]
        try:
            google_url = final_url[0]
        except IndexError:
            google_url = ""
        return google_url
class google_search_site_trip:
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.url = 'https://www.bing.com/search?q=' + quote(self.x) + quote(' ') + quote(self.y)

    def request(self):
        global page
        global google_url
        headers = {'User-Agent': "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.50 Safari/537.36", 'referer': 'https://www.google.com/'}
        http_proxy = "http://127.0.0.1:24004"
        https_proxy = "https://127.0.0.1:24004"
        ftp_proxy = "ftp://127.0.0.1:24004"
        proxyDict = {
            "http": http_proxy,
            "https": https_proxy,
            "ftp": ftp_proxy
        }
        page = requests.get(self.url, proxies=proxyDict, verify=False, headers=headers)
        description = scrape_light('li', {'class': 'b_algo'})
        lecture = description.now()
        tempurls = []
        for link in lecture:
            try:
                tempurls.append(link.h2.a['href'])
            except (AttributeError, KeyError, TypeError):
                pass
        final_url = [x for x in tempurls if 'tripadvisor.fr' in x]
        try:
            google_url = final_url[0]
        except IndexError:
            google_url = ""
        return google_url
class scrape:
    def __init__(self, x, y=None):
        self.x = x
        self.y = y if y is not None else x

    def now(self):
        content = browser.page_source
        sopa = soup(content, 'html.parser')
        if self.x == self.y:
            return sopa.findAll(self.x)
        else:
            return sopa.findAll(self.x, self.y)

    def find(self, z):
        treasure = re.compile(z)
        tempfind = []
        content = browser.page_source
        sopa = soup(content, 'html.parser')
        if self.x == self.y:
            nugget = sopa.findAll(self.x)
        else:
            nugget = sopa.findAll(self.x, self.y)
        for a in nugget:
            findings = treasure.findall(a.text)
            tempfind.append(findings)
        return tempfind
class scrape_light:
    def __init__(self, x, y=None):
        self.x = x
        self.y = y if y is not None else x

    def now(self):
        content = page.text
        sopa = soup(content, 'html.parser')
        if self.x == self.y:
            return sopa.findAll(self.x)
        else:
            return sopa.findAll(self.x, self.y)

    def find(self, z):
        treasure = re.compile(z)
        tempfind = []
        content = page.text
        sopa = soup(content, 'html.parser')
        if self.x == self.y:
            nugget = sopa.findAll(self.x)
        else:
            nugget = sopa.findAll(self.x, self.y)
        for a in nugget:
            # Accumulate matches per element (as scrape.find does) instead of
            # overwriting tempfind on every iteration; elements with no match
            # contribute an empty string.
            findings = treasure.findall(a.text)
            if len(findings) == 0:
                findings = [""]
            tempfind.append(findings)
        return tempfind
def printext(x):
    for a in x:
        print(a.text.strip())

def geturls(x):
    global urls
    urls = []
    for a in x:
        try:
            urls.append(a['href'])
        except KeyError:
            pass

def alterurls(x, y):
    # Prefix every URL in x with the base y
    return list(map(lambda z: y + z, x))

def printhtml(x):
    for a in x:
        print(a)

def excelfy():
    # Convert every tab-separated .csv in the current directory to .xlsx
    # (display() assumes an IPython/Jupyter environment)
    for csvfile in glob.glob(os.path.join('.', '*.csv')):
        df = pd.read_csv(csvfile, sep='\t')
        excelfile = csvfile[:-4] + '.xlsx'
        df.to_excel(excelfile, index=False)
        display(df)

def excelfy_specific(x):
    df = pd.read_csv(x, sep='\t')
    excelfile = x[:-4] + '.xlsx'
    df.to_excel(excelfile, index=False)
    display(df)
def reste_a_pointer(x, y, z):
    # "Remaining to check": list the values of column y whose column z is still
    # empty, and write them to <name>_a_pointer.txt
    if x[-4:] == '.csv':
        df = pd.read_csv(x, sep='\t')
        filtered_df = df[df[z].isnull()]
        filtered_df = filtered_df[~filtered_df[y].isnull()]
        noms = filtered_df[y].tolist()
        with open(x[:-4] + '_a_pointer.txt', 'w') as f:
            for nom in noms:
                print(str(nom).strip(), file=f)
                print(nom)
    if x[-5:] == '.xlsx':
        df = pd.read_excel(x)
        filtered_df = df[df[z].isnull()]
        filtered_df = filtered_df[~filtered_df[y].isnull()]
        noms = filtered_df[y].tolist()
        with open(x[:-5] + '_a_pointer.txt', 'w') as f:  # strip the full ".xlsx"
            for nom in noms:
                print(str(nom).strip(), file=f)
                print(nom)
# File: memecrypt/__main__.py (repo: Sh3llcod3/memecrypt, license: MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
try:
    from .memecrypt import *
    main()
except (ImportError, SystemError):
    import memecrypt
    memecrypt.main()
| 15.454545 | 33 | 0.635294 | 19 | 170 | 5.684211 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.211765 | 170 | 10 | 34 | 17 | 0.791045 | 0.252941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
# File: play_back.py (repo: takat0m0/test_solve_easy_maze_with_Q, license: MIT)
# -*- coding:utf-8 -*-
import os
import sys
class PlayBack(object):
    def __init__(self, state, action, next_state, reward):
        self.state = state
        self.action = action
        self.next_state = next_state
        self.reward = reward

class PlayBacks(object):
    def __init__(self):
        self.__data = []

    def append(self, pb):
        self.__data.append(pb)

    def __len__(self):
        return len(self.__data)

    def __iter__(self):
        return self.__data.__iter__()
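A standalone usage sketch of the replay-buffer classes above (the definitions are repeated verbatim so the snippet runs on its own; the sample transitions are made up):

```python
class PlayBack(object):
    def __init__(self, state, action, next_state, reward):
        self.state = state
        self.action = action
        self.next_state = next_state
        self.reward = reward

class PlayBacks(object):
    def __init__(self):
        self.__data = []
    def append(self, pb):
        self.__data.append(pb)
    def __len__(self):
        return len(self.__data)
    def __iter__(self):
        return self.__data.__iter__()

# Record two transitions, then sum their rewards by iterating the container,
# which works because PlayBacks implements __iter__.
history = PlayBacks()
history.append(PlayBack(state=0, action='right', next_state=1, reward=0.0))
history.append(PlayBack(state=1, action='down', next_state=5, reward=1.0))
total = sum(pb.reward for pb in history)  # 1.0
```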
# File: tests/cp2/test_cp2_csetbounds_cases.py (repo: capt-hb/cheritest, license: Apache-2.0)
#-
# Copyright (c) 2015 Michael Roe
# All rights reserved.
#
# This software was developed by the University of Cambridge Computer
# Laboratory as part of the Rigorous Engineering of Mainstream Systems (REMS)
# project, funded by EPSRC grant EP/K008528/1.
#
# @BERI_LICENSE_HEADER_START@
#
# Licensed to BERI Open Systems C.I.C. (BERI) under one or more contributor
# license agreements. See the NOTICE file distributed with this work for
# additional information regarding copyright ownership. BERI licenses this
# file to you under the BERI Hardware-Software License, Version 1.0 (the
# "License"); you may not use this file except in compliance with the
# License. You may obtain a copy of the License at:
#
# http://www.beri-open-systems.org/legal/license-1-0.txt
#
# Unless required by applicable law or agreed to in writing, Work distributed
# under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, either express or implied. See the License for the
# specific language governing permissions and limitations under the License.
#
# @BERI_LICENSE_HEADER_END@
#
from beritest_tools import BaseBERITestCase, attr, HexInt
@attr('capabilities')
class test_cp2_csetbounds_cases(BaseBERITestCase):

    def test_cp2_csetbounds_base_or_length_unexpected(self):
        self.assertRegisterEqual(self.MIPS.a0, 0, "One case of CSetBounds did not get an expected value")

    def test_case_one_base(self):
        assert self.MIPS.c5.base == HexInt(0x1600f4000)
        assert self.MIPS.c6.base == HexInt(0x1600f4000 + 0x1ffe0)

    def test_case_one_length(self):
        assert self.MIPS.c5.length == HexInt(0x20000)
        assert self.MIPS.c6.length == HexInt(0x10)

    def test_case_two_base(self):
        assert self.MIPS.c7.base == HexInt(0x7fffffe8c0)
        assert self.MIPS.c8.base == HexInt(0x7fffffe8c0)

    def test_case_two_length(self):
        assert self.MIPS.c7.length == HexInt(0x0)
        assert self.MIPS.c8.length == HexInt(0x0)

    def test_case_three_first_cap(self):
        assert self.MIPS.c9.base == HexInt(0x16022e000)
        assert self.MIPS.c9.length == HexInt(0x400000)
        assert self.MIPS.c9.offset == HexInt(0)
        assert self.MIPS.c9.t

    def test_case_three_second_cap(self):
        assert self.MIPS.c10.base == HexInt(0x16022e000)
        assert self.MIPS.c10.length == HexInt(0x400000)
        assert self.MIPS.c10.offset == HexInt(0x7ee940)
        assert self.MIPS.c10.t

    def test_case_three_third_cap(self):
        assert self.MIPS.c11.base == HexInt(0x16022e000)
        assert self.MIPS.c11.length == HexInt(0x400000)
        assert self.MIPS.c11.offset == HexInt(0x7ee940 - 0xf18)
        assert self.MIPS.c11.t

    def test_case_four_base(self):
        assert self.MIPS.c12.base == HexInt(0x160600000)
        assert self.MIPS.c13.base == HexInt(0x160600000)
        assert self.MIPS.c14.base == HexInt(0x160600000)

    def test_case_four_length(self):
        assert self.MIPS.c12.length == HexInt(0x300000)
        assert self.MIPS.c13.length == HexInt(0x300000)
        assert self.MIPS.c14.length == HexInt(0x300000)

    def test_case_four_sealed_bit(self):
        assert self.MIPS.c12.s == False
        assert self.MIPS.c13.s == True
        assert self.MIPS.c14.s == False

    def test_case_five_first_cap(self):
        assert self.MIPS.c15.base == HexInt(0x98000000600f9000)
        assert self.MIPS.c15.length == HexInt(0x38000)
        assert self.MIPS.c15.offset == HexInt(0x88c0)
        assert self.MIPS.c15.t

    def test_case_five_return_cap(self):
        # base and length should be the same, offset unpredictable
        assert self.MIPS.c15.base == HexInt(0x98000000600f9000)
        assert self.MIPS.c15.length == HexInt(0x38000)
        assert self.MIPS.c15.t
# File: cinder/volume/drivers/netapp/dataontap/fc_7mode.py (repo: Nexenta/cinder-nedge1.1, license: Apache-2.0)
# Copyright (c) - 2014, Clinton Knight. All rights reserved.
# Copyright (c) 2016 Mike Rooney. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Volume driver for NetApp Data ONTAP (7-mode) FibreChannel storage systems.
"""
from cinder import interface
from cinder.volume import driver
from cinder.volume.drivers.netapp.dataontap import block_7mode
from cinder.zonemanager import utils as fczm_utils
@interface.volumedriver
class NetApp7modeFibreChannelDriver(driver.BaseVD,
                                    driver.ConsistencyGroupVD,
                                    driver.ManageableVD,
                                    driver.ExtendVD,
                                    driver.TransferVD,
                                    driver.SnapshotVD):
    """NetApp 7-mode FibreChannel volume driver."""

    DRIVER_NAME = 'NetApp_FibreChannel_7mode_direct'

    # ThirdPartySystems wiki page
    CI_WIKI_NAME = "NetApp_CI"
    VERSION = block_7mode.NetAppBlockStorage7modeLibrary.VERSION

    def __init__(self, *args, **kwargs):
        super(NetApp7modeFibreChannelDriver, self).__init__(*args, **kwargs)
        self.library = block_7mode.NetAppBlockStorage7modeLibrary(
            self.DRIVER_NAME, 'FC', **kwargs)

    def do_setup(self, context):
        self.library.do_setup(context)

    def check_for_setup_error(self):
        self.library.check_for_setup_error()

    def create_volume(self, volume):
        self.library.create_volume(volume)

    def create_volume_from_snapshot(self, volume, snapshot):
        self.library.create_volume_from_snapshot(volume, snapshot)

    def create_cloned_volume(self, volume, src_vref):
        self.library.create_cloned_volume(volume, src_vref)

    def delete_volume(self, volume):
        self.library.delete_volume(volume)

    def create_snapshot(self, snapshot):
        self.library.create_snapshot(snapshot)

    def delete_snapshot(self, snapshot):
        self.library.delete_snapshot(snapshot)

    def get_volume_stats(self, refresh=False):
        return self.library.get_volume_stats(refresh,
                                             self.get_filter_function(),
                                             self.get_goodness_function())

    def get_default_filter_function(self):
        return self.library.get_default_filter_function()

    def get_default_goodness_function(self):
        return self.library.get_default_goodness_function()

    def extend_volume(self, volume, new_size):
        self.library.extend_volume(volume, new_size)

    def ensure_export(self, context, volume):
        return self.library.ensure_export(context, volume)

    def create_export(self, context, volume, connector):
        return self.library.create_export(context, volume)

    def remove_export(self, context, volume):
        self.library.remove_export(context, volume)

    def manage_existing(self, volume, existing_ref):
        return self.library.manage_existing(volume, existing_ref)

    def manage_existing_get_size(self, volume, existing_ref):
        return self.library.manage_existing_get_size(volume, existing_ref)

    def unmanage(self, volume):
        return self.library.unmanage(volume)

    @fczm_utils.AddFCZone
    def initialize_connection(self, volume, connector):
        return self.library.initialize_connection_fc(volume, connector)

    @fczm_utils.RemoveFCZone
    def terminate_connection(self, volume, connector, **kwargs):
        return self.library.terminate_connection_fc(volume, connector,
                                                    **kwargs)

    def get_pool(self, volume):
        return self.library.get_pool(volume)

    def create_consistencygroup(self, context, group):
        return self.library.create_consistencygroup(group)

    def delete_consistencygroup(self, context, group, volumes):
        return self.library.delete_consistencygroup(group, volumes)

    def update_consistencygroup(self, context, group,
                                add_volumes=None, remove_volumes=None):
        return self.library.update_consistencygroup(group, add_volumes=None,
                                                    remove_volumes=None)

    def create_cgsnapshot(self, context, cgsnapshot, snapshots):
        return self.library.create_cgsnapshot(cgsnapshot, snapshots)

    def delete_cgsnapshot(self, context, cgsnapshot, snapshots):
        return self.library.delete_cgsnapshot(cgsnapshot, snapshots)

    def create_consistencygroup_from_src(self, context, group, volumes,
                                         cgsnapshot=None, snapshots=None,
                                         source_cg=None, source_vols=None):
        return self.library.create_consistencygroup_from_src(
            group, volumes, cgsnapshot=cgsnapshot, snapshots=snapshots,
            source_cg=source_cg, source_vols=source_vols)

    def failover_host(self, context, volumes, secondary_id=None):
        raise NotImplementedError()
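The driver above is a thin facade: every public method forwards to a `library` object that holds the real logic, so the same library can back several protocol drivers. A minimal standalone sketch of that delegation pattern (class and method names here are hypothetical, not Cinder APIs):

```python
class _Library:
    """Stand-in for the shared library object that owns the real logic."""
    def create_volume(self, name):
        return 'created ' + name

class _Driver:
    """Thin facade: construction wires in the library, calls forward to it."""
    def __init__(self):
        self.library = _Library()
    def create_volume(self, name):
        # No logic here, just delegation, mirroring the driver above.
        return self.library.create_volume(name)

result = _Driver().create_volume('vol1')  # 'created vol1'
```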
# File: Gathered CTF writeups/ptr-yudai-writeups/2019/watevrCTF_2019/repyc/decomp.py (repo: mihaid-b/CyberSakura, license: MIT)
# Source Generated with Decompyle++
# File: 3nohtyp.pyc (Python 3.6)
def run_vm(instructions):
    pc = 0
    굿 = 0
    regs = [0] * 2 ** (2 * 2)
    mem = [0] * 100
    jmplist = []
    while instructions[pc][0] != '\xeb\x93\x83':
        ope = instructions[pc][0].lower()
        operand = instructions[pc][1:]
        if ope == '\xeb\x89\x83':
            regs[operand[0]] = regs[operand[1]] + regs[operand[2]]
        elif ope == '\xeb\xa0\x80':
            regs[operand[0]] = regs[operand[1]] ^ regs[operand[2]]
        elif ope == '\xeb\xa0\xb3':
            regs[operand[0]] = regs[operand[1]] - regs[operand[2]]
        elif ope == '\xeb\x83\x83':
            regs[operand[0]] = regs[operand[1]] * regs[operand[2]]
        elif ope == '\xeb\xa2\xaf':
            regs[operand[0]] = regs[operand[1]] / regs[operand[2]]
        elif ope == '\xeb\xa5\x87':
            regs[operand[0]] = regs[operand[1]] & regs[operand[2]]
        elif ope == '\xeb\xa7\xb3':
            regs[operand[0]] = regs[operand[1]] | regs[operand[2]]
        elif ope == '\xea\xb4\xa1':
            regs[operand[0]] = regs[operand[0]]
        elif ope == '\xeb\xab\x87':
            regs[operand[0]] = regs[operand[1]]
        elif ope == '\xea\xbc\x96':
            regs[operand[0]] = operand[1]
        elif ope == '\xeb\xab\xbb':
            mem[operand[0]] = regs[operand[1]]
        elif ope == '\xeb\x94\x93':
            regs[operand[0]] = mem[operand[1]]
        elif ope == '\xeb\x8c\x92':
            regs[operand[0]] = 0
        elif ope == '\xeb\xac\x87':
            mem[operand[0]] = 0
        elif ope == '\xeb\xac\x9f':
            regs[operand[0]] = input(regs[operand[1]])
        elif ope == '\xea\xbd\xba':
            mem[operand[0]] = input(regs[operand[1]])
        elif ope == '\xeb\x8f\xaf':
            print(regs[operand[0]])
        elif ope == '\xeb\xad\x97':
            print(mem[operand[0]])
        elif ope == '\xeb\xad\xbf':
            pc = regs[operand[0]]
        elif ope == '\xeb\xae\x93':
            pc = mem[operand[0]]
        elif ope == '\xeb\xae\xb3':
            pc = jmplist.pop()
        elif ope == '\xeb\xaf\x83' and regs[operand[1]] > regs[operand[2]]:
            pc = operand[0]
            jmplist.append(pc)
            continue
        elif ope == '\xea\xbd\xb2':
            # String compare: on mismatch, set the flag register and jump
            regs[7] = 0
            for i in range(len(regs[operand[0]])):
                if regs[operand[0]] != regs[operand[1]]:
                    regs[7] = 1
                    pc = regs[operand[2]]
                    jmplist.append(pc)
        elif ope == '\xea\xbe\xae':
            괢 = ''
            for i in range(len(regs[operand[0]])):
                괢 += chr(ord(regs[operand[0]][i]) ^ regs[operand[1]])
            regs[operand[0]] = 괢
        elif ope == '\xea\xbf\x9a':
            괢 = ''
            for i in range(len(regs[operand[0]])):
                괢 += chr(ord(regs[operand[0]][i]) - regs[operand[1]])
            regs[operand[0]] = 괢
        elif ope == '\xeb\x96\x87' and regs[operand[1]] > regs[operand[2]]:
            pc = regs[operand[0]]
            jmplist.append(pc)
            continue
        elif ope == '\xeb\x97\x8b' and regs[operand[1]] > regs[operand[2]]:
            pc = mem[operand[0]]
            jmplist.append(pc)
            continue
        elif ope == '\xeb\x98\xb7' and regs[operand[1]] == regs[operand[2]]:
            pc = operand[0]
            jmplist.append(pc)
            continue
        elif ope == '\xeb\x9a\xab' and regs[operand[1]] == regs[operand[2]]:
            pc = regs[operand[0]]
            jmplist.append(pc)
            continue
        elif ope == '\xeb\x9d\x87' and regs[operand[1]] == regs[operand[2]]:
            pc = mem[operand[0]]
            jmplist.append(pc)
            continue
        pc += 1
run_vm([
[
'\xea\xbc\x96',
0,
'Authentication token: '],
[
'\xea\xbd\xba',
0,
0],
[
'\xea\xbc\x96',
6,
'\xc3\xa1\xc3\x97\xc3\xa4\xc3\x93\xc3\xa2\xc3\xa6\xc3\xad\xc3\xa4\xc3\xa0\xc3\x9f\xc3\xa5\xc3\x89\xc3\x9b\xc3\xa3\xc3\xa5\xc3\xa4\xc3\x89\xc3\x96\xc3\x93\xc3\x89\xc3\xa4\xc3\xa0\xc3\x93\xc3\x89\xc3\x96\xc3\x93\xc3\xa5\xc3\xa4\xc3\x89\xc3\x93\xc3\x9a\xc3\x95\xc3\xa6\xc3\xaf\xc3\xa8\xc3\xa4\xc3\x9f\xc3\x99\xc3\x9a\xc3\x89\xc3\x9b\xc3\x93\xc3\xa4\xc3\xa0\xc3\x99\xc3\x94\xc3\x89\xc3\x93\xc3\xa2\xc3\xa6\xc3\x89\xc3\xa0\xc3\x93\xc3\x9a\xc3\x95\xc3\x93\xc3\x92\xc3\x99\xc3\xa6\xc3\xa4\xc3\xa0\xc3\x89\xc3\xa4\xc3\xa0\xc3\x9f\xc3\xa5\xc3\x89\xc3\x9f\xc3\xa5\xc3\x89\xc3\xa4\xc3\xa0\xc3\x93\xc3\x89\xc3\x9a\xc3\x93\xc3\xa1\xc3\x89\xc2\xb7\xc3\x94\xc3\xa2\xc3\x97\xc3\x9a\xc3\x95\xc3\x93\xc3\x94\xc3\x89\xc2\xb3\xc3\x9a\xc3\x95\xc3\xa6\xc3\xaf\xc3\xa8\xc3\xa4\xc3\x9f\xc3\x99\xc3\x9a\xc3\x89\xc3\x85\xc3\xa4\xc3\x97\xc3\x9a\xc3\x94\xc3\x97\xc3\xa6\xc3\x94\xc3\x89\xc3\x97\xc3\x9a\xc3\xaf\xc3\xa1\xc3\x97\xc3\xaf\xc3\xa5\xc3\x89\xc3\x9f\xc3\x89\xc3\x94\xc3\x99\xc3\x9a\xc3\xa4\xc3\x89\xc3\xa6\xc3\x93\xc3\x97\xc3\x9c\xc3\x9c\xc3\xaf\xc3\x89\xc3\xa0\xc3\x97\xc3\xa2\xc3\x93\xc3\x89\xc3\x97\xc3\x89\xc3\x91\xc3\x99\xc3\x99\xc3\x94\xc3\x89\xc3\xa2\xc3\x9f\xc3\x94\xc3\x89\xc3\x96\xc3\xa3\xc3\xa4\xc3\x89\xc3\x9f\xc3\x89\xc3\xa6\xc3\x93\xc3\x97\xc3\x9c\xc3\x9c\xc3\xaf\xc3\x89\xc3\x93\xc3\x9a\xc3\x9e\xc3\x99\xc3\xaf\xc3\x89\xc3\xa4\xc3\xa0\xc3\x9f\xc3\xa5\xc3\x89\xc3\xa5\xc3\x99\xc3\x9a\xc3\x91\xc3\x89\xc3\x9f\xc3\x89\xc3\xa0\xc3\x99\xc3\xa8\xc3\x93\xc3\x89\xc3\xaf\xc3\x99\xc3\xa3\xc3\x89\xc3\xa1\xc3\x9f\xc3\x9c\xc3\x9c\xc3\x89\xc3\x93\xc3\x9a\xc3\x9e\xc3\x99\xc3\xaf\xc3\x89\xc3\x9f\xc3\xa4\xc3\x89\xc3\x97\xc3\xa5\xc3\xa1\xc3\x93\xc3\x9c\xc3\x9c\xc2\x97\xc3\x89\xc3\xaf\xc3\x99\xc3\xa3\xc3\xa4\xc3\xa3\xc3\x96\xc3\x93\xc2\x9a\xc3\x95\xc3\x99\xc3\x9b\xc2\x99\xc3\xa1\xc3\x97\xc3\xa4\xc3\x95\xc3\xa0\xc2\xa9\xc3\xa2\xc2\xab\xc2\xb3\xc2\xa3\xc3\xaf\xc2\xb2\xc3\x95\xc3\x94\xc3\x88\xc2\xb7\xc2\xb1\xc3\xa2\xc2\xa8\xc3\xab'],
[
'\xea\xbc\x96',
2,
2 ** (3 * 2 + 1) - 2 ** (2 + 1)],
[
'\xea\xbc\x96',
4,
15],
[
'\xea\xbc\x96',
3,
1],
[
'\xeb\x83\x83',
2,
2,
3],
[
'\xeb\x89\x83',
2,
2,
4],
[
'\xea\xb4\xa1',
0,
2],
[
'\xeb\x8c\x92',
3],
[
'\xea\xbe\xae',
6,
3],
[
'\xea\xbc\x96',
0,
'Thanks.'],
[
'\xea\xbc\x96',
1,
'Authorizing access...'],
[
'\xeb\x8f\xaf',
0],
[
'\xeb\x94\x93',
0,
0],
[
'\xea\xbe\xae',
0,
2],
[
'\xea\xbf\x9a',
0,
4],
[
'\xea\xbc\x96',
5,
19],
[
'\xea\xbd\xb2',
0,
6,
5],
[
'\xeb\x8f\xaf',
1],
[
'\xeb\x93\x83'],
[
'\xea\xbc\x96',
1,
'Access denied!'],
[
'\xeb\x8f\xaf',
1],
[
'\xeb\x93\x83']])
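Two of the VM opcodes above decode strings character by character: one XORs each character with a register value, the other subtracts one. A standalone restatement of those two loops (function names and sample values are illustrative, not taken from the bytecode):

```python
def xor_decode(s, key):
    # Mirrors the '\xea\xbe\xae' opcode: XOR every character with a key.
    return ''.join(chr(ord(c) ^ key) for c in s)

def shift_decode(s, delta):
    # Mirrors the '\xea\xbf\x9a' opcode: subtract a constant from every char.
    return ''.join(chr(ord(c) - delta) for c in s)

# XOR with the same key twice is the identity, so decoding round-trips.
roundtrip = xor_decode(xor_decode('flag', 0x42), 0x42)  # 'flag'
shifted = shift_decode('ifmmp', 1)  # 'hello'
```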
# File: lifelist/lifelist/settings/__init__.py (repo: andela-mnzomo/life-list, license: Unlicense)
import os
from django_envie.workroom import convertfiletovars
convertfiletovars()
# Ensure development settings are not used in testing and production:
if os.getenv('HEROKU') is not None:
    from production import *
elif os.getenv('CI') is not None:
    from testing import *
else:
    from development import *
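The branch above selects a settings module from environment variables, with Heroku taking precedence over CI and development as the fallback. A standalone restatement that returns the chosen name instead of star-importing it (the helper name is illustrative):

```python
import os

def settings_name():
    # Same precedence as the settings package: HEROKU beats CI,
    # and the default is development.
    if os.getenv('HEROKU') is not None:
        return 'production'
    elif os.getenv('CI') is not None:
        return 'testing'
    return 'development'

os.environ.pop('HEROKU', None)  # make the demo deterministic
os.environ['CI'] = '1'
chosen = settings_name()  # 'testing'
```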
# File: server/app/schemas.py (repo: tsukumijima/KonomiTV, license: MIT)
from datetime import datetime
from pydantic import AnyHttpUrl, BaseModel, Field, FilePath, PositiveInt
from pydantic.networks import stricturl
from tortoise.contrib.pydantic import pydantic_model_creator
from typing import Any, Dict, List, Literal, Optional
from app import models
# 環境設定を表す Pydantic モデル
# バリデーションは環境設定をこの Pydantic モデルに通して行う
class Config(BaseModel):
class General(BaseModel):
debug: bool
backend: Literal['Mirakurun', 'EDCB']
mirakurun_url: AnyHttpUrl
edcb_url: stricturl(allowed_schemes={'tcp'}, tld_required=False)
class LiveStream(BaseModel):
encoder: Literal['FFmpeg', 'QSVEncC', 'NVEncC', 'VCEEncC']
max_alive_time: PositiveInt
debug_mode_ts_path: Optional[FilePath]
general: General
livestream: LiveStream
# モデルを表す Pydantic モデル
# 基本的には pydantic_model_creator() で Tortoise ORM モデルから変換したものを継承
# JSONField など変換だけでは補いきれない部分や、新しく追加したいカラムなどを追加で定義する
# Channel モデルで Program モデルを使っているため、先に定義する
class Program(pydantic_model_creator(models.Program, name='Program')):
class Genre(BaseModel):
major: str
middle: str
detail: Dict[str, str]
genre: List[Genre]
class Channel(pydantic_model_creator(models.Channel, name='Channel')):
is_display: bool = True # 追加カラム
viewers: int
program_present: Optional[Program] # 追加カラム
program_following: Optional[Program] # 追加カラム
class LiveStream(BaseModel):
# LiveStream は特殊なモデルのため、ここで全て定義する
status: str
detail: str
updated_at: float
clients_count: int
class TwitterAccount(pydantic_model_creator(models.TwitterAccount, name='TwitterAccount',
exclude=('access_token', 'access_token_secret'))):
pass
class User(pydantic_model_creator(models.User, name='User',
exclude=('password', 'client_settings', 'niconico_access_token', 'niconico_refresh_token', 'created_at', 'updated_at'))):
twitter_accounts: List[TwitterAccount] # 追加カラム
created_at: datetime # twitter_accounts の下に配置するために、一旦 exclude した上で再度定義する
updated_at: datetime # twitter_accounts の下に配置するために、一旦 exclude した上で再度定義する
# API リクエストに利用する Pydantic モデル
# リクエストボティの JSON の構造を表す
class UserCreateRequest(BaseModel):
username: str
password: str
class UserUpdateRequest(BaseModel):
username: Optional[str]
password: Optional[str]
class UserUpdateRequestForAdmin(BaseModel):
username: Optional[str]
password: Optional[str]
is_admin: Optional[bool]
# API レスポンスに利用する Pydantic モデル
# モデルを List や Dict でまとめたものが中心
class Channels(BaseModel):
GR: List[Channel]
BS: List[Channel]
CS: List[Channel]
CATV: List[Channel]
SKY: List[Channel]
STARDIGIO: List[Channel]
class JikkyoSession(BaseModel):
is_success: bool
audience_token: Optional[str]
detail: str
class LiveStreams(BaseModel):
Restart: Dict[str, LiveStream]
Idling: Dict[str, LiveStream]
ONAir: Dict[str, LiveStream]
Standby: Dict[str, LiveStream]
Offline: Dict[str, LiveStream]
class ClientSettings(BaseModel):
    # See client/src/utils/Utils.ts for details
    # Settings that would cause more trouble than benefit if synced across devices are excluded
pinned_channel_ids: List[str] = Field([])
is_display_superimpose_tv: bool = Field(True)
panel_display_state: Literal['RestorePreviousState', 'AlwaysDisplay', 'AlwaysFold'] = Field('RestorePreviousState')
panel_active_tab: Literal['Program', 'Channel', 'Comment', 'Twitter'] = Field('Program')
capture_save_mode: Literal['Browser', 'UploadServer', 'Both'] = Field('Browser')
capture_caption_mode: Literal['VideoOnly', 'CompositingCaption', 'Both'] = Field('Both')
comment_speed_rate: float = Field(1)
comment_font_size: int = Field(34)
class ThirdpartyAuthURL(BaseModel):
authorization_url: Optional[str]
class TweetResult(BaseModel):
is_success: bool
tweet_url: Optional[str]
detail: str
class Users(BaseModel):
__root__: List[User]
class UserAccessToken(BaseModel):
access_token: str
token_type: str
8aa8967925a4ef55242a5c358efc414f7546bcbc | 207 | py | Python | config_example.py | Danielhiversen/strava-maps | d355b152030f99adaf2222e3543c499d7866474d | ["MIT"]
#
# "Client-ID" and "Client-Secret" from https://www.strava.com/settings/api
#
CLIENT_ID = '19661'
CLIENT_SECRET = '673409cdf6d02b8bc47b0e88cd03015283dddba2'
AUTH_URL = 'http://127.0.0.1:7123/auth'
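These credentials feed Strava's standard OAuth2 authorization flow; a stdlib-only sketch of building the authorize URL from the values above (the scope value is an assumption for illustration, not taken from this repo):

```python
from urllib.parse import urlencode

CLIENT_ID = '19661'
AUTH_URL = 'http://127.0.0.1:7123/auth'  # redirect target, as configured above

params = {
    'client_id': CLIENT_ID,
    'redirect_uri': AUTH_URL,
    'response_type': 'code',
    'scope': 'activity:read',  # assumed scope for reading activities
}
authorize_url = 'https://www.strava.com/oauth/authorize?' + urlencode(params)
print(authorize_url)
```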
76d00e6834946026e774b2fb9e99057147da0d88 | 971 | py | Python | llspy/gui/styles.py | VolkerH/LLSpy | d14b2387058f679981ff08af546570527bc723d9 | ["BSD-3-Clause"]
from PyQt5 import QtWidgets, QtGui, QtCore
# First call APP.setStyle(QtWidgets.QStyleFactory.create('Fusion')), then apply this palette with APP.setPalette(DarkPalette)
DarkPalette = QtGui.QPalette()
DarkPalette.setColor(QtGui.QPalette.Window, QtGui.QColor(53, 53, 53))
DarkPalette.setColor(QtGui.QPalette.WindowText, QtCore.Qt.lightGray)
DarkPalette.setColor(QtGui.QPalette.Base, QtGui.QColor(15, 15, 15))
DarkPalette.setColor(QtGui.QPalette.AlternateBase, QtGui.QColor(53, 53, 53))
DarkPalette.setColor(QtGui.QPalette.ToolTipBase, QtCore.Qt.lightGray)
DarkPalette.setColor(QtGui.QPalette.ToolTipText, QtCore.Qt.lightGray)
DarkPalette.setColor(QtGui.QPalette.Text, QtCore.Qt.gray)
DarkPalette.setColor(QtGui.QPalette.Button, QtGui.QColor(53, 53, 53))
DarkPalette.setColor(QtGui.QPalette.ButtonText, QtCore.Qt.gray)
DarkPalette.setColor(QtGui.QPalette.BrightText, QtCore.Qt.red)
DarkPalette.setColor(QtGui.QPalette.Highlight, QtGui.QColor(142, 45, 197).lighter())
DarkPalette.setColor(QtGui.QPalette.HighlightedText, QtCore.Qt.black)
76f0169985ddb6cd8e671bb5885f79cb1cc9bf70 | 6,821 | py | Python | src/utils/util.py | NKUST-ITC/NKUST-AP-API | 96b5961170fb99f87490be9abdf869a8556c25d3 | ["MIT"] | stars: 7 | issues: 71 | forks: 6
import random
import string
import falcon
from cache.ap_cache import login as webap_login
from cache.bus_cache import login as bus_login
from cache.library_cache import login as library_login
from cache.leave_cache import login as leave_login
from utils import error_code, config
def randStr(lens):
return ''.join([random.choice(string.ascii_letters + string.digits) for n in range(lens)])
def max_body(limit):
def hook(req, resp, resource, params):
length = req.content_length
if length is not None and length > limit:
msg = ('The size of the request is too large. The body must not '
'exceed ' + str(limit) + ' bytes in length.')
raise falcon.HTTPPayloadTooLarge(
'Request body is too large', msg)
return hook
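max_body returns a closure that falcon runs via @falcon.before; the same closure-over-limit pattern can be exercised without falcon installed. In this sketch, FakeRequest and a plain ValueError stand in for falcon's Request and HTTPPayloadTooLarge:

```python
def size_limit_hook(limit):
    """Return a hook that rejects request bodies larger than `limit` bytes."""
    def hook(req):
        length = req.content_length
        if length is not None and length > limit:
            raise ValueError('The size of the request is too large. The body '
                             'must not exceed ' + str(limit) + ' bytes in length.')
    return hook

class FakeRequest:
    def __init__(self, content_length):
        self.content_length = content_length

check = size_limit_hook(1024)
check(FakeRequest(512))        # under the limit: passes silently
try:
    check(FakeRequest(4096))   # over the limit: rejected
except ValueError as exc:
    print('rejected:', exc)
```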
def webap_login_cache_required(req, resp, resource, params):
"""This function is for falcon.before to use, like a decorator,
check user have cache cookie.
Args:
req ([type]): falcon default.
resp ([type]): falcon default.
resource ([type]): falcon default.
params ([type]): falcon default.
Raises:
        falcon.HTTPUnauthorized: HTTP_401, login failed, or the NKUST server may be down.
        falcon.HTTPServiceUnavailable: HTTP_503, NKUST server problem.
        falcon.HTTPInternalServerError: HTTP_500, unexpected error.
Returns:
[bool]: True, login success.
"""
# jwt payload
payload = req.context['user']['user']
login_status = webap_login(
username=payload['username'], password=payload['password'])
if login_status == error_code.CACHE_WENAP_LOGIN_SUCCESS:
return True
elif login_status == error_code.CACHE_WEBAP_LOGIN_FAIL:
raise falcon.HTTPUnauthorized(description='login fail')
elif login_status == error_code.CACHE_WEBAP_SERVER_ERROR:
raise falcon.HTTPServiceUnavailable()
elif login_status == error_code.CACHE_WEBAP_ERROR:
raise falcon.HTTPInternalServerError()
raise falcon.HTTPInternalServerError()
def bus_login_cache_required(req, resp, resource, params):
"""This function is for falcon.before to use, like a decorator,
check user have cache cookie.
Args:
req ([type]): falcon default.
resp ([type]): falcon default.
resource ([type]): falcon default.
params ([type]): falcon default.
Raises:
        falcon.HTTPUnauthorized: HTTP_401, login failed, or the NKUST server may be down.
        falcon.HTTPForbidden: HTTP_403, wrong campus.
        falcon.HTTPServiceUnavailable: HTTP_503, NKUST server problem, timeout.
        falcon.HTTPInternalServerError: HTTP_500, unexpected error.
Returns:
[bool]: True, login success.
"""
# jwt payload
payload = req.context['user']['user']
login_status = bus_login(
username=payload['username'], password=payload['password'])
if login_status == error_code.CACHE_BUS_LOGIN_SUCCESS:
return True
elif login_status == error_code.BUS_WRONG_PASSWORD:
# 401
raise falcon.HTTPUnauthorized(description='login fail')
elif login_status == error_code.BUS_USER_WRONG_CAMPUS_OR_NOT_FOUND_USER:
# 403
raise falcon.HTTPForbidden(description='wrong campus')
elif login_status == error_code.BUS_TIMEOUT_ERROR:
# 503
raise falcon.HTTPServiceUnavailable()
raise falcon.HTTPInternalServerError()
def library_login_cache_required(req, resp, resource, params):
"""This function is for falcon.before to use, like a decorator,
check user have cache cookie.
Args:
req ([type]): falcon default.
resp ([type]): falcon default.
resource ([type]): falcon default.
params ([type]): falcon default.
Raises:
        falcon.HTTPUnauthorized: HTTP_401, login failed.
        falcon.HTTPServiceUnavailable: HTTP_503, NKUST server problem, timeout or login error.
            (Logging in with a wrong account almost always produces a timeout,
            and without a timeout limit a failed login takes 5 seconds or more
            to report its status.)
        falcon.HTTPInternalServerError: HTTP_500, unexpected error.
Returns:
[bool]: True, login success.
"""
# jwt payload
payload = req.context['user']['user']
login_status = library_login(
username=payload['username'], password=payload['password'])
if login_status == error_code.CACHE_LIBRARY_LOGIN_SUCCESS:
return True
elif login_status == error_code.LIBRARY_LOGIN_FAIL:
# 401
raise falcon.HTTPUnauthorized(description='login fail')
elif login_status == error_code.LIBRARY_ERROR:
raise falcon.HTTPServiceUnavailable()
raise falcon.HTTPInternalServerError()
def leave_login_cache_required(req, resp, resource, params):
"""This function is for falcon.before to use, like a decorator,
check user have cache cookie.
Args:
req ([type]): falcon default.
resp ([type]): falcon default.
resource ([type]): falcon default.
params ([type]): falcon default.
Raises:
        falcon.HTTPUnauthorized: HTTP_401, login failed, or the NKUST server may be down.
        falcon.HTTPServiceUnavailable: HTTP_503, NKUST server problem, timeout.
        falcon.HTTPInternalServerError: HTTP_500, unexpected error.
Returns:
[bool]: True, login success.
"""
# jwt payload
payload = req.context['user']['user']
login_status = leave_login(
username=payload['username'], password=payload['password'])
if login_status == error_code.CACHE_LEAVE_LOGIN_SUCCESS:
return True
elif login_status == error_code.LEAVE_LOGIN_FAIL:
# 401
raise falcon.HTTPUnauthorized(description='login fail')
elif login_status == error_code.LEAVE_LOGIN_TIMEOUT:
# 503
raise falcon.HTTPServiceUnavailable()
raise falcon.HTTPInternalServerError()
def falcon_admin_required(req, resp, resource, params):
"""This function is for falcon.before to use, like a decorator,
check user status, for news use.
Args:
req ([type]): falcon default.
resp ([type]): falcon default.
resource ([type]): falcon default.
params ([type]): falcon default.
Raises:
        falcon.HTTPUnauthorized: HTTP_401, the user is not a news admin.
        falcon.HTTPInternalServerError: HTTP_500, unexpected error.
Returns:
[bool]: True.
"""
# jwt payload
payload = req.context['user']['user']
if payload['username'] in config.NEWS_ADMIN:
return True
elif payload['username'] == config.NEWS_ADMIN_ACCOUNT:
return True
    else:
        # 401
        raise falcon.HTTPUnauthorized(description='not an admin :(')
76fd92f3a3cbb808cdcc240c811c345ab4b72204 | 611 | py | Python | test.py | mitchelparish/docker-locust-2 | 8d6e8d913b44b055d5cf342a9dd625e249284956 | ["Apache-2.0"]
from locust import HttpLocust, TaskSet, task
class UserBehavior(TaskSet):
def on_start(self):
self.login()
def on_stop(self):
self.logout()
def login(self):
self.client.post("/login", {"username":"user1", "password":"p@leCrown14"})
def logout(self):
self.client.post("/logout", {"username":"user1", "password":"p@leCrown14"})
@task(2)
def index(self):
self.client.get("/")
@task(4)
def blog(self):
self.client.get("/blog")
class WebsiteUser(HttpLocust):
task_set = UserBehavior
min_wait = 5000
max_wait = 9000
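The @task(2) and @task(4) arguments are relative weights: blog is picked roughly twice as often as index. A stdlib model of that weighted selection (a simplified illustration of the idea, not Locust's actual scheduler code):

```python
import random

weights = {'index': 2, 'blog': 4}
# Expand each task name by its weight, then sample uniformly from the pool.
population = [name for name, w in weights.items() for _ in range(w)]

random.seed(0)
counts = {'index': 0, 'blog': 0}
for _ in range(6000):
    counts[random.choice(population)] += 1

print(counts['blog'] / counts['index'])  # close to 2.0
```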
0a0edd248d8b5daec16b98082eb7848853a86e88 | 222 | py | Python | second_carrier/lesson/serializers.py | beproud/second_carrier | caf6466cee517e32f760f17696750ae0a7c134cb | ["MIT"] | issues: 1 | forks: 1
from rest_framework import serializers
from .models import Coach
class CoachSerializer(serializers.ModelSerializer):
class Meta:
model = Coach
fields = ["id", "last_name", "first_name", "categeory"]
0a12eac9043a2bc3763dca5fe4cd6fb4ed2c5ec1 | 36 | py | Python | notification/config.py | vinthedark/snet-marketplace-service | 66ed9d093b00f09d3e28ef4d86c4e4c125037d06 | ["MIT"] | stars: 14 | issues: 1079 | forks: 20
EMAIL_FOR_SENDING_NOTIFICATION = ""
0a21dfaf20cfcb296a4756828995f91f5b7e3b32 | 15,485 | py | Python | litenn/core/op/element_wise_op.py | dna2fork/litenn | 016aade87446c41ba666ddfa2422a777ced39fd0 | ["MIT"] | stars: 61 | issues: 3 | forks: 3
import traceback
import numpy as np
import litenn as nn
import litenn.core as nc
from litenn.core import CLKernelHelper as ph
class ElementWiseOpKernel:
"""
Base class for kernels to use in element_wise_op()
"""
def get_forward_kernel_text(self):
"""
return kernel C code block for forward operation
This block will be inserted to the complete OpenCL kernel code.
available variables:
I - input for forward
O - store result of forward
You can declare and use intermediate C variables.
example code block for ReLU activation:
return "O = I * (I >= 0);"
"""
raise NotImplementedError()
def get_backward_kernel_text(self):
"""
return kernel C code for backward operation
This block will be inserted to the complete OpenCL kernel code.
available variables:
I - input for forward
O - result of forward
dO - gradient of O
dI - store result of backward for input I
example code block for backward ReLU activation:
return "dI = dO * (I >= 0);"
"""
raise NotImplementedError()
def get_op_name(self):
raise NotImplementedError()
def element_wise_op(ElementWiseOpKernel_cls, ElementWiseOpKernel_args, input_t, output_t=None, is_add_to_output=False):
"""
operator for ElementWiseOpKernel ops
arguments
ElementWiseOpKernel_cls class of ElementWiseOpKernel
ElementWiseOpKernel_args args to construct ElementWiseOpKernel_cls
output_t compute result to this Tensor.
Tensor may be with different shape, but should match total size.
gradfn will not be set.
is_add_to_output add result to output_t if output_t is set.
"""
is_add_to_output = False if output_t is None else is_add_to_output
op = nc.Cacheton.get(_ElementWiseOp, ElementWiseOpKernel_cls, ElementWiseOpKernel_args, input_t.shape, is_add_to_output)
if output_t is None:
output_t = nn.Tensor ( op.output_shape )
output_t._set_op_name(f'{op.kernel.get_op_name()}')
output_t._assign_gradfn (input_t, lambda O_t, dO_t: input_1_gradfn(op, input_t, O_t, dO_t) )
elif output_t.shape.size != op.output_shape.size:
raise ValueError(f'output_t must have size {op.output_shape.size}')
op.forward_krn.run(output_t, input_t)
return output_t
def input_1_gradfn(op, input_t, O_t, dO_t):
op.backward_krn.run(input_t.get_grad(), input_t, O_t, dO_t)
class _ElementWiseOp():
def __init__(self, ElementWiseOpKernel_cls, ElementWiseOpKernel_args, input_shape, is_add_to_output):
self.output_shape = input_shape
self.kernel = ElementWiseOpKernel_cls(*ElementWiseOpKernel_args)
self.forward_krn = nc.CLKernel(global_shape=(input_shape.size,), kernel_text=f"""
__kernel void impl(__global float* O_t, __global const float* I_t)
{{
size_t idx = get_global_id(0);
float I = I_t[idx];
float O = 0.0;
{self.kernel.get_forward_kernel_text()}
O_t[idx] {'+=' if is_add_to_output else '='} O;
}}
""")
self.backward_krn = nc.CLKernel(global_shape=(input_shape.size,), kernel_text=f"""
__kernel void impl(__global float* dI_t, __global const float* I_t, __global const float* O_t, __global const float* dO_t)
{{
size_t idx = get_global_id(0);
float I = I_t[idx];
float O = O_t[idx];
float dO = dO_t[idx];
float dI = 0.0;
{self.kernel.get_backward_kernel_text()}
dI_t[idx] += dI;
}}
""")
class abs_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = fabs(I);"
def get_backward_kernel_text(self): return f"dI = dO * ( I / fabs(I) );"
def get_op_name(self): return f"abs"
def abs_op(input_t, output_t=None, is_add_to_output=False):
return element_wise_op(abs_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def abs(input_t):
return abs_op(input_t)
class add_const_kernel(ElementWiseOpKernel):
def __init__(self, value): self.value = value
def get_forward_kernel_text(self): return f"O = I+({self.value});"
def get_backward_kernel_text(self): return f"dI = dO;"
def get_op_name(self): return f"add_const"
def add_const_op(input_t, value, output_t=None, is_add_to_output=False):
return element_wise_op(add_const_kernel, (value,), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def add_const(input_t, value):
return add_const_op(input_t, value)
class clip_kernel(ElementWiseOpKernel):
def __init__(self, min_value, max_value,preserve_gradient=False):
self.min_value, self.max_value, self.preserve_gradient = min_value, max_value, preserve_gradient
def get_forward_kernel_text(self): return f"O = I*(I>={self.min_value}&I<={self.max_value})+{self.min_value}*(I<{self.min_value})+{self.max_value}*(I>{self.max_value});"
def get_backward_kernel_text(self):
return f"dI = dO;" if self.preserve_gradient else \
f"dI = dO*(I>={self.min_value}&I<={self.max_value});"
def get_op_name(self): return f"clip"
def clip_op(input_t, min_value, max_value, preserve_gradient=False, output_t=None, is_add_to_output=False):
"""
Element-wise clip by min/max value
arguments
input_t Tensor
min_value float
max_value float
preserve_gradient(False)
if False, gradient will be supressed
on values which are outside of range
"""
if min_value > max_value:
raise ValueError(f'{min_value} > {max_value}')
return element_wise_op(clip_kernel, (min_value,max_value,preserve_gradient), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
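The branchless clip expression in the kernel text matches numpy.clip; a quick host-side check of the same formula (numpy is assumed to be available, as it already is in this file):

```python
import numpy as np

lo, hi = 0.0, 1.0
I = np.array([-2.0, 0.5, 3.0])
# Same branchless form as the OpenCL kernel text above.
O = I * ((I >= lo) & (I <= hi)) + lo * (I < lo) + hi * (I > hi)
print(O)  # [0.  0.5 1. ]
assert np.array_equal(O, np.clip(I, lo, hi))
```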
def clip(input_t, min_value, max_value, preserve_gradient=False):
"""
Element-wise clip by min/max value
arguments
input_t Tensor
min_value float
max_value float
preserve_gradient(False)
if False, gradient will be supressed
on values which are outside of range
"""
return clip_op(input_t, min_value, max_value, preserve_gradient)
class cos_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = cos(I);"
def get_backward_kernel_text(self): return f"dI = dO * -sin(I);"
def get_op_name(self): return f"cos"
def cos_op(input_t, output_t=None, is_add_to_output=False):
return element_wise_op(cos_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def cos(input_t):
return cos_op(input_t)
def div_const_op(input_t, value, output_t=None, is_add_to_output=False):
return mul_const_op(input_t, 1.0/value, output_t=output_t, is_add_to_output=is_add_to_output)
def div_const(input_t, value):
return mul_const(input_t, 1.0/value )
class exp_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = exp(I);"
def get_backward_kernel_text(self): return f"dI = dO * exp(I);"
def get_op_name(self): return f"exp"
def exp_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise exponential of input_t.
"""
return element_wise_op(exp_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def exp(input_t):
"""
Element-wise exponential of input_t.
"""
return exp_op(input_t)
class leaky_relu_kernel(ElementWiseOpKernel):
def __init__(self, alpha=0.1): self.alpha = alpha
def get_forward_kernel_text(self): return f"O = I * (I >= 0) + {self.alpha} * I * (I < 0);"
def get_backward_kernel_text(self): return f"dI = dO * ( (I >= 0) + {self.alpha} * (I < 0) );"
def get_op_name(self): return f"leaky_relu({self.alpha})"
def leaky_relu_op(input_t, alpha, output_t=None, is_add_to_output=False):
return element_wise_op(leaky_relu_kernel, (alpha,), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def leaky_relu(input_t, alpha=0.1):
"""
leaky_relu operator
alpha(0.1) float
"""
return leaky_relu_op(input_t, alpha)
class log_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = log(I);"
def get_backward_kernel_text(self): return f"dI = dO * ( 1.0 / I );"
def get_op_name(self): return f"log"
def log_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise natural logarithm.
"""
return element_wise_op(log_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def log(input_t):
"""
Element-wise natural logarithm.
"""
return log_op(input_t)
class mul_const_kernel(ElementWiseOpKernel):
def __init__(self, value): self.value = value
def get_forward_kernel_text(self): return f"O = I*({self.value});"
def get_backward_kernel_text(self): return f"dI = dO*({self.value});"
def get_op_name(self): return f"mul_const"
def mul_const_op(input_t, value, output_t=None, is_add_to_output=False):
return element_wise_op(mul_const_kernel, (value,), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def mul_const(input_t, value):
return mul_const_op(input_t, value)
class rdiv_const_kernel(ElementWiseOpKernel):
def __init__(self, value): self.value = value
def get_forward_kernel_text(self): return f"O = ({self.value}) / I;"
def get_backward_kernel_text(self): return f"dI = dO* ( -( ({self.value}) / (I*I))) ;"
def get_op_name(self): return f"rdiv_const"
def rdiv_const_op(input_t, value, output_t=None, is_add_to_output=False):
return element_wise_op(rdiv_const_kernel, (value,), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def rdiv_const(input_t, value):
return rdiv_const_op(input_t, value)
class rsub_const_kernel(ElementWiseOpKernel):
def __init__(self, value): self.value = value
def get_forward_kernel_text(self): return f"O = ({self.value})-I;"
def get_backward_kernel_text(self): return f"dI = -dO;"
def get_op_name(self): return f"rsub_const"
def rsub_const_op(input_t, value, output_t=None, is_add_to_output=False):
return element_wise_op(rsub_const_kernel, (value,), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def rsub_const(input_t, value):
return rsub_const_op(input_t, value)
class relu_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return "O = I * (I >= 0);"
def get_backward_kernel_text(self): return "dI = dO * (I >= 0);"
def get_op_name(self): return "relu"
def relu_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise relu operator.
"""
return element_wise_op(relu_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def relu(input_t):
"""
Element-wise relu operator.
"""
return relu_op(input_t)
class sigmoid_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = 1.0 / (1.0 + exp(-I));"
def get_backward_kernel_text(self): return f"dI = dO * ( O * (1.0 - O) );"
def get_op_name(self): return f"sigmoid"
def sigmoid_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise sigmoid operator.
"""
return element_wise_op(sigmoid_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def sigmoid(input_t):
"""
Element-wise sigmoid operator.
"""
return sigmoid_op(input_t)
class sin_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = sin(I);"
def get_backward_kernel_text(self): return f"dI = dO * cos(I);"
def get_op_name(self): return f"sin"
def sin_op(input_t, output_t=None, is_add_to_output=False):
return element_wise_op(sin_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def sin(input_t):
return sin_op(input_t)
def softmax(input_t, axis=-1):
"""
Softmax operator.
"""
e = exp(input_t)
return e / e.sum (axis, keepdims=True)
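softmax above exponentiates the raw input, which overflows for large logits; a numerically-stable host-side variant subtracts the per-axis max first (a numpy sketch for comparison, not the litenn implementation):

```python
import numpy as np

def softmax_stable(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)  # shift so the largest logit is 0
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

x = np.array([1000.0, 1001.0])
print(softmax_stable(x))  # finite probabilities; a naive exp(1000.0) would overflow
```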
class sqrt_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = sqrt(I);"
def get_backward_kernel_text(self): return f"dI = dO * ( 1.0 / ( 2 * sqrt(I) ) );"
def get_op_name(self): return f"sqrt"
def sqrt_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise sqrt operator.
"""
return element_wise_op(sqrt_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def sqrt(input_t):
"""
Element-wise sqrt operator.
"""
return sqrt_op(input_t)
class square_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = I*I;"
def get_backward_kernel_text(self): return f"dI = dO * 2 * I;"
def get_op_name(self): return f"square"
def square_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise square operator.
"""
return element_wise_op(square_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def square(input_t):
"""
Element-wise square operator.
"""
return square_op(input_t)
def sub_const_op(input_t, value, output_t=None, is_add_to_output=False):
return add_const_op(input_t, -value, output_t=output_t, is_add_to_output=is_add_to_output)
def sub_const(input_t, value):
return add_const(input_t, -value)
class tanh_kernel(ElementWiseOpKernel):
def get_forward_kernel_text(self): return f"O = 2.0 / (1.0 + exp(-2.0*I)) - 1.0;"
def get_backward_kernel_text(self): return f"dI = dO * ( 1.0 - O * O );"
def get_op_name(self): return f"tanh"
def tanh_op(input_t, output_t=None, is_add_to_output=False):
"""
Element-wise tanh operator.
"""
return element_wise_op(tanh_kernel, (), input_t, output_t=output_t, is_add_to_output=is_add_to_output)
def tanh(input_t):
"""
Element-wise tanh operator.
"""
return tanh_op(input_t)
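Adding a new op only requires a small kernel class supplying the three C snippets. A sketch of a hypothetical softplus kernel in the same style (the base class is stubbed here so the snippet is self-contained; in the file it would subclass the real ElementWiseOpKernel and be dispatched through element_wise_op):

```python
class ElementWiseOpKernel:  # stub of the base class defined earlier in this file
    def get_forward_kernel_text(self): raise NotImplementedError()
    def get_backward_kernel_text(self): raise NotImplementedError()
    def get_op_name(self): raise NotImplementedError()

class softplus_kernel(ElementWiseOpKernel):
    # forward: softplus(x) = log(1 + exp(x))
    def get_forward_kernel_text(self):  return "O = log(1.0 + exp(I));"
    # backward: d/dx softplus(x) = sigmoid(x)
    def get_backward_kernel_text(self): return "dI = dO * (1.0 / (1.0 + exp(-I)));"
    def get_op_name(self):              return "softplus"

k = softplus_kernel()
print(k.get_op_name(), '->', k.get_forward_kernel_text())
```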
def element_wise_op_test():
add_params = { add_const : [1.0],
clip : [0.0, 1.0],
div_const : [2.0],
mul_const : [2.0],
leaky_relu : [0.1],
rdiv_const : [2.0],
rsub_const : [2.0],
sub_const : [1.0],
}
for op in [abs, add_const, clip, cos, div_const, exp, leaky_relu, log, mul_const, rdiv_const, rsub_const, relu, sigmoid, sin, softmax, sqrt, square, sub_const, tanh]:
print(f'{op.__name__}()')
for _ in range(10):
for shape_len in range(1,3):
try:
shape = (np.random.randint( 8, size=(shape_len,) )+1).tolist()
value_n = np.random.randint( 128, size=shape ).astype(np.float32)-64
value_t = nn.Tensor_from_value(value_n)
args = add_params.get(op, None)
if args is None:
args = []
result_t = op( *([value_t]+args) )
result_t.backward(grad_for_non_trainables=True)
if not value_t.has_grad():
raise Exception('No grad.')
except:
raise Exception(f"""
shape : {shape}
op : {op.__name__}
args : {args}
exception : {traceback.format_exc()}
""") | 38.424318 | 174 | 0.670649 | 2,381 | 15,485 | 4.0189 | 0.075179 | 0.053924 | 0.044623 | 0.082872 | 0.703731 | 0.620023 | 0.569757 | 0.534852 | 0.492632 | 0.482078 | 0 | 0.006613 | 0.218792 | 15,485 | 403 | 175 | 38.424318 | 0.784409 | 0.145689 | 0 | 0.085837 | 0 | 0.021459 | 0.137 | 0.02683 | 0 | 0 | 0 | 0 | 0 | 1 | 0.420601 | false | 0 | 0.021459 | 0.287554 | 0.686695 | 0.004292 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0a3010dbe564d93204facfbd2b789a1518726584 | 61 | py | Python | torchkit/tools/__init__.py | cosmic-cortex/torchkit | 9f44c8a500a4345d81feac14b6b200c5d190283a | ["MIT"]
from torchkit.tools.wrapper import Model
__all__ = ['Model']
0a324abd845fd6173c3876f1c142e39cced2278e | 1,151 | py | Python | attrdict.py | postpop/attrdict | c63d8381fd308d1dec6cab88f88c61ba0b230a9f | ["Apache-2.0"]
"""A dictionary on stereoids(sic!)."""
import flammkuchen
from collections import defaultdict
class AttrDict(defaultdict):
"""Dictionaries with dot-notation and default values and deepdish hdf5 io.
# dictionary with default value 42 for new keys (defaults to None)
    ad = AttrDict(default_factory=lambda: 42)
# save to file with zlib compression (defaults to blosc)
ad.save(filename, compression='zlib')
# load from file
ad = AttrDict().load(filename)
"""
def __init__(self, d=None, default_factory=lambda: None, **kwargs):
"""Init with dict or key, name pairs."""
super().__init__(default_factory)
if d is None:
d = {}
if kwargs:
d.update(**kwargs)
for key, value in d.items():
setattr(self, key, value)
def __getattr__(self, key):
return self[key]
def __setattr__(self, key, value):
self[key] = value
def save(self, filename, compression='blosc'):
flammkuchen.save(filename, self, compression=compression)
def load(self, filename, compression='blosc'):
return AttrDict(flammkuchen.load(filename))
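Minimal usage of the class above, with the hdf5 I/O trimmed so the sketch has no flammkuchen dependency:

```python
from collections import defaultdict

class AttrDict(defaultdict):  # same core as above, minus save()/load()
    def __init__(self, d=None, default_factory=lambda: None, **kwargs):
        super().__init__(default_factory)
        for key, value in dict(d or {}, **kwargs).items():
            setattr(self, key, value)
    def __getattr__(self, key):
        return self[key]
    def __setattr__(self, key, value):
        self[key] = value

ad = AttrDict(default_factory=lambda: 42, name='tank')
print(ad.name)        # 'tank'  (dot access hits the dict entry)
print(ad['missing'])  # 42, the default_factory value for unknown keys
```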
0a39cb117b6a36f3f3f4f16dc66b49f0e86cb0e9 | 502 | py | Python | server/logic/tankinfo.py | grggxxy/my-code | c094996bcbd783679bc4c1e5c2ce72da1f6f2c94 | [
"MIT"
] | null | null | null | server/logic/tankinfo.py | grggxxy/my-code | c094996bcbd783679bc4c1e5c2ce72da1f6f2c94 | [
"MIT"
] | null | null | null | server/logic/tankinfo.py | grggxxy/my-code | c094996bcbd783679bc4c1e5c2ce72da1f6f2c94 | [
"MIT"
] | null | null | null | from network.configure import CONFIGURE
class TankInfo(object):
position = CONFIGURE["TANK_INIT_POSITION"]
turret_rotation = 0.0
body_rotation = 0.0
driver_id = 0xff
is_driven = False
hp = CONFIGURE["TANK_INIT_HP"]
@classmethod
def reset(cls):
cls.position = CONFIGURE["TANK_INIT_POSITION"]
cls.turret_rotation = 0.0
cls.body_rotation = 0.0
cls.driver_id = 0xff
cls.is_driven = False
cls.hp = CONFIGURE["TANK_INIT_HP"]
| 25.1 | 54 | 0.653386 | 66 | 502 | 4.727273 | 0.378788 | 0.166667 | 0.217949 | 0.160256 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026738 | 0.25498 | 502 | 19 | 55 | 26.421053 | 0.807487 | 0 | 0 | 0 | 0 | 0 | 0.119522 | 0 | 0 | 0 | 0.015936 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
0a41e89767da26e96eae696cb0f95c84301f0025 | 590 | py | Python | tests/test_problemak.py | ssebastianj/tap-2016 | 5935008a15fb2ff969e0ee8b865ffec1b751c5cc | [
"MIT"
] | null | null | null | tests/test_problemak.py | ssebastianj/tap-2016 | 5935008a15fb2ff969e0ee8b865ffec1b751c5cc | [
"MIT"
] | null | null | null | tests/test_problemak.py | ssebastianj/tap-2016 | 5935008a15fb2ff969e0ee8b865ffec1b751c5cc | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from tap.problemak.solucion import diferencia_hojas
class TestProblemaK:
def test_diferencia_de_hojas(self):
assert diferencia_hojas(
[
(2, 1, 2),
(5, 3),
(1, 2)
]
) == 2
assert diferencia_hojas(
[
(6, 2, 3),
(1, 6, 4, 3, 2, 2),
(1, 2),
(2, 3),
(3, 4),
(3, 5),
(5, 6)
]
) == -1
| 20.344828 | 51 | 0.359322 | 57 | 590 | 3.526316 | 0.438596 | 0.223881 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102113 | 0.518644 | 590 | 28 | 52 | 21.071429 | 0.605634 | 0.035593 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a44b19cec8c70283e952cd461d81a0b16809cbb | 1,195 | py | Python | hashdd/algorithms/hashdd_haval224.py | hashdd/pyhashdd | 938366a8c1ff26e074c419d71b09d592730940e9 | [
"Apache-2.0",
"BSD-3-Clause"
] | 20 | 2017-02-22T11:32:24.000Z | 2019-11-25T18:51:41.000Z | hashdd/algorithms/hashdd_haval224.py | hashdd/pyhashdd | 938366a8c1ff26e074c419d71b09d592730940e9 | [
"Apache-2.0",
"BSD-3-Clause"
] | 11 | 2017-02-24T15:18:15.000Z | 2022-01-13T00:41:29.000Z | hashdd/algorithms/hashdd_haval224.py | hashdd/pyhashdd | 938366a8c1ff26e074c419d71b09d592730940e9 | [
"Apache-2.0",
"BSD-3-Clause"
] | 4 | 2017-02-22T14:42:52.000Z | 2017-11-26T21:24:04.000Z | """
@brad_anton
License:
Copyright 2015 hashdd.com
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import re
import hashlib
from hashdd.algorithms.algorithm import algorithm
from hashdd.mhashlib import haval224 as mhaval224
class hashdd_haval224(algorithm):
name = 'hashdd_haval224'
validation_regex = re.compile(r'^[a-f0-9]{56}$', re.IGNORECASE)
sample = 'A36CBEF0F3A26F0EBDC7F169B2B97AF84B27A1AD5AABC146F50AC131'
def setup(self, arg):
self.h = mhaval224()
def digest(self):
return self.h.digest()
def hexdigest(self):
return self.h.hexdigest().upper().decode()
def update(self, arg):
self.h.update(arg)
hashlib.hashdd_haval224 = hashdd_haval224
| 27.159091 | 72 | 0.74477 | 166 | 1,195 | 5.325301 | 0.584337 | 0.067873 | 0.029412 | 0.036199 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061678 | 0.172385 | 1,195 | 43 | 73 | 27.790698 | 0.832154 | 0.477824 | 0 | 0 | 0 | 0 | 0.137987 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.235294 | 0.117647 | 0.823529 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0a4ef5b5033b278923e6ebf02f227e244cb8839a | 630 | py | Python | 18. Decorators - Exercise/03_bold_italic_underline.py | elenaborisova/Python-OOP | 584882c08f84045b12322917f0716c7c7bd9befc | [
"MIT"
] | 1 | 2021-03-27T16:56:30.000Z | 2021-03-27T16:56:30.000Z | 18. Decorators - Exercise/03_bold_italic_underline.py | elenaborisova/Python-OOP | 584882c08f84045b12322917f0716c7c7bd9befc | [
"MIT"
] | null | null | null | 18. Decorators - Exercise/03_bold_italic_underline.py | elenaborisova/Python-OOP | 584882c08f84045b12322917f0716c7c7bd9befc | [
"MIT"
] | 1 | 2021-03-15T14:50:39.000Z | 2021-03-15T14:50:39.000Z | def make_bold(func):
def wrapper(*args, **kwargs):
return f'<b>{func(*args, **kwargs)}</b>'
return wrapper
def make_italic(func):
def wrapper(*args, **kwargs):
return f'<i>{func(*args, **kwargs)}</i>'
return wrapper
def make_underline(func):
def wrapper(*args, **kwargs):
return f'<u>{func(*args, **kwargs)}</u>'
return wrapper
@make_bold
@make_italic
@make_underline
def greet(name):
return f'Hello, {name}'
@make_bold
@make_italic
@make_underline
def greet_all(*args):
return f'Hello, {", ".join(args)}'
print(greet('Peter'))
print(greet_all('Peter', 'George'))
| 18 | 48 | 0.626984 | 87 | 630 | 4.413793 | 0.252874 | 0.15625 | 0.109375 | 0.140625 | 0.445313 | 0.445313 | 0.445313 | 0.203125 | 0 | 0 | 0 | 0 | 0.184127 | 630 | 34 | 49 | 18.529412 | 0.747082 | 0 | 0 | 0.5 | 0 | 0 | 0.226984 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.208333 | 0.666667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0a54225ed619d78362fd9acf8772754916fdd463 | 1,521 | py | Python | cachetclient/v1/metrics.py | ccampo133/cachet-client | 8323c71fdc6b04b45912bf8f4e7d87f901084661 | [
"MIT"
] | null | null | null | cachetclient/v1/metrics.py | ccampo133/cachet-client | 8323c71fdc6b04b45912bf8f4e7d87f901084661 | [
"MIT"
] | null | null | null | cachetclient/v1/metrics.py | ccampo133/cachet-client | 8323c71fdc6b04b45912bf8f4e7d87f901084661 | [
"MIT"
] | null | null | null | from cachetclient.base import Manager, Resource
class Metrics(Resource):
@property
def id(self) -> int:
return self.get('id')
@property
def name(self) -> str:
return self.get('name')
@property
def suffix(self) -> str:
return self.get('suffix')
@property
def description(self):
return self.get('description')
@property
def default_value(self):
return self.get('default_value')
@property
def calc_type(self) -> int:
return self.get('calc_type')
@property
def display_chart(self) -> int:
return self.get('display_chart')
@property
def created_at(self):
return self.get('created_at')
@property
def updated_at(self):
return self.get('updated_at')
@property
def default_view_name(self):
return self.get('default_view_name')
class MetricsManager(Manager):
resource_class = Metrics
path = 'metrics'
def create(self):
pass
def list(self, page: int = 1, per_page: int = 20):
"""
List all metrics
Keyword Args:
page (int): Page to start listing
per_page (int): Number of entries per page
Returns:
Generator of Metrics instances
"""
return self._list_paginated(
self.path,
page=page,
per_page=per_page,
)
def get(self):
pass
def delete(self, metrics_id):
self._delete(self.path, metrics_id)
| 20.013158 | 54 | 0.579224 | 179 | 1,521 | 4.78771 | 0.290503 | 0.128355 | 0.121354 | 0.05951 | 0.172695 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002885 | 0.316239 | 1,521 | 75 | 55 | 20.28 | 0.821154 | 0.105851 | 0 | 0.255319 | 0 | 0 | 0.078704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.297872 | false | 0.042553 | 0.021277 | 0.170213 | 0.595745 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0a59d439833554cf01bd365bff31966f77a64875 | 1,353 | py | Python | stan/plugins.py | ahartikainen/pystan | 899658e9c242a48b394ca2378a8404a17a58a2ce | [
"0BSD"
] | 1,030 | 2015-01-05T19:15:04.000Z | 2022-03-31T00:18:14.000Z | stan/plugins.py | ahartikainen/pystan | 899658e9c242a48b394ca2378a8404a17a58a2ce | [
"0BSD"
] | 691 | 2015-01-04T19:04:40.000Z | 2022-03-02T11:42:21.000Z | stan/plugins.py | ahartikainen/pystan | 899658e9c242a48b394ca2378a8404a17a58a2ce | [
"0BSD"
] | 243 | 2015-01-12T22:10:23.000Z | 2022-03-09T11:44:09.000Z | import abc
from typing import Generator
import pkg_resources
import stan.fit
def get_plugins() -> Generator[pkg_resources.EntryPoint, None, None]:
"""Iterate over available plugins."""
return pkg_resources.iter_entry_points(group="stan.plugins")
class PluginBase(abc.ABC):
"""Base class for PyStan plugins.
Plugin developers should create a class which subclasses `PluginBase`.
This class must be referenced in their package's entry points section.
"""
# Implementation note: this plugin system is simple because there are only
# a couple of places a plugin developer might want to change behavior. For
# a more full-featured plugin system, see Stevedore
# (<https://docs.openstack.org/stevedore>). This plugin system follows
# (approximately) the pattern stevedore labels `ExtensionManager`.
def on_post_sample(self, fit: stan.fit.Fit) -> stan.fit.Fit:
"""Called with Fit instance when sampling has finished.
The plugin can report information about the samples
contained in the Fit object. It may also add to or
modify the Fit instance.
If the plugin only analyzes the contents of `fit`,
it must return the `fit`.
Argument:
fit: Fit instance.
Returns:
The Fit instance.
"""
return fit
| 30.066667 | 78 | 0.687361 | 178 | 1,353 | 5.179775 | 0.589888 | 0.047722 | 0.034707 | 0.0282 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.240946 | 1,353 | 44 | 79 | 30.75 | 0.89776 | 0.637842 | 0 | 0 | 0 | 0 | 0.032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.444444 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
0a5b9443d6292a2e315379a198c956b9efd23769 | 862 | py | Python | jesse/strategies/TestTakeProfitPriceIsReplacedWithMarketOrderWhenMoreConvenientLongPosition/__init__.py | weselyj/jesse | 24ce05c17494b6ac7b4201cf06b4fa9d16d4d709 | [
"MIT"
] | null | null | null | jesse/strategies/TestTakeProfitPriceIsReplacedWithMarketOrderWhenMoreConvenientLongPosition/__init__.py | weselyj/jesse | 24ce05c17494b6ac7b4201cf06b4fa9d16d4d709 | [
"MIT"
] | 7 | 2022-02-14T11:39:49.000Z | 2022-03-31T04:57:36.000Z | jesse/strategies/TestTakeProfitPriceIsReplacedWithMarketOrderWhenMoreConvenientLongPosition/__init__.py | weselyj/jesse | 24ce05c17494b6ac7b4201cf06b4fa9d16d4d709 | [
"MIT"
] | null | null | null | from jesse.strategies import Strategy
from jesse.enums import order_types
class TestTakeProfitPriceIsReplacedWithMarketOrderWhenMoreConvenientLongPosition(Strategy):
def before(self) -> None:
if self.price == 15:
last_trade = self.trades[-1]
# it should have closed on the market price at the time being 10 instead of 8
last_trade.exit_price = 10
# the order type should be market
assert self.orders[0].type == order_types.MARKET
assert self.orders[1].type == order_types.MARKET
def should_long(self) -> bool:
return self.price == 10
def go_long(self):
self.buy = 1, 10
self.take_profit = 1, 8
def should_short(self) -> bool:
return False
def go_short(self):
pass
def should_cancel(self):
return False
| 27.806452 | 91 | 0.638051 | 110 | 862 | 4.890909 | 0.472727 | 0.055762 | 0.05948 | 0.081784 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027642 | 0.286543 | 862 | 30 | 92 | 28.733333 | 0.847154 | 0.12413 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.3 | false | 0.05 | 0.1 | 0.15 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
0a5bf774cc33ccc53c9cb8f3ed7ea2cd197af7f2 | 194 | py | Python | pimill/app.py | fibasile/pi.mill | fc6e1d830d7759629c8de5bef74a4222d542e63f | [
"MIT"
] | 1 | 2015-07-23T22:30:33.000Z | 2015-07-23T22:30:33.000Z | pimill/app.py | fibasile/pi.mill | fc6e1d830d7759629c8de5bef74a4222d542e63f | [
"MIT"
] | null | null | null | pimill/app.py | fibasile/pi.mill | fc6e1d830d7759629c8de5bef74a4222d542e63f | [
"MIT"
] | null | null | null | from bottle import Bottle, route, template
from server import Server
import os
import logging
import logging.config
if __name__ == "__main__":
s = Server(debug=True)
s.run()
| 14.923077 | 42 | 0.695876 | 26 | 194 | 4.884615 | 0.615385 | 0.188976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226804 | 194 | 12 | 43 | 16.166667 | 0.846667 | 0 | 0 | 0 | 0 | 0 | 0.041237 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.625 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
6a52881eb8a41fca00165f4a2ab7342ecf0f0f9b | 242 | py | Python | cauldron/test/steptesting/libs/_testlib/__init__.py | DanMayhew/cauldron | ac41481830fc1a363c145f4b58ce785aac054d10 | [
"MIT"
] | 90 | 2016-09-02T15:11:10.000Z | 2022-01-02T11:37:57.000Z | cauldron/test/steptesting/libs/_testlib/__init__.py | DanMayhew/cauldron | ac41481830fc1a363c145f4b58ce785aac054d10 | [
"MIT"
] | 86 | 2016-09-23T16:52:22.000Z | 2022-03-31T21:39:56.000Z | cauldron/test/steptesting/libs/_testlib/__init__.py | DanMayhew/cauldron | ac41481830fc1a363c145f4b58ce785aac054d10 | [
"MIT"
] | 261 | 2016-12-22T05:36:48.000Z | 2021-11-26T12:40:42.000Z |
def patching_test(value):
"""
A test function for patching values during step tests. By default this
function returns the value it was passed. Patching this should change its
behavior in step tests.
"""
return value
| 24.2 | 77 | 0.702479 | 34 | 242 | 4.970588 | 0.735294 | 0.106509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247934 | 242 | 9 | 78 | 26.888889 | 0.928571 | 0.694215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6a61b99dbe42604cd74c5f55cf455648c88d5b44 | 165 | py | Python | Python3-built-in-functions/0x48map.py | DropsDevopsOrg/PythonWiki | 0c344edad37ed34c03cf066df991922cb4bdeee0 | [
"Apache-2.0"
] | 15 | 2019-04-09T04:20:21.000Z | 2022-02-08T20:33:42.000Z | Python3-built-in-functions/0x48map.py | sep8dog/PythonWiki | 0c344edad37ed34c03cf066df991922cb4bdeee0 | [
"Apache-2.0"
] | 1 | 2019-07-22T07:27:10.000Z | 2020-10-09T08:00:17.000Z | Python3-built-in-functions/0x48map.py | sep8dog/PythonWiki | 0c344edad37ed34c03cf066df991922cb4bdeee0 | [
"Apache-2.0"
] | 16 | 2019-09-13T14:06:42.000Z | 2022-03-15T06:02:01.000Z | # map() applies the given function to every element of the specified sequence. The first argument is a function, the second a sequence.
# Each element of the sequence is passed to the function in turn; map() returns an iterator.
def square(x):
return x**2
ls = [1, 2, 3]
for i in map(square, ls):
print(i) | 18.333333 | 42 | 0.648485 | 24 | 165 | 4.458333 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030769 | 0.212121 | 165 | 9 | 43 | 18.333333 | 0.792308 | 0.406061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.2 | 0.4 | 0.2 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
6a649b1afb1faf723d9535c5c38cb3a7a1a52ae4 | 142 | py | Python | qradiomics/__init__.py | taznux/radiomics-tools | 74d3b314ddc00427fd7f2a17f79fbde372dee2ce | [
"MIT"
] | 18 | 2016-07-01T20:37:27.000Z | 2021-12-29T07:16:29.000Z | qradiomics/__init__.py | taznux/radiomics-tools | 74d3b314ddc00427fd7f2a17f79fbde372dee2ce | [
"MIT"
] | 13 | 2016-07-18T22:14:19.000Z | 2019-08-29T15:33:07.000Z | qradiomics/__init__.py | taznux/radiomics-tools | 74d3b314ddc00427fd7f2a17f79fbde372dee2ce | [
"MIT"
] | 13 | 2016-08-27T06:59:07.000Z | 2021-01-04T07:41:27.000Z | import os.path as osp
pkg_dir = osp.abspath(osp.dirname(__file__))
from . import io
from . import util
from .io import metadata as metadata
| 17.75 | 44 | 0.760563 | 24 | 142 | 4.291667 | 0.583333 | 0.194175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161972 | 142 | 7 | 45 | 20.285714 | 0.865546 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6aa107d2c94654ac7d746f4b90f981225ff013b3 | 1,842 | py | Python | webapp/crowdclass/migrations/0004_auto_20160425_0119.py | dorisjlee/crowdclass | 5497c58c84cadfb017669a5f2a3ff020dc1a0d75 | [
"BSD-3-Clause"
] | null | null | null | webapp/crowdclass/migrations/0004_auto_20160425_0119.py | dorisjlee/crowdclass | 5497c58c84cadfb017669a5f2a3ff020dc1a0d75 | [
"BSD-3-Clause"
] | null | null | null | webapp/crowdclass/migrations/0004_auto_20160425_0119.py | dorisjlee/crowdclass | 5497c58c84cadfb017669a5f2a3ff020dc1a0d75 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.9.5 on 2016-04-25 01:19
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('crowdclass', '0003_auto_20160425_0113'),
]
operations = [
migrations.AddField(
model_name='bardescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='bulgedescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='dustdescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='edgedescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='ellipticaldescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='lensdescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='mergingdescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='spiraldescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
migrations.AddField(
model_name='tidaldescription',
name='image',
field=models.IntegerField(blank=True, null=True),
),
]
| 30.196721 | 61 | 0.565689 | 163 | 1,842 | 6.288344 | 0.306748 | 0.158049 | 0.201951 | 0.237073 | 0.640976 | 0.640976 | 0.640976 | 0.640976 | 0.640976 | 0.593171 | 0 | 0.025457 | 0.31759 | 1,842 | 60 | 62 | 30.7 | 0.789976 | 0.036374 | 0 | 0.679245 | 1 | 0 | 0.126975 | 0.024831 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.037736 | 0 | 0.09434 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6aa45cb25506912ba8540e1aa38c6d4b77413b5c | 59 | py | Python | ex002.py | klebervieirati/PYTHON | 1bb03e775df2ff0d996aab0e3ce8f6f058bc5a05 | [
"MIT"
] | null | null | null | ex002.py | klebervieirati/PYTHON | 1bb03e775df2ff0d996aab0e3ce8f6f058bc5a05 | [
"MIT"
] | null | null | null | ex002.py | klebervieirati/PYTHON | 1bb03e775df2ff0d996aab0e3ce8f6f058bc5a05 | [
"MIT"
] | null | null | null | nome = input('Enter your name: ')
print('Welcome,', nome) | 29.5 | 30 | 0.728814 | 10 | 59 | 4.3 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 59 | 2 | 31 | 29.5 | 0.811321 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
6aa67e2275b815785e63b9ceb6e5611ce38514c7 | 367 | py | Python | src/sol/handle_notimestamp.py | bryanlabs/staketaxcsv | 7879b75d242d796ee6959e7d9e0b4d9dcb2966dd | [
"MIT"
] | null | null | null | src/sol/handle_notimestamp.py | bryanlabs/staketaxcsv | 7879b75d242d796ee6959e7d9e0b4d9dcb2966dd | [
"MIT"
] | null | null | null | src/sol/handle_notimestamp.py | bryanlabs/staketaxcsv | 7879b75d242d796ee6959e7d9e0b4d9dcb2966dd | [
"MIT"
] | null | null | null |
from common.make_tx import make_simple_tx
from common.ExporterTypes import TX_TYPE_MISSING_TIMESTAMP
def is_notimestamp_tx(txinfo):
if txinfo.timestamp is None or txinfo.timestamp == "":
return True
return False
def handle_notimestamp_tx(exporter, txinfo):
row = make_simple_tx(txinfo, TX_TYPE_MISSING_TIMESTAMP)
exporter.ingest_row(row)
| 24.466667 | 59 | 0.776567 | 52 | 367 | 5.173077 | 0.461538 | 0.074349 | 0.089219 | 0.163569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.160763 | 367 | 14 | 60 | 26.214286 | 0.873377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
6aa801d0db5df73d07f9219481b32466df67de34 | 228 | py | Python | mathics/builtin/box/__init__.py | skirpichev/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | [
"Apache-2.0"
] | 1,920 | 2015-01-06T17:56:26.000Z | 2022-03-24T14:33:29.000Z | mathics/builtin/box/__init__.py | skirpichev/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | [
"Apache-2.0"
] | 868 | 2015-01-04T06:19:40.000Z | 2022-03-14T13:39:38.000Z | mathics/builtin/box/__init__.py | skirpichev/Mathics | 318e06dea8f1c70758a50cb2f95c9900150e3a68 | [
"Apache-2.0"
] | 240 | 2015-01-16T13:31:26.000Z | 2022-03-12T12:52:46.000Z | """
Boxing modules.
Boxes are added in formatting Mathics S-Expressions.
Boxing information like width and size makes it easier for formatters to do
layout without having to know the intricacies of what is inside the box.
"""
| 25.333333 | 75 | 0.785088 | 36 | 228 | 4.972222 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.171053 | 228 | 8 | 76 | 28.5 | 0.94709 | 0.960526 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ab031b440eb398e3ee38726b63ff8c8dba89a27 | 948 | py | Python | dask/bytes/hdfs3.py | abhinavralhan/dask | e840ba38eadfa93c3b9959347f0a43c1279a94ab | [
"BSD-3-Clause"
] | 2 | 2021-06-18T17:00:45.000Z | 2022-03-08T00:59:40.000Z | dask/bytes/hdfs3.py | abhinavralhan/dask | e840ba38eadfa93c3b9959347f0a43c1279a94ab | [
"BSD-3-Clause"
] | 2 | 2019-03-19T22:19:04.000Z | 2019-03-26T19:04:00.000Z | dask/bytes/hdfs3.py | abhinavralhan/dask | e840ba38eadfa93c3b9959347f0a43c1279a94ab | [
"BSD-3-Clause"
] | 1 | 2021-08-01T14:29:04.000Z | 2021-08-01T14:29:04.000Z | from __future__ import print_function, division, absolute_import
import posixpath
from .glob import generic_glob
from ..base import tokenize
import hdfs3
class HDFS3HadoopFileSystem(object):
sep = "/"
def __init__(self, **kwargs):
self.fs = hdfs3.HDFileSystem(**kwargs)
@classmethod
def from_hdfs3(cls, fs):
out = object.__new__(cls)
out.fs = fs
return out
def open(self, path, mode='rb', **kwargs):
return self.fs.open(path, mode=mode, **kwargs)
def glob(self, path):
return sorted(generic_glob(self.fs, posixpath, path))
def mkdirs(self, path):
return self.fs.makedirs(path)
def ukey(self, path):
return tokenize(path, self.fs.info(path)['last_mod'])
def size(self, path):
return self.fs.info(path)['size']
def _get_pyarrow_filesystem(self):
from .pyarrow import HDFS3Wrapper
return HDFS3Wrapper(self.fs)
| 23.121951 | 64 | 0.650844 | 121 | 948 | 4.92562 | 0.363636 | 0.07047 | 0.09396 | 0.060403 | 0.067114 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008264 | 0.234177 | 948 | 40 | 65 | 23.7 | 0.812672 | 0 | 0 | 0 | 0 | 0 | 0.015823 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.296296 | false | 0 | 0.222222 | 0.185185 | 0.851852 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
6ab8e42d0062be747c94ddaf5087387597c1d1e9 | 392 | py | Python | ramp-utils/ramp_utils/tests/test_string_encoding.py | DjalelBBZ/ramp-board | 7f78c48ee53977dbcdf6859558a9fa47895633cb | [
"BSD-3-Clause"
] | null | null | null | ramp-utils/ramp_utils/tests/test_string_encoding.py | DjalelBBZ/ramp-board | 7f78c48ee53977dbcdf6859558a9fa47895633cb | [
"BSD-3-Clause"
] | null | null | null | ramp-utils/ramp_utils/tests/test_string_encoding.py | DjalelBBZ/ramp-board | 7f78c48ee53977dbcdf6859558a9fa47895633cb | [
"BSD-3-Clause"
] | null | null | null | import sys
from ramp_utils import encode_string
PYTHON3 = sys.version_info[0] == 3
def test_encode_string():
if PYTHON3:
string = encode_string('a string')
assert isinstance(string, bytes)
string = encode_string(b'a string')
assert isinstance(string, bytes)
else:
string = encode_string('a string')
assert isinstance(string, bytes)
| 23.058824 | 43 | 0.663265 | 49 | 392 | 5.142857 | 0.428571 | 0.238095 | 0.214286 | 0.27381 | 0.547619 | 0.547619 | 0.412698 | 0.412698 | 0.412698 | 0 | 0 | 0.013559 | 0.247449 | 392 | 16 | 44 | 24.5 | 0.840678 | 0 | 0 | 0.416667 | 0 | 0 | 0.061224 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6aca4f07ca6283a7254a3874e56fc0b5bf9a8cb3 | 305 | py | Python | gazelle/testdata/first_party_file_and_directory_modules/__main__.py | f0rmiga/rules_python | 2b1d6beb4d5d8f59d629597e30e9aa519182d9a9 | [
"Apache-2.0"
] | null | null | null | gazelle/testdata/first_party_file_and_directory_modules/__main__.py | f0rmiga/rules_python | 2b1d6beb4d5d8f59d629597e30e9aa519182d9a9 | [
"Apache-2.0"
] | null | null | null | gazelle/testdata/first_party_file_and_directory_modules/__main__.py | f0rmiga/rules_python | 2b1d6beb4d5d8f59d629597e30e9aa519182d9a9 | [
"Apache-2.0"
] | null | null | null | import foo
from baz import baz as another_baz
from foo.bar import baz
from one.two import two
from package1.subpackage1.module1 import find_me
assert not hasattr(foo, 'foo')
assert baz() == 'baz from foo/bar.py'
assert another_baz() == 'baz from baz.py'
assert two() == 'two'
assert find_me() == 'found'
| 25.416667 | 48 | 0.734426 | 52 | 305 | 4.230769 | 0.365385 | 0.127273 | 0.090909 | 0.118182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011538 | 0.147541 | 305 | 11 | 49 | 27.727273 | 0.834615 | 0 | 0 | 0 | 0 | 0 | 0.147541 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6ad109dce7c707f5b250814e6445893250639d8a | 4,470 | py | Python | pyke/krb_compiler/kfbparser_tables.py | rch/pyke-1.1.1 | e399b06f0c655eb6baafebaed09b4eb8f9c44b82 | [
"MIT"
] | 76 | 2015-04-20T12:10:25.000Z | 2021-11-27T20:26:27.000Z | pyke/krb_compiler/kfbparser_tables.py | w-simon/pyke | cfe95d8aaa06de123264f9b7f5bea20eb5924ecd | [
"MIT"
] | 2 | 2016-03-09T14:33:27.000Z | 2018-10-22T11:25:49.000Z | pyke/krb_compiler/kfbparser_tables.py | w-simon/pyke | cfe95d8aaa06de123264f9b7f5bea20eb5924ecd | [
"MIT"
] | 42 | 2015-03-16T13:11:30.000Z | 2022-02-12T14:45:48.000Z |
# /home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser_tables.py
# This file is automatically generated. Do not edit.
_tabversion = '3.2'
_lr_method = 'LALR'
_lr_signature = '4\xa4a\x00\xea\xcdZp5\xc6@\xa5\xfa\x1dCA'
_lr_action_items = {'NONE_TOK':([8,12,24,26,],[11,11,11,11,]),'LP_TOK':([5,8,12,24,26,],[8,12,12,12,12,]),'STRING_TOK':([8,12,24,26,],[13,13,13,13,]),'RP_TOK':([8,11,12,13,14,16,17,18,19,20,22,23,26,27,28,29,],[15,-8,22,-14,-13,25,-17,-15,-16,-19,-18,-9,-10,29,-20,-21,]),',':([11,13,14,16,17,18,19,20,22,23,28,29,],[-8,-14,-13,24,-17,-15,-16,-19,-18,26,-20,-21,]),'NUMBER_TOK':([8,12,24,26,],[14,14,14,14,]),'NL_TOK':([0,6,7,15,21,25,],[3,10,-4,-6,-5,-7,]),'TRUE_TOK':([8,12,24,26,],[17,17,17,17,]),'IDENTIFIER_TOK':([0,1,3,8,10,12,24,26,],[-11,5,-12,18,5,18,18,18,]),'FALSE_TOK':([8,12,24,26,],[19,19,19,19,]),'$end':([0,1,2,3,4,6,7,9,10,15,21,25,],[-11,-2,0,-12,-1,-11,-4,-3,-12,-6,-5,-7,]),}
_lr_action = { }
for _k, _v in _lr_action_items.items():
for _x,_y in zip(_v[0],_v[1]):
if not _x in _lr_action: _lr_action[_x] = { }
_lr_action[_x][_k] = _y
del _lr_action_items
_lr_goto_items = {'facts_opt':([1,],[4,]),'nl_opt':([0,6,],[1,9,]),'comma_opt':([23,],[27,]),'data_list':([8,12,],[16,23,]),'file':([0,],[2,]),'facts':([1,],[6,]),'data':([8,12,24,26,],[20,20,28,28,]),'fact':([1,10,],[7,21,]),}
_lr_goto = { }
for _k, _v in _lr_goto_items.items():
for _x,_y in zip(_v[0],_v[1]):
if not _x in _lr_goto: _lr_goto[_x] = { }
_lr_goto[_x][_k] = _y
del _lr_goto_items
_lr_productions = [
("S' -> file","S'",1,None,None,None),
('file -> nl_opt facts_opt','file',2,'p_file','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',36),
('facts_opt -> <empty>','facts_opt',0,'p_file','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',37),
('facts_opt -> facts nl_opt','facts_opt',2,'p_file','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',38),
('facts -> fact','facts',1,'p_file','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',39),
('facts -> facts NL_TOK fact','facts',3,'p_file','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',40),
('fact -> IDENTIFIER_TOK LP_TOK RP_TOK','fact',3,'p_fact0','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',45),
('fact -> IDENTIFIER_TOK LP_TOK data_list RP_TOK','fact',4,'p_fact1','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',49),
('data -> NONE_TOK','data',1,'p_none','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',53),
('comma_opt -> <empty>','comma_opt',0,'p_none','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',54),
('comma_opt -> ,','comma_opt',1,'p_none','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',55),
('nl_opt -> <empty>','nl_opt',0,'p_none','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',56),
('nl_opt -> NL_TOK','nl_opt',1,'p_none','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',57),
('data -> NUMBER_TOK','data',1,'p_number','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',62),
('data -> STRING_TOK','data',1,'p_string','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',67),
('data -> IDENTIFIER_TOK','data',1,'p_quoted_last','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',72),
('data -> FALSE_TOK','data',1,'p_false','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',77),
('data -> TRUE_TOK','data',1,'p_true','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',82),
('data -> LP_TOK RP_TOK','data',2,'p_empty_tuple','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',87),
('data_list -> data','data_list',1,'p_start_list','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',92),
('data_list -> data_list , data','data_list',3,'p_append_list','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',97),
('data -> LP_TOK data_list comma_opt RP_TOK','data',4,'p_tuple','/home/bruce/python/workareas/pyke-hg/r1_working/pyke/krb_compiler/kfbparser.py',103),
]
| 87.647059 | 695 | 0.682103 | 823 | 4,470 | 3.477521 | 0.151883 | 0.069182 | 0.115304 | 0.184486 | 0.631377 | 0.565688 | 0.565688 | 0.565688 | 0.565688 | 0.55311 | 0 | 0.09936 | 0.0566 | 4,470 | 50 | 696 | 89.4 | 0.579322 | 0.030425 | 0 | 0.04878 | 1 | 0.512195 | 0.596305 | 0.387529 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ad34d48b1004a3ad8e9c13efb4a8f33bbfd394b | 2,880 | py | Python | aliyun-python-sdk-lubancloud/aliyunsdklubancloud/request/v20180509/SubmitGenerateTaskRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 1,001 | 2015-07-24T01:32:41.000Z | 2022-03-25T01:28:18.000Z | aliyun-python-sdk-lubancloud/aliyunsdklubancloud/request/v20180509/SubmitGenerateTaskRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 363 | 2015-10-20T03:15:00.000Z | 2022-03-08T12:26:19.000Z | aliyun-python-sdk-lubancloud/aliyunsdklubancloud/request/v20180509/SubmitGenerateTaskRequest.py | yndu13/aliyun-openapi-python-sdk | 12ace4fb39fe2fb0e3927a4b1b43ee4872da43f5 | [
"Apache-2.0"
] | 682 | 2015-09-22T07:19:02.000Z | 2022-03-22T09:51:46.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
class SubmitGenerateTaskRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'lubancloud', '2018-05-09', 'SubmitGenerateTask','luban')
self.set_method('POST')
def get_ImageCount(self):
return self.get_query_params().get('ImageCount')
def set_ImageCount(self,ImageCount):
self.add_query_param('ImageCount',ImageCount)
def get_ActionPoint(self):
return self.get_query_params().get('ActionPoint')
def set_ActionPoint(self,ActionPoint):
self.add_query_param('ActionPoint',ActionPoint)
def get_LogoImagePath(self):
return self.get_query_params().get('LogoImagePath')
def set_LogoImagePath(self,LogoImagePath):
self.add_query_param('LogoImagePath',LogoImagePath)
def get_Type(self):
return self.get_query_params().get('Type')
def set_Type(self,Type):
self.add_query_param('Type',Type)
def get_MajorImagePaths(self):
return self.get_query_params().get('MajorImagePaths')
def set_MajorImagePaths(self, MajorImagePaths):
for depth1 in range(len(MajorImagePaths)):
if MajorImagePaths[depth1] is not None:
self.add_query_param('MajorImagePath.' + str(depth1 + 1) , MajorImagePaths[depth1])
def get_Width(self):
return self.get_query_params().get('Width')
def set_Width(self,Width):
self.add_query_param('Width',Width)
def get_CopyWrites(self):
return self.get_query_params().get('CopyWrites')
def set_CopyWrites(self, CopyWrites):
for depth1 in range(len(CopyWrites)):
if CopyWrites[depth1] is not None:
self.add_query_param('CopyWrite.' + str(depth1 + 1) , CopyWrites[depth1])
def get_PropertyIds(self):
return self.get_query_params().get('PropertyIds')
def set_PropertyIds(self, PropertyIds):
for depth1 in range(len(PropertyIds)):
if PropertyIds[depth1] is not None:
self.add_query_param('PropertyId.' + str(depth1 + 1) , PropertyIds[depth1])
def get_Height(self):
return self.get_query_params().get('Height')
def set_Height(self,Height):
self.add_query_param('Height',Height) | 33.488372 | 88 | 0.745833 | 390 | 2,880 | 5.346154 | 0.297436 | 0.025899 | 0.060432 | 0.073381 | 0.207194 | 0.179856 | 0.179856 | 0.046043 | 0 | 0 | 0 | 0.010989 | 0.146875 | 2,880 | 86 | 89 | 33.488372 | 0.837607 | 0.261806 | 0 | 0 | 0 | 0 | 0.106009 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.404255 | false | 0 | 0.021277 | 0.191489 | 0.638298 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
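The list-valued setters above (`set_MajorImagePaths`, `set_CopyWrites`, `set_PropertyIds`) all flatten a Python list into Aliyun's 1-indexed repeated query parameters (`MajorImagePath.1`, `MajorImagePath.2`, ...). A standalone sketch of that encoding — the helper name is ours, not part of the SDK:

```python
def flatten_repeat_param(prefix, values):
    """Encode a list as 1-indexed repeated query parameters, skipping None
    entries while preserving their positions, as the setters above do."""
    params = {}
    for index, value in enumerate(values, start=1):
        if value is not None:
            params['%s.%d' % (prefix, index)] = value
    return params

print(flatten_repeat_param('MajorImagePath', ['a.png', None, 'b.png']))
# {'MajorImagePath.1': 'a.png', 'MajorImagePath.3': 'b.png'}
```

Note that a `None` entry still consumes an index, so the server sees a gap (`.1`, `.3`) rather than renumbered keys.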
6ad694215a155f90334d058ef0166e4b91ae7766 | 83 | py | Python | DataAnalysis/extract_stock_info_realtime.py | yuxiang-zhou/MarketAnalysor | 4d19d2589d07409cd699f394921d1a95f3097e94 | [
"MIT"
] | null | null | null | DataAnalysis/extract_stock_info_realtime.py | yuxiang-zhou/MarketAnalysor | 4d19d2589d07409cd699f394921d1a95f3097e94 | [
"MIT"
] | null | null | null | DataAnalysis/extract_stock_info_realtime.py | yuxiang-zhou/MarketAnalysor | 4d19d2589d07409cd699f394921d1a95f3097e94 | [
"MIT"
] | null | null | null | http://www.google.co.uk/finance/historical?q=ASL&ei=YuHVVYG4F4XHU-GTsrAF&output=csv | 83 | 83 | 0.819277 | 14 | 83 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024096 | 0 | 83 | 1 | 83 | 83 | 0.795181 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6ae7ca67f0368d408cf74a04344ec7c37886344b | 66 | py | Python | FictionTools/amitools/test/suite/util_muldiv.py | polluks/Puddle-BuildTools | c1762d53a33002b62d8cffe3db129505a387bec3 | [
"BSD-2-Clause"
] | 38 | 2021-06-18T12:56:15.000Z | 2022-03-12T20:38:40.000Z | FictionTools/amitools/test/suite/util_muldiv.py | polluks/Puddle-BuildTools | c1762d53a33002b62d8cffe3db129505a387bec3 | [
"BSD-2-Clause"
] | 2 | 2021-06-20T16:28:12.000Z | 2021-11-17T21:33:56.000Z | FictionTools/amitools/test/suite/util_muldiv.py | polluks/Puddle-BuildTools | c1762d53a33002b62d8cffe3db129505a387bec3 | [
"BSD-2-Clause"
] | 6 | 2021-06-18T18:18:36.000Z | 2021-12-22T08:01:32.000Z | def run_test(vamos):
vamos.run_prog_check_data("util_muldiv")
| 22 | 44 | 0.772727 | 11 | 66 | 4.181818 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106061 | 66 | 2 | 45 | 33 | 0.779661 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a7598af7dbe2fb6d98cb0f51c4edc32a4bee867 | 201 | py | Python | setup.py | traineryou/fast | 7962f6080552ef142d8bc2d29381983ce06ac0bc | [
"MIT"
] | null | null | null | setup.py | traineryou/fast | 7962f6080552ef142d8bc2d29381983ce06ac0bc | [
"MIT"
] | null | null | null | setup.py | traineryou/fast | 7962f6080552ef142d8bc2d29381983ce06ac0bc | [
"MIT"
] | null | null | null | from setuptools import setup
setup(
    name="remocolab",
    version="0.1",
    py_modules=["remocolab"],
    url="https://github.com/traineryou/bitturk.git",
    author="traineryou",
)
| 20.1 | 54 | 0.626866 | 23 | 201 | 5.434783 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012579 | 0.208955 | 201 | 9 | 55 | 22.333333 | 0.773585 | 0 | 0 | 0 | 0 | 0 | 0.373134 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 0.125 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0a7d0ea897ae567df352ddbb5937c15d39c49d10 | 200 | py | Python | qnd/test_test.py | raviqqe/qnd.tf | fab2bd93b1b2b2a15fd6eac8d30ede382522c368 | [
"Unlicense"
] | 69 | 2016-12-23T16:23:23.000Z | 2019-06-08T16:38:06.000Z | qnd/test_test.py | raviqqe/qnd.tf | fab2bd93b1b2b2a15fd6eac8d30ede382522c368 | [
"Unlicense"
] | 7 | 2016-12-26T03:00:21.000Z | 2017-05-20T10:25:46.000Z | qnd/test_test.py | raviqqe/qnd.tf | fab2bd93b1b2b2a15fd6eac8d30ede382522c368 | [
"Unlicense"
] | 7 | 2016-12-25T12:56:14.000Z | 2019-07-16T00:29:50.000Z | import tensorflow as tf
from .test import *
def test_oracle_model():
oracle_model(tf.zeros([100]), tf.zeros([100]))
def test_user_input_fn():
user_input_fn(tf.FIFOQueue(64, [tf.string]))
| 16.666667 | 50 | 0.705 | 32 | 200 | 4.15625 | 0.53125 | 0.105263 | 0.150376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046784 | 0.145 | 200 | 11 | 51 | 18.181818 | 0.730994 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0a7f722a330d2c5e7426d62963aff820a102dcbe | 151 | py | Python | _solutions/intermediate/database/sqlite3_fetch_a.py | sages-pl/2022-01-pythonsqlalchemy-aptiv | 1d6d856608e9dbe25b139e8968c48b7f46753b84 | [
"MIT"
] | null | null | null | _solutions/intermediate/database/sqlite3_fetch_a.py | sages-pl/2022-01-pythonsqlalchemy-aptiv | 1d6d856608e9dbe25b139e8968c48b7f46753b84 | [
"MIT"
] | null | null | null | _solutions/intermediate/database/sqlite3_fetch_a.py | sages-pl/2022-01-pythonsqlalchemy-aptiv | 1d6d856608e9dbe25b139e8968c48b7f46753b84 | [
"MIT"
] | null | null | null |
import sqlite3

with sqlite3.connect(DATABASE) as db:
    db.execute(SQL_CREATE_TABLE)
    db.executemany(SQL_INSERT, DATA)
    result = list(db.execute(SQL_SELECT))
| 25.166667 | 41 | 0.735099 | 22 | 151 | 4.863636 | 0.727273 | 0.168224 | 0.224299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007813 | 0.152318 | 151 | 5 | 42 | 30.2 | 0.828125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
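For context, a runnable version of the fragment above with hypothetical values for the `DATABASE`, `SQL_*`, and `DATA` constants (the exercise defines the real ones elsewhere):

```python
import sqlite3

# Hypothetical stand-ins for the constants the exercise defines elsewhere.
DATABASE = ':memory:'
SQL_CREATE_TABLE = 'CREATE TABLE IF NOT EXISTS astronauts (firstname TEXT, lastname TEXT)'
SQL_INSERT = 'INSERT INTO astronauts VALUES (?, ?)'
SQL_SELECT = 'SELECT * FROM astronauts'
DATA = [('Mark', 'Watney'), ('Melissa', 'Lewis')]

with sqlite3.connect(DATABASE) as db:
    db.execute(SQL_CREATE_TABLE)        # DDL runs immediately
    db.executemany(SQL_INSERT, DATA)    # one parameterized INSERT per row
    result = list(db.execute(SQL_SELECT))

print(result)  # [('Mark', 'Watney'), ('Melissa', 'Lewis')]
```

The `with` block commits the open transaction on success (it does not close the connection), which is why the inserted rows are visible to the final `SELECT`.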
0a935d42f52f51d28b2f2c11f5ae8daa0b0068cd | 1,326 | py | Python | pcapkit/protocols/__init__.py | binref/PyPCAPKit | 7c5ba2cfa95bdc80a95b53b6669340a8783d2ad9 | [
"BSD-3-Clause"
] | null | null | null | pcapkit/protocols/__init__.py | binref/PyPCAPKit | 7c5ba2cfa95bdc80a95b53b6669340a8783d2ad9 | [
"BSD-3-Clause"
] | null | null | null | pcapkit/protocols/__init__.py | binref/PyPCAPKit | 7c5ba2cfa95bdc80a95b53b6669340a8783d2ad9 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# pylint: disable=unused-import,unused-wildcard-import,fixme
"""protocol family
:mod:`pcapkit.protocols` is collection of all protocol families,
with detailed implementation and methods.
"""
# TODO: Implement specified classes for MAC and IP addresses.
# Base Class for Protocols
from pcapkit.protocols.protocol import Protocol
# Utility Classes for Protocols
from pcapkit.protocols.misc import *
# Protocols & Macros
from pcapkit.protocols.link import *
from pcapkit.protocols.internet import *
from pcapkit.protocols.transport import *
from pcapkit.protocols.application import *
# Deprecated / Base Protocols
from pcapkit.protocols.internet.ip import IP
from pcapkit.protocols.internet.ipsec import IPsec
from pcapkit.protocols.application.http import HTTP
__all__ = [
# Protocol Numbers
'LINKTYPE', 'ETHERTYPE', 'TP_PROTO',
# PCAP Headers
'Header', 'Frame',
# No Payload
'NoPayload',
# Raw Packet
'Raw',
# Link Layer
'ARP', 'DRARP', 'Ethernet', 'InARP', 'L2TP',
'OSPF', 'RARP', 'VLAN',
# Internet Layer
'AH', 'IP', 'IPsec', 'IPv4', 'IPv6', 'IPX',
# IPv6 Extension Header
'HIP', 'HOPOPT', 'IPv6_Frag', 'IPv6_Opts',
'IPv6_Route', 'MH',
# Transport Layer
'TCP', 'UDP',
# Application Layer
'FTP', 'HTTP',
]
| 22.862069 | 64 | 0.686275 | 156 | 1,326 | 5.782051 | 0.525641 | 0.177384 | 0.199557 | 0.096452 | 0.070953 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007407 | 0.18552 | 1,326 | 57 | 65 | 23.263158 | 0.827778 | 0.377828 | 0 | 0 | 0 | 0 | 0.195761 | 0 | 0 | 0 | 0 | 0.035088 | 0 | 1 | 0 | false | 0 | 0.409091 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0a98817e2727d1ac93b22ba39dfd829cb5e77f28 | 6,404 | py | Python | Vocola/exec/vcl2py.py | mrob95/natlink | dc806ab62da89b7bb3d3683387af00c696601e16 | [
"MIT"
] | null | null | null | Vocola/exec/vcl2py.py | mrob95/natlink | dc806ab62da89b7bb3d3683387af00c696601e16 | [
"MIT"
] | null | null | null | Vocola/exec/vcl2py.py | mrob95/natlink | dc806ab62da89b7bb3d3683387af00c696601e16 | [
"MIT"
] | null | null | null | # vcl2py: Convert Vocola voice command files to NatLink Python "grammar"
# classes implementing those voice commands
#
# Usage: python vcl2py.py [<option>...] <inputFileOrFolder> <outputFolder>
# Where <option> can be:
# -debug <n> -- specify debugging level
# (0 = no info, 1 = show statements,
# 2 = detailed info)
# -extensions <filename> -- specify filename containing extension interface
# information
# -f -- force processing even if file(s) not out of date
# -INI_file <filename> -- specify filename of INI file to use
# -log_file <filename> -- specify filename to log to
# -log_stdout -- log to standard out instead of a file
# -max_commands <n> -- specify maximum number of commands per utterance
# -numbers <s0>,<s1>,<s2>,...
# -- use spoken form <s0> instead of "0" in ranges,
# <s1> instead of "1" in ranges, etc.
# -q -- ignore any INI file
# -suffix <s> -- use suffix <s> to distinguish Vocola generated
# files (default is "_vcl")
#
#
# Copyright (c) 2000-2003, 2005, 2007, 2009-2012 by Rick Mohr.
#
# Portions Copyright (c) 2012-15 by Hewlett-Packard Development Company, L.P.
#
# Portions Copyright (c) 2015-16 by Mark Lillibridge.
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
#
# 11/34/2015 ml Split up into modules
# 12/26/2013 ml Added new built-ins, If and When
# 4/23/2013 ml Any series of one or more terms at least one of which
# is not optional or <_anything> can now be optional.
# 5/01/2012 ml Ported to Python line by line, parser replaced with
# lexer/traditional parser
# 5/14/2011 ml Selected numbers in ranges can now be spelled out
# 11/28/2010 ml Extensions can now be called
# 05/28/2010 ml Print_* functions -> unparse_* to avoid compiler bug
# 05/08/2010 ml Underscores now converted to spaces by VocolaUtils
# 03/31/2010 ml Runtime errors now caught and passed to handle_error along
# with filename and line number of error location
# 01/27/2010 ml Actions now implemented via direct translation to
# Python, with no delay of Dragon calls, etc.
# 01/01/2010 ml User functions are now implemented via unrolling
# 12/30/2009 ml Eval is now implemented via transformation to EvalTemplate
# 12/28/2009 ml New EvalTemplate built-in function
# 09/06/2009 ml New $set directive replaces old non-working sequence directive
# binary Use Command Sequences replaced by n-ary MaximumCommands
# 01/19/2009 ml Unimacro built-in added
# 12/06/2007 ml Arguments to Dragon functions are now checked for proper
# number and datatype
# 06/02/2007 ml Output filenames are now mangled in an invertable fashion
# 05/17/2007 ml Eval now works correctly on any action instead of just word
# and reference actions.
# 05/15/2007 ml Variable substitution regularized
# Empty context statements now work
# 04/18/2007 ml (Function) Names may now start with underscores
# 04/08/2007 ml Quotation marks can be escaped by doubling
# 01/03/2005 rm Commands can incorporate arbitrary dictation
# Enable/disable command sequences via ini file
# 04/12/2003 rm Case insensitive window title comparisons
# Output e.g. "emacs_vcl.py" (don't clobber existing NatLink
# files)
# 11/24/2002 rm Option to process a single file, or only changed files
# 10/12/2002 rm Use <any>+ instead of exporting individual NatLink commands
# 10/05/2002 rm Generalized indenting, emit()
# 09/29/2002 rm Built-in function: Repeat()
# 09/15/2002 rm User-defined functions
# 08/17/2002 rm Use recursive grammar for command sequences
# 07/14/2002 rm Context statements can contain '|'
# Support environment variable references in include statements
# 07/06/2002 rm Function arguments allow multiple actions
# Built-in function: Eval()!
# 07/05/2002 rm New code generation using VocolaUtils.py
# 07/04/2002 rm Improve generated code: use "elif" in menus
# 06/02/2002 rm Command sequences!
# 05/19/2002 rm Support "include" statement
# 05/03/2002 rm Version 1.1
# 05/03/2002 rm Handle application names containing '_'
# 05/03/2002 rm Convert '\' to '\\' early to avoid quotewords bug
# 02/18/2002 rm Version 0.9
# 12/08/2001 rm convert e.g. "{Tab_2}" to "{Tab 2}"
# expand in-string references (e.g. "{Up $1}")
# 03/31/2001 rm Detect and report unbalanced quotes
# 03/06/2001 rm Improve error checking for complex menus
# 02/24/2001 rm Change name to Vocola
# 02/18/2001 rm Handle terms containing an apostrophe
# 02/06/2001 rm Machine-specific command files
# 02/04/2001 rm Error on undefined variable or reference out of range
# 08/22/2000 rm First usable version
# Style notes:
# Global variables are capitalized (e.g. Definitions)
# Local variables are lowercase (e.g. in_folder)
from vcl2py.main import main_routine
# ---------------------------------------------------------------------------
# Okay, let's run!
main_routine()
#import profile
#profile.run('main_routine()')
| 50.825397 | 80 | 0.671924 | 926 | 6,404 | 4.62959 | 0.431965 | 0.022393 | 0.016095 | 0.006998 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088381 | 0.242036 | 6,404 | 125 | 81 | 51.232 | 0.794808 | 0.953935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0a98e92c15967b552eed766737d8580588d4e424 | 363 | py | Python | wouso/games/specialquest/forms.py | AlexandruGhergut/wouso | f26244ff58ae626808ae8c58ccc93d21f9f2666f | [
"Apache-2.0"
] | 117 | 2015-01-02T18:07:33.000Z | 2021-01-06T22:36:25.000Z | wouso/games/specialquest/forms.py | AlexandruGhergut/wouso | f26244ff58ae626808ae8c58ccc93d21f9f2666f | [
"Apache-2.0"
] | 229 | 2015-01-12T07:07:58.000Z | 2019-10-12T08:27:01.000Z | wouso/games/specialquest/forms.py | AlexandruGhergut/wouso | f26244ff58ae626808ae8c58ccc93d21f9f2666f | [
"Apache-2.0"
] | 96 | 2015-01-07T05:26:09.000Z | 2020-06-25T07:28:51.000Z | from django.forms import ModelForm, TextInput
from django.forms.fields import DateField
from models import SpecialQuestTask
class TaskForm(ModelForm):
class Meta:
model = SpecialQuestTask
widgets = {'start_date': TextInput(attrs={'placeholder': 'yyyy-mm-dd'}),
'end_date': TextInput(attrs={'placeholder': 'yyyy-mm-dd'})}
| 30.25 | 80 | 0.688705 | 40 | 363 | 6.2 | 0.575 | 0.080645 | 0.120968 | 0.233871 | 0.298387 | 0.298387 | 0.298387 | 0 | 0 | 0 | 0 | 0 | 0.192837 | 363 | 11 | 81 | 33 | 0.846416 | 0 | 0 | 0 | 0 | 0 | 0.166205 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
0aa396f26253085bea846f6bcdd8099c9978ce4a | 1,100 | py | Python | apis/v1/admin/administrator/interface_login.py | billijoe/wechat_spider | 5f4f82e9624b5ce9bd40e7b10bee82fd8467d963 | [
"Apache-2.0"
] | null | null | null | apis/v1/admin/administrator/interface_login.py | billijoe/wechat_spider | 5f4f82e9624b5ce9bd40e7b10bee82fd8467d963 | [
"Apache-2.0"
] | null | null | null | apis/v1/admin/administrator/interface_login.py | billijoe/wechat_spider | 5f4f82e9624b5ce9bd40e7b10bee82fd8467d963 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@project : WechatTogether
@Time : 2020/9/7 15:18
@Auth : AJay13
@File :interface_login.py
@IDE :PyCharm
@Motto:ABC(Always Be Coding)
"""
__all__ = ['InterfaceLogin']
from flask import views, current_app
import models
from apis.common import response_code
from apis.common.api_version import api_version
from apis.common.auth import generate_auth_token
from apis.v1.admin.administrator.verify_administrator import LoginForm
class InterfaceLogin(views.MethodView):
    '''
    Administrator login
    '''
@api_version
def get(self, version):
return '服务开启'
@api_version
def post(self, version):
        form = LoginForm().validate_for_api()  # validate the submitted form
        identity = models.Admin.verify(form.username.data, form.password.data)  # check credentials against the database
        expiration = current_app.config['TOKEN_EXPIRATION']  # token time-to-live
        access_token = generate_auth_token(identity['uid'],
                                           expiration).decode('ascii')  # issue the access token
return response_code.LayuiSuccess(data={'access_token': access_token}, message='Login success')
| 28.947368 | 103 | 0.687273 | 132 | 1,100 | 5.545455 | 0.568182 | 0.043716 | 0.057377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015945 | 0.201818 | 1,100 | 37 | 104 | 29.72973 | 0.817768 | 0.175455 | 0 | 0.105263 | 0 | 0 | 0.076223 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0.052632 | 0.315789 | 0.052632 | 0.578947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
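The actual `generate_auth_token` lives in `apis.common.auth` and is not shown here. A minimal stdlib sketch of the same idea — a signed, expiring token carrying the uid — with a hypothetical secret key and helper names of our own:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = b'change-me'  # hypothetical; a real app reads this from config

def generate_auth_token(uid, expiration):
    """Sign a JSON payload carrying the uid and an absolute expiry timestamp."""
    payload = json.dumps({'uid': uid, 'exp': time.time() + expiration}).encode('ascii')
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode('ascii')
    return base64.urlsafe_b64encode(payload) + b'.' + signature

def verify_auth_token(token):
    """Return the uid for a valid, unexpired token, else None."""
    payload_b64, _, signature = token.partition(b'.')
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode('ascii')
    if not hmac.compare_digest(signature, expected):
        return None  # tampered, or signed with a different key
    claims = json.loads(payload)
    if claims['exp'] < time.time():
        return None  # expired
    return claims['uid']

token = generate_auth_token('uid-1', expiration=3600)
print(verify_auth_token(token))  # uid-1
```

`hmac.compare_digest` is used instead of `==` so the signature check runs in constant time.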
0ab01c604547a9b288e6ab2df369ba3984e5411f | 120 | py | Python | run.py | abirafdirp/reddit-notifier | 504317277a9302b153cca4827f902cc2c7e1d2c0 | [
"Unlicense"
] | null | null | null | run.py | abirafdirp/reddit-notifier | 504317277a9302b153cca4827f902cc2c7e1d2c0 | [
"Unlicense"
] | null | null | null | run.py | abirafdirp/reddit-notifier | 504317277a9302b153cca4827f902cc2c7e1d2c0 | [
"Unlicense"
] | null | null | null | from bot import Bot
from emailhandler import EmailHandler
Bot.validate()
EmailHandler.register()
bot = Bot()
bot.run()
| 15 | 37 | 0.775 | 16 | 120 | 5.8125 | 0.4375 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 120 | 7 | 38 | 17.142857 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
0adda991f05dd0a1900ffb88b0264a761f484534 | 2,139 | py | Python | build/lib/radproc/api.py | esride-jts/radproc | e52ff07878a354a55882cf1bf37d10afefe50ced | [
"MIT"
] | 8 | 2019-09-04T19:31:14.000Z | 2021-09-14T08:11:58.000Z | build/lib/radproc/api.py | esride-jts/radproc | e52ff07878a354a55882cf1bf37d10afefe50ced | [
"MIT"
] | 3 | 2018-08-17T09:28:46.000Z | 2018-11-08T08:34:47.000Z | build/lib/radproc/api.py | esride-jts/radproc | e52ff07878a354a55882cf1bf37d10afefe50ced | [
"MIT"
] | 4 | 2018-10-14T03:16:04.000Z | 2021-09-14T08:13:26.000Z | # -*- coding: utf-8 -*-
# Radproc - A GIS-compatible Python-Package for automated RADOLAN Composite Processing and Analysis.
# Copyright (c) 2018, Jennifer Kreklow.
# DOI: https://doi.org/10.5281/zenodo.1313701
#
# Distributed under the MIT License (see LICENSE.txt for more information), complemented with the following provision:
# For the scientific transparency and verification of results obtained and communicated to the public after
# using a modified version of the work, You (as the recipient of the source code and author of this modified version,
# used to produce the published results in scientific communications) commit to make this modified source code available
# in a repository that is easily and freely accessible for a duration of five years after the communication of the obtained results.
"""
=============
radproc API
=============
"""
from __future__ import print_function
from radproc.core import coordinates_degree_to_stereographic, save_idarray_to_txt, import_idarray_from_txt
from radproc.core import load_months_from_hdf5, load_month, load_years_and_resample, hdf5_to_years, hdf5_to_months, hdf5_to_days, hdf5_to_hours, hdf5_to_hydrologicalSeasons
from radproc.raw import unzip_RW_binaries, unzip_YW_binaries, radolan_binaries_to_dataframe, radolan_binaries_to_hdf5, create_idraster_and_process_radolan_data, process_radolan_data
from radproc.wradlib_io import read_RADOLAN_composite
from radproc.heavyrain import find_heavy_rainfalls, count_heavy_rainfall_intervals
from radproc.dwd_gauge import stationfile_to_df, summarize_metadata_files, dwd_gauges_to_hdf5
try:
from radproc.arcgis import create_idraster_germany, clip_idraster, raster_to_array, import_idarray_from_raster, create_idarray
from radproc.arcgis import export_to_raster, export_dfrows_to_gdb, attribute_table_to_df, join_df_columns_to_attribute_table
from radproc.arcgis import idTable_nineGrid, idTable_to_valueTable, valueTable_nineGrid, rastervalues_to_points, zonalstatistics
except ImportError:
# here, additional imports for future QGIS or GDAL functions might be possible
print("ArcGIS is unavailable!")
| 48.613636 | 181 | 0.821879 | 305 | 2,139 | 5.462295 | 0.521311 | 0.059424 | 0.030612 | 0.041417 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013822 | 0.120617 | 2,139 | 43 | 182 | 49.744186 | 0.871877 | 0.425432 | 0 | 0 | 0 | 0 | 0.018197 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.769231 | 0 | 0.769231 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
0ae8b475311ba875dbfd4faba7bb386462ec8601 | 606 | py | Python | saleor/shipping/utils.py | arneb/saleor | 0bdde822e774904cc9c2e8e136151dc033fd6315 | [
"BSD-3-Clause"
] | 1 | 2020-10-24T14:25:53.000Z | 2020-10-24T14:25:53.000Z | saleor/shipping/utils.py | arneb/saleor | 0bdde822e774904cc9c2e8e136151dc033fd6315 | [
"BSD-3-Clause"
] | 6 | 2021-02-08T20:20:06.000Z | 2022-03-11T23:18:59.000Z | saleor/shipping/utils.py | arneb/saleor | 0bdde822e774904cc9c2e8e136151dc033fd6315 | [
"BSD-3-Clause"
] | 3 | 2017-10-07T19:25:30.000Z | 2019-06-17T21:58:59.000Z | from prices import PriceRange
from .models import ShippingMethodCountry
def get_shipment_options(country_code):
shipping_methods_qs = ShippingMethodCountry.objects.select_related(
'shipping_method')
shipping_methods = shipping_methods_qs.filter(country_code=country_code)
if not shipping_methods.exists():
shipping_methods = shipping_methods_qs.filter(country_code='')
if shipping_methods:
shipping_methods = shipping_methods.values_list('price', flat=True)
return PriceRange(
min_price=min(shipping_methods), max_price=max(shipping_methods))
| 37.875 | 77 | 0.767327 | 70 | 606 | 6.285714 | 0.442857 | 0.375 | 0.209091 | 0.272727 | 0.222727 | 0.222727 | 0.222727 | 0.222727 | 0 | 0 | 0 | 0 | 0.158416 | 606 | 15 | 78 | 40.4 | 0.862745 | 0 | 0 | 0 | 0 | 0 | 0.033003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
0aead8519183ed6369f3c2aecf733770d7ab2455 | 2,927 | py | Python | constants/defaults.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 206 | 2015-10-15T07:05:08.000Z | 2021-02-19T11:48:36.000Z | constants/defaults.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 8 | 2017-10-16T10:18:31.000Z | 2022-03-09T14:24:27.000Z | constants/defaults.py | goztrk/django-htk | c56bf112e5d627780d2f4288460eae5cce80fa9e | [
"MIT"
] | 61 | 2015-10-15T08:12:44.000Z | 2022-03-10T12:25:06.000Z | ##
# Allowed hosts
HTK_ALLOWED_HOST_REGEXPS = (
# TODO: remove this rule, it's too permissive
r'(.*)',
# e.g.
#r'(.*\.)?hacktoolkit\.com(\.)?',
)
##
# Miscellaneous settings
HTK_DEFAULT_DOMAIN = 'hacktoolkit.com'
HTK_DEFAULT_APP_LABEL = 'htk'
HTK_SITE_NAME = 'Hacktoolkit'
HTK_SYMBOLIC_SITE_NAME = 'hacktoolkit'
HTK_PATH_ADMIN = '/admin'
HTK_PATH_ADMINTOOLS = '/admintools'
HTK_URLS_NAMESPACE = None
HTK_INDEX_URL_NAME = 'index'
HTK_REDIRECT_URL_NAME = 'redir'
HTK_STATIC_META_TITLE_VALUES = {}
HTK_STATIC_META_DESCRIPTION_VALUES = {}
HTK_TEMPLATE_RENDERER = 'htk.view_helpers.render_custom'
HTK_TEMPLATE_CONTEXT_GENERATOR = 'htk.view_helpers.wrap_data'
HTK_CSS_EXTENSION = 'css'
##
# JSON Serialization Settings
HTK_JSON_DECIMAL_SHOULD_QUANTIZE = True
HTK_JSON_DECIMAL_QUANTIZE = '0.01'
##
# Email settings
HTK_EMAIL_BASE_TEMPLATE_HTML = 'emails/base.html'
HTK_EMAIL_BASE_TEMPLATE_TEXT = 'emails/base.txt'
HTK_DEFAULT_EMAIL_SENDING_DOMAIN = 'hacktoolkit.com'
HTK_DEFAULT_EMAIL_SENDER = 'Hacktoolkit <no-reply@hacktoolkit.com>'
HTK_DEFAULT_EMAIL_RECIPIENTS = ['info@hacktoolkit.com',]
HTK_EMAIL_CONTEXT_GENERATOR = 'htk.mailers.email_context_generator'
HTK_EMAIL_ATTACHMENTS = ()
HTK_FIND_EMAILS_VALIDATOR = 'htk.lib.fullcontact.utils.find_valid_emails'
HTK_EMAIL_PERSON_RESOLVER = 'htk.lib.fullcontact.utils.find_person_by_email'
##
# Locale
HTK_DEFAULT_COUNTRY = 'US'
HTK_DEFAULT_TIMEZONE = 'America/Los_Angeles'
##
# Domain Verification URLs
HTK_DOMAIN_META_URL_NAMES = (
'robots',
'google_site_verification',
'bing_site_auth',
'sitemap',
)
##
# Hostnames
HTK_DEV_HOST_REGEXPS = []
##
# Forms
HTK_FORMS_USE_CUSTOM_LABELS = False
HTK_FORMS_CUSTOM_LABELS = {}
##
# Crypto
HTK_LUHN_XOR_KEYS = {}
##
# Enums
HTK_ENUM_SYMBOLIC_NAME_OVERRIDES = {}
# HTK Imports
from htk.admintools.constants.defaults import *
from htk.apps.accounts.constants.defaults import *
from htk.apps.bible.constants.defaults import *
from htk.apps.cpq.constants.defaults import *
from htk.apps.file_storage.constants.defaults import *
from htk.apps.invitations.constants.defaults import *
from htk.apps.maintenance_mode.constants.defaults import *
from htk.apps.notifications.constants.defaults import *
from htk.apps.organizations.constants.defaults import *
from htk.cache.constants.defaults import *
from htk.forms.constants.defaults import *
from htk.lib.alexa.constants.defaults import *
from htk.lib.dynamic_screening_solutions.constants.defaults import *
from htk.lib.fullcontact.constants.defaults import *
from htk.lib.iterable.constants.defaults import *
from htk.lib.mongodb.constants.defaults import *
from htk.lib.qrcode.constants.defaults import *
from htk.lib.shopify_lib.constants.defaults import *
from htk.lib.slack.constants.defaults import *
from htk.lib.stripe_lib.constants.defaults import *
from htk.lib.yelp.constants.defaults import *
from htk.lib.zuora.constants.defaults import *
0afb54e81cd8ed2dc4d7213428be19bb0f27d660 | 1,471 | py | Python | neuralnetnumpy/activation.py | raufbhat-dev/Deep-Neural-Net-Numpy | db347057c0763945b5a4852128b99cc64cc562a4 | ["BSD-3-Clause"]
import numpy as np
class Activation:
    def __init__(self, activation):
        self.activation = activation
        self.activation_derivative = np.ones(shape=(1, 1))

    def getActivation(self):
        if self.activation == 'sigmoid':
            def func(y):
                # np.exp(1 - y) * np.exp(-1) == np.exp(-y), so this is the
                # standard logistic function 1 / (1 + exp(-y))
                y_ret = np.matrix(1 / (1 + np.exp(1 - y) * np.exp(-1)))
                # d/dy sigmoid(y) = sigmoid(y) * (1 - sigmoid(y))
                self.activation_derivative = np.multiply(y_ret, np.ones(y_ret.shape[-1]) - y_ret)
                return y_ret
        elif self.activation == 'relu':
            def func(y):
                y_ret = np.where(y < 0, 0, y)
                self.activation_derivative = np.matrix(np.where(y > 0, 1, 0))
                return y_ret
        elif self.activation == 'leakyRelu':
            def func(y):
                alpha = 0.01
                y_ret = np.where(y > 0, y, y * alpha)
                self.activation_derivative = np.matrix(np.where(y > 0, 1, alpha))
                return y_ret
        elif self.activation == 'softmax':
            def func(y):
                # shift by the max for numerical stability; softmax is
                # invariant to a constant shift of its inputs
                shift_y = y - np.max(y)
                exps = np.exp(shift_y)
                softmax = np.array(exps / np.sum(exps, axis=1))
                # placeholder derivative: softmax is assumed to be paired
                # with cross-entropy loss, whose combined gradient is (p - t)
                self.activation_derivative = np.ones(y.shape[-1])
                return softmax
        elif self.activation == 'tanh':
            def func(y):
                act_tanh = np.tanh(y)
                self.activation_derivative = 1 - np.power(act_tanh, 2)
                return act_tanh
        else:
            # previously an unknown name fell through and `func` was never
            # bound, raising NameError on the return below
            raise ValueError('Unknown activation: {}'.format(self.activation))
        return func
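# The `sigmoid` branch writes the logistic function in a roundabout way.
# A standalone check confirming that 1/(1 + np.exp(1 - y)*np.exp(-1)) equals
# the standard 1/(1 + np.exp(-y)) (since exp(1 - y) * exp(-1) == exp(-y)),
# and that the stored derivative matches sigmoid * (1 - sigmoid):

```python
import numpy as np

y = np.linspace(-5.0, 5.0, 11)
as_written = 1 / (1 + np.exp(1 - y) * np.exp(-1))
standard = 1 / (1 + np.exp(-y))
assert np.allclose(as_written, standard)

# d/dy sigmoid(y) = sigmoid(y) * (1 - sigmoid(y))
deriv = standard * (1 - standard)
assert np.allclose(deriv, np.exp(-y) / (1 + np.exp(-y)) ** 2)
print(standard[5])  # sigmoid(0.0) -> 0.5
```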
0afbef17cc74c28bc98ba37e5da33f363c8db2e5 | 113 | py | Python | tests/__init__.py | mkshgh/resizePixel | ec775c59dec4b3fa568c6080420aff0de39c0b9d | ["MIT"]
import pytest
from resizePixel.resizePixel import *
import unittest
__all__ = [
'pytest',
'unittest',
]
0afc346108013b2fb5cdfe74ed01447ffc8f07ba | 35 | py | Python | NDBSCANjDE/eucl_dist/__init__.py | krowck/ISDA-NCjDE-HJ | 44c33ba12542a88eaa39fe2b72398ffd7b439372 | ["MIT"]
__all__ = ["gpu_dist", "cpu_dist"]
7c0ebad592fff0b57076126ca7a37d69f13dbb65 | 563 | py | Python | src/typeDefs/section_1_5/section_1_5_3.py | dheerajgupta0001/wrldc_mis_monthly_report_generator | dd5ae6f28ec6bf8e6532820fd71dd63f8b223f0b | ["MIT"]
from typing import TypedDict
import datetime as dt
class ISection_1_5_3(TypedDict):
prev_month_name: str
wr_avg_con: str
wr_avg_con_prev_month: str
wr_avg_con_last_year: str
wr_avg_con_perc_change_prev_month: float
wr_avg_con_perc_change_last_year: float
wr_max_con: float
wr_max_con_prev_month: float
wr_max_con_last_year: float
wr_max_con_perc_change_prev_month: float
wr_max_con_perc_change_last_year: float
wr_max_con_date_str: str
wr_max_con_date_str_prev_month: str
    wr_max_con_date_str_last_year: str
7c12945f5968f28156298bdb13b11aab318148e8 | 484 | py | Python | core/SecretCommands.py | shadoso/ShadoBot | bf25a69b979a54107c8ea20e829922543ed49919 | ["MIT"] | 1 | 2022-02-08T05:41:39.000Z | 2022-02-08T05:41:39.000Z
# web = discord.Embed(title="ERROR_404_NOT_FOUND", description=HACKER_DESCRIPTION, color=0x1d2b53)
# web.set_thumbnail(url=HACKER)
# web.add_field(name=":unlock: CHANGE_ORG_USER :coin: 99.945,99", value=ORG, inline=False)
# await ctx.send(embed=web)
# HACKER = "https://cdn.discordapp.com/attachments/935364491804303392/935422018311036958/image3A31064_eightbit.png"
# HACKER_DESCRIPTION = "ERROR_404_NOT_FOUND " * 9
# ORG = f"> Hacks the server and decodes the security hash-256 key"
7c1d0bfca56c3b11d374b3c51b9d3087abbec777 | 183 | py | Python | pyoptools/raytrace/shape/__init__.py | fcichos/pyoptools | ce0df42d45420f02d351e76d5f11fded4df8969d | ["BSD-3-Clause"] | 1 | 2021-05-21T14:11:09.000Z | 2021-05-21T14:11:09.000Z
from shape import *
from rectangular import *
from circular import *
from triangular import *
__all__ = ["Shape",
           "Circular",
           "Rectangular",
           "Triangular"]
7c205832c09ecaf65199878a01f82930b209e6fa | 272 | py | Python | series_tiempo_ar_api/libs/custom_admins/utils.py | datosgobar/series-tiempo-ar-api | 6b553c573f6e8104f8f3919efe79089b7884280c | ["MIT"] | 28 | 2017-12-16T20:30:52.000Z | 2021-08-11T17:35:04.000Z
from elasticsearch_dsl import Search, Q
from series_tiempo_ar_api.apps.metadata import constants
def delete_metadata(fields: list):
search = Search(index=constants.METADATA_ALIAS)
return search.filter('terms', id=[field.identifier for field in fields]).delete()
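# For reference, `Search(...).filter('terms', id=[...])` places the clause in a
# `bool` filter context. The sketch below builds the equivalent query body with
# plain dicts (no Elasticsearch cluster assumed), matching what elasticsearch_dsl
# should serialize via `to_dict()`:

```python
def terms_filter_body(field_ids):
    # filter context: no scoring, cacheable; same shape elasticsearch_dsl emits
    return {
        "query": {
            "bool": {
                "filter": [{"terms": {"id": field_ids}}]
            }
        }
    }


body = terms_filter_body(["series_a", "series_b"])
print(body["query"]["bool"]["filter"][0]["terms"]["id"])  # ['series_a', 'series_b']
```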
7c25038fdb463a8dce5fcb0ea6a7db645ad01323 | 1,251 | py | Python | pysbolgraph/S2Interaction.py | zhfanrui/pysbolgraph | c4914705bd9b22a2b69db0fc4d43049fcb07ad17 | ["BSD-2-Clause"] | 4 | 2018-06-29T10:43:08.000Z | 2019-03-27T22:33:33.000Z
from .S2Identified import S2Identified
from .S2Participation import S2Participation
from .S2IdentifiedFactory import S2IdentifiedFactory
from .terms import SBOL2
class S2Interaction(S2Identified):
def __init__(self, g, uri):
super(S2Interaction, self).__init__(g, uri)
@property
def type(self):
return self.get_uri_property(SBOL2.type)
@type.setter
def type(self, the_type):
self.set_uri_property(SBOL2.type, the_type)
@property
def participations(self):
return [S2Participation(self.g, uri) for uri in self.get_uri_properties(SBOL2.participation)]
def create_participation(self, display_id, participant, role):
identified = S2IdentifiedFactory.create_child(self.g, SBOL2.Participation, self, display_id)
participation = S2Participation(self.g, identified.uri)
participation.participant = participant
participation.add_role(role)
self.insert_uri_property(SBOL2.participation, participation.uri)
return participation
@property
def measure(self):
return self.get_identified_property(SBOL2.measure)
@measure.setter
def measure(self, measure):
self.set_identified_property(SBOL2.measure, measure)
7c42a448df72ab67070dc305b5150d0eba6c485b | 80 | py | Python | legacy/AISing2019/A1.py | mo-mo-666/AtCoder | 99556f5ed98510850aaa8ab2b845da6a9359f5a5 | ["MIT"]
n = int(input())
h = int(input())
w = int(input())
print((n-h+1) * (n-w+1))
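# The answer (n-h+1) * (n-w+1) counts the top-left corners at which an h x w
# rectangle fits inside an n x n grid. A brute-force cross-check for a small case:

```python
def count_placements(n, h, w):
    # enumerate every top-left corner and keep the ones where the
    # rectangle stays fully inside the grid
    return sum(
        1
        for top in range(n)
        for left in range(n)
        if top + h <= n and left + w <= n
    )


n, h, w = 5, 2, 3
assert count_placements(n, h, w) == (n - h + 1) * (n - w + 1)
print(count_placements(n, h, w))  # 12
```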
7c52bdb30fb06c4dab28bdc142ac208ffe5a205a | 110 | py | Python | mafi/services/processpool.py | WiFiFi/mafibot | 835560b3eb3f39b589eec373f5515d6a9db68c78 | ["MIT"] | 2 | 2021-06-11T13:33:19.000Z | 2021-06-11T13:34:14.000Z
from concurrent.futures import ProcessPoolExecutor
processpool_executor = ProcessPoolExecutor(max_workers=3)
7c57ed66de1d0d3cd14ea213de4c85e39d224b58 | 9,530 | py | Python | gcraft/utils/geometry/mesh_ops.py | ddomurad/gcraft | 7174fcacee875fd90e8d878463108aae3c77873e | ["MIT"]
from OpenGL.GL import *
from gcraft.utils.geometry.mesh_geometry import MeshGeometry
from gcraft.utils.transformation import Transformation
from gcraft.utils.vector_ops import *
def mod_vertex_data(geometry: MeshGeometry, data_tag, data_len, mod):
if not geometry.contains_data(data_tag):
return
vertex_stride = geometry.get_vertex_stride()
# data offset
do1 = geometry.get_data_offset(data_tag)
do2 = do1 + data_len
for i in range(geometry.vertex_count):
vs = i * vertex_stride
ve = (i + 1) * vertex_stride
v = geometry.vertex_data[vs:ve]
v[do1:do2] = mod(v[do1:do2])
geometry.vertex_data[vs:ve] = v
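# mod_vertex_data treats each vertex as a stride-sized slice and lets the
# `mod` callback rewrite one attribute range inside it. A self-contained
# sketch of that slice-and-splice pattern, flipping a normal stored at
# offset 3 of a stride-6 vertex (the layout here is illustrative):

```python
vertex_data = [
    # two vertices, stride 6: position (x, y, z) + normal (nx, ny, nz)
    0.0, 0.0, 0.0,  0.0, 0.0, 1.0,
    1.0, 0.0, 0.0,  0.0, 0.0, 1.0,
]
stride, offset, length = 6, 3, 3
mod = lambda n: [-c for c in n]  # flip the normal

for i in range(len(vertex_data) // stride):
    v = vertex_data[i * stride:(i + 1) * stride]
    v[offset:offset + length] = mod(v[offset:offset + length])
    vertex_data[i * stride:(i + 1) * stride] = v

assert vertex_data[3:6] == [0.0, 0.0, -1.0]  # -0.0 compares equal to 0.0
```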
def remove_indices_from_mesh(geometry: MeshGeometry):
new_vertex_data = []
vertex_stride = sum([d[1] for d in geometry.vertex_metadata])
if not geometry.index_data:
return
for index in geometry.index_data:
new_vertex_data.extend(
geometry.vertex_data[index*vertex_stride: (index+1)*vertex_stride])
geometry.vertex_data = new_vertex_data
geometry.vertex_count = len(new_vertex_data)//vertex_stride
geometry.index_data = None
def add_tangents_data(geometry: MeshGeometry):
    # vertex_metadata stores (name, size) tuples, so membership tests on the
    # bare name never match; use contains_data instead
    if geometry.contains_data("v_tangent"):
        return

    if geometry.index_data is None:
        raise ValueError("Vertex tangent calculation is not supported for a non-indexed mesh")

    if geometry.primitive_type != GL_TRIANGLES:
        raise ValueError("Vertex tangent calculation is not supported for primitives other than triangles")
tangent_work_data = list([[0, 0, 0] for i in range(geometry.vertex_count)])
vertex_stride = geometry.get_vertex_stride()
# vertex pos offset
vpo = geometry.get_data_offset("v_pos")
# vertex uv offset
vuo = geometry.get_data_offset("uv_0")
if vuo is None:
raise ValueError("Vertex tangent calculation not supported without uv coordinates")
for i in range(len(geometry.index_data)//3):
i0 = geometry.index_data[0 + i * 3]
i1 = geometry.index_data[1 + i * 3]
i2 = geometry.index_data[2 + i * 3]
v0 = geometry.vertex_data[i0 * vertex_stride: (i0 + 1) * vertex_stride]
v1 = geometry.vertex_data[i1 * vertex_stride: (i1 + 1) * vertex_stride]
v2 = geometry.vertex_data[i2 * vertex_stride: (i2 + 1) * vertex_stride]
e1 = v3_sub(v1[vpo:vpo + 3], v0[vpo:vpo + 3])
e2 = v3_sub(v2[vpo:vpo + 3], v0[vpo:vpo + 3])
delta_u1 = v1[vuo] - v0[vuo]
delta_v1 = v1[vuo + 1] - v0[vuo + 1]
delta_u2 = v2[vuo] - v0[vuo]
delta_v2 = v2[vuo + 1] - v0[vuo + 1]
t = delta_u1*delta_v2 - delta_u2*delta_v1
if t == 0:
t = 0.0000001
f = 1/t
tx = f * (delta_v2 * e1[0] - delta_v1 * e2[0])
ty = f * (delta_v2 * e1[1] - delta_v1 * e2[1])
tz = f * (delta_v2 * e1[2] - delta_v1 * e2[2])
tangent = [tx, ty, tz]
v3_add_self(tangent_work_data[i0], tangent)
v3_add_self(tangent_work_data[i1], tangent)
v3_add_self(tangent_work_data[i2], tangent)
new_vertex_data = []
for vi in range(geometry.vertex_count):
tangent = tangent_work_data[vi]
v3_normalize_self(tangent)
new_vertex_data.extend(
geometry.vertex_data[vi*vertex_stride:(vi+1)*vertex_stride])
new_vertex_data.extend(tangent)
geometry.vertex_data = new_vertex_data
geometry.vertex_metadata.append(('v_tangent', 3))
def move_to_cog(geometry: MeshGeometry, select_axis=(1, 1, 1)):
cog = [0, 0, 0]
vertex_stride = geometry.get_vertex_stride()
vpo = geometry.get_data_offset("v_pos")
for i in range(geometry.vertex_count):
v3_add_self(cog, geometry.vertex_data[i * vertex_stride + vpo: i * vertex_stride + vpo + 3])
v3_div_self(cog, geometry.vertex_count)
for i in range(geometry.vertex_count):
if select_axis[0]:
geometry.vertex_data[i * vertex_stride + vpo + 0] -= cog[0]
if select_axis[1]:
geometry.vertex_data[i * vertex_stride + vpo + 1] -= cog[1]
if select_axis[2]:
geometry.vertex_data[i * vertex_stride + vpo + 2] -= cog[2]
def transform(geometry: MeshGeometry, transformation: Transformation, data_types=['v_pos', 'v_normal']):
vertex_stride = geometry.get_vertex_stride()
matrix = transformation.get_matrix()
data_positions = [geometry.get_data_offset(data_type) for data_type in data_types]
for i in range(geometry.vertex_count):
for data_offset in data_positions:
v = geometry.vertex_data[i * vertex_stride + data_offset: i * vertex_stride + data_offset + 3]
tv = m4_dot_v3(matrix, v)
geometry.vertex_data[i * vertex_stride + data_offset + 0] = tv[0]
geometry.vertex_data[i * vertex_stride + data_offset + 1] = tv[1]
geometry.vertex_data[i * vertex_stride + data_offset + 2] = tv[2]
def normalize_normals(geometry: MeshGeometry):
normalize_data(geometry, ["v_normal"])
def normalize_data(geometry: MeshGeometry, data_types):
vertex_stride = geometry.get_vertex_stride()
data_positions = [geometry.get_data_offset(data_type) for data_type in data_types]
for i in range(geometry.vertex_count):
for data_offset in data_positions:
v = geometry.vertex_data[i * vertex_stride + data_offset: i * vertex_stride + data_offset + 3]
v3_normalize_self(v)
geometry.vertex_data[i * vertex_stride + data_offset + 0] = v[0]
geometry.vertex_data[i * vertex_stride + data_offset + 1] = v[1]
geometry.vertex_data[i * vertex_stride + data_offset + 2] = v[2]
def add_normals_data(geometry: MeshGeometry):
if geometry.index_data is None:
_add_normals_data_non_indexed_mesh(geometry)
else:
_add_normals_data_to_indexed_mesh(geometry)
def _add_normals_data_to_indexed_mesh(geometry: MeshGeometry):
    # vertex_metadata stores (name, size) tuples, so use contains_data here
    if geometry.contains_data("v_normal"):
        return

    if geometry.index_data is None:
        raise ValueError("Vertex normal calculation is not supported for a non-indexed mesh")

    if geometry.primitive_type != GL_TRIANGLES:
        raise ValueError("Vertex normal calculation is not supported for primitives other than triangles")
normal_work_data = list([[0, 0, 0] for i in range(geometry.vertex_count)])
normals_avg_count = [0]*geometry.vertex_count
vertex_stride = geometry.get_vertex_stride()
# vertex pos offset
vpo = geometry.get_data_offset("v_pos")
for i in range(len(geometry.index_data)//3):
i0 = geometry.index_data[0 + i * 3]
i1 = geometry.index_data[1 + i * 3]
i2 = geometry.index_data[2 + i * 3]
v0 = geometry.vertex_data[i0 * vertex_stride: (i0 + 1) * vertex_stride]
v1 = geometry.vertex_data[i1 * vertex_stride: (i1 + 1) * vertex_stride]
v2 = geometry.vertex_data[i2 * vertex_stride: (i2 + 1) * vertex_stride]
e1 = v3_sub(v1[vpo:vpo + 3], v0[vpo:vpo + 3])
e2 = v3_sub(v2[vpo:vpo + 3], v0[vpo:vpo + 3])
normal = v3_cross(e1, e2)
v3_add_self(normal_work_data[i0], normal)
v3_add_self(normal_work_data[i1], normal)
v3_add_self(normal_work_data[i2], normal)
normals_avg_count[i0] += 1
normals_avg_count[i1] += 1
normals_avg_count[i2] += 1
new_vertex_data = []
for vi in range(geometry.vertex_count):
normal = v3_div(normal_work_data[vi], normals_avg_count[vi])
v3_normalize_self(normal)
new_vertex_data.extend(
geometry.vertex_data[vi*vertex_stride:(vi+1)*vertex_stride])
new_vertex_data.extend(normal)
geometry.vertex_data = new_vertex_data
geometry.vertex_metadata.append(('v_normal', 3))
def _add_normals_data_non_indexed_mesh(geometry: MeshGeometry):
    # vertex_metadata stores (name, size) tuples, so use contains_data here
    if geometry.contains_data("v_normal"):
        return

    if geometry.primitive_type != GL_TRIANGLES:
        raise ValueError("Vertex normal calculation is not supported for primitives other than triangles")
normal_work_data = list([[0, 0, 0] for i in range(geometry.vertex_count)])
normals_avg_count = [0]*geometry.vertex_count
vertex_stride = geometry.get_vertex_stride()
# vertex pos offset
vpo = geometry.get_data_offset("v_pos")
for i in range(geometry.vertex_count//3):
i0 = i*3 + 0
i1 = i*3 + 1
i2 = i*3 + 2
v0 = geometry.vertex_data[i0 * vertex_stride: (i0 + 1) * vertex_stride]
v1 = geometry.vertex_data[i1 * vertex_stride: (i1 + 1) * vertex_stride]
v2 = geometry.vertex_data[i2 * vertex_stride: (i2 + 1) * vertex_stride]
e1 = v3_sub(v1[vpo:vpo + 3], v0[vpo:vpo + 3])
e2 = v3_sub(v2[vpo:vpo + 3], v0[vpo:vpo + 3])
normal = v3_cross(e1, e2)
v3_add_self(normal_work_data[i0], normal)
v3_add_self(normal_work_data[i1], normal)
v3_add_self(normal_work_data[i2], normal)
normals_avg_count[i0] += 1
normals_avg_count[i1] += 1
normals_avg_count[i2] += 1
new_vertex_data = []
for vi in range(geometry.vertex_count):
normal = v3_div(normal_work_data[vi], normals_avg_count[vi])
v3_normalize_self(normal)
new_vertex_data.extend(
geometry.vertex_data[vi*vertex_stride:(vi+1)*vertex_stride])
new_vertex_data.extend(normal)
geometry.vertex_data = new_vertex_data
geometry.vertex_metadata.append(('v_normal', 3))
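# Both normal-accumulation passes rely on the same geometric fact: the cross
# product of two triangle edges is perpendicular to the face. A standalone
# check with NumPy standing in for the v3_* helpers:

```python
import numpy as np

# a triangle in the XY plane should get a +Z face normal
v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

e1 = v1 - v0
e2 = v2 - v0
normal = np.cross(e1, e2)
print(normal)  # [0. 0. 1.]

# perpendicular to both edges
assert np.dot(normal, e1) == 0.0 and np.dot(normal, e2) == 0.0
```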
7c823e8601336d906df0bebdb855983f270e790f | 887 | py | Python | setup.py | akrobotics/rosjava_build_tools | 5b5afb4aa245589314d67929edf8bd7775ed7556 | ["Apache-2.0"] | 1 | 2021-01-21T16:38:06.000Z | 2021-01-21T16:38:06.000Z
#!/usr/bin/env python
from distutils.core import setup
from catkin_pkg.python_setup import generate_distutils_setup
d = generate_distutils_setup(
packages=['rosjava_build_tools'],
package_dir={'': 'src'},
scripts=['scripts/catkin_create_android_pkg',
'scripts/catkin_create_android_project',
'scripts/catkin_create_android_library_project',
'scripts/catkin_create_rosjava_pkg',
'scripts/catkin_create_rosjava_project',
'scripts/catkin_create_rosjava_library_project',
],
    package_data={'rosjava_build_tools': [
'templates/android_package/*',
'templates/android_project/*',
'templates/rosjava_library_project/*',
'templates/rosjava_package/*',
'templates/rosjava_project/*',
'templates/init_repo/*',
]},
)
setup(**d)
7c97354a003328c288fcfb6f4129286cdd8d9893 | 416 | py | Python | django_clienthints/utils.py | reef-technologies/django-clienthints | 045bb15cc3e41799fdc41a5907268b3cc1fd6ccc | ["BSD-3-Clause"]
from django.conf import settings
def get_ch_accept_header_value():
if not settings.CLIENTHINTS:
return ''
return ','.join(settings.CLIENTHINTS)
def get_future_policy_header_value():
if not settings.CLIENTHINTS_ALLOWLIST:
return ''
return ';'.join(
f'{feature.lower()} {" ".join(allowlist)}'
for feature, allowlist in settings.CLIENTHINTS_ALLOWLIST.items()
)
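# The two helpers above produce comma- and semicolon-joined header values
# respectively. A sketch of the output format with plain Python values standing
# in for the Django settings (the hint names are illustrative):

```python
clienthints = ['DPR', 'Width', 'Viewport-Width']
accept_ch = ','.join(clienthints)
print(accept_ch)  # DPR,Width,Viewport-Width

allowlist = {'DPR': ["'self'", 'https://cdn.example.com']}
policy = ';'.join(
    f'{feature.lower()} {" ".join(origins)}'
    for feature, origins in allowlist.items()
)
print(policy)  # dpr 'self' https://cdn.example.com
```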
7ca78f753e909c633274e8cf42d50121adbac682 | 4,402 | py | Python | kombu/asynchronous/aws/sqs/queue.py | CountRedClaw/kombu | 14d395aa859b905874d8b4abd677a4c7ac86e10b | ["BSD-3-Clause"]
"""Amazon SQS queue implementation."""
from __future__ import annotations
from vine import transform
from .message import AsyncMessage
__all__ = ['AsyncQueue']
def list_first(rs):
    """Return the first item if the list has exactly one item, else None."""
    return rs[0] if len(rs) == 1 else None
class AsyncQueue:
"""Async SQS Queue."""
def __init__(self, connection=None, url=None, message_class=AsyncMessage):
self.connection = connection
self.url = url
self.message_class = message_class
self.visibility_timeout = None
def _NA(self, *args, **kwargs):
raise NotImplementedError()
count_slow = dump = save_to_file = save_to_filename = save = \
save_to_s3 = load_from_s3 = load_from_file = load_from_filename = \
load = clear = _NA
def get_attributes(self, attributes='All', callback=None):
return self.connection.get_queue_attributes(
self, attributes, callback,
)
def set_attribute(self, attribute, value, callback=None):
return self.connection.set_queue_attribute(
self, attribute, value, callback,
)
def get_timeout(self, callback=None, _attr='VisibilityTimeout'):
return self.get_attributes(
_attr, transform(
self._coerce_field_value, callback, _attr, int,
),
)
def _coerce_field_value(self, key, type, response):
return type(response[key])
def set_timeout(self, visibility_timeout, callback=None):
return self.set_attribute(
'VisibilityTimeout', visibility_timeout,
transform(
self._on_timeout_set, callback,
)
)
def _on_timeout_set(self, visibility_timeout):
if visibility_timeout:
self.visibility_timeout = visibility_timeout
return self.visibility_timeout
def add_permission(self, label, aws_account_id, action_name,
callback=None):
return self.connection.add_permission(
self, label, aws_account_id, action_name, callback,
)
def remove_permission(self, label, callback=None):
return self.connection.remove_permission(self, label, callback)
def read(self, visibility_timeout=None, wait_time_seconds=None,
callback=None):
return self.get_messages(
1, visibility_timeout,
wait_time_seconds=wait_time_seconds,
callback=transform(list_first, callback),
)
def write(self, message, delay_seconds=None, callback=None):
return self.connection.send_message(
self, message.get_body_encoded(), delay_seconds,
callback=transform(self._on_message_sent, callback, message),
)
def write_batch(self, messages, callback=None):
return self.connection.send_message_batch(
self, messages, callback=callback,
)
def _on_message_sent(self, orig_message, new_message):
orig_message.id = new_message.id
orig_message.md5 = new_message.md5
return new_message
def get_messages(self, num_messages=1, visibility_timeout=None,
attributes=None, wait_time_seconds=None, callback=None):
return self.connection.receive_message(
self, number_messages=num_messages,
visibility_timeout=visibility_timeout,
attributes=attributes,
wait_time_seconds=wait_time_seconds,
callback=callback,
)
def delete_message(self, message, callback=None):
return self.connection.delete_message(self, message, callback)
def delete_message_batch(self, messages, callback=None):
return self.connection.delete_message_batch(
self, messages, callback=callback,
)
def change_message_visibility_batch(self, messages, callback=None):
return self.connection.change_message_visibility_batch(
self, messages, callback=callback,
)
def delete(self, callback=None):
return self.connection.delete_queue(self, callback=callback)
def count(self, page_size=10, vtimeout=10, callback=None,
_attr='ApproximateNumberOfMessages'):
return self.get_attributes(
_attr, callback=transform(
self._coerce_field_value, callback, _attr, int,
),
)
| 33.603053 | 78 | 0.65493 | 487 | 4,402 | 5.634497 | 0.211499 | 0.058309 | 0.085277 | 0.104227 | 0.430029 | 0.307216 | 0.284621 | 0.156341 | 0.0707 | 0.037901 | 0 | 0.003688 | 0.260791 | 4,402 | 130 | 79 | 33.861538 | 0.839582 | 0.023171 | 0 | 0.131313 | 0 | 0 | 0.017274 | 0.006303 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212121 | false | 0 | 0.030303 | 0.161616 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7ca80a43ce964edc98480e8ff68bac67ebd245af | 221 | py | Python | neural_network/loss_function.py | hirotoyoshidome/python_utils | 1c59aa8b8a3de6c350abddf2be29484a427e45ef | [
"MIT"
] | null | null | null | neural_network/loss_function.py | hirotoyoshidome/python_utils | 1c59aa8b8a3de6c350abddf2be29484a427e45ef | [
"MIT"
] | null | null | null | neural_network/loss_function.py | hirotoyoshidome/python_utils | 1c59aa8b8a3de6c350abddf2be29484a427e45ef | [
"MIT"
] | null | null | null | #!/bin/usr python3
import numpy as np
# sum-of-squares error
def mean_squared_error(y, t):
    return 0.5 * np.sum((y - t) ** 2)


# cross-entropy error
def cross_entropy_error(y, t):
    delta = 1e-7  # avoid log(0)
    return -np.sum(t * np.log(y + delta))
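# A self-contained sanity check of the two losses on a one-hot target
# (computed inline, so it does not depend on the function names above):

```python
import numpy as np

t = np.array([0, 0, 1, 0])           # one-hot target, true class index 2
y = np.array([0.1, 0.1, 0.7, 0.1])   # predicted probabilities

sse = 0.5 * np.sum((y - t) ** 2)               # 0.5 * (3 * 0.1^2 + 0.3^2) = 0.06
cross_entropy = -np.sum(t * np.log(y + 1e-7))  # approximately -ln(0.7)

assert abs(sse - 0.06) < 1e-9
assert abs(cross_entropy + np.log(0.7 + 1e-7)) < 1e-12
print(round(float(sse), 3), round(float(cross_entropy), 3))  # 0.06 0.357
```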
7cabf4fa3a406d358f75714724ad8a7d52003023 | 1,685 | py | Python | Blender 2.91/2.91/scripts/addons/power_sequencer/ui/__init__.py | calculusrobotics/RNNs-for-Bayesian-State-Estimation | 2aacf86d2e447e10c840b4926d4de7bc5e46d9bc | ["MIT"] | 1 | 2021-06-30T00:39:40.000Z | 2021-06-30T00:39:40.000Z
release/scripts/addons/power_sequencer/ui/__init__.py | ringsce/Rings3D | 8059d1e2460fc8d6f101eff8e695f68a99f6671d | ["Naumen", "Condor-1.1", "MS-PL"]
#
#
# This file is part of Power Sequencer.
#
# Power Sequencer is free software: you can redistribute it and/or modify it under the terms of the
# GNU General Public License as published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# Power Sequencer is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with Power Sequencer. If
# not, see <https://www.gnu.org/licenses/>.
#
import bpy
from .menu_contextual import POWER_SEQUENCER_MT_contextual
from .menu_toolbar import (
POWER_SEQUENCER_MT_main,
POWER_SEQUENCER_MT_playback,
POWER_SEQUENCER_MT_strips,
POWER_SEQUENCER_MT_select,
POWER_SEQUENCER_MT_edit,
POWER_SEQUENCER_MT_markers,
POWER_SEQUENCER_MT_file,
POWER_SEQUENCER_MT_trim,
POWER_SEQUENCER_MT_preview,
POWER_SEQUENCER_MT_audio,
POWER_SEQUENCER_MT_transitions,
)
classes = [
POWER_SEQUENCER_MT_contextual,
POWER_SEQUENCER_MT_main,
POWER_SEQUENCER_MT_playback,
POWER_SEQUENCER_MT_strips,
POWER_SEQUENCER_MT_select,
POWER_SEQUENCER_MT_edit,
POWER_SEQUENCER_MT_markers,
POWER_SEQUENCER_MT_file,
POWER_SEQUENCER_MT_trim,
POWER_SEQUENCER_MT_preview,
POWER_SEQUENCER_MT_audio,
POWER_SEQUENCER_MT_transitions,
]
register_ui, unregister_ui = bpy.utils.register_classes_factory(classes)
| 34.387755 | 99 | 0.791098 | 241 | 1,685 | 5.207469 | 0.423237 | 0.312351 | 0.305976 | 0.045418 | 0.450996 | 0.430279 | 0.385657 | 0.385657 | 0.385657 | 0.385657 | 0 | 0.00636 | 0.160237 | 1,685 | 48 | 100 | 35.104167 | 0.880565 | 0.44273 | 0 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cb730a94235c4730358a5f596687426bd5d8f37 | 103 | py | Python | SiteSpecificFiles/BNL/site/test.py | lsst-camera-dh/ccstsscripts | 28e021a15640b346709cfbf6b68ae6cc9a2e5dd3 | [
"BSD-3-Clause-LBNL"
] | null | null | null | SiteSpecificFiles/BNL/site/test.py | lsst-camera-dh/ccstsscripts | 28e021a15640b346709cfbf6b68ae6cc9a2e5dd3 | [
"BSD-3-Clause-LBNL"
] | 1 | 2015-04-14T18:01:25.000Z | 2015-04-14T18:01:25.000Z | SiteSpecificFiles/BNL/site/test.py | lsst-camera-dh/ccstsscripts | 28e021a15640b346709cfbf6b68ae6cc9a2e5dd3 | [
"BSD-3-Clause-LBNL"
] | null | null | null | #!/usr/bin/env python
import os
try:
    st = os.stat("tst")
    print(st)
except OSError:
    print("no file")
| 11.444444 | 23 | 0.592233 | 17 | 103 | 3.588235 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.262136 | 103 | 8 | 24 | 12.875 | 0.802632 | 0.194175 | 0 | 0 | 0 | 0 | 0.121951 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cbae306e29453e1595d4b5d2d172d7982f3d22b | 551 | py | Python | security.py | william-wls-code/StoreAPI | a1ee38a0a39d7b71f5c07072da36985f7d0500f5 | [
"MIT"
] | null | null | null | security.py | william-wls-code/StoreAPI | a1ee38a0a39d7b71f5c07072da36985f7d0500f5 | [
"MIT"
] | null | null | null | security.py | william-wls-code/StoreAPI | a1ee38a0a39d7b71f5c07072da36985f7d0500f5 | [
"MIT"
] | null | null | null | from werkzeug.security import safe_str_cmp
from models.user import UserModel
def authenticate(username, password):
''' Look for the username, if that user exists and the password is correct, return the user. '''
user = UserModel.find_by_username(username)
if user and safe_str_cmp(user.password, password):
return user
def identity(payload):
''' Extract the user id from the payload. Then retrieve the specific user with the extracted user id. '''
user_id = payload['identity']
return UserModel.find_by_id(user_id)
| 34.4375 | 109 | 0.738657 | 80 | 551 | 4.9625 | 0.4375 | 0.060453 | 0.050378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183303 | 551 | 15 | 110 | 36.733333 | 0.882222 | 0.339383 | 0 | 0 | 0 | 0 | 0.022857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0.222222 | 0.222222 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
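The `safe_str_cmp` check used in security.py performs a constant-time comparison to resist timing attacks. A minimal standard-library sketch of the same idea, assuming `hmac.compare_digest` as the stdlib counterpart (newer Werkzeug releases point to it as the replacement):

```python
import hmac

# Constant-time string comparison; a stdlib counterpart of
# werkzeug.security.safe_str_cmp as used in security.py.
def safe_str_cmp(a: str, b: str) -> bool:
    return hmac.compare_digest(a.encode(), b.encode())

print(safe_str_cmp("secret", "secret"))  # True
print(safe_str_cmp("secret", "Secret"))  # False
```

`hmac.compare_digest` examines both inputs in full regardless of where they first differ, so the comparison time does not leak the mismatch position.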
7cc2a1b7bd7380422d54fa62a9ade793a946fe2f | 86 | py | Python | playerpy/version.py | daniel-falk/playerpy | e2d2c24e85d61af9a529b657fd572ee5026f6c4d | [
"MIT"
] | null | null | null | playerpy/version.py | daniel-falk/playerpy | e2d2c24e85d61af9a529b657fd572ee5026f6c4d | [
"MIT"
] | 2 | 2021-07-17T03:26:37.000Z | 2021-07-18T16:24:31.000Z | playerpy/version.py | daniel-falk/playerpy | e2d2c24e85d61af9a529b657fd572ee5026f6c4d | [
"MIT"
] | 1 | 2021-07-04T20:38:43.000Z | 2021-07-04T20:38:43.000Z | __version_info__ = (0, 1, 1)
__version__ = '.'.join(str(i) for i in __version_info__)
| 28.666667 | 56 | 0.697674 | 14 | 86 | 3.285714 | 0.642857 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040541 | 0.139535 | 86 | 2 | 57 | 43 | 0.581081 | 0 | 0 | 0 | 0 | 0 | 0.011628 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7cc5fee45922b8c54c4aa5b1ece442adeef8f06c | 560 | py | Python | var/spack/repos/builtin/packages/r-whisker/package.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 11 | 2015-10-04T02:17:46.000Z | 2018-02-07T18:23:00.000Z | var/spack/repos/builtin/packages/r-whisker/package.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 22 | 2017-08-01T22:45:10.000Z | 2022-03-10T07:46:31.000Z | var/spack/repos/builtin/packages/r-whisker/package.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 4 | 2016-06-10T17:57:39.000Z | 2018-09-11T04:59:38.000Z | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class RWhisker(RPackage):
"""{{mustache}} for R, Logicless Templating.
Implements 'Mustache' logicless templating."""
cran = "whisker"
version('0.4', sha256='7a86595be4f1029ec5d7152472d11b16175737e2777134e296ae97341bf8fba8')
version('0.3-2', sha256='484836510fcf123a66ddd13cdc8f32eb98e814cad82ed30c0294f55742b08c7c')
| 31.111111 | 95 | 0.764286 | 57 | 560 | 7.508772 | 0.807018 | 0.088785 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220619 | 0.133929 | 560 | 17 | 96 | 32.941176 | 0.661856 | 0.492857 | 0 | 0 | 0 | 0 | 0.527675 | 0.472325 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
7ce768a9b10e326d6d586dedcc7cfad7d4819a24 | 112 | py | Python | beatsmusic/python/api_config.py | ajot/api-catalyst | 8376ad56c4bcf31b253787a12ddc4bf9f3d5c697 | [
"MIT"
] | 1 | 2017-03-10T12:54:36.000Z | 2017-03-10T12:54:36.000Z | beatsmusic/python/api_config.py | ajot/api-catalyst | 8376ad56c4bcf31b253787a12ddc4bf9f3d5c697 | [
"MIT"
] | null | null | null | beatsmusic/python/api_config.py | ajot/api-catalyst | 8376ad56c4bcf31b253787a12ddc4bf9f3d5c697 | [
"MIT"
] | null | null | null | # You will need to get an API key from Beats Music http://developer.beatsmusic.com
API_KEY = 'YOUR_API_KEY_HERE' | 56 | 82 | 0.785714 | 21 | 112 | 4 | 0.809524 | 0.214286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133929 | 112 | 2 | 83 | 56 | 0.865979 | 0.714286 | 0 | 0 | 0 | 0 | 0.548387 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6b13e76c14187e97a0be666b8ca02d885534454f | 835 | py | Python | tests/fake_repository.py | Spielmannmisha/car_holder | 1c0557455240ab84539eeedbc8c163d6cfe6fece | [
"MIT"
] | null | null | null | tests/fake_repository.py | Spielmannmisha/car_holder | 1c0557455240ab84539eeedbc8c163d6cfe6fece | [
"MIT"
] | 7 | 2021-07-05T08:25:59.000Z | 2021-07-20T18:39:26.000Z | tests/fake_repository.py | Spielmannmisha/car_holder | 1c0557455240ab84539eeedbc8c163d6cfe6fece | [
"MIT"
] | null | null | null | from typing import List
from src.models import Person
import random
from datetime import datetime
class FakeUsersRepository:
rand_id = random.randint(10, 1000)
current_date = datetime.now()
def __init__(self, users: List[Person]) -> None:
self._users = set(users)
def add(self, telegram_id: int, user_name: str, nick_name: str, id: int = rand_id, date: datetime = current_date) -> None:
user = Person(id, telegram_id, user_name, nick_name, date)
self._users.add(user)
def get(self, telegram_id) -> Person:
return next(user for user in self._users if user.telegram_id == telegram_id)
def get_by_id(self, id: int) -> Person:
return next(user for user in self._users if user.id == id)
def list(self) -> List[Person]:
return list(self._users)
| 30.925926 | 126 | 0.667066 | 122 | 835 | 4.368852 | 0.319672 | 0.101313 | 0.052533 | 0.075047 | 0.165103 | 0.165103 | 0.165103 | 0.165103 | 0.165103 | 0.165103 | 0 | 0.009317 | 0.228743 | 835 | 26 | 127 | 32.115385 | 0.818323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.222222 | 0.166667 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
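A minimal, self-contained sketch of exercising such an in-memory fake repository, using a hypothetical `namedtuple` stand-in for `src.models.Person` (the real model class is not shown in this record, so its fields are assumed):

```python
import random
from collections import namedtuple
from datetime import datetime

# Hypothetical stand-in for src.models.Person; the real class may differ.
Person = namedtuple("Person", "id telegram_id user_name nick_name date")

class FakeUsersRepository:
    """Simplified in-memory fake, mirroring the repository above."""
    def __init__(self, users):
        self._users = set(users)

    def add(self, telegram_id, user_name, nick_name):
        user = Person(random.randint(10, 1000), telegram_id,
                      user_name, nick_name, datetime.now())
        self._users.add(user)

    def get(self, telegram_id):
        return next(u for u in self._users if u.telegram_id == telegram_id)

repo = FakeUsersRepository([])
repo.add(42, "alice", "al")
print(repo.get(42).user_name)  # alice
```

Because `namedtuple` instances are hashable, they can be stored in the backing `set` just like the original's `Person` objects.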
6b17319d76714c14bd7fca122ea699f8a4780e1d | 389 | py | Python | src/simulator/report.py | jazzsewera/mops-projekt | 75924546eb73c266ba81e8e22c68ad939dea19d6 | [
"MIT"
] | null | null | null | src/simulator/report.py | jazzsewera/mops-projekt | 75924546eb73c266ba81e8e22c68ad939dea19d6 | [
"MIT"
] | null | null | null | src/simulator/report.py | jazzsewera/mops-projekt | 75924546eb73c266ba81e8e22c68ad939dea19d6 | [
"MIT"
] | null | null | null | class Report(object):
def __init__(self):
self._packets_in_buffer = []
self._packet_wait_time = []
self._server_load = []
def update_state(self, packets_in_buffer, packet_wait_time, server_load):
self._packets_in_buffer.append(packets_in_buffer)
self._packet_wait_time.append(packet_wait_time)
self._server_load.append(server_load)
| 35.363636 | 77 | 0.706941 | 51 | 389 | 4.784314 | 0.333333 | 0.147541 | 0.245902 | 0.233607 | 0.442623 | 0.442623 | 0.270492 | 0 | 0 | 0 | 0 | 0 | 0.200514 | 389 | 10 | 78 | 38.9 | 0.784566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
6b1aa66758eee33895b7187b2f898870a14de52b | 134 | py | Python | webtool/apps.py | wodo/WebTool3 | 1582a03d619434d8a6139f705a1b5860e9b5b8b8 | [
"BSD-2-Clause"
] | 13 | 2018-12-16T21:01:24.000Z | 2019-07-03T06:23:41.000Z | webtool/apps.py | dav-kempten/WebTool3 | 859f39df67cb0f853c7fe33cb5d08b999d8692fc | [
"BSD-2-Clause"
] | 26 | 2019-07-07T06:44:06.000Z | 2021-09-07T07:28:34.000Z | webtool/apps.py | dav-kempten/WebTool3 | 859f39df67cb0f853c7fe33cb5d08b999d8692fc | [
"BSD-2-Clause"
] | 3 | 2017-06-18T06:22:52.000Z | 2019-07-03T06:21:05.000Z | from django.contrib.admin.apps import AdminConfig
class WebtoolAdminConfig(AdminConfig):
default_site = 'admin.WebtoolAdminSite'
| 26.8 | 49 | 0.820896 | 14 | 134 | 7.785714 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.104478 | 134 | 4 | 50 | 33.5 | 0.908333 | 0 | 0 | 0 | 0 | 0 | 0.164179 | 0.164179 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
6b272c765fdce0fbc115f0d3cc23fca0281af30e | 57,908 | py | Python | nlplingo/oregon/nlplingo/tasks/sequence/ED_model_hf.py | BBN-E/nlplingo | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | [
"Apache-2.0"
] | 3 | 2020-10-22T13:28:00.000Z | 2022-03-24T19:57:22.000Z | nlplingo/oregon/nlplingo/tasks/sequence/ED_model_hf.py | BBN-E/nlplingo | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | [
"Apache-2.0"
] | null | null | null | nlplingo/oregon/nlplingo/tasks/sequence/ED_model_hf.py | BBN-E/nlplingo | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | [
"Apache-2.0"
] | 1 | 2020-10-22T13:29:51.000Z | 2020-10-22T13:29:51.000Z | # -*- coding: utf-8 -*-
# from python.clever.event_models.uoregon.models.pipeline._01.local_constants import *
#from fairseq.models.roberta import XLMRModel
from nlplingo.oregon.event_models.uoregon.tools.utils import *
from nlplingo.oregon.event_models.uoregon.layers import DynamicLSTM, GCN, SelfAttention
#from nlplingo.oregon.event_models.uoregon.models.pipeline._01.iterators import upos_map, ner_map
from nlplingo.oregon.nlplingo.tasks.sequence.generator import upos_map, ner_map
from transformers import AutoConfig, XLMRobertaModel, XLMRobertaForMaskedLM, XLMRobertaForTokenClassification
class EDModelHF(nn.Module):
def __init__(self, opt, label_map):
print('========== ED_model_hf.EDModel.__init__ START ============')
""" decode.bash
upos_dim= 30
self.rep_dim= 30
use_ner= 0
ner_dim= 30
self.xlmr_dim= 768
xlmr_model_dir= models/xlmr.base
dropout_xlmr= 0.1
num_last_layer_xlmr= 1
hidden_dim= 200
"""
super(EDModelHF, self).__init__()
self.opt = opt
self.label_map = label_map
print('upos_dim=', self.opt['upos_dim'])
self.upos_embedding = nn.Embedding(
num_embeddings=len(upos_map),
# TODO our upos_map in generator.py is the same len as theirs in iterators.py, so this is fine
embedding_dim=self.opt['upos_dim'],
padding_idx=0
)
self.rep_dim = self.opt['upos_dim'] # 30
print('self.rep_dim=', self.rep_dim)
print('use_ner=', self.opt['use_ner'])
print('ner_dim=', self.opt['ner_dim'])
if self.opt['use_ner']:
self.ner_embedding = nn.Embedding(
num_embeddings=len(ner_map),
embedding_dim=self.opt['ner_dim'],
padding_idx=0
)
self.rep_dim += self.opt['ner_dim']
# *********************************************
if 'base' in self.opt['xlmr_version']:
self.xlmr_dim = 768
elif 'large' in self.opt['xlmr_version']:
self.xlmr_dim = 1024
# self.xlmr_embedding = XLMRModel.from_pretrained(
# # os.path.join(WORKING_DIR, 'tools', 'xlmr_resources', self.opt['xlmr_version']), # <==
# self.opt['xlmr_model_dir'], # ==>
# checkpoint_file='model.pt')
self.config = AutoConfig.from_pretrained(
'xlm-roberta-base',
num_labels=len(self.label_map),
id2label = {str(v): k for k, v in self.label_map.items()},
label2id = {k: v for k, v in self.label_map.items()},
cache_dir=self.opt['cache_dir'],
output_hidden_states=True
)
#self.xlmr_embedding = XLMRobertaModel(self.config)
#self.xlmr_embedding = XLMRobertaForMaskedLM(self.config)
self.xlmr_embedding = XLMRobertaForTokenClassification(self.config)
print('self.xlmr_dim=', self.xlmr_dim)
print('xlmr_model_dir=', self.opt['xlmr_model_dir'])
print('dropout_xlmr=', self.opt['dropout_xlmr'])
self.dropout = nn.Dropout(self.opt['dropout_xlmr']) # 0.5
print('num_last_layer_xlmr=', self.opt['num_last_layer_xlmr'])
self.rep_dim += self.xlmr_dim * self.opt['num_last_layer_xlmr'] # 30 + 768 * 1
# ********************************************
self.self_att = SelfAttention(self.rep_dim, opt)
self.gcn_layer = GCN(
in_dim=self.rep_dim,
hidden_dim=self.rep_dim,
num_layers=2,
opt=opt
)
print('biw2v_size=', opt['biw2v_size'])
self.biw2v_embedding = nn.Embedding(
opt['biw2v_size'],
embedding_dim=300,
padding_idx=PAD_ID
)
self.load_pretrained_biw2v()
print('hidden_dim=', self.opt['hidden_dim'])
self.fc_ED = nn.Sequential(
nn.Linear(self.rep_dim * 2 + 300, self.opt['hidden_dim']),
nn.ReLU(),
# nn.Linear(self.opt['hidden_dim'], len(EVENT_MAP)) # <==
nn.Linear(self.opt['hidden_dim'], len(label_map)) # ==> TODO
)
print('========== ED_model.EDModel.__init__ END ============')
def load_pretrained_biw2v(self):
embed = self.biw2v_embedding
vecs = self.opt['biw2v_vecs']
pretrained = torch.from_numpy(vecs)
embed.weight.data.copy_(pretrained)
def get_xlmr_reps(self, inputs):
print('============ ED_model_hf.get_xlmr_reps START =============')
"""
xlmr_ids.shape= torch.Size([10, 53])
retrieve_ids.shape= torch.Size([10, 33])
type(all_hiddens)= <class 'list'>
len(all_hiddens= 13
all_hiddens[0].shape= torch.Size([10, 53, 768])
all_hiddens[1].shape= torch.Size([10, 53, 768])
all_hiddens[-1].shape= torch.Size([10, 53, 768])
batch_size= 10
len(all_hiddens)= 12
self.opt['num_last_layer_xlmr']= 1
used_layers= [11]
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
retrieve_reps.shape= torch.Size([33, 768])
token_reps.shape= torch.Size([10, 33, 768])
all_hiddens= [tensor([[[-1.5241e-01, 1.5346e-01, -1.4166e-01, ..., 5.2533e-02,
-1.6990e-01, 3.3114e-02],
[ 2.8519e-01, 2.1820e-01, 3.3214e-01, ..., 3.9062e-01,
1.3669e-01, 1.4192e-01],
[ 6.8526e-02, 1.5400e-01, 1.7242e-02, ..., -1.1426e-01,
-4.5462e-02, 5.1807e-02],
...,
[ 3.1810e-01, 4.0966e-02, 2.1512e-01, ..., 3.5518e-01,
2.6255e-01, 4.1006e-02],
[ 1.0284e-01, 5.7793e-02, 4.4513e-02, ..., -2.3617e-01,
2.5314e-02, 6.0451e-02],
[ 6.8176e-03, 1.2782e-01, 7.2239e-02, ..., -1.4924e-01,
-1.9298e-02, 1.6031e-01]],
[[-1.5241e-01, 1.5346e-01, -1.4166e-01, ..., 5.2533e-02,
-1.6990e-01, 3.3114e-02],
[ 2.8519e-01, 2.1820e-01, 3.3214e-01, ..., 3.9062e-01,
1.3669e-01, 1.4192e-01],
[ 6.8526e-02, 1.5400e-01, 1.7242e-02, ..., -1.1426e-01,
-4.5462e-02, 5.1807e-02],
...,
[ 1.3959e-01, 9.0699e-04, 2.0260e-01, ..., 2.0667e-02,
3.6359e-01, -1.2589e-01],
[ 1.3913e-01, 6.6280e-02, 2.8022e-01, ..., -2.7151e-02,
3.6584e-01, -6.2766e-02],
[ 1.2602e-01, 1.2431e-01, 2.7972e-01, ..., -4.9168e-02,
4.1285e-01, -2.7115e-04]],
[[-1.5241e-01, 1.5346e-01, -1.4166e-01, ..., 5.2533e-02,
-1.6990e-01, 3.3114e-02],
[-2.1162e-01, -1.2736e-02, -8.2769e-02, ..., 1.2881e-01,
1.2014e-01, 2.7267e-01],
[-4.0390e-01, -6.7837e-02, 1.2579e-03, ..., -6.0733e-03,
3.5541e-01, -1.9815e-01],
...,
[ 1.3959e-01, 9.0699e-04, 2.0260e-01, ..., 2.0667e-02,
3.6359e-01, -1.2589e-01],
[ 1.3913e-01, 6.6280e-02, 2.8022e-01, ..., -2.7151e-02,
3.6584e-01, -6.2766e-02],
[ 1.2602e-01, 1.2431e-01, 2.7972e-01, ..., -4.9168e-02,
4.1285e-01, -2.7115e-04]],
...,
[ 4.9059e-01, 3.9329e-01, -1.3623e-01, ..., -2.5431e-01,
1.1468e-01, 8.7181e-02],
[ 5.0399e-01, 3.8765e-01, -1.2510e-01, ..., -3.0067e-01,
1.0453e-01, 1.6625e-01],
[ 5.4651e-01, 4.0442e-01, -1.6091e-01, ..., -3.3413e-01,
5.9839e-02, 2.1487e-01]]], device='cuda:0')]
"""
xlmr_ids = inputs[0]
input_mask = inputs[1]
label_ids = inputs[2]
retrieve_ids = inputs[4]
print('xlmr_ids.shape=', xlmr_ids.shape)
print('input_mask.shape=', input_mask.shape)
print('label_ids.shape=', label_ids.shape)
print('retrieve_ids.shape=', retrieve_ids.shape)
print('xlmr_ids=', xlmr_ids)
#print('attention_mask=', attention_mask)
print('retrieve_ids=', retrieve_ids)
# all_layers = xlmr.extract_features(zh_tokens, return_all_hiddens=True)
#inputs = {"input_ids": xlmr_ids, "attention_mask": input_mask, "labels": label_ids}
#inputs = {"input_ids": xlmr_ids, "attention_mask": input_mask, "token_type_ids": (None)}
inputs = {"input_ids": xlmr_ids}
        #inputs["token_type_ids"] = (None) # XLM and RoBERTa don't use segment_ids
all_hiddens = self.xlmr_embedding(**inputs)
#all_hiddens = self.xlmr_embedding.extract_features(xlmr_ids, return_all_hiddens=True)
print('type(all_hiddens)=', type(all_hiddens))
print('len(all_hiddens)=', len(all_hiddens))
print('all_hiddens[0].shape=', all_hiddens[0].shape)
print('len(all_hiddens[1])=', len(all_hiddens[1]))
#print('all_hiddens[1].shape=', all_hiddens[1].shape)
#print('all_hiddens[-1].shape=', all_hiddens[-1].shape)
all_hiddens = all_hiddens[1]
print('== all_hiddens = all_hiddens[1] ==')
print('type(all_hiddens)=', type(all_hiddens))
print('len(all_hiddens)=', len(all_hiddens))
print('all_hiddens[0].shape=', all_hiddens[0].shape)
print('all_hiddens[1].shape=', all_hiddens[1].shape)
print('all_hiddens[-1].shape=', all_hiddens[-1].shape)
all_hiddens = list(all_hiddens[1:]) # remove embedding layer
token_reps = []
batch_size, _ = xlmr_ids.shape
print('batch_size=', batch_size)
used_layers = list(range(len(all_hiddens)))[-self.opt['num_last_layer_xlmr']:]
print('len(all_hiddens)=', len(all_hiddens))
print("self.opt['num_last_layer_xlmr']=", self.opt['num_last_layer_xlmr'])
print('used_layers=', used_layers)
for example_id in range(batch_size):
retrieved_reps = torch.cat([all_hiddens[layer_id][example_id][retrieve_ids[example_id]]
for layer_id in used_layers], dim=1) # [seq len, xlmr_dim x num last layers]
print('retrieved_reps=', retrieved_reps)
print('retrieve_reps.shape=', retrieved_reps.shape)
token_reps.append(retrieved_reps)
token_reps = torch.stack(token_reps, dim=0) # [batch size, original seq len, xlmr_dim x num_layers]
print('token_reps.shape=', token_reps.shape)
print('============ ED_model.get_xlmr_reps END =============')
return token_reps
# def get_xlmr_reps(self, inputs):
# print('============ ED_model.get_xlmr_reps START =============')
# """
# xlmr_ids.shape= torch.Size([10, 53])
# retrieve_ids.shape= torch.Size([10, 33])
# type(all_hiddens)= <class 'list'>
# len(all_hiddens= 13
# all_hiddens[0].shape= torch.Size([10, 53, 768])
# all_hiddens[1].shape= torch.Size([10, 53, 768])
# all_hiddens[-1].shape= torch.Size([10, 53, 768])
# batch_size= 10
# len(all_hiddens)= 12
# self.opt['num_last_layer_xlmr']= 1
# used_layers= [11]
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# retrieve_reps.shape= torch.Size([33, 768])
# token_reps.shape= torch.Size([10, 33, 768])
#
# all_hiddens= [tensor([[[-1.5241e-01, 1.5346e-01, -1.4166e-01, ..., 5.2533e-02,
# -1.6990e-01, 3.3114e-02],
# [ 2.8519e-01, 2.1820e-01, 3.3214e-01, ..., 3.9062e-01,
# 1.3669e-01, 1.4192e-01],
# [ 6.8526e-02, 1.5400e-01, 1.7242e-02, ..., -1.1426e-01,
# -4.5462e-02, 5.1807e-02],
# ...,
# [ 3.1810e-01, 4.0966e-02, 2.1512e-01, ..., 3.5518e-01,
# 2.6255e-01, 4.1006e-02],
# [ 1.0284e-01, 5.7793e-02, 4.4513e-02, ..., -2.3617e-01,
# 2.5314e-02, 6.0451e-02],
# [ 6.8176e-03, 1.2782e-01, 7.2239e-02, ..., -1.4924e-01,
# -1.9298e-02, 1.6031e-01]],
#
# [[-1.5241e-01, 1.5346e-01, -1.4166e-01, ..., 5.2533e-02,
# -1.6990e-01, 3.3114e-02],
# [ 2.8519e-01, 2.1820e-01, 3.3214e-01, ..., 3.9062e-01,
# 1.3669e-01, 1.4192e-01],
# [ 6.8526e-02, 1.5400e-01, 1.7242e-02, ..., -1.1426e-01,
# -4.5462e-02, 5.1807e-02],
# ...,
# [ 1.3959e-01, 9.0699e-04, 2.0260e-01, ..., 2.0667e-02,
# 3.6359e-01, -1.2589e-01],
# [ 1.3913e-01, 6.6280e-02, 2.8022e-01, ..., -2.7151e-02,
# 3.6584e-01, -6.2766e-02],
# [ 1.2602e-01, 1.2431e-01, 2.7972e-01, ..., -4.9168e-02,
# 4.1285e-01, -2.7115e-04]],
#
# [[-1.5241e-01, 1.5346e-01, -1.4166e-01, ..., 5.2533e-02,
# -1.6990e-01, 3.3114e-02],
# [-2.1162e-01, -1.2736e-02, -8.2769e-02, ..., 1.2881e-01,
# 1.2014e-01, 2.7267e-01],
# [-4.0390e-01, -6.7837e-02, 1.2579e-03, ..., -6.0733e-03,
# 3.5541e-01, -1.9815e-01],
# ...,
# [ 1.3959e-01, 9.0699e-04, 2.0260e-01, ..., 2.0667e-02,
# 3.6359e-01, -1.2589e-01],
# [ 1.3913e-01, 6.6280e-02, 2.8022e-01, ..., -2.7151e-02,
# 3.6584e-01, -6.2766e-02],
# [ 1.2602e-01, 1.2431e-01, 2.7972e-01, ..., -4.9168e-02,
# 4.1285e-01, -2.7115e-04]],
#
# ...,
# [ 4.9059e-01, 3.9329e-01, -1.3623e-01, ..., -2.5431e-01,
# 1.1468e-01, 8.7181e-02],
# [ 5.0399e-01, 3.8765e-01, -1.2510e-01, ..., -3.0067e-01,
# 1.0453e-01, 1.6625e-01],
# [ 5.4651e-01, 4.0442e-01, -1.6091e-01, ..., -3.3413e-01,
# 5.9839e-02, 2.1487e-01]]], device='cuda:0')]
# """
# xlmr_ids = inputs[0]
# retrieve_ids = inputs[2]
# print('xlmr_ids.shape=', xlmr_ids.shape)
# print('retrieve_ids.shape=', retrieve_ids.shape)
#
# # all_layers = xlmr.extract_features(zh_tokens, return_all_hiddens=True)
# all_hiddens = self.xlmr_embedding.extract_features(xlmr_ids, return_all_hiddens=True)
# print('type(all_hiddens)=', type(all_hiddens))
# print('len(all_hiddens=', len(all_hiddens))
# print('all_hiddens[0].shape=', all_hiddens[0].shape)
# print('all_hiddens[1].shape=', all_hiddens[1].shape)
# print('all_hiddens[-1].shape=', all_hiddens[-1].shape)
#
# all_hiddens = list(all_hiddens[1:]) # remove embedding layer
#
# token_reps = []
#
# batch_size, _ = xlmr_ids.shape
# print('batch_size=', batch_size)
# used_layers = list(range(len(all_hiddens)))[-self.opt['num_last_layer_xlmr']:]
# print('len(all_hiddens)=', len(all_hiddens))
# print("self.opt['num_last_layer_xlmr']=", self.opt['num_last_layer_xlmr'])
# print('used_layers=', used_layers)
# for example_id in range(batch_size):
# retrieved_reps = torch.cat([all_hiddens[layer_id][example_id][retrieve_ids[example_id]]
# for layer_id in used_layers], dim=1) # [seq len, xlmr_dim x num last layers]
# print('retrieve_reps.shape=', retrieved_reps.shape)
# token_reps.append(retrieved_reps)
#
# token_reps = torch.stack(token_reps, dim=0) # [batch size, original seq len, xlmr_dim x num_layers]
# print('token_reps.shape=', token_reps.shape)
# print('============ ED_model.get_xlmr_reps END =============')
# return token_reps
def forward(self, inputs):
print('=============== ED_model_hf.forward START ============')
xlmr_ids, input_mask, label_ids, biw2v_ids, retrieve_ids, upos_ids, xpos_ids, head_ids, deprel_ids, ner_ids, lang_weights, ED_labels, pad_masks = inputs
print('xlmr_ids.shape=', xlmr_ids.shape)
print('input_mask.shape=', input_mask.shape)
print('label_ids.shape=', label_ids.shape)
print('biw2v_ids.shape=', biw2v_ids.shape)
print('retrieve_ids.shape=', retrieve_ids.shape)
print('upos_ids.shape=', upos_ids.shape)
print('xpos_ids.shape=', xpos_ids.shape)
print('head_ids.shape=', head_ids.shape)
print('deprel_ids.shape=', deprel_ids.shape)
print('ner_ids.shape=', ner_ids.shape)
print('lang_weights.shape=', lang_weights.shape)
print('ED_labels.shape=', ED_labels.shape)
print('pad_masks.shape=', pad_masks.shape)
"""
xlmr_ids.shape= torch.Size([16, 63])
biw2v_ids.shape= torch.Size([16, 51])
retrieve_ids.shape= torch.Size([16, 51])
upos_ids.shape= torch.Size([16, 51])
xpos_ids.shape= torch.Size([16, 51])
head_ids.shape= torch.Size([16, 51])
deprel_ids.shape= torch.Size([16, 51])
ner_ids.shape= torch.Size([16, 51])
lang_weights.shape= torch.Size([16])
ED_labels.shape= torch.Size([16, 51])
pad_masks.shape= torch.Size([16, 51])
token_masks.shape= torch.Size([16, 51])
upos_reps.shape= torch.Size([16, 51, 30])
"""
token_masks = pad_masks.eq(0).float()
print('token_masks.shape=', token_masks.shape)
# ****** word embeddings ********
upos_reps = self.upos_embedding(upos_ids) # [batch size, seq len, upos dim]
print('upos_reps.shape=', upos_reps.shape)
word_feats = []
word_feats.append(upos_reps)
if self.opt['use_ner']:
ner_reps = self.ner_embedding(ner_ids)
word_feats.append(ner_reps)
word_embeds = self.get_xlmr_reps(inputs) # [batch size, seq len, xlmr dim]
""" from above self.get_xlmr_reps()
xlmr_ids.shape= torch.Size([16, 63])
retrieve_ids.shape= torch.Size([16, 51])
type(all_hiddens)= <class 'list'>
len(all_hiddens= 13
all_hiddens[0].shape= torch.Size([16, 63, 768])
all_hiddens[1].shape= torch.Size([16, 63, 768])
all_hiddens[-1].shape= torch.Size([16, 63, 768])
batch_size= 16
len(all_hiddens)= 12
self.opt['num_last_layer_xlmr']= 1
used_layers= [11]
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
retrieve_reps.shape= torch.Size([51, 768])
token_reps.shape= torch.Size([16, 51, 768])
"""
"""
word_embeds.shape= torch.Size([16, 51, 768])
word_embeds.shape= torch.Size([16, 51, 768])
word_reps.shape= torch.Size([16, 51, 798])
"""
print('word_embeds.shape=', word_embeds.shape)
word_embeds = self.dropout(word_embeds)
print('word_embeds.shape=', word_embeds.shape)
word_feats.append(word_embeds)
word_reps = torch.cat(word_feats, dim=2)
print('word_reps.shape=', word_reps.shape)
# *******************************
"""
In below self.self_att()
input_masks.shape= torch.Size([16, 51])
slf_attn_mask.shape= torch.Size([16, 51, 51])
non_pad_mask.shape= torch.Size([16, 51, 1])
enc_output.shape= torch.Size([16, 51, 798])
position_embed_for_satt= 1
position_ids.shape= torch.Size([16, 51])
enc_output.shape= torch.Size([16, 51, 798])
"""
satt_reps, att_weights = self.self_att(word_reps, pad_masks)
"""
satt_reps.shape= torch.Size([16, 51, 798])
att_weights.shape= torch.Size([16, 51, 51])
adj.shape= torch.Size([16, 51, 51])
gcn_reps.shape= torch.Size([16, 51, 798])
muse_reps.shape= torch.Size([16, 51, 300])
final_reps.shape= torch.Size([16, 51, 1896])
logits.shape= torch.Size([16, 51, 16])
loss= tensor(2.8248, device='cuda:0', grad_fn=<DivBackward0>)
probs.shape= torch.Size([16, 51, 16])
preds.shape= torch.Size([16, 51])
"""
print('satt_reps.shape=', satt_reps.shape)
print('att_weights.shape=', att_weights.shape)
adj = get_full_adj(head_ids, pad_masks, self.opt['device'])
print('adj.shape=', adj.shape)
gcn_reps, _ = self.gcn_layer(word_reps, adj)
print('gcn_reps.shape=', gcn_reps.shape)
muse_reps = self.biw2v_embedding(biw2v_ids)
print('muse_reps.shape=', muse_reps.shape)
final_reps = torch.cat(
[satt_reps, gcn_reps, muse_reps],
dim=2
)
print('final_reps.shape=', final_reps.shape)
logits = self.fc_ED(final_reps) # [batch size, seq len, 16]
print('logits.shape=', logits.shape)
loss, probs, preds = compute_batch_loss(logits, ED_labels, token_masks, instance_weights=lang_weights)
print('loss=', loss)
print('probs.shape=', probs.shape)
print('preds.shape=', preds.shape)
print('=============== ED_model_hf.forward END ============')
return loss, probs, preds
def predict(self, combined_task_inputs):
xlmr_ids, input_mask, label_ids, biw2v_ids, retrieve_ids, upos_ids, xpos_ids, head_ids, deprel_ids, ner_ids, eid, pad_masks = combined_task_inputs
token_masks = pad_masks.eq(0).float() # 1.0 if true token, else 0
print('========== ED_model.predict START ===============')
"""
token_masks.shape= torch.Size([10, 33])
upos_reps.shape= torch.Size([10, 33, 30])
"""
print('token_masks.shape=', token_masks.shape)
"""
xlmr_ids.shape= torch.Size([10, 53])
biw2v_ids.shape= torch.Size([10, 33])
retrieve_ids.shape= torch.Size([10, 33])
upos_ids.shape= torch.Size([10, 33])
xpos_ids.shape= torch.Size([10, 33])
head_ids.shape= torch.Size([10, 33])
deprel_ids.shape= torch.Size([10, 33])
ner_ids.shape= torch.Size([10, 33])
eid.shape= torch.Size([10])
pad_masks.shape= torch.Size([10, 33])
xlmr_ids= tensor([[ 0, 6, 5, 90621, 47229, 250, 181, 5273, 10408,
6267, 4039, 31245, 71633, 2620, 18684, 6466, 7233, 250,
240, 102468, 368, 6, 185701, 35618, 18004, 159565, 97288,
41468, 152, 94, 13231, 3108, 746, 14272, 3070, 102935,
2103, 153872, 767, 186386, 12581, 30039, 230, 59721, 148726,
755, 230, 6816, 1692, 340, 6, 5, 2],
[ 0, 6, 5, 45869, 53929, 10286, 112847, 593, 50221,
139152, 46416, 179, 83001, 95451, 104042, 240, 13875, 13874,
18004, 39865, 3363, 93319, 136295, 109177, 240, 81881, 189757,
81972, 43060, 230, 11115, 33018, 702, 48102, 46408, 73279,
94, 9580, 199317, 73942, 160700, 35508, 340, 6, 5,
2, 0, 0, 0, 0, 0, 0, 0],
[ 0, 4003, 20621, 862, 18173, 30099, 7624, 906, 141538,
755, 556, 48964, 61501, 65, 123290, 164456, 230, 4569,
74602, 240, 169348, 47769, 48387, 47769, 16994, 396, 113409,
216336, 755, 6, 92127, 36435, 52316, 23628, 65, 32634,
1195, 110813, 240, 34708, 201174, 6, 5, 2, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 6625, 665, 87151, 73397, 906, 129382, 10731, 87509,
6, 114378, 13620, 3015, 96629, 92564, 5202, 3015, 96629,
92564, 3108, 59545, 665, 101375, 258, 25198, 13231, 4003,
3518, 123506, 906, 24832, 755, 194558, 250, 19636, 3518,
98058, 3202, 1692, 6, 5, 2, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 6, 5, 37705, 112600, 376, 40743, 5202, 43228,
12323, 48483, 9787, 1325, 1855, 5081, 2044, 826, 6,
110351, 176, 230, 6, 163970, 19089, 47600, 96517, 16452,
412, 6963, 1533, 862, 18740, 13029, 66087, 1365, 6,
116337, 1692, 6, 5, 2, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 6, 104815, 53411, 6, 130825, 6, 22650, 54563,
240, 6, 97927, 10691, 240, 65525, 6, 224157, 665,
77358, 250, 27952, 35180, 160769, 22366, 19931, 101632, 648,
15776, 179, 26430, 70153, 12337, 2977, 240, 103919, 6,
5, 2, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 1333, 214363, 96517, 22327, 8039, 3088, 1335, 51218,
902, 177421, 154597, 1533, 146142, 755, 230, 206210, 15330,
69294, 240, 359, 169368, 4040, 14924, 8428, 35862, 10691,
15493, 72317, 179, 12888, 6, 5, 2, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 6625, 106969, 24094, 917, 10913, 10937, 6, 83188,
13759, 240, 93584, 1335, 86401, 24537, 5706, 5202, 24094,
208045, 862, 155500, 48707, 8665, 45089, 121818, 84341, 412,
220818, 6, 5, 2, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 105285, 368, 35000, 230, 12584, 230, 4382, 29928,
240, 141677, 250, 18740, 54610, 60930, 240, 30506, 6,
48699, 140252, 258, 556, 6, 164072, 12589, 96517, 6,
5, 2, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 6625, 55468, 900, 1705, 124630, 151721, 5202, 234180,
3518, 48633, 94, 73441, 23579, 376, 18486, 122608, 340,
240, 37160, 11945, 240, 72647, 120465, 5784, 133131, 6,
5, 2, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
biw2v_ids= tensor([[ 6, 114225, 113937, 128409, 1, 1443, 113675, 113666, 163097,
117713, 1, 126519, 1, 113266, 114068, 1, 50, 176957,
1, 113209, 127252, 113173, 113584, 120372, 126250, 113253, 113470,
113165, 117399, 113165, 119105, 177487, 6],
[ 6, 113782, 1, 123638, 1, 1, 131450, 113546, 1,
116631, 113266, 114666, 1, 125284, 1, 115773, 117903, 124178,
113165, 113254, 1, 395, 113309, 1, 176957, 113203, 1,
1, 1, 177487, 6, 0, 0],
[113216, 113383, 1, 113448, 119129, 113264, 120182, 113167, 242005,
137590, 1, 113165, 137330, 1, 1, 234085, 179317, 115962,
162598, 114539, 114727, 114453, 1, 1, 114368, 114375, 6,
0, 0, 0, 0, 0, 0],
[113306, 115538, 113453, 1, 113183, 1, 120945, 1, 1,
1, 113209, 114243, 1, 113395, 1, 113216, 169807, 117207,
116183, 1, 192377, 121340, 6, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
[ 6, 114821, 163342, 1, 130192, 1, 150092, 1, 191632,
113165, 1, 123445, 113381, 114054, 193520, 113200, 1, 113604,
134623, 113170, 122650, 6, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
[ 1, 113621, 1, 123566, 1, 114004, 1, 1, 117675,
122110, 113447, 113985, 118076, 190493, 171192, 1, 113939, 1,
118621, 6, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
[ 1, 113381, 121106, 1, 118400, 113950, 113725, 113200, 1,
113165, 115868, 168565, 1, 117866, 113179, 1, 151891, 6,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
[113306, 116827, 113219, 154925, 148370, 1, 113596, 1, 1,
1, 113219, 115695, 1, 114991, 113407, 156650, 115304, 113180,
1, 6, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
[125488, 113398, 113165, 1, 113165, 9086, 116472, 1, 119791,
113604, 114109, 115046, 1, 123924, 117632, 119425, 113167, 120572,
113381, 6, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0],
[113306, 116408, 135535, 1, 1, 1, 137810, 176957, 129403,
123127, 177487, 1, 113249, 113236, 1, 113558, 1, 113199,
113472, 6, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0]], device='cuda:0')
retrieve_ids= tensor([[ 2, 3, 4, 6, 8, 10, 11, 12, 13, 16, 18, 19, 22, 24, 25, 26, 28, 29,
30, 31, 32, 34, 35, 36, 38, 40, 41, 42, 43, 46, 47, 49, 51],
[ 2, 3, 4, 6, 8, 10, 12, 14, 15, 16, 18, 19, 20, 22, 24, 25, 26, 27,
29, 30, 31, 32, 33, 34, 36, 37, 38, 39, 41, 42, 44, 0, 0],
[ 1, 2, 3, 4, 5, 6, 7, 10, 11, 13, 15, 16, 17, 19, 20, 21, 23, 26,
27, 30, 32, 33, 34, 38, 39, 40, 42, 0, 0, 0, 0, 0, 0],
[ 1, 2, 4, 5, 7, 8, 10, 12, 15, 16, 19, 20, 21, 24, 25, 26, 27, 29,
32, 34, 35, 37, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 2, 3, 4, 7, 8, 10, 11, 14, 18, 20, 22, 23, 25, 26, 27, 29, 30, 31,
32, 34, 36, 39, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 2, 3, 5, 7, 9, 11, 13, 14, 16, 17, 20, 21, 22, 23, 27, 30, 32, 33,
34, 36, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 1, 3, 4, 7, 8, 10, 11, 12, 13, 15, 16, 17, 19, 20, 22, 23, 28, 32,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 1, 2, 3, 4, 8, 10, 11, 12, 13, 16, 17, 18, 19, 20, 21, 22, 24, 25,
26, 29, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 1, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 18, 19, 21, 23,
25, 27, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 1, 2, 4, 6, 7, 8, 9, 11, 12, 15, 17, 18, 19, 20, 21, 22, 23, 24,
25, 27, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
upos_ids= tensor([[ 7, 8, 4, 18, 18, 6, 4, 18, 18, 9, 2, 18, 18, 2, 6, 4, 7, 7,
14, 13, 8, 3, 4, 18, 8, 4, 9, 2, 4, 2, 18, 7, 7],
[ 7, 18, 8, 4, 18, 18, 18, 9, 2, 18, 2, 4, 4, 9, 2, 4, 4, 9,
2, 4, 4, 6, 4, 18, 7, 4, 9, 18, 9, 7, 7, 0, 0],
[ 8, 8, 14, 8, 4, 4, 4, 2, 18, 18, 9, 2, 4, 2, 4, 18, 18, 18,
4, 18, 18, 18, 18, 2, 4, 4, 7, 0, 0, 0, 0, 0, 0],
[ 8, 4, 2, 4, 9, 9, 18, 18, 14, 18, 13, 12, 9, 14, 14, 8, 4, 9,
6, 14, 18, 4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 7, 8, 18, 14, 4, 4, 18, 18, 18, 2, 4, 4, 4, 2, 18, 2, 14, 2,
4, 2, 18, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 8, 4, 4, 18, 2, 4, 2, 4, 4, 9, 18, 4, 4, 18, 9, 18, 18, 2,
4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 8, 4, 4, 2, 4, 9, 9, 2, 4, 2, 4, 9, 2, 4, 2, 9, 18, 7,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 8, 4, 4, 18, 18, 2, 4, 2, 18, 14, 4, 8, 14, 4, 9, 18, 4, 4,
18, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 8, 4, 2, 9, 2, 6, 4, 2, 4, 2, 4, 18, 2, 4, 8, 4, 2, 4,
4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 8, 4, 18, 9, 14, 18, 18, 7, 4, 4, 7, 2, 4, 4, 2, 4, 9, 2,
4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
xpos_ids= tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0]], device='cuda:0')
head_ids= tensor([[ 2, 0, 2, 3, 4, 2, 6, 9, 6, 2, 12, 10, 12, 15, 10, 15, 2, 21,
21, 21, 2, 23, 21, 25, 21, 25, 26, 29, 26, 31, 25, 25, 2],
[ 3, 3, 0, 3, 6, 3, 6, 7, 10, 6, 12, 3, 12, 13, 16, 12, 16, 17,
20, 18, 20, 20, 22, 23, 26, 20, 26, 26, 28, 3, 3, 0, 0],
[ 0, 1, 4, 2, 4, 5, 6, 9, 5, 9, 10, 13, 10, 15, 4, 17, 9, 17,
18, 21, 19, 21, 19, 25, 23, 25, 1, 0, 0, 0, 0, 0, 0],
[ 0, 1, 4, 2, 4, 4, 2, 7, 13, 13, 13, 13, 1, 16, 14, 13, 16, 17,
18, 21, 16, 21, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 2, 0, 2, 8, 8, 5, 5, 2, 8, 11, 8, 11, 12, 15, 11, 19, 19, 19,
11, 21, 19, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 1, 2, 3, 6, 3, 8, 1, 8, 8, 12, 1, 12, 13, 13, 17, 15, 19,
12, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 1, 1, 5, 1, 5, 5, 9, 1, 11, 9, 11, 14, 11, 16, 14, 16, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 1, 2, 5, 2, 7, 1, 9, 7, 12, 12, 1, 16, 16, 14, 12, 16, 17,
18, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 1, 4, 2, 6, 4, 6, 9, 1, 11, 9, 11, 14, 1, 14, 15, 18, 16,
18, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 1, 2, 2, 7, 7, 1, 7, 1, 9, 13, 13, 9, 13, 16, 13, 16, 19,
9, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
deprel_ids= tensor([[ 7, 9, 8, 5, 24, 4, 5, 20, 5, 18, 2, 13, 24, 2, 4, 5, 7, 7,
17, 13, 26, 3, 8, 20, 12, 8, 15, 2, 5, 2, 4, 7, 7],
[ 7, 16, 9, 8, 2, 4, 5, 15, 2, 5, 2, 4, 5, 15, 2, 4, 5, 15,
2, 4, 5, 6, 5, 20, 7, 5, 15, 5, 15, 7, 7, 0, 0],
[ 9, 18, 17, 27, 8, 5, 5, 2, 5, 5, 15, 2, 5, 2, 4, 20, 12, 5,
5, 2, 5, 24, 20, 2, 4, 5, 7, 0, 0, 0, 0, 0, 0],
[ 9, 8, 2, 5, 15, 15, 5, 24, 17, 8, 13, 25, 22, 17, 28, 26, 8, 15,
6, 20, 12, 10, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 7, 9, 8, 17, 8, 5, 5, 10, 10, 2, 4, 5, 5, 2, 13, 2, 17, 2,
5, 2, 4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 9, 8, 5, 5, 2, 5, 2, 4, 5, 15, 2, 4, 5, 5, 15, 2, 5, 2,
4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 9, 8, 10, 2, 4, 15, 15, 2, 4, 2, 5, 15, 2, 5, 2, 15, 20, 7,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 9, 8, 5, 5, 5, 2, 4, 2, 5, 17, 8, 22, 17, 8, 15, 10, 10, 5,
5, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 9, 8, 2, 15, 2, 6, 5, 2, 4, 2, 5, 5, 2, 4, 14, 10, 2, 5,
5, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 9, 8, 5, 15, 17, 5, 10, 7, 10, 5, 7, 2, 5, 5, 2, 5, 15, 2,
4, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
ner_ids= tensor([[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0]], device='cuda:0')
eid= tensor([1., 3., 6., 5., 2., 9., 7., 8., 4., 0.], device='cuda:0')
pad_masks= tensor([[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, True, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
True, True, True, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
True, True, True, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
True, True, True, True, True, True, True, True, True, True,
True, True, True],
[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
True, True, True, True, True, True, True, True, True, True,
True, True, True]], device='cuda:0')
token_masks= tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
"""
# ****** word embeddings ********
upos_reps = self.upos_embedding(upos_ids) # [batch size, seq len, upos dim]
print('upos_reps.shape=', upos_reps.shape)
word_feats = []
word_feats.append(upos_reps)
if self.opt['use_ner']:
ner_reps = self.ner_embedding(ner_ids)
word_feats.append(ner_reps)
word_embeds = self.get_xlmr_reps(combined_task_inputs) # [batch size, seq len, xlmr dim]
"""
word_embeds.shape= torch.Size([10, 33, 768])
word_embeds.shape= torch.Size([10, 33, 768])
word_reps.shape= torch.Size([10, 33, 798])
"""
print('word_embeds.shape=', word_embeds.shape)
word_embeds = self.dropout(word_embeds)
print('word_embeds.shape=', word_embeds.shape)
word_feats.append(word_embeds)
word_reps = torch.cat(word_feats, dim=2) # [batch size, seq len, upos_dim (+ ner_dim) + xlmr_dim]
print('word_reps.shape=', word_reps.shape)
# *******************************
""" When I call self.self_att() below
input_masks.shape= torch.Size([10, 33])
slf_attn_mask.shape= torch.Size([10, 33, 33])
non_pad_mask.shape= torch.Size([10, 33, 1])
enc_output.shape= torch.Size([10, 33, 798])
position_embed_for_satt= 1
position_ids.shape= torch.Size([10, 33])
enc_output.shape= torch.Size([10, 33, 798])
"""
satt_reps, att_weights = self.self_att(word_reps, pad_masks)
"""
satt_reps.shape= torch.Size([10, 33, 798]) att_weights.shape= torch.Size([10, 33, 33])
adj.shape= torch.Size([10, 33, 33])
gcn_reps.shape= torch.Size([10, 33, 798])
muse_reps.shape= torch.Size([10, 33, 300])
final_reps.shape= torch.Size([10, 33, 1896])
logits.shape= torch.Size([10, 33, 16])
preds.shape= torch.Size([10, 33])
probs.shape= torch.Size([10, 33, 16])
"""
print('satt_reps.shape=', satt_reps.shape, 'att_weights.shape=', att_weights.shape)
adj = get_full_adj(head_ids, pad_masks, self.opt['device'])
print('adj.shape=', adj.shape)
gcn_reps, _ = self.gcn_layer(word_reps, adj)
print('gcn_reps.shape=', gcn_reps.shape)
muse_reps = self.biw2v_embedding(biw2v_ids)
print('muse_reps.shape=', muse_reps.shape)
final_reps = torch.cat(
[satt_reps, gcn_reps, muse_reps],
dim=2
)
print('final_reps.shape=', final_reps.shape)
logits = self.fc_ED(final_reps) # [batch size, seq len, 16]
print('logits.shape=', logits.shape)
preds = torch.argmax(logits, dim=2).long() * token_masks.long()
print('preds.shape=', preds.shape)
probs = torch.softmax(logits, dim=2) # [batch size, seq len, num classes]
print('probs.shape=', probs.shape)
"""
preds.shape= torch.Size([10, 33])
probs.shpae= torch.Size([10, 33, 16])
token_masks.shape= torch.Size([10, 33])
preds= tensor([[ 0, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 15, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 10, 14, 0, 0, 0, 0, 0, 0, 0, 15, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 13, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0],
[14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 14, 0, 0, 0, 0, 0, 0, 14, 0, 5, 0, 0, 0, 0, 0, 0, 14,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 5, 0, 0, 0, 5, 5, 0, 0, 14, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 13, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[14, 0, 0, 0, 0, 0, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[13, 0, 0, 0, 0, 0, 0, 0, 13, 0, 0, 0, 0, 7, 0, 5, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
device='cuda:0')
probs= tensor([[[9.9974e-01, 1.8049e-07, 1.8267e-08, ..., 2.0810e-05,
5.8121e-05, 1.7711e-05],
[4.0915e-03, 8.9600e-07, 2.5587e-07, ..., 3.6581e-04,
9.9335e-01, 2.5578e-04],
[9.9981e-01, 6.6693e-08, 7.0893e-09, ..., 2.9757e-05,
3.0001e-05, 1.2025e-05],
...,
[9.9945e-01, 2.5450e-07, 3.3773e-08, ..., 6.2535e-05,
3.2478e-05, 1.0961e-04],
[9.9969e-01, 1.9118e-07, 1.9771e-08, ..., 1.9469e-05,
4.6727e-05, 3.4929e-05],
[9.9975e-01, 1.6357e-07, 1.6790e-08, ..., 1.7160e-05,
4.9481e-05, 2.1778e-05]],
[[9.9969e-01, 2.4833e-07, 2.8136e-08, ..., 3.9068e-05,
6.1829e-05, 1.7187e-05],
[9.9979e-01, 1.0620e-07, 1.4160e-08, ..., 3.8181e-05,
3.3937e-05, 8.8798e-06],
[1.9942e-01, 1.2716e-05, 7.3964e-06, ..., 3.7772e-02,
2.7622e-02, 1.6138e-03],
...,
[9.9968e-01, 2.4205e-07, 2.7755e-08, ..., 3.4218e-05,
4.3995e-05, 2.6076e-05],
[1.1816e-01, 3.0641e-02, 2.3439e-02, ..., 7.2712e-02,
8.3229e-02, 7.6786e-02],
[1.1816e-01, 3.0641e-02, 2.3439e-02, ..., 7.2712e-02,
8.3229e-02, 7.6786e-02]],
[[9.9974e-01, 4.8116e-08, 7.5416e-09, ..., 5.9740e-05,
7.3432e-05, 7.0663e-06],
[9.9976e-01, 5.1815e-08, 8.4192e-09, ..., 4.0064e-05,
4.6581e-05, 6.7058e-06],
[9.9986e-01, 3.8352e-08, 5.4001e-09, ..., 2.8621e-05,
2.2455e-05, 4.1249e-06],
...,
[1.1728e-01, 3.1099e-02, 2.3771e-02, ..., 7.2479e-02,
8.2832e-02, 7.6790e-02],
[1.1728e-01, 3.1099e-02, 2.3771e-02, ..., 7.2479e-02,
8.2832e-02, 7.6790e-02],
[1.1728e-01, 3.1099e-02, 2.3771e-02, ..., 7.2479e-02,
8.2832e-02, 7.6790e-02]],
...,
[[3.7560e-03, 1.7332e-06, 7.1111e-07, ..., 6.5631e-04,
9.9067e-01, 4.6437e-04],
[9.9888e-01, 3.3755e-07, 6.8032e-08, ..., 1.0844e-04,
2.3969e-04, 1.1803e-04],
[9.9945e-01, 1.6661e-07, 2.9271e-08, ..., 5.2645e-05,
8.6269e-05, 7.7344e-05],
...,
[1.2599e-01, 2.6760e-02, 2.0231e-02, ..., 7.5109e-02,
8.6246e-02, 7.6045e-02],
[1.2599e-01, 2.6760e-02, 2.0231e-02, ..., 7.5109e-02,
8.6246e-02, 7.6045e-02],
[1.2599e-01, 2.6760e-02, 2.0231e-02, ..., 7.5109e-02,
8.6246e-02, 7.6045e-02]],
[[1.2495e-01, 3.4042e-06, 1.7341e-06, ..., 8.2814e-01,
2.5807e-02, 7.6088e-03],
[9.9954e-01, 8.0825e-08, 1.2043e-08, ..., 1.4239e-04,
5.0908e-05, 9.5018e-06],
[9.9973e-01, 3.7680e-08, 4.6598e-09, ..., 5.9885e-05,
3.4976e-05, 6.5925e-06],
...,
[1.2006e-01, 2.9710e-02, 2.2615e-02, ..., 7.3734e-02,
8.3705e-02, 7.6766e-02],
[1.2006e-01, 2.9710e-02, 2.2615e-02, ..., 7.3734e-02,
8.3705e-02, 7.6766e-02],
[1.2006e-01, 2.9710e-02, 2.2615e-02, ..., 7.3734e-02,
8.3705e-02, 7.6766e-02]],
[[2.9218e-03, 1.2115e-06, 4.4520e-07, ..., 7.4744e-04,
9.9328e-01, 2.7579e-04],
[9.9936e-01, 2.0276e-07, 2.9777e-08, ..., 7.2475e-05,
2.7122e-04, 3.7922e-05],
[9.9962e-01, 1.7503e-07, 2.2917e-08, ..., 4.1906e-05,
6.1219e-05, 2.5442e-05],
...,
[1.1812e-01, 3.0612e-02, 2.3442e-02, ..., 7.2634e-02,
8.3626e-02, 7.6709e-02],
[1.1812e-01, 3.0612e-02, 2.3442e-02, ..., 7.2634e-02,
8.3626e-02, 7.6709e-02],
[1.1812e-01, 3.0612e-02, 2.3442e-02, ..., 7.2634e-02,
8.3626e-02, 7.6709e-02]]], device='cuda:0')
token_masks= tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]],
device='cuda:0')
"""
print('========== ED_model.predict END ===============')
return preds, probs, token_masks
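get_full_adj is likewise imported from elsewhere. A minimal sketch of one plausible behavior, building a symmetric dependency adjacency per sentence from head ids while keeping padded positions all-zero (the 1-indexed CoNLL-style heads, root skipping, and symmetry are assumptions, not confirmed by this file):

```python
def get_full_adj_sketch(head_ids, pad_masks):
    """Hypothetical sketch of get_full_adj(): one [seq, seq] adjacency per
    sentence linking each token to its dependency head and back (so a GCN
    can pass messages in both directions); padded positions stay zero."""
    batch = []
    for heads, pads in zip(head_ids, pad_masks):
        n = len(heads)
        adj = [[0.0] * n for _ in range(n)]
        for i, (h, is_pad) in enumerate(zip(heads, pads)):
            if is_pad or h == 0:      # skip padding and the root token
                continue
            adj[i][h - 1] = 1.0       # heads assumed 1-indexed, CoNLL style
            adj[h - 1][i] = 1.0
        batch.append(adj)
    return batch
```

With heads `[2, 0, 2]` this links tokens 1 and 3 to token 2, which matches the [batch, seq, seq] `adj.shape` logged above.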
# File: easyneuron/__init__.py (repo: TrendingTechnology/easyneuron, license: Apache-2.0)
"""easyNeuron is the simplest way to design, build and test machine learning models.
Submodules
----------
easyneuron.math - The math tools needed for the module
easyneuron.neighbours - KNearest and other neighbourb based ML models
easyneuron.types - The custom types for the module
""" | 32.222222 | 83 | 0.758621 | 40 | 290 | 5.5 | 0.675 | 0.054545 | 0.109091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158621 | 290 | 9 | 84 | 32.222222 | 0.901639 | 1.07931 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
# File: sphinx/search/it.py (repo: zhsj/sphinx, license: BSD-2-Clause)
# -*- coding: utf-8 -*-
"""
sphinx.search.it
~~~~~~~~~~~~~~~~
Italian search language: includes the JS Italian stemmer.
:copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from sphinx.search import SearchLanguage, parse_stop_word
import snowballstemmer
if False:
# For type annotation
from typing import Any # NOQA
italian_stopwords = parse_stop_word(u'''
| source: http://snowball.tartarus.org/algorithms/italian/stop.txt
ad | a (to) before vowel
al | a + il
allo | a + lo
ai | a + i
agli | a + gli
all | a + l'
agl | a + gl'
alla | a + la
alle | a + le
con | with
col | con + il
coi | con + i (forms collo, cogli etc are now very rare)
da | from
dal | da + il
dallo | da + lo
dai | da + i
dagli | da + gli
dall | da + l'
dagl | da + gll'
dalla | da + la
dalle | da + le
di | of
del | di + il
dello | di + lo
dei | di + i
degli | di + gli
dell | di + l'
degl | di + gl'
della | di + la
delle | di + le
in | in
nel | in + el
nello | in + lo
nei | in + i
negli | in + gli
nell | in + l'
negl | in + gl'
nella | in + la
nelle | in + le
su | on
sul | su + il
sullo | su + lo
sui | su + i
sugli | su + gli
sull | su + l'
sugl | su + gl'
sulla | su + la
sulle | su + le
per | through, by
tra | among
contro | against
io | I
tu | thou
lui | he
lei | she
noi | we
voi | you
loro | they
mio | my
mia |
miei |
mie |
tuo |
tua |
tuoi | thy
tue |
suo |
sua |
suoi | his, her
sue |
nostro | our
nostra |
nostri |
nostre |
vostro | your
vostra |
vostri |
vostre |
mi | me
ti | thee
ci | us, there
vi | you, there
lo | him, the
la | her, the
li | them
le | them, the
gli | to him, the
ne | from there etc
il | the
un | a
uno | a
una | a
ma | but
ed | and
se | if
perché | why, because
anche | also
come | how
dov | where (as dov')
dove | where
che | who, that
chi | who
cui | whom
non | not
più | more
quale | who, that
quanto | how much
quanti |
quanta |
quante |
quello | that
quelli |
quella |
quelle |
questo | this
questi |
questa |
queste |
si | yes
tutto | all
tutti | all
| single letter forms:
a | at
c | as c' for ce or ci
e | and
i | the
l | as l'
o | or
| forms of avere, to have (not including the infinitive):
ho
hai
ha
abbiamo
avete
hanno
abbia
abbiate
abbiano
avrò
avrai
avrà
avremo
avrete
avranno
avrei
avresti
avrebbe
avremmo
avreste
avrebbero
avevo
avevi
aveva
avevamo
avevate
avevano
ebbi
avesti
ebbe
avemmo
aveste
ebbero
avessi
avesse
avessimo
avessero
avendo
avuto
avuta
avuti
avute
| forms of essere, to be (not including the infinitive):
sono
sei
è
siamo
siete
sia
siate
siano
sarò
sarai
sarà
saremo
sarete
saranno
sarei
saresti
sarebbe
saremmo
sareste
sarebbero
ero
eri
era
eravamo
eravate
erano
fui
fosti
fu
fummo
foste
furono
fossi
fosse
fossimo
fossero
essendo
| forms of fare, to do (not including the infinitive, fa, fat-):
faccio
fai
facciamo
fanno
faccia
facciate
facciano
farò
farai
farà
faremo
farete
faranno
farei
faresti
farebbe
faremmo
fareste
farebbero
facevo
facevi
faceva
facevamo
facevate
facevano
feci
facesti
fece
facemmo
faceste
fecero
facessi
facesse
facessimo
facessero
facendo
| forms of stare, to be (not including the infinitive):
sto
stai
sta
stiamo
stanno
stia
stiate
stiano
starò
starai
starà
staremo
starete
staranno
starei
staresti
starebbe
staremmo
stareste
starebbero
stavo
stavi
stava
stavamo
stavate
stavano
stetti
stesti
stette
stemmo
steste
stettero
stessi
stesse
stessimo
stessero
''')
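parse_stop_word consumes Snowball's stop.txt format, where everything after a `|` on a line is a comment and each remaining whitespace-separated token is a stop word. A sketch of that parsing (the real sphinx.search.parse_stop_word may differ in detail):

```python
def parse_stop_word_sketch(source):
    """Parse Snowball's stop.txt format into a set of stop words.

    Illustrative re-implementation; the module above uses
    sphinx.search.parse_stop_word.
    """
    result = set()
    for line in source.splitlines():
        line = line.split('|', 1)[0]   # drop the trailing comment
        result.update(line.split())    # remaining tokens are stop words
    return result
```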
js_stemmer = u"""
var JSX={};(function(k){function l(b,e){var a=function(){};a.prototype=e.prototype;var c=new a;for(var d in b){b[d].prototype=c}}function K(c,b){for(var a in b.prototype)if(b.prototype.hasOwnProperty(a))c.prototype[a]=b.prototype[a]}function e(a,b,d){function c(a,b,c){delete a[b];a[b]=c;return c}Object.defineProperty(a,b,{get:function(){return c(a,b,d())},set:function(d){c(a,b,d)},enumerable:true,configurable:true})}function L(a,b,c){return a[b]=a[b]/c|0}var r=parseInt;var B=parseFloat;function M(a){return a!==a}var z=isFinite;var y=encodeURIComponent;var x=decodeURIComponent;var w=encodeURI;var u=decodeURI;var t=Object.prototype.toString;var C=Object.prototype.hasOwnProperty;function j(){}k.require=function(b){var a=q[b];return a!==undefined?a:null};k.profilerIsRunning=function(){return j.getResults!=null};k.getProfileResults=function(){return(j.getResults||function(){return{}})()};k.postProfileResults=function(a,b){if(j.postResults==null)throw new Error('profiler has not been turned on');return j.postResults(a,b)};k.resetProfileResults=function(){if(j.resetResults==null)throw new Error('profiler has not been turned on');return j.resetResults()};k.DEBUG=false;function s(){};l([s],Error);function a(a,b,c){this.F=a.length;this.K=a;this.L=b;this.I=c;this.H=null;this.P=null};l([a],Object);function p(){};l([p],Object);function i(){var a;var b;var c;this.G={};a=this.E='';b=this._=0;c=this.A=a.length;this.D=0;this.C=b;this.B=c};l([i],p);function v(a,b){a.E=b.E;a._=b._;a.A=b.A;a.D=b.D;a.C=b.C;a.B=b.B};function d(b,d,c,e){var a;if(b._>=b.A){return false}a=b.E.charCodeAt(b._);if(a>e||a<c){return false}a-=c;if((d[a>>>3]&1<<(a&7))===0){return false}b._++;return true};function m(b,d,c,e){var a;if(b._<=b.D){return false}a=b.E.charCodeAt(b._-1);if(a>e||a<c){return false}a-=c;if((d[a>>>3]&1<<(a&7))===0){return false}b._--;return true};function h(a,d,c,e){var b;if(a._>=a.A){return false}b=a.E.charCodeAt(a._);if(b>e||b<c){a._++;return 
true}b-=c;if((d[b>>>3]&1<<(b&7))===0){a._++;return true}return false};function o(a,b,d){var c;if(a.A-a._<b){return false}if(a.E.slice(c=a._,c+b)!==d){return false}a._+=b;return true};function g(a,b,d){var c;if(a._-a.D<b){return false}if(a.E.slice((c=a._)-b,c)!==d){return false}a._-=b;return true};function n(f,m,p){var b;var d;var e;var n;var g;var k;var l;var i;var h;var c;var a;var j;var o;b=0;d=p;e=f._;n=f.A;g=0;k=0;l=false;while(true){i=b+(d-b>>>1);h=0;c=g<k?g:k;a=m[i];for(j=c;j<a.F;j++){if(e+c===n){h=-1;break}h=f.E.charCodeAt(e+c)-a.K.charCodeAt(j);if(h!==0){break}c++}if(h<0){d=i;k=c}else{b=i;g=c}if(d-b<=1){if(b>0){break}if(d===b){break}if(l){break}l=true}}while(true){a=m[b];if(g>=a.F){f._=e+a.F|0;if(a.H==null){return a.I}o=a.H(a.P);f._=e+a.F|0;if(o){return a.I}}b=a.L;if(b<0){return 0}}return-1};function f(d,m,p){var b;var g;var e;var n;var f;var k;var l;var i;var h;var c;var a;var j;var o;b=0;g=p;e=d._;n=d.D;f=0;k=0;l=false;while(true){i=b+(g-b>>1);h=0;c=f<k?f:k;a=m[i];for(j=a.F-1-c;j>=0;j--){if(e-c===n){h=-1;break}h=d.E.charCodeAt(e-1-c)-a.K.charCodeAt(j);if(h!==0){break}c++}if(h<0){g=i;k=c}else{b=i;f=c}if(g-b<=1){if(b>0){break}if(g===b){break}if(l){break}l=true}}while(true){a=m[b];if(f>=a.F){d._=e-a.F|0;if(a.H==null){return a.I}o=a.H(d);d._=e-a.F|0;if(o){return a.I}}b=a.L;if(b<0){return 0}}return-1};function D(a,b,d,e){var c;c=e.length-(d-b);a.E=a.E.slice(0,b)+e+a.E.slice(d);a.A+=c|0;if(a._>=d){a._+=c|0}else if(a._>b){a._=b}return c|0};function c(a,f){var b;var c;var d;var e;b=false;if((c=a.C)<0||c>(d=a.B)||d>(e=a.A)||e>a.E.length?false:true){D(a,a.C,a.B,f);b=true}return b};i.prototype.J=function(){return false};i.prototype.a=function(b){var a;var c;var d;var e;a=this.G['.'+b];if(a==null){c=this.E=b;d=this._=0;e=this.A=c.length;this.D=0;this.C=d;this.B=e;this.J();a=this.E;this.G['.'+b]=a}return a};i.prototype.stemWord=i.prototype.a;i.prototype.b=function(e){var d;var b;var c;var a;var f;var g;var 
h;d=[];for(b=0;b<e.length;b++){c=e[b];a=this.G['.'+c];if(a==null){f=this.E=c;g=this._=0;h=this.A=f.length;this.D=0;this.C=g;this.B=h;this.J();a=this.E;this.G['.'+c]=a}d.push(a)}return d};i.prototype.stemWords=i.prototype.b;function b(){i.call(this);this.I_p2=0;this.I_p1=0;this.I_pV=0};l([b],i);b.prototype.M=function(a){this.I_p2=a.I_p2;this.I_p1=a.I_p1;this.I_pV=a.I_pV;v(this,a)};b.prototype.copy_from=b.prototype.M;b.prototype.W=function(){var e;var p;var q;var l;var a;var k;var f;var g;var h;var i;var j;var m;p=this._;b:while(true){q=this._;f=true;a:while(f===true){f=false;this.C=this._;e=n(this,b.a_0,7);if(e===0){break a}this.B=this._;switch(e){case 0:break a;case 1:if(!c(this,'à')){return false}break;case 2:if(!c(this,'è')){return false}break;case 3:if(!c(this,'ì')){return false}break;case 4:if(!c(this,'ò')){return false}break;case 5:if(!c(this,'ù')){return false}break;case 6:if(!c(this,'qU')){return false}break;case 7:if(this._>=this.A){break a}this._++;break}continue b}this._=q;break b}this._=p;b:while(true){l=this._;g=true;d:while(g===true){g=false;e:while(true){a=this._;h=true;a:while(h===true){h=false;if(!d(this,b.g_v,97,249)){break a}this.C=this._;i=true;f:while(i===true){i=false;k=this._;j=true;c:while(j===true){j=false;if(!o(this,1,'u')){break c}this.B=this._;if(!d(this,b.g_v,97,249)){break c}if(!c(this,'U')){return false}break f}this._=k;if(!o(this,1,'i')){break a}this.B=this._;if(!d(this,b.g_v,97,249)){break a}if(!c(this,'I')){return false}}this._=a;break e}m=this._=a;if(m>=this.A){break d}this._++}continue b}this._=l;break b}return true};b.prototype.r_prelude=b.prototype.W;function G(a){var e;var q;var r;var m;var f;var l;var g;var h;var i;var j;var k;var p;q=a._;b:while(true){r=a._;g=true;a:while(g===true){g=false;a.C=a._;e=n(a,b.a_0,7);if(e===0){break a}a.B=a._;switch(e){case 0:break a;case 1:if(!c(a,'à')){return false}break;case 2:if(!c(a,'è')){return false}break;case 3:if(!c(a,'ì')){return false}break;case 4:if(!c(a,'ò')){return false}break;case 
5:if(!c(a,'ù')){return false}break;case 6:if(!c(a,'qU')){return false}break;case 7:if(a._>=a.A){break a}a._++;break}continue b}a._=r;break b}a._=q;b:while(true){m=a._;h=true;d:while(h===true){h=false;e:while(true){f=a._;i=true;a:while(i===true){i=false;if(!d(a,b.g_v,97,249)){break a}a.C=a._;j=true;f:while(j===true){j=false;l=a._;k=true;c:while(k===true){k=false;if(!o(a,1,'u')){break c}a.B=a._;if(!d(a,b.g_v,97,249)){break c}if(!c(a,'U')){return false}break f}a._=l;if(!o(a,1,'i')){break a}a.B=a._;if(!d(a,b.g_v,97,249)){break a}if(!c(a,'I')){return false}}a._=f;break e}p=a._=f;if(p>=a.A){break d}a._++}continue b}a._=m;break b}return true};b.prototype.U=function(){var u;var w;var x;var y;var t;var l;var e;var f;var g;var i;var c;var j;var k;var a;var m;var n;var o;var p;var q;var r;var s;var v;this.I_pV=s=this.A;this.I_p1=s;this.I_p2=s;u=this._;l=true;a:while(l===true){l=false;e=true;g:while(e===true){e=false;w=this._;f=true;b:while(f===true){f=false;if(!d(this,b.g_v,97,249)){break b}g=true;f:while(g===true){g=false;x=this._;i=true;c:while(i===true){i=false;if(!h(this,b.g_v,97,249)){break c}d:while(true){c=true;e:while(c===true){c=false;if(!d(this,b.g_v,97,249)){break e}break d}if(this._>=this.A){break c}this._++}break f}this._=x;if(!d(this,b.g_v,97,249)){break b}c:while(true){j=true;d:while(j===true){j=false;if(!h(this,b.g_v,97,249)){break d}break c}if(this._>=this.A){break b}this._++}}break g}this._=w;if(!h(this,b.g_v,97,249)){break a}k=true;c:while(k===true){k=false;y=this._;a=true;b:while(a===true){a=false;if(!h(this,b.g_v,97,249)){break b}e:while(true){m=true;d:while(m===true){m=false;if(!d(this,b.g_v,97,249)){break d}break e}if(this._>=this.A){break b}this._++}break c}this._=y;if(!d(this,b.g_v,97,249)){break a}if(this._>=this.A){break a}this._++}}this.I_pV=this._}v=this._=u;t=v;n=true;a:while(n===true){n=false;b:while(true){o=true;c:while(o===true){o=false;if(!d(this,b.g_v,97,249)){break c}break b}if(this._>=this.A){break 
a}this._++}b:while(true){p=true;c:while(p===true){p=false;if(!h(this,b.g_v,97,249)){break c}break b}if(this._>=this.A){break a}this._++}this.I_p1=this._;b:while(true){q=true;c:while(q===true){q=false;if(!d(this,b.g_v,97,249)){break c}break b}if(this._>=this.A){break a}this._++}c:while(true){r=true;b:while(r===true){r=false;if(!h(this,b.g_v,97,249)){break b}break c}if(this._>=this.A){break a}this._++}this.I_p2=this._}this._=t;return true};b.prototype.r_mark_regions=b.prototype.U;function H(a){var x;var y;var z;var u;var v;var l;var e;var f;var g;var i;var j;var k;var c;var m;var n;var o;var p;var q;var r;var s;var t;var w;a.I_pV=t=a.A;a.I_p1=t;a.I_p2=t;x=a._;l=true;a:while(l===true){l=false;e=true;g:while(e===true){e=false;y=a._;f=true;b:while(f===true){f=false;if(!d(a,b.g_v,97,249)){break b}g=true;f:while(g===true){g=false;z=a._;i=true;c:while(i===true){i=false;if(!h(a,b.g_v,97,249)){break c}d:while(true){j=true;e:while(j===true){j=false;if(!d(a,b.g_v,97,249)){break e}break d}if(a._>=a.A){break c}a._++}break f}a._=z;if(!d(a,b.g_v,97,249)){break b}c:while(true){k=true;d:while(k===true){k=false;if(!h(a,b.g_v,97,249)){break d}break c}if(a._>=a.A){break b}a._++}}break g}a._=y;if(!h(a,b.g_v,97,249)){break a}c=true;c:while(c===true){c=false;u=a._;m=true;b:while(m===true){m=false;if(!h(a,b.g_v,97,249)){break b}e:while(true){n=true;d:while(n===true){n=false;if(!d(a,b.g_v,97,249)){break d}break e}if(a._>=a.A){break b}a._++}break c}a._=u;if(!d(a,b.g_v,97,249)){break a}if(a._>=a.A){break a}a._++}}a.I_pV=a._}w=a._=x;v=w;o=true;a:while(o===true){o=false;b:while(true){p=true;c:while(p===true){p=false;if(!d(a,b.g_v,97,249)){break c}break b}if(a._>=a.A){break a}a._++}b:while(true){q=true;c:while(q===true){q=false;if(!h(a,b.g_v,97,249)){break c}break b}if(a._>=a.A){break a}a._++}a.I_p1=a._;b:while(true){r=true;c:while(r===true){r=false;if(!d(a,b.g_v,97,249)){break c}break b}if(a._>=a.A){break a}a._++}c:while(true){s=true;b:while(s===true){s=false;if(!h(a,b.g_v,97,249)){break 
b}break c}if(a._>=a.A){break a}a._++}a.I_p2=a._}a._=v;return true};b.prototype.V=function(){var a;var e;var d;b:while(true){e=this._;d=true;a:while(d===true){d=false;this.C=this._;a=n(this,b.a_1,3);if(a===0){break a}this.B=this._;switch(a){case 0:break a;case 1:if(!c(this,'i')){return false}break;case 2:if(!c(this,'u')){return false}break;case 3:if(this._>=this.A){break a}this._++;break}continue b}this._=e;break b}return true};b.prototype.r_postlude=b.prototype.V;function I(a){var d;var f;var e;b:while(true){f=a._;e=true;a:while(e===true){e=false;a.C=a._;d=n(a,b.a_1,3);if(d===0){break a}a.B=a._;switch(d){case 0:break a;case 1:if(!c(a,'i')){return false}break;case 2:if(!c(a,'u')){return false}break;case 3:if(a._>=a.A){break a}a._++;break}continue b}a._=f;break b}return true};b.prototype.S=function(){return!(this.I_pV<=this._)?false:true};b.prototype.r_RV=b.prototype.S;b.prototype.Q=function(){return!(this.I_p1<=this._)?false:true};b.prototype.r_R1=b.prototype.Q;b.prototype.R=function(){return!(this.I_p2<=this._)?false:true};b.prototype.r_R2=b.prototype.R;b.prototype.T=function(){var a;this.B=this._;if(f(this,b.a_2,37)===0){return false}this.C=this._;a=f(this,b.a_3,5);if(a===0){return false}if(!(!(this.I_pV<=this._)?false:true)){return false}switch(a){case 0:return false;case 1:if(!c(this,'')){return false}break;case 2:if(!c(this,'e')){return false}break}return true};b.prototype.r_attached_pronoun=b.prototype.T;function J(a){var d;a.B=a._;if(f(a,b.a_2,37)===0){return false}a.C=a._;d=f(a,b.a_3,5);if(d===0){return false}if(!(!(a.I_pV<=a._)?false:true)){return false}switch(d){case 0:return false;case 1:if(!c(a,'')){return false}break;case 2:if(!c(a,'e')){return false}break}return true};b.prototype.X=function(){var a;var j;var d;var h;var e;var k;var i;var l;var m;var o;var p;var q;var r;var n;this.B=this._;a=f(this,b.a_6,51);if(a===0){return false}this.C=this._;switch(a){case 0:return false;case 1:if(!(!(this.I_p2<=this._)?false:true)){return 
false}if(!c(this,'')){return false}break;case 2:if(!(!(this.I_p2<=this._)?false:true)){return false}if(!c(this,'')){return false}j=this.A-this._;k=true;a:while(k===true){k=false;this.B=this._;if(!g(this,2,'ic')){this._=this.A-j;break a}this.C=o=this._;if(!(!(this.I_p2<=o)?false:true)){this._=this.A-j;break a}if(!c(this,'')){return false}}break;case 3:if(!(!(this.I_p2<=this._)?false:true)){return false}if(!c(this,'log')){return false}break;case 4:if(!(!(this.I_p2<=this._)?false:true)){return false}if(!c(this,'u')){return false}break;case 5:if(!(!(this.I_p2<=this._)?false:true)){return false}if(!c(this,'ente')){return false}break;case 6:if(!(!(this.I_pV<=this._)?false:true)){return false}if(!c(this,'')){return false}break;case 7:if(!(!(this.I_p1<=this._)?false:true)){return false}if(!c(this,'')){return false}d=this.A-this._;i=true;a:while(i===true){i=false;this.B=this._;a=f(this,b.a_4,4);if(a===0){this._=this.A-d;break a}this.C=p=this._;if(!(!(this.I_p2<=p)?false:true)){this._=this.A-d;break a}if(!c(this,'')){return false}switch(a){case 0:this._=this.A-d;break a;case 1:this.B=this._;if(!g(this,2,'at')){this._=this.A-d;break a}this.C=q=this._;if(!(!(this.I_p2<=q)?false:true)){this._=this.A-d;break a}if(!c(this,'')){return false}break}}break;case 8:if(!(!(this.I_p2<=this._)?false:true)){return false}if(!c(this,'')){return false}h=this.A-this._;l=true;a:while(l===true){l=false;this.B=this._;a=f(this,b.a_5,3);if(a===0){this._=this.A-h;break a}this.C=this._;switch(a){case 0:this._=this.A-h;break a;case 1:if(!(!(this.I_p2<=this._)?false:true)){this._=this.A-h;break a}if(!c(this,'')){return false}break}}break;case 9:if(!(!(this.I_p2<=this._)?false:true)){return false}if(!c(this,'')){return false}e=this.A-this._;m=true;a:while(m===true){m=false;this.B=this._;if(!g(this,2,'at')){this._=this.A-e;break a}this.C=r=this._;if(!(!(this.I_p2<=r)?false:true)){this._=this.A-e;break a}if(!c(this,'')){return false}this.B=this._;if(!g(this,2,'ic')){this._=this.A-e;break 
a}this.C=n=this._;if(!(!(this.I_p2<=n)?false:true)){this._=this.A-e;break a}if(!c(this,'')){return false}}break}return true};b.prototype.r_standard_suffix=b.prototype.X;function F(a){var d;var k;var e;var i;var h;var l;var j;var m;var n;var p;var q;var r;var s;var o;a.B=a._;d=f(a,b.a_6,51);if(d===0){return false}a.C=a._;switch(d){case 0:return false;case 1:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'')){return false}break;case 2:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'')){return false}k=a.A-a._;l=true;a:while(l===true){l=false;a.B=a._;if(!g(a,2,'ic')){a._=a.A-k;break a}a.C=p=a._;if(!(!(a.I_p2<=p)?false:true)){a._=a.A-k;break a}if(!c(a,'')){return false}}break;case 3:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'log')){return false}break;case 4:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'u')){return false}break;case 5:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'ente')){return false}break;case 6:if(!(!(a.I_pV<=a._)?false:true)){return false}if(!c(a,'')){return false}break;case 7:if(!(!(a.I_p1<=a._)?false:true)){return false}if(!c(a,'')){return false}e=a.A-a._;j=true;a:while(j===true){j=false;a.B=a._;d=f(a,b.a_4,4);if(d===0){a._=a.A-e;break a}a.C=q=a._;if(!(!(a.I_p2<=q)?false:true)){a._=a.A-e;break a}if(!c(a,'')){return false}switch(d){case 0:a._=a.A-e;break a;case 1:a.B=a._;if(!g(a,2,'at')){a._=a.A-e;break a}a.C=r=a._;if(!(!(a.I_p2<=r)?false:true)){a._=a.A-e;break a}if(!c(a,'')){return false}break}}break;case 8:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'')){return false}i=a.A-a._;m=true;a:while(m===true){m=false;a.B=a._;d=f(a,b.a_5,3);if(d===0){a._=a.A-i;break a}a.C=a._;switch(d){case 0:a._=a.A-i;break a;case 1:if(!(!(a.I_p2<=a._)?false:true)){a._=a.A-i;break a}if(!c(a,'')){return false}break}}break;case 9:if(!(!(a.I_p2<=a._)?false:true)){return false}if(!c(a,'')){return false}h=a.A-a._;n=true;a:while(n===true){n=false;a.B=a._;if(!g(a,2,'at')){a._=a.A-h;break 
a}a.C=s=a._;if(!(!(a.I_p2<=s)?false:true)){a._=a.A-h;break a}if(!c(a,'')){return false}a.B=a._;if(!g(a,2,'ic')){a._=a.A-h;break a}a.C=o=a._;if(!(!(a.I_p2<=o)?false:true)){a._=a.A-h;break a}if(!c(a,'')){return false}}break}return true};b.prototype.Y=function(){var d;var e;var a;var g;var h;var i;e=this.A-(g=this._);if(g<this.I_pV){return false}h=this._=this.I_pV;a=this.D;this.D=h;i=this._=this.A-e;this.B=i;d=f(this,b.a_7,87);if(d===0){this.D=a;return false}this.C=this._;switch(d){case 0:this.D=a;return false;case 1:if(!c(this,'')){return false}break}this.D=a;return true};b.prototype.r_verb_suffix=b.prototype.Y;function E(a){var e;var g;var d;var h;var i;var j;g=a.A-(h=a._);if(h<a.I_pV){return false}i=a._=a.I_pV;d=a.D;a.D=i;j=a._=a.A-g;a.B=j;e=f(a,b.a_7,87);if(e===0){a.D=d;return false}a.C=a._;switch(e){case 0:a.D=d;return false;case 1:if(!c(a,'')){return false}break}a.D=d;return true};b.prototype.Z=function(){var a;var d;var e;var f;var h;var i;a=this.A-this._;e=true;a:while(e===true){e=false;this.B=this._;if(!m(this,b.g_AEIO,97,242)){this._=this.A-a;break a}this.C=h=this._;if(!(!(this.I_pV<=h)?false:true)){this._=this.A-a;break a}if(!c(this,'')){return false}this.B=this._;if(!g(this,1,'i')){this._=this.A-a;break a}this.C=i=this._;if(!(!(this.I_pV<=i)?false:true)){this._=this.A-a;break a}if(!c(this,'')){return false}}d=this.A-this._;f=true;a:while(f===true){f=false;this.B=this._;if(!g(this,1,'h')){this._=this.A-d;break a}this.C=this._;if(!m(this,b.g_CG,99,103)){this._=this.A-d;break a}if(!(!(this.I_pV<=this._)?false:true)){this._=this.A-d;break a}if(!c(this,'')){return false}}return true};b.prototype.r_vowel_suffix=b.prototype.Z;function A(a){var d;var e;var f;var h;var i;var j;d=a.A-a._;f=true;a:while(f===true){f=false;a.B=a._;if(!m(a,b.g_AEIO,97,242)){a._=a.A-d;break a}a.C=i=a._;if(!(!(a.I_pV<=i)?false:true)){a._=a.A-d;break a}if(!c(a,'')){return false}a.B=a._;if(!g(a,1,'i')){a._=a.A-d;break a}a.C=j=a._;if(!(!(a.I_pV<=j)?false:true)){a._=a.A-d;break 
a}if(!c(a,'')){return false}}e=a.A-a._;h=true;a:while(h===true){h=false;a.B=a._;if(!g(a,1,'h')){a._=a.A-e;break a}a.C=a._;if(!m(a,b.g_CG,99,103)){a._=a.A-e;break a}if(!(!(a.I_pV<=a._)?false:true)){a._=a.A-e;break a}if(!c(a,'')){return false}}return true};b.prototype.J=function(){var l;var i;var j;var k;var m;var n;var b;var c;var d;var e;var a;var f;var g;var h;var p;var q;var r;var s;var t;var u;var o;l=this._;b=true;a:while(b===true){b=false;if(!G(this)){break a}}p=this._=l;i=p;c=true;a:while(c===true){c=false;if(!H(this)){break a}}q=this._=i;this.D=q;s=this._=r=this.A;j=r-s;d=true;a:while(d===true){d=false;if(!J(this)){break a}}u=this._=(t=this.A)-j;k=t-u;e=true;a:while(e===true){e=false;a=true;b:while(a===true){a=false;m=this.A-this._;f=true;c:while(f===true){f=false;if(!F(this)){break c}break b}this._=this.A-m;if(!E(this)){break a}}}this._=this.A-k;g=true;a:while(g===true){g=false;if(!A(this)){break a}}o=this._=this.D;n=o;h=true;a:while(h===true){h=false;if(!I(this)){break a}}this._=n;return true};b.prototype.stem=b.prototype.J;b.prototype.N=function(a){return a instanceof b};b.prototype.equals=b.prototype.N;b.prototype.O=function(){var c;var a;var b;var d;c='ItalianStemmer';a=0;for(b=0;b<c.length;b++){d=c.charCodeAt(b);a=(a<<5)-a+d;a=a&a}return a|0};b.prototype.hashCode=b.prototype.O;b.serialVersionUID=1;e(b,'methodObject',function(){return new b});e(b,'a_0',function(){return[new a('',-1,7),new a('qu',0,6),new a('á',0,1),new a('é',0,2),new a('í',0,3),new a('ó',0,4),new a('ú',0,5)]});e(b,'a_1',function(){return[new a('',-1,3),new a('I',0,1),new a('U',0,2)]});e(b,'a_2',function(){return[new a('la',-1,-1),new a('cela',0,-1),new a('gliela',0,-1),new a('mela',0,-1),new a('tela',0,-1),new a('vela',0,-1),new a('le',-1,-1),new a('cele',6,-1),new a('gliele',6,-1),new a('mele',6,-1),new a('tele',6,-1),new a('vele',6,-1),new a('ne',-1,-1),new a('cene',12,-1),new a('gliene',12,-1),new a('mene',12,-1),new a('sene',12,-1),new a('tene',12,-1),new a('vene',12,-1),new 
a('ci',-1,-1),new a('li',-1,-1),new a('celi',20,-1),new a('glieli',20,-1),new a('meli',20,-1),new a('teli',20,-1),new a('veli',20,-1),new a('gli',20,-1),new a('mi',-1,-1),new a('si',-1,-1),new a('ti',-1,-1),new a('vi',-1,-1),new a('lo',-1,-1),new a('celo',31,-1),new a('glielo',31,-1),new a('melo',31,-1),new a('telo',31,-1),new a('velo',31,-1)]});e(b,'a_3',function(){return[new a('ando',-1,1),new a('endo',-1,1),new a('ar',-1,2),new a('er',-1,2),new a('ir',-1,2)]});e(b,'a_4',function(){return[new a('ic',-1,-1),new a('abil',-1,-1),new a('os',-1,-1),new a('iv',-1,1)]});e(b,'a_5',function(){return[new a('ic',-1,1),new a('abil',-1,1),new a('iv',-1,1)]});e(b,'a_6',function(){return[new a('ica',-1,1),new a('logia',-1,3),new a('osa',-1,1),new a('ista',-1,1),new a('iva',-1,9),new a('anza',-1,1),new a('enza',-1,5),new a('ice',-1,1),new a('atrice',7,1),new a('iche',-1,1),new a('logie',-1,3),new a('abile',-1,1),new a('ibile',-1,1),new a('usione',-1,4),new a('azione',-1,2),new a('uzione',-1,4),new a('atore',-1,2),new a('ose',-1,1),new a('ante',-1,1),new a('mente',-1,1),new a('amente',19,7),new a('iste',-1,1),new a('ive',-1,9),new a('anze',-1,1),new a('enze',-1,5),new a('ici',-1,1),new a('atrici',25,1),new a('ichi',-1,1),new a('abili',-1,1),new a('ibili',-1,1),new a('ismi',-1,1),new a('usioni',-1,4),new a('azioni',-1,2),new a('uzioni',-1,4),new a('atori',-1,2),new a('osi',-1,1),new a('anti',-1,1),new a('amenti',-1,6),new a('imenti',-1,6),new a('isti',-1,1),new a('ivi',-1,9),new a('ico',-1,1),new a('ismo',-1,1),new a('oso',-1,1),new a('amento',-1,6),new a('imento',-1,6),new a('ivo',-1,9),new a('ità',-1,8),new a('istà',-1,1),new a('istè',-1,1),new a('istì',-1,1)]});e(b,'a_7',function(){return[new a('isca',-1,1),new a('enda',-1,1),new a('ata',-1,1),new a('ita',-1,1),new a('uta',-1,1),new a('ava',-1,1),new a('eva',-1,1),new a('iva',-1,1),new a('erebbe',-1,1),new a('irebbe',-1,1),new a('isce',-1,1),new a('ende',-1,1),new a('are',-1,1),new a('ere',-1,1),new a('ire',-1,1),new 
a('asse',-1,1),new a('ate',-1,1),new a('avate',16,1),new a('evate',16,1),new a('ivate',16,1),new a('ete',-1,1),new a('erete',20,1),new a('irete',20,1),new a('ite',-1,1),new a('ereste',-1,1),new a('ireste',-1,1),new a('ute',-1,1),new a('erai',-1,1),new a('irai',-1,1),new a('isci',-1,1),new a('endi',-1,1),new a('erei',-1,1),new a('irei',-1,1),new a('assi',-1,1),new a('ati',-1,1),new a('iti',-1,1),new a('eresti',-1,1),new a('iresti',-1,1),new a('uti',-1,1),new a('avi',-1,1),new a('evi',-1,1),new a('ivi',-1,1),new a('isco',-1,1),new a('ando',-1,1),new a('endo',-1,1),new a('Yamo',-1,1),new a('iamo',-1,1),new a('avamo',-1,1),new a('evamo',-1,1),new a('ivamo',-1,1),new a('eremo',-1,1),new a('iremo',-1,1),new a('assimo',-1,1),new a('ammo',-1,1),new a('emmo',-1,1),new a('eremmo',54,1),new a('iremmo',54,1),new a('immo',-1,1),new a('ano',-1,1),new a('iscano',58,1),new a('avano',58,1),new a('evano',58,1),new a('ivano',58,1),new a('eranno',-1,1),new a('iranno',-1,1),new a('ono',-1,1),new a('iscono',65,1),new a('arono',65,1),new a('erono',65,1),new a('irono',65,1),new a('erebbero',-1,1),new a('irebbero',-1,1),new a('assero',-1,1),new a('essero',-1,1),new a('issero',-1,1),new a('ato',-1,1),new a('ito',-1,1),new a('uto',-1,1),new a('avo',-1,1),new a('evo',-1,1),new a('ivo',-1,1),new a('ar',-1,1),new a('ir',-1,1),new a('erà',-1,1),new a('irà',-1,1),new a('erò',-1,1),new a('irò',-1,1)]});e(b,'g_v',function(){return[17,65,16,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2,1]});e(b,'g_AEIO',function(){return[17,65,0,0,0,0,0,0,0,0,0,0,0,0,0,128,128,8,2]});e(b,'g_CG',function(){return[17]});var q={'src/stemmer.jsx':{Stemmer:p},'src/italian-stemmer.jsx':{ItalianStemmer:b}}}(JSX))
var Stemmer = JSX.require("src/italian-stemmer.jsx").ItalianStemmer;
"""
class SearchItalian(SearchLanguage):
    lang = 'it'
    language_name = 'Italian'
    js_stemmer_rawcode = 'italian-stemmer.js'
    js_stemmer_code = js_stemmer
    stopwords = italian_stopwords
    def init(self, options):
        # type: (Any) -> None
        self.stemmer = snowballstemmer.stemmer('italian')
    def stem(self, word):
        # type: (unicode) -> unicode
        return self.stemmer.stemWord(word.lower())
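The minified stemmer above memoizes results in a per-instance cache keyed as `'.' + word` (see `this.G['.'+b]` in the generated JS; the dot prefix plausibly keeps lookups from colliding with inherited object properties such as `constructor`). The same memoization pattern, sketched in Python with a toy stem function standing in for a real stemmer:

```python
def make_cached_stemmer(stem):
    """Wrap a stem function so each distinct word is stemmed only once."""
    cache = {}

    def stem_word(word):
        if word not in cache:
            cache[word] = stem(word)
        return cache[word]

    return stem_word

# toy stand-in for a real stemmer: strip trailing vowels
stem_word = make_cached_stemmer(lambda w: w.rstrip('aeiou'))
assert stem_word('gatto') == 'gatt'
assert stem_word('gatto') == 'gatt'  # second call is served from the cache
```

The payoff is the same as in the JS: repeated words in a document are stemmed once, which matters when indexing large corpora.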
| 86.074627 | 23,390 | 0.608427 | 6,481 | 28,835 | 2.639408 | 0.109551 | 0.0463 | 0.046475 | 0.039986 | 0.522857 | 0.466561 | 0.41921 | 0.376885 | 0.298375 | 0.245528 | 0 | 0.036057 | 0.111288 | 28,835 | 334 | 23,391 | 86.332335 | 0.631468 | 0.010161 | 0 | 0 | 0 | 0.003289 | 0.980814 | 0.56987 | 0 | 0 | 0 | 0 | 0 | 1 | 0.006579 | false | 0 | 0.009868 | 0.003289 | 0.039474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8624371154c1b1a75c36b0de672b793085f753dc | 4,337 | py | Python | tests/test_fft.py | Majoburo/spaxlet | 9eaf52b996dd6f64401e95eedfa50785b6f8cc85 | [
"MIT"
] | null | null | null | tests/test_fft.py | Majoburo/spaxlet | 9eaf52b996dd6f64401e95eedfa50785b6f8cc85 | [
"MIT"
] | null | null | null | tests/test_fft.py | Majoburo/spaxlet | 9eaf52b996dd6f64401e95eedfa50785b6f8cc85 | [
"MIT"
] | null | null | null | from functools import partial
import numpy as np
import scarlet
import scarlet.fft as fft
from numpy.testing import assert_array_equal, assert_almost_equal
class TestCentering(object):
"""Test the centering and padding algorithms"""
def test_shift(self):
"""Test that padding and fft shift/unshift are consistent"""
a0 = np.ones((1, 1))
a_pad = fft._pad(a0, (5, 4))
truth = [[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]
assert_array_equal(a_pad, truth)
a_shift = np.fft.ifftshift(a_pad)
truth = [[1.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0]]
assert_array_equal(a_shift, truth)
# Shifting back should give us a_pad again
a_shift_back = np.fft.fftshift(a_shift)
assert_array_equal(a_shift_back, a_pad)
def test_center(self):
"""Test that _centered method is compatible with shift/unshift"""
shape = (5, 2)
a0 = np.arange(10).reshape(shape)
a_pad = fft._pad(a0, (9, 11))
truth = [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 2, 3, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 4, 5, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 6, 7, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 8, 9, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
assert_array_equal(a_pad, truth)
a_shift = np.fft.ifftshift(a_pad)
truth = [[4, 5, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[6, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[8, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
assert_array_equal(a_shift, truth)
# Shifting back should give us a_pad again
a_shift_back = np.fft.fftshift(a_shift)
assert_array_equal(a_shift_back, a_pad)
# _centered should undo the padding, returning the original array
a_final = fft._centered(a_pad, shape)
assert_array_equal(a_final, a0)
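The pairing used in the tests above — `ifftshift` to move the center to the origin, `fftshift` to move it back — matters for odd-sized arrays, where the two shifts roll by different amounts and applying the same one twice does not round-trip. A standalone NumPy check of that asymmetry:

```python
import numpy as np

a = np.arange(5)
# For n = 5, fftshift rolls by n//2 = +2 and ifftshift rolls by -2,
# so only the ifftshift -> fftshift pairing is an exact round trip.
assert np.array_equal(np.fft.fftshift(np.fft.ifftshift(a)), a)
assert not np.array_equal(np.fft.fftshift(np.fft.fftshift(a)), a)
```

For even sizes both shifts roll by the same amount, which is why bugs from mixing them up tend to surface only with odd-shaped kernels like the ones tested here.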
class TestFourier(object):
def get_psfs(self, shape, sigmas):
shape_ = (None, *shape)
psfs = np.array([
scarlet.PSF(partial(scarlet.psf.gaussian, sigma=s), shape=shape_).image[0]
for s in sigmas
])
psfs /= psfs.sum(axis=(1, 2))[:, None, None]
return psfs
"""Test the Fourier object"""
def test_2D_psf_matching(self):
"""Test matching two 2D psfs
"""
# Narrow PSF
shape = (41,41)
psf1 = scarlet.fft.Fourier(self.get_psfs(shape, [1])[0])
# Wide PSF
psf2 = scarlet.fft.Fourier(self.get_psfs(shape, [2])[0])
# Test narrow to wide
kernel_1to2 = fft.match_psfs(psf2, psf1)
img2 = fft.convolve(psf1, kernel_1to2)
assert_almost_equal(img2.image, psf2.image)
# Test wide to narrow
kernel_2to1 = fft.match_psfs(psf1, psf2)
img1 = fft.convolve(psf2, kernel_2to1)
assert_almost_equal(img1.image, psf1.image)
def test_multiband_psf_matching(self):
"""Test matching two PSFs with a spectral dimension
"""
# Narrow PSF
shape = (41,41)
psf1 = scarlet.fft.Fourier(self.get_psfs(shape, [1]))
# Wide PSF
psf2 = scarlet.fft.Fourier(self.get_psfs(shape, [1,2,3]))
        # Narrow to wide
kernel_1to2 = fft.match_psfs(psf2, psf1)
image = fft.convolve(kernel_1to2, psf1)
assert_almost_equal(psf2.image, image.image)
# Wide to narrow
kernel_2to1 = fft.match_psfs(psf1, psf2)
image = fft.convolve(kernel_2to1, psf2).image
for img in image:
assert_almost_equal(img, psf1.image[0])
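Conceptually, the PSF matching exercised above is a deconvolution in Fourier space: the kernel taking a narrow PSF to a wide one is `F(wide) / F(narrow)`. A self-contained NumPy sketch of that identity (independent of scarlet; the Gaussians here are built by hand rather than with `scarlet.psf.gaussian`):

```python
import numpy as np

def gaussian2d(shape, sigma):
    # centered 2-D Gaussian, normalized to sum to 1
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

shape = (41, 41)
narrow = gaussian2d(shape, 1.0)
wide = gaussian2d(shape, 2.0)

# matching kernel in Fourier space: K = F(wide) / F(narrow)
f_narrow = np.fft.fft2(np.fft.ifftshift(narrow))
f_wide = np.fft.fft2(np.fft.ifftshift(wide))
kernel_f = f_wide / f_narrow

# convolving the narrow PSF with the kernel recovers the wide PSF
recovered = np.fft.fftshift(np.fft.ifft2(kernel_f * f_narrow).real)
assert np.allclose(recovered, wide)
```

The division is well behaved here because a Gaussian spectrum never reaches zero; in general the denominator can vanish, so practical implementations must guard against near-zero Fourier coefficients.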
| 34.696 | 86 | 0.502421 | 701 | 4,337 | 2.987161 | 0.149786 | 0.233047 | 0.329513 | 0.412607 | 0.513372 | 0.50191 | 0.473257 | 0.473257 | 0.472779 | 0.438395 | 0 | 0.125485 | 0.345861 | 4,337 | 124 | 87 | 34.975806 | 0.612619 | 0.116901 | 0 | 0.349398 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 1 | 0.060241 | false | 0 | 0.060241 | 0 | 0.156627 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
862720c2e5134ef292f81f25715ff68ca3bd4af0 | 2,618 | py | Python | experiments/noise.py | eareyan/pysegta | b7d208d05855a9994bd711e2b9c99bc5a86bb851 | [
"MIT"
] | 2 | 2021-07-23T13:26:33.000Z | 2021-08-21T15:52:31.000Z | experiments/noise.py | eareyan/pysegta | b7d208d05855a9994bd711e2b9c99bc5a86bb851 | [
"MIT"
] | 3 | 2021-06-08T22:41:23.000Z | 2022-01-13T03:27:42.000Z | experiments/noise.py | eareyan/pysegta | b7d208d05855a9994bd711e2b9c99bc5a86bb851 | [
"MIT"
] | 2 | 2021-07-23T13:26:34.000Z | 2021-08-21T15:52:32.000Z | from abc import ABC, abstractmethod
from scipy.stats import uniform
from typing import List
import numpy as np
class Noise(ABC):
"""An abstract base class to implement noise distribution used when sampling games."""
@abstractmethod
def get_samples(self, m: int) -> List[float]:
"""
Return m samples of noise.
:param m: an integer
:return: a list of m values, each value corresponding to a sample noise value.
"""
pass
@abstractmethod
def get_mean(self) -> float:
"""
Return the mean of the noise distribution.
:return: a float corresponding to the mean value of the distribution
"""
pass
@abstractmethod
def get_variance(self):
"""
Return the variance of the noise distribution.
:return: a float corresponding to the variance of the distribution.
"""
pass
@abstractmethod
def get_c(self, max_utility: float, min_utility: float):
"""
Compute the range of utilities, including noise, of utilities.
:param max_utility: the max utility of the ground-truth game (or an upper-bound)
:param min_utility: the min utility of the ground-truth game (or a lower-bound)
:return: a float.
"""
pass
class UniformNoise(Noise):
"""Implements uniform noise. """
def __init__(self, low: float, high: float):
assert low <= high
# We concentrate on noise that is centered at zero so that we don't have to shift the games' payoffs around.
assert low + high == 0.0
self.low = low
self.high = high
self.uniform_distribution = uniform(loc=self.low, scale=self.high - self.low)
def get_samples(self, m: int):
# return self.uniform_distribution.rvs(size=m) # This is much slower than using the following line!
return np.random.uniform(self.low, self.high, m)
def get_mean(self):
"""
Compute mean of the uniform distribution.
:return:
"""
return self.uniform_distribution.mean()
def get_variance(self):
"""
Compute variance of the uniform distribution.
:return:
"""
return self.uniform_distribution.var()
def get_c(self, max_utility: float, min_utility: float):
"""
        Range of utilities including noise.
:param max_utility:
:param min_utility:
:return:
"""
return max_utility - min_utility + self.high - self.low
def __repr__(self):
return f'UniformNoise, Variance = {self.get_variance():.4f}'
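As a quick sanity check of the moments that `UniformNoise` delegates to `scipy.stats.uniform`: for U(low, high) the mean is (low + high)/2 and the variance is (high − low)²/12. The same check done empirically with NumPy alone:

```python
import numpy as np

low, high = -0.5, 0.5                       # zero-centered, as the class asserts
rng = np.random.default_rng(seed=0)
samples = rng.uniform(low, high, size=100_000)

analytic_mean = (low + high) / 2.0          # 0.0
analytic_var = (high - low) ** 2 / 12.0     # 1/12 ~= 0.0833
assert abs(samples.mean() - analytic_mean) < 1e-2
assert abs(samples.var() - analytic_var) < 1e-3
```

This also illustrates why the class samples with `np.random.uniform` directly: drawing arrays from NumPy is much faster than repeated `rvs` calls on a frozen scipy distribution, while the analytic moments still come from scipy.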
| 30.44186 | 116 | 0.620703 | 332 | 2,618 | 4.801205 | 0.295181 | 0.028231 | 0.050188 | 0.045169 | 0.321832 | 0.301129 | 0.27478 | 0.190715 | 0.190715 | 0.116688 | 0 | 0.001617 | 0.291444 | 2,618 | 85 | 117 | 30.8 | 0.857682 | 0.413675 | 0 | 0.352941 | 0 | 0 | 0.039526 | 0.019763 | 0 | 0 | 0 | 0 | 0.058824 | 1 | 0.294118 | false | 0.117647 | 0.117647 | 0.058824 | 0.617647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
862e6b8fc73c94629147c93336c8d2cf7619f99b | 2,704 | py | Python | nicos/devices/tas/energy.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 12 | 2019-11-06T15:40:36.000Z | 2022-01-01T16:23:00.000Z | nicos/devices/tas/energy.py | ebadkamil/nicos | 0355a970d627aae170c93292f08f95759c97f3b5 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 91 | 2020-08-18T09:20:26.000Z | 2022-02-01T11:07:14.000Z | nicos/devices/tas/energy.py | ISISComputingGroup/nicos | 94cb4d172815919481f8c6ee686f21ebb76f2068 | [
"CC-BY-3.0",
"Apache-2.0",
"CC-BY-4.0"
] | 6 | 2020-01-11T10:52:30.000Z | 2022-02-25T12:35:23.000Z | # -*- coding: utf-8 -*-
# *****************************************************************************
# NICOS, the Networked Instrument Control System of the MLZ
# Copyright (c) 2009-2021 by the NICOS contributors (see AUTHORS)
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 2 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
# details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# Module authors:
# Björn Pedersen <bjoern.pedersen@frm2.tum.de>
#
# *****************************************************************************
from math import pi, sqrt
THZ2MEV = 4.1356675
ANG2MEV = 81.804165
UNITS = {'A': 'lambda',
'A-1': 'k',
'meV': 'meV',
'THz': 'THz'}
class Energy:
"""Energy class."""
def __init__(self, value, unit=None):
if isinstance(value, Energy):
value, unit = value.value, value.unit
if unit not in UNITS:
raise ValueError('unknown energy unit: %r' % unit)
self.value = value
self.unit = unit
def __repr__(self):
return '%.5g %s' % (self.value, self.unit)
def as_meV(self):
if self.unit == 'meV':
return self.value
elif self.unit == 'THz':
return self.value * THZ2MEV
elif self.unit == 'A-1':
return ANG2MEV / (2*pi)**2 * self.value**2
elif self.unit == 'A':
return ANG2MEV / self.value**2
raise ValueError('impossible energy unit: %r' % self.unit)
def as_THz(self):
return self.as_meV() / THZ2MEV
def as_k(self):
return 2*pi * sqrt(self.as_meV() / ANG2MEV)
def as_lambda(self):
return sqrt(ANG2MEV / self.as_meV())
def __float__(self):
return float(self.value)
def asUnit(self, unit):
"""Return a new Energy that represents this energy with another unit."""
return getattr(self, 'as_%s' % UNITS[unit])()
def storable(self):
"""Dictionary representation."""
return {'unit': self.unit, 'e': self.value}
def __getstate__(self):
return self.storable()
def __setstate__(self, state):
self.__dict__.update(state)
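The four unit conversions above are mutually consistent; a minimal round-trip check using only the module's constants (E[meV] = ANG2MEV / λ² for unit 'A', and k = 2π·sqrt(E / ANG2MEV) for unit 'A-1'):

```python
from math import pi, sqrt

ANG2MEV = 81.804165

lam = 2.36                      # neutron wavelength in Angstrom
E = ANG2MEV / lam ** 2          # as_meV for unit 'A'
k = 2 * pi * sqrt(E / ANG2MEV)  # as_k

# k -> meV (unit 'A-1') agrees with lambda -> meV (unit 'A') ...
assert abs(ANG2MEV / (2 * pi) ** 2 * k ** 2 - E) < 1e-10
# ... and the wavelength round-trips through energy and wavevector
assert abs(2 * pi / k - lam) < 1e-12
```

The THz conversion is a pure scale factor (THZ2MEV), so it cannot break the round trip; only the quadratic λ/k relations need checking.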
| 31.08046 | 80 | 0.592086 | 350 | 2,704 | 4.482857 | 0.42 | 0.051625 | 0.024857 | 0.036329 | 0.052263 | 0.035692 | 0 | 0 | 0 | 0 | 0 | 0.0279 | 0.244453 | 2,704 | 86 | 81 | 31.44186 | 0.740088 | 0.423817 | 0 | 0 | 0 | 0 | 0.065132 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.255814 | false | 0 | 0.023256 | 0.139535 | 0.581395 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
86426795f47f46463eb4153338dea9e8512d3125 | 9,318 | py | Python | ckanext/validation/tests/test_jobs.py | salsadigitalauorg/ckanext-validation | bd9e1684287093eb1b0a56b7af8d9a93758f981e | [
"MIT"
] | null | null | null | ckanext/validation/tests/test_jobs.py | salsadigitalauorg/ckanext-validation | bd9e1684287093eb1b0a56b7af8d9a93758f981e | [
"MIT"
] | 5 | 2021-02-04T01:20:05.000Z | 2022-02-14T02:00:08.000Z | ckanext/validation/tests/test_jobs.py | salsadigitalauorg/ckanext-validation | bd9e1684287093eb1b0a56b7af8d9a93758f981e | [
"MIT"
] | 1 | 2020-05-19T23:44:57.000Z | 2020-05-19T23:44:57.000Z | import mock
import StringIO
import json
import io
from nose.tools import assert_equals
import ckantoolkit
from ckan.lib.uploader import ResourceUpload
from ckan.tests.helpers import call_action, reset_db, change_config
from ckan.tests import factories
from ckanext.validation.model import create_tables, tables_exist, Validation
from ckanext.validation.jobs import (
run_validation_job, uploader, Session)
from ckanext.validation.tests.helpers import (
VALID_REPORT, INVALID_REPORT, ERROR_REPORT, VALID_REPORT_LOCAL_FILE,
mock_uploads, MockFieldStorage
)
class MockUploader(ResourceUpload):
def get_path(self, resource_id):
return '/tmp/example/{}'.format(resource_id)
def mock_get_resource_uploader(data_dict):
return MockUploader(data_dict)
class TestValidationJob(object):
def setup(self):
reset_db()
if not tables_exist():
create_tables()
@change_config('ckanext.validation.run_on_create_async', False)
@mock.patch('ckanext.validation.jobs.validate')
@mock.patch.object(Session, 'commit')
@mock.patch.object(ckantoolkit, 'get_action')
def test_job_run_no_schema(self, mock_get_action, mock_commit, mock_validate):
org = factories.Organization()
dataset = factories.Dataset(private=True, owner_org=org['id'])
resource = {
'id': 'test',
'url': 'http://example.com/file.csv',
'format': 'csv',
'package_id': dataset['id'],
}
run_validation_job(resource)
mock_validate.assert_called_with(
'http://example.com/file.csv',
format='csv',
schema=None)
@mock.patch('ckanext.validation.jobs.validate')
@mock.patch.object(Session, 'commit')
@mock.patch.object(ckantoolkit, 'get_action')
def test_job_run_schema(self, mock_get_action, mock_commit, mock_validate):
org = factories.Organization()
dataset = factories.Dataset(private=True, owner_org=org['id'])
schema = {
'fields': [
{'name': 'id', 'type': 'integer'},
{'name': 'description', 'type': 'string'}
]
}
resource = {
'id': 'test',
'url': 'http://example.com/file.csv',
'format': 'csv',
'schema': json.dumps(schema),
'package_id': dataset['id'],
}
run_validation_job(resource)
mock_validate.assert_called_with(
'http://example.com/file.csv',
format='csv',
schema=schema)
@mock.patch('ckanext.validation.jobs.validate')
@mock.patch.object(uploader, 'get_resource_uploader',
return_value=mock_get_resource_uploader({}))
@mock.patch.object(Session, 'commit')
@mock.patch.object(ckantoolkit, 'get_action')
def test_job_run_uploaded_file(
self, mock_get_action, mock_commit, mock_uploader, mock_validate):
org = factories.Organization()
dataset = factories.Dataset(private=True, owner_org=org['id'])
resource = {
'id': 'test',
'url': '__upload',
'url_type': 'upload',
'format': 'csv',
'package_id': dataset['id'],
}
run_validation_job(resource)
mock_validate.assert_called_with(
'/tmp/example/{}'.format(resource['id']),
format='csv',
schema=None)
@mock.patch('ckanext.validation.jobs.validate',
return_value=VALID_REPORT)
def test_job_run_valid_stores_validation_object(self, mock_validate):
resource = factories.Resource(
url='http://example.com/file.csv', format='csv')
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
assert_equals(validation.status, 'success')
assert_equals(validation.report, VALID_REPORT)
assert validation.finished
@mock.patch('ckanext.validation.jobs.validate',
return_value=INVALID_REPORT)
def test_job_run_invalid_stores_validation_object(self, mock_validate):
resource = factories.Resource(
url='http://example.com/file.csv', format='csv')
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
assert_equals(validation.status, 'failure')
assert_equals(validation.report, INVALID_REPORT)
assert validation.finished
@mock.patch('ckanext.validation.jobs.validate',
return_value=ERROR_REPORT)
def test_job_run_error_stores_validation_object(self, mock_validate):
resource = factories.Resource(
url='http://example.com/file.csv', format='csv')
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
assert_equals(validation.status, 'error')
assert_equals(validation.report, None)
assert_equals(validation.error, {'message': 'Some warning'})
assert validation.finished
@mock.patch('ckanext.validation.jobs.validate',
return_value=VALID_REPORT_LOCAL_FILE)
@mock.patch.object(uploader, 'get_resource_uploader',
return_value=mock_get_resource_uploader({}))
def test_job_run_uploaded_file_replaces_paths(
self, mock_uploader, mock_validate):
resource = factories.Resource(
url='__upload', url_type='upload', format='csv')
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
assert validation.report['tables'][0]['source'].startswith('http')
@mock.patch('ckanext.validation.jobs.validate',
return_value=VALID_REPORT)
def test_job_run_valid_stores_status_in_resource(self, mock_validate):
resource = factories.Resource(
url='http://example.com/file.csv', format='csv')
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
updated_resource = call_action('resource_show', id=resource['id'])
assert_equals(updated_resource['validation_status'], validation.status)
assert_equals(
updated_resource['validation_timestamp'],
validation.finished.isoformat())
@mock_uploads
def test_job_local_paths_are_hidden(self, mock_open):
invalid_csv = 'id,type\n' + '1,a,\n' * 1010
invalid_file = StringIO.StringIO()
invalid_file.write(invalid_csv)
mock_upload = MockFieldStorage(invalid_file, 'invalid.csv')
resource = factories.Resource(format='csv', upload=mock_upload)
invalid_stream = io.BufferedReader(io.BytesIO(invalid_csv))
with mock.patch('io.open', return_value=invalid_stream):
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
source = validation.report['tables'][0]['source']
assert source.startswith('http')
assert source.endswith('invalid.csv')
warning = validation.report['warnings'][0]
assert_equals(
warning, 'Table inspection has reached 1000 row(s) limit')
@mock_uploads
def test_job_pass_validation_options(self, mock_open):
invalid_csv = '''
a,b,c
#comment
1,2,3
'''
validation_options = {
'headers': 3,
'skip_rows': ['#']
}
invalid_file = StringIO.StringIO()
invalid_file.write(invalid_csv)
mock_upload = MockFieldStorage(invalid_file, 'invalid.csv')
resource = factories.Resource(
format='csv',
upload=mock_upload,
validation_options=validation_options)
invalid_stream = io.BufferedReader(io.BytesIO(invalid_csv))
with mock.patch('io.open', return_value=invalid_stream):
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
assert_equals(validation.report['valid'], True)
@mock_uploads
def test_job_pass_validation_options_string(self, mock_open):
invalid_csv = '''
a;b;c
#comment
1;2;3
'''
validation_options = '''{
"headers": 3,
"skip_rows": ["#"]
}'''
invalid_file = StringIO.StringIO()
invalid_file.write(invalid_csv)
mock_upload = MockFieldStorage(invalid_file, 'invalid.csv')
resource = factories.Resource(
format='csv',
upload=mock_upload,
validation_options=validation_options)
invalid_stream = io.BufferedReader(io.BytesIO(invalid_csv))
with mock.patch('io.open', return_value=invalid_stream):
run_validation_job(resource)
validation = Session.query(Validation).filter(
Validation.resource_id == resource['id']).one()
assert_equals(validation.report['valid'], True)
| 30.55082 | 82 | 0.639193 | 1,008 | 9,318 | 5.659722 | 0.144841 | 0.040316 | 0.033655 | 0.046275 | 0.775986 | 0.717967 | 0.704996 | 0.692375 | 0.676599 | 0.669939 | 0 | 0.00283 | 0.241575 | 9,318 | 304 | 83 | 30.651316 | 0.804443 | 0 | 0 | 0.584112 | 0 | 0 | 0.135651 | 0.036059 | 0.004673 | 0 | 0 | 0 | 0.102804 | 1 | 0.065421 | false | 0.009346 | 0.056075 | 0.009346 | 0.140187 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
86428ade7cd153f751b8a3082689ddd3ab507412 | 583 | py | Python | Dataset/Leetcode/train/38/718.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/train/38/718.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | Dataset/Leetcode/train/38/718.py | kkcookies99/UAST | fff81885aa07901786141a71e5600a08d7cb4868 | [
"MIT"
] | null | null | null | class Solution:
def XXX(self, n: int) -> str:
result = ["1"]
for i in range(n - 1):
p = 0
cnt = 1
tmp = []
while p < len(result):
if p + 1 == len(result):
tmp.extend([str(cnt), result[p]])
elif result[p + 1] == result[p]:
cnt += 1
elif result[p + 1] != result[p]:
tmp.extend([str(cnt), result[p]])
cnt = 1
p += 1
result = tmp
return "".join(result)
| 29.15 | 53 | 0.349914 | 65 | 583 | 3.138462 | 0.384615 | 0.205882 | 0.117647 | 0.147059 | 0.401961 | 0.401961 | 0 | 0 | 0 | 0 | 0 | 0.035336 | 0.51458 | 583 | 19 | 54 | 30.684211 | 0.685512 | 0 | 0 | 0.222222 | 0 | 0 | 0.001718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
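The anonymized `XXX` method above is a run-length-encoding solution to the count-and-say sequence (each term "reads off" the previous one). A sketch with the same logic, using an assumed method name since the original identifier is elided:

```python
class Solution:
    def countAndSay(self, n):  # name assumed; same logic as the XXX method above
        result = ["1"]
        for _ in range(n - 1):
            p, cnt, tmp = 0, 1, []
            while p < len(result):
                if p + 1 == len(result):
                    # last digit of the run: flush the final count/digit pair
                    tmp.extend([str(cnt), result[p]])
                elif result[p + 1] == result[p]:
                    cnt += 1
                else:
                    # run ends here: flush count/digit, reset the counter
                    tmp.extend([str(cnt), result[p]])
                    cnt = 1
                p += 1
            result = tmp
        return "".join(result)

terms = [Solution().countAndSay(n) for n in range(1, 5)]
# -> ["1", "11", "21", "1211"]
```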
86538e5f9f67c30484032f5afd7b79629b2843d2 | 791 | py | Python | src/sca3s/backend/acquire/driver/function/generic.py | scarv/sca3s-backend | 62659fcd6986481698df53b99d14d15c6421cf9b | [
"MIT"
] | null | null | null | src/sca3s/backend/acquire/driver/function/generic.py | scarv/sca3s-backend | 62659fcd6986481698df53b99d14d15c6421cf9b | [
"MIT"
] | null | null | null | src/sca3s/backend/acquire/driver/function/generic.py | scarv/sca3s-backend | 62659fcd6986481698df53b99d14d15c6421cf9b | [
"MIT"
] | null | null | null | # Copyright (C) 2018 SCARV project <info@scarv.org>
#
# Use of this source code is restricted per the MIT license, a copy of which
# can be found at https://opensource.org/licenses/MIT (or should be included
# as LICENSE.txt within the associated archive or repository).
from sca3s import backend as sca3s_be
from sca3s import middleware as sca3s_mw
from sca3s.backend.acquire import board as board
from sca3s.backend.acquire import scope as scope
from sca3s.backend.acquire import hybrid as hybrid
from sca3s.backend.acquire import driver as driver
from sca3s.backend.acquire import repo as repo
from sca3s.backend.acquire import depo as depo
import binascii, struct
class DriverImp( driver.function.DriverType ) :
def __init__( self, job ) :
super().__init__( job )
| 32.958333 | 77 | 0.77244 | 122 | 791 | 4.92623 | 0.5 | 0.1198 | 0.159734 | 0.229617 | 0.289517 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021212 | 0.165613 | 791 | 23 | 78 | 34.391304 | 0.889394 | 0.331226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.75 | 0 | 0.916667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
8659284e69613ca4a25c7c878d45767d0e521299 | 1,564 | py | Python | 208/test_combos.py | alehpineda/bitesofpy | bfd319a606cd0b7b9bfb85a3e8942872a2d43c48 | [
"MIT"
] | null | null | null | 208/test_combos.py | alehpineda/bitesofpy | bfd319a606cd0b7b9bfb85a3e8942872a2d43c48 | [
"MIT"
] | 2 | 2020-09-24T11:25:29.000Z | 2021-06-25T15:43:35.000Z | 208/test_combos.py | alehpineda/bitesofpy | bfd319a606cd0b7b9bfb85a3e8942872a2d43c48 | [
"MIT"
] | null | null | null | import pytest
from combos import find_number_pairs
def _sort_all(ret):
return sorted([tuple(sorted(n)) for n in ret])
@pytest.mark.parametrize(
"numbers, N, expected",
[
([2, 3, 5, 4, 6], 10, [(4, 6)]),
([9, 1, 3, 8, 7], 10, [(9, 1), (3, 7)]),
([0.2, 3, 0.4], 10, []),
([0.2, 9.8, 10, 1, 0], 10, [(0.2, 9.8), (10, 0)]),
(
[
0.24,
0.36,
0.04,
0.06,
0.33,
0.08,
0.20,
0.27,
0.3,
0.31,
0.76,
0.05,
0.08,
0.08,
0.67,
0.09,
0.66,
0.79,
0.95,
],
1,
[(0.24, 0.76), (0.33, 0.67), (0.05, 0.95)],
),
([9, 1, 3, 8, 7], 0, []),
([-9, 29, 11, 10, 9, 3, -1, 21], 20, [(-9, 29), (11, 9), (-1, 21)]),
(
[
1.69,
1.82,
2.91,
4.67,
4.81,
3.05,
5.82,
5.06,
4.28,
6.36,
5.19,
4.57,
],
10,
[(4.81, 5.19)],
),
],
)
def test_find_number_pairs(numbers, N, expected):
actual = find_number_pairs(numbers, N=N)
assert type(actual) == list
assert _sort_all(actual) == _sort_all(expected)
| 23 | 76 | 0.289642 | 192 | 1,564 | 2.291667 | 0.328125 | 0.018182 | 0.102273 | 0.018182 | 0.163636 | 0.036364 | 0 | 0 | 0 | 0 | 0 | 0.267409 | 0.540921 | 1,564 | 67 | 77 | 23.343284 | 0.345404 | 0 | 0 | 0.129032 | 0 | 0 | 0.012788 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.032258 | false | 0 | 0.032258 | 0.016129 | 0.080645 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
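The module under test (`combos.find_number_pairs`) is not part of this chunk, but the parametrized cases pin down its behavior: greedy left-to-right pairing, each element used at most once, and a float tolerance so cases like `(0.2, 9.8)` sum to 10 despite rounding. A hypothetical implementation consistent with every case above:

```python
def find_number_pairs(numbers, N, tol=1e-9):
    """Pair up numbers that sum to N; each element is used at most once.

    Hypothetical sketch -- the real combos module is not shown here.
    """
    used = [False] * len(numbers)
    pairs = []
    for i, a in enumerate(numbers):
        if used[i]:
            continue
        for j in range(i + 1, len(numbers)):
            # Tolerance comparison so float cases like 0.2 + 9.8 == 10 match.
            if not used[j] and abs(a + numbers[j] - N) < tol:
                used[i] = used[j] = True
                pairs.append((a, numbers[j]))
                break
    return pairs
```

Note the test's `_sort_all` helper sorts both each pair and the pair list, so the suite only constrains *which* pairs are found, not the order this sketch happens to emit them in.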
865f679cbb5369d1569399fd083023d952ec9d0c | 1,219 | py | Python | library/migrations/0023_auto_20180824_0128.py | doriclazar/peak_30 | a87217e4d0d1f96d39ad214d40a879c7abfaaaee | [
"Apache-2.0"
] | null | null | null | library/migrations/0023_auto_20180824_0128.py | doriclazar/peak_30 | a87217e4d0d1f96d39ad214d40a879c7abfaaaee | [
"Apache-2.0"
] | 1 | 2018-07-14T07:35:55.000Z | 2018-07-16T07:40:49.000Z | library/migrations/0023_auto_20180824_0128.py | doriclazar/peak_30 | a87217e4d0d1f96d39ad214d40a879c7abfaaaee | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2018-08-24 01:28
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('library', '0022_auto_20180824_0121'),
]
operations = [
migrations.AlterField(
model_name='category',
name='code',
field=models.CharField(default='CAT-6335', max_length=8, unique=True),
),
migrations.AlterField(
model_name='class',
name='code',
field=models.CharField(default='CLA-5491', max_length=8, unique=True),
),
migrations.AlterField(
model_name='command',
name='code',
field=models.CharField(default='CMD-0320', max_length=8, unique=True),
),
migrations.AlterField(
model_name='module',
name='code',
field=models.CharField(default='MOD-9895', max_length=8, unique=True),
),
migrations.AlterField(
model_name='profession',
name='code',
field=models.CharField(default='PRO-7060', max_length=8, unique=True),
),
]
| 29.731707 | 82 | 0.575062 | 127 | 1,219 | 5.377953 | 0.448819 | 0.146413 | 0.183016 | 0.212299 | 0.572474 | 0.543192 | 0.286969 | 0.286969 | 0.286969 | 0 | 0 | 0.067364 | 0.293683 | 1,219 | 40 | 83 | 30.475 | 0.7259 | 0.055783 | 0 | 0.454545 | 1 | 0 | 0.109756 | 0.020035 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
86795f7a152ea8db53b41b3c942bd281942c5976 | 166 | py | Python | rest-example/main.py | brunocozendey/Pythonplayground | 41257c5010274f7964b3f72a2d00513ddf8ad3c1 | [
"MIT"
] | null | null | null | rest-example/main.py | brunocozendey/Pythonplayground | 41257c5010274f7964b3f72a2d00513ddf8ad3c1 | [
"MIT"
] | null | null | null | rest-example/main.py | brunocozendey/Pythonplayground | 41257c5010274f7964b3f72a2d00513ddf8ad3c1 | [
"MIT"
] | null | null | null | import requests
from AcessoCep import AcessoCep
cep = "22290040"
objeto_cep = AcessoCep(cep)
bairro, cidade, uf = objeto_cep.acessa_api()
print(bairro,cidade,uf) | 15.090909 | 44 | 0.771084 | 23 | 166 | 5.434783 | 0.565217 | 0.192 | 0.224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 0.13253 | 166 | 11 | 45 | 15.090909 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0.047904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.166667 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
867d47d226f4c140ed5892621b6c063fd5c77d99 | 189 | py | Python | pluto/__init__.py | agoragames/pluto | c5f8b01a74a2c63d376b2436745a590708b62304 | [
"MIT"
] | null | null | null | pluto/__init__.py | agoragames/pluto | c5f8b01a74a2c63d376b2436745a590708b62304 | [
"MIT"
] | null | null | null | pluto/__init__.py | agoragames/pluto | c5f8b01a74a2c63d376b2436745a590708b62304 | [
"MIT"
] | null | null | null | '''
Copyright (c) 2014, Aaron Westendorf All rights reserved.
https://github.com/agoragames/pluto/blob/master/LICENSE.txt
'''
from __future__ import absolute_import
__version__ = '0.0.1'
| 21 | 59 | 0.761905 | 26 | 189 | 5.192308 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04142 | 0.10582 | 189 | 8 | 60 | 23.625 | 0.757396 | 0.624339 | 0 | 0 | 0 | 0 | 0.079365 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
867eddcbb0875131445d8acb036f91f8f4051fdd | 115 | py | Python | const/__init__.py | Fireman730/python-eveng-api | ffa436c49f76963bbad105d74d77dfafa01770f3 | [
"MIT"
] | null | null | null | const/__init__.py | Fireman730/python-eveng-api | ffa436c49f76963bbad105d74d77dfafa01770f3 | [
"MIT"
] | null | null | null | const/__init__.py | Fireman730/python-eveng-api | ffa436c49f76963bbad105d74d77dfafa01770f3 | [
"MIT"
] | 1 | 2021-12-10T18:42:08.000Z | 2021-12-10T18:42:08.000Z | __author__ = "Dylan Hamel"
__version__ = "0.1"
__email__ = "dylan.hamel@protonmail.com"
__status__ = "Prototype"
| 16.428571 | 40 | 0.730435 | 13 | 115 | 5.230769 | 0.846154 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0.130435 | 115 | 6 | 41 | 19.166667 | 0.66 | 0 | 0 | 0 | 0 | 0 | 0.433628 | 0.230089 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
869548d1c846a2ffeeca6e77e724f4a426ab9353 | 379 | py | Python | tests/conftest.py | genisysram/django-etcd-settings | 749fb23728348f580fa44039e9b7976675ba7daa | [
"Apache-2.0"
] | 38 | 2015-12-02T09:17:59.000Z | 2022-02-09T21:27:36.000Z | tests/conftest.py | genisysram/django-etcd-settings | 749fb23728348f580fa44039e9b7976675ba7daa | [
"Apache-2.0"
] | 23 | 2015-12-14T17:32:12.000Z | 2017-10-03T09:55:58.000Z | tests/conftest.py | genisysram/django-etcd-settings | 749fb23728348f580fa44039e9b7976675ba7daa | [
"Apache-2.0"
] | 17 | 2015-12-07T08:29:47.000Z | 2020-11-10T08:54:28.000Z | class TestSettings(object):
ETCD_PREFIX = '/config/etcd_settings'
ETCD_ENV = 'test'
ETCD_HOST = 'etcd'
ETCD_PORT = 2379
ETCD_USERNAME = 'test'
ETCD_PASSWORD = 'test'
ETCD_DETAILS = dict(
host='etcd',
port=2379,
prefix='/config/etcd_settings',
username='test',
password='test'
)
settings = TestSettings()
| 19.947368 | 41 | 0.596306 | 40 | 379 | 5.425 | 0.4 | 0.110599 | 0.147465 | 0.221198 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.282322 | 379 | 18 | 42 | 21.055556 | 0.768382 | 0 | 0 | 0 | 0 | 0 | 0.184697 | 0.110818 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.133333 | 0 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
86ce5026163d3f59a4665c68709d5430a6e925ef | 82 | py | Python | 3.7.0/lldb-3.7.0.src/test/functionalities/command_source/my.py | androm3da/clang_sles | 2ba6d0711546ad681883c42dfb8661b842806695 | [
"MIT"
] | 3 | 2016-02-10T14:18:40.000Z | 2018-02-05T03:15:56.000Z | 3.7.0/lldb-3.7.0.src/test/functionalities/command_source/my.py | androm3da/clang_sles | 2ba6d0711546ad681883c42dfb8661b842806695 | [
"MIT"
] | 1 | 2016-02-10T15:40:03.000Z | 2016-02-10T15:40:03.000Z | 3.7.0/lldb-3.7.0.src/test/functionalities/command_source/my.py | androm3da/clang_sles | 2ba6d0711546ad681883c42dfb8661b842806695 | [
"MIT"
] | null | null | null | def date():
import datetime
today = datetime.date.today()
    print(today)
| 16.4 | 33 | 0.646341 | 10 | 82 | 5.3 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.256098 | 82 | 4 | 34 | 20.5 | 0.868852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |