# ===== src/com/inductiveautomation/ignition/common/__init__.py (thecesrom/8.1, MIT) =====
__all__ = ["AbstractDataset", "BasicDataset", "Dataset"]
class Dataset(object):
"""A dataset is a collection of values arranged in a structured
format.
Most datasets are two dimensional -- they can be viewed as a table
with rows and columns being the two dimensions. Values in a dataset
are usually accessed by specifying one index for each dimension of
data (row and column for tables).
"""
def binarySearch(self, column, key):
pass
def getColumnAsList(self, col):
pass
def getColumnCount(self):
raise NotImplementedError
def getColumnIndex(self, name):
raise NotImplementedError
def getColumnName(self, col):
raise NotImplementedError
def getColumnNames(self):
raise NotImplementedError
def getColumnType(self, col):
raise NotImplementedError
def getColumnTypes(self):
raise NotImplementedError
def getPrimitiveValueAt(self, row, col):
raise NotImplementedError
def getQualityAt(self, row, col):
raise NotImplementedError
def getRowCount(self):
raise NotImplementedError
def getValueAt(self, row, col):
raise NotImplementedError
def hasQualityData(self):
pass
class AbstractDataset(Dataset):
_columnNames = None
_columnNamesLowercase = None
_columnTypes = None
_qualityCodes = None
def __init__(self, columnNames, columnTypes, qualityCodes=None):
self._columnNames = columnNames
self._columnTypes = columnTypes
self._qualityCodes = qualityCodes
@staticmethod
def convertToQualityCodes(dataQualities):
pass
def getBulkQualityCodes(self):
pass
def getColumnCount(self):
pass
def getColumnIndex(self, name):
pass
def getColumnName(self, col):
pass
def getColumnNames(self):
pass
def getColumnType(self, col):
pass
def getColumnTypes(self):
pass
def getPrimitiveValueAt(self, row, col):
pass
def getQualityAt(self, row, col):
pass
def getRowCount(self):
pass
def getValueAt(self, row, col):
pass
def setColumnNames(self, arg):
pass
def setColumnTypes(self, arg):
pass
def setDirty(self):
pass
class BasicDataset(AbstractDataset):
def __init__(self, columnNames=None, columnTypes=None):
super(BasicDataset, self).__init__(columnNames, columnTypes)
def columnContainsNulls(self, col):
pass
def datasetContainsNulls(self):
pass
def getData(self):
pass
def setAllDirectly(self, columnNames, columnTypes, data):
pass
def setDataDirectly(self, arg):
pass
def setFromXML(self, columnNames, columnTypes, encodedData, rowCount):
pass
def setValueAt(self, row, col, value):
pass
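# Illustration (not in the original file): the classes above are interface
# stubs, so here is a minimal concrete implementation to make the
# row/column access contract described in the Dataset docstring tangible.
# The ListDataset name and list-of-rows storage are assumptions of this
# sketch, not part of the Ignition API.
class ListDataset(Dataset):
    def __init__(self, columnNames, rows):
        self._columnNames = columnNames
        self._rows = rows
    def getColumnCount(self):
        return len(self._columnNames)
    def getColumnIndex(self, name):
        return self._columnNames.index(name)
    def getColumnName(self, col):
        return self._columnNames[col]
    def getColumnNames(self):
        return list(self._columnNames)
    def getRowCount(self):
        return len(self._rows)
    def getValueAt(self, row, col):
        # one index per dimension: row first, then column
        return self._rows[row][col]

people = ListDataset(["name", "age"], [["Ada", 36], ["Alan", 41]])
print(people.getValueAt(0, people.getColumnIndex("age")))  # 36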

# ===== gatenlp/processing/matcher.py (gitter-badger/python-gatenlp, Apache-2.0) =====
"""
Module that defines classes for matchers other than gazetteers which match e.g. regular expressions
of strings or annotations.
"""
class StringRegexMatcher:
"""
NOT YET IMPLEMENTED
"""
pass
# class AnnotationRegexMatcher:
# """ """
# pass
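# Illustration (not in the original file): StringRegexMatcher is still a
# stub, so the following is only a speculative sketch of the kind of
# interface such a matcher could expose. The constructor signature and
# __call__ behavior are assumptions, not gatenlp's actual design.
import re

class SimpleStringRegexMatcher:
    """Yield (start, end, text) for every regex match in a string."""
    def __init__(self, pattern, flags=0):
        self.pattern = re.compile(pattern, flags)
    def __call__(self, text):
        for m in self.pattern.finditer(text):
            yield m.start(), m.end(), m.group()

matcher = SimpleStringRegexMatcher(r"\b[A-Z][a-z]+\b")
print(list(matcher("Alice met Bob")))  # [(0, 5, 'Alice'), (10, 13, 'Bob')]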

# ===== pandas_xyz/__init__.py (aaron-schroeder/pandas-xyz, MIT) =====
from .accessor import PositionAccessor
# from .algorithms import *
# from .scalar import flat_earth, great_circle
__version__ = '0.0.5'
__all__ = ['PositionAccessor']

# ===== sample/scoping/__init__.py (diko316/python-sample, MIT) =====
"""Demonstrate scoping by calling sample.scoping.function.run1() and run2()."""
from sample.scoping import function
def run():
    print("""
    Scoping
    1. function run
    """)
    function.run1()
    print("""
    2. function run
    """)
    function.run2()

# ===== partition_type.py (DenisNovac/GPTUtils, MIT) =====
# Enumeration for GPT disks utility (for Python 3)
#
# Other GPT utilities https://github.com/DenisNovac/GPTUtils
# Documentation https://en.wikipedia.org/wiki/GUID_Partition_Table
from enum import Enum
class PartitionType(Enum):
MBR="024DEE41-33E7-11D3-9D69-0008C781F39F"
EFI="C12A7328-F81F-11D2-BA4B-00A0C93EC93B"
BIOS_boot_partition="21686148-6449-6E6F-744E-656564454649"
# Microsoft types:
Microsoft_reserved_partition="E3C9E316-0B5C-4DB8-817D-F92DF00215AE"
Microsoft_basic_data_partition="EBD0A0A2-B9E5-4433-87C0-68B6B72699C7"
Microsoft_Logical_Disk_Manager_metadata_partition="5808C8AA-7E8F-42E0-85D2-E1E90434CFB3"
Microsoft_Logical_Disk_Manager_data_partition="AF9B60A0-1431-4F62-BC68-3311714A69AD"
Windows_Recovery_Environment="DE94BBA4-06D1-4D40-A16A-BFD50179D6AC"
Microsoft_Storage_Spaces_partition="E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D"
# Linux types
Linux_filesystem_data="0FC63DAF-8483-4772-8E79-3D69D8477DE4"
Linux_RAID_partition="A19D880F-05FC-4D3B-A006-743F0F84911E"
Linux_Root_partition_x86="44479540-F297-41B2-9AF7-D131D5F0458A"
Linux_Root_partition_x86_64="4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709"
Linux_Root_partition_ARM_x32="69DAD710-2CE4-4E3C-B16C-21A1D49ABED3"
Linux_Root_partition_ARM_x64="B921B045-1DF0-41C3-AF44-4C6F280D3FAE"
Linux_Swap_partition="0657FD6D-A4AB-43C4-84E5-0933C84B4F4F"
Linux_Logical_Volume_Manager_partition="E6D6D379-F507-44C2-A23C-238F2A3DF928"
Linux_home_partition="933AC7E1-2EB4-4F13-B844-0E14E2AEF915"
Linux_server_data_partition="3B8F8425-20E0-4F3B-907F-1A25A76F98E8"
Linux_Plain_dm_crypt_partition="7FFEC5C9-2D00-49B7-8941-3EA10A5586B7"
Linux_LUKS_partition="CA7D7CCB-63ED-4C53-861C-1742536059CC"
Reserved="8DA63339-0007-60C0-C436-083AC8230908"
# returns string representation of partition type
@staticmethod
    def type(partition_guid):
        for e in PartitionType:
            if e.value == partition_guid:
                # e.name is the member name without the "PartitionType." prefix
                return e.name
return "Unknown" | 50.166667 | 92 | 0.78168 | 261 | 2,107 | 6.072797 | 0.735632 | 0.022713 | 0.045426 | 0.034069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.251756 | 0.1215 | 2,107 | 42 | 93 | 50.166667 | 0.604538 | 0.134789 | 0 | 0 | 0 | 0 | 0.447934 | 0.436364 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.033333 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |

# ===== api/app/celery/celery_app.py (VidroX/recommdo, MIT) =====
from celery import Celery
from app import settings
broker_url = 'redis://:' + settings.REDIS_PASSWORD + '@redis:6379/0'
celery_app = Celery('recommdo', broker=broker_url, include=['app.celery.celery_worker'])
celery_app.conf.task_routes = {
"app.celery.celery_worker.import_and_analyze_purchases": {'queue': 'celery'},
"app.celery.celery_worker.analyze_purchases": {'queue': 'celery'}
}
celery_app.conf.update(task_track_started=True)
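# Illustration (not in the original file): the task_routes above name two
# tasks in app.celery.celery_worker, which is not shown here. A minimal
# sketch of how those tasks would be declared against this app follows;
# the argument names and (empty) bodies are placeholder assumptions.
@celery_app.task
def import_and_analyze_purchases(file_id):
    ...  # real implementation lives in the project

@celery_app.task
def analyze_purchases(project_id):
    ...  # real implementation lives in the project

# a caller enqueues work onto the 'celery' queue with, e.g.:
# analyze_purchases.delay(42)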

# ===== bot.py (parityapp/mattermost-bot, MIT) =====
import re
from mattermost_bot.bot import listen_to
from mattermost_bot.bot import respond_to
@respond_to('hi', re.IGNORECASE)
def hi(message):
message.reply('I can understand hi or HI!')
@listen_to('Can someone help me?')
def help_me(message):
# Message is replied to the sender (prefixed with @user)
    message.reply('Yes, I can!')

# ===== python_lessons/MtMk_Test_Files/numpy_test.py (1986MMartin/coding-sections-markus, Apache-2.0) =====
import numpy as np
arr = np.arange(0,11)
print(arr)

# ===== flask_simple/mocks.py (Forumouth/flask-simple, MIT / Unlicense) =====
#!/usr/bin/env python
# coding=utf-8
from unittest.mock import MagicMock
render_template = MagicMock(return_value="this is a test")
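# Illustration (not in the original file): in a test, this mock stands in
# for the real render_template; the stubbed return value plus MagicMock's
# call recording make assertions simple. The template name and keyword
# argument below are arbitrary examples.
html = render_template("index.html", user="alice")
assert html == "this is a test"
render_template.assert_called_once_with("index.html", user="alice")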

# ===== Introduction to Computer Science and Programing Using Python/Exercises/Week 2 - Function, Strings and Alogorithms/Polysum.py (Dittz/Learning_Python, MIT) =====
from math import pi
from math import tan
def polysum(n, s):
    """Return the sum of the area and the square of the perimeter
    of a regular polygon with n sides of length s."""
    perimeter = n * s
    # area of a regular polygon: 0.25 * n * s**2 / tan(pi / n)
    area = (0.25 * n * (s ** 2)) / (tan(pi / n))
    result = perimeter ** 2 + area
    return result
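# Sanity check (not in the original file): for a unit square (n=4, s=1)
# the area term is 0.25*4*1/tan(pi/4) = 1.0 and the squared perimeter is
# 4**2 = 16, so polysum(4, 1) is 17.0 up to floating-point rounding.
print(polysum(4, 1))  # ~17.0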

# ===== aws_resources/tests/test_dynamo.py (jottenlips/tinyauth, MIT) =====
from aws_resources.dynamo import table
def test_table():
db = table()
    assert db is not None

# ===== algorithms/sorting/bubble_sort.py (Byung-June/coding_test_study, MIT) =====
nums = [5, 2, 31, 2, 5, 7, 9, 10]
def bubble_sort(nums):
    # each outer pass bubbles the largest remaining element up to position i
    for i in range(len(nums) - 1, 0, -1):
        for j in range(i):
            if nums[j] > nums[j + 1]:
                # swap adjacent out-of-order elements in place
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return nums

bubble_sort(nums)
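# Quick check (not in the original file): the sort is in-place and also
# returns the list, so both of these show sorted output.
print(bubble_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
print(nums)  # [2, 2, 5, 5, 7, 9, 10, 31]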

# ===== api/base/serializers.py (felliott/SHARE, Apache-2.0) =====
from rest_framework_json_api import serializers
__all__ = ('ShareSerializer', )
class ShareSerializer(serializers.ModelSerializer):
pass # Use as base for all serializers in case we need customizations in the future

# ===== config.py (hyhplus/FlaskFirstDemo, Apache-2.0) =====
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
DATABASE = 'db/user.db'
DEBUG = True
SECRET_KEY = 'secret_key_1'
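# Illustration (not in the original file): a Flask app would typically load
# these settings with config.from_object; the app module itself is not
# shown in this repo excerpt.
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')  # picks up DATABASE, DEBUG, SECRET_KEY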

# ===== versioned_hdf5/tests/test_api.py (Quansight/versioned-hdf5, BSD-3-Clause) =====
import os
import itertools
from pytest import raises, mark
import h5py
import datetime
import numpy as np
from numpy.testing import assert_equal
from .helpers import setup_vfile
from ..backend import DEFAULT_CHUNK_SIZE
from ..api import VersionedHDF5File
from ..versions import TIMESTAMP_FMT
from ..wrappers import (InMemoryArrayDataset, InMemoryDataset,
InMemorySparseDataset, DatasetWrapper, InMemoryGroup)
def test_stage_version(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1', '') as group:
group['test_data'] = test_data
version1 = vfile['version1']
assert version1.attrs['prev_version'] == '__first_version__'
assert_equal(version1['test_data'], test_data)
ds = vfile.f['/_version_data/test_data/raw_data']
assert ds.shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(ds[0:1*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(ds[1*DEFAULT_CHUNK_SIZE:2*DEFAULT_CHUNK_SIZE], 2.0)
assert_equal(ds[2*DEFAULT_CHUNK_SIZE:3*DEFAULT_CHUNK_SIZE], 3.0)
with vfile.stage_version('version2', 'version1') as group:
group['test_data'][0] = 0.0
version2 = vfile['version2']
assert version2.attrs['prev_version'] == 'version1'
test_data[0] = 0.0
assert_equal(version2['test_data'], test_data)
assert ds.shape == (4*DEFAULT_CHUNK_SIZE,)
assert_equal(ds[0:1*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(ds[1*DEFAULT_CHUNK_SIZE:2*DEFAULT_CHUNK_SIZE], 2.0)
assert_equal(ds[2*DEFAULT_CHUNK_SIZE:3*DEFAULT_CHUNK_SIZE], 3.0)
assert_equal(ds[3*DEFAULT_CHUNK_SIZE], 0.0)
assert_equal(ds[3*DEFAULT_CHUNK_SIZE+1:4*DEFAULT_CHUNK_SIZE], 1.0)
def test_stage_version_chunk_size(vfile):
chunk_size = 2**10
test_data = np.concatenate((np.ones((2*chunk_size,)),
2*np.ones((chunk_size,)),
3*np.ones((chunk_size,))))
with vfile.stage_version('version1', '') as group:
group.create_dataset('test_data', data=test_data, chunks=(chunk_size,))
with raises(ValueError):
with vfile.stage_version('version_bad') as group:
group.create_dataset('test_data', data=test_data, chunks=(2**9,))
version1 = vfile['version1']
assert version1.attrs['prev_version'] == '__first_version__'
assert_equal(version1['test_data'], test_data)
ds = vfile.f['/_version_data/test_data/raw_data']
assert ds.shape == (3*chunk_size,)
assert_equal(ds[0:1*chunk_size], 1.0)
assert_equal(ds[1*chunk_size:2*chunk_size], 2.0)
assert_equal(ds[2*chunk_size:3*chunk_size], 3.0)
with vfile.stage_version('version2', 'version1') as group:
group['test_data'][0] = 0.0
version2 = vfile['version2']
assert version2.attrs['prev_version'] == 'version1'
test_data[0] = 0.0
assert_equal(version2['test_data'], test_data)
assert ds.shape == (4*chunk_size,)
assert_equal(ds[0:1*chunk_size], 1.0)
assert_equal(ds[1*chunk_size:2*chunk_size], 2.0)
assert_equal(ds[2*chunk_size:3*chunk_size], 3.0)
assert_equal(ds[3*chunk_size], 0.0)
assert_equal(ds[3*chunk_size+1:4*chunk_size], 1.0)
def test_stage_version_compression(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1', '') as group:
group.create_dataset('test_data', data=test_data,
compression='gzip', compression_opts=3)
with raises(ValueError):
with vfile.stage_version('version_bad') as group:
group.create_dataset('test_data', data=test_data, compression='lzf')
with raises(ValueError):
with vfile.stage_version('version_bad') as group:
group.create_dataset('test_data', data=test_data,
compression='gzip', compression_opts=4)
version1 = vfile['version1']
assert version1.attrs['prev_version'] == '__first_version__'
assert_equal(version1['test_data'], test_data)
ds = vfile.f['/_version_data/test_data/raw_data']
assert ds.compression == 'gzip'
assert ds.compression_opts == 3
with vfile.stage_version('version2', 'version1') as group:
group['test_data'][0] = 0.0
version2 = vfile['version2']
assert version2.attrs['prev_version'] == 'version1'
test_data[0] = 0.0
assert_equal(version2['test_data'], test_data)
assert ds.compression == 'gzip'
assert ds.compression_opts == 3
def test_version_int_slicing(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1', '') as group:
group['test_data'] = test_data
with vfile.stage_version('version2', 'version1') as group:
group['test_data'][0] = 2.0
with vfile.stage_version('version3', 'version2') as group:
group['test_data'][0] = 3.0
with vfile.stage_version('version2_1', 'version1', make_current=False) as group:
group['test_data'][0] = 2.0
assert vfile[0]['test_data'][0] == 3.0
with raises(KeyError):
vfile['bad']
with raises(IndexError):
vfile[1]
assert vfile[-1]['test_data'][0] == 2.0
assert vfile[-2]['test_data'][0] == 1.0, vfile[-2]
with raises(IndexError):
vfile[-3]
vfile.current_version = 'version2'
assert vfile[0]['test_data'][0] == 2.0
assert vfile[-1]['test_data'][0] == 1.0
with raises(IndexError):
vfile[-2]
vfile.current_version = 'version2_1'
assert vfile[0]['test_data'][0] == 2.0
assert vfile[-1]['test_data'][0] == 1.0
with raises(IndexError):
vfile[-2]
vfile.current_version = 'version1'
assert vfile[0]['test_data'][0] == 1.0
with raises(IndexError):
vfile[-1]
def test_version_name_slicing(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1', '') as group:
group['test_data'] = test_data
with vfile.stage_version('version2', 'version1') as group:
group['test_data'][0] = 2.0
with vfile.stage_version('version3', 'version2') as group:
group['test_data'][0] = 3.0
with vfile.stage_version('version2_1', 'version1', make_current=False) as group:
group['test_data'][0] = 2.0
assert vfile[0]['test_data'][0] == 3.0
with raises(IndexError):
vfile[1]
assert vfile[-1]['test_data'][0] == 2.0
assert vfile[-2]['test_data'][0] == 1.0, vfile[-2]
with raises(ValueError):
vfile['/_version_data']
def test_iter_versions(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1', '') as group:
group['test_data'] = test_data
with vfile.stage_version('version2', 'version1') as group:
group['test_data'][0] = 2.0
assert set(vfile) == {'version1', 'version2'}
# __contains__ is implemented from __iter__ automatically
assert 'version1' in vfile
assert 'version2' in vfile
assert 'version3' not in vfile
def test_create_dataset(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1', '') as group:
group.create_dataset('test_data', data=test_data)
version1 = vfile['version1']
assert version1.attrs['prev_version'] == '__first_version__'
assert_equal(version1['test_data'], test_data)
with vfile.stage_version('version2') as group:
group.create_dataset('test_data2', data=test_data)
ds = vfile.f['/_version_data/test_data/raw_data']
assert ds.shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(ds[0:1*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(ds[1*DEFAULT_CHUNK_SIZE:2*DEFAULT_CHUNK_SIZE], 2.0)
assert_equal(ds[2*DEFAULT_CHUNK_SIZE:3*DEFAULT_CHUNK_SIZE], 3.0)
ds = vfile.f['/_version_data/test_data2/raw_data']
assert ds.shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(ds[0:1*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(ds[1*DEFAULT_CHUNK_SIZE:2*DEFAULT_CHUNK_SIZE], 2.0)
assert_equal(ds[2*DEFAULT_CHUNK_SIZE:3*DEFAULT_CHUNK_SIZE], 3.0)
assert list(vfile.f['/_version_data/versions/__first_version__']) == []
assert list(vfile.f['/_version_data/versions/version1']) == list(vfile['version1']) == ['test_data']
assert list(vfile.f['/_version_data/versions/version2']) == list(vfile['version2']) == ['test_data', 'test_data2']
def test_changes_dataset(vfile):
# Testcase similar to those on generate_data.py
test_data = np.ones((2*DEFAULT_CHUNK_SIZE,))
name = "testname"
with vfile.stage_version('version1', '') as group:
group.create_dataset(f'{name}/key', data=test_data)
group.create_dataset(f'{name}/val', data=test_data)
version1 = vfile['version1']
assert version1.attrs['prev_version'] == '__first_version__'
assert_equal(version1[f'{name}/key'], test_data)
assert_equal(version1[f'{name}/val'], test_data)
with vfile.stage_version('version2') as group:
key_ds = group[f'{name}/key']
val_ds = group[f'{name}/val']
val_ds[0] = -1
key_ds[0] = 0
key = vfile['version2'][f'{name}/key']
assert key.shape == (2*DEFAULT_CHUNK_SIZE,)
assert_equal(key[0], 0)
assert_equal(key[1:2*DEFAULT_CHUNK_SIZE], 1.0)
val = vfile['version2'][f'{name}/val']
assert val.shape == (2*DEFAULT_CHUNK_SIZE,)
assert_equal(val[0], -1.0)
assert_equal(val[1:2*DEFAULT_CHUNK_SIZE], 1.0)
assert list(vfile.f['_version_data/versions/__first_version__']) == []
assert list(vfile.f['_version_data/versions/version1']) == list(vfile['version1']) == [name]
assert list(vfile.f['_version_data/versions/version2']) == list(vfile['version2']) == [name]
def test_small_dataset(vfile):
# Test creating a dataset that is smaller than the chunk size
data = np.ones((100,))
with vfile.stage_version("version1") as group:
group.create_dataset("test", data=data, chunks=(2**14,))
assert_equal(vfile['version1']['test'], data)
def test_unmodified(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=test_data)
group.create_dataset('test_data2', data=test_data)
assert set(vfile.f['_version_data/versions/version1']) == {'test_data', 'test_data2'}
assert set(vfile['version1']) == {'test_data', 'test_data2'}
assert_equal(vfile['version1']['test_data'], test_data)
assert_equal(vfile['version1']['test_data2'], test_data)
assert vfile['version1'].datasets().keys() == {'test_data', 'test_data2'}
with vfile.stage_version('version2') as group:
group['test_data2'][0] = 0.0
assert set(vfile.f['_version_data/versions/version2']) == {'test_data', 'test_data2'}
assert set(vfile['version2']) == {'test_data', 'test_data2'}
assert_equal(vfile['version2']['test_data'], test_data)
assert_equal(vfile['version2']['test_data2'][0], 0.0)
assert_equal(vfile['version2']['test_data2'][1:], test_data[1:])
def test_delete_version(vfile):
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=test_data)
group.create_dataset('test_data2', data=test_data)
with vfile.stage_version('version2') as group:
del group['test_data2']
assert set(vfile.f['_version_data/versions/version2']) == {'test_data'}
assert set(vfile['version2']) == {'test_data'}
assert_equal(vfile['version2']['test_data'], test_data)
assert vfile['version2'].datasets().keys() == {'test_data'}
def test_resize(vfile):
no_offset_data = np.ones((2*DEFAULT_CHUNK_SIZE,))
offset_data = np.concatenate((np.ones((DEFAULT_CHUNK_SIZE,)),
np.ones((2,))))
with vfile.stage_version('version1') as group:
group.create_dataset('no_offset', data=no_offset_data)
group.create_dataset('offset', data=offset_data)
group = vfile['version1']
assert group['no_offset'].shape == (2*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
# Resize larger, chunk multiple
with vfile.stage_version('larger_chunk_multiple') as group:
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE,))
group = vfile['larger_chunk_multiple']
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], 0.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], 0.0)
# Resize larger, non-chunk multiple
with vfile.stage_version('larger_chunk_non_multiple', 'version1') as group:
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
group = vfile['larger_chunk_non_multiple']
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], 0.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], 0.0)
# Resize smaller, chunk multiple
with vfile.stage_version('smaller_chunk_multiple', 'version1') as group:
group['no_offset'].resize((DEFAULT_CHUNK_SIZE,))
group['offset'].resize((DEFAULT_CHUNK_SIZE,))
group = vfile['smaller_chunk_multiple']
assert group['no_offset'].shape == (DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:], 1.0)
assert_equal(group['offset'][:], 1.0)
# Resize smaller, chunk non-multiple
with vfile.stage_version('smaller_chunk_non_multiple', 'version1') as group:
group['no_offset'].resize((DEFAULT_CHUNK_SIZE + 2,))
group['offset'].resize((DEFAULT_CHUNK_SIZE + 2,))
group = vfile['smaller_chunk_non_multiple']
assert group['no_offset'].shape == (DEFAULT_CHUNK_SIZE + 2,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:], 1.0)
assert_equal(group['offset'][:], 1.0)
# Resize after creation
with vfile.stage_version('version2', 'version1') as group:
# Cover the case where some data is already read in
group['offset'][-1] = 2.0
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], 0.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 1], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 1], 2.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], 0.0)
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE,))
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], 0.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 1], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 1], 2.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], 0.0)
group['no_offset'].resize((DEFAULT_CHUNK_SIZE + 2,))
group['offset'].resize((DEFAULT_CHUNK_SIZE + 2,))
assert group['no_offset'].shape == (DEFAULT_CHUNK_SIZE + 2,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:], 1.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 1], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 1], 2.0)
group['no_offset'].resize((DEFAULT_CHUNK_SIZE,))
group['offset'].resize((DEFAULT_CHUNK_SIZE,))
assert group['no_offset'].shape == (DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:], 1.0)
assert_equal(group['offset'][:], 1.0)
group = vfile['version2']
assert group['no_offset'].shape == (DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:], 1.0)
assert_equal(group['offset'][:], 1.0)
# Resize smaller than a chunk
small_data = np.array([1, 2, 3])
with vfile.stage_version('version1_small', '') as group:
group.create_dataset('small', data=small_data)
with vfile.stage_version('version2_small', 'version1_small') as group:
group['small'].resize((5,))
assert_equal(group['small'], np.array([1, 2, 3, 0, 0]))
group['small'][3:] = np.array([4, 5])
assert_equal(group['small'], np.array([1, 2, 3, 4, 5]))
group = vfile['version1_small']
assert_equal(group['small'], np.array([1, 2, 3]))
group = vfile['version2_small']
assert_equal(group['small'], np.array([1, 2, 3, 4, 5]))
# Resize after calling create_dataset, larger
with vfile.stage_version('resize_after_create_larger', '') as group:
group.create_dataset('data', data=offset_data)
group['data'].resize((DEFAULT_CHUNK_SIZE + 4,))
assert group['data'].shape == (DEFAULT_CHUNK_SIZE + 4,)
assert_equal(group['data'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['data'][DEFAULT_CHUNK_SIZE + 2:], 0.0)
group = vfile['resize_after_create_larger']
assert group['data'].shape == (DEFAULT_CHUNK_SIZE + 4,)
assert_equal(group['data'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['data'][DEFAULT_CHUNK_SIZE + 2:], 0.0)
# Resize after calling create_dataset, smaller
with vfile.stage_version('resize_after_create_smaller', '') as group:
group.create_dataset('data', data=offset_data)
group['data'].resize((DEFAULT_CHUNK_SIZE - 4,))
assert group['data'].shape == (DEFAULT_CHUNK_SIZE - 4,)
assert_equal(group['data'][:], 1.0)
group = vfile['resize_after_create_smaller']
assert group['data'].shape == (DEFAULT_CHUNK_SIZE - 4,)
assert_equal(group['data'][:], 1.0)
def test_resize_unaligned(vfile):
ds_name = 'test_resize_unaligned'
with vfile.stage_version('0') as group:
group.create_dataset(ds_name, data=np.arange(1000))
for i in range(1, 10):
with vfile.stage_version(str(i)) as group:
            n = len(group[ds_name])
            assert_equal(group[ds_name][:], np.arange(i * 1000))
            group[ds_name].resize((n + 1000,))
            group[ds_name][-1000:] = np.arange(n, n + 1000)
assert_equal(group[ds_name][:], np.arange((i + 1) * 1000))
@mark.slow
def test_resize_multiple_dimensions(tmp_path, h5file):
# Test semantics against raw HDF5
vfile = VersionedHDF5File(h5file)
shapes = range(5, 25, 5) # 5, 10, 15, 20
chunks = (10, 10, 10)
    for i, (oldshape, newshape) in enumerate(
            itertools.combinations_with_replacement(
                itertools.product(shapes, repeat=3), 2)):
data = np.arange(np.product(oldshape)).reshape(oldshape)
# Get the ground truth from h5py
vfile.f.create_dataset(f'data{i}', data=data, fillvalue=-1, chunks=chunks,
maxshape=(None, None, None))
vfile.f[f'data{i}'].resize(newshape)
new_data = vfile.f[f'data{i}'][()]
# resize after creation
with vfile.stage_version(f'version1_{i}') as group:
group.create_dataset(f'dataset1_{i}', data=data, chunks=chunks,
fillvalue=-1)
group[f'dataset1_{i}'].resize(newshape)
assert group[f'dataset1_{i}'].shape == newshape
assert_equal(group[f'dataset1_{i}'][()], new_data)
version1 = vfile[f'version1_{i}']
assert version1[f'dataset1_{i}'].shape == newshape
assert_equal(version1[f'dataset1_{i}'][()], new_data)
# resize in a new version
with vfile.stage_version(f'version2_1_{i}', '') as group:
group.create_dataset(f'dataset2_{i}', data=data, chunks=chunks,
fillvalue=-1)
with vfile.stage_version(f'version2_2_{i}', f'version2_1_{i}') as group:
group[f'dataset2_{i}'].resize(newshape)
assert group[f'dataset2_{i}'].shape == newshape
assert_equal(group[f'dataset2_{i}'][()], new_data, str((oldshape, newshape)))
version2_2 = vfile[f'version2_2_{i}']
assert version2_2[f'dataset2_{i}'].shape == newshape
assert_equal(version2_2[f'dataset2_{i}'][()], new_data)
# resize after some data is read in
with vfile.stage_version(f'version3_1_{i}', '') as group:
group.create_dataset(f'dataset3_{i}', data=data, chunks=chunks,
fillvalue=-1)
with vfile.stage_version(f'version3_2_{i}', f'version3_1_{i}') as group:
# read in first and last chunks
group[f'dataset3_{i}'][0, 0, 0]
group[f'dataset3_{i}'][-1, -1, -1]
group[f'dataset3_{i}'].resize(newshape)
assert group[f'dataset3_{i}'].shape == newshape
assert_equal(group[f'dataset3_{i}'][()], new_data)
version3_2 = vfile[f'version3_2_{i}']
assert version3_2[f'dataset3_{i}'].shape == newshape
assert_equal(version3_2[f'dataset3_{i}'][()], new_data)
def test_getitem(vfile):
data = np.arange(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=data)
test_data = group['test_data']
assert test_data.shape == (2*DEFAULT_CHUNK_SIZE,)
assert_equal(test_data[0], 0)
assert test_data[0].dtype == np.int64
assert_equal(test_data[:], data)
assert_equal(test_data[:DEFAULT_CHUNK_SIZE+1], data[:DEFAULT_CHUNK_SIZE+1])
with vfile.stage_version('version2') as group:
test_data = group['test_data']
assert test_data.shape == (2*DEFAULT_CHUNK_SIZE,)
assert_equal(test_data[0], 0)
assert test_data[0].dtype == np.int64
assert_equal(test_data[:], data)
assert_equal(test_data[:DEFAULT_CHUNK_SIZE+1], data[:DEFAULT_CHUNK_SIZE+1])
def test_timestamp_auto(vfile):
data = np.ones((2*DEFAULT_CHUNK_SIZE,))
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=data)
assert isinstance(vfile['version1'].attrs['timestamp'], str)
def test_timestamp_manual(vfile):
data1 = np.ones((2*DEFAULT_CHUNK_SIZE,))
data2 = np.ones((3*DEFAULT_CHUNK_SIZE))
ts1 = datetime.datetime(2020, 6, 29, 20, 12, 56, tzinfo=datetime.timezone.utc)
ts2 = datetime.datetime(2020, 6, 29, 22, 12, 56)
with vfile.stage_version('version1', timestamp=ts1) as group:
group['test_data_1'] = data1
assert vfile['version1'].attrs['timestamp'] == ts1.strftime(TIMESTAMP_FMT)
with raises(ValueError):
with vfile.stage_version('version2', timestamp=ts2) as group:
group['test_data_2'] = data2
with raises(TypeError):
with vfile.stage_version('version3', timestamp='2020-6-29') as group:
group['test_data_3'] = data1
def test_timestamp_pytz(vfile):
# pytz is not a dependency of versioned-hdf5, but it is supported if it is
# used.
import pytz
data1 = np.ones((2*DEFAULT_CHUNK_SIZE,))
data2 = np.ones((3*DEFAULT_CHUNK_SIZE))
ts1 = datetime.datetime(2020, 6, 29, 20, 12, 56, tzinfo=pytz.utc)
ts2 = datetime.datetime(2020, 6, 29, 22, 12, 56)
with vfile.stage_version('version1', timestamp=ts1) as group:
group['test_data_1'] = data1
assert vfile['version1'].attrs['timestamp'] == ts1.strftime(TIMESTAMP_FMT)
with raises(ValueError):
with vfile.stage_version('version2', timestamp=ts2) as group:
group['test_data_2'] = data2
with raises(TypeError):
with vfile.stage_version('version3', timestamp='2020-6-29') as group:
group['test_data_3'] = data1
def test_timestamp_manual_datetime64(vfile):
data = np.ones((2*DEFAULT_CHUNK_SIZE,))
# Also tests that it works correctly for 0 fractional part (issue #190).
ts = datetime.datetime(2020, 6, 29, 20, 12, 56, tzinfo=datetime.timezone.utc)
npts = np.datetime64(ts.replace(tzinfo=None))
with vfile.stage_version('version1', timestamp=npts) as group:
group['test_data'] = data
v1 = vfile['version1']
assert v1.attrs['timestamp'] == ts.strftime(TIMESTAMP_FMT)
assert vfile[npts] == v1
assert vfile[ts] == v1
assert vfile.get_version_by_timestamp(npts, exact=True) == v1
assert vfile.get_version_by_timestamp(ts, exact=True) == v1
def test_getitem_by_timestamp(vfile):
data = np.arange(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=data)
v1 = vfile['version1']
ts1 = datetime.datetime.strptime(v1.attrs['timestamp'], TIMESTAMP_FMT)
assert vfile[ts1] == v1
assert vfile.get_version_by_timestamp(ts1) == v1
assert vfile.get_version_by_timestamp(ts1, exact=True) == v1
dt1 = np.datetime64(ts1.replace(tzinfo=None))
assert vfile[dt1] == v1
assert vfile.get_version_by_timestamp(dt1) == v1
assert vfile.get_version_by_timestamp(dt1, exact=True) == v1
minute = datetime.timedelta(minutes=1)
second = datetime.timedelta(seconds=1)
ts2 = ts1 + minute
dt2 = np.datetime64(ts2.replace(tzinfo=None))
with vfile.stage_version('version2', timestamp=ts2) as group:
group['test_data'][0] += 1
v2 = vfile['version2']
assert vfile[ts2] == v2
assert vfile.get_version_by_timestamp(ts2) == v2
assert vfile.get_version_by_timestamp(ts2, exact=True) == v2
assert vfile[dt2] == v2
assert vfile.get_version_by_timestamp(dt2) == v2
assert vfile.get_version_by_timestamp(dt2, exact=True) == v2
ts2_1 = ts2 + second
dt2_1 = np.datetime64(ts2_1.replace(tzinfo=None))
assert vfile[ts2_1] == v2
assert vfile.get_version_by_timestamp(ts2_1) == v2
raises(KeyError, lambda: vfile.get_version_by_timestamp(ts2_1, exact=True))
assert vfile[dt2_1] == v2
assert vfile.get_version_by_timestamp(dt2_1) == v2
raises(KeyError, lambda: vfile.get_version_by_timestamp(dt2_1, exact=True))
ts1_1 = ts1 + second
dt1_1 = np.datetime64(ts1_1.replace(tzinfo=None))
assert vfile[ts1_1] == v1
assert vfile.get_version_by_timestamp(ts1_1) == v1
raises(KeyError, lambda: vfile.get_version_by_timestamp(ts1_1, exact=True))
assert vfile[dt1_1] == v1
assert vfile.get_version_by_timestamp(dt1_1) == v1
raises(KeyError, lambda: vfile.get_version_by_timestamp(dt1_1, exact=True))
ts0 = ts1 - second
dt0 = np.datetime64(ts0.replace(tzinfo=None))
raises(KeyError, lambda: vfile[ts0] == v1)
raises(KeyError, lambda: vfile.get_version_by_timestamp(ts0) == v1)
raises(KeyError, lambda: vfile.get_version_by_timestamp(ts0, exact=True))
raises(KeyError, lambda: vfile[dt0] == v1)
raises(KeyError, lambda: vfile.get_version_by_timestamp(dt0) == v1)
raises(KeyError, lambda: vfile.get_version_by_timestamp(dt0, exact=True))
def test_nonroot(vfile):
g = vfile.f.create_group('subgroup')
file = VersionedHDF5File(g)
test_data = np.concatenate((np.ones((2*DEFAULT_CHUNK_SIZE,)),
2*np.ones((DEFAULT_CHUNK_SIZE,)),
3*np.ones((DEFAULT_CHUNK_SIZE,))))
with file.stage_version('version1', '') as group:
group['test_data'] = test_data
version1 = file['version1']
assert version1.attrs['prev_version'] == '__first_version__'
assert_equal(version1['test_data'], test_data)
ds = vfile.f['/subgroup/_version_data/test_data/raw_data']
assert ds.shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(ds[0:1*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(ds[1*DEFAULT_CHUNK_SIZE:2*DEFAULT_CHUNK_SIZE], 2.0)
assert_equal(ds[2*DEFAULT_CHUNK_SIZE:3*DEFAULT_CHUNK_SIZE], 3.0)
def test_attrs(vfile):
data = np.arange(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=data)
test_data = group['test_data']
assert 'test_attr' not in test_data.attrs
test_data.attrs['test_attr'] = 0
assert vfile['version1']['test_data'].attrs['test_attr'] == \
vfile.f['_version_data']['versions']['version1']['test_data'].attrs['test_attr'] == 0
with vfile.stage_version('version2') as group:
test_data = group['test_data']
assert test_data.attrs['test_attr'] == 0
test_data.attrs['test_attr'] = 1
assert vfile['version1']['test_data'].attrs['test_attr'] == \
vfile.f['_version_data']['versions']['version1']['test_data'].attrs['test_attr'] == 0
assert vfile['version2']['test_data'].attrs['test_attr'] == \
vfile.f['_version_data']['versions']['version2']['test_data'].attrs['test_attr'] == 1
def test_auto_delete(vfile):
try:
with vfile.stage_version('version1') as group:
raise RuntimeError
except RuntimeError:
pass
else:
raise AssertionError("did not raise")
# Make sure the version got deleted so that we can make it again
data = np.arange(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=data)
assert_equal(vfile['version1']['test_data'], data)
def test_delitem(vfile):
data = np.arange(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_dataset('test_data', data=data)
with vfile.stage_version('version2') as group:
group.create_dataset('test_data2', data=data)
del vfile['version2']
assert list(vfile) == ['version1']
assert vfile.current_version == 'version1'
with raises(KeyError):
del vfile['version2']
del vfile['version1']
assert list(vfile) == []
assert vfile.current_version == '__first_version__'
def test_groups(vfile):
data = np.ones(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_group('group1')
group.create_dataset('group1/test_data', data=data)
assert_equal(group['group1']['test_data'], data)
assert_equal(group['group1/test_data'], data)
version = vfile['version1']
assert_equal(version['group1']['test_data'], data)
assert_equal(version['group1/test_data'], data)
with vfile.stage_version('version2', '') as group:
group.create_dataset('group1/test_data', data=data)
assert_equal(group['group1']['test_data'], data)
assert_equal(group['group1/test_data'], data)
version = vfile['version2']
assert_equal(version['group1']['test_data'], data)
assert_equal(version['group1/test_data'], data)
with vfile.stage_version('version3', 'version1') as group:
group['group1']['test_data'][0] = 0
group['group1/test_data'][1] = 0
assert_equal(group['group1']['test_data'][:2], 0)
assert_equal(group['group1']['test_data'][2:], 1)
assert_equal(group['group1/test_data'][:2], 0)
assert_equal(group['group1/test_data'][2:], 1)
version = vfile['version3']
assert_equal(version['group1']['test_data'][:2], 0)
assert_equal(version['group1']['test_data'][2:], 1)
assert_equal(version['group1/test_data'][:2], 0)
assert_equal(version['group1/test_data'][2:], 1)
assert list(version) == ['group1']
assert list(version['group1']) == ['test_data']
with vfile.stage_version('version4', 'version3') as group:
group.create_dataset('group2/test_data', data=2*data)
assert_equal(group['group1']['test_data'][:2], 0)
assert_equal(group['group1']['test_data'][2:], 1)
assert_equal(group['group2']['test_data'][:], 2)
assert_equal(group['group1/test_data'][:2], 0)
assert_equal(group['group1/test_data'][2:], 1)
assert_equal(group['group2/test_data'][:], 2)
version = vfile['version4']
assert_equal(version['group1']['test_data'][:2], 0)
assert_equal(version['group1']['test_data'][2:], 1)
assert_equal(group['group2']['test_data'][:], 2)
assert_equal(version['group1/test_data'][:2], 0)
assert_equal(version['group1/test_data'][2:], 1)
assert_equal(group['group2/test_data'][:], 2)
assert list(version) == ['group1', 'group2']
assert list(version['group1']) == ['test_data']
assert list(version['group2']) == ['test_data']
with vfile.stage_version('version5', '') as group:
group.create_dataset('group1/group2/test_data', data=data)
assert_equal(group['group1']['group2']['test_data'], data)
assert_equal(group['group1/group2']['test_data'], data)
assert_equal(group['group1']['group2/test_data'], data)
assert_equal(group['group1/group2/test_data'], data)
version = vfile['version5']
assert_equal(version['group1']['group2']['test_data'], data)
assert_equal(version['group1/group2']['test_data'], data)
assert_equal(version['group1']['group2/test_data'], data)
assert_equal(version['group1/group2/test_data'], data)
with vfile.stage_version('version6', '') as group:
group.create_dataset('group1/test_data1', data=data)
group.create_dataset('group1/group2/test_data2', data=2*data)
group.create_dataset('group1/group2/group3/test_data3', data=3*data)
group.create_dataset('group1/group2/test_data4', data=4*data)
assert_equal(group['group1']['test_data1'], data)
assert_equal(group['group1/test_data1'], data)
assert_equal(group['group1']['group2']['test_data2'], 2*data)
assert_equal(group['group1/group2']['test_data2'], 2*data)
assert_equal(group['group1']['group2/test_data2'], 2*data)
assert_equal(group['group1/group2/test_data2'], 2*data)
assert_equal(group['group1']['group2']['group3']['test_data3'], 3*data)
assert_equal(group['group1/group2']['group3']['test_data3'], 3*data)
assert_equal(group['group1/group2']['group3/test_data3'], 3*data)
assert_equal(group['group1']['group2/group3/test_data3'], 3*data)
assert_equal(group['group1/group2/group3/test_data3'], 3*data)
assert_equal(group['group1']['group2']['test_data4'], 4*data)
assert_equal(group['group1/group2']['test_data4'], 4*data)
assert_equal(group['group1']['group2/test_data4'], 4*data)
assert_equal(group['group1/group2/test_data4'], 4*data)
assert list(group) == ['group1']
assert set(group['group1']) == {'group2', 'test_data1'}
assert set(group['group1']['group2']) == set(group['group1/group2']) == {'group3', 'test_data2', 'test_data4'}
assert list(group['group1']['group2']['group3']) == list(group['group1/group2/group3']) == ['test_data3']
version = vfile['version6']
assert_equal(version['group1']['test_data1'], data)
assert_equal(version['group1/test_data1'], data)
assert_equal(version['group1']['group2']['test_data2'], 2*data)
assert_equal(version['group1/group2']['test_data2'], 2*data)
assert_equal(version['group1']['group2/test_data2'], 2*data)
assert_equal(version['group1/group2/test_data2'], 2*data)
assert_equal(version['group1']['group2']['group3']['test_data3'], 3*data)
assert_equal(version['group1/group2']['group3']['test_data3'], 3*data)
assert_equal(version['group1/group2']['group3/test_data3'], 3*data)
assert_equal(version['group1']['group2/group3/test_data3'], 3*data)
assert_equal(version['group1/group2/group3/test_data3'], 3*data)
assert_equal(version['group1']['group2']['test_data4'], 4*data)
assert_equal(version['group1/group2']['test_data4'], 4*data)
assert_equal(version['group1']['group2/test_data4'], 4*data)
assert_equal(version['group1/group2/test_data4'], 4*data)
assert list(version) == ['group1']
assert set(version['group1']) == {'group2', 'test_data1'}
assert set(version['group1']['group2']) == set(version['group1/group2']) == {'group3', 'test_data2', 'test_data4'}
assert list(version['group1']['group2']['group3']) == list(version['group1/group2/group3']) == ['test_data3']
with vfile.stage_version('version-bad', '') as group:
raises(ValueError, lambda: group.create_dataset('/group1/test_data', data=data))
raises(ValueError, lambda: group.create_group('/group1'))
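# The path equivalences exercised above come down to h5py-style path splitting.
# A minimal standalone sketch of the idea (illustrative plain Python, not the
# versioned_hdf5 implementation; the name _resolve is hypothetical):
def _resolve(group, path):
    """Walk nested mappings the way 'a/b/c' indexing walks HDF5 groups."""
    node = group
    for part in path.strip('/').split('/'):
        node = node[part]
    return node
# e.g. _resolve({'group1': {'test_data': 1}}, 'group1/test_data') == 1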
def test_group_contains(vfile):
data = np.ones(2*DEFAULT_CHUNK_SIZE)
with vfile.stage_version('version1') as group:
group.create_dataset('group1/group2/test_data', data=data)
assert 'group1' in group
assert 'group2' in group['group1']
assert 'test_data' in group['group1/group2']
assert 'test_data' not in group
assert 'test_data' not in group['group1']
assert 'group1/group2' in group
assert 'group1/group3' not in group
assert 'group1/group2/test_data' in group
assert 'group1/group3/test_data' not in group
assert 'group1/group3/test_data2' not in group
with vfile.stage_version('version2') as group:
group.create_dataset('group1/group3/test_data2', data=data)
assert 'group1' in group
assert 'group2' in group['group1']
assert 'group3' in group['group1']
assert 'test_data' in group['group1/group2']
assert 'test_data' not in group
assert 'test_data' not in group['group1']
assert 'test_data2' in group['group1/group3']
assert 'test_data2' not in group['group1/group2']
assert 'group1/group2' in group
assert 'group1/group3' in group
assert 'group1/group2/test_data' in group
assert 'group1/group3/test_data' not in group
assert 'group1/group3/test_data2' in group
version1 = vfile['version1']
version2 = vfile['version2']
assert 'group1' in version1
assert 'group1/' in version1
assert 'group1' in version2
assert 'group1/' in version2
assert 'group2' in version1['group1']
assert 'group2/' in version1['group1']
assert 'group2' in version2['group1']
assert 'group2/' in version2['group1']
assert 'group3' not in version1['group1']
assert 'group3/' not in version1['group1']
assert 'group3' in version2['group1']
assert 'group3/' in version2['group1']
assert 'group1/group2' in version1
assert 'group1/group2/' in version1
assert 'group1/group2' in version2
assert 'group1/group2/' in version2
assert 'group1/group3' not in version1
assert 'group1/group3/' not in version1
assert 'group1/group3' in version2
assert 'group1/group3/' in version2
assert 'group1/group2/test_data' in version1
assert 'group1/group2/test_data/' in version1
assert 'group1/group2/test_data' in version2
assert 'group1/group2/test_data/' in version2
assert 'group1/group3/test_data' not in version1
assert 'group1/group3/test_data/' not in version1
assert 'group1/group3/test_data' not in version2
assert 'group1/group3/test_data/' not in version2
assert 'group1/group3/test_data2' not in version1
assert 'group1/group3/test_data2/' not in version1
assert 'group1/group3/test_data2' in version2
assert 'group1/group3/test_data2/' in version2
assert 'test_data' in version1['group1/group2']
assert 'test_data' in version2['group1/group2']
assert 'test_data' not in version1
assert 'test_data' not in version2
assert 'test_data' not in version1['group1']
assert 'test_data' not in version2['group1']
assert 'test_data2' in version2['group1/group3']
assert 'test_data2' not in version1['group1/group2']
assert 'test_data2' not in version2['group1/group2']
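# Fully-qualified paths under this version's own /_version_data prefix also
# count as members; the same paths resolved against another version do not.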
assert '/_version_data/versions/version1/' in version1
assert '/_version_data/versions/version1' in version1
assert '/_version_data/versions/version1/' not in version2
assert '/_version_data/versions/version1' not in version2
assert '/_version_data/versions/version1/group1' in version1
assert '/_version_data/versions/version1/group1' not in version2
assert '/_version_data/versions/version1/group1/group2' in version1
assert '/_version_data/versions/version1/group1/group2' not in version2
@mark.setup_args(file_name='test.hdf5')
def test_moved_file(tmp_path, h5file):
# See issue #28. Make sure the virtual datasets do not hard-code the filename.
file = VersionedHDF5File(h5file)
data = np.ones(2*DEFAULT_CHUNK_SIZE)
with file.stage_version('version1') as group:
group['dataset'] = data
file.close()
with h5py.File('test.hdf5', 'r') as f:
file = VersionedHDF5File(f)
assert_equal(file['version1']['dataset'][:], data)
file.close()
# XXX: use os.replace() here for an atomic, cross-platform move
os.rename('test.hdf5', 'test2.hdf5')
with h5py.File('test2.hdf5', 'r') as f:
file = VersionedHDF5File(f)
assert_equal(file['version1']['dataset'][:], data)
file.close()
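# Background sketch: h5py virtual layouts can reference their own file as '.',
# which is what makes the rename above safe. Illustrative only (assumes
# h5py >= 2.9; 'raw_data' and `n` are placeholder names, not suite fixtures):
#
#     layout = h5py.VirtualLayout(shape=(n,), dtype='f8')
#     layout[:] = h5py.VirtualSource('.', 'raw_data', shape=(n,))
#     f.create_virtual_dataset('view', layout)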
def test_list_assign(vfile):
data = [1, 2, 3]
with vfile.stage_version('version1') as group:
group['dataset'] = data
assert_equal(group['dataset'][:], data)
assert_equal(vfile['version1']['dataset'][:], data)
def test_nested_group(vfile):
# Issue #66
data1 = np.array([1, 1])
data2 = np.array([2, 2])
with vfile.stage_version('1') as sv:
sv.create_dataset('bar/baz', data=data1)
assert_equal(sv['bar/baz'][:], data1)
assert_equal(sv['bar/baz'][:], data1)
with vfile.stage_version('2') as sv:
sv.create_dataset('bar/bon/1/data/axes/date', data=data2)
assert_equal(sv['bar/baz'][:], data1)
assert_equal(sv['bar/bon/1/data/axes/date'][:], data2)
version1 = vfile['1']
version2 = vfile['2']
assert_equal(version1['bar/baz'][:], data1)
assert_equal(version2['bar/baz'][:], data1)
assert 'bar/bon/1/data/axes/date' not in version1
assert_equal(version2['bar/bon/1/data/axes/date'][:], data2)
def test_fillvalue(vfile):
# Based on test_resize(), but covers only the resize-larger cases that use
# the fill value.
fillvalue = 5.0
no_offset_data = np.ones((2*DEFAULT_CHUNK_SIZE,))
offset_data = np.concatenate((np.ones((DEFAULT_CHUNK_SIZE,)),
np.ones((2,))))
with vfile.stage_version('version1') as group:
group.create_dataset('no_offset', data=no_offset_data, fillvalue=fillvalue)
group.create_dataset('offset', data=offset_data, fillvalue=fillvalue)
group = vfile['version1']
assert group['no_offset'].shape == (2*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
# Resize larger, chunk multiple
with vfile.stage_version('larger_chunk_multiple') as group:
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE,))
group = vfile['larger_chunk_multiple']
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], fillvalue)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], fillvalue)
# Resize larger, non-chunk multiple
with vfile.stage_version('larger_chunk_non_multiple', 'version1') as group:
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
group = vfile['larger_chunk_non_multiple']
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], fillvalue)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], fillvalue)
# Resize after creation
with vfile.stage_version('version2', 'version1') as group:
# Cover the case where some data is already read in
group['offset'][-1] = 2.0
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE + 2,))
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE + 2,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], fillvalue)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 1], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 1], 2.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], fillvalue)
group['no_offset'].resize((3*DEFAULT_CHUNK_SIZE,))
group['offset'].resize((3*DEFAULT_CHUNK_SIZE,))
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], fillvalue)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 1], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 1], 2.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], fillvalue)
group = vfile['version2']
assert group['no_offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert group['offset'].shape == (3*DEFAULT_CHUNK_SIZE,)
assert_equal(group['no_offset'][:2*DEFAULT_CHUNK_SIZE], 1.0)
assert_equal(group['no_offset'][2*DEFAULT_CHUNK_SIZE:], fillvalue)
assert_equal(group['offset'][:DEFAULT_CHUNK_SIZE + 1], 1.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 1], 2.0)
assert_equal(group['offset'][DEFAULT_CHUNK_SIZE + 2:], fillvalue)
# Resize after calling create_dataset, larger
with vfile.stage_version('resize_after_create_larger', '') as group:
group.create_dataset('data', data=offset_data,
fillvalue=fillvalue)
group['data'].resize((DEFAULT_CHUNK_SIZE + 4,))
assert group['data'].shape == (DEFAULT_CHUNK_SIZE + 4,)
assert_equal(group['data'][:DEFAULT_CHUNK_SIZE + 2], 1.0)
assert_equal(group['data'][DEFAULT_CHUNK_SIZE + 2:], fillvalue)
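# The fill-value behavior above mirrors plain resizable h5py datasets; a
# minimal standalone comparison sketch (assumes a writable h5py.File `f`):
#
#     d = f.create_dataset('x', data=np.ones(4), maxshape=(None,), fillvalue=5.0)
#     d.resize((6,))
#     assert_equal(d[:4], 1.0)
#     assert_equal(d[4:], 5.0)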
def test_multidimensional(vfile):
data = np.ones((2*DEFAULT_CHUNK_SIZE, 5))
with vfile.stage_version('version1') as g:
g.create_dataset('test_data', data=data,
chunks=(DEFAULT_CHUNK_SIZE, 2))
assert_equal(g['test_data'][()], data)
version1 = vfile['version1']
assert_equal(version1['test_data'][()], data)
data2 = data.copy()
data2[0, 1] = 2
with vfile.stage_version('version2') as g:
g['test_data'][0, 1] = 2
assert g['test_data'][0, 1] == 2
assert_equal(g['test_data'][()], data2)
version2 = vfile['version2']
assert version2['test_data'][0, 1] == 2
assert_equal(version2['test_data'][()], data2)
data3 = data.copy()
data3[0:1] = 3
with vfile.stage_version('version3', 'version1') as g:
g['test_data'][0:1] = 3
assert_equal(g['test_data'][0:1], 3)
assert_equal(g['test_data'][()], data3)
version3 = vfile['version3']
assert_equal(version3['test_data'][0:1], 3)
assert_equal(version3['test_data'][()], data3)
def test_group_chunks_compression(vfile):
# Chunks and compression are similar, so test them both at the same time.
data = np.ones((2*DEFAULT_CHUNK_SIZE, 5))
with vfile.stage_version('version1') as g:
g2 = g.create_group('group')
g2.create_dataset('test_data', data=data,
chunks=(DEFAULT_CHUNK_SIZE, 2),
compression='gzip',
compression_opts=3)
assert_equal(g2['test_data'][()], data)
assert_equal(g['group/test_data'][()], data)
assert_equal(g['group']['test_data'][()], data)
version1 = vfile['version1']
assert_equal(version1['group']['test_data'][()], data)
assert_equal(version1['group/test_data'][()], data)
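# Chunk and compression options live on the raw backing dataset under
# /_version_data, not on the per-version virtual view, hence the check below.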
raw_data = vfile.f['/_version_data/group/test_data/raw_data']
assert raw_data.compression == 'gzip'
assert raw_data.compression_opts == 3
def test_closes(vfile):
data = np.ones((DEFAULT_CHUNK_SIZE,))
with vfile.stage_version('version1') as g:
g.create_dataset('test_data', data=data)
assert vfile._closed is False
assert vfile.closed is False
version_data = vfile._version_data
versions = vfile._versions
h5pyfile = vfile.f
vfile.close()
assert vfile._closed is True
assert vfile.closed is True
raises(AttributeError, lambda: vfile.f)
raises(AttributeError, lambda: vfile._version_data)
raises(AttributeError, lambda: vfile._versions)
assert repr(vfile) == "<Closed VersionedHDF5File>"
reopened_file = VersionedHDF5File(h5pyfile)
assert list(reopened_file['version1']) == ['test_data']
assert_equal(reopened_file['version1']['test_data'][()], data)
assert reopened_file._version_data == version_data
assert reopened_file._versions == versions
# Close the underlying file
h5pyfile.close()
assert vfile.closed is True
raises(ValueError, lambda: vfile['version1'])
raises(ValueError, lambda: vfile['version2'])
assert repr(vfile) == "<Closed VersionedHDF5File>"
def test_scalar_dataset():
for data1, data2 in [
(b'baz', b'foo'),
(np.asarray('baz', dtype='S'), np.asarray('foo', dtype='S')),
(1.5, 2.3),
(1, 0)
]:
dt = np.asarray(data1).dtype
with setup_vfile() as f:
file = VersionedHDF5File(f)
with file.stage_version('v1') as group:
group['scalar_ds'] = data1
v1_ds = file['v1']['scalar_ds']
assert v1_ds[()] == data1
assert v1_ds.shape == ()
assert v1_ds.dtype == dt
with file.stage_version('v2') as group:
group['scalar_ds'] = data2
v2_ds = file['v2']['scalar_ds']
assert v2_ds[()] == data2
assert v2_ds.shape == ()
assert v2_ds.dtype == dt
file.close()
def test_store_binary_as_void(vfile):
with vfile.stage_version('version1') as sv:
sv['test_store_binary_data'] = [np.void(b'1111')]
version1 = vfile['version1']
assert_equal(version1['test_store_binary_data'][0], np.void(b'1111'))
with vfile.stage_version('version2') as sv:
sv['test_store_binary_data'][:] = [np.void(b'1234567890')]
version2 = vfile['version2']
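# The dtype was fixed as a 4-byte void (V4) when version1 was created, so the
# 10-byte value written above is truncated to its first 4 bytes.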
assert_equal(version2['test_store_binary_data'][0], np.void(b'1234'))
def test_check_committed(vfile):
data = np.ones((DEFAULT_CHUNK_SIZE,))
with vfile.stage_version('version1') as g:
g.create_dataset('test_data', data=data)
with raises(ValueError, match="committed"):
g['data'] = data
with raises(ValueError, match="committed"):
g.create_dataset('data', data=data)
with raises(ValueError, match="committed"):
g.create_group('subgroup')
with raises(ValueError, match="committed"):
del g['test_data']
# Incorrectly uses g from the previous version (InMemoryArrayDataset)
with raises(ValueError, match="committed"):
with vfile.stage_version('version2'):
assert isinstance(g['test_data'], InMemoryArrayDataset)
g['test_data'][0] = 1
with raises(ValueError, match="committed"):
with vfile.stage_version('version2'):
assert isinstance(g['test_data'], InMemoryArrayDataset)
g['test_data'].resize((100,))
with vfile.stage_version('version2') as g2:
pass
# Incorrectly uses g from the previous version (InMemoryDataset)
with raises(ValueError, match="committed"):
with vfile.stage_version('version3'):
assert isinstance(g2['test_data'], DatasetWrapper)
assert isinstance(g2['test_data'].dataset, InMemoryDataset)
g2['test_data'][0] = 1
with raises(ValueError, match="committed"):
with vfile.stage_version('version3'):
assert isinstance(g2['test_data'], DatasetWrapper)
assert isinstance(g2['test_data'].dataset, InMemoryDataset)
g2['test_data'].resize((100,))
assert repr(g) == '<Committed InMemoryGroup "/_version_data/versions/version1">'
def test_set_chunks_nested(vfile):
with vfile.stage_version('0') as sv:
data_group = sv.create_group('data')
data_group.create_dataset('bar', data=np.arange(4))
with vfile.stage_version('1') as sv:
data_group = sv['data']
data_group.create_dataset('props/1/bar', data=np.arange(0, 4, 2))
def test_InMemoryArrayDataset_chunks(vfile):
with vfile.stage_version('0') as sv:
data_group = sv.create_group('data')
data_group.create_dataset('g/bar', data=np.arange(4),
chunks=(100,), compression='gzip', compression_opts=3)
assert isinstance(data_group['g/bar'], InMemoryArrayDataset)
assert data_group['g/bar'].chunks == (100,)
assert data_group['g/bar'].compression == 'gzip'
assert data_group['g/bar'].compression_opts == 3
def test_string_dtypes():
# Make sure the fillvalue logic works correctly for custom h5py string
# dtypes.
# h5py 3 changed variable-length UTF-8 strings to be read in as bytes
# instead of str. See
# https://docs.h5py.org/en/stable/whatsnew/3.0.html#breaking-changes-deprecations
h5py_str_type = bytes if h5py.__version__.startswith('3') else str
for typ, dt in [
(h5py_str_type, h5py.string_dtype('utf-8')),
(bytes, h5py.string_dtype('ascii')),
# h5py uses bytes here
(bytes, h5py.string_dtype('utf-8', length=20)),
(bytes, h5py.string_dtype('ascii', length=20)),
]:
if typ == str:
data = np.full(10, 'hello world', dtype=dt)
else:
data = np.full(10, b'hello world', dtype=dt)
with setup_vfile() as f:
file = VersionedHDF5File(f)
with file.stage_version('0') as sv:
sv.create_dataset("name", shape=(10,), dtype=dt, data=data)
assert isinstance(sv['name'], InMemoryArrayDataset)
sv['name'].resize((11,))
assert file['0']['name'].dtype == dt
assert_equal(file['0']['name'][:10], data)
assert file['0']['name'][10] == typ(), dt.metadata
with file.stage_version('1') as sv:
assert isinstance(sv['name'], DatasetWrapper)
assert isinstance(sv['name'].dataset, InMemoryDataset)
sv['name'].resize((12,))
assert file['1']['name'].dtype == dt
assert_equal(file['1']['name'][:10], data, str(dt.metadata))
assert file['1']['name'][10] == typ(), dt.metadata
assert file['1']['name'][11] == typ(), dt.metadata
# Make sure we are matching the pure h5py behavior
f.create_dataset('name', shape=(10,), dtype=dt, data=data,
chunks=(10,), maxshape=(None,))
f['name'].resize((11,))
assert f['name'].dtype == dt
assert_equal(f['name'][:10], data)
assert f['name'][10] == typ(), dt.metadata
def test_empty(vfile):
with vfile.stage_version('version1') as g:
g['data'] = np.arange(10)
g.create_dataset('data2', data=np.empty((1, 0, 2)), chunks=(5, 5, 5))
assert_equal(g['data2'][()], np.empty((1, 0, 2)))
assert_equal(vfile['version1']['data2'][()], np.empty((1, 0, 2)))
with vfile.stage_version('version2') as g:
g['data'].resize((0,))
assert_equal(g['data'][()], np.empty((0,)))
assert_equal(vfile['version2']['data'][()], np.empty((0,)))
assert_equal(vfile['version2']['data2'][()], np.empty((1, 0, 2)))
def test_read_only():
with setup_vfile('test.hdf5') as f:
file = VersionedHDF5File(f)
timestamp = datetime.datetime.now(datetime.timezone.utc)
with file.stage_version('version1', timestamp=timestamp) as g:
g['data'] = [0, 1, 2]
with raises(ValueError):
g['data'][0] = 1
with raises(ValueError):
g['data2'] = [1, 2, 3]
with raises(ValueError):
file['version1']['data'][0] = 1
with raises(ValueError):
file['version1']['data2'] = [1, 2, 3]
with raises(ValueError):
file[timestamp]['data'][0] = 1
with raises(ValueError):
file[timestamp]['data2'] = [1, 2, 3]
with h5py.File('test.hdf5', 'r+') as f:
file = VersionedHDF5File(f)
with raises(ValueError):
file['version1']['data'][0] = 1
with raises(ValueError):
file['version1']['data2'] = [1, 2, 3]
with raises(ValueError):
file[timestamp]['data'][0] = 1
with raises(ValueError):
file[timestamp]['data2'] = [1, 2, 3]
def test_delete_datasets(vfile):
data1 = np.arange(10)
data2 = np.zeros(20, dtype=int)
with vfile.stage_version('version1') as g:
g['data'] = data1
g.create_group('group1/group2')
g['group1']['group2']['data1'] = data1
with vfile.stage_version('del_data') as g:
del g['data']
with vfile.stage_version('del_data1', 'version1') as g:
del g['group1/group2/data1']
with vfile.stage_version('del_group2', 'version1') as g:
del g['group1/group2']
with vfile.stage_version('del_group1', 'version1') as g:
del g['group1/']
with vfile.stage_version('version2', 'del_data') as g:
g['data'] = np.zeros(20, dtype=int)
with vfile.stage_version('version3', 'del_data1') as g:
g['group1/group2/data1'] = data2
with vfile.stage_version('version4', 'del_group2') as g:
g.create_group('group1/group2')
g['group1/group2/data1'] = data2
with vfile.stage_version('version5', 'del_group1') as g:
g.create_group('group1/group2')
g['group1/group2/data1'] = data2
assert set(vfile['version1']) == {'group1', 'data'}
assert list(vfile['version1']['group1']) == ['group2']
assert list(vfile['version1']['group1']['group2']) == ['data1']
assert_equal(vfile['version1']['data'][:], data1)
assert_equal(vfile['version1']['group1/group2/data1'][:], data1)
assert list(vfile['del_data']) == ['group1']
assert list(vfile['del_data']['group1']) == ['group2']
assert list(vfile['del_data']['group1']['group2']) == ['data1']
assert_equal(vfile['del_data']['group1/group2/data1'][:], data1)
assert set(vfile['del_data1']) == {'group1', 'data'}
assert list(vfile['del_data1']['group1']) == ['group2']
assert list(vfile['del_data1']['group1']['group2']) == []
assert_equal(vfile['del_data1']['data'][:], data1)
assert set(vfile['del_group2']) == {'group1', 'data'}
assert list(vfile['del_group2']['group1']) == []
assert_equal(vfile['del_group2']['data'][:], data1)
assert list(vfile['del_group1']) == ['data']
assert_equal(vfile['del_group1']['data'][:], data1)
assert set(vfile['version2']) == {'group1', 'data'}
assert list(vfile['version2']['group1']) == ['group2']
assert list(vfile['version2']['group1']['group2']) == ['data1']
assert_equal(vfile['version2']['data'][:], data2)
assert_equal(vfile['version2']['group1/group2/data1'][:], data1)
assert set(vfile['version3']) == {'group1', 'data'}
assert list(vfile['version3']['group1']) == ['group2']
assert list(vfile['version3']['group1']['group2']) == ['data1']
assert_equal(vfile['version3']['data'][:], data1)
assert_equal(vfile['version3']['group1/group2/data1'][:], data2)
assert set(vfile['version4']) == {'group1', 'data'}
assert list(vfile['version4']['group1']) == ['group2']
assert list(vfile['version4']['group1']['group2']) == ['data1']
assert_equal(vfile['version4']['data'][:], data1)
assert_equal(vfile['version4']['group1/group2/data1'][:], data2)
assert set(vfile['version5']) == {'group1', 'data'}
assert list(vfile['version5']['group1']) == ['group2']
assert list(vfile['version5']['group1']['group2']) == ['data1']
assert_equal(vfile['version5']['data'][:], data1)
assert_equal(vfile['version5']['group1/group2/data1'][:], data2)
def test_auto_create_group(vfile):
with vfile.stage_version('version1') as g:
g['a/b/c'] = [0, 1, 2]
assert_equal(g['a']['b']['c'][:], [0, 1, 2])
assert_equal(vfile['version1']['a']['b']['c'][:], [0, 1, 2])
def test_scalar():
with setup_vfile('test.hdf5') as f:
vfile = VersionedHDF5File(f)
with vfile.stage_version('version1') as g:
dtype = h5py.special_dtype(vlen=bytes)
g.create_dataset('bar', data=np.array(['aaa'], dtype='O'), dtype=dtype)
with h5py.File('test.hdf5', 'r+') as f:
vfile = VersionedHDF5File(f)
assert isinstance(vfile['version1']['bar'], DatasetWrapper)
assert isinstance(vfile['version1']['bar'].dataset, InMemoryDataset)
# Should return a scalar, not a shape () array
assert isinstance(vfile['version1']['bar'][0], bytes)
with h5py.File('test.hdf5', 'r') as f:
vfile = VersionedHDF5File(f)
assert isinstance(vfile['version1']['bar'], h5py.Dataset)
# Should return a scalar, not a shape () array
assert isinstance(vfile['version1']['bar'][0], bytes)
def test_sparse(vfile):
with vfile.stage_version('version1') as g:
g.create_dataset('test_data', shape=(10_000, 10_000), dtype=np.dtype('int64'), data=None,
chunks=(100, 100), fillvalue=1)
assert isinstance(g['test_data'], InMemorySparseDataset)
assert g['test_data'][0, 0] == 1
assert g['test_data'][0, 1] == 1
assert g['test_data'][200, 1] == 1
g['test_data'][0, 0] = 2
assert g['test_data'][0, 0] == 2
assert g['test_data'][0, 1] == 1
assert g['test_data'][200, 1] == 1
with vfile.stage_version('version2') as g:
assert isinstance(g['test_data'], DatasetWrapper)
assert isinstance(g['test_data'].dataset, InMemoryDataset)
assert g['test_data'][0, 0] == 2
assert g['test_data'][0, 1] == 1
assert g['test_data'][200, 1] == 1
g['test_data'][200, 1] = 3
assert g['test_data'][0, 0] == 2
assert g['test_data'][0, 1] == 1
assert g['test_data'][200, 1] == 3
assert vfile['version1']['test_data'][0, 0] == 2
assert vfile['version1']['test_data'][0, 1] == 1
assert vfile['version1']['test_data'][200, 1] == 1
assert vfile['version2']['test_data'][0, 0] == 2
assert vfile['version2']['test_data'][0, 1] == 1
assert vfile['version2']['test_data'][200, 1] == 3
def test_sparse_empty(vfile):
with vfile.stage_version('version1') as g:
g.create_dataset('test_data', shape=(10_000, 10_000), dtype=np.dtype('int64'), data=None,
chunks=(100, 100), fillvalue=1)
# Don't read or write any data from the sparse dataset
assert vfile['version1']['test_data'][0, 0] == 1
assert vfile['version1']['test_data'][0, 1] == 1
assert vfile['version1']['test_data'][200, 1] == 1
with vfile.stage_version('version2') as g:
assert isinstance(g['test_data'], DatasetWrapper)
assert isinstance(g['test_data'].dataset, InMemoryDataset)
assert g['test_data'][0, 0] == 1
assert g['test_data'][0, 1] == 1
assert g['test_data'][200, 1] == 1
g['test_data'][0, 0] = 2
g['test_data'][200, 1] = 2
assert g['test_data'][0, 0] == 2
assert g['test_data'][0, 1] == 1
assert g['test_data'][200, 1] == 2
assert vfile['version1']['test_data'][0, 0] == 1
assert vfile['version1']['test_data'][0, 1] == 1
assert vfile['version1']['test_data'][200, 1] == 1
assert vfile['version2']['test_data'][0, 0] == 2
assert vfile['version2']['test_data'][0, 1] == 1
assert vfile['version2']['test_data'][200, 1] == 2
def test_sparse_large(vfile):
# This is currently inefficient in terms of time, but make sure it isn't
# inefficient in terms of memory.
with vfile.stage_version('version1') as g:
# test_data would be 100GB if stored entirely in memory. We use a huge
# chunk size to avoid taking too long with the current code that loops
# over all chunk indices.
g.create_dataset('test_data', shape=(100_000_000_000,), data=None,
chunks=(10_000_000,), fillvalue=0.)
assert isinstance(g['test_data'], InMemorySparseDataset)
assert g['test_data'][0] == 0
assert g['test_data'][1] == 0
assert g['test_data'][20_000_000] == 0
g['test_data'][0] = 1
assert g['test_data'][0] == 1
assert g['test_data'][1] == 0
assert g['test_data'][20_000_000] == 0
with vfile.stage_version('version2') as g:
assert isinstance(g['test_data'], DatasetWrapper)
assert isinstance(g['test_data'].dataset, InMemoryDataset)
assert g['test_data'][0] == 1
assert g['test_data'][1] == 0
assert g['test_data'][20_000_000] == 0
g['test_data'][20_000_000] = 2
assert g['test_data'][0] == 1
assert g['test_data'][1] == 0
assert g['test_data'][20_000_000] == 2
assert vfile['version1']['test_data'][0] == 1
assert vfile['version1']['test_data'][1] == 0
assert vfile['version1']['test_data'][20_000_000] == 0
assert vfile['version2']['test_data'][0] == 1
assert vfile['version2']['test_data'][1] == 0
assert vfile['version2']['test_data'][20_000_000] == 2
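# The memory claim above rests on chunk-sparse storage: only touched chunks are
# materialized. A toy standalone model of the idea (illustrative, not the
# versioned_hdf5 implementation; the class name is hypothetical):
class _ToySparse:
    def __init__(self, fillvalue=0.0, chunk_size=10_000_000):
        self.fill = fillvalue
        self.chunk_size = chunk_size
        self.chunks = {}  # chunk index -> {offset within chunk: value}
    def __getitem__(self, i):
        block = self.chunks.get(i // self.chunk_size)
        if block is None:
            return self.fill
        return block.get(i % self.chunk_size, self.fill)
    def __setitem__(self, i, value):
        self.chunks.setdefault(i // self.chunk_size, {})[i % self.chunk_size] = value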
def test_no_recursive_version_group_access(vfile):
timestamp1 = datetime.datetime.now(datetime.timezone.utc)
with vfile.stage_version('version1', timestamp=timestamp1) as g:
g.create_dataset('test', data=[1, 2, 3])
timestamp2 = datetime.datetime.now(datetime.timezone.utc)
minute = datetime.timedelta(minutes=1)
with vfile.stage_version('version2', timestamp=timestamp2) as g:
vfile['version1'] # Doesn't raise
raises(ValueError, lambda: vfile['version2'])
vfile[timestamp1] # Doesn't raise
# Without +minute, it will pick the previous version, as the
# uncommitted group only has a placeholder timestamp, which will be
# after timestamp2. Since this isn't supposed to work in the first
# place, this isn't a big deal.
raises(ValueError, lambda: vfile[timestamp2+minute])
def test_empty_dataset_str_dtype(vfile):
# Issue #161. Make sure the dtype is maintained correctly for empty
# datasets with custom string dtypes.
with vfile.stage_version('version1') as g:
g.create_dataset('bar', data=np.array(['a', 'b', 'c'], dtype='S5'), dtype=np.dtype('S5'))
g['bar'].resize((0,))
with vfile.stage_version('version2') as g:
g['bar'].resize((3,))
g['bar'][:] = np.array(['a', 'b', 'c'], dtype='S5')
def test_datasetwrapper(vfile):
with vfile.stage_version('r0') as sv:
sv.create_dataset('bar', data=[1, 2, 3], chunks=(2,))
sv['bar'].attrs['key'] = 0
assert isinstance(sv['bar'], InMemoryArrayDataset)
assert dict(sv['bar'].attrs) == {'key': 0}
assert sv['bar'].chunks == (2,)
with vfile.stage_version('r1') as sv:
assert isinstance(sv['bar'], DatasetWrapper)
assert isinstance(sv['bar'].dataset, InMemoryDataset)
assert sv['bar'].attrs['key'] == 0
sv['bar'].attrs['key'] = 1
assert sv['bar'].attrs['key'] == 1
assert sv['bar'].chunks == (2,)
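# A full overwrite below lets the wrapper swap its backing dataset from
# InMemoryDataset to InMemoryArrayDataset, which the following asserts verify.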
sv['bar'][:] = [4, 5, 6]
assert isinstance(sv['bar'], DatasetWrapper)
assert isinstance(sv['bar'].dataset, InMemoryArrayDataset)
assert sv['bar'].attrs['key'] == 1
assert sv['bar'].chunks == (2,)
def test_mask_reading(tmp_path):
# Reading a virtual dataset with a boolean mask index does not work in HDF5
# itself, so make sure masked reads still work for versioned datasets.
file_name = os.path.join(tmp_path, 'file.hdf5')
mask = np.array([True, True, False], dtype='bool')
with h5py.File(file_name, 'w') as f:
vf = VersionedHDF5File(f)
with vf.stage_version('r0') as sv:
sv.create_dataset('bar', data=[1, 2, 3], chunks=(2,))
b = sv['bar'][mask]
assert_equal(b, [1, 2])
b = vf['r0']['bar'][mask]
assert_equal(b, [1, 2])
with h5py.File(file_name, 'r+') as f:
vf = VersionedHDF5File(f)
sv = vf['r0']
b = sv['bar'][mask]
assert_equal(b, [1, 2])
# This fails prior to h5py 3.3 because read-only files return the virtual
# dataset directly, but h5py <3.3 does not support mask indices on virtual
# datasets.
@mark.xfail(h5py.__version__[0] == '2'
or h5py.__version__[0] == '3' and int(h5py.__version__[2]) < 3,
reason='h5py < 3.3 does not support masks on virtual datasets')
def test_mask_reading_read_only(tmp_path):
# Reading a virtual dataset with a boolean mask index does not work in HDF5
# itself, so make sure masked reads still work for versioned datasets.
file_name = os.path.join(tmp_path, 'file.hdf5')
mask = np.array([True, True, False], dtype='bool')
with h5py.File(file_name, 'w') as f:
vf = VersionedHDF5File(f)
with vf.stage_version('r0') as sv:
sv.create_dataset('bar', data=[1, 2, 3], chunks=(2,))
b = sv['bar'][mask]
assert_equal(b, [1, 2])
b = vf['r0']['bar'][mask]
assert_equal(b, [1, 2])
with h5py.File(file_name, 'r') as f:
vf = VersionedHDF5File(f)
sv = vf['r0']
b = sv['bar'][mask]
assert_equal(b, [1, 2])
def test_read_only_no_wrappers():
# Read-only files should not use the wrapper classes
with setup_vfile('test.hdf5') as f:
vfile = VersionedHDF5File(f)
with vfile.stage_version('version1') as g:
g.create_dataset('bar', data=np.array([0, 1, 2]))
with h5py.File('test.hdf5', 'r+') as f:
vfile = VersionedHDF5File(f)
assert isinstance(vfile['version1'], InMemoryGroup)
assert isinstance(vfile['version1']['bar'], DatasetWrapper)
assert isinstance(vfile['version1']['bar'].dataset, InMemoryDataset)
with h5py.File('test.hdf5', 'r') as f:
vfile = VersionedHDF5File(f)
assert isinstance(vfile['version1'], h5py.Group)
assert isinstance(vfile['version1']['bar'], h5py.Dataset)
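# Distilled from the suite above, the core workflow is (a sketch using only
# calls these tests themselves make; the file name is illustrative):
#
#     f = setup_vfile('example.hdf5')            # writable h5py.File
#     vfile = VersionedHDF5File(f)
#     with vfile.stage_version('v1') as g:       # stage; committed on exit
#         g['data'] = np.arange(3)
#     assert_equal(vfile['v1']['data'][:], [0, 1, 2])
#     vfile.close()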
| 39.249059 | 118 | 0.638932 | 9,825 | 72,964 | 4.540662 | 0.044478 | 0.058818 | 0.076392 | 0.057428 | 0.81366 | 0.756075 | 0.697077 | 0.637676 | 0.605465 | 0.569936 | 0 | 0.044127 | 0.201483 | 72,964 | 1,858 | 119 | 39.270183 | 0.721573 | 0.040678 | 0 | 0.500368 | 0 | 0 | 0.175925 | 0.033266 | 0 | 0 | 0 | 0 | 0.456954 | 1 | 0.038999 | false | 0.001472 | 0.009566 | 0 | 0.048565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
20c8dc629ba478e2775bf9fa36b8d56bb2d26c1d | 2,612 | py | Python | ward_mapping/migrations/0001_initial.py | Suraj1127/ward-mapping-application | 53fa39bab875ca47fdab814fd28ea0b7d2086c15 | ["MIT"] | 1 | 2019-05-16T04:08:40.000Z | 2019-05-16T04:08:40.000Z | ward_mapping/migrations/0001_initial.py | Suraj1127/ward-mapping-application | 53fa39bab875ca47fdab814fd28ea0b7d2086c15 | ["MIT"] | null | null | null | ward_mapping/migrations/0001_initial.py | Suraj1127/ward-mapping-application | 53fa39bab875ca47fdab814fd28ea0b7d2086c15 | ["MIT"] | null | null | null |
# Generated by Django 2.2 on 2019-05-15 16:43
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Map2011',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('zone', models.CharField(max_length=30)),
('district', models.CharField(max_length=30)),
('old_survey_vdc_name', models.CharField(max_length=100)),
('old_survey_vdc_code', models.CharField(max_length=30)),
('old_ward_no', models.PositiveSmallIntegerField()),
('old_survey_ward_code', models.CharField(max_length=30)),
('province', models.PositiveSmallIntegerField()),
('new_district', models.CharField(max_length=30)),
('cbs_district_code', models.PositiveSmallIntegerField()),
('category_of_lu', models.CharField(max_length=50)),
('lu_name', models.CharField(max_length=100)),
('lu_full_name', models.CharField(max_length=200)),
('lu_name_nepali', models.CharField(max_length=200)),
('cbs_lu_code', models.PositiveIntegerField()),
('lu_ward_no', models.PositiveSmallIntegerField()),
('cbs_ward_code', models.PositiveIntegerField()),
],
),
migrations.CreateModel(
name='Map2014',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('district', models.CharField(max_length=50)),
('vdc_muni', models.CharField(max_length=300)),
('old_ward_no', models.PositiveSmallIntegerField()),
('province', models.PositiveSmallIntegerField()),
('cbs_district_code', models.PositiveSmallIntegerField()),
('category_of_lu', models.CharField(max_length=50)),
('lu_name', models.CharField(max_length=100)),
('lu_full_name', models.CharField(max_length=200)),
('lu_name_nepali', models.CharField(max_length=200)),
('lu_hlcit_code', models.CharField(max_length=30)),
('lu_cbs_code', models.PositiveIntegerField()),
('new_ward_no', models.PositiveSmallIntegerField()),
('cbs_ward_code', models.PositiveIntegerField()),
],
),
]
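# Usage note (standard Django tooling, not part of the generated file): apply
# with `python manage.py migrate ward_mapping 0001_initial`, assuming the app
# label matches the ward_mapping directory name.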
| 46.642857 | 114 | 0.584227 | 248 | 2,612 | 5.870968 | 0.254032 | 0.175137 | 0.210165 | 0.28022 | 0.70467 | 0.610577 | 0.475275 | 0.475275 | 0.475275 | 0.373626 | 0 | 0.034006 | 0.279479 | 2,612 | 55 | 115 | 47.490909 | 0.739639 | 0.016462 | 0 | 0.541667 | 1 | 0 | 0.143358 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020833 | 0 | 0.104167 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
20f4974dbfb9b7e44a6eecfec1209588daffaa28 | 762 | py | Python | __main__.py | JeffreyTsang/Brickbreaker | 37f0d143e9f937027fc281aef1511d0e9c804b8b | ["MIT"] | null | null | null | __main__.py | JeffreyTsang/Brickbreaker | 37f0d143e9f937027fc281aef1511d0e9c804b8b | ["MIT"] | null | null | null | __main__.py | JeffreyTsang/Brickbreaker | 37f0d143e9f937027fc281aef1511d0e9c804b8b | ["MIT"] | null | null | null |
# __main__.py
# Walker M. White (wmw2)
# November 12, 2012
"""__main__ module for Breakout
This is the module with the application code. Make sure that this module is
in a folder with the following files:
breakout.py (the primary controller class)
model.py (the model classes)
game2d.py (the view classes)
In addition, you should have the following subfolders
Fonts (fonts to use for GLabel)
Sounds (sound effects for the game)
Images (image files to use in the game)
Moving any of these folders or files will prevent the game from working properly"""
from constants import *
from breakout import *
# Application code
if __name__ == '__main__':
Breakout(width=GAME_WIDTH,height=GAME_HEIGHT).run()
| 29.307692 | 83 | 0.711286 | 111 | 762 | 4.720721 | 0.594595 | 0.028626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013559 | 0.225722 | 762 | 25 | 84 | 30.48 | 0.874576 | 0.809711 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
455d6ce36a65cf0610f2586646f75e04d7e70b2f | 138 | py | Python | test.py | loremcookie/_logging_module | c24f962ad321b8b2d7ac362e65dbd6d259686ebc | ["MIT"] | null | null | null | test.py | loremcookie/_logging_module | c24f962ad321b8b2d7ac362e65dbd6d259686ebc | ["MIT"] | null | null | null | test.py | loremcookie/_logging_module | c24f962ad321b8b2d7ac362e65dbd6d259686ebc | ["MIT"] | null | null | null |
import _logging as logging
logger = logging.logging()
logger.DEBUG('TEST')
logger.ERROR('TEST')
logger.INFO('TEST')
logger.WARNING('TEST') | 23 | 26 | 0.753623 | 19 | 138 | 5.421053 | 0.473684 | 0.291262 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072464 | 138 | 6 | 27 | 23 | 0.804688 | 0 | 0 | 0 | 0 | 0 | 0.115108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4583a8b165ab4c78a4b26d904a7a68c5264ba730 | 335 | py | Python | ccij/ccij/doctype/representacion/representacion.py | dacosta2213/ccij | ef68d49d8dbabd5e381bcd411dc48b670621e666 | ["MIT"] | null | null | null | ccij/ccij/doctype/representacion/representacion.py | dacosta2213/ccij | ef68d49d8dbabd5e381bcd411dc48b670621e666 | ["MIT"] | null | null | null | ccij/ccij/doctype/representacion/representacion.py | dacosta2213/ccij | ef68d49d8dbabd5e381bcd411dc48b670621e666 | ["MIT"] | null | null | null |
# -*- coding: utf-8 -*-
# Copyright (c) 2019, Totall and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
class Representacion(Document):
pass
#
# def validate(self):
# self.db_set('mes_entrega', self.entrega.month)
| 23.928571 | 50 | 0.755224 | 44 | 335 | 5.590909 | 0.772727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017361 | 0.140299 | 335 | 13 | 51 | 25.769231 | 0.836806 | 0.543284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
4595098c311e3e92f7b064d28a78bd111f665e93 | 3,155 | py | Python | tests/test_marklogic_utils.py | marklogic/newrelic-plugin | 61bf437cace1c9d9b517e7ae2ad0022b8225086c | ["Apache-2.0"] | 3 | 2017-07-08T04:28:43.000Z | 2020-03-25T17:35:22.000Z | tests/test_marklogic_utils.py | marklogic-community/newrelic-plugin | 61bf437cace1c9d9b517e7ae2ad0022b8225086c | ["Apache-2.0"] | 13 | 2017-08-03T19:06:49.000Z | 2021-06-25T15:25:03.000Z | tests/test_marklogic_utils.py | marklogic-community/newrelic-plugin | 61bf437cace1c9d9b517e7ae2ad0022b8225086c | ["Apache-2.0"] | 3 | 2019-03-16T22:15:31.000Z | 2020-03-25T17:35:24.000Z |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2019 MarkLogic Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0#
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# working directory=tests
import unittest
import logging
from newrelic_marklogic_plugin.marklogic_status import MarkLogicStatus
LOG = logging.getLogger()
logging.basicConfig(level=logging.DEBUG)
HOST = "localhost"
USER = "admin"
PASS = "admin"
AUTH = "DIGEST"
SCHEME = "http"
PORT = 8002
class MarkLogicUtilsTests(unittest.TestCase):
def test_rest(self):
status = MarkLogicStatus(scheme=SCHEME, user=USER, passwd=PASS, host=HOST, port=PORT, auth=AUTH, verify=False)
response = status.get()
LOG.debug(response)
self.assertEqual(status.scheme, SCHEME)
self.assertEqual(status.user, USER)
self.assertEqual(status.passwd, PASS)
self.assertEqual(status.host, HOST)
self.assertEqual(status.port, PORT)
self.assertEqual(status.auth, AUTH)
self.assertTrue(isinstance(response, dict))
self.assertIsNotNone(response["local-cluster-status"])
def test_verify(self):
status = MarkLogicStatus(scheme=SCHEME, user=USER, passwd=PASS, host=HOST, port=PORT, auth=AUTH)
self.assertFalse(status.verify)
status = MarkLogicStatus(scheme=SCHEME, user=USER, passwd=PASS, host=HOST, port=PORT, auth=AUTH, verify=False)
self.assertFalse(status.verify)
status = MarkLogicStatus(scheme=SCHEME, user=USER, passwd=PASS, host=HOST, port=PORT, auth=AUTH, verify=True)
self.assertTrue(status.verify)
status = MarkLogicStatus(scheme=SCHEME, user=USER, passwd=PASS, host=HOST, port=PORT, auth=AUTH, verify="/path/to/cacerts")
self.assertEqual(status.verify, "/path/to/cacerts")
def test_defaults(self):
status = MarkLogicStatus()
self.assertEqual(status.scheme, "http")
self.assertEqual(status.host, None)
self.assertEqual(status.user, None)
self.assertEqual(status.passwd, None)
self.assertEqual(status.port, 8002)
self.assertEqual(status.auth, None)
self.assertEqual(status.verify, False)
self.assertEqual(status.url, None)
def test_default_override(self):
status = MarkLogicStatus(scheme="xcc", user="user", passwd="pass", port=123, host="host", auth="BASIC", verify=True)
self.assertEqual(status.scheme, "xcc")
self.assertEqual(status.port, 123)
self.assertEqual(status.auth, "BASIC")
self.assertEqual(status.verify, True)
self.assertEqual(status.user, "user")
self.assertEqual(status.passwd, "pass")
self.assertEqual(status.host, "host")
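# Distilled from test_rest above (a sketch; the endpoint values are this
# module's own test defaults, not production settings):
#
#     status = MarkLogicStatus(scheme=SCHEME, user=USER, passwd=PASS,
#                              host=HOST, port=PORT, auth=AUTH, verify=False)
#     response = status.get()                       # dict payload
#     cluster = response["local-cluster-status"]    # asserted non-None above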
| 39.4375 | 131 | 0.701109 | 392 | 3,155 | 5.622449 | 0.32398 | 0.149728 | 0.209619 | 0.049002 | 0.318512 | 0.299909 | 0.299909 | 0.299909 | 0.299909 | 0.299909 | 0 | 0.008904 | 0.1813 | 3,155 | 79 | 132 | 39.936709 | 0.844367 | 0.197464 | 0 | 0.076923 | 0 | 0 | 0.049741 | 0 | 0 | 0 | 0 | 0 | 0.519231 | 1 | 0.076923 | false | 0.192308 | 0.057692 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
45d0300dc539119af7cf8c40a9178087f5bac8ed | 332 | py | Python | layers/modules/__init__.py | frezaeix/AttFDNet | e4021b259e187e9180a83fcb67c029144bdd5789 | ["MIT"] | 1 | 2021-03-07T01:09:33.000Z | 2021-03-07T01:09:33.000Z | layers/modules/__init__.py | frezaeix/AttFDNet | e4021b259e187e9180a83fcb67c029144bdd5789 | ["MIT"] | null | null | null | layers/modules/__init__.py | frezaeix/AttFDNet | e4021b259e187e9180a83fcb67c029144bdd5789 | ["MIT"] | null | null | null |
from .multibox_tf_loss import MultiBoxLoss_tf_source
from .knowledge_distillation_loss import KD_loss
from .imprinted_object import search_imprinted_weights
from .multibox_tf_loss_target import MultiBoxLoss_tf_target
__all__ = ['MultiBoxLoss_tf_source', 'KD_loss',
'search_imprinted_weights', 'MultiBoxLoss_tf_target']
| 36.888889 | 64 | 0.840361 | 43 | 332 | 5.883721 | 0.372093 | 0.221344 | 0.110672 | 0.142292 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105422 | 332 | 8 | 65 | 41.5 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0.225904 | 0.204819 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
afeb8ca338393a94ffd02cdee2ec1716df4a8337 | 731 | py | Python | hutils/tests.py | dreamplatform/haltu-utils | 6308d032791b09e5a46bdb98cfc4d99c96da96d7 | ["BSD-3-Clause"] | null | null | null | hutils/tests.py | dreamplatform/haltu-utils | 6308d032791b09e5a46bdb98cfc4d99c96da96d7 | ["BSD-3-Clause"] | null | null | null | hutils/tests.py | dreamplatform/haltu-utils | 6308d032791b09e5a46bdb98cfc4d99c96da96d7 | ["BSD-3-Clause"] | null | null | null |
from django.test import TestCase
from django.db import models
from hutils.managers import QuerySetManager
class TestModel(models.Model):
i = models.IntegerField()
objects = QuerySetManager()
class QuerySet(models.query.QuerySet):
def less_than(self, c):
return self.filter(id__lt=c)
class QuerySetManagerTestCase(TestCase):
def _setup(self):
# TODO For some reason model creation does not work in django-nose 1.1
self.objs = [TestModel.objects.create(i=i) for i in range(3)]
def _test_get_query_set(self):
objs = TestModel.objects.less_than(2)
self.assertEqual(objs.count(), 2)
def test_attribute_query(self):
self.assertRaises(AttributeError, lambda: TestModel.objects._foo())
| 24.366667 | 74 | 0.740082 | 101 | 731 | 5.237624 | 0.534653 | 0.090737 | 0.064272 | 0.090737 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00813 | 0.158687 | 731 | 29 | 75 | 25.206897 | 0.852033 | 0.093023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.117647 | 1 | 0.235294 | false | 0 | 0.176471 | 0.058824 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
afebe6f2ea49a4d54ad87a769c40e84344c8632a | 109 | py | Python | src/lookup_merge.py | alphanexustech/role-directory-proto | 27b5bbf7a76022a97057263398c97f9462fd3fff | ["MIT"] | null | null | null | src/lookup_merge.py | alphanexustech/role-directory-proto | 27b5bbf7a76022a97057263398c97f9462fd3fff | ["MIT"] | 2 | 2022-03-24T15:21:38.000Z | 2022-03-25T21:33:34.000Z | src/lookup_merge.py | alphanexustech/role-directory-proto | 27b5bbf7a76022a97057263398c97f9462fd3fff | ["MIT"] | null | null | null |
def merge_role_json(files):
merged = {}
for i in files:
merged.update(i)
return merged
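# Worked example, following directly from the loop above (later dicts win on
# duplicate keys because dict.update is applied in list order):
#     merge_role_json([{'admin': 1}, {'user': 2}, {'admin': 3}])
#     # -> {'admin': 3, 'user': 2}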
| 13.625 | 27 | 0.59633 | 15 | 109 | 4.2 | 0.733333 | 0.349206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.311927 | 109 | 7 | 28 | 15.571429 | 0.84 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b31c75e2046a13da0654b68c56be95811318a66b | 314 | py | Python | intask_api/tasks/urls.py | KirovVerst/intask | 4bdec6f49fa2873cca1354d7d3967973f5bcadc3 | ["MIT"] | null | null | null | intask_api/tasks/urls.py | KirovVerst/intask | 4bdec6f49fa2873cca1354d7d3967973f5bcadc3 | ["MIT"] | 7 | 2016-08-17T23:08:31.000Z | 2022-03-02T02:23:08.000Z | intask_api/tasks/urls.py | KirovVerst/intask | 4bdec6f49fa2873cca1354d7d3967973f5bcadc3 | ["MIT"] | null | null | null |
from rest_framework.routers import DefaultRouter
from intask_api.tasks.views import TaskViewSet, TaskUserViewSet
router = DefaultRouter()
router.register(r'tasks', TaskViewSet, basename='tasks')
router.register(r'tasks/(?P<task_id>[0-9]+)/users', TaskUserViewSet, basename='task-users')
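# With DefaultRouter's standard patterns, the registrations above expose routes
# such as /tasks/, /tasks/{pk}/, /tasks/{task_id}/users/ and
# /tasks/{task_id}/users/{pk}/ (plus the browsable API root).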
urlpatterns = router.urls
| 39.25 | 91 | 0.799363 | 40 | 314 | 6.2 | 0.6 | 0.112903 | 0.120968 | 0.16129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006849 | 0.070064 | 314 | 7 | 92 | 44.857143 | 0.842466 | 0 | 0 | 0 | 0 | 0 | 0.16242 | 0.098726 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b32c6157b37a6c2aef3cf2191a1d45f18ee4504f | 229 | py | Python | Introducao python/exercicios/ex013b.py | Luis12368/python | 23352d75ad13bcfd09ea85ab422fdc6ae1fcc5e7 | ["MIT"] | null | null | null | Introducao python/exercicios/ex013b.py | Luis12368/python | 23352d75ad13bcfd09ea85ab422fdc6ae1fcc5e7 | ["MIT"] | null | null | null | Introducao python/exercicios/ex013b.py | Luis12368/python | 23352d75ad13bcfd09ea85ab422fdc6ae1fcc5e7 | ["MIT"] | null | null | null |
valor = float(input('Enter the product price: '))
desconto = float(input('Enter the discount percentage: '))
novo_valor = valor - (valor * (desconto / 100))
print(f'The new price with a {desconto}% discount is {novo_valor:.2f}')
| 28.625 | 69 | 0.68559 | 35 | 229 | 4.428571 | 0.514286 | 0.174194 | 0.206452 | 0.219355 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 0.161572 | 229 | 7 | 70 | 32.714286 | 0.786458 | 0 | 0 | 0 | 0 | 0 | 0.497817 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b354e2175d42b27df0d6c702ef00771b60b91fe6 | 60 | py | Python | modules/shared/infrastructure/requests/django/__init__.py | eduardolujan/hexagonal_architecture_django | 8055927cb460bc40f3a2651c01a9d1da696177e8 | ["BSD-3-Clause"] | 6 | 2020-08-09T23:41:08.000Z | 2021-03-16T22:05:40.000Z | modules/shared/infrastructure/requests/django/__init__.py | eduardolujan/hexagonal_architecture_django | 8055927cb460bc40f3a2651c01a9d1da696177e8 | ["BSD-3-Clause"] | 1 | 2020-10-02T02:59:38.000Z | 2020-10-02T02:59:38.000Z | modules/shared/infrastructure/requests/django/__init__.py | eduardolujan/hexagonal_architecture_django | 8055927cb460bc40f3a2651c01a9d1da696177e8 | ["BSD-3-Clause"] | 2 | 2021-03-16T22:05:43.000Z | 2021-04-30T06:35:25.000Z |
from .django_request import Request
__all__ = ('Request',)
| 15 | 35 | 0.75 | 7 | 60 | 5.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 60 | 3 | 36 | 20 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0.116667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
b35f49c197da4ac821ef9e64e2d19def9b56f814 | 178 | py | Python | setup.py | cxu-fork/pipupgradeall | fcca62aa0c334d9f9eca8323c7d17f228d937ee7 | ["MIT"] | null | null | null | setup.py | cxu-fork/pipupgradeall | fcca62aa0c334d9f9eca8323c7d17f228d937ee7 | ["MIT"] | 1 | 2020-10-27T01:51:33.000Z | 2020-10-27T01:51:33.000Z | setup.py | cxu-fork/pipupgradeall | fcca62aa0c334d9f9eca8323c7d17f228d937ee7 | ["MIT"] | 2 | 2020-10-26T20:36:01.000Z | 2020-10-26T21:00:47.000Z |
import setuptools
setuptools.setup(
name='pipupgradeall',
py_modules=['pipupgradeall'],
entry_points={"console_scripts": ["pipupgradeall = pipupgradeall:_main"],},
)
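# Installation note (standard setuptools console_scripts behavior): after
# `pip install .`, a `pipupgradeall` command is generated that invokes the
# _main() function in pipupgradeall.py.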
| 25.428571 | 79 | 0.724719 | 16 | 178 | 7.8125 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123596 | 178 | 6 | 80 | 29.666667 | 0.801282 | 0 | 0 | 0 | 0 | 0 | 0.426966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
b360a4c4991b238fe7765fcb44a5651558f24263 | 275 | py | Python | kollect/management/commands/runserver.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | ["Apache-2.0"] | 7 | 2018-09-03T20:52:00.000Z | 2021-09-12T20:52:43.000Z | kollect/management/commands/runserver.py | dcramer/kollect | a8586ec07f671e01e80df2336ad1fa5dfe4804e5 | ["Apache-2.0"] | 9 | 2020-02-11T23:11:31.000Z | 2022-01-13T00:53:07.000Z | tabletop/management/commands/runserver.py | dcramer/tabletop-server | 062f56d149a29d5ab8605e220c156c1b4fb52d2f | ["Apache-2.0"] | null | null | null |
from django.conf import settings
from django.contrib.staticfiles.management.commands.runserver import Command as BaseCommand
class Command(BaseCommand):
def execute(self, *args, **options):
settings.DEBUG = True
return super().execute(*args, **options)
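# Behavior note: app-local management commands shadow built-ins, so every
# `python manage.py runserver` in this project forces settings.DEBUG = True
# before delegating to staticfiles' runserver.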
| 30.555556 | 91 | 0.738182 | 32 | 275 | 6.34375 | 0.71875 | 0.098522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 275 | 8 | 92 | 34.375 | 0.878788 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
b369861278c0f6577303d1013bf15fd9eaf975ac | 55 | py | Python | D02/set/hapus_clear.py | shdx8/dtwrhs | 108decb8056931fc7601ed455a72ef0d65983ab0 | ["MIT"] | null | null | null | D02/set/hapus_clear.py | shdx8/dtwrhs | 108decb8056931fc7601ed455a72ef0d65983ab0 | ["MIT"] | null | null | null | D02/set/hapus_clear.py | shdx8/dtwrhs | 108decb8056931fc7601ed455a72ef0d65983ab0 | ["MIT"] | null | null | null |
set_saya = {1,2,3,4,5}
set_saya.clear()
print(set_saya) | 18.333333 | 22 | 0.709091 | 13 | 55 | 2.769231 | 0.692308 | 0.583333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098039 | 0.072727 | 55 | 3 | 23 | 18.333333 | 0.607843 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2fae65f69504a377cf97ec0ffdf0a00d4a9c2141 | 434 | py | Python | VisualPy/engine/ext/ErrorTypes.py | xTrayambak/visualpy | 10a68c023b230f9c2f11e57680dab06e68078d0a | ["BSD-2-Clause"] | null | null | null | VisualPy/engine/ext/ErrorTypes.py | xTrayambak/visualpy | 10a68c023b230f9c2f11e57680dab06e68078d0a | ["BSD-2-Clause"] | null | null | null | VisualPy/engine/ext/ErrorTypes.py | xTrayambak/visualpy | 10a68c023b230f9c2f11e57680dab06e68078d0a | ["BSD-2-Clause"] | null | null | null |
class NoDialogError(Exception):
"""
Occurs when the user has not inputted any dialog.
"""
pass
class DialogTooBigError(Exception):
"""
Occurs when the user has inputted too big of a dialog.
"""
pass
class DiscordRPCFailed(Exception):
"""
Occurs when pypresence fails to connect to Discord.
"""
pass
class DiscordNotFound(Exception):
"""
Occurs when pypresence cannot find a running Discord client.
"""
pass | 20.666667 | 56 | 0.695853 | 52 | 434 | 5.807692 | 0.480769 | 0.198676 | 0.251656 | 0.145695 | 0.562914 | 0.562914 | 0.370861 | 0.370861 | 0.370861 | 0.370861 | 0 | 0 | 0.211982 | 434 | 21 | 57 | 20.666667 | 0.883041 | 0.479263 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
2fb91561e56ca96f56198adfb8680ae6a7d77cc6 | 14,969 | py | Python | phi/math/backend/_numpy_backend.py | Sh0cktr4p/PhiFlow | cc87c5887bc3abfa1ef3c03252122a06e9fd2c18 | ["MIT"] | null | null | null | phi/math/backend/_numpy_backend.py | Sh0cktr4p/PhiFlow | cc87c5887bc3abfa1ef3c03252122a06e9fd2c18 | ["MIT"] | null | null | null | phi/math/backend/_numpy_backend.py | Sh0cktr4p/PhiFlow | cc87c5887bc3abfa1ef3c03252122a06e9fd2c18 | ["MIT"] | 1 | 2021-09-15T11:14:42.000Z | 2021-09-15T11:14:42.000Z |
import numbers
import os
import sys
import warnings
from typing import List
import numpy as np
import scipy.signal
import scipy.sparse
from scipy.sparse.linalg import cg, LinearOperator
from . import Backend, ComputeDevice
from ._backend_helper import combined_dim
from ._dtype import from_numpy_dtype, to_numpy_dtype, DType
from ._optim import Solve, LinearSolve, SolveResult
class NumPyBackend(Backend):
"""Core Python Backend using NumPy & SciPy"""
def __init__(self):
if sys.platform != "win32" and sys.platform != "darwin":
mem_bytes = os.sysconf('SC_PAGE_SIZE') * os.sysconf('SC_PHYS_PAGES')
else:
mem_bytes = -1
processors = os.cpu_count()
self.cpu = ComputeDevice(self, "CPU", 'CPU', mem_bytes, processors, "")
Backend.__init__(self, "NumPy", self.cpu)
def list_devices(self, device_type: str or None = None) -> List[ComputeDevice]:
return [self.cpu]
def as_tensor(self, x, convert_external=True):
if self.is_tensor(x, only_native=convert_external):
array = x
else:
array = np.array(x)
# --- Enforce Precision ---
if not isinstance(array, numbers.Number):
if array.dtype in (np.float16, np.float32, np.float64, np.longdouble):
array = self.to_float(array)
return array
def is_tensor(self, x, only_native=False):
        if isinstance(x, np.ndarray) and x.dtype != object:  # builtin object replaces the removed np.object alias
return True
if scipy.sparse.issparse(x):
return True
if isinstance(x, np.bool_):
return True
# --- Above considered native ---
if only_native:
return False
# --- Non-native types
if isinstance(x, (numbers.Number, bool, str)):
return True
if isinstance(x, (tuple, list)):
return all([self.is_tensor(item, False) for item in x])
return False
def is_available(self, tensor):
return True
def numpy(self, tensor):
if isinstance(tensor, np.ndarray):
return tensor
else:
return np.array(tensor)
def copy(self, tensor, only_mutable=False):
return np.copy(tensor)
def transpose(self, tensor, axes):
return np.transpose(tensor, axes)
def equal(self, x, y):
        if isinstance(x, np.ndarray) and x.dtype.char == 'U':  # string comparison
            x = x.astype(object)
        if isinstance(x, str):
            x = np.array(x, object)
return np.equal(x, y)
def divide_no_nan(self, x, y):
with np.errstate(divide='ignore', invalid='ignore'):
result = x / y
return np.where(y == 0, 0, result)
def random_uniform(self, shape):
return np.random.random(shape).astype(to_numpy_dtype(self.float_type))
def random_normal(self, shape):
return np.random.standard_normal(shape).astype(to_numpy_dtype(self.float_type))
def range(self, start, limit=None, delta=1, dtype=None):
"""
range syntax to arange syntax
Args:
start:
limit: (Default value = None)
delta: (Default value = 1)
dtype: (Default value = None)
Returns:
"""
if limit is None:
start, limit = 0, start
return np.arange(start, limit, delta, dtype)
def tile(self, value, multiples):
return np.tile(value, multiples)
def stack(self, values, axis=0):
return np.stack(values, axis)
def concat(self, values, axis):
return np.concatenate(values, axis)
def pad(self, value, pad_width, mode='constant', constant_values=0):
assert mode in ('constant', 'symmetric', 'periodic', 'reflect', 'boundary'), mode
if mode == 'constant':
return np.pad(value, pad_width, 'constant', constant_values=constant_values)
else:
if mode in ('periodic', 'boundary'):
mode = {'periodic': 'wrap', 'boundary': 'edge'}[mode]
return np.pad(value, pad_width, mode)
def reshape(self, value, shape):
return np.reshape(value, shape)
def sum(self, value, axis=None, keepdims=False):
return np.sum(value, axis=axis, keepdims=keepdims)
def prod(self, value, axis=None):
if not isinstance(value, np.ndarray):
value = np.array(value)
if value.dtype == bool:
return np.all(value, axis=axis)
return np.prod(value, axis=axis)
def where(self, condition, x=None, y=None):
if x is None or y is None:
return np.argwhere(condition)
return np.where(condition, x, y)
def nonzero(self, values):
return np.argwhere(values)
def zeros(self, shape, dtype: DType = None):
return np.zeros(shape, dtype=to_numpy_dtype(dtype or self.float_type))
def zeros_like(self, tensor):
return np.zeros_like(tensor)
def ones(self, shape, dtype: DType = None):
return np.ones(shape, dtype=to_numpy_dtype(dtype or self.float_type))
def ones_like(self, tensor):
return np.ones_like(tensor)
def meshgrid(self, *coordinates):
return np.meshgrid(*coordinates, indexing='ij')
def linspace(self, start, stop, number):
return np.linspace(start, stop, number, dtype=to_numpy_dtype(self.float_type))
def mean(self, value, axis=None, keepdims=False):
return np.mean(value, axis, keepdims=keepdims)
def dot(self, a, b, axes):
return np.tensordot(a, b, axes)
def mul(self, a, b):
if scipy.sparse.issparse(a):
return a.multiply(b)
elif scipy.sparse.issparse(b):
return b.multiply(a)
else:
return Backend.mul(self, a, b)
def matmul(self, A, b):
return np.stack([A.dot(b[i]) for i in range(b.shape[0])])
def einsum(self, equation, *tensors):
return np.einsum(equation, *tensors)
def while_loop(self, cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True,
swap_memory=False, name=None, maximum_iterations=None):
i = 0
while cond(*loop_vars):
if maximum_iterations is not None and i == maximum_iterations:
break
loop_vars = body(*loop_vars)
i += 1
return loop_vars
def abs(self, x):
return np.abs(x)
def sign(self, x):
return np.sign(x)
def round(self, x):
return np.round(x)
def ceil(self, x):
return np.ceil(x)
def floor(self, x):
return np.floor(x)
def max(self, x, axis=None, keepdims=False):
return np.max(x, axis, keepdims=keepdims)
def min(self, x, axis=None, keepdims=False):
return np.min(x, axis, keepdims=keepdims)
def maximum(self, a, b):
return np.maximum(a, b)
def minimum(self, a, b):
return np.minimum(a, b)
def clip(self, x, minimum, maximum):
return np.clip(x, minimum, maximum)
def sqrt(self, x):
return np.sqrt(x)
def exp(self, x):
return np.exp(x)
def conv(self, tensor, kernel, padding="SAME"):
"""
apply convolution of kernel on tensor
Args:
tensor:
kernel:
padding: (Default value = "SAME")
Returns:
"""
assert tensor.shape[-1] == kernel.shape[-2]
# kernel = kernel[[slice(None)] + [slice(None, None, -1)] + [slice(None)]*(len(kernel.shape)-3) + [slice(None)]]
if padding.lower() == "same":
result = np.zeros(tensor.shape[:-1] + (kernel.shape[-1],), dtype=to_numpy_dtype(self.float_type))
elif padding.lower() == "valid":
valid = [tensor.shape[i + 1] - (kernel.shape[i] + 1) // 2 for i in range(tensor_spatial_rank(tensor))]
result = np.zeros([tensor.shape[0]] + valid + [kernel.shape[-1]], dtype=to_numpy_dtype(self.float_type))
else:
raise ValueError("Illegal padding: %s" % padding)
for batch in range(tensor.shape[0]):
for o in range(kernel.shape[-1]):
for i in range(tensor.shape[-1]):
result[batch, ..., o] += scipy.signal.correlate(tensor[batch, ..., i], kernel[..., i, o], padding.lower())
return result
def expand_dims(self, a, axis=0, number=1):
for _i in range(number):
a = np.expand_dims(a, axis)
return a
def shape(self, tensor):
return np.shape(tensor)
def staticshape(self, tensor):
return np.shape(tensor)
def cast(self, x, dtype: DType):
if self.is_tensor(x, only_native=True) and from_numpy_dtype(x.dtype) == dtype:
return x
else:
return np.array(x, to_numpy_dtype(dtype))
def gather(self, values, indices):
if scipy.sparse.issparse(values):
if scipy.sparse.isspmatrix_coo(values):
values = values.tocsc()
return values[indices]
def batched_gather_nd(self, values, indices):
assert indices.shape[-1] == self.ndims(values) - 2
batch_size = combined_dim(values.shape[0], indices.shape[0])
result = np.empty((batch_size, *indices.shape[1:-1], values.shape[-1],), values.dtype)
for b in range(batch_size):
b_values = values[min(b, values.shape[0] - 1)]
b_indices = self.unstack(indices[min(b, indices.shape[0] - 1)], -1)
result[b] = b_values[b_indices]
return result
def std(self, x, axis=None, keepdims=False):
return np.std(x, axis, keepdims=keepdims)
def boolean_mask(self, x, mask):
return x[mask]
def isfinite(self, x):
return np.isfinite(x)
def any(self, boolean_tensor, axis=None, keepdims=False):
return np.any(boolean_tensor, axis=axis, keepdims=keepdims)
def all(self, boolean_tensor, axis=None, keepdims=False):
return np.all(boolean_tensor, axis=axis, keepdims=keepdims)
def scatter(self, indices, values, shape, duplicates_handling='undefined', outside_handling='undefined'):
assert duplicates_handling in ('undefined', 'add', 'mean', 'any')
assert outside_handling in ('discard', 'clamp', 'undefined')
shape = np.array(shape, np.int32)
if outside_handling == 'clamp':
indices = np.maximum(0, np.minimum(indices, shape - 1))
elif outside_handling == 'discard':
indices_inside = (indices >= 0) & (indices < shape)
indices_inside = np.min(indices_inside, axis=-1)
filter_indices = np.argwhere(indices_inside)
indices = indices[filter_indices][..., 0, :]
if values.shape[0] > 1:
values = values[filter_indices.reshape(-1)]
array = np.zeros(tuple(shape) + values.shape[indices.ndim-1:], to_numpy_dtype(self.float_type))
indices = self.unstack(indices, axis=-1)
if duplicates_handling == 'add':
np.add.at(array, tuple(indices), values)
elif duplicates_handling == 'mean':
count = np.zeros(shape, np.int32)
np.add.at(array, tuple(indices), values)
np.add.at(count, tuple(indices), 1)
count = np.maximum(1, count)
return array / count
else: # last, any, undefined
array[indices] = values
return array
def fft(self, x):
rank = len(x.shape) - 2
assert rank >= 1
if rank == 1:
return np.fft.fft(x, axis=1)
elif rank == 2:
return np.fft.fft2(x, axes=[1, 2])
else:
return np.fft.fftn(x, axes=list(range(1, rank + 1)))
def ifft(self, k):
assert self.dtype(k).kind == complex
rank = len(k.shape) - 2
assert rank >= 1
if rank == 1:
return np.fft.ifft(k, axis=1).astype(k.dtype)
elif rank == 2:
return np.fft.ifft2(k, axes=[1, 2]).astype(k.dtype)
else:
return np.fft.ifftn(k, axes=list(range(1, rank + 1))).astype(k.dtype)
def imag(self, complex_arr):
return np.imag(complex_arr)
def real(self, complex_arr):
return np.real(complex_arr)
def sin(self, x):
return np.sin(x)
def cos(self, x):
return np.cos(x)
def dtype(self, array) -> DType:
if isinstance(array, int):
return DType(int, 32)
if isinstance(array, float):
return DType(float, 64)
if isinstance(array, complex):
return DType(complex, 128)
if not isinstance(array, np.ndarray):
array = np.array(array)
return from_numpy_dtype(array.dtype)
def sparse_tensor(self, indices, values, shape):
if not isinstance(indices, (tuple, list)):
indices = self.unstack(indices, -1)
if len(indices) == 2:
return scipy.sparse.csc_matrix((values, indices), shape=shape)
else:
raise NotImplementedError(f"len(indices) = {len(indices)} not supported. Only (2) allowed.")
def coordinates(self, tensor, unstack_coordinates=False):
if scipy.sparse.issparse(tensor):
coo = tensor.tocoo()
return (coo.row, coo.col), coo.data
else:
raise NotImplementedError("Only sparse tensors supported.")
def conjugate_gradient(self, A, y, x0, solve_params=LinearSolve(), callback=None):
bs_y = self.staticshape(y)[0]
bs_x0 = self.staticshape(x0)[0]
batch_size = combined_dim(bs_y, bs_x0)
if callable(A):
A = LinearOperator(dtype=y.dtype, shape=(self.staticshape(y)[-1], self.staticshape(x0)[-1]), matvec=A)
elif isinstance(A, (tuple, list)) or self.ndims(A) == 3:
batch_size = combined_dim(batch_size, self.staticshape(A)[0])
iterations = [0] * batch_size
converged = []
results = []
def count_callback(*args):
iterations[batch] += 1
if callback is not None:
callback(*args)
for batch in range(batch_size):
y_ = y[min(batch, bs_y - 1)]
x0_ = x0[min(batch, bs_x0 - 1)]
x, ret_val = cg(A, y_, x0_, tol=solve_params.relative_tolerance, atol=solve_params.absolute_tolerance, maxiter=solve_params.max_iterations, callback=count_callback)
converged.append(ret_val == 0)
results.append(x)
solve_params.result = SolveResult(all(converged), max(iterations))
return self.stack(results)
def clamp(coordinates, shape):
assert coordinates.shape[-1] == len(shape)
for i in range(len(shape)):
coordinates[...,i] = np.maximum(0, np.minimum(shape[i] - 1, coordinates[..., i]))
return coordinates
def tensor_spatial_rank(field):
dims = len(field.shape) - 2
assert dims > 0, "channel has no spatial dimensions"
return dims
NUMPY_BACKEND = NumPyBackend()
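# Minimal usage sketch (assumed, based only on the methods defined above;
# not verified against the rest of PhiFlow):
# x = NUMPY_BACKEND.as_tensor([[1.0, 2.0], [3.0, 4.0]])
# NUMPY_BACKEND.sum(x, axis=0)                            # -> array([4., 6.])
# NUMPY_BACKEND.pad(x, [(1, 1), (0, 0)], mode='periodic')  # wraps along axis 0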
| 34.175799 | 176 | 0.594161 | 1,963 | 14,969 | 4.441161 | 0.15894 | 0.054141 | 0.013765 | 0.014912 | 0.189722 | 0.13237 | 0.115049 | 0.075017 | 0.047488 | 0.027988 | 0 | 0.011118 | 0.278977 | 14,969 | 437 | 177 | 34.254005 | 0.796627 | 0.035807 | 0 | 0.101227 | 0 | 0 | 0.027999 | 0 | 0 | 0 | 0 | 0 | 0.030675 | 1 | 0.220859 | false | 0 | 0.039877 | 0.138037 | 0.542945 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
2feb74471e168c7ba1d30e8dffed81e0e52414c6 | 3,271 | py | Python | feedreader/fallback/atom.py | webos-goodies/feedreader | 42f44a98980af3a9bc39fa5c436a48fc31733d0f | [
"BSD-2-Clause"
] | 1 | 2015-11-15T18:45:50.000Z | 2015-11-15T18:45:50.000Z | feedreader/fallback/atom.py | webos-goodies/feedreader | 42f44a98980af3a9bc39fa5c436a48fc31733d0f | [
"BSD-2-Clause"
] | null | null | null | feedreader/fallback/atom.py | webos-goodies/feedreader | 42f44a98980af3a9bc39fa5c436a48fc31733d0f | [
"BSD-2-Clause"
] | null | null | null | """
Malformed Atom fallback
"""
from feedreader.fallback.base import (PREFERRED_LINK_TYPES, PREFERRED_CONTENT_TYPES,
Feed, Item, get_element_text, get_attribute, search_child,
get_xpath_node, get_xpath_text, get_xpath_datetime,
safe_strip, normalize_spaces, unescape_html)
class AtomFallback(Feed):
__feed__ = 'Atom Fallback'
@property
def is_valid(self):
# <feed xmlns="http://www.w3.org/2005/Atom">
return self._element.tag.lower() == 'feed'
@property
def id(self):
return safe_strip(get_xpath_text(self._element, 'id'))
@property
def title(self):
return normalize_spaces(get_xpath_text(self._element, 'title'))
@property
def link(self):
link = search_child(self._element, 'feedlink',
('rel', 'alternate', 'type', PREFERRED_LINK_TYPES))
return safe_strip(get_attribute(link, 'href'))
@property
def description(self):
subtitle = search_child(self._element, 'subtitle',
('type', PREFERRED_CONTENT_TYPES))
if subtitle is not None:
return get_element_text(subtitle)
else:
return get_xpath_text(self._element, 'tagline')
@property
def published(self):
return (get_xpath_datetime(self._element, 'published') or
get_xpath_datetime(self._element, 'issued'))
@property
def updated(self):
return (get_xpath_datetime(self._element, 'updated') or
get_xpath_datetime(self._element, 'modified'))
@property
def entries(self):
return [Atom10Item(item) for item in self._element.xpath('descendant::entry')]
class Atom10Item(Item):
@property
def id(self):
return safe_strip(get_xpath_text(self._element, 'descendant::id'))
@property
def title(self):
return normalize_spaces(unescape_html(get_xpath_text(self._element, 'descendant::title')))
@property
def link(self):
link = search_child(self._element, 'descendant::feedlink',
('rel', 'alternate', 'type', PREFERRED_LINK_TYPES))
return safe_strip(get_attribute(link, 'href'))
@property
def author_name(self):
return normalize_spaces(get_xpath_text(self._element, 'descendant::author/descendant::name'))
@property
def author_email(self):
return safe_strip(get_xpath_text(self._element, 'descendant::author/descendant::email'))
@property
def author_link(self):
return safe_strip(get_xpath_text(self._element, 'descendant::author/descendant::uri'))
@property
def description(self):
content = search_child(self._element, 'descendant::content',
('type', PREFERRED_CONTENT_TYPES))
if content is None:
content = search_child(self._element, 'descendant::summary',
('type', PREFERRED_CONTENT_TYPES))
return get_element_text(content)
@property
def published(self):
return (get_xpath_datetime(self._element, 'descendant::published') or
get_xpath_datetime(self._element, 'descendant::issued'))
@property
def updated(self):
return (get_xpath_datetime(self._element, 'descendant::updated') or
get_xpath_datetime(self._element, 'descendant::modified'))
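# Illustrative parse sketch (assumes the base Feed wraps an lxml element, as the
# xpath() calls above imply; the etree import and xml_bytes variable are hypothetical):
# from lxml import etree
# feed = AtomFallback(etree.fromstring(xml_bytes))
# if feed.is_valid:
#     titles = [entry.title for entry in feed.entries]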
| 31.757282 | 97 | 0.670131 | 378 | 3,271 | 5.505291 | 0.195767 | 0.121576 | 0.121096 | 0.061509 | 0.64248 | 0.600673 | 0.547333 | 0.466603 | 0.415185 | 0.369053 | 0 | 0.003494 | 0.212473 | 3,271 | 102 | 98 | 32.068627 | 0.804348 | 0.020483 | 0 | 0.473684 | 0 | 0 | 0.130788 | 0.039424 | 0 | 0 | 0 | 0 | 0 | 1 | 0.223684 | false | 0 | 0.013158 | 0.171053 | 0.513158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
64081c37d1ca537a55d057ce8e0fe32c837b442b | 74 | py | Python | tests/type/bad/function-bad-return.py | Nakrez/RePy | 057db55a99eac2c5cb3d622fa1f2e29f6083d8d6 | [
"MIT"
] | 1 | 2020-11-24T05:24:26.000Z | 2020-11-24T05:24:26.000Z | tests/type/bad/function-bad-return.py | Nakrez/RePy | 057db55a99eac2c5cb3d622fa1f2e29f6083d8d6 | [
"MIT"
] | null | null | null | tests/type/bad/function-bad-return.py | Nakrez/RePy | 057db55a99eac2c5cb3d622fa1f2e29f6083d8d6 | [
"MIT"
] | null | null | null | def fun(x):
if x > 1:
return 0
else:
return "str"
| 12.333333 | 20 | 0.418919 | 11 | 74 | 2.818182 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051282 | 0.472973 | 74 | 5 | 21 | 14.8 | 0.74359 | 0 | 0 | 0 | 0 | 0 | 0.040541 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
641bb66e30f87d492340ad814a4e968e438c0fa6 | 1,277 | py | Python | dags/rockflow/dags/logo.py | asdf-zxcv/airflow-dags | 033511deaaf07a662b30d35fd86ae866115baa28 | [
"Unlicense"
] | null | null | null | dags/rockflow/dags/logo.py | asdf-zxcv/airflow-dags | 033511deaaf07a662b30d35fd86ae866115baa28 | [
"Unlicense"
] | null | null | null | dags/rockflow/dags/logo.py | asdf-zxcv/airflow-dags | 033511deaaf07a662b30d35fd86ae866115baa28 | [
"Unlicense"
] | null | null | null | from airflow.models import DAG
from rockflow.dags.const import *
from rockflow.dags.symbol import MERGE_CSV_KEY
from rockflow.operators.logo import *
with DAG("public_logo_download", default_args=DEFAULT_DEBUG_ARGS) as public:
PublicLogoBatchOperator(
from_key=MERGE_CSV_KEY,
key=public.dag_id,
region=DEFAULT_REGION,
bucket_name=DEFAULT_BUCKET_NAME,
proxy=DEFAULT_PROXY
)
with DAG("public_logo_download_debug", default_args=DEFAULT_DEBUG_ARGS) as public_debug:
PublicLogoBatchOperatorDebug(
from_key=MERGE_CSV_KEY,
key=public_debug.dag_id,
region=DEFAULT_REGION,
bucket_name=DEFAULT_BUCKET_NAME,
proxy=DEFAULT_PROXY
)
with DAG("etoro_logo_download", default_args=DEFAULT_DEBUG_ARGS) as etoro:
EtoroLogoBatchOperator(
from_key=MERGE_CSV_KEY,
key=etoro.dag_id,
region=DEFAULT_REGION,
bucket_name=DEFAULT_BUCKET_NAME,
proxy=DEFAULT_PROXY
)
with DAG("etoro_logo_download_debug", default_args=DEFAULT_DEBUG_ARGS) as etoro_debug:
EtoroLogoBatchOperatorDebug(
from_key=MERGE_CSV_KEY,
key=etoro_debug.dag_id,
region=DEFAULT_REGION,
bucket_name=DEFAULT_BUCKET_NAME,
proxy=DEFAULT_PROXY
)
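# Shared shape of the four DAGs above (sketch; a new variant would follow it):
# with DAG("<dag_name>", default_args=DEFAULT_DEBUG_ARGS) as dag:
#     <LogoBatchOperator>(from_key=MERGE_CSV_KEY, key=dag.dag_id,
#                         region=DEFAULT_REGION, bucket_name=DEFAULT_BUCKET_NAME,
#                         proxy=DEFAULT_PROXY)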
| 30.404762 | 88 | 0.729052 | 161 | 1,277 | 5.385093 | 0.192547 | 0.092272 | 0.063437 | 0.106113 | 0.731257 | 0.709343 | 0.709343 | 0.561707 | 0.480969 | 0.388697 | 0 | 0 | 0.204385 | 1,277 | 41 | 89 | 31.146341 | 0.853346 | 0 | 0 | 0.444444 | 0 | 0 | 0.070478 | 0.039937 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
643e3d8e50c594d60d058cffa7941b77bbddb825 | 794 | py | Python | katas/kata5.py | cesarau04/metodosnumericos | 9670c91d5c659d2bfb1b95e85c437e6deed9ec28 | [
"MIT"
] | null | null | null | katas/kata5.py | cesarau04/metodosnumericos | 9670c91d5c659d2bfb1b95e85c437e6deed9ec28 | [
"MIT"
] | null | null | null | katas/kata5.py | cesarau04/metodosnumericos | 9670c91d5c659d2bfb1b95e85c437e6deed9ec28 | [
"MIT"
] | null | null | null | """
Find the Minimum, Maximum, Length and Average Values
Create a function that takes a list of numbers and returns the following statistics:
Minimum Value
Maximum Value
Sequence Length
Average Value
"""
def minMaxLengthAverage(lst):
return [min(lst), max(lst), len(lst), sum(lst)/len(lst)]
"""
Return the Sum of the Two Smallest Numbers
Create a function that takes a list of numbers and returns the sum of the two lowest positive numbers.
"""
def sum_two_smallest_nums(lst):
return sum(list(filter(lambda x: x>=0, sorted(lst)))[:2])
"""
Cumulative List Sum
Create a function that takes a list of numbers and returns a list where each number is the sum of itself + all previous numbers in the list.
"""
def cumulative_sum(lst):
return [sum(lst[:i+1]) for i in range(len(lst))]
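# Worked examples (sample inputs are assumptions, not part of the kata text):
# minMaxLengthAverage([1, 2, 3, 4])         -> [1, 4, 4, 2.5]
# sum_two_smallest_nums([19, 5, 42, 2, 77]) -> 7
# cumulative_sum([1, 2, 3])                 -> [1, 3, 6]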
| 29.407407 | 140 | 0.738035 | 133 | 794 | 4.37594 | 0.398496 | 0.034364 | 0.07732 | 0.097938 | 0.300687 | 0.257732 | 0.257732 | 0.257732 | 0.257732 | 0.257732 | 0 | 0.004532 | 0.166247 | 794 | 26 | 141 | 30.538462 | 0.874622 | 0.245592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
ff298f207d3ee96b781b7eddae68d7609d2caae4 | 85 | py | Python | units/apps.py | ezekielkibiego/Store_Center | 199bb8240e78e7227d2daa0ce8d9df13e1c429f7 | [
"MIT"
] | 6 | 2022-01-27T15:12:43.000Z | 2022-03-28T23:07:14.000Z | units/apps.py | ezekielkibiego/Store_Center | 199bb8240e78e7227d2daa0ce8d9df13e1c429f7 | [
"MIT"
] | 7 | 2022-01-21T11:58:55.000Z | 2022-01-29T00:11:10.000Z | units/apps.py | c3n7/university-portal | 82bf40a1c0d98111ffe8a184d16b543a3feec072 | [
"MIT"
] | 3 | 2022-01-27T13:22:11.000Z | 2022-03-03T12:41:31.000Z | from django.apps import AppConfig
class UnitsConfig(AppConfig):
name = 'units'
| 14.166667 | 33 | 0.741176 | 10 | 85 | 6.3 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 85 | 5 | 34 | 17 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
ff43b9110e7c6c410c83280a5c7b6d03cd3574ec | 608 | py | Python | symphoni-ui/Tests/test_suite.py | tylersuchan/symphoni-fork | 9b12a17d6562f5c94e17c840b0ea4af1f74d4a08 | [
"Apache-2.0"
] | null | null | null | symphoni-ui/Tests/test_suite.py | tylersuchan/symphoni-fork | 9b12a17d6562f5c94e17c840b0ea4af1f74d4a08 | [
"Apache-2.0"
] | 77 | 2018-09-13T02:29:50.000Z | 2018-12-03T19:31:45.000Z | symphoni-ui/Tests/test_suite.py | tylersuchan/symphoni-fork | 9b12a17d6562f5c94e17c840b0ea4af1f74d4a08 | [
"Apache-2.0"
] | null | null | null | import unittest
import join_button
import start_party_button
import queue
import privacy_button
import music_queue
import contact_button
loader = unittest.TestLoader()
suite = unittest.TestSuite()
runner = unittest.TextTestRunner(verbosity=3)
suite.addTests(loader.loadTestsFromModule(join_button))
suite.addTests(loader.loadTestsFromModule(start_party_button))
suite.addTests(loader.loadTestsFromModule(queue))
suite.addTests(loader.loadTestsFromModule(privacy_button))
suite.addTests(loader.loadTestsFromModule(music_queue))
suite.addTests(loader.loadTestsFromModule(contact_button))
runner.run(suite)
| 27.636364 | 62 | 0.858553 | 70 | 608 | 7.285714 | 0.3 | 0.152941 | 0.223529 | 0.447059 | 0.427451 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001742 | 0.055921 | 608 | 21 | 63 | 28.952381 | 0.88676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.411765 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
ff4e9ca7e3ce7e44ad223e8d480850fa09b99929 | 334 | py | Python | fungraph/keywordargument.py | davehadley/graci | 8c5b86ce364df32e48bca40a46091021459547fb | [
"MIT"
] | 1 | 2020-07-18T17:53:02.000Z | 2020-07-18T17:53:02.000Z | fungraph/keywordargument.py | davehadley/graci | 8c5b86ce364df32e48bca40a46091021459547fb | [
"MIT"
] | null | null | null | fungraph/keywordargument.py | davehadley/graci | 8c5b86ce364df32e48bca40a46091021459547fb | [
"MIT"
] | 3 | 2020-07-31T16:57:50.000Z | 2020-07-31T16:58:02.000Z | from typing import NamedTuple
class KeywordArgument(NamedTuple):
"""Use to explicitly search for a function keyword argument node when getting
objects from a graph.
This is useful in cases where named function names and keyword argument names clash.
See Also
--------
fungraph.Name
"""
value: str
| 20.875 | 88 | 0.697605 | 43 | 334 | 5.418605 | 0.837209 | 0.128755 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.239521 | 334 | 15 | 89 | 22.266667 | 0.917323 | 0.643713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
ff8ad2742f44d16038167e00c4e49b630ac94e9f | 745 | py | Python | pylayers/gis/test/test_taoffice.py | usmanwardag/pylayers | 2e8a9bdc993b2aacc92610a9c7edf875c6c7b24a | [
"MIT"
] | 143 | 2015-01-09T07:50:20.000Z | 2022-03-02T11:26:53.000Z | pylayers/gis/test/test_taoffice.py | usmanwardag/pylayers | 2e8a9bdc993b2aacc92610a9c7edf875c6c7b24a | [
"MIT"
] | 148 | 2015-01-13T04:19:34.000Z | 2022-03-11T23:48:25.000Z | pylayers/gis/test/test_taoffice.py | usmanwardag/pylayers | 2e8a9bdc993b2aacc92610a9c7edf875c6c7b24a | [
"MIT"
] | 95 | 2015-05-01T13:22:42.000Z | 2022-03-15T11:22:28.000Z | from pylayers.gis.layout import *
from pylayers.simul.link import *
L = Layout('TA-Office.ini',force=True)
##L.build()
#plt.ion()
##L.showG('st',aw=True,labels=True,nodelist=L.ldiffout)
#f,lax= plt.subplots(2,2)
#L.showG('s',aw=True,labels=True,fig=f,ax=lax[0][0])
#lax[0][0].set_title('Gs',fontsize=18)
#L.showG('st',aw=True,labels=True,fig=f,ax=lax[0][1])
#lax[0][1].set_title('Gt',fontsize=18)
#L.showG('v',aw=True,labels=True,fig=f,ax=lax[1][0])
#lax[1][0].set_title('Gv',fontsize=18)
#L.showG('i',aw=True,labels=True,fig=f,ax=lax[1][1])
#lax[1][1].set_title('Gi',fontsize=18)
#
##DL = DLink(L=L)
##DL.a = np.array([-3,6.2,1.5])
##DL.eval(force=['sig','ray','Ct','H'],ra_vectorized=True,diffraction=True)
#
##DL.b = np.array([12.5,30,1.5])
| 32.391304 | 75 | 0.651007 | 155 | 745 | 3.096774 | 0.387097 | 0.0625 | 0.125 | 0.166667 | 0.283333 | 0.283333 | 0.283333 | 0.216667 | 0.216667 | 0 | 0 | 0.053672 | 0.049664 | 745 | 22 | 76 | 33.863636 | 0.624294 | 0.798658 | 0 | 0 | 0 | 0 | 0.103175 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
ffa1c51a918aeb4c248717083899dd625abcf68f | 145 | py | Python | print_linux.py | bicubico/neuralprinter | 62abb0149d99e85c011e49d5b3b18cfd685b5057 | [
"MIT"
] | 1 | 2017-12-28T16:49:07.000Z | 2017-12-28T16:49:07.000Z | print_linux.py | bicubico/neuralprinter | 62abb0149d99e85c011e49d5b3b18cfd685b5057 | [
"MIT"
] | null | null | null | print_linux.py | bicubico/neuralprinter | 62abb0149d99e85c011e49d5b3b18cfd685b5057 | [
"MIT"
] | null | null | null | #/usr/bin/python3
import os
def print_image(filename, print_image = False):
if print_image:
os.system('lp ' + filename)
return
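# Example (hypothetical filename): print_image('photo.png', print_image=True)
# sends photo.png to the default printer via the CUPS `lp` command.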
| 16.111111 | 47 | 0.662069 | 20 | 145 | 4.65 | 0.7 | 0.322581 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008929 | 0.227586 | 145 | 8 | 48 | 18.125 | 0.821429 | 0.110345 | 0 | 0 | 0 | 0 | 0.023438 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0.4 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ffba2f9fa76e114d9b8e5bc7903f58d0d6e6f1e3 | 183 | py | Python | 27 Body Mass Index.py | L0ganhowlett/Python_workbook-Ben_Stephenson | ab711257bd2da9b34c6001a8e09d20bfc0114a3f | [
"MIT"
] | null | null | null | 27 Body Mass Index.py | L0ganhowlett/Python_workbook-Ben_Stephenson | ab711257bd2da9b34c6001a8e09d20bfc0114a3f | [
"MIT"
] | null | null | null | 27 Body Mass Index.py | L0ganhowlett/Python_workbook-Ben_Stephenson | ab711257bd2da9b34c6001a8e09d20bfc0114a3f | [
"MIT"
] | null | null | null | # 27 Body mass Index
# Asking for the height and mass of the user
h = float(input("Enter the height = "))
m = float(input("Enter the mass = "))
print("Body Mass Index = ",m / (h * h))
| 30.5 | 40 | 0.622951 | 30 | 183 | 3.8 | 0.6 | 0.140351 | 0.22807 | 0.315789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.224044 | 183 | 5 | 41 | 36.6 | 0.788732 | 0.300546 | 0 | 0 | 0 | 0 | 0.45 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
ffbb948d728f4e1e40d9f713346c629fba3183e5 | 143 | py | Python | weaviate/gql/__init__.py | ooxoo-bv/weaviate-python-client | f646a5c16b1c0cc7940b3ffa17a71efb6e96063a | [
"BSD-3-Clause"
] | 14 | 2019-11-04T14:18:21.000Z | 2022-03-31T09:11:51.000Z | weaviate/gql/__init__.py | ooxoo-bv/weaviate-python-client | f646a5c16b1c0cc7940b3ffa17a71efb6e96063a | [
"BSD-3-Clause"
] | 91 | 2019-11-04T11:26:42.000Z | 2022-03-22T10:22:44.000Z | weaviate/gql/__init__.py | ooxoo-bv/weaviate-python-client | f646a5c16b1c0cc7940b3ffa17a71efb6e96063a | [
"BSD-3-Clause"
] | 7 | 2021-05-14T14:53:42.000Z | 2022-03-31T15:09:55.000Z | """
GraphQL module used to create `get` and/or `aggregate` GraphQL requests from Weaviate.
"""
__all__ = ['Query']
from .query import Query
| 17.875 | 87 | 0.706294 | 19 | 143 | 5.105263 | 0.789474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167832 | 143 | 7 | 88 | 20.428571 | 0.815126 | 0.608392 | 0 | 0 | 0 | 0 | 0.104167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
44270b1635eefee664fb5fb4985c764d1d994824 | 187 | py | Python | AT_Basic/AT_readexternalfile.py | huertatipografica/huertatipografica-fl-scripts | f048705c217132b3f103a773aaf20033874f6579 | [
"Apache-2.0"
] | 1 | 2020-02-06T22:18:07.000Z | 2020-02-06T22:18:07.000Z | AT_Basic/AT_readexternalfile.py | huertatipografica/huertatipografica-fl-scripts | f048705c217132b3f103a773aaf20033874f6579 | [
"Apache-2.0"
] | null | null | null | AT_Basic/AT_readexternalfile.py | huertatipografica/huertatipografica-fl-scripts | f048705c217132b3f103a773aaf20033874f6579 | [
"Apache-2.0"
] | null | null | null | #FLM: AT URL file
import urllib.request
f = urllib.request.urlopen('http://www.andrestorresi.com.ar/test.txt')
readdata = f.read()
readdata2 = readdata.decode("iso-8859-1")  # decode the Latin-1 response bytes
f.close()
print(readdata2)
| 23.375 | 62 | 0.721925 | 29 | 187 | 4.655172 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042169 | 0.112299 | 187 | 7 | 63 | 26.714286 | 0.771084 | 0.085562 | 0 | 0 | 0 | 0 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
44339024d54ca9de139f09917b4ba6a7b238c654 | 17,294 | py | Python | plot_residuals_location.py | chriskjou/opennmt-inspection | c70e1f3665ed29b20abcf464e4c73aa7e228a046 | [
"MIT"
] | 2 | 2019-03-18T15:54:32.000Z | 2019-03-22T02:21:38.000Z | plot_residuals_location.py | chriskjou/opennmt-inspection | c70e1f3665ed29b20abcf464e4c73aa7e228a046 | [
"MIT"
] | 6 | 2020-01-28T22:48:37.000Z | 2020-08-17T16:09:03.000Z | plot_residuals_location.py | chriskjou/opennmt-inspection | c70e1f3665ed29b20abcf464e4c73aa7e228a046 | [
"MIT"
] | 1 | 2019-08-04T17:36:22.000Z | 2019-08-04T17:36:22.000Z | import numpy as np
import pickle
import sys
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.switch_backend('agg')
import argparse
import os
from tqdm import tqdm
import math
import helper
map_dict = {
'avg': "Average",
'min': "Minimum",
'max': "Maximum",
'last': "Last",
"spanish": "Spanish",
"swedish": "Swedish",
"french": "French",
"german": "German",
"italian": "Italian"
}
def get_location(df, atype, layer_num, names, activations):
    df_agg = df[(df.agg_type == atype) & (df.layer == layer_num)]
indices = []
for name in names:
index = df_agg.index[df_agg['atlas_labels'] == name].tolist()
indices += index
all_activations = [activations[x] for x in indices]
    total = np.nansum(all_activations)
    print("SUM: ", total)
    print("AVG: ", total // len(all_activations))
    return total, total // len(all_activations)
def compare_aggregations(df):
# g = sns.catplot(x="roi_labels", y="residuals", data=df, hue="agg_type", kind="bar", height=7.5, aspect=1.5)
# g.set_xticklabels(rotation=90)
#plt.show()
return
def plot_aggregations(df, args, file_name):
    cv = "Cross Validation" if args.cross_validation else ""
    bm = "Brain-to-Model" if args.brain_to_model else "Model-to-Brain"
    all_residuals = list(df.residuals)
g = sns.catplot(x="roi_labels", y="residuals", data=df, hue="layer", kind="bar", height=7.5, aspect=1.5)
g.set_axis_labels("", "RMSE")
g.set(ylim=(min(all_residuals), max(all_residuals)/1.75))
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language] + ", " + str(bm) + " " + str(cv))
plt.show()
return
def plot_atlas(df, args, file_name, zoom=False):
if args.cross_validation:
cv = "Cross Validation"
else:
cv = ""
if args.brain_to_model:
bm = "Brain-to-Model"
else:
bm = "Model-to-Brain"
all_residuals = list(df.residuals)
g = sns.catplot(x="atlas_labels", y="residuals", data=df, height=17.5, aspect=1.5)
g.set_xticklabels(rotation=90)
if zoom:
g.set(ylim=(min(all_residuals), 0.5)) #5 * math.pow(10, -11)))
file_name += "-zoom"
else:
g.set(ylim=(min(all_residuals), max(all_residuals)))
    g.set_axis_labels("", "RMSE")
if not args.rand_embed and not args.word2vec and not args.glove and not args.bert:
plt.title("RMSE in all Brain Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language] + ", " + str(bm) + " " + str(cv))
elif args.word2vec:
plt.title("RMSE in all Brain Regions for " + map_dict[args.agg_type] + " Aggregation of Word2Vec")
elif args.glove:
plt.title("RMSE in all Brain Regions for " + map_dict[args.agg_type] + " Aggregation of GLoVE")
elif args.bert:
plt.title("RMSE in all Brain Regions for " + map_dict[args.agg_type] + " Aggregation of BERT")
else: # args.rand_embed:
plt.title("RMSE in all Brain Regions for " + map_dict[args.agg_type] + " Aggregation of Random Embeddings")
plt.savefig("../visualizations/" + str(file_name) + ".png")
# plt.show()
return
def plot_roi(df, args, file_name, zoom=False):
if args.cross_validation:
cv = "Cross Validation"
else:
cv = ""
if args.brain_to_model:
bm = "Brain-to-Model"
else:
bm = "Model-to-Brain"
all_residuals = list(df.residuals)
g = sns.catplot(x="roi_labels", y="residuals", data=df, height=7.5, aspect=1.5)
g.set_xticklabels(rotation=90)
if zoom:
print(min(all_residuals))
g.set(ylim=(0, min(all_residuals) * 15)) #5 * math.pow(10, -11)))
file_name += "-zoom"
else:
g.set(ylim=(min(all_residuals), max(all_residuals)))
    g.set_axis_labels("", "RMSE")
if not args.rand_embed and not args.word2vec and not args.glove and not args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language])
elif args.word2vec:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of Word2Vec" + ", " + str(bm) + " " + str(cv))
elif args.glove:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of GLoVE" + ", " + str(bm) + " " + str(cv))
elif args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of BERT" + ", " + str(bm) + " " + str(cv))
elif args.random and args.rand_embed:
plt.title("RMSE in all Language Regions for Random Activations and Embeddings, " + str(bm) + " " + str(cv))
else: # args.rand_embed:
plt.title("RMSE in all Language Regions for Random Embeddings, " + str(bm) + " " + str(cv))
plt.savefig("../visualizations/" + str(file_name) + ".png")
return
def plot_boxplot_for_atlas(df, args, file_name):
if args.cross_validation:
cv = "Cross Validation"
else:
cv = ""
if args.brain_to_model:
bm = "Brain-to-Model"
else:
bm = "Model-to-Brain"
all_residuals = list(df.residuals)
g = sns.catplot(x="atlas_labels", y="residuals", data=df, height=17.5, aspect=1.5, kind="box")
g.set_xticklabels(rotation=90)
g.set(ylim=(min(all_residuals), max(all_residuals)))
    g.set_axis_labels("", "RMSE")
if not args.rand_embed and not args.word2vec and not args.glove and not args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language])
elif args.word2vec:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of Word2Vec" + ", " + str(bm) + " " + str(cv))
elif args.glove:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of GLoVE" + ", " + str(bm) + " " + str(cv))
elif args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of BERT" + ", " + str(bm) + " " + str(cv))
elif args.random and args.rand_embed:
plt.title("RMSE in all Language Regions for Random Activations and Embeddings, " + str(bm) + " " + str(cv))
else: # args.rand_embed:
plt.title("RMSE in all Language Regions for Random Embeddings, " + str(bm) + " " + str(cv))
plt.savefig("../visualizations/" + str(file_name) + ".png")
return
def plot_boxplot_for_roi(df, args, file_name):
if args.cross_validation:
cv = "Cross Validation"
else:
cv = ""
if args.brain_to_model:
bm = "Brain-to-Model"
else:
bm = "Model-to-Brain"
all_residuals = list(df.residuals)
g = sns.catplot(x="roi_labels", y="residuals", data=df, height=7.5, aspect=1.5, kind="box")
g.set_xticklabels(rotation=90)
# g.set(ylim=(min(all_residuals), max(all_residuals)))
g.set(ylim=(min(all_residuals), 50))
    g.set_axis_labels("", "RMSE")
if not args.rand_embed and not args.word2vec and not args.glove and not args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language])
elif args.word2vec:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of Word2Vec" + ", " + str(bm) + " " + str(cv))
elif args.glove:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of GLoVE" + ", " + str(bm) + " " + str(cv))
elif args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of BERT" + ", " + str(bm) + " " + str(cv))
elif args.random and args.rand_embed:
plt.title("RMSE in all Language Regions for Random Activations and Embeddings, " + str(bm) + " " + str(cv))
else: # args.rand_embed:
plt.title("RMSE in all Language Regions for Random Embeddings, " + str(bm) + " " + str(cv))
plt.savefig("../visualizations/" + str(file_name) + ".png")
return
def plot_violinplot_for_atlas(df, args, file_name):
plt.clf()
if args.cross_validation:
cv = "Cross Validation"
else:
cv = ""
if args.brain_to_model:
bm = "Brain-to-Model"
else:
bm = "Model-to-Brain"
all_residuals = list(df.residuals)
g = sns.violinplot(x="atlas_labels", y="residuals", data=df, height=17.5, aspect=1.5)
g.set_xticklabels(rotation=90)
# g.set(ylim=(min(all_residuals), max(all_residuals)))
# g.set_axis_labels("RMSE", "")
if not args.rand_embed and not args.word2vec and not args.glove and not args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language])
elif args.word2vec:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of Word2Vec" + ", " + str(bm) + " " + str(cv))
elif args.glove:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of GLoVE" + ", " + str(bm) + " " + str(cv))
elif args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of BERT" + ", " + str(bm) + " " + str(cv))
else: # args.rand_embed:
plt.title("RMSE in all Language Regions for Random Embeddings")
plt.savefig("../visualizations/" + str(file_name) + ".png")
return
def plot_violinplot_for_roi(df, args, file_name):
plt.clf()
if args.cross_validation:
cv = "Cross Validation"
else:
cv = ""
if args.brain_to_model:
bm = "Brain-to-Model"
else:
bm = "Model-to-Brain"
all_residuals = list(df.residuals)
g = sns.violinplot(x="roi_labels", y="residuals", data=df, height=7.5, aspect=1.5)
# g.set_xticklabels(rotation=90)
g.set(ylim=(min(all_residuals), max(all_residuals)))
# g.set_axis_labels("RMSE", "")
if not args.rand_embed and not args.word2vec and not args.glove and not args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of " + str(args.which_layer) + "-Layer " + str(args.model_type).upper() + " English-to-" + map_dict[args.language])
elif args.word2vec:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of Word2Vec" + ", " + str(bm) + " " + str(cv))
elif args.glove:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of GLoVE" + ", " + str(bm) + " " + str(cv))
elif args.bert:
plt.title("RMSE in all Language Regions for " + map_dict[args.agg_type] + " Aggregation of BERT" + ", " + str(bm) + " " + str(cv))
elif args.random and args.rand_embed:
plt.title("RMSE in all Language Regions for Random Activations and Embeddings, " + str(bm) + " " + str(cv))
else: # args.rand_embed:
plt.title("RMSE in all Language Regions for Random Embeddings, " + str(bm) + " " + str(cv))
plt.savefig("../visualizations/" + str(file_name) + ".png")
return
def main():
argparser = argparse.ArgumentParser(description="plot RMSE by location")
argparser.add_argument("-language", "--language", help="Target language ('spanish', 'german', 'italian', 'french', 'swedish')", type=str, default='spanish')
argparser.add_argument("-num_layers", "--num_layers", help="Total number of layers ('2', '4')", type=int, default=2)
argparser.add_argument("-model_type", "--model_type", help="Type of model ('brnn', 'rnn')", type=str, default='brnn')
argparser.add_argument("-which_layer", "--which_layer", help="Layer of interest in [1: total number of layers]", type=int, default=1)
argparser.add_argument("-agg_type", "--agg_type", help="Aggregation type ('avg', 'max', 'min', 'last')", type=str, default='avg')
argparser.add_argument("-subject_number", "--subject_number", type=int, default=1, help="subject number (fMRI data) for decoding")
argparser.add_argument("-cross_validation", "--cross_validation", help="Add flag if add cross validation", action='store_true', default=False)
argparser.add_argument("-brain_to_model", "--brain_to_model", help="Add flag if regressing brain to model", action='store_true', default=False)
argparser.add_argument("-model_to_brain", "--model_to_brain", help="Add flag if regressing model to brain", action='store_true', default=False)
argparser.add_argument("-glove", "--glove", action='store_true', default=False, help="True if initialize glove embeddings, False if not")
argparser.add_argument("-word2vec", "--word2vec", action='store_true', default=False, help="True if initialize word2vec embeddings, False if not")
argparser.add_argument("-random", "--random", action='store_true', default=False, help="True if initialize random brain activations, False if not")
argparser.add_argument("-rand_embed", "--rand_embed", action='store_true', default=False, help="True if initialize random embeddings, False if not")
argparser.add_argument("-bert", "--bert", action='store_true', default=False, help="True if initialize bert embeddings, False if not")
argparser.add_argument("-permutation", "--permutation", action='store_true', default=False, help="True if permutation, False if not")
argparser.add_argument("-permutation_region", "--permutation_region", action='store_true', default=False, help="True if permutation by brain region, False if not")
argparser.add_argument("-local", "--local", action='store_true', default=False, help="True if running locally")
argparser.add_argument("-hard_drive", "--hard_drive", action='store_true', default=False, help="True if running from hard drive")
args = argparser.parse_args()
# get residuals
# check conditions // can remove when making pipeline
if args.brain_to_model and args.model_to_brain:
print("select only one flag for brain_to_model or model_to_brain")
exit()
if not args.brain_to_model and not args.model_to_brain:
print("select at least flag for brain_to_model or model_to_brain")
exit()
direction, validate, rlabel, elabel, glabel, w2vlabel, bertlabel, plabel, prlabel = helper.generate_labels(args)
# residual_file = sys.argv[1]
file_loc = str(plabel) + str(prlabel) + str(rlabel) + str(elabel) + str(glabel) + str(w2vlabel) + str(bertlabel) + str(direction) + str(validate) + "subj{}_parallel-english-to-{}-model-{}layer-{}-pred-layer{}-{}"
file_name = file_loc.format(
args.subject_number,
args.language,
args.num_layers,
args.model_type,
args.which_layer,
args.agg_type
)
residual_file = "../rmses/concatenated-" + str(file_name) + ".p"
# file_name = residual_file.split("/")[-1].split(".")[0]
all_residuals = pickle.load( open( residual_file, "rb" ) )
# get atlas and roi
if not args.local:
atlas_vals = pickle.load( open( f"/n/shieber_lab/Lab/users/cjou/fmri/subj{args.subject_number}/atlas_vals.p", "rb" ) )
atlas_labels = pickle.load( open( f"/n/shieber_lab/Lab/users/cjou/fmri/subj{args.subject_number}/atlas_labels.p", "rb" ) )
roi_vals = pickle.load( open( f"/n/shieber_lab/Lab/users/cjou/fmri/subj{args.subject_number}/roi_vals.p", "rb" ) )
roi_labels = pickle.load( open( f"/n/shieber_lab/Lab/users/cjou/fmri/subj{args.subject_number}/roi_labels.p", "rb" ) )
elif args.hard_drive:
atlas_vals = pickle.load( open( f"/Volumes/passport/\!RESEARCH/examplesGLM/subj{args.subject_number}/atlas_vals.p", "rb" ) )
atlas_labels = pickle.load( open( f"/Volumes/passport/\!RESEARCH/examplesGLM/subj{args.subject_number}/atlas_labels.p", "rb" ) )
roi_vals = pickle.load( open( f"/Volumes/passport/\!RESEARCH/examplesGLM/subj{args.subject_number}/roi_vals.p", "rb" ) )
roi_labels = pickle.load( open( f"/Volumes/passport/\!RESEARCH/examplesGLM/subj{args.subject_number}/roi_labels.p", "rb" ) )
else:
atlas_vals = pickle.load( open( f"../examplesGLM/subj{args.subject_number}/atlas_vals.p", "rb" ) )
atlas_labels = pickle.load( open( f"../examplesGLM/subj{args.subject_number}/atlas_labels.p", "rb" ) )
roi_vals = pickle.load( open( f"../examplesGLM/subj{args.subject_number}/roi_vals.p", "rb" ) )
roi_labels = pickle.load( open( f"../examplesGLM/subj{args.subject_number}/roi_labels.p", "rb" ) )
print("INITIAL:")
print(len(atlas_vals))
print(len(atlas_labels))
print(len(roi_vals))
print(len(roi_labels))
final_roi_labels = helper.clean_roi(roi_vals, roi_labels)
at_labels = helper.clean_atlas(atlas_vals, atlas_labels)
print("CLEANING")
print(len(final_roi_labels))
print(len(at_labels))
if not os.path.exists('../visualizations/'):
os.makedirs('../visualizations/')
# make dataframe
print(len(list(range(len(all_residuals)))))
print(len(all_residuals))
print(len(at_labels))
print(len(final_roi_labels))
df_dict = {'voxel_index': list(range(len(all_residuals))),
'residuals': all_residuals,
'atlas_labels': at_labels,
'roi_labels': final_roi_labels}
df = pd.DataFrame(df_dict)
# create plots
print("creating plots...")
# plot_roi(df, args, file_name + "-roi", zoom=False)
# plot_atlas(df, args, file_name + "-atlas", zoom=False)
# plot_roi(df, args, file_name + "-roi", zoom=True)
# plot_atlas(df, args, file_name + "-atlas", zoom=True)
plot_boxplot_for_roi(df, args, file_name + "-boxplot-roi")
# plot_boxplot_for_atlas(df, args, file_name + "-boxplot-atlas")
# plot_violinplot_for_roi(df, args, file_name + "-violinplot-roi")
# plot_violinplot_for_atlas(df, args, file_name + "-violinplot-atlas")
# plot_aggregations(df, args, file_name + "-agg")
print("done.")
return
if __name__ == "__main__":
main()
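# Example invocation (hypothetical values; flags are defined in main() above):
# python plot_residuals_location.py -subject_number 1 -language spanish \
#     -num_layers 2 -model_type brnn -which_layer 1 -agg_type avg \
#     -model_to_brain -cross_validation -local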
| 48.853107 | 237 | 0.6966 | 2,617 | 17,294 | 4.448223 | 0.089415 | 0.024053 | 0.036079 | 0.042093 | 0.77459 | 0.742977 | 0.721759 | 0.696847 | 0.657933 | 0.619105 | 0 | 0.006768 | 0.1371 | 17,294 | 353 | 238 | 48.991501 | 0.773303 | 0.066728 | 0 | 0.542373 | 0 | 0.013559 | 0.330602 | 0.056125 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0.013559 | 0.037288 | 0.00339 | 0.105085 | 0.064407 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
4434c59b9a9448ef9ebd0ccd1c34428d0cf3497c | 879 | py | Python | admin/migrations/0005_auto_20180416_1045.py | adelsonllima/djangoplus | a4ce50bf8231a0d9a4a40751f0d076c2e9931f44 | [
"BSD-3-Clause"
] | 21 | 2017-10-08T23:19:47.000Z | 2020-01-16T20:02:08.000Z | admin/migrations/0005_auto_20180416_1045.py | adelsonllima/djangoplus | a4ce50bf8231a0d9a4a40751f0d076c2e9931f44 | [
"BSD-3-Clause"
] | 6 | 2020-06-03T05:30:52.000Z | 2022-01-13T00:44:26.000Z | admin/migrations/0005_auto_20180416_1045.py | adelsonllima/djangoplus | a4ce50bf8231a0d9a4a40751f0d076c2e9931f44 | [
"BSD-3-Clause"
] | 9 | 2017-10-09T22:58:31.000Z | 2021-11-20T15:20:18.000Z | # Generated by Django 2.0.3 on 2018-04-16 10:45
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('admin', '0004_auto_20180416_1037'),
]
operations = [
migrations.RemoveField(
model_name='organizationrole',
name='organization',
),
migrations.RemoveField(
model_name='organizationrole',
name='role',
),
migrations.RemoveField(
model_name='unitrole',
name='role',
),
migrations.RemoveField(
model_name='unitrole',
name='unit',
),
migrations.DeleteModel(
name='OrganizationRole',
),
migrations.DeleteModel(
name='UnitRole',
),
migrations.DeleteModel(
name='Role',
),
]
| 22.538462 | 47 | 0.523322 | 68 | 879 | 6.661765 | 0.5 | 0.18543 | 0.229581 | 0.264901 | 0.423841 | 0.423841 | 0.211921 | 0.211921 | 0 | 0 | 0 | 0.055655 | 0.366325 | 879 | 38 | 48 | 23.131579 | 0.75763 | 0.051195 | 0 | 0.625 | 1 | 0 | 0.153846 | 0.027644 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.03125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
443e527fc0cc1d91bb3a8b7413b0fe5d01bf9dbe | 1,011 | py | Python | env.py | claywahlstrom/pack | 86b70198a4b185611c2ce3d29df99dd01233a6ac | [
"BSD-2-Clause"
] | 2 | 2019-05-04T09:32:15.000Z | 2021-02-08T08:38:23.000Z | env.py | claywahlstrom/pack | 86b70198a4b185611c2ce3d29df99dd01233a6ac | [
"BSD-2-Clause"
] | null | null | null | env.py | claywahlstrom/pack | 86b70198a4b185611c2ce3d29df99dd01233a6ac | [
"BSD-2-Clause"
] | null | null | null |
"""
Runtime environment settings
"""
import os as _os
import sys as _sys
def is_idle() -> bool:
"""Returns True if the script is running within IDLE, False otherwise"""
return 'idlelib' in _sys.modules
def is_powershell() -> bool:
"""Returns True if the script is running within PowerShell, False otherwise"""
# per mklement0 via https://stackoverflow.com/a/55598796/5645103
return is_win32() and len(_os.getenv('PSModulePath', '').split(_os.pathsep)) >= 3
def is_launcher() -> bool:
"""
Returns True if the script is running within the Python launcher,
False otherwise
"""
return not is_idle()
def is_posix() -> bool:
"""
Returns True if the script is running within a Posix machine,
False otherwise
"""
return any(_sys.platform.startswith(x) for x in ('linux', 'darwin')) # darwin for macOS
def is_win32() -> bool:
"""Returns True if the script is running within a win32 machine, False otherwise"""
return _sys.platform == 'win32'
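# Illustrative call site (hypothetical):
# if is_powershell():
#     print('running under PowerShell on Windows')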
| 26.605263 | 91 | 0.677547 | 140 | 1,011 | 4.792857 | 0.392857 | 0.037258 | 0.111773 | 0.126677 | 0.308495 | 0.308495 | 0.308495 | 0.308495 | 0.308495 | 0.125186 | 0 | 0.031211 | 0.207715 | 1,011 | 37 | 92 | 27.324324 | 0.806492 | 0.481701 | 0 | 0 | 0 | 0 | 0.076419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | true | 0 | 0.166667 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
4458de209412ad014e27667db524e01364c91585 | 1,531 | py | Python | serial_scripts/sriov/test_sriov.py | vkolli/5.0_contrail-test | 1793f169a94100400a1b2fafbad21daf5aa4d48a | [
"Apache-2.0"
] | 1 | 2017-06-13T04:42:34.000Z | 2017-06-13T04:42:34.000Z | serial_scripts/sriov/test_sriov.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | 1 | 2021-06-01T22:18:29.000Z | 2021-06-01T22:18:29.000Z | serial_scripts/sriov/test_sriov.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | # Need to import path to test/fixtures and test/scripts/
# Ex : export PYTHONPATH="$PYTHONPATH:/root/test/fixtures/:/root/test/scripts/"
#
# To run tests, you can do 'python -m testtools.run tests'. To list the
# available tests, you can do 'python -m testtools.run -l tests'
# Set the env variable PARAMS_FILE to point to your ini file. Else it will try to pick params.ini in PWD
from tcutils.wrappers import preposttest_wrapper
from verify import VerifySriovCases
import base
import test
class TestSriov(base.BaseSriovTest, VerifySriovCases):
@classmethod
def setUpClass(cls):
super(TestSriov, cls).setUpClass()
def runTest(self):
pass
#end runTest
@preposttest_wrapper
def test_communication_between_two_sriov_vm(self):
'''
        Configure two SRIOV VMs in the same physnet and same VN.
        VMs are configured across compute nodes.
        Verify the communication over the SRIOV NIC.
'''
return self.communication_between_two_sriov_vm()
@preposttest_wrapper
def test_communication_between_two_sriov_vm_with_large_mtu(self):
        '''
        Configure two SRIOV VMs with a large MTU and verify
        communication between them over the SRIOV NIC.
        '''
return self.communication_between_two_sriov_vm_with_large_mtu()
@preposttest_wrapper
def test_virtual_function_exhaustion_and_resue(self):
'''
        Verify Nova can schedule VMs onto all the VFs of a PF.
        Nova should throw an error when the VFs are exhausted.
        After clearing one VF, it should become reusable.
'''
return self.virtual_function_exhaustion_and_resue()
| 31.895833 | 104 | 0.704768 | 206 | 1,531 | 5.053398 | 0.495146 | 0.038425 | 0.048031 | 0.107589 | 0.330451 | 0.267051 | 0.267051 | 0.21902 | 0.105668 | 0 | 0 | 0 | 0.228609 | 1,531 | 47 | 105 | 32.574468 | 0.881456 | 0.428478 | 0 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.263158 | false | 0.052632 | 0.210526 | 0 | 0.684211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 3 |
9222f71379c7695e4f0ab4cb4b35f058fef9c401 | 156 | py | Python | apps/BlogContent/urls.py | Roberto09/MyBlog | 24634ce60a3af45a5675668132dbfa725872b793 | [
"MIT"
] | null | null | null | apps/BlogContent/urls.py | Roberto09/MyBlog | 24634ce60a3af45a5675668132dbfa725872b793 | [
"MIT"
] | null | null | null | apps/BlogContent/urls.py | Roberto09/MyBlog | 24634ce60a3af45a5675668132dbfa725872b793 | [
"MIT"
] | null | null | null | from django.conf.urls import url, include
from apps.BlogContent.views import SeeBP
urlpatterns = [
    url(r'^MyBlogs', SeeBP.as_view(), name='see_BP'),
] | 26 | 54 | 0.724359 | 23 | 156 | 4.826087 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.134615 | 156 | 6 | 55 | 26 | 0.822222 | 0 | 0 | 0 | 0 | 0 | 0.089172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
924b85ea9f7b1945f0fd22f097a20909bd163eef | 723 | py | Python | src/a9a/__init__.py | hvod2000/a9a | 6b5cddbac885e2aff56e32936b966f4ce05afbba | [
"MIT"
] | null | null | null | src/a9a/__init__.py | hvod2000/a9a | 6b5cddbac885e2aff56e32936b966f4ce05afbba | [
"MIT"
] | null | null | null | src/a9a/__init__.py | hvod2000/a9a | 6b5cddbac885e2aff56e32936b966f4ce05afbba | [
"MIT"
] | null | null | null | import a9a.encoder
import a9a.decoder
import a9a.dir_reader
import a9a.dir_writer
class Archive:
def __init__(self, nodes=None):
if nodes is None:
nodes = {}
self.content = nodes
def to_bytes(self):
return b"a9a\n" + encoder.encode_nodes(self.content)
def __repr__(self):
return f"Archive({self.content})"
@staticmethod
def from_bytes(bts):
assert bts[:4] == b"a9a\n"
return Archive(decoder.decode_nodes(bts[4:])[0])
def to_directory(self, dir_path):
return dir_writer.write_directory(dir_path, self.content)
@staticmethod
def from_directory(dir_path):
return Archive(dir_reader.read_directory(dir_path)[1])
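# Example usage (an assumption, not from the original module):
#   Archive.from_directory("some/dir").to_bytes()
#   Archive.from_bytes(data).to_directory("out/dir")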
| 24.1 | 65 | 0.658368 | 99 | 723 | 4.565657 | 0.373737 | 0.079646 | 0.106195 | 0.115044 | 0.132743 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018018 | 0.232365 | 723 | 29 | 66 | 24.931034 | 0.796396 | 0 | 0 | 0.090909 | 0 | 0 | 0.045643 | 0.031812 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.272727 | false | 0 | 0.181818 | 0.181818 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
924e8e77c2bc19143e903e4c018fdd8b4185b457 | 746 | py | Python | jp.atcoder/agc015/agc015_b/8556570.py | kagemeka/atcoder-submissions | 91d8ad37411ea2ec582b10ba41b1e3cae01d4d6e | [
"MIT"
] | 1 | 2022-02-09T03:06:25.000Z | 2022-02-09T03:06:25.000Z | jp.atcoder/agc015/agc015_b/8556570.py | kagemeka/atcoder-submissions | 91d8ad37411ea2ec582b10ba41b1e3cae01d4d6e | [
"MIT"
] | 1 | 2022-02-05T22:53:18.000Z | 2022-02-09T01:29:30.000Z | jp.atcoder/agc015/agc015_b/8556570.py | kagemeka/atcoder-submissions | 91d8ad37411ea2ec582b10ba41b1e3cae01d4d6e | [
"MIT"
] | null | null | null | # 2019-11-22 16:25:33(JST)
import sys
def main():
s = sys.stdin.readline().rstrip()
n = len(s)
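# Added note: take_times[i][j] is read here as the number of elevator rides
# needed from floor i to floor j (0 for the same floor, 1 for a direct ride,
# 2 when one transfer is needed), inferred from the logic below.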
take_times = [[None for _ in range(n)] for _ in range(n)]
for i in range(n):
for j in range(n):
if i == j:
take_times[i][j] = 0
elif s[i] == 'U':
if j < i:
take_times[i][j] = 2
elif j > i:
take_times[i][j] = 1
elif s[i] == 'D':
if j > i:
take_times[i][j] = 2
elif j < i:
take_times[i][j] = 1
ans = sum(sum(take_times[i]) for i in range(n))
print(ans)
if __name__ == '__main__':
main()
| 24.866667 | 62 | 0.386059 | 105 | 746 | 2.580952 | 0.342857 | 0.232472 | 0.221402 | 0.202952 | 0.420664 | 0.250923 | 0.250923 | 0.250923 | 0.250923 | 0.250923 | 0 | 0.048469 | 0.474531 | 746 | 29 | 63 | 25.724138 | 0.642857 | 0.032172 | 0 | 0.173913 | 0 | 0 | 0.014472 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.043478 | 0 | 0.086957 | 0.043478 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
9251d2e24adbc05adaa81a9a2ac85b187ddf6015 | 841 | py | Python | bot/models.py | pyprism/Hiren-TwitBot | 358184673c0d3fb8f5dc3420590239b3178f5a81 | [
"MIT"
] | null | null | null | bot/models.py | pyprism/Hiren-TwitBot | 358184673c0d3fb8f5dc3420590239b3178f5a81 | [
"MIT"
] | null | null | null | bot/models.py | pyprism/Hiren-TwitBot | 358184673c0d3fb8f5dc3420590239b3178f5a81 | [
"MIT"
] | null | null | null | from django.db import models
class TwitterApp(models.Model):
name = models.CharField(max_length=500)
consumer_key = models.CharField(max_length=1000)
consumer_secret = models.CharField(max_length=1000)
access_token = models.CharField(max_length=1000)
access_token_secret = models.CharField(max_length=1000)
tag = models.CharField(max_length=500)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.name
class Twit(models.Model):
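# Added note: from Django 2.0 on, ForeignKey requires an explicit on_delete
# argument, e.g. models.ForeignKey(TwitterApp, on_delete=models.CASCADE).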
app = models.ForeignKey(TwitterApp)
status = models.TextField()
status_id = models.CharField(max_length=1000)
approved = models.BooleanField(default=False)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
| 33.64 | 59 | 0.752675 | 109 | 841 | 5.559633 | 0.385321 | 0.173267 | 0.207921 | 0.277228 | 0.643564 | 0.462046 | 0.39604 | 0.267327 | 0.267327 | 0.267327 | 0 | 0.036415 | 0.151011 | 841 | 24 | 60 | 35.041667 | 0.812325 | 0 | 0 | 0.210526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0.052632 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
926004827692514a2e7e1fab88210f1b0ac22384 | 344 | py | Python | setup.py | FreeDiscovery/FreeDiscovery-S3-connector | c10a6e1c26f95c199e94908f4e8534f735b94e37 | [
"BSD-3-Clause"
] | null | null | null | setup.py | FreeDiscovery/FreeDiscovery-S3-connector | c10a6e1c26f95c199e94908f4e8534f735b94e37 | [
"BSD-3-Clause"
] | null | null | null | setup.py | FreeDiscovery/FreeDiscovery-S3-connector | c10a6e1c26f95c199e94908f4e8534f735b94e37 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
from freedscovery_s3_connector._version import __version__
setup(name='freedscovery_s3_connector',
version=__version__,
description='',
author='FreeDiscovery Developers',
packages=find_packages(),
include_package_data=True)
| 24.571429 | 58 | 0.732558 | 37 | 344 | 6.351351 | 0.675676 | 0.102128 | 0.195745 | 0.255319 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010381 | 0.159884 | 344 | 13 | 59 | 26.461538 | 0.802768 | 0.110465 | 0 | 0 | 0 | 0 | 0.164474 | 0.082237 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
92aac54ee2f007488fca049482a6bfd926d66471 | 2,432 | py | Python | oo/carro.py | Jaques1974/pythonbirds | dcd7270442dc652ff698689b81ee45c9a7c76c2d | [
"MIT"
] | null | null | null | oo/carro.py | Jaques1974/pythonbirds | dcd7270442dc652ff698689b81ee45c9a7c76c2d | [
"MIT"
] | null | null | null | oo/carro.py | Jaques1974/pythonbirds | dcd7270442dc652ff698689b81ee45c9a7c76c2d | [
"MIT"
] | null | null | null |
"""
You must create a Carro (car) class that has two attributes composed of two other
classes:
1) Motor (engine)
2) Direcao (direction)
The Motor is responsible for controlling the speed.
It offers the following attributes:
1) A velocidade (speed) data attribute
2) An acelerar (accelerate) method, which must increase the speed by one unit
3) A frear (brake) method, which must decrease the speed by two units
The Direcao is responsible for controlling the direction. It offers the following attributes:
1) A direction value with the possible values: Norte, Sul, Leste, Oeste (North, South, East, West).
2) A girar_a_direita (turn right) method
3) A girar_a_esquerda (turn left) method
N
O L
S
Example:
>>> # Testing the motor
>>> motor = Motor()
>>> motor.velocidade
0
>>> motor.acelerar()
>>> motor.velocidade
1
>>> motor.acelerar()
>>> motor.velocidade
2
>>> motor.acelerar()
>>> motor.velocidade
3
>>> motor.frear()
>>> motor.velocidade
1
>>> motor.frear()
>>> motor.velocidade
0
>>> # Testing the direction
>>> direção = Direcao()
>>> direção.valor
'Norte'
>>> direção.girar_a_direita()
>>> direção.valor
'Leste'
>>> direção.girar_a_direita()
>>> direção.valor
'Sul'
>>> direção.girar_a_direita()
>>> direção.valor
'Oeste'
>>> direção.girar_a_direita()
>>> direção.valor
'Norte'
>>> direção.girar_a_esquerda()
>>> direção.valor
'Oeste'
>>> direção.girar_a_esquerda()
>>> direção.valor
'Sul'
>>> direção.girar_a_esquerda()
>>> direção.valor
'Leste'
>>> direção.girar_a_esquerda()
>>> direção.valor
'Norte'
>>> carro = Carro(direção, motor)
>>> carro.calcular_velocidade()
0
>>> carro.acelerar()
>>> carro.calcular_velocidade()
1
>>> carro.acelerar()
>>> carro.calcular_velocidade()
2
>>> carro.frear()
>>> carro.calcular_velocidade()
0
>>> carro.calcular_direção()
'Norte'
>>> carro.girar_a_direita()
>>> carro.calcular_direção()
'Leste'
>>> carro.girar_a_esquerda()
>>> carro.calcular_direção()
'Norte'
>>> carro.girar_a_esquerda()
>>> carro.calcular_direção()
'Oeste'
"""
class Motor:
def __init__(self):
self.velocidade = 0
def acelerar(self):
self.velocidade += 1
def frear(self):
self.velocidade -= 2
self.velocidade = max(0, self.velocidade)
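# The classes below are an added minimal sketch, not part of the original
# exercise file: Direcao and Carro as the docstring's doctests expect them,
# with names and behaviour inferred from those doctests.
class Direcao:
    # Clockwise order: turning right moves forward in this list.
    VALORES = ['Norte', 'Leste', 'Sul', 'Oeste']
    def __init__(self):
        self.valor = 'Norte'
    def girar_a_direita(self):
        i = self.VALORES.index(self.valor)
        self.valor = self.VALORES[(i + 1) % len(self.VALORES)]
    def girar_a_esquerda(self):
        i = self.VALORES.index(self.valor)
        self.valor = self.VALORES[(i - 1) % len(self.VALORES)]
class Carro:
    def __init__(self, direcao, motor):
        self.direcao = direcao
        self.motor = motor
    def calcular_velocidade(self):
        return self.motor.velocidade
    def acelerar(self):
        self.motor.acelerar()
    def frear(self):
        self.motor.frear()
    def calcular_direção(self):
        return self.direcao.valor
    def girar_a_direita(self):
        self.direcao.girar_a_direita()
    def girar_a_esquerda(self):
        self.direcao.girar_a_esquerda()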
| 17.751825 | 93 | 0.612664 | 276 | 2,432 | 5.26087 | 0.246377 | 0.053719 | 0.071625 | 0.055096 | 0.460744 | 0.339532 | 0.081956 | 0 | 0 | 0 | 0 | 0.012121 | 0.253701 | 2,432 | 136 | 94 | 17.882353 | 0.787879 | 0.892681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0 | 1 | 0.375 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2b8feaa10a6b312b58767ad31f13d28d00efcfc0 | 663 | py | Python | kurier/amqp/exceptions.py | Relrin/kurier | 1fd5593c4249a29943d3ee33e4491135ed0fde8d | [
"BSD-3-Clause"
] | 22 | 2019-03-03T11:48:11.000Z | 2022-01-13T19:13:37.000Z | kurier/amqp/exceptions.py | Relrin/kurier | 1fd5593c4249a29943d3ee33e4491135ed0fde8d | [
"BSD-3-Clause"
] | 2 | 2018-07-04T18:52:05.000Z | 2019-10-02T09:01:34.000Z | kurier/amqp/exceptions.py | Relrin/kurier | 1fd5593c4249a29943d3ee33e4491135ed0fde8d | [
"BSD-3-Clause"
] | 4 | 2019-05-27T09:45:29.000Z | 2021-09-10T15:08:57.000Z |
class BaseAmqpException(Exception):
default_detail = "Occurred an unexpected error."
def __init__(self, detail=None):
self.detail = detail if detail is not None else self.default_detail
def __str__(self):
return self.detail
class AmqpInvalidUrl(BaseAmqpException):
default_detail = "The specified URL is invalid."
class AmqpInvalidExchange(BaseAmqpException):
default_detail = "The specified exchange doesn't exist."
class AmqpUnroutableError(BaseAmqpException):
default_detail = "The message can't be delivered."
class AmqpRequestCancelled(BaseAmqpException):
default_detail = "The request was cancelled."
| 24.555556 | 75 | 0.751131 | 73 | 663 | 6.630137 | 0.520548 | 0.161157 | 0.247934 | 0.272727 | 0.173554 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174962 | 663 | 26 | 76 | 25.5 | 0.884826 | 0 | 0 | 0 | 0 | 0 | 0.229955 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0.071429 | 0.928571 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
2b924ecff2ec12429f869ba4056b1d43ae0cd0bd | 39 | py | Python | banking/src/common/enum.py | teamo2dev/coin-town | 74ba60ee45644950074efb725d990d63a418c7f6 | [
"MIT"
] | null | null | null | banking/src/common/enum.py | teamo2dev/coin-town | 74ba60ee45644950074efb725d990d63a418c7f6 | [
"MIT"
] | null | null | null | banking/src/common/enum.py | teamo2dev/coin-town | 74ba60ee45644950074efb725d990d63a418c7f6 | [
"MIT"
] | 1 | 2021-08-28T08:34:56.000Z | 2021-08-28T08:34:56.000Z | CONTENT_TYPE_JSON = 'application/json'
| 19.5 | 38 | 0.820513 | 5 | 39 | 6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.410256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2bd8a7bf4f3a44da4637775f9b8932d258ab790c | 412 | py | Python | _27_TEXT_PROCESSING/_3_Substring.py | YordanPetrovDS/Python_Fundamentals | 81163054cd3ac780697eaa43f099cc455f253a0c | [
"MIT"
] | null | null | null | _27_TEXT_PROCESSING/_3_Substring.py | YordanPetrovDS/Python_Fundamentals | 81163054cd3ac780697eaa43f099cc455f253a0c | [
"MIT"
] | null | null | null | _27_TEXT_PROCESSING/_3_Substring.py | YordanPetrovDS/Python_Fundamentals | 81163054cd3ac780697eaa43f099cc455f253a0c | [
"MIT"
] | null | null | null | def replace_all(replace_string, actual_string):
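# Added note: the loop (rather than a single replace call) is needed because
# a removal can create a new occurrence, e.g. "aabb" -> "ab" -> "".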
while replace_string in actual_string:
actual_string = actual_string.replace(replace_string, "")
return actual_string
print(replace_all(input(), input()))
# replace_string = input()
# actual_string = input()
#
# while replace_string in actual_string:
# actual_string = actual_string.replace(replace_string, "")
#
# print(f"{actual_string}")
| 25.75 | 65 | 0.737864 | 51 | 412 | 5.607843 | 0.215686 | 0.41958 | 0.314685 | 0.335664 | 0.531469 | 0.531469 | 0.531469 | 0.531469 | 0.531469 | 0.531469 | 0 | 0 | 0.145631 | 412 | 15 | 66 | 27.466667 | 0.8125 | 0.424757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0.2 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2bdb68df54d9aa08cc436ff6ba347abae2b26cdc | 1,421 | py | Python | baseTest/DriverManager.py | jonreding2010/PythonLogging | c931bf84d0f71bc7917ff57009c7139886acf77f | [
"MIT"
] | null | null | null | baseTest/DriverManager.py | jonreding2010/PythonLogging | c931bf84d0f71bc7917ff57009c7139886acf77f | [
"MIT"
] | null | null | null | baseTest/DriverManager.py | jonreding2010/PythonLogging | c931bf84d0f71bc7917ff57009c7139886acf77f | [
"MIT"
] | null | null | null | from baseTest.BaseTestObject import BaseTestObject
# The type Driver manager.
class DriverManager:
# Base Test Object.
baseTestObject = None
# The Base driver (None until created or set).
baseDriver = None
# The Get driver supplier (a callable that creates a driver on demand).
getDriverSupplier = None
# Instantiates a new Driver manager.
# @param getDriverFunction driver function supplier
# @param baseTestObject the base test object
def __init__(self, get_driver_function, base_test_object):
self.baseTestObject = base_test_object
self.getDriverSupplier = get_driver_function
# Gets base driver.
# @return the base driver
def get_base_driver(self):
return self.baseDriver
# Sets base driver.
# @param baseDriver the base driver
def set_base_driver(self, base_driver):
self.baseDriver = base_driver
# Is driver initialized boolean.
# @return the boolean
def is_driver_initialized(self):
return self.baseDriver is not None
# Gets logger.
# @return the logger
def get_logger(self):
return self.baseTestObject.get_logger()
# Get base object.
# @return the object
def get_base(self):
if self.baseDriver is None:
self.baseDriver = self.getDriverSupplier()  # call the supplier to build the driver lazily
return self.baseDriver
# Gets test object.
# @return the test object
def get_test_object(self):
return self.baseTestObject
| 26.811321 | 62 | 0.684025 | 164 | 1,421 | 5.77439 | 0.22561 | 0.095037 | 0.059134 | 0.038015 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.252639 | 1,421 | 52 | 63 | 27.326923 | 0.891714 | 0.324419 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.318182 | false | 0 | 0.045455 | 0.181818 | 0.772727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
2be599f1d48fa45f9d8d32cfab95f648f8f75077 | 429 | py | Python | code_delivery/app.py | castilhoin/code-delivery | 39073341f468d5d1e30b5d5910c043953e4b9429 | [
"MIT"
] | null | null | null | code_delivery/app.py | castilhoin/code-delivery | 39073341f468d5d1e30b5d5910c043953e4b9429 | [
"MIT"
] | null | null | null | code_delivery/app.py | castilhoin/code-delivery | 39073341f468d5d1e30b5d5910c043953e4b9429 | [
"MIT"
] | null | null | null | from flask import Flask
from .ext import site
from .ext import config
from .ext import toolbar
from .ext import db
from .ext import migrate
from .ext import cli
def create_app():
"""Factory to create a Flask app based on factory pattern"""
app = Flask(__name__)
config.init_app(app)
db.init_app(app)
migrate.init_app(app)
cli.init_app(app)
toolbar.init_app(app)
site.init_app(app)
return app
| 22.578947 | 64 | 0.710956 | 70 | 429 | 4.2 | 0.314286 | 0.142857 | 0.265306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205128 | 429 | 18 | 65 | 23.833333 | 0.86217 | 0.125874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.4375 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
921b26719ef6dd1872a88554745a7cfee94c9fe4 | 633 | py | Python | qmspy/__init__.py | Cavenfish/qmspy | 4ac6191b22d606ce007b5fb7b75a3c0734b41a70 | [
"MIT"
] | null | null | null | qmspy/__init__.py | Cavenfish/qmspy | 4ac6191b22d606ce007b5fb7b75a3c0734b41a70 | [
"MIT"
] | null | null | null | qmspy/__init__.py | Cavenfish/qmspy | 4ac6191b22d606ce007b5fb7b75a3c0734b41a70 | [
"MIT"
] | null | null | null | """
This is the qmspy module, a python module designed to automate graphing data
collected from a QMS.
Contains the following functions:
init_data
add_style
fit_gaussians
appearance_energy
Contains the following submodules:
graph_data
Author: Brian C. Ferrari
"""
from .config import *
from .add_style import add_style
from .init_data import init_data
from .fit_gaussians import fit_gaussians
from .get_peaks import get_peaks
from .appearance_energy import appearance_energy
from .graph_data import *
| 24.346154 | 76 | 0.657188 | 76 | 633 | 5.263158 | 0.460526 | 0.06 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.309637 | 633 | 25 | 77 | 25.32 | 0.915332 | 0.535545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
ecf61980cc4a49f01a13cbe72aad03439a423e46 | 2,073 | py | Python | backend/api/decapod_api/views/v1/__init__.py | angry-tony/ceph-lcm-decapod | 535944d3ee384c3a7c4af82f74041b0a7792433f | [
"Apache-2.0"
] | 41 | 2016-11-03T16:40:17.000Z | 2019-05-23T08:39:17.000Z | backend/api/decapod_api/views/v1/__init__.py | Mirantis/ceph-lcm | fad9bad0b94f2ef608362953583b10a54a841d24 | [
"Apache-2.0"
] | 30 | 2016-10-14T10:54:46.000Z | 2017-10-20T15:58:01.000Z | backend/api/decapod_api/views/v1/__init__.py | angry-tony/ceph-lcm-decapod | 535944d3ee384c3a7c4af82f74041b0a7792433f | [
"Apache-2.0"
] | 28 | 2016-09-17T01:17:36.000Z | 2019-07-05T03:32:54.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module contains blueprint for API v1."""
import flask
from decapod_api.views.v1 import auth
from decapod_api.views.v1 import cinder_integration
from decapod_api.views.v1 import cluster
from decapod_api.views.v1 import execution
from decapod_api.views.v1 import info
from decapod_api.views.v1 import password_reset
from decapod_api.views.v1 import permission
from decapod_api.views.v1 import playbook
from decapod_api.views.v1 import playbook_configuration
from decapod_api.views.v1 import role
from decapod_api.views.v1 import server
from decapod_api.views.v1 import user
BLUEPRINT_NAME = "ApiV1"
"""Blueprint name for the API v1."""
BLUEPRINT = flask.Blueprint(BLUEPRINT_NAME, __name__)
"""Blueprint."""
auth.AuthView.register_to(BLUEPRINT)
cinder_integration.CinderIntegrationView.register_to(BLUEPRINT)
cluster.ClusterView.register_to(BLUEPRINT)
execution.ExecutionStepsLog.register_to(BLUEPRINT)
execution.ExecutionStepsView.register_to(BLUEPRINT)
execution.ExecutionView.register_to(BLUEPRINT)
info.InfoView.register_to(BLUEPRINT)
password_reset.PasswordReset.register_to(BLUEPRINT)
permission.PermissionView.register_to(BLUEPRINT)
playbook_configuration.PlaybookConfigurationView.register_to(BLUEPRINT)
playbook.PlaybookView.register_to(BLUEPRINT)
role.RoleView.register_to(BLUEPRINT)
role.RoleSelfView.register_to(BLUEPRINT)
server.ServerView.register_to(BLUEPRINT)
user.UserView.register_to(BLUEPRINT)
user.UserSelfView.register_to(BLUEPRINT)
| 35.741379 | 71 | 0.822962 | 289 | 2,073 | 5.764706 | 0.380623 | 0.096038 | 0.182473 | 0.136855 | 0.204082 | 0.204082 | 0.042017 | 0 | 0 | 0 | 0 | 0.012814 | 0.096479 | 2,073 | 57 | 72 | 36.368421 | 0.876668 | 0.298601 | 0 | 0 | 0 | 0 | 0.003618 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.064516 | 0.419355 | 0 | 0.419355 | 0.032258 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
a6038b3f8e171ab48ac06bd1a445cb35301ad809 | 1,059 | py | Python | api/http/auth.py | dominicplouffe/connexionme | 85d10905b8e4dd535320cc314a7340e5d28f6e9e | [
"MIT"
] | null | null | null | api/http/auth.py | dominicplouffe/connexionme | 85d10905b8e4dd535320cc314a7340e5d28f6e9e | [
"MIT"
] | null | null | null | api/http/auth.py | dominicplouffe/connexionme | 85d10905b8e4dd535320cc314a7340e5d28f6e9e | [
"MIT"
] | null | null | null | from flask import Blueprint, request
from services import account
from functools import wraps
from status import finish
auth = Blueprint('auth', __name__, url_prefix='/auth')
def requires_auth(f):
@wraps(f)
def decorated(*args, **kwargs):
token = request.form.get('token', request.args.get('token', None))
print(token)
_acc = None
if token is not None:
_acc = account.get_acount_by_id(token)
if _acc is None:
return finish(
{},
401,
'You are missing the user token or it is invalid'
)
return f(*args, **kwargs)
return decorated
@auth.route('/login', methods=['POST'])
def login():
#TODO Users
email = request.form.get('email')
password = request.form.get('password')
_acc = account.validate_account(email, password)
if _acc is not None:
token = str(_acc['_id'])
return finish({'token': token}, 200)
return finish({}, 404, msg='Username or password was not found.')
| 25.829268 | 74 | 0.599622 | 132 | 1,059 | 4.681818 | 0.44697 | 0.053398 | 0.067961 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011873 | 0.28423 | 1,059 | 40 | 75 | 26.475 | 0.80343 | 0.009443 | 0 | 0 | 0 | 0 | 0.126908 | 0 | 0 | 0 | 0 | 0.025 | 0 | 0 | null | null | 0.1 | 0.133333 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
a60a5e2c81cee18f19368d6d10d396d0396c103a | 71 | py | Python | 1002_AreaCircle.py | DiegoC386/EJERCICIOS-URI | b2e12b6420ea16a9060726b988ea1b35cbf312c2 | [
"MIT"
] | null | null | null | 1002_AreaCircle.py | DiegoC386/EJERCICIOS-URI | b2e12b6420ea16a9060726b988ea1b35cbf312c2 | [
"MIT"
] | null | null | null | 1002_AreaCircle.py | DiegoC386/EJERCICIOS-URI | b2e12b6420ea16a9060726b988ea1b35cbf312c2 | [
"MIT"
] | null | null | null | R=float(input())
PI=3.14159
A=(PI)*(R**2)
print("A = {:.4f}".format(A)) | 17.75 | 29 | 0.549296 | 15 | 71 | 2.6 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 0.070423 | 71 | 4 | 29 | 17.75 | 0.469697 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
a61d2ff865e88390f5a4f3b8c8da817f93d2e1a6 | 355 | py | Python | Calculator/operations/__init__.py | Shakil-1501/Session12 | 0ae424a6ac8949b21b89aa1548b6e997a2e4c133 | [
"MIT"
] | null | null | null | Calculator/operations/__init__.py | Shakil-1501/Session12 | 0ae424a6ac8949b21b89aa1548b6e997a2e4c133 | [
"MIT"
] | null | null | null | Calculator/operations/__init__.py | Shakil-1501/Session12 | 0ae424a6ac8949b21b89aa1548b6e997a2e4c133 | [
"MIT"
] | null | null | null | from .sine import *
from .cose import *
from .tane import *
from .tanhe import *
from .ee import *
from .loge import *
from .sigmoide import *
from .relue import *
from .softmaxe import *
__all__ = (sine.__all__ + cose.__all__ + tane.__all__ + tanhe.__all__ + ee.__all__ + loge.__all__ + sigmoide.__all__ + relue.__all__ + softmaxe.__all__)
| 25.357143 | 151 | 0.698592 | 46 | 355 | 4.521739 | 0.26087 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194366 | 355 | 13 | 152 | 27.307692 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.9 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
a61da44df1b4b01ba75ab9949fbc2b4e355911f3 | 121 | py | Python | Basico/ex002.py | Gustavsantos/python1 | 5520f2d2ee591157942008fdcd6bd42eb521f1a6 | [
"MIT"
] | null | null | null | Basico/ex002.py | Gustavsantos/python1 | 5520f2d2ee591157942008fdcd6bd42eb521f1a6 | [
"MIT"
] | null | null | null | Basico/ex002.py | Gustavsantos/python1 | 5520f2d2ee591157942008fdcd6bd42eb521f1a6 | [
"MIT"
] | null | null | null | nome = input('What is your name? ').capitalize()  # str() and .upper() before .capitalize() were redundant
print('Nice to meet you, \033[1;34m{}\033[m!'.format(nome))
a6282b4f65c892360e83ec0d77ab379ee0c3ed7f | 93 | py | Python | hello_world.py | economicactivist/profiles-rest-api | 298dfc81eea353c3db9c43b3514fd95b5e557e0c | [
"MIT"
] | null | null | null | hello_world.py | economicactivist/profiles-rest-api | 298dfc81eea353c3db9c43b3514fd95b5e557e0c | [
"MIT"
] | 5 | 2021-04-08T21:53:59.000Z | 2022-02-10T15:12:26.000Z | hello_world.py | economicactivist/profiles-rest-api | 298dfc81eea353c3db9c43b3514fd95b5e557e0c | [
"MIT"
] | null | null | null | print('hello world!')
a = "cat dog fish monkey".split()
for animal in a:
print(animal)
| 13.285714 | 33 | 0.645161 | 15 | 93 | 4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204301 | 93 | 6 | 34 | 15.5 | 0.810811 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
a6663eeb6a9d4fc56840a93b4bd7b39452405d4b | 68 | py | Python | AKDSFramework/__init__.py | DeepNet-Research/AKDSFramework | a0b9fc2466b228ea6053b9f03e1d497462567a96 | [
"MIT"
] | 13 | 2020-11-03T00:07:43.000Z | 2021-12-31T04:18:03.000Z | AKDSFramework/__init__.py | DeepNet-Research/AKDSFramework | a0b9fc2466b228ea6053b9f03e1d497462567a96 | [
"MIT"
] | 2 | 2021-03-06T12:20:33.000Z | 2021-03-07T04:26:29.000Z | AKDSFramework/__init__.py | DeepNet-Research/AKDSFramework | a0b9fc2466b228ea6053b9f03e1d497462567a96 | [
"MIT"
] | 2 | 2020-11-03T23:13:53.000Z | 2021-02-24T13:16:02.000Z | from . import structure, applications, error
__version__ = '1.0.0'
| 17 | 44 | 0.735294 | 9 | 68 | 5.111111 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 0.147059 | 68 | 3 | 45 | 22.666667 | 0.741379 | 0 | 0 | 0 | 0 | 0 | 0.073529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
a685097fe33f7cb01421c71457d5d7a015032c99 | 110 | py | Python | Python/Curos_Python_curemvid/Exercicios_dos_videos/Ex005.py | Jhonattan-rocha/Meus-primeiros-programas | f5971b66c0afd049b5d0493e8b7a116b391d058e | [
"MIT"
] | null | null | null | Python/Curos_Python_curemvid/Exercicios_dos_videos/Ex005.py | Jhonattan-rocha/Meus-primeiros-programas | f5971b66c0afd049b5d0493e8b7a116b391d058e | [
"MIT"
] | null | null | null | Python/Curos_Python_curemvid/Exercicios_dos_videos/Ex005.py | Jhonattan-rocha/Meus-primeiros-programas | f5971b66c0afd049b5d0493e8b7a116b391d058e | [
"MIT"
] | null | null | null | n = int(input("Digite um número: "))
print("O anecessor do número é {} e o sucessor é {} ".format(n-1, n+1))
| 27.5 | 71 | 0.618182 | 21 | 110 | 3.238095 | 0.714286 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.181818 | 110 | 3 | 72 | 36.666667 | 0.733333 | 0 | 0 | 0 | 0 | 0 | 0.572727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
a68c75b3fe9c052bf3c96144a5e5dab51f6f733d | 139 | py | Python | .ycm_extra_conf.py | jaccovanschaik/Into | 1292c913957fdcbebf291e82ee3896b1b9883e51 | [
"MIT"
] | null | null | null | .ycm_extra_conf.py | jaccovanschaik/Into | 1292c913957fdcbebf291e82ee3896b1b9883e51 | [
"MIT"
] | null | null | null | .ycm_extra_conf.py | jaccovanschaik/Into | 1292c913957fdcbebf291e82ee3896b1b9883e51 | [
"MIT"
] | null | null | null | import os
def FlagsForFile(filename, **kwargs):
return {
'flags': [
'-x', 'c11',
'-Wall', '-Wpointer-arith',
],
}
| 13.9 | 37 | 0.496403 | 13 | 139 | 5.307692 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020202 | 0.28777 | 139 | 9 | 38 | 15.444444 | 0.676768 | 0 | 0 | 0 | 0 | 0 | 0.215827 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 0.375 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
a68c82f5f2c8691b5638188a847f6e4c2977e2f2 | 731 | py | Python | zingkd/zingkd/models/notes.py | brukhabtu/zingk | 517bad148b39c4961885908a287cb1c7bd8b65c2 | [
"MIT"
] | null | null | null | zingkd/zingkd/models/notes.py | brukhabtu/zingk | 517bad148b39c4961885908a287cb1c7bd8b65c2 | [
"MIT"
] | 3 | 2016-07-09T22:03:40.000Z | 2016-07-28T21:15:14.000Z | zingkd/zingkd/models/notes.py | brukhabtu/zingk | 517bad148b39c4961885908a287cb1c7bd8b65c2 | [
"MIT"
] | null | null | null |
from zingkd.models.meta import Base
from sqlalchemy import Column, func
from sqlalchemy.types import Unicode, Integer, DateTime, Boolean, UnicodeText
from sqlalchemy.orm import relationship, backref
class Note(Base):
__tablename__ = 'notes'
__table_args__ = {'schema': 'zingk'}
note_id = Column(Integer, primary_key=True)
title = Column(Unicode(255), nullable=False)
created = Column(DateTime, nullable=False, server_default=func.now())
modified = Column(DateTime, nullable=False, server_default=func.now())
colour = Column(Unicode(6), nullable=False, server_default='CCCCCC')
#archived = Column(Boolean, nullable=False, server_default=False)
content = Column(UnicodeText, nullable=False)
| 34.809524 | 77 | 0.746922 | 88 | 731 | 6.034091 | 0.5 | 0.146893 | 0.143126 | 0.195857 | 0.177024 | 0.177024 | 0.177024 | 0.177024 | 0 | 0 | 0 | 0.00639 | 0.143639 | 731 | 20 | 78 | 36.55 | 0.841853 | 0.087551 | 0 | 0 | 0 | 0 | 0.033083 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
a6a1b8a6a8be457c8cc3f0ac959f22dbed709405 | 247 | py | Python | facialrecognition/accounts/models.py | Moran98/facial-recognition | da4711c5d0fb77d77a5dffb20d85bfa9072f7933 | [
"MIT"
] | 34 | 2020-01-27T15:07:25.000Z | 2021-09-25T17:07:37.000Z | facialrecognition/accounts/models.py | Moran98/facial-recognition | da4711c5d0fb77d77a5dffb20d85bfa9072f7933 | [
"MIT"
] | 26 | 2020-01-29T12:24:42.000Z | 2022-03-12T00:16:44.000Z | facialrecognition/accounts/models.py | Moran98/facial-recognition | da4711c5d0fb77d77a5dffb20d85bfa9072f7933 | [
"MIT"
] | 7 | 2020-01-27T11:42:11.000Z | 2021-04-05T04:42:22.000Z | from django.db import models
class UserProfile(models.Model):
title = models.CharField(max_length=25, default='NULL USER')
img = models.ImageField(upload_to="images/", default='null.jpg')
def __str__(self):
return self.title | 27.444444 | 68 | 0.708502 | 33 | 247 | 5.121212 | 0.787879 | 0.130178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009709 | 0.165992 | 247 | 9 | 69 | 27.444444 | 0.81068 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
a6a31c4462617cfbcd1bf1328dcedfb388cc1b51 | 375 | py | Python | api_discovery/modules/__init__.py | TommyLike/huawei-api-discovery | 01409e2d8f3c1942d3cb92dee5f96782d925ab3c | [
"MIT"
] | null | null | null | api_discovery/modules/__init__.py | TommyLike/huawei-api-discovery | 01409e2d8f3c1942d3cb92dee5f96782d925ab3c | [
"MIT"
] | null | null | null | api_discovery/modules/__init__.py | TommyLike/huawei-api-discovery | 01409e2d8f3c1942d3cb92dee5f96782d925ab3c | [
"MIT"
] | null | null | null | from api_discovery.modules import logging
from api_discovery.modules import api
from api_discovery.modules import database
from api_discovery.modules import task
log = logging.Logging()
discovery_api = api.DiscoveryAPI()
db = database.Database()
taskhub = task.TaskHub()
def init_app(app):
for module in [log, discovery_api, db, taskhub]:
module.init_app(app)
| 25 | 52 | 0.773333 | 53 | 375 | 5.320755 | 0.339623 | 0.099291 | 0.22695 | 0.326241 | 0.411348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141333 | 375 | 14 | 53 | 26.785714 | 0.875776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
a6a3b9bf76f0958453838d9fe554c384889d8f3d | 3,053 | py | Python | woomba/robot.py | AnotherKamila/lego-wet-roomba | 796403146d068c73ef053fe8db431da6e7a68253 | [
"MIT"
] | 1 | 2017-08-20T21:46:40.000Z | 2017-08-20T21:46:40.000Z | woomba/robot.py | AnotherKamila/lego-wet-roomba | 796403146d068c73ef053fe8db431da6e7a68253 | [
"MIT"
] | 14 | 2017-08-20T21:14:29.000Z | 2018-02-18T15:31:34.000Z | woomba/robot.py | AnotherKamila/lego-wet-roomba | 796403146d068c73ef053fe8db431da6e7a68253 | [
"MIT"
] | null | null | null | """A Controller class for composable things controlling robots, plus utility stuff.
An important abstraction here is Command: a function that returns immediately.
It may issue sub-Commands to motors and such, and do quick calculations, but
it must be fast. It must not block, ever. It will be called often enough to
safely process inputs. Usually it will cause something to happen until another
Command is issued. Example: telling the motor to run is a Command.
Note that:
- EV3 motor commands are Commands
- Commands can call other Commands
- Commands *cannot* call non-Commands (would break the Command contract)
- all fast non-blocking functions are technically Commands, so those can be used
"""
import time
class Controller:
"""A base class for composable things that control robots.
The methods step() and halt() must be implemented and their contracts
*must* be fulfilled. See their docs.
"""
def step(self):
"""Performs one step. Will be called often enough. Must return immediately.
This function *must not* block. It is a Command.
It can assume that it will be called frequently enough to process events in time.
"""
raise NotImplementedError
def halt(self):
"""Stops motors and turns off all devices that need to be turned off when the program exits.
Will be called at program exit. Must make sure that the robot is safe & stationary.
"""
raise NotImplementedError
# def step_coroutine(gen):
# """Allows a generator/coroutine to be used as a Controller's step(). Helpful for sequential code.
#
# You can then do the following:
#
# # TODO this code cannot possibly be correct, fix it
# def step(self):
# # wait for 1s
# now = time.now()
# while (time.now() - now) < 1000:
# # do nothing
# yield # "return" immediately to fulfill the Command contract
#
# # after 1s, do something for 10s
# now = time.now()
# while (time.now() - now) < 10000:
# self.do_something()
# yield # and yield control back to the main loop to fulfill Command contract
# """
# # TODO this code cannot possibly be correct, fix it and test with the above example
# # one especially wrong thing is: what happens when the generator "runs out"? does it throw an exception? or what?
# def step():
# gen()
# return step
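# A working variant of the idea sketched above (an added sketch, not the
# original author's code): wrap a generator function so that each call to
# step() advances it by exactly one yield, preserving the Command contract.
def step_coroutine(gen_fn):
    gen = gen_fn()  # instantiate the generator once
    def step():
        try:
            next(gen)  # run until the next yield, then return immediately
        except StopIteration:
            pass  # the coroutine finished; further steps are no-ops
    return step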
def mainloop(controller, freq=50):
"""A convenience function that you can safely use as the main loop.
controller: an instance of Controller (duck-typed). *Must* fulfill the Controller contracts.
freq: the approximate frequency at which controller.step() will be called, in Hz. The actual frequency will be lower.
"""
try:
dt = 1.0/freq
while True:
controller.step()
time.sleep(dt)
except KeyboardInterrupt:
pass
finally:
controller.halt() # This is _very important_ :D
| 37.231707 | 121 | 0.658041 | 417 | 3,053 | 4.808153 | 0.450839 | 0.017955 | 0.029925 | 0.02394 | 0.087781 | 0.064838 | 0.064838 | 0.0399 | 0.0399 | 0 | 0 | 0.008086 | 0.270881 | 3,053 | 81 | 122 | 37.691358 | 0.892633 | 0.818539 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0 | 1 | 0.1875 | false | 0.0625 | 0.0625 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
a6a91201d3a0ac40123ae08adfeb0a278ecf3119 | 819 | py | Python | dcp_py/day062/day062.py | sraaphorst/Daily-Coding-Problem | acfcf83a66099f3e4b69e2447600b8208cd9ab1b | [
"MIT"
] | null | null | null | dcp_py/day062/day062.py | sraaphorst/Daily-Coding-Problem | acfcf83a66099f3e4b69e2447600b8208cd9ab1b | [
"MIT"
] | null | null | null | dcp_py/day062/day062.py | sraaphorst/Daily-Coding-Problem | acfcf83a66099f3e4b69e2447600b8208cd9ab1b | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# day062.py
# By Sebastian Raaphorst, 2019.
from math import factorial
def num_paths(n: int, m: int) -> int:
"""
Calculate the number of paths in a rectangle of dimensions n x m from the top left to the bottom right.
This is incredibly easy: we have to make n - 1 moves to the right, and m - 1 moves down.
Thus, we must make a total of n - 1 + m - 1 moves, and choose n - 1 of them to be to the right.
The remaining ones will be down.
:param n: one dimension of the matrix (doesn't really matter which, due to symmetry)
:param m: the other dimension of the matrix
:return: the number of possible paths through the matrix
>>> num_paths(2, 2)
2
>>> num_paths(5, 5)
70
"""
return factorial(n - 1 + m - 1)//factorial(n-1)//factorial(m-1)
| 31.5 | 107 | 0.656899 | 145 | 819 | 3.689655 | 0.496552 | 0.018692 | 0.041122 | 0.014953 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039152 | 0.251526 | 819 | 25 | 108 | 32.76 | 0.833605 | 0.752137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
a6ad0e5039f9621813b9301c9789e70168ed48b1 | 144 | py | Python | lang/Python/concurrent-computing-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/concurrent-computing-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null | lang/Python/concurrent-computing-2.py | ethansaxenian/RosettaDecode | 8ea1a42a5f792280b50193ad47545d14ee371fb7 | [
"MIT"
] | null | null | null |
from concurrent import futures
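# Added note: on start methods that spawn a fresh interpreter ('spawn' on
# Windows and recent macOS), process-pool code like this should be guarded
# by if __name__ == '__main__': to avoid re-execution on import.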
with futures.ProcessPoolExecutor() as executor:
_ = list(executor.map(print, 'Enjoy Rosetta Code'.split()))
| 24 | 62 | 0.756944 | 17 | 144 | 6.352941 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131944 | 144 | 5 | 63 | 28.8 | 0.864 | 0 | 0 | 0 | 0 | 0 | 0.126761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
a6d65973f237ddd3c1bee4a471549249a3f6c1f1 | 2,556 | py | Python | env/Lib/site-packages/OpenGL/GLES2/EXT/multiview_draw_buffers.py | 5gconnectedbike/Navio2 | 8c3f2b5d8bbbcea1fc08739945183c12b206712c | [
"BSD-3-Clause"
] | 210 | 2016-04-09T14:26:00.000Z | 2022-03-25T18:36:19.000Z | env/Lib/site-packages/OpenGL/GLES2/EXT/multiview_draw_buffers.py | 5gconnectedbike/Navio2 | 8c3f2b5d8bbbcea1fc08739945183c12b206712c | [
"BSD-3-Clause"
] | 72 | 2016-09-04T09:30:19.000Z | 2022-03-27T17:06:53.000Z | env/Lib/site-packages/OpenGL/GLES2/EXT/multiview_draw_buffers.py | 5gconnectedbike/Navio2 | 8c3f2b5d8bbbcea1fc08739945183c12b206712c | [
"BSD-3-Clause"
] | 64 | 2016-04-09T14:26:49.000Z | 2022-03-21T11:19:47.000Z | '''OpenGL extension EXT.multiview_draw_buffers
This module customises the behaviour of the
OpenGL.raw.GLES2.EXT.multiview_draw_buffers to provide a more
Python-friendly API
Overview (from the spec)
This extension allows selecting among draw buffers as the
rendering target. This may be among multiple primary buffers
pertaining to platform-specific stereoscopic or multiview displays
or among offscreen framebuffer object color attachments.
To remove any artificial limitations imposed on the number of
possible buffers, draw buffers are identified not as individual
enums, but as pairs of values consisting of an enum representing
buffer locations such as COLOR_ATTACHMENT_EXT or MULTIVIEW_EXT,
and an integer representing an identifying index of buffers of this
location. These (location, index) pairs are used to specify draw
buffer targets using a new DrawBuffersIndexedEXT call.
Rendering to buffers of location MULTIVIEW_EXT associated with the
context allows rendering to multiview buffers created by EGL using
EGL_EXT_multiview_window for stereoscopic displays.
Rendering to COLOR_ATTACHMENT_EXT buffers allows implementations to
increase the number of potential color attachments indefinitely to
renderbuffers and textures.
This extension allows the traditional quad buffer stereoscopic
rendering method that has proven effective by indicating a left or
right draw buffer and rendering to each accordingly, but is also
dynamic enough to handle an arbitrary number of color buffer targets
all using the same shader. This grants the user maximum flexibility
as well as a familiar interface.
The official definition of this extension is available here:
http://www.opengl.org/registry/specs/EXT/multiview_draw_buffers.txt
'''
from OpenGL import platform, constant, arrays
from OpenGL import extensions, wrapper
import ctypes
from OpenGL.raw.GLES2 import _types, _glgets
from OpenGL.raw.GLES2.EXT.multiview_draw_buffers import *
from OpenGL.raw.GLES2.EXT.multiview_draw_buffers import _EXTENSION_NAME
def glInitMultiviewDrawBuffersEXT():
'''Return boolean indicating whether this extension is available'''
from OpenGL import extensions
return extensions.hasGLExtension( _EXTENSION_NAME )
# INPUT glDrawBuffersIndexedEXT.indices size not checked against n
# INPUT glDrawBuffersIndexedEXT.location size not checked against n
glDrawBuffersIndexedEXT=wrapper.wrapper(glDrawBuffersIndexedEXT).setInputArraySize(
'indices', None
).setInputArraySize(
'location', None
)
### END AUTOGENERATED SECTION | 43.322034 | 83 | 0.823944 | 346 | 2,556 | 6.014451 | 0.465318 | 0.037001 | 0.038443 | 0.055262 | 0.084094 | 0.062951 | 0.062951 | 0.045171 | 0.045171 | 0 | 0 | 0.001821 | 0.140454 | 2,556 | 59 | 84 | 43.322034 | 0.94538 | 0.851721 | 0 | 0 | 0 | 0 | 0.026786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.5 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
a6e17278e6e2908941b708144e21a12c3551c371 | 516 | py | Python | Python-programs/solve 2X2 Matrix .py | manavbansalcoder/Hacktoberfest2021 | ba20770f070bf9c0b02a8fe2bcbeb72cd559e428 | [
"CC0-1.0"
] | null | null | null | Python-programs/solve 2X2 Matrix .py | manavbansalcoder/Hacktoberfest2021 | ba20770f070bf9c0b02a8fe2bcbeb72cd559e428 | [
"CC0-1.0"
] | null | null | null | Python-programs/solve 2X2 Matrix .py | manavbansalcoder/Hacktoberfest2021 | ba20770f070bf9c0b02a8fe2bcbeb72cd559e428 | [
"CC0-1.0"
] | null | null | null | print("FINDING Square OF a MATRIX")
print(" ")
order=int(input("enter order of square matrix="))
if (order==2):
a11=int(input("enter a11="))
print(" ")
a12=int(input("enter a12="))
print(" ")
a21=int(input("enter a21="))
print(" ")
a22=int(input("enter a22="))
print(" ")
A11=(a11*a11)+(a12*a21)
A12=(a11*a12)+(a12*a22)
A21=(a21*a11)+(a22*a21)
A22=(a21*a12)+(a22*a22)
print(" ")
print("RESULT=[",A11,A12,"]")
print(" [",A21,A22,"]")
| 25.8 | 49 | 0.515504 | 70 | 516 | 3.8 | 0.242857 | 0.150376 | 0.244361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164975 | 0.236434 | 516 | 19 | 50 | 27.157895 | 0.510152 | 0 | 0 | 0.315789 | 0 | 0 | 0.253876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.473684 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
4707b4d7f941097b9eae2aa1a1af3bbf0a103d6c | 3,350 | py | Python | configr/test.py | ArneBachmann/configr | ed41d18467f370abcadf6f8661f10486ae1cfd00 | [
"MIT"
] | 1 | 2016-12-30T22:18:24.000Z | 2016-12-30T22:18:24.000Z | configr/test.py | ArneBachmann/configr | ed41d18467f370abcadf6f8661f10486ae1cfd00 | [
"MIT"
] | 8 | 2017-05-15T12:11:04.000Z | 2018-02-20T20:50:15.000Z | configr/test.py | ArneBachmann/configr | ed41d18467f370abcadf6f8661f10486ae1cfd00 | [
"MIT"
] | null | null | null | import doctest, json, logging, os, unittest, sys
sys.path.insert(0, "..")
import configr
class Tests(unittest.TestCase):
''' Test suite. '''
def tests_metadata(_):
_.assertTrue(hasattr(configr, "version"))
_.assertTrue(hasattr(configr.version, "__version__"))
_.assertTrue(hasattr(configr.version, "__version_info__"))
def test_details(_):
try:
for file in (f for f in os.listdir() if f.endswith(configr.EXTENSION + ".bak")):
try: os.unlink(file)
except Exception: pass
except Exception: pass
c = configr.Configr("myapp", data = {"d": 2}, defaults = {"e": 1})
_.assertEqual("myapp", c.__name)
_.assertEqual("myapp", c["__name"])
try: c["c"]; raise Exception("Should have crashed") # not existing data via dictionary access case
except Exception: pass
try: c.c; raise Exception("Should have crashed") # not existing data via attribute access case
except Exception: pass
_.assertEqual(2, c.d) # pre-defined data case
_.assertEqual(1, c["e"]) # default case
# Create some contents
c.a = "a"
c["b"] = "b"
_.assertEqual("a", c["a"])
_.assertEqual("b", c.b)
# Save to file
value = c.saveSettings(location = os.getcwd(), keys = ["a", "b"], clientCodeLocation = __file__) # CWD should be "tests" folder
_.assertIsNotNone(value.path)
_.assertIsNone(value.error)
_.assertEqual(value, c.__savedTo)
_.assertEqual(os.getcwd(), os.path.dirname(c.__savedTo.path))
_.assertEqual("a", c["a"])
_.assertEqual("b", c.b)
name = c.__savedTo.path
with open(name, "r") as fd: contents = json.loads(fd.read())
_.assertEqual({"a": "a", "b": "b"}, contents)
# Now load and see if all is correct
c = configr.Configr("myapp")
value = c.loadSettings(location = os.getcwd(), data = {"c": 33}, clientCodeLocation = __file__)
_.assertEqual(name, c.__loadedFrom.path)
_.assertIsNotNone(value.path)
_.assertIsNone(value.error)
_.assertEqual(value, c.__loadedFrom)
_.assertEqual(c.a, "a")
_.assertEqual(c["b"], "b")
_.assertEqual(c.c, 33)
os.unlink(value.path)
value = c.loadSettings(location = "bla", clientCodeLocation = __file__) # provoke error
_.assertIsNone(value.path)
_.assertIsNotNone(value.error)
# Now test removal
del c["b"]
del c.a
_.assertEqual(1, len(c.keys()))
_.assertIn("c", c.keys())
# Now stringify
_.assertEqual("<Configr c: 33>", str(c))
_.assertEqual("<Configr c: 33>", repr(c))
# Testing map functions: already done in doctest
# TODO test ignores option for saveSettings
def testNested(_):
c = configr.Configr(data = {"a": "a"}, defaults = configr.Configr(data = {"b": "b"}, defaults = configr.Configr(data = {"c": "c"})))
_.assertEqual("a", c.a)
_.assertEqual("b", c["b"])
_.assertEqual("c", c.c)
_.assertTrue("a" in c)
_.assertTrue("b" in c)
_.assertTrue("c" in c)
_.assertFalse("d" in c)
def load_tests(loader, tests, ignore):
''' The function name suffix "_tests" tells the unittest module about a test case. '''
tests.addTests(doctest.DocTestSuite(configr))
return tests
if __name__ == "__main__":
logging.basicConfig(level = logging.DEBUG, stream = sys.stderr, format = "%(asctime)-25s %(levelname)-8s %(name)-12s | %(message)s")
print(unittest.main())
| 37.222222 | 136 | 0.64209 | 429 | 3,350 | 4.81352 | 0.335664 | 0.00678 | 0.036804 | 0.045036 | 0.222276 | 0.194189 | 0.153995 | 0.153995 | 0.113317 | 0.0523 | 0 | 0.007008 | 0.190746 | 3,350 | 89 | 137 | 37.640449 | 0.754703 | 0.13403 | 0 | 0.169014 | 0 | 0 | 0.083797 | 0 | 0 | 0 | 0 | 0.011236 | 0.507042 | 1 | 0.056338 | false | 0.056338 | 0.028169 | 0 | 0.112676 | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
4712052d73125063f1e6bbdd64f996c9f81e1a5e | 73 | py | Python | tests/fixtures/custom_functions.py | helobinvn/zuul | dda840b82934c82b9783bdc29f2f0626883cc47e | [
"Apache-2.0"
] | null | null | null | tests/fixtures/custom_functions.py | helobinvn/zuul | dda840b82934c82b9783bdc29f2f0626883cc47e | [
"Apache-2.0"
] | 84 | 2015-10-22T11:21:02.000Z | 2022-03-31T02:24:54.000Z | tests/fixtures/custom_functions.py | helobinvn/zuul | dda840b82934c82b9783bdc29f2f0626883cc47e | [
"Apache-2.0"
] | 21 | 2016-02-10T11:20:37.000Z | 2022-01-05T02:53:37.000Z | def select_debian_node(item, params):
params['ZUUL_NODE'] = 'debian'
| 24.333333 | 37 | 0.712329 | 10 | 73 | 4.9 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136986 | 73 | 2 | 38 | 36.5 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0.205479 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5b4ffb37f9d334eb6f038c45f439a859439c4bcf | 308 | py | Python | algorithms/0001-FizzBuzz/fizzbuzz.py | ivamluz/Algorithms-and-Data-Structures | cffacd758adf134dabc63cdc107c8b485b00b1c1 | [
"Apache-2.0"
] | 1 | 2018-11-06T22:43:07.000Z | 2018-11-06T22:43:07.000Z | algorithms/0001-FizzBuzz/fizzbuzz.py | ivamluz/Algorithms-and-Data-Structures | cffacd758adf134dabc63cdc107c8b485b00b1c1 | [
"Apache-2.0"
] | null | null | null | algorithms/0001-FizzBuzz/fizzbuzz.py | ivamluz/Algorithms-and-Data-Structures | cffacd758adf134dabc63cdc107c8b485b00b1c1 | [
"Apache-2.0"
] | null | null | null | def fizzbuzz(number):
if number <= 0:
return None
is_multiple_of_3 = (number % 3 == 0)
is_multiple_of_5 = (number % 5 == 0)
if is_multiple_of_3 and is_multiple_of_5:
return "FizzBuzz"
if is_multiple_of_3:
return "Fizz"
if is_multiple_of_5:
return "Buzz"
return str(number)
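# Example (added): [fizzbuzz(n) for n in range(1, 6)] == ['1', '2', 'Fizz', '4', 'Buzz']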
| 17.111111 | 43 | 0.668831 | 51 | 308 | 3.686275 | 0.313725 | 0.319149 | 0.382979 | 0.207447 | 0.361702 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046809 | 0.237013 | 308 | 17 | 44 | 18.117647 | 0.753191 | 0 | 0 | 0 | 0 | 0 | 0.051948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5b51125a1a090c9fc8cb4d27953bb17cb0175781 | 159 | py | Python | ex005.py | juniorpedroso/Exercicios-CEV-Python | 4adad3b6f3994cf61f9ead5564124b8b9c58d304 | [
"MIT"
] | null | null | null | ex005.py | juniorpedroso/Exercicios-CEV-Python | 4adad3b6f3994cf61f9ead5564124b8b9c58d304 | [
"MIT"
] | null | null | null | ex005.py | juniorpedroso/Exercicios-CEV-Python | 4adad3b6f3994cf61f9ead5564124b8b9c58d304 | [
"MIT"
] | null | null | null | n = int(input('Enter a number: '))
# prox = n + 1
# ant = n - 1
print('Analyzing the value {}, its predecessor is {} and its successor is {}.'.format(n, (n-1), (n+1)))
| 31.8 | 95 | 0.572327 | 29 | 159 | 3.137931 | 0.655172 | 0.087912 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.194969 | 159 | 4 | 96 | 39.75 | 0.679688 | 0.138365 | 0 | 0 | 0 | 0 | 0.585185 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
5b6e37dbd3c20d60ef9b914f23cdbf88dd577984 | 208 | py | Python | elastic/api/urls.py | deam91/elastic | f5fb20f9afb6669974567fd39e6e261c704d3c54 | [
"MIT"
] | null | null | null | elastic/api/urls.py | deam91/elastic | f5fb20f9afb6669974567fd39e6e261c704d3c54 | [
"MIT"
] | 2 | 2021-06-09T18:42:49.000Z | 2021-06-10T20:40:15.000Z | elastic/api/urls.py | deam91/elastic | f5fb20f9afb6669974567fd39e6e261c704d3c54 | [
"MIT"
] | null | null | null | from django.urls import path
from elastic.api.views import GoogleView, AppleView
urlpatterns = [
path('auth/google/verify/', GoogleView.as_view()),
path('auth/apple/verify/', AppleView.as_view())
]
| 23.111111 | 54 | 0.725962 | 27 | 208 | 5.518519 | 0.62963 | 0.107383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129808 | 208 | 8 | 55 | 26 | 0.823204 | 0 | 0 | 0 | 0 | 0 | 0.177885 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
5b84c00b0e49a8356164bb4726b200f671aa1898 | 96 | py | Python | stripe_shop/apps.py | Veilkrand/django-stripe-tutorial | 8f47e42175cab7ff7b41a84b8dc78fbab823b013 | [
"MIT"
] | 2 | 2021-06-01T10:11:03.000Z | 2022-01-13T13:31:43.000Z | stripe_shop/apps.py | Veilkrand/django-stripe-tutorial | 8f47e42175cab7ff7b41a84b8dc78fbab823b013 | [
"MIT"
] | null | null | null | stripe_shop/apps.py | Veilkrand/django-stripe-tutorial | 8f47e42175cab7ff7b41a84b8dc78fbab823b013 | [
"MIT"
] | 3 | 2021-02-09T14:41:32.000Z | 2022-03-08T01:22:39.000Z | from django.apps import AppConfig
class StripeShopConfig(AppConfig):
name = 'stripe_shop'
| 16 | 34 | 0.770833 | 11 | 96 | 6.636364 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 96 | 5 | 35 | 19.2 | 0.901235 | 0 | 0 | 0 | 0 | 0 | 0.114583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
5bb5bd266de1da3922f853af8a5e52abe7663f06 | 40 | py | Python | Python diye Programming sekha 1st/Sum function.py | mitul3737/My-Python-Programming-journey-from-Beginning-to-Data-Sciene-Machine-Learning-AI-Deep-Learning | ca2c15c597a64e5a7689ba3a44ce36a1c0828194 | [
"MIT"
] | 1 | 2021-05-02T20:30:33.000Z | 2021-05-02T20:30:33.000Z | Python diye Programming sekha 1st/Sum function.py | Mit382/My-Python-Programming-Journey-from-Beginning-to-Data-Sciene-Machine-Learning-AI-Deep-Learning | c19d84dfe6dcf496ff4527724f92e228579b6456 | [
"MIT"
] | null | null | null | Python diye Programming sekha 1st/Sum function.py | Mit382/My-Python-Programming-Journey-from-Beginning-to-Data-Sciene-Machine-Learning-AI-Deep-Learning | c19d84dfe6dcf496ff4527724f92e228579b6456 | [
"MIT"
] | 1 | 2021-05-02T20:30:29.000Z | 2021-05-02T20:30:29.000Z | li=[1,2,3]
result=sum(li)
print(result)
| 10 | 14 | 0.675 | 9 | 40 | 3 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 0.075 | 40 | 3 | 15 | 13.333333 | 0.648649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5bc291d11ce1dc852f69923b4d905a55362d2478 | 206 | py | Python | brr/article/models.py | nilq/brr | d2b8166d2816750d379b62f408c20b0aebcbc075 | [
"Unlicense"
] | null | null | null | brr/article/models.py | nilq/brr | d2b8166d2816750d379b62f408c20b0aebcbc075 | [
"Unlicense"
] | null | null | null | brr/article/models.py | nilq/brr | d2b8166d2816750d379b62f408c20b0aebcbc075 | [
"Unlicense"
] | null | null | null | from django.db import models
class Article(models.Model):
article_id = models.AutoField(primary_key=True)
article_heading = models.CharField(max_length=256)
article_body = models.TextField()
| 29.428571 | 54 | 0.762136 | 27 | 206 | 5.62963 | 0.740741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017045 | 0.145631 | 206 | 6 | 55 | 34.333333 | 0.846591 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
5bdf6fd05d5925e9877b48b6e99ef373ee445894 | 463 | py | Python | PerceptronsORBitOperator.py | Jayasagar/neural-networks-and-fundamentals | 3ba730f723f92dd1eab4f88b4d968f66fa866e7b | [
"MIT"
] | null | null | null | PerceptronsORBitOperator.py | Jayasagar/neural-networks-and-fundamentals | 3ba730f723f92dd1eab4f88b4d968f66fa866e7b | [
"MIT"
] | null | null | null | PerceptronsORBitOperator.py | Jayasagar/neural-networks-and-fundamentals | 3ba730f723f92dd1eab4f88b4d968f66fa866e7b | [
"MIT"
] | null | null | null |
# Simulate the OR bit operator using a perceptron neural function
def ORBitwiseOperation(x1, x2, bias):
    weight = 1
    if x1*weight + x2*weight + bias <= 0:
        return 0
    return 1
# Bias = 0 and Weight = 1
print('ORBitwiseOperation: 0,0', ORBitwiseOperation(0, 0, 0))
print('ORBitwiseOperation: 0,1', ORBitwiseOperation(0, 1, 0))
print('ORBitwiseOperation: 1,0', ORBitwiseOperation(1, 0, 0))
print('ORBitwiseOperation: 1,1', ORBitwiseOperation(1, 1, 0))
| 30.866667 | 61 | 0.697624 | 63 | 463 | 5.126984 | 0.31746 | 0.28483 | 0.22291 | 0.154799 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078125 | 0.170626 | 463 | 14 | 62 | 33.071429 | 0.763021 | 0.177106 | 0 | 0 | 0 | 0 | 0.244681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.333333 | 0.444444 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
5bebef964e9e8885b1a9cb0a5ad817b18a2a5dc4 | 787 | py | Python | back/lollangCompiler/variable.py | wonjinYi/lollang-playground | 2df07ccc2518e6dc9f9aa00b2f38ad8d62cdb507 | [
"MIT"
] | 11 | 2022-03-12T06:41:29.000Z | 2022-03-15T06:15:52.000Z | back/lollangCompiler/variable.py | wonjinYi/lollang-playground | 2df07ccc2518e6dc9f9aa00b2f38ad8d62cdb507 | [
"MIT"
] | 4 | 2022-03-14T12:01:09.000Z | 2022-03-26T20:19:52.000Z | back/lollangCompiler/variable.py | wonjinYi/lollang-playground | 2df07ccc2518e6dc9f9aa00b2f38ad8d62cdb507 | [
"MIT"
] | 2 | 2022-03-12T03:49:20.000Z | 2022-03-15T05:41:41.000Z | from enum import Enum, auto
class TYPE(Enum):  # data type of a variable
    INT = auto()
    STR = auto()
class Variable:
    def __init__(self):
        self.var = dict()

    def insert(self, name):
        try:
            self.var[name]
        except KeyError:
            self.var[name] = [f"var_{len(self.var)}", TYPE.INT]

    def get(self, name):
        return self.var[name][0]

    def getType(self, name):
        return self.var[name][1]

    def setType(self, name, newType):
        self.var[name][1] = newType
class FunVariable(Variable):
    def __init__(self):
        super().__init__()

    def insert(self, name):
        try:
            self.var[name]
            raise SyntaxError
        except KeyError:
            self.var[name] = [f"fun_{len(self.var)}"] | 22.485714 | 63 | 0.533672 | 97 | 787 | 4.185567 | 0.350515 | 0.172414 | 0.189655 | 0.093596 | 0.403941 | 0.403941 | 0.152709 | 0.152709 | 0 | 0 | 0 | 0.005714 | 0.33291 | 787 | 35 | 64 | 22.485714 | 0.767619 | 0.003812 | 0 | 0.37037 | 0 | 0 | 0.048531 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.259259 | false | 0 | 0.037037 | 0.074074 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
754964acc6a0fdb092682d40acbf5109a1947819 | 59 | py | Python | demo.py | dimitrov-dimitar/competitive-programming | f2b022377baf6d4beff213fc513907b774c12352 | [
"MIT"
] | null | null | null | demo.py | dimitrov-dimitar/competitive-programming | f2b022377baf6d4beff213fc513907b774c12352 | [
"MIT"
] | null | null | null | demo.py | dimitrov-dimitar/competitive-programming | f2b022377baf6d4beff213fc513907b774c12352 | [
"MIT"
] | null | null | null | print('demo')
for x in range(1000):
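    # end=' ' keeps all the numbers on one line, separated by spaces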
    print(x, end=' ')
| 11.8 | 21 | 0.559322 | 10 | 59 | 3.3 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 0.220339 | 59 | 4 | 22 | 14.75 | 0.630435 | 0 | 0 | 0 | 0 | 0 | 0.084746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
f334c373215cba10b03d648d2ce22cbe3103246a | 484 | py | Python | create_db.py | jmprkables/webapp | 72cdb4b0a8c2d4822e36280421776ca706f724ab | [
"MIT"
] | null | null | null | create_db.py | jmprkables/webapp | 72cdb4b0a8c2d4822e36280421776ca706f724ab | [
"MIT"
] | null | null | null | create_db.py | jmprkables/webapp | 72cdb4b0a8c2d4822e36280421776ca706f724ab | [
"MIT"
] | null | null | null | import rethinkdb as r
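# open a connection to the RethinkDB server before issuing any queries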
conn = r.connect("192.168.6.26", 28015)
try:
r.db_drop("hackiiitd").run(conn)
print("deleted old db")
except:
print("inital creation")
r.db_create("hackiiitd").run(conn)
r.db("hackiiitd").table_create("fall").run(conn)
print("."),
r.db("hackiiitd").table_create("medicine").run(conn)
print("."),
r.db("hackiiitd").table_create("door", primary_key="door_id").run(conn)
print("."),
r.db("hackiiitd").table_create("photo").run(conn)
print("."),
print("done")
| 23.047619 | 71 | 0.683884 | 75 | 484 | 4.306667 | 0.426667 | 0.055728 | 0.185759 | 0.210526 | 0.396285 | 0.325077 | 0.325077 | 0.325077 | 0 | 0 | 0 | 0.031042 | 0.068182 | 484 | 20 | 72 | 24.2 | 0.685144 | 0 | 0 | 0.235294 | 0 | 0 | 0.270661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0.411765 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
f353dbb565c4d499694168557296ed62d9913084 | 1,687 | py | Python | artifact/gpufwd_image/to_copy/empirical_testing/src/configuration.py | tyler-utah/AlloyForwardProgress | d9c779129f5ff9f678b31d9b7ccb29c184ddb097 | [
"BSD-2-Clause"
] | null | null | null | artifact/gpufwd_image/to_copy/empirical_testing/src/configuration.py | tyler-utah/AlloyForwardProgress | d9c779129f5ff9f678b31d9b7ccb29c184ddb097 | [
"BSD-2-Clause"
] | null | null | null | artifact/gpufwd_image/to_copy/empirical_testing/src/configuration.py | tyler-utah/AlloyForwardProgress | d9c779129f5ff9f678b31d9b7ccb29c184ddb097 | [
"BSD-2-Clause"
] | 2 | 2020-05-02T00:28:27.000Z | 2020-05-12T20:36:49.000Z | # -----------------------------------------------------------------------
# configuration.py
# Author: Hari Raval
# -----------------------------------------------------------------------
# a Configuration object represents the parameters and settings used to generate an Amber test
class Configuration(object):
    # constructor of the Configuration object
    def __init__(self, timeout, workgroups, threads_per_workgroup, saturation_level, subgroup):
        # timeout represents the time (in ms) for which the Amber test will run
        self._timeout = timeout
        # number of workgroups to be used for the Amber test
        self._workgroups = workgroups
        # number of threads per workgroup
        self._threads_per_workgroup = threads_per_workgroup
        # type of saturation: 0 means no saturation, 1 means "round robin" saturation, 2 means "chunking" saturation
        self._saturation_level = saturation_level
        # subgroup usage: 0 means same subgroups, 1 means different subgroup and same workgroup
        self._subgroup = subgroup

    # getter method to retrieve the timeout
    def get_timeout(self):
        return self._timeout

    # getter method to retrieve the number of workgroups
    def get_number_of_workgroups(self):
        return self._workgroups

    # getter method to retrieve the number of threads per workgroup
    def get_threads_per_workgroup(self):
        return self._threads_per_workgroup

    # getter method to retrieve the type of saturation (value of 0, 1 or 2) of the Amber test
    def get_saturation_level(self):
        return self._saturation_level
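    # getter method to retrieve the subgroup setting (value of 0 or 1)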
    def get_subgroup_setting(self):
        return self._subgroup
| 41.146341 | 116 | 0.662715 | 202 | 1,687 | 5.351485 | 0.306931 | 0.064755 | 0.123034 | 0.081406 | 0.149861 | 0.061055 | 0.061055 | 0 | 0 | 0 | 0 | 0.006006 | 0.210433 | 1,687 | 40 | 117 | 42.175 | 0.805556 | 0.531713 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.352941 | false | 0 | 0 | 0.294118 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
f3666df8e2a3d53709ecdedaf5a73aa5a39694e9 | 2,016 | py | Python | tests/test_event_hooks.py | eruvanos/dynafile | 207425b073a963b01c677b697e74842b429c004a | [
"MIT"
] | null | null | null | tests/test_event_hooks.py | eruvanos/dynafile | 207425b073a963b01c677b697e74842b429c004a | [
"MIT"
] | null | null | null | tests/test_event_hooks.py | eruvanos/dynafile | 207425b073a963b01c677b697e74842b429c004a | [
"MIT"
] | null | null | null | from dynafile import Dynafile, Event
class Observer:
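    # minimal stream listener that records every call it receives for later assertions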
    def __init__(self):
        self.calls = []

    def __call__(self, *args, **kwargs):
        self.calls.append((args, kwargs))

    @property
    def latest(self):
        return self.calls[-1] if self.calls else None


def test_put_item_schedules_event(tmp_path):
    db = Dynafile(tmp_path / "db")
    observer = Observer()
    db.add_stream_listener(observer)

    db.put_item(item={"PK": "1", "SK": "aa"})

    args, kwargs = observer.latest
    assert args[0] == Event(action="PUT", new={"PK": "1", "SK": "aa"}, old=None)


def test_put_item_overwrite_schedules_event(tmp_path):
    db = Dynafile(tmp_path / "db")
    db.put_item(item={"PK": "1", "SK": "aa", "old": True})
    observer = Observer()
    db.add_stream_listener(observer)

    db.put_item(item={"PK": "1", "SK": "aa", "old": False})

    args, kwargs = observer.latest
    assert args[0] == Event(action="PUT",
                            new={"PK": "1", "SK": "aa", "old": False},
                            old={"PK": "1", "SK": "aa", "old": True}
                            )


def test_delete_item_schedules_event(tmp_path):
    db = Dynafile(tmp_path / "db")
    db.put_item(item={"PK": "1", "SK": "aa"})
    observer = Observer()
    db.add_stream_listener(observer)

    db.delete_item(key={"PK": "1", "SK": "aa"})

    args, kwargs = observer.latest
    assert args[0] == Event(action="DELETE", new=None, old={"PK": "1", "SK": "aa"})


def test_batch_write_item_schedules_event(tmp_path):
    db = Dynafile(tmp_path / "db")
    observer = Observer()
    db.add_stream_listener(observer)

    with db.batch_writer() as writer:
        writer.put_item(item={"PK": "1", "SK": "aa"})
        writer.delete_item(key={"PK": "1", "SK": "aa"})

    args, kwargs = observer.calls[0]
    assert args[0] == Event(action="PUT", new={"PK": "1", "SK": "aa"}, old=None)
    args, kwargs = observer.calls[1]
    assert args[0] == Event(action="DELETE", new=None, old={"PK": "1", "SK": "aa"})
| 29.647059 | 83 | 0.579861 | 277 | 2,016 | 4.039711 | 0.176895 | 0.034853 | 0.058088 | 0.081323 | 0.7605 | 0.732797 | 0.707775 | 0.691689 | 0.651475 | 0.646113 | 0 | 0.013436 | 0.224702 | 2,016 | 67 | 84 | 30.089552 | 0.702495 | 0 | 0 | 0.456522 | 0 | 0 | 0.065476 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 1 | 0.152174 | false | 0 | 0.021739 | 0.021739 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f3794b8036c6961209cfd4976b19858f3b884290 | 120 | py | Python | abc/abc076/abc076b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | abc/abc076/abc076b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | abc/abc076/abc076b.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | N = int(input())
K = int(input())
result = 1
for _ in range(N):
    result = min(result * 2, result + K)
print(result)
| 15 | 40 | 0.591667 | 20 | 120 | 3.5 | 0.6 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021505 | 0.225 | 120 | 7 | 41 | 17.142857 | 0.731183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f37f063ffa0cd66c554790bd5c044be104ee8f35 | 190 | py | Python | Windows/WaitingRoom/waiting_room.py | SuperSecretPryncyMafia/PKN | 3de43c3cd2f63d29098adb92b563bbc4bd79bbf8 | [
"MIT"
] | null | null | null | Windows/WaitingRoom/waiting_room.py | SuperSecretPryncyMafia/PKN | 3de43c3cd2f63d29098adb92b563bbc4bd79bbf8 | [
"MIT"
] | null | null | null | Windows/WaitingRoom/waiting_room.py | SuperSecretPryncyMafia/PKN | 3de43c3cd2f63d29098adb92b563bbc4bd79bbf8 | [
"MIT"
] | null | null | null | from abc import ABC
from .view import View
from .model import Model
from .controller import Controller
class WaitingRoom(ABC):
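    # the MVC pieces are exposed as class attributes (presumably for subclasses to override)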
    Model = Model
    View = View
    Controller = Controller | 19 | 34 | 0.742105 | 25 | 190 | 5.64 | 0.32 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 190 | 10 | 35 | 19 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
f38eb28dc30d81aaf90ad337ef669d71ae24e45a | 307 | py | Python | twrap/__init__.py | itsnarsi/twrap | cc3128428e37fe0a363e5b18fd7fa0039a963365 | [
"MIT"
] | null | null | null | twrap/__init__.py | itsnarsi/twrap | cc3128428e37fe0a363e5b18fd7fa0039a963365 | [
"MIT"
] | null | null | null | twrap/__init__.py | itsnarsi/twrap | cc3128428e37fe0a363e5b18fd7fa0039a963365 | [
"MIT"
] | null | null | null | # @Author: Narsi Reddy <cibitaw1>
# @Date: 2018-09-19T11:53:44-05:00
# @Email: sainarsireddy@outlook.com
# @Last modified by: narsi
# @Last modified time: 2019-01-03T21:57:01-06:00
# import torch
# if '0.4.' not in torch.__version__:
# raise Exception('Only works currently with PyTorch ver0.4.x')
| 34.111111 | 67 | 0.697068 | 49 | 307 | 4.285714 | 0.857143 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157692 | 0.153094 | 307 | 8 | 68 | 38.375 | 0.65 | 0.944625 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f3a4cd8ed608d821130f28fbb04623bfe8e9ecb1 | 488 | py | Python | students/k33402/Sholomov_Dan/lab34/lab3/order/views.py | heidamn/ITMO_ICT_WebDevelopment_2020-2021 | 47eb0cdf7c7dbe8d071bc4fd3f1ac94848475e7b | [
"MIT"
] | null | null | null | students/k33402/Sholomov_Dan/lab34/lab3/order/views.py | heidamn/ITMO_ICT_WebDevelopment_2020-2021 | 47eb0cdf7c7dbe8d071bc4fd3f1ac94848475e7b | [
"MIT"
] | null | null | null | students/k33402/Sholomov_Dan/lab34/lab3/order/views.py | heidamn/ITMO_ICT_WebDevelopment_2020-2021 | 47eb0cdf7c7dbe8d071bc4fd3f1ac94848475e7b | [
"MIT"
] | null | null | null | from rest_framework import viewsets
from .serializers import OrdersSerializer, OrderedItemsSerializer
from .models import Order, OrderedItem
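# read-only DRF viewsets: they expose list/retrieve endpoints only, no create/update/delete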
class OrdersViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = OrdersSerializer
    queryset = Order.objects.all()
    permission_classes = []


class OrderedItemsViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = OrderedItemsSerializer
    queryset = OrderedItem.objects.all()
    permission_classes = []
| 30.5 | 66 | 0.776639 | 41 | 488 | 9.121951 | 0.512195 | 0.149733 | 0.203209 | 0.229947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161885 | 488 | 15 | 67 | 32.533333 | 0.914425 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
f3a6c053f2f12cb3261718563ed3380060f834ad | 462 | py | Python | initialize_countstats.py | handrews/gcd-django-vagrant-install | 9eae6baab82f5a9a88b674a7773cfd6bb69760d1 | [
"MIT"
] | 5 | 2015-05-18T13:37:52.000Z | 2021-06-11T10:46:15.000Z | initialize_countstats.py | handrews/gcd-django-vagrant-install | 9eae6baab82f5a9a88b674a7773cfd6bb69760d1 | [
"MIT"
] | 11 | 2015-09-23T19:44:42.000Z | 2018-04-22T13:26:37.000Z | initialize_countstats.py | handrews/gcd-django-vagrant-install | 9eae6baab82f5a9a88b674a7773cfd6bb69760d1 | [
"MIT"
] | 6 | 2015-10-08T19:40:37.000Z | 2017-08-11T00:50:58.000Z | import django
from apps.stats.models import CountStats
from apps.stddata.models import Language, Country
django.setup()
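# seed per-language and per-country stats, but only when none exist yet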
if CountStats.objects.filter(language__isnull=False).count() == 0:
    for i in Language.objects.all():
        CountStats.objects.init_stats(language=i)
if CountStats.objects.filter(country__isnull=False).count() == 0:
    for i in Country.objects.all():
        CountStats.objects.init_stats(country=i)
CountStats.objects.init_stats()
| 27.176471 | 66 | 0.753247 | 63 | 462 | 5.412698 | 0.365079 | 0.249267 | 0.184751 | 0.228739 | 0.346041 | 0.346041 | 0.134897 | 0 | 0 | 0 | 0 | 0.004975 | 0.12987 | 462 | 16 | 67 | 28.875 | 0.843284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
f3b02ec223e1a94f2363cc2fc7c80af0fbe1f7bc | 2,752 | py | Python | Homework_2/problem3.py | aefernandez/DS501 | 15799c8690c2f934d8e710db060e2b9e1b6afc8a | [
"MIT"
] | null | null | null | Homework_2/problem3.py | aefernandez/DS501 | 15799c8690c2f934d8e710db060e2b9e1b6afc8a | [
"MIT"
] | null | null | null | Homework_2/problem3.py | aefernandez/DS501 | 15799c8690c2f934d8e710db060e2b9e1b6afc8a | [
"MIT"
] | null | null | null | from mrjob.job import MRJob
#-------------------------------------------------------------------------
'''
Problem 3:
In this problem, you will get familiar with the mapreduce framework.
Please install the following python package:
    * mrjob
mrjob is a library for writing Python programs that run on Hadoop.
To install mrjob using pip, type `pip install mrjob` in the terminal.
Alternatively, you can install it from source code:
    (1) download the source code from: https://github.com/Yelp/mrjob
    (2) in the code folder, type "python setup.py install" in the terminal.
You can test the correctness of your code by typing `nosetests test3.py` in the terminal.
'''
#--------------------------
class CharCount(MRJob):
    ''' a character count class, which computes the count of characters in a text document '''
    #----------------------
    def mapper(self, in_key, in_value):
        '''
            mapper function, which processes a key-value pair in the data and generates intermediate key-value pair(s).
            It should yield one key-value pair whose value is the number of characters in the line (in_value)
            Input:
                in_key: the key of a data record (in this example, can be ignored)
                in_value: the value of a data record (in this example, a line of text from the document)
            Yield:
                (out_key, out_value): intermediate key-value pair(s).
                    In this example, the out_key can be anything,
                    and out_value is the count of characters in the line, an integer value
        '''
        #########################################
        ## INSERT YOUR CODE HERE
        # one possible solution: emit each line's character count under a shared key
        yield "char_count", len(in_value)
        #########################################
    #----------------------
    def reducer(self, in_key, in_values):
        '''
            reducer function, which processes a key and a list of values and produces output key-value pair(s)
            Input:
                in_key: the key of a data record (in this example, can be ignored)
                in_values: the python list of values (in this example, the integer counts from different lines of the document)
            Yield:
                (out_key, out_value): output key-value pair(s).
                    In this example, the out_key can be anything,
                    and out_value is the count of characters in the document, an integer value
        '''
        #########################################
        ## INSERT YOUR CODE HERE
        # one possible solution: sum the per-line counts into a single document total
        yield "char_count", sum(in_values)
        #########################################
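# standard mrjob entry point (not in the original template) so the job can be
# launched from the command line, e.g. `python problem3.py input.txt`
if __name__ == '__main__':
    CharCount.run()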
| 50.962963 | 144 | 0.52798 | 333 | 2,752 | 4.312312 | 0.333333 | 0.031337 | 0.050139 | 0.036212 | 0.402507 | 0.326602 | 0.326602 | 0.231198 | 0.197772 | 0.197772 | 0 | 0.002109 | 0.310683 | 2,752 | 53 | 145 | 51.924528 | 0.754876 | 0.534884 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037736 | 0 | 1 | 0.5 | false | 0 | 0.25 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
f3bdd237e7a92835cd9c312fe1cb570edab1eefa | 130 | py | Python | src/treepath/path/typing/json_types.py | monkeydevtools/treepath-python | 56f6cbf662f8a4c13f0c9e753a839fc9f6323dba | [
"Apache-2.0"
] | 2 | 2021-05-26T08:26:25.000Z | 2021-09-24T21:26:01.000Z | src/treepath/path/typing/json_types.py | monkeydevtools/treepath-python | 56f6cbf662f8a4c13f0c9e753a839fc9f6323dba | [
"Apache-2.0"
] | null | null | null | src/treepath/path/typing/json_types.py | monkeydevtools/treepath-python | 56f6cbf662f8a4c13f0c9e753a839fc9f6323dba | [
"Apache-2.0"
] | null | null | null | from typing import Union, Dict, List
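# recursive type alias for any JSON value; the string 'JsonTypes' is a forward reference to the alias itself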
JsonTypes = Union[Dict[str, 'JsonTypes'], List['JsonTypes'], str, int, float, bool, None]
| 21.666667 | 89 | 0.7 | 18 | 130 | 5.055556 | 0.666667 | 0.197802 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146154 | 130 | 5 | 90 | 26 | 0.81982 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
f3c45f7c70e73a743a700dfb086cc73a291cc487 | 1,751 | py | Python | chaospy/distributions/collection/gompertz.py | utsekaj42/chaospy | 0fb23cbb58eb987c3ca912e2a20b83ebab0514d0 | [
"MIT"
] | 333 | 2016-10-25T12:00:48.000Z | 2022-03-30T07:50:33.000Z | chaospy/distributions/collection/gompertz.py | utsekaj42/chaospy | 0fb23cbb58eb987c3ca912e2a20b83ebab0514d0 | [
"MIT"
] | 327 | 2016-09-25T16:29:41.000Z | 2022-03-30T03:26:27.000Z | chaospy/distributions/collection/gompertz.py | utsekaj42/chaospy | 0fb23cbb58eb987c3ca912e2a20b83ebab0514d0 | [
"MIT"
] | 74 | 2016-10-17T11:14:13.000Z | 2021-12-09T10:55:59.000Z | """Gompertz distribution."""
import numpy
from scipy import special
from ..baseclass import SimpleDistribution, ShiftScaleDistribution
class gompertz(SimpleDistribution):
"""Gompertz distribution."""
def __init__(self, c):
super(gompertz, self).__init__(dict(c=c))
def _pdf(self, x, c):
ex = numpy.exp(x)
return c*ex*numpy.exp(-c*(ex-1))
def _cdf(self, x, c):
return 1.0-numpy.exp(-c*(numpy.exp(x)-1))
def _ppf(self, q, c):
return numpy.log(1-1.0/c*numpy.log(1-q))
def _lower(self, c):
return 0.
def _upper(self, c):
return numpy.log(1+27.7/c)
class Gompertz(ShiftScaleDistribution):
"""
Gompertz distribution
Args:
shape (float, Distribution):
Shape parameter
scale (float, Distribution):
Scaling parameter
shift (float, Distribution):
Location parameter
Examples:
>>> distribution = chaospy.Gompertz(1.5)
>>> distribution
Gompertz(1.5)
>>> uloc = numpy.linspace(0, 1, 6)
>>> uloc
array([0. , 0.2, 0.4, 0.6, 0.8, 1. ])
>>> xloc = distribution.inv(uloc)
>>> xloc.round(3)
array([0. , 0.139, 0.293, 0.477, 0.729, 2.969])
>>> numpy.allclose(distribution.fwd(xloc), uloc)
True
>>> distribution.pdf(xloc).round(3)
array([1.5 , 1.379, 1.206, 0.967, 0.622, 0. ])
>>> distribution.sample(4).round(3)
array([0.535, 0.078, 1.099, 0.364])
"""
def __init__(self, shape, scale=1, shift=0):
super(Gompertz, self).__init__(
dist=gompertz(shape),
scale=scale,
shift=shift,
repr_args=[shape],
)
| 25.376812 | 66 | 0.5494 | 222 | 1,751 | 4.234234 | 0.328829 | 0.034043 | 0.028723 | 0.044681 | 0.034043 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074494 | 0.294689 | 1,751 | 68 | 67 | 25.75 | 0.68664 | 0.4506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.28 | false | 0 | 0.12 | 0.16 | 0.68 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
45f776e9532d54844e073e21417dafd35670fca1 | 104 | py | Python | dash-user-guide-components/dash_user_guide_components/_imports_.py | joelostblom/dash-docs | 7be5aed7795f61ac32375ce33a18046b8f2f5254 | [
"MIT"
] | 379 | 2017-06-21T14:35:52.000Z | 2022-03-20T01:47:14.000Z | dash-user-guide-components/dash_user_guide_components/_imports_.py | joelostblom/dash-docs | 7be5aed7795f61ac32375ce33a18046b8f2f5254 | [
"MIT"
] | 746 | 2017-06-21T19:58:17.000Z | 2022-03-23T14:51:24.000Z | dash-user-guide-components/dash_user_guide_components/_imports_.py | joelostblom/dash-docs | 7be5aed7795f61ac32375ce33a18046b8f2f5254 | [
"MIT"
] | 201 | 2017-06-21T21:53:19.000Z | 2022-03-17T13:23:55.000Z | from .PageMenu import PageMenu
from .Sidebar import Sidebar
__all__ = [
"PageMenu",
"Sidebar"
] | 14.857143 | 30 | 0.692308 | 11 | 104 | 6.181818 | 0.454545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.211538 | 104 | 7 | 31 | 14.857143 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3419940201b190d163c1916d0f6f399eb96f956f | 782 | py | Python | torchtext_th/data/sentence.py | phiradet/torchtext-th | 6dd795cbe712998ac175d6f2dd4cbfa40d227bdf | [
"MIT"
] | 3 | 2019-06-14T23:19:34.000Z | 2021-07-08T08:28:25.000Z | torchtext_th/data/sentence.py | phiradet/torchtext-th | 6dd795cbe712998ac175d6f2dd4cbfa40d227bdf | [
"MIT"
] | 3 | 2019-06-12T09:16:01.000Z | 2019-06-20T14:30:39.000Z | torchtext_th/data/sentence.py | phiradet/torchtext-th | 6dd795cbe712998ac175d6f2dd4cbfa40d227bdf | [
"MIT"
] | null | null | null | from typing import Iterator
from itertools import chain
from torchtext_th.data.token import Token
class Sentence(object):
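    # a sentence parsed from delimiter-separated tokens (default delimiter "|")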
    def __init__(self, raw_sentence: str, delim: str = "|") -> None:
        self.delim = delim
        self.tokens = []
        for t in raw_sentence.strip().split(delim):
            if len(t) > 0:
                self.tokens.append(Token(t))

    def to_chars(self, is_norm: bool = False) -> Iterator[str]:
        return chain.from_iterable([t.to_chars(is_norm) for t in self.tokens])

    def to_bmes_labels(self) -> Iterator[str]:
        return chain.from_iterable([t.to_bmes_labels() for t in self.tokens])

    def __len__(self):
        return len(self.tokens)

    def __str__(self):
        return self.delim.join([str(t) for t in self.tokens])
| 28.962963 | 78 | 0.641944 | 114 | 782 | 4.184211 | 0.359649 | 0.125786 | 0.050314 | 0.062893 | 0.268344 | 0.234801 | 0.155136 | 0.155136 | 0 | 0 | 0 | 0.001678 | 0.237852 | 782 | 26 | 79 | 30.076923 | 0.798658 | 0 | 0 | 0 | 0 | 0 | 0.001279 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.277778 | false | 0 | 0.166667 | 0.222222 | 0.722222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |