hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d7e72c05c821b5b2aa9289c98ad93b19e168816c | 1,487 | py | Python | issue_order/migrations/0005_auto_20170211_0033.py | jiejiang/courier | 6fdeaf041c77dba0f97e206adb7b0cded9674d3d | [
"Apache-2.0"
] | null | null | null | issue_order/migrations/0005_auto_20170211_0033.py | jiejiang/courier | 6fdeaf041c77dba0f97e206adb7b0cded9674d3d | [
"Apache-2.0"
] | 13 | 2020-02-12T02:56:24.000Z | 2022-01-13T01:23:08.000Z | issue_order/migrations/0005_auto_20170211_0033.py | jiejiang/courier | 6fdeaf041c77dba0f97e206adb7b0cded9674d3d | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.10.3 on 2017-02-11 00:33
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('issue_order', '0004_auto_20170210_2358'),
]
operations = [
migrations.AlterField(
model_name='courierbatch',
name='credit',
field=models.DecimalField(blank=True, decimal_places=2, max_digits=10, null=True, verbose_name='Credit'),
),
migrations.AlterField(
model_name='courierbatch',
name='rate',
field=models.DecimalField(blank=True, decimal_places=2, max_digits=10, null=True, verbose_name='Rate per Package'),
),
migrations.AlterField(
model_name='courierbatch',
name='state',
field=models.IntegerField(db_index=True, default=2, verbose_name='State'),
),
migrations.AlterField(
model_name='courierbatch',
name='system',
field=models.CharField(blank=True, choices=[('yunda', '\u97f5\u8fbe\u7ebf'), ('postal', '\u90ae\u653f\u7ebf')], db_index=True, max_length=32, null=True, verbose_name='System Name'),
),
migrations.AlterField(
model_name='courierbatch',
name='uuid',
field=models.CharField(blank=True, db_index=True, max_length=64, null=True, unique=True, verbose_name='UUID'),
),
]
| 36.268293 | 193 | 0.615333 | 162 | 1,487 | 5.475309 | 0.438272 | 0.11274 | 0.140924 | 0.163472 | 0.535513 | 0.425028 | 0.171364 | 0.171364 | 0.171364 | 0.171364 | 0 | 0.049416 | 0.251513 | 1,487 | 40 | 194 | 37.175 | 0.747529 | 0.04573 | 0 | 0.454545 | 1 | 0 | 0.146893 | 0.016243 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d7ec3af0754886496ab74d63932b32e5dda81ec2 | 547 | py | Python | resources/__init__.py | tiralinka/amazon_fires | bda8cb2a6910be17e9cbbfb4f214a2b019efd145 | [
"MIT"
] | 1 | 2021-03-08T02:40:00.000Z | 2021-03-08T02:40:00.000Z | resources/__init__.py | tiralinka/amazon_fires | bda8cb2a6910be17e9cbbfb4f214a2b019efd145 | [
"MIT"
] | null | null | null | resources/__init__.py | tiralinka/amazon_fires | bda8cb2a6910be17e9cbbfb4f214a2b019efd145 | [
"MIT"
] | 2 | 2021-01-17T13:51:31.000Z | 2021-05-27T22:22:49.000Z | """
This package contains modules built specifically for the project in question.
Below are described the modules and packages used in the notebooks of this project.
Modules
-----------
polynomials:
| This module groups functions and classes for generating polynomials
| whether by fitting data or by using orthogonal polynomials directly
| (Legendre polynomials).
plotter:
| This module groups functions for visualizations presented throughout the notebooks.
functk:
| This module exists outside this package and contains utility functions.
""" | 28.789474 | 87 | 0.778793 | 67 | 547 | 6.358209 | 0.597015 | 0.070423 | 0.075117 | 0.117371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162706 | 547 | 19 | 88 | 28.789474 | 0.930131 | 0.985375 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d7f2887e18f1782d6580198e20f7cacc72ac9027 | 292 | py | Python | elasticine/adapter.py | Drizzt1991/plasticine | be61baa88f53bdfa666d068a14f17ccc0cfe4d02 | [
"MIT"
] | null | null | null | elasticine/adapter.py | Drizzt1991/plasticine | be61baa88f53bdfa666d068a14f17ccc0cfe4d02 | [
"MIT"
] | null | null | null | elasticine/adapter.py | Drizzt1991/plasticine | be61baa88f53bdfa666d068a14f17ccc0cfe4d02 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from elasticsearch import Elasticsearch
class ElasticAdapter(object):
""" Abstraction in case we will need to add another or change elastic
driver.
"""
def __init__(self, hosts, **es_params):
self.es = Elasticsearch(hosts, **es_params)
| 22.461538 | 73 | 0.660959 | 35 | 292 | 5.342857 | 0.8 | 0.074866 | 0.139037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004425 | 0.226027 | 292 | 12 | 74 | 24.333333 | 0.823009 | 0.328767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
cc0fa341d500758a4666f87a2715835852c484f0 | 12,526 | py | Python | vb2py/PythonCard/tools/codeEditor/codeEditorR.rsrc.py | ceprio/xl_vb2py | 899fec0301140fd8bd313e8c80b3fa839b3f5ee4 | [
"BSD-3-Clause"
] | null | null | null | vb2py/PythonCard/tools/codeEditor/codeEditorR.rsrc.py | ceprio/xl_vb2py | 899fec0301140fd8bd313e8c80b3fa839b3f5ee4 | [
"BSD-3-Clause"
] | null | null | null | vb2py/PythonCard/tools/codeEditor/codeEditorR.rsrc.py | ceprio/xl_vb2py | 899fec0301140fd8bd313e8c80b3fa839b3f5ee4 | [
"BSD-3-Clause"
] | null | null | null | {'application':{'type':'Application',
'name':'codeEditor',
'backgrounds': [
{'type':'Background',
'name':'bgCodeEditor',
'title':'Code Editor R PythonCard Application',
'size':(400, 300),
'statusBar':1,
'visible':0,
'style':['resizeable'],
'menubar': {'type':'MenuBar',
'menus': [
{'type':'Menu',
'name':'menuFile',
'label':'&File',
'items': [
{'type':'MenuItem',
'name':'menuFileNewWindow',
'label':'New Window',
},
{'type':'MenuItem',
'name':'menuFileNew',
'label':'&New\tCtrl+N',
},
{'type':'MenuItem',
'name':'menuFileOpen',
'label':'&Open\tCtrl+O',
},
{'type':'MenuItem',
'name':'menuFileSave',
'label':'&Save\tCtrl+S',
},
{'type':'MenuItem',
'name':'menuFileSaveAs',
'label':'Save &As...',
},
{'type':'MenuItem',
'name':'fileSep1',
'label':'-',
},
{'type':'MenuItem',
'name':'menuFileCheckSyntax',
'label':'&Check Syntax (Module)\tAlt+F5',
'command':'checkSyntax',
},
{'type':'MenuItem',
'name':'menuFileRun',
'label':'&Run\tCtrl+R',
'command':'fileRun',
},
{'type':'MenuItem',
'name':'menuFileRunWithInterpreter',
'label':'Run with &interpreter\tCtrl+Shift+R',
'command':'fileRunWithInterpreter',
},
{'type':'MenuItem',
'name':'menuFileRunOptions',
'label':'Run Options...',
'command':'fileRunOptions',
},
{'type':'MenuItem',
'name':'fileSep2',
'label':'-',
},
{'type':'MenuItem',
'name':'menuFilePageSetup',
'label':'Page Set&up...',
},
{'type':'MenuItem',
'name':'menuFilePrint',
'label':'&Print...\tCtrl+P',
},
{'type':'MenuItem',
'name':'menuFilePrintPreview',
'label':'Print Pre&view',
},
{'type':'MenuItem',
          'name':'fileSep3',
'label':'-',
},
{'type':'MenuItem',
'name':'menuFileExit',
'label':'E&xit\tAlt+X',
'command':'exit',
},
]
},
{'type':'Menu',
'name':'Edit',
'label':'&Edit',
'items': [
{'type':'MenuItem',
'name':'menuEditUndo',
'label':'&Undo\tCtrl+Z',
},
{'type':'MenuItem',
'name':'menuEditRedo',
'label':'&Redo\tCtrl+Y',
},
{'type':'MenuItem',
'name':'editSep1',
'label':'-',
},
{'type':'MenuItem',
'name':'menuEditCut',
'label':'Cu&t\tCtrl+X',
},
{'type':'MenuItem',
'name':'menuEditCopy',
'label':'&Copy\tCtrl+C',
},
{'type':'MenuItem',
'name':'menuEditPaste',
'label':'&Paste\tCtrl+V',
},
{'type':'MenuItem',
'name':'editSep2',
'label':'-',
},
{'type':'MenuItem',
'name':'menuEditFind',
'label':'&Find...\tCtrl+F',
'command':'doEditFind',
},
{'type':'MenuItem',
'name':'menuEditFindNext',
'label':'&Find Next\tF3',
'command':'doEditFindNext',
},
{'type':'MenuItem',
'name':'menuEditFindFiles',
'label':'Find in Files...\tAlt+F3',
'command':'findFiles',
},
{'type':'MenuItem',
'name':'menuEditReplace',
'label':'&Replace...\tCtrl+H',
'command':'doEditFindReplace',
},
{'type':'MenuItem',
'name':'menuEditGoTo',
'label':'&Go To...\tCtrl+G',
'command':'doEditGoTo',
},
{'type':'MenuItem',
'name':'editSep3',
'label':'-',
},
{'type':'MenuItem',
'name':'menuEditReplaceTabs',
'label':'&Replace tabs with spaces',
'command':'doEditReplaceTabs',
},
{'type':'MenuItem',
'name':'editSep3',
'label':'-',
},
{'type':'MenuItem',
'name':'menuEditClear',
'label':'Cle&ar\tDel',
},
{'type':'MenuItem',
'name':'menuEditSelectAll',
'label':'Select A&ll\tCtrl+A',
},
{'type':'MenuItem',
'name':'editSep4',
'label':'-',
},
{'type':'MenuItem',
'name':'menuEditIndentRegion',
'label':'&Indent Region',
'command':'indentRegion',
},
{'type':'MenuItem',
'name':'menuEditDedentRegion',
'label':'&Dedent Region',
'command':'dedentRegion',
},
{'type':'MenuItem',
'name':'menuEditCommentRegion',
'label':'Comment &out region\tAlt+3',
'command':'commentRegion',
},
{'type':'MenuItem',
'name':'menuEditUncommentRegion',
'label':'U&ncomment region\tShift+Alt+3',
'command':'uncommentRegion',
},
]
},
{'type':'Menu',
'name':'menuView',
'label':'&View',
'items': [
{'type':'MenuItem',
'name':'menuViewWhitespace',
'label':'&Whitespace',
'checkable':1,
},
{'type':'MenuItem',
'name':'menuViewIndentationGuides',
'label':'Indentation &guides',
'checkable':1,
},
{'type':'MenuItem',
'name':'menuViewRightEdgeIndicator',
'label':'&Right edge indicator',
'checkable':1,
},
{'type':'MenuItem',
'name':'menuViewEndOfLineMarkers',
'label':'&End-of-line markers',
'checkable':1,
},
{'type':'MenuItem',
'name':'menuViewFixedFont',
'label':'&Fixed Font',
'enabled':0,
'checkable':1,
},
{'type':'MenuItem',
'name':'viewSep1',
'label':'-',
},
{'type':'MenuItem',
'name':'menuViewLineNumbers',
'label':'&Line Numbers',
'checkable':1,
'checked':1,
},
{'type':'MenuItem',
'name':'menuViewCodeFolding',
'label':'&Code Folding',
'checkable':1,
'checked':0,
},
]
},
{'type':'Menu',
'name':'menuFormat',
'label':'F&ormat',
'items': [
{'type':'MenuItem',
'name':'menuFormatStyles',
'label':'&Styles...',
'command':'doSetStyles',
},
{'type':'MenuItem',
'name':'menuFormatWrap',
'label':'&Wrap Lines',
'checkable':1,
},
]
},
{'type':'Menu',
'name':'menuScriptlet',
'label':'&Shell',
'items': [
{'type':'MenuItem',
'name':'menuScriptletShell',
'label':'&Shell Window\tF5',
},
{'type':'MenuItem',
'name':'menuScriptletNamespace',
'label':'&Namespace Window\tF6',
},
{'type':'MenuItem',
'name':'scriptletSep1',
'label':'-',
},
{'type':'MenuItem',
'name':'menuScriptletSaveShellSelection',
'label':'Save Shell Selection...',
},
{'type':'MenuItem',
'name':'menuScriptletRunScriptlet',
'label':'Run Scriptlet...',
},
]
},
{'type':'Menu',
'name':'menuHelp',
'label':'&Help',
'items': [
{'type':'MenuItem',
'name':'menuShellDocumentation',
'label':'&Shell Documentation...',
'command':'showShellDocumentation',
},
{'type':'MenuItem',
'name':'menuPythonCardDocumentation',
'label':'&PythonCard Documentation...\tF1',
'command':'showPythonCardDocumentation',
},
{'type':'MenuItem',
'name':'menuPythonDocumentation',
'label':'Python &Documentation...',
'command':'showPythonDocumentation',
},
{'type':'MenuItem',
'name':'helpSep1',
'label':'-',
},
{'type':'MenuItem',
'name':'menuHelpAbout',
'label':'&About codeEditor...',
'command':'doHelpAbout',
},
]
},
]
},
'strings': {
'saveAs':'Save As',
'about':'About codeEditor...',
'saveAsWildcard':'All files (*.*)|*.*|Python scripts (*.py;*.pyw)|*.pyw;*.PY;*.PYW;*.py|Text files (*.txt;*.text)|*.text;*.TXT;*.TEXT;*.txt|HTML and XML files (*.htm;*.html;*.xml)|*.htm;*.xml;*.HTM;*.HTML;*.XML;*.html',
'chars':'chars',
'gotoLine':'Goto line',
'lines':'lines',
'gotoLineNumber':'Goto line number:',
'documentChangedPrompt':'The text in the %s file has changed.\n\nDo you want to save the changes?',
'untitled':'Untitled',
'sample':'codeEditor sample',
'codeEditor':'codeEditor',
    'replaced':'Replaced %d occurrences',
'words':'words',
'openFile':'Open file',
'scriptletWildcard':'Python files (*.py)|*.py|All Files (*.*)|*.*',
'document':'Document',
},
'components': [
{'type':'Choice',
'name':'popComponentNames',
},
{'type':'Choice',
'name':'popComponentEvents',
},
{'type':'CodeEditor',
'name':'document',
'position':(0, 0),
'size':(250, 100),
},
] # end components
} # end background
] # end backgrounds
} }
| 35.284507 | 228 | 0.345042 | 685 | 12,526 | 6.309489 | 0.370803 | 0.161037 | 0.214715 | 0.053447 | 0.071726 | 0.041647 | 0.041647 | 0.041647 | 0 | 0 | 0 | 0.00737 | 0.490899 | 12,526 | 354 | 229 | 35.384181 | 0.670378 | 0.003593 | 0 | 0.278736 | 0 | 0.002874 | 0.368117 | 0.046325 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
cc126dbbfecd1ff026b9c6b831a847190b8423eb | 911 | py | Python | unittest_reinvent/diversity_filter_tests/test_murcko_scaffold_superfluous_addition.py | MolecularAI/reinvent-scoring | f7e052ceeffd29e17e1672c33607189873c82a45 | [
"MIT"
] | null | null | null | unittest_reinvent/diversity_filter_tests/test_murcko_scaffold_superfluous_addition.py | MolecularAI/reinvent-scoring | f7e052ceeffd29e17e1672c33607189873c82a45 | [
"MIT"
] | 2 | 2021-11-01T23:19:42.000Z | 2021-11-22T23:41:39.000Z | unittest_reinvent/diversity_filter_tests/test_murcko_scaffold_superfluous_addition.py | MolecularAI/reinvent-scoring | f7e052ceeffd29e17e1672c33607189873c82a45 | [
"MIT"
] | 2 | 2021-11-18T13:14:22.000Z | 2022-03-16T07:52:57.000Z | from reinvent_scoring.scoring.diversity_filters.curriculum_learning.update_diversity_filter_dto import \
UpdateDiversityFilterDTO
from unittest_reinvent.diversity_filter_tests.test_murcko_scaffold_base import BaseMurckoScaffoldFilter
from unittest_reinvent.diversity_filter_tests.fixtures import tanimoto_scaffold_filter_arrangement
from unittest_reinvent.fixtures.test_data import ASPIRIN
class TestMurckoScaffoldSuperfluousAddition(BaseMurckoScaffoldFilter):
def setUp(self):
super().setUp()
        # try to add a SMILES string that is already present
final_summary = tanimoto_scaffold_filter_arrangement([ASPIRIN], [1.0], [0])
self.update_dto = UpdateDiversityFilterDTO(final_summary, [])
def test_superfluous_addition(self):
self.scaffold_filter.update_score(self.update_dto)
self.assertEqual(2, self.scaffold_filter._diversity_filter_memory.number_of_scaffolds())
| 45.55 | 104 | 0.815587 | 102 | 911 | 6.921569 | 0.490196 | 0.084986 | 0.084986 | 0.082153 | 0.113314 | 0.113314 | 0 | 0 | 0 | 0 | 0 | 0.004988 | 0.119649 | 911 | 19 | 105 | 47.947368 | 0.875312 | 0.037322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.153846 | false | 0 | 0.307692 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
0bcce6782aa31e7dd2aa944b38edc0071b4e58f7 | 847 | py | Python | boardgame/connectfour/connectfourviewer.py | suryaambrose/boardgame | 459f9ae26ce571d34da88c295eb577b835f3ad13 | [
"MIT"
] | null | null | null | boardgame/connectfour/connectfourviewer.py | suryaambrose/boardgame | 459f9ae26ce571d34da88c295eb577b835f3ad13 | [
"MIT"
] | null | null | null | boardgame/connectfour/connectfourviewer.py | suryaambrose/boardgame | 459f9ae26ce571d34da88c295eb577b835f3ad13 | [
"MIT"
] | null | null | null | import os
import sys
from ..gameviewer import GameViewer
class ConnectFourViewer(GameViewer):
def __init__(self):
super(ConnectFourViewer, self).__init__([6,7])
def showState(self, state):
os.system("clear")
sys.stdout.write("x\y|")
for k in range(0, self.map_width):
sys.stdout.write("%d "%(k))
sys.stdout.write("\n")
for i in range(0, self.map_height):
sys.stdout.write(" %d "%(i))
for j in range(0, self.map_width):
sys.stdout.write("|")
if state._board[i][j] is not None:
sys.stdout.write(self.symbol_map[state._board[i][j]])
else:
sys.stdout.write(" ")
sys.stdout.write("|\n")
def waitForAMove(self):
while True:
try:
played_column = raw_input("Type where you wish to play (e.g. 1 for column 1):")
c = int(played_column)
break
except Exception, e:
print e
return c | 25.666667 | 83 | 0.654073 | 133 | 847 | 4.037594 | 0.481203 | 0.134078 | 0.208566 | 0.067039 | 0.154562 | 0.126629 | 0.126629 | 0.126629 | 0.126629 | 0 | 0 | 0.010145 | 0.18536 | 847 | 33 | 84 | 25.666667 | 0.768116 | 0 | 0 | 0 | 0 | 0 | 0.086085 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.033333 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0bd3fe284b31a90b04c8a385e5ac021eadf08bd7 | 577 | py | Python | tests/test_article.py | JohnKarima/news-hub | 261969fe949bf7efbdc6dabb502b7b9b9eecabac | [
"MIT"
] | null | null | null | tests/test_article.py | JohnKarima/news-hub | 261969fe949bf7efbdc6dabb502b7b9b9eecabac | [
"MIT"
] | null | null | null | tests/test_article.py | JohnKarima/news-hub | 261969fe949bf7efbdc6dabb502b7b9b9eecabac | [
"MIT"
] | null | null | null | import unittest
from app.models import Article
class ArticleTest(unittest.TestCase):
'''
Test Class to test the behaviour of the Article class
'''
def setUp(self):
'''
Set up method that will run before every Test
'''
self.new_article = Article('NewsDaily', 'NewsDailyTrue','Larry Madowo', 'Hummus...thoughts?','Literally talking about hummus sir','www.newsdaily.net','www.newsdaily.net/picOfHummus6', '2020/2/3', 'lorem gang et all')
def test_instance(self):
self.assertTrue(isinstance(self.new_article,Article))
| 33.941176 | 224 | 0.679376 | 73 | 577 | 5.328767 | 0.671233 | 0.061697 | 0.071979 | 0.107969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015119 | 0.197574 | 577 | 16 | 225 | 36.0625 | 0.825054 | 0.171577 | 0 | 0 | 0 | 0 | 0.359909 | 0.068337 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.285714 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
0bd5c6b04b7d3a4ccf8589e0b2129df29191d0f5 | 737 | py | Python | miqa/core/models/image.py | davidshq/miqa-1 | aeb5fbf40a65a6fdb82b6e3d3aff8fe47474792f | [
"Apache-2.0"
] | null | null | null | miqa/core/models/image.py | davidshq/miqa-1 | aeb5fbf40a65a6fdb82b6e3d3aff8fe47474792f | [
"Apache-2.0"
] | null | null | null | miqa/core/models/image.py | davidshq/miqa-1 | aeb5fbf40a65a6fdb82b6e3d3aff8fe47474792f | [
"Apache-2.0"
] | null | null | null | from pathlib import Path
from uuid import uuid4
from django.db import models
from django_extensions.db.models import TimeStampedModel
class Image(TimeStampedModel, models.Model):
class Meta:
indexes = [models.Index(fields=['scan', 'name'])]
ordering = ['name']
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
scan = models.ForeignKey('Scan', related_name='images', on_delete=models.CASCADE)
raw_path = models.CharField(max_length=500, blank=False, unique=True)
name = models.CharField(max_length=255, blank=False)
@property
def path(self) -> Path:
return Path(self.raw_path)
@property
def size(self) -> int:
return self.path.stat().st_size
| 29.48 | 85 | 0.700136 | 96 | 737 | 5.28125 | 0.53125 | 0.039448 | 0.071006 | 0.094675 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013289 | 0.183175 | 737 | 24 | 86 | 30.708333 | 0.828904 | 0 | 0 | 0.111111 | 0 | 0 | 0.029851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0.111111 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
0bee95548e6fcafe6a5b00d7403872593616c0ae | 11,892 | py | Python | src/app/todos/routes.py | alexzanderr/metro.digital | 71a58b417f6498808224a6de96578bde76f89c60 | [
"MIT"
] | null | null | null | src/app/todos/routes.py | alexzanderr/metro.digital | 71a58b417f6498808224a6de96578bde76f89c60 | [
"MIT"
] | 1 | 2021-12-16T22:11:25.000Z | 2021-12-16T22:11:25.000Z | src/app/todos/routes.py | alexzanderr/flask_web_app | 71a58b417f6498808224a6de96578bde76f89c60 | [
"MIT"
] | null | null | null | """
# type: ignore
type ignore is to tell LSP-pyright to ignore the line
because it sometimes thinks there are errors when, at runtime, there are none
"""
from .validation import validate_password_check
from .validation import validate_email
from .validation import validate_password
from .validation import validate_username
from json import dumps
from flask import render_template
from flask import Blueprint
from flask import request
from flask import url_for
from flask import redirect
# mongo db client stuff
from ..mongodb_client import mongodb
from ..mongodb_client import CollectionInvalid
from ..mongodb_client import ObjectId
from ..mongodb_client import collection_exists
from ..mongodb_client import get_db_name
from ..mongodb_client import collection_create
from ..mongodb_client import get_collection
from ..mongodb_client import create_or_get_collection
from ..routes_utils import json_response
from string import ascii_letters, digits
from random import choice, randint
from datetime import datetime, timedelta
import hashlib
todos = Blueprint(
"todos",
__name__,
url_prefix="/todos",
# not working
# template_folder="templates/todos"
)
# document template
# todo = {
# text: 'yeaaah',
# timestamp: 1639492801.10111,
# datetime: '14.12.2021-16:40:01',
# completed: false
# }
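The `timestamp`/`datetime` pair in the template above can be reproduced with the standard library; the code below simply re-derives the example values from the comment:

```python
from datetime import datetime

# rebuild the example moment from the template comment above
dt = datetime(2021, 12, 14, 16, 40, 1)
assert dt.strftime("%d.%m.%Y-%H:%M:%S") == "14.12.2021-16:40:01"
# timestamp() and fromtimestamp() round-trip the same moment
assert datetime.fromtimestamp(dt.timestamp()) == dt
```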
todos_collection_name = "todos"
todos_collection = create_or_get_collection(todos_collection_name)
# document template
# user = {
# "username": "alexzander",
# "password": "37djw7dh237dh2yudhja1721hg2", # hashed
# "eamil": "alexxander18360@gmail.com",
# "creation_timestamp": datetime.timestamp(datetime.now()),
# "creation_datetime": datetime.now().strftime("%d.%m.%Y-%H:%M:%S")
# }
users_collection_name = "users"
users_collection = create_or_get_collection(users_collection_name)
# ('_id', 1)]},
# 'username_1': {'v': 2, 'key': [('username', 1)], 'unique': True}}
users_unique_keys = [{
"name": "username",
"exists": False
}]
for _, value in users_collection.index_information().items():
for unique_key in value["key"]:
for users_unique_key in users_unique_keys:
if unique_key[0] == users_unique_key["name"]:
users_unique_key["exists"] = True
for users_unique_key in users_unique_keys:
if not users_unique_key["exists"]:
users_collection.create_index([
(users_unique_key["name"], 1)
], unique=True)
register_tokens_collection_name = "register_tokens"
register_tokens_collection = create_or_get_collection(register_tokens_collection_name)
# ('_id', 1)]},
# 'username_1': {'v': 2, 'key': [('username', 1)], 'unique': True}}
tokens_unique_keys = [{
"name": "token",
"exists": False
}]
for _, value in register_tokens_collection.index_information().items():
for unique_key in value["key"]:
for tokens_unique_key in tokens_unique_keys:
if unique_key[0] == tokens_unique_key["name"]:
tokens_unique_key["exists"] = True
for tokens_unique_key in tokens_unique_keys:
    if not tokens_unique_key["exists"]:
        register_tokens_collection.create_index([
(tokens_unique_key["name"], 1)
], unique=True)
# users_collection.create_index([("username", 1)], unique=True)
@todos.route("/")
def todos_root():
# TODO
# add authentication with accounts
todos_collection = get_collection(todos_collection_name)
todo_list = todos_collection.find()
return render_template("todos/index.html", todo_list=todo_list)
def hash_password(password: str):
    # sha256 expects bytes as input, hence the .encode()
return hashlib.sha256(password.encode()).hexdigest()
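As a sanity check on hash_password above: SHA-256 always yields a 64-character hex digest, and the digest of the well-known test vector "abc" is fixed:

```python
import hashlib

digest = hashlib.sha256("abc".encode()).hexdigest()
assert len(digest) == 64  # sha256 yields 32 bytes -> 64 hex chars
assert digest == "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
```

For a real deployment, a salted password hash (bcrypt, scrypt, argon2) would generally be preferable to bare SHA-256.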
def check_hash_of_password(username: str, password: str):
_user = users_collection.find_one({"username": username})
_hashed_password = hash_password(password)
return _user["password"] == _hashed_password # type: ignore
@todos.route("/login", methods=["GET", "POST"])
def todos_login():
"""
Function: todos_login
Summary: this function returns a login page with a form
Returns: render_template("todos/login.html")
"""
method = request.method
if method == "POST":
# then create a new user in database and encrypt
# the password
# then redirect to /todos based on the content that the user has in todos database
# return render_template ?
pass
else:
# GET
# if the user is already authenticated
# then redirect to /todos page
# else
# return below
return render_template("todos/login.html")
@todos.route("/mongo/add", methods=["POST"])
def mongo_add():
todos_collection.insert_one({
"text": request.form["text"],
"timestamp": datetime.timestamp(datetime.now()),
"datetime": datetime.now().strftime("%d.%m.%Y-%H:%M:%S"),
"completed": False
})
# return dict(todo), {
# "Refresh": "1; url={}".format(url_for("todos"))
# }
return redirect("/todos")
@todos.route("/mongo/complete/<oid>")
def mongo_complete(oid):
requested_todo = todos_collection.find_one({
"_id": ObjectId(oid)
})
completed = True
if requested_todo["completed"]: # type: ignore
completed = False
todos_collection.update_one(
requested_todo,
{"$set": {"completed": completed}})
# todos_collection.replace_one(requested_todo, {"something": "else"})
# 61b6247e165b109454a32c1b
# 61b6247e165b109454a32c1b
return redirect("/todos")
@todos.route("/mongo/delete/<oid>")
def mongo_delete(oid):
requested_todo = todos_collection.find_one({
"_id": ObjectId(oid)
})
todos_collection.delete_one(requested_todo)
return redirect(url_for("todos"))
@todos.route('/mongo/delete/all')
def mongo_delete_all():
todos_collection.delete_many({})
    return redirect(url_for('todos.todos_root'))
# @todos.route("/", methods=['POST'])
# @todos.route("/<component_name>", methods=['POST'])
# def graphql_query(component_name="app"):
# return str(component_name)
todos_api = Blueprint(
"todos_api",
__name__,
url_prefix="/todos/api")
@todos_api.route("/")
def todos_api_root():
return {"message": "salutare"}, 200
@todos_api.route("/mongo/add", methods=["POST"])
def todos_api_mongo_add():
json_from_request = request.get_json()
todo = {
"text": json_from_request["text"], # type: ignore
"timestamp": datetime.timestamp(datetime.now()),
"datetime": datetime.now().strftime("%d.%m.%Y-%H:%M:%S"),
"completed": False
}
todos_collection.insert_one(todo)
# the above function insert a _id key
todo["oid"] = str(todo["_id"])
del todo["_id"]
return json_response(todo, 200)
# PATCH request
# The PATCH method applies partial modifications to a resource;
# in this case the partial modification is toggling the todo's
# `completed` flag
@todos_api.route("/mongo/complete/<oid>", methods=["PATCH"])
def todos_api_mongo_complete(oid):
    requested_todo = todos_collection.find_one({
        "_id": ObjectId(oid)
    })
    completed = True
    if requested_todo["completed"]:  # type: ignore
        completed = False
    todos_collection.update_one(
        requested_todo,
        {"$set": {"completed": completed}}
    )
    requested_todo["oid"] = str(requested_todo["_id"])  # type: ignore
    requested_todo["completed"] = completed  # type: ignore
    del requested_todo["_id"]  # type: ignore
    return json_response(requested_todo, 200)  # type: ignore
# TODO: send the oid in the POST data body
# instead of making it part of the URL, so that no one can see
# the oid
@todos_api.route("/mongo/delete/<oid>", methods=["DELETE"])
def todos_api_mongo_delete(oid):
    requested_todo = todos_collection.find_one({
        "_id": ObjectId(oid)
    })
    todos_collection.delete_one(requested_todo)
    requested_todo["oid"] = str(requested_todo["_id"])  # type: ignore
    del requested_todo["_id"]  # type: ignore
    return json_response(requested_todo, 200)  # type: ignore
def generate_random_register_token():
    return "".join([choice(ascii_letters + digits) for _ in range(30)])


def get_new_register_token():
    """
    Function: get_new_register_token()
    Summary: gets a new token based on what is already in the database
    Returns: a new token that is not yet in the database
    """
    brand_new_token = generate_random_register_token()
    while register_tokens_collection.find_one({"token": brand_new_token}):
        brand_new_token = generate_random_register_token()
    return brand_new_token
@todos.route("/register", methods=["GET", "POST"])
def todos_register():
    method = request.method
    if method == "POST":
        # create a new user in the database with a hashed password,
        # then redirect to /todos based on the content that the user
        # has in the todos database
        # get the data and token from the request data body
        json_from_request: dict = request.get_json()  # type: ignore
        username = json_from_request["username"]
        email = json_from_request["email"]
        password = json_from_request["password"]
        password_check = json_from_request["password_check"]
        remember_me = json_from_request["remember_me"]
        register_token = json_from_request["register_token"]
        if not register_tokens_collection.find_one({"token": register_token}):
            return {
                "message": "cannot register, register token is not in the database"
            }, 403
        users_collection.insert_one({
            "username": username,
            "password": hash_password(password),  # hashed
            "email": email,
            "creation_timestamp": datetime.timestamp(datetime.now()),
            "creation_datetime": datetime.now().strftime("%d.%m.%Y-%H:%M:%S")
        })
        # you can't redirect or render HTML from here, because the
        # request is made from ajax, not from the browser itself
        return {"message": "success", "redirectTo": "/todos"}, 200
        # alternatively: redirect to the login page,
        # or automatically log the user in after registration
    else:
        # GET:
        # if the user is already authenticated,
        # redirect to the /todos page;
        # otherwise return the template below
        return render_template("todos/register.html")
@todos_api.post("/register/validation")
def todos_api_register():
    """
    Function: todos_api_register
    Returns: json with validated input
    """
    json_from_request: dict = request.get_json()  # type: ignore
    username = json_from_request["username"]
    email = json_from_request["email"]
    password = json_from_request["password"]
    password_check = json_from_request["password_check"]
    remember_me = json_from_request["remember_me"]
    # run the individual validators
    results = {
        "username": validate_username(username),
        "password": validate_password(password),
        "email": validate_email(email),
        "password_check": validate_password_check(password, password_check),
        "register_token": None
    }
    all_passed = True
    for k, v in results.items():
        if k != "register_token" and not v["passed"]:
            all_passed = False
            break
    if all_passed:
        new_token = get_new_register_token()
        results["register_token"] = new_token
        register_tokens_collection.insert_one({
            "token": new_token,
            "expiration_timestamp": datetime.timestamp(
                datetime.now() + timedelta(minutes=2))
        })
    # TODO: add a check for whether the username already exists in the database
    return json_response(results, 200)
    # return {
    #     "username": username,
    #     "email": email,
    #     "password": password,
    #     "password_check": password_check,
    #     "remember_me": remember_me
    # }, 200
| 30.414322 | 93 | 0.670114 | 1,451 | 11,892 | 5.248794 | 0.168849 | 0.034139 | 0.029543 | 0.02416 | 0.464154 | 0.389049 | 0.353072 | 0.32261 | 0.318146 | 0.298845 | 0 | 0.013751 | 0.21115 | 11,892 | 390 | 94 | 30.492308 | 0.798103 | 0.255466 | 0 | 0.354839 | 0 | 0 | 0.116357 | 0.004848 | 0 | 0 | 0 | 0.017949 | 0 | 1 | 0.073733 | false | 0.087558 | 0.105991 | 0.013825 | 0.262673 | 0.013825 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
0bff94692e4bbe3d3e04ebd669ece2fd2be0847b | 489 | py | Python | nrm_django/nrm_site/settings/test.py | 18F/NRM-Grants-Agreements | 7b9016e034b75a2237f7c70ba539b542108c335e | [
"CC0-1.0"
] | 5 | 2020-11-18T20:00:02.000Z | 2021-04-16T23:50:07.000Z | nrm_django/nrm_site/settings/test.py | USDAForestService/NRM-Grants-Agreements | 7b9016e034b75a2237f7c70ba539b542108c335e | [
"CC0-1.0"
] | 210 | 2021-04-28T16:26:34.000Z | 2022-03-14T16:31:21.000Z | nrm_django/nrm_site/settings/test.py | USDAForestService/NRM-Grants-Agreements | 7b9016e034b75a2237f7c70ba539b542108c335e | [
"CC0-1.0"
] | 2 | 2021-07-06T20:57:27.000Z | 2021-07-07T13:06:46.000Z |
import os

from .base import *  # noqa

import dj_database_url

SECRET_KEY = "test mode"

database_url = os.getenv("DATABASE_URL")
if database_url:
    DATABASES = {"default": dj_database_url.parse(database_url)}
else:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "nrm_test",
            "HOST": "postgres",
            "PORT": "5432",
            "USER": "postgres",
            "PASSWORD": "postgres",
        }
    }
| 21.26087 | 64 | 0.554192 | 50 | 489 | 5.22 | 0.62 | 0.252874 | 0.099617 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01173 | 0.302658 | 489 | 22 | 65 | 22.227273 | 0.753666 | 0.00818 | 0 | 0 | 0 | 0 | 0.269151 | 0.060041 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.055556 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
042a1dff477ec006dda477d8738dfe23bcc7b467 | 9,358 | py | Python | modules.py | callistachang/CycleGAN-Music-Transfer | 928e87b4bebc4da1dcf7c43936d2c10fe76170f1 | [
"MIT"
] | null | null | null | modules.py | callistachang/CycleGAN-Music-Transfer | 928e87b4bebc4da1dcf7c43936d2c10fe76170f1 | [
"MIT"
] | 1 | 2021-07-07T13:36:18.000Z | 2021-07-07T13:36:18.000Z | modules.py | callistachang/CycleGAN-Music-Transfer | 928e87b4bebc4da1dcf7c43936d2c10fe76170f1 | [
"MIT"
] | null | null | null |
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers, Input
from collections import namedtuple


def abs_criterion(pred, target):
    return tf.reduce_mean(tf.abs(pred - target))


def mae_criterion(pred, target):
    return tf.reduce_mean((pred - target) ** 2)


def sce_criterion(logits, labels):
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)
    )


def softmax_criterion(logits, labels):
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
    )


def padding(x, p=3):
    return tf.pad(x, [[0, 0], [p, p], [p, p], [0, 0]], "REFLECT")
class InstanceNorm(layers.Layer):
    def __init__(self, input_shape):
        super(InstanceNorm, self).__init__()
        self.scale = tf.Variable(
            initial_value=np.random.normal(1.0, 0.02, input_shape),
            trainable=True,
            name="SCALE",
            dtype=tf.float32,
        )
        self.offset = tf.Variable(
            initial_value=np.zeros(input_shape),
            trainable=True,
            name="OFFSET",
            dtype=tf.float32,
        )

    def call(self, x, epsilon=1e-5):
        mean, variance = tf.nn.moments(x, axes=[1, 2], keepdims=True)
        inv = tf.math.rsqrt(variance + epsilon)
        normalized = (x - mean) * inv
        return self.scale * normalized + self.offset


class Padding(layers.Layer):
    def __init__(self):
        super(Padding, self).__init__()

    def call(self, x, p=3):
        return tf.pad(x, [[0, 0], [p, p], [p, p], [0, 0]], "REFLECT")
class ResNetBlock(layers.Layer):
    def __init__(self):
        super(ResNetBlock, self).__init__()

    def call(self, x, dim, k_init, ks=3, s=1):
        p = (ks - 1) // 2
        # For ks = 3, p = 1
        y = layers.Lambda(padding, arguments={"p": p}, name="PADDING_1")(x)
        # After first padding, (batch * 130 * 130 * 3)
        y = layers.Conv2D(
            filters=dim,
            kernel_size=ks,
            strides=s,
            padding="valid",
            kernel_initializer=k_init,
            use_bias=False,
        )(y)
        y = InstanceNorm(y.shape[-1:])(y)
        y = layers.ReLU()(y)
        # After first conv2d, (batch * 128 * 128 * 3)
        y = layers.Lambda(padding, arguments={"p": p}, name="PADDING_2")(y)
        # After second padding, (batch * 130 * 130 * 3)
        y = layers.Conv2D(
            filters=dim,
            kernel_size=ks,
            strides=s,
            padding="valid",
            kernel_initializer=k_init,
            use_bias=False,
        )(y)
        y = InstanceNorm(y.shape[-1:])(y)
        y = layers.ReLU()(y + x)
        # After second conv2d, (batch * 128 * 128 * 3)
        return y
# def instance_norm(x, epsilon=1e-5):
#     scale = tf.Variable(
#         initial_value=np.random.normal(1.0, 0.02, x.shape[-1:]),
#         trainable=True,
#         name="SCALE",
#         dtype=tf.float32,
#     )
#     offset = tf.Variable(
#         initial_value=np.zeros(x.shape[-1:]),
#         trainable=True,
#         name="OFFSET",
#         dtype=tf.float32,
#     )
#     mean, variance = tf.nn.moments(x, axes=[1, 2], keepdims=True)
#     inv = tf.math.rsqrt(variance + epsilon)
#     normalized = (x - mean) * inv
#     return scale * normalized + offset
def build_discriminator(options, name="Discriminator"):
    initializer = tf.random_normal_initializer(0.0, 0.02)
    inputs = Input(shape=(options.time_step, options.pitch_range, options.output_nc))
    x = inputs

    x = layers.Conv2D(
        filters=options.df_dim,
        kernel_size=7,
        strides=2,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_1",
    )(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    # (batch * 32 * 42 * 64)

    x = layers.Conv2D(
        filters=options.df_dim * 4,
        kernel_size=7,
        strides=2,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_2",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    # (batch * 16 * 21 * 256)

    x = layers.Conv2D(
        filters=1,
        kernel_size=7,
        strides=1,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_3",
    )(x)
    # (batch * 16 * 21 * 1)

    outputs = x
    return Model(inputs=inputs, outputs=outputs, name=name)
def build_generator(options, name="Generator"):
    initializer = tf.random_normal_initializer(0.0, 0.02)
    inputs = Input(shape=(options.time_step, options.pitch_range, options.output_nc))
    x = inputs
    # (batch * 64 * 84 * 1)

    x = layers.Lambda(padding, name="PADDING_1")(x)
    # (batch * 70 * 90 * 1)
    x = layers.Conv2D(
        filters=options.gf_dim,
        kernel_size=7,
        strides=1,
        padding="valid",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_1",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.ReLU()(x)
    # (batch * 64 * 84 * 64)

    x = layers.Conv2D(
        filters=options.gf_dim * 2,
        kernel_size=3,
        strides=2,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_2",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.ReLU()(x)
    # (batch * 32 * 42 * 128)

    x = layers.Conv2D(
        filters=options.gf_dim * 4,
        kernel_size=3,
        strides=2,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_3",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.ReLU()(x)
    # (batch * 16 * 21 * 256)

    for i in range(10):
        # x = resnet_block(x, options.gf_dim * 4)
        x = ResNetBlock()(x, options.gf_dim * 4, initializer)
    # (batch * 16 * 21 * 256)

    x = layers.Conv2DTranspose(
        filters=options.gf_dim * 2,
        kernel_size=3,
        strides=2,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="DECONV2D_1",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.ReLU()(x)
    # (batch * 32 * 42 * 128)

    x = layers.Conv2DTranspose(
        filters=options.gf_dim,
        kernel_size=3,
        strides=2,
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="DECONV2D_2",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.ReLU()(x)
    # (batch * 64 * 84 * 64)

    x = layers.Lambda(padding, name="PADDING_2")(x)
    # After padding, (batch * 70 * 90 * 64)
    x = layers.Conv2D(
        filters=options.output_nc,
        kernel_size=7,
        strides=1,
        padding="valid",
        kernel_initializer=initializer,
        activation="sigmoid",
        use_bias=False,
        name="CONV2D_4",
    )(x)
    # (batch * 64 * 84 * 1)

    outputs = x
    return Model(inputs=inputs, outputs=outputs, name=name)
def build_discriminator_classifier(options, name="Discriminator_Classifier"):
    initializer = tf.random_normal_initializer(0.0, 0.02)
    inputs = Input(shape=(options.time_step, options.pitch_range, options.output_nc))
    x = inputs
    # (batch * 64 * 84 * 1)

    x = layers.Conv2D(
        filters=options.df_dim,
        kernel_size=[1, 12],
        strides=[1, 12],
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_1",
    )(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    # (batch * 64 * 7 * 64)

    x = layers.Conv2D(
        filters=options.df_dim * 2,
        kernel_size=[4, 1],
        strides=[4, 1],
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_2",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    # (batch * 16 * 7 * 128)

    x = layers.Conv2D(
        filters=options.df_dim * 4,
        kernel_size=[2, 1],
        strides=[2, 1],
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_3",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    # (batch * 8 * 7 * 256)

    x = layers.Conv2D(
        filters=options.df_dim * 8,
        kernel_size=[8, 1],
        strides=[8, 1],
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_4",
    )(x)
    x = InstanceNorm(x.shape[-1:])(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    # (batch * 1 * 7 * 512)

    x = layers.Conv2D(
        filters=2,
        kernel_size=[1, 7],
        strides=[1, 7],
        padding="same",
        kernel_initializer=initializer,
        use_bias=False,
        name="CONV2D_5",
    )(x)
    # (batch * 1 * 1 * 2)

    x = tf.reshape(x, [-1, 2])
    # (batch * 2)

    outputs = x
    return Model(inputs=inputs, outputs=outputs, name=name)
if __name__ == "__main__":
    OPTIONS = namedtuple(
        "OPTIONS",
        "batch_size "
        "time_step "
        "input_nc "
        "output_nc "
        "pitch_range "
        "gf_dim "
        "df_dim ",
    )
    options = OPTIONS._make((128, 64, 1, 1, 84, 64, 64))
    model = build_generator(options)
    print(model.summary())
| 25.498638 | 85 | 0.556315 | 1,179 | 9,358 | 4.272265 | 0.124682 | 0.037522 | 0.038118 | 0.044471 | 0.797499 | 0.771491 | 0.725829 | 0.633313 | 0.626166 | 0.559659 | 0 | 0.051493 | 0.298568 | 9,358 | 366 | 86 | 25.568306 | 0.715874 | 0.131545 | 0 | 0.666667 | 0 | 0 | 0.047136 | 0.002969 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054902 | false | 0 | 0.015686 | 0.023529 | 0.12549 | 0.003922 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0436d3f10986a9986bb88f66529f9631838fc465 | 295 | py | Python | class2/demo3.py | sanderslhc/python-learing | 2769f72c9b6de24d768175bed1aa9851d0469d19 | [
"MIT"
] | 1 | 2021-07-20T09:52:55.000Z | 2021-07-20T09:52:55.000Z | class2/demo3.py | sanderslhc/python-learning | 2769f72c9b6de24d768175bed1aa9851d0469d19 | [
"MIT"
] | null | null | null | class2/demo3.py | sanderslhc/python-learning | 2769f72c9b6de24d768175bed1aa9851d0469d19 | [
"MIT"
] | null | null | null |
# multi-branch structure
score = int(input('Enter a score: '))
# decide the grade
if score >= 90 and score <= 100:
    print('A')
elif score >= 80 and score <= 89:
    print('B')
elif score >= 70 and score <= 79:
    print('C')
elif score >= 60 and score <= 69:
    print('D')
elif score >= 0 and score <= 59:
    print('E')
else:
print('无效') | 19.666667 | 30 | 0.572881 | 49 | 295 | 3.44898 | 0.55102 | 0.236686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087719 | 0.227119 | 295 | 15 | 31 | 19.666667 | 0.653509 | 0.023729 | 0 | 0 | 0 | 0 | 0.043956 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.461538 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
0447af988a6ba77384680fe4e01e6a7e24dba0af | 2,253 | py | Python | src/ch3/generatefeedvector.py | amolnayak311/Programming-Collective-Intelligence | eaa55c3989a8d36e7b766fbaba267b4cbaedf5be | [
"Apache-2.0"
] | null | null | null | src/ch3/generatefeedvector.py | amolnayak311/Programming-Collective-Intelligence | eaa55c3989a8d36e7b766fbaba267b4cbaedf5be | [
"Apache-2.0"
] | null | null | null | src/ch3/generatefeedvector.py | amolnayak311/Programming-Collective-Intelligence | eaa55c3989a8d36e7b766fbaba267b4cbaedf5be | [
"Apache-2.0"
] | null | null | null |
'''
Created on Sep 4, 2015

@author: Amol
'''
from feedparser import parse
import re
from itertools import groupby


# Remove HTML and get the remaining words
def getwords(html):
    txt = re.compile(r'<[^>]+>').sub('', html)
    words = re.compile(r'[^A-Z^a-z]+').split(txt)
    return [word.lower() for word in words if word != '']
# Short implementation to count words; however, the performance is not
# something I have benchmarked, as this includes sort, groupby and
# len(list(group))
def getwordcounts(url):
    d = parse(url)
    print "Getting feed from URL %s" % url
    feed_map = d['feed']
    if 'title' in feed_map:
        res = [getwords(e['title'] + ' ' + e['summary' if 'summary' in e else 'description']) for e in d['entries']]
        word_count_map = dict((key, len(list(group))) for key, group in groupby(sorted([word for l in res for word in l])))
        return feed_map['title'], word_count_map
    else:
        print "Warn: Unable to access data from feed %s" % url
        # Special handling for some URLs not found or forbidden
        return 'NA', {}
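The sort-then-`groupby` counting above has a simpler (and typically faster, since it avoids the O(n log n) sort) equivalent using `collections.Counter`. A hedged sketch, run on a made-up sample rather than real feed data; it is not used by the script below:

```python
from collections import Counter

# `res` stands in for the list of tokenized entries built above.
res = [['python', 'code'], ['python', 'tips']]

# One Counter pass replaces sorted(...) + groupby + len(list(group)).
word_count_map = dict(Counter(word for l in res for word in l))
# -> {'python': 2, 'code': 1, 'tips': 1}
```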
# TODO: Clean implementation of the following code
# TODO: Experiment using stopword filters
apcount = {}
wordcount = {}
feedlist = 0
for feedurl in file('feedlist.txt'):
    title, wc = getwordcounts(feedurl)
    if title != 'NA' and len(wc) > 0:
        feedlist += 1
        wordcount[title] = wc
        for word, count in wc.items():
            apcount.setdefault(word, 0)
            if count > 1:
                apcount[word] += 1
print "Retrieved and parsed all words from the list of blogs"
wordlist = []
for w, bc in apcount.items():
    frac = float(bc) / float(feedlist)
    if frac > 0.1 and frac < 0.5:
        wordlist.append(w)

print "Writing to blogdata.txt"
out = file('blogdata.txt', 'w')
out.write('Blog')
for word in wordlist:
    out.write('\t%s' % word)
out.write('\n')
for blog, wc in wordcount.items():
    out.write(blog)
    for word in wordlist:
        word_count = wc[word] if word in wc else 0
        out.write("\t%d" % word_count)
    out.write("\n")
out.close()
print "Successfully written to blogdata.txt"
#Note that the output generated will not match the one given by the author. Some of the URLs dont even work now | 31.732394 | 127 | 0.637816 | 341 | 2,253 | 4.187683 | 0.407625 | 0.033613 | 0.02521 | 0.021008 | 0.040616 | 0.040616 | 0.040616 | 0 | 0 | 0 | 0 | 0.009308 | 0.237017 | 2,253 | 71 | 128 | 31.732394 | 0.821408 | 0.195739 | 0 | 0 | 0 | 0 | 0.165247 | 0 | 0 | 0 | 0 | 0.014085 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.104167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
044bb9506b68dc9263b143fd71b62d2aa484539b | 10,954 | py | Python | 0.17/_downloads/8c453dbbabf4b225611c41642ea9b1d5/plot_morph_stc.py | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.17/_downloads/8c453dbbabf4b225611c41642ea9b1d5/plot_morph_stc.py | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null | 0.17/_downloads/8c453dbbabf4b225611c41642ea9b1d5/plot_morph_stc.py | drammock/mne-tools.github.io | 5d3a104d174255644d8d5335f58036e32695e85d | [
"BSD-3-Clause"
] | null | null | null |
# -*- coding: utf-8 -*-
r"""
================================================================
Morphing source estimates: Moving data from one brain to another
================================================================

Morphing refers to the operation of transferring
:ref:`source estimates <sphx_glr_auto_tutorials_plot_object_source_estimate.py>`
from one anatomy to another. It is commonly referred to as realignment in the
fMRI literature. This operation is necessary for group studies, as one then
needs the data in a common space.

In this tutorial we will morph different kinds of source estimation results
between individual subject spaces using the :class:`mne.SourceMorph` object.
We will use precomputed data and morph surface and volume source estimates to a
reference anatomy. The common space of choice will be FreeSurfer's 'fsaverage'.
See :ref:`sphx_glr_auto_tutorials_plot_background_freesurfer.py` for more
information. The method used for cortical surface data is based
on spherical registration [1]_ and Symmetric Diffeomorphic Registration (SDR)
for volumetric data [2]_.

Furthermore we will convert our volume source estimate into a NIfTI image using
:meth:`morph.apply(..., output='nifti1') <mne.SourceMorph.apply>`.

In order to morph :class:`labels <mne.Label>` between subjects, allowing the
definition of labels in one brain and transforming them into anatomically
analogous labels in another, use :func:`mne.Label.morph`.

.. contents::
    :local:
Why morphing?
=============

Modern neuroimaging techniques, such as source reconstruction or fMRI analyses,
make use of advanced mathematical models and hardware to map brain activity
patterns into a subject-specific anatomical brain space.

This enables the study of spatio-temporal brain activity. The representation of
spatio-temporal brain data is often mapped onto the anatomical brain structure
to relate functional and anatomical maps. Thereby activity patterns are
overlaid with anatomical locations that supposedly produced the activity.
Anatomical MR images are often used as such or are transformed into inflated
surface representations to serve as "canvas" for the visualization.

In order to compute group-level statistics, data representations across
subjects must be morphed to a common frame, such that anatomically and
functionally similar structures are represented at the same spatial location
for *all subjects equally*.

Since brains vary, morphing comes into play to tell us how the data
produced by subject A would be represented on the brain of subject B.

See also this :ref:`tutorial on surface source estimation
<sphx_glr_auto_tutorials_plot_mne_solutions.py>`
or this :ref:`example on volumetric source estimation
<sphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py>`.
Morphing **volume** source estimates
====================================

A volumetric source estimate represents functional data in a volumetric 3D
space. The difference between a volumetric representation and a "mesh"
(commonly referred to as a "3D model") is that the volume is "filled" while the
mesh is "empty". Thus it is not only necessary to morph the points of the
outer hull, but also the "content" of the volume.

In MNE-Python, volumetric source estimates are represented as
:class:`mne.VolSourceEstimate`. The morph was successful if functional data of
Subject A overlaps with anatomical data of Subject B, in the same way it does
for Subject A.

Setting up :class:`mne.SourceMorph` for :class:`mne.VolSourceEstimate`
----------------------------------------------------------------------

Morphing volumetric data from subject A to subject B requires a non-linear
registration step between the anatomical T1 image of subject A and
the anatomical T1 image of subject B.

MNE-Python uses the Symmetric Diffeomorphic Registration [2]_ as implemented
in dipy_ [3]_ (see this
`tutorial <http://nipy.org/dipy/examples_built/syn_registration_3d.html>`_
from dipy_ for more details).

:class:`mne.SourceMorph` uses segmented anatomical MR images computed
using :ref:`FreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>`
to compute the transformations. In order to tell SourceMorph which MRIs to use,
``subject_from`` and ``subject_to`` need to be defined as the names of the
respective folders in FreeSurfer's home directory.

See :ref:`sphx_glr_auto_examples_inverse_plot_morph_volume_stc.py` for
usage and for more details on:

    - How to create a SourceMorph object for volumetric data
    - Apply it to VolSourceEstimate
    - Get the output in NIfTI format
    - Save a SourceMorph object to disk
Morphing **surface** source estimates
=====================================

A surface source estimate represents data relative to a 3-dimensional mesh of
the cortical surface computed using FreeSurfer. This mesh is defined by
its vertices. If we want to morph our data from one brain to another, then
this translates to finding the correct transformation to transform each
vertex from Subject A into a corresponding vertex of Subject B. Under the hood
:ref:`FreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>`
uses spherical representations to compute the morph, which relies on so-called
*morphing maps*.

The morphing maps
-----------------

The MNE software accomplishes morphing with help of morphing
maps which can be either computed on demand or precomputed.
The morphing is performed with help
of the registered spherical surfaces (``lh.sphere.reg`` and ``rh.sphere.reg``)
which must be produced in FreeSurfer.

A morphing map is a linear mapping from cortical surface values
in subject A (:math:`x^{(A)}`) to those in another
subject B (:math:`x^{(B)}`)

.. math:: x^{(B)} = M^{(AB)} x^{(A)}\ ,

where :math:`M^{(AB)}` is a sparse matrix
with at most three nonzero elements on each row. These elements
are determined as follows. First, using the aligned spherical surfaces,
for each vertex :math:`x_j^{(B)}`, find the triangle :math:`T_j^{(A)}` on the
spherical surface of subject A which contains the location :math:`x_j^{(B)}`.
Next, find the numbers of the vertices of this triangle and set
the corresponding elements on the *j* th row of :math:`M^{(AB)}` so that
:math:`x_j^{(B)}` will be a linear interpolation between the triangle vertex
values reflecting the location :math:`x_j^{(B)}` within the triangle
:math:`T_j^{(A)}`.
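As a toy numerical check of this construction (not MNE code; the 2D coordinates and vertex values below are made up for illustration), the at-most-three nonzero entries on row :math:`j` of :math:`M^{(AB)}` are exactly the barycentric weights of :math:`x_j^{(B)}` within :math:`T_j^{(A)}`:

```python
import numpy as np

# Vertices of one triangle T_j^(A) (locally flattened to 2D) and the
# location x_j^(B) of subject B's vertex j inside it -- made-up numbers.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_b = np.array([0.2, 0.3])

# Solve for barycentric weights w with w @ tri == x_b and w.sum() == 1;
# these weights are the nonzero entries of row j of M^(AB).
A = np.vstack([tri.T, np.ones(3)])
w = np.linalg.solve(A, np.append(x_b, 1.0))

# Interpolating subject A's vertex values yields subject B's value at x_b.
vals_a = np.array([1.0, 3.0, 5.0])  # values at the three triangle vertices
val_b = w @ vals_a                  # approximately 2.6 for these numbers
```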
It follows from the above definition that in general

.. math:: M^{(AB)} \neq (M^{(BA)})^{-1}\ ,

*i.e.*,

.. math:: x^{(A)} \neq M^{(BA)} M^{(AB)} x^{(A)}\ ,

even if

.. math:: x^{(A)} \approx M^{(BA)} M^{(AB)} x^{(A)}\ ,

*i.e.*, the mapping is *almost* a bijection.

Morphing maps can be computed on the fly or read with
:func:`mne.read_morph_map`. Precomputed maps are
located in ``$SUBJECTS_DIR/morph-maps``.

The names of the files in ``$SUBJECTS_DIR/morph-maps`` are
of the form:

    <*A*> - <*B*> -``morph.fif`` ,

where <*A*> and <*B*> are names of subjects. These files contain the maps
for both hemispheres, and in both directions, *i.e.*, both :math:`M^{(AB)}`
and :math:`M^{(BA)}`, as defined above. Thus the files
<*A*> - <*B*> -``morph.fif`` or <*B*> - <*A*> -``morph.fif`` are
functionally equivalent. The name of the file produced depends on the role
of <*A*> and <*B*> in the analysis.
About smoothing
---------------

The current estimates are normally defined only on a decimated
grid which is a sparse subset of the vertices in the triangular
tessellation of the cortical surface. Therefore, any sparse set
of values is distributed to neighboring vertices to make the visualized
results easily understandable. This procedure has been traditionally
called smoothing, but a more appropriate name
might be smudging or blurring in
accordance with similar operations in image processing programs.

In MNE software terms, smoothing of the vertex data is an
iterative procedure, which produces a blurred image :math:`x^{(N)}` from
the original sparse image :math:`x^{(0)}` by applying
in each iteration step a sparse blurring matrix:

.. math:: x^{(p)} = S^{(p)} x^{(p - 1)}\ .

On each row :math:`j` of the matrix :math:`S^{(p)}` there
are :math:`N_j^{(p - 1)}` nonzero entries whose values
equal :math:`1/N_j^{(p - 1)}`. Here :math:`N_j^{(p - 1)}` is
the number of immediate neighbors of vertex :math:`j` which
had non-zero values at iteration step :math:`p - 1`.
Matrix :math:`S^{(p)}` thus assigns the average
of the non-zero neighbors as the new value for vertex :math:`j`.

One important feature of this procedure is that it tends to preserve
the amplitudes while blurring the surface image.

Once the indices of non-zero vertices in :math:`x^{(0)}` and
the topology of the triangulation are fixed, the matrices :math:`S^{(p)}` are
fixed and independent of the data. Therefore, it would be in principle
possible to construct a composite blurring matrix

.. math:: S^{(N)} = \prod_{p = 1}^N {S^{(p)}}\ .

However, it turns out to be computationally more effective
to do the blurring with an iteration. The above formula for :math:`S^{(N)}`
also shows that the smudging (smoothing) operation is linear.
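A minimal, self-contained sketch of this iteration on a toy four-vertex path graph (the neighbor structure and starting values are made up; the sketch also treats each vertex as one of its own neighbors, which is an assumption of the sketch rather than part of the formula above):

```python
import numpy as np

def smooth_step(x, neighbors):
    # One application of S^(p): for each vertex j, average the neighbors
    # of j (including j itself here) that were non-zero at step p - 1.
    new = np.zeros_like(x)
    for j, nbrs in neighbors.items():
        active = [n for n in [j] + nbrs if x[n] != 0]
        if active:
            new[j] = sum(x[n] for n in active) / len(active)
    return new

# Path graph 0 - 1 - 2 - 3 with a single active vertex in x^(0).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x0 = np.array([0.0, 2.0, 0.0, 0.0])
x1 = smooth_step(x0, neighbors)  # activity spreads to vertices 0 and 2
x2 = smooth_step(x1, neighbors)  # ...and further to vertex 3
```

Note how the averaging keeps the amplitude at 2.0 while the non-zero support grows, matching the "preserve the amplitudes while blurring" property described above.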
From theory to practice
-----------------------

In MNE-Python, surface source estimates are represented as
:class:`mne.SourceEstimate` or :class:`mne.VectorSourceEstimate`. Those can
be used together with :class:`mne.SourceSpaces` or without.

The morph was successful if functional data of Subject A overlaps with
anatomical surface data of Subject B, in the same way it does for Subject A.

See :ref:`sphx_glr_auto_examples_inverse_plot_morph_surface_stc.py` for
usage and for more details:

    - How to create a :class:`mne.SourceMorph` object using
      :func:`mne.compute_source_morph` for surface data
    - Apply it to :class:`mne.SourceEstimate` or
      :class:`mne.VectorSourceEstimate`
    - Save a :class:`mne.SourceMorph` object to disk

Please see also Gramfort *et al.* (2013) [4]_.
References
==========

.. [1] Greve D. N., Van der Haegen L., Cai Q., Stufflebeam S., Sabuncu M.
       R., Fischl B., Brysbaert M.
       A Surface-based Analysis of Language Lateralization and Cortical
       Asymmetry. Journal of Cognitive Neuroscience 25(9), 1477-1492, 2013.

.. [2] Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2009).
       Symmetric Diffeomorphic Image Registration with Cross-Correlation:
       Evaluating Automated Labeling of Elderly and Neurodegenerative
       Brain, 12(1), 26-41.

.. [3] Garyfallidis E, Brett M, Amirbekian B, Rokem A, van der Walt S,
       Descoteaux M, Nimmo-Smith I and Dipy Contributors (2014). DIPY, a
       library for the analysis of diffusion MRI data. Frontiers in
       Neuroinformatics, vol.8, no.8.

.. [4] Gramfort A., Luessi M., Larson E., Engemann D. A., Strohmeier D.,
       Brodbeck C., Goj R., Jas. M., Brooks T., Parkkonen L. & Hämäläinen, M.
       (2013). MEG and EEG data analysis with MNE-Python. Frontiers in
       neuroscience, 7, 267.

.. _dipy: http://nipy.org/dipy/
| 43.125984 | 80 | 0.726675 | 1,673 | 10,954 | 4.705918 | 0.304244 | 0.010161 | 0.011177 | 0.012702 | 0.142512 | 0.117363 | 0.088657 | 0.059698 | 0.053855 | 0.04344 | 0 | 0.00799 | 0.154464 | 10,954 | 253 | 81 | 43.296443 | 0.842043 | 0.998722 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0453131e08ad9df3feb4129ffbeb4612445ebb21 | 763 | py | Python | analysis/dbase/tracking/train.py | BrancoLab/FC_analysis | 7124a7d998275bce6f7a18c264399c7dabfd430b | [
"MIT"
] | 1 | 2018-08-20T14:47:09.000Z | 2018-08-20T14:47:09.000Z | analysis/dbase/tracking/train.py | BrancoLab/FC_analysis | 7124a7d998275bce6f7a18c264399c7dabfd430b | [
"MIT"
] | null | null | null | analysis/dbase/tracking/train.py | BrancoLab/FC_analysis | 7124a7d998275bce6f7a18c264399c7dabfd430b | [
"MIT"
] | 1 | 2018-09-24T15:58:57.000Z | 2018-09-24T15:58:57.000Z | import deeplabcut as dlc
import os
from fcutils.file_io.utils import listdir
# from fcutils.video.utils import trim_clip
config_file = 'D:\\Dropbox (UCL - SWC)\\Rotation_vte\\Locomotion\\dlc\\locomotion-Federico\\config.yaml'
dlc.train_network(config_file)
# fld = 'D:\\Dropbox (UCL - SWC)\\Rotation_vte\\Locomotion\\dlc'
# vids = [os.path.join(fld, '200203_CA8493_video_trim.mp4'), os.path.join(fld, '200204_CA8491_video_trim.mp4'), os.path.join(fld, '200204_CA8494_video_trim.mp4')]
# dlc.extract_outlier_frames(config_file, vids, epsilon=40)
# dlc.merge_datasets(config_file)
# vids = [f for f in listdir(fld) if f.endswith('.mp4')]
# for vid in vids:
# savepath = vid.split(".")[0]+'_trim.mp4'
# trim_clip(vid, savepath, start=0.25, stop=0.35) | 34.681818 | 162 | 0.728702 | 121 | 763 | 4.404959 | 0.46281 | 0.075047 | 0.056285 | 0.073171 | 0.258912 | 0.258912 | 0.258912 | 0.258912 | 0 | 0 | 0 | 0.064706 | 0.108781 | 763 | 22 | 163 | 34.681818 | 0.719118 | 0.686763 | 0 | 0 | 0 | 0.2 | 0.382609 | 0.3 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f08b09f830197b622c222148be38b3159d8bff5d | 6,346 | py | Python | src/experimental_results/outdoor test/path_planning_analysis.py | NASLab/GroundROS | 6673db009ffcff59500eb1e3d5873111282e7749 | [
"MIT"
] | 1 | 2017-12-17T11:11:55.000Z | 2017-12-17T11:11:55.000Z | src/experimental_results/outdoor test/path_planning_analysis.py | NASLab/GroundROS | 6673db009ffcff59500eb1e3d5873111282e7749 | [
"MIT"
] | 2 | 2015-10-02T19:02:06.000Z | 2015-10-02T19:02:36.000Z | src/experimental_results/outdoor test/path_planning_analysis.py | NASLab/GroundROS | 6673db009ffcff59500eb1e3d5873111282e7749 | [
"MIT"
] | null | null | null | # python experimental tests for Husky
from numpy import sin, cos, pi, load
import matplotlib.pyplot as plt
from time import sleep
yaw_bound = 2 * pi / 180
yaw_calibrate = pi / 180 * (0)
x_offset_calibrate = 0
y_offset_calibrate = -.08
f0 = plt.figure()
ax0 = f0.add_subplot(111)
ax1 = f0.add_subplot(111)
env_data = load('loginfo2.npy')[1:]
x = [[]] * len(env_data)
y = [[]] * len(env_data)
m=2
# print len(env_data)
for i in range(m, len(env_data) - m):
if len(env_data[i]) > 0:
x[i] = env_data[i][0]
y[i] = env_data[i][1]
        print(i, x[i], y[i])
yaw = env_data[i][2]
# filter some of the readings; comment to see the effect
if len(env_data[i + m]) == 0 or abs(yaw - env_data[i - m][2]) > yaw_bound or abs(yaw - env_data[i + m][2]) > yaw_bound:
continue
readings = env_data[i][3]
readings_x = [[]] * len(readings)
readings_y = [[]] * len(readings)
k = 0
for j in range(len(readings)):
# lidar readings in lidar frame
x_temp = readings[j][0] * cos(-readings[j][1])
y_temp = readings[j][0] * sin(-readings[j][1])
# lidar readings in robot frame
x_temp2 = x_temp * \
cos(yaw_calibrate) - y_temp * \
sin(yaw_calibrate) + x_offset_calibrate
y_temp2 = y_temp * \
cos(yaw_calibrate) + x_temp * \
sin(yaw_calibrate) + y_offset_calibrate
# lidar readings in global frame
readings_x[k] = x_temp2 * cos(yaw) - y_temp2 * sin(yaw) + x[i]
readings_y[k] = y_temp2 * cos(yaw) + x_temp2 * sin(yaw) + y[i]
k += 1
ax0.plot(readings_x, readings_y, 'r.')
ax0.plot(x[i],y[i], 'go', lw=3)
ax0.plot(0,-10,'ko')
ax0.set_xlim([-50, 50])
ax0.set_ylim([-50, 50])
# ax0.axis('equal')
plt.draw()
plt.pause(.0001)
ax0.clear()
ax0.plot([], [], 'r.', label='Lidar Reading')
# env_data = load('planner_of_agent_0.npy')[1:]
# x = [[]] * len(env_data)
# y = [[]] * len(env_data)
# for i in range(1, len(env_data) - 1):
# if len(env_data[i]) > 0:
# x[i] = env_data[i][0]
# y[i] = env_data[i][1]
ax0.plot([value for value in x if value],
[value for value in y if value], 'go', lw=3, label='Robot\'s Trajectory')
# env_data = load('planner_of_agent_1.npy')[1:]
# x = [[]] * len(env_data)
# y = [[]] * len(env_data)
# for i in range(1, len(env_data) - 1):
# if len(env_data[i]) > 0:
# x[i] = env_data[i][0]
# y[i] = env_data[i][1]
# ax0.plot([value for value in x if value],
# [value for value in y if value], 'bo', lw=3, label='Robot\'s Trajectory')
# ax0.legend()
# ax0.axis('equal')
# plt.draw()
# plt.pause(.1)
# raw_input("<Hit Enter To Close>")
plt.close(f0)
# yaw_bound = 3 * pi / 180
# yaw_calibrate = pi / 180 * (0)
# x_offset_calibrate = .23
# y_offset_calibrate = -.08
# data = np.load('pos.npy')[1:]
# print len(data)
# error_long = data[:, 0]
# error_lat = data[:, 1]
# ref_x = [value for value in data[:, 2]]
# print ref_x[:30]
# ref_y = [value for value in data[:, 3]]
# pos_x = [value for value in data[:, 4]][0::1]
# pos_y = [value for value in data[:, 5]][0::1]
# pos_theta = data[:, 6]
# print data
# time = data[:, 7] - data[0, 7]
# vel = data[:, 8]
# plt.plot(ref_x, ref_y, 'ro')
# plt.gca().set_aspect('equal', adjustable='box')
# f0 = plt.figure(1, figsize=(9, 9))
# ax0 = f0.add_subplot(111)
# ax0.plot(ref_x, ref_y, '--', lw=3, label='Reference Trajectory')
# ax0.plot(pos_x[0], pos_y[0], 'ms', markersize=10, label='Start Point')
# ax0.plot(pos_x, pos_y, 'go', label='Robot Trajectory')
# env_data = np.load('planner_of_agent_0.npy')[1:]
# x = [[]] * len(env_data)
# y = [[]] * len(env_data)
# print len(env_data)
# for i in range(1, len(env_data) - 1):
# if len(env_data[i]) > 0:
# x[i] = env_data[i][0]
# y[i] = env_data[i][1]
# yaw = env_data[i][2]
# filter some of the readings; comment to see the effect
# if len(env_data[i + 1]) == 0 or abs(yaw - env_data[i - 1][2]) > yaw_bound or abs(yaw - env_data[i + 1][2]) > yaw_bound:
# continue
# readings = env_data[i][3]
# readings_x = [[]] * len(readings)
# readings_y = [[]] * len(readings)
# k = 0
# for j in range(len(readings)):
# lidar readings in lidar frame
# x_temp = readings[j][0] * cos(-readings[j][1])
# y_temp = readings[j][0] * sin(-readings[j][1])
# lidar readings in robot frame
# x_temp2 = x_temp * \
# cos(yaw_calibrate) - y_temp * \
# sin(yaw_calibrate) + x_offset_calibrate
# y_temp2 = y_temp * \
# cos(yaw_calibrate) + x_temp * \
# sin(yaw_calibrate) + y_offset_calibrate
# lidar readings in global frame
# readings_x[k] = x_temp2 * cos(yaw) - y_temp2 * sin(yaw) + x[i]
# readings_y[k] = y_temp2 * cos(yaw) + x_temp2 * sin(yaw) + y[i]
# k += 1
# ax0.plot(readings_x, readings_y, 'r.')
# for i in range(len(env_data)):
# if len(env_data[i])>0:
# x[i] = env_data[i][0]
# y[i] = env_data[i][1]
# yaw = env_data[i][2]
# print yaw
# readings = env_data[i][3]
# readings_x = [[]]*len(readings)
# readings_y = [[]]*len(readings)
# print len(readings),len(readings_x)
# k=0
# for j in range(len(readings)):
# if i<200:
# print k,j,len(readings_x)
# readings_x[k] = x[i] + readings[j][0]*sin(pi/2-yaw+readings[j][1])
# readings_y[k] = y[i] + readings[j][0]*cos(pi/2-yaw+readings[j][1])
# k+=1
# ax0.plot(readings_x, readings_y,'r.')
# ax0.plot([], [], 'r.', label='Lidar Reading')
# print x
# ax0.plot([value for value in x if value],
# [value for value in y if value], 'go', lw=3,label='Robot\'s Trajectory')
# env_y = np.load('env.npy')[1]
# env_x = [value for value in env_x if value]
# env_y = [value for value in env_y if value]
# ax0.plot(env_x, env_y, 'r.', )
# ax0.plot(-.5, 2.7, 'cs', markersize=10, label='Destination')
# ax0.legend()
# ax0.axis('equal')
# ax0.set_xlim(-3.5, 3.5)
# ax0.set_ylim(-3, 4)
# ax0.set_xlabel('X (m)')
# ax0.set_ylabel('Y (m)')
# ax0.axis('equal')
# plt.tight_layout()
# plt.draw()
# plt.pause(.1) # <-------
# raw_input("<Hit Enter To Close>")
# plt.close(f0)
| 30.956098 | 129 | 0.560668 | 1,051 | 6,346 | 3.226451 | 0.125595 | 0.094957 | 0.063698 | 0.053082 | 0.728104 | 0.690357 | 0.629018 | 0.616927 | 0.600708 | 0.600708 | 0 | 0.043806 | 0.248188 | 6,346 | 204 | 130 | 31.107843 | 0.666946 | 0.656161 | 0 | 0 | 0 | 0 | 0.019932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.06 | null | null | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f08ba58aa1ee5462b9589a7cefae921b8d9e4b35 | 477 | py | Python | titan/react_pkg/prettier/__init__.py | mnieber/moonleap | 2c951565c32f2e733a063b4a4f7b3d917ef1ec07 | [
"MIT"
] | null | null | null | titan/react_pkg/prettier/__init__.py | mnieber/moonleap | 2c951565c32f2e733a063b4a4f7b3d917ef1ec07 | [
"MIT"
] | null | null | null | titan/react_pkg/prettier/__init__.py | mnieber/moonleap | 2c951565c32f2e733a063b4a4f7b3d917ef1ec07 | [
"MIT"
] | null | null | null | from pathlib import Path
from moonleap import add, create
from titan.project_pkg.service import Tool
from titan.react_pkg.nodepackage import load_node_package_config
class Prettier(Tool):
pass
base_tags = [("prettier", ["tool"])]
@create("prettier")
def create_prettier(term, block):
prettier = Prettier(name="prettier")
prettier.add_template_dir(Path(__file__).parent / "templates")
add(prettier, load_node_package_config(__file__))
return prettier
| 22.714286 | 66 | 0.761006 | 62 | 477 | 5.532258 | 0.532258 | 0.052478 | 0.087464 | 0.122449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136268 | 477 | 20 | 67 | 23.85 | 0.832524 | 0 | 0 | 0 | 0 | 0 | 0.077568 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0.076923 | 0.307692 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
f0935e4564c845dcf620246319af92237bea563f | 167 | py | Python | calvestbr/__init__.py | IsaacHiguchi/calvestbr | ebf702e9e67299c822a6cc21cad60b247446fcfa | [
"MIT"
] | null | null | null | calvestbr/__init__.py | IsaacHiguchi/calvestbr | ebf702e9e67299c822a6cc21cad60b247446fcfa | [
"MIT"
] | null | null | null | calvestbr/__init__.py | IsaacHiguchi/calvestbr | ebf702e9e67299c822a6cc21cad60b247446fcfa | [
"MIT"
] | null | null | null | """Top-level package for Calendário dos Vestibulares do Brasil."""
__author__ = """Ana_Isaac_Marina"""
__email__ = 'marinalara170303@gmail.com'
__version__ = '0.0.1'
| 27.833333 | 66 | 0.742515 | 21 | 167 | 5.238095 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060403 | 0.107784 | 167 | 5 | 67 | 33.4 | 0.677852 | 0.359281 | 0 | 0 | 0 | 0 | 0.465347 | 0.257426 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f0ad10d1d55fd2bdc2c491c48aba252ed553fa31 | 3,568 | py | Python | srblib/__init__.py | srbcheema1/srblib | 26146cb0d5586548da5f97a9fe3af355cd97f3ca | [
"MIT"
] | 2 | 2019-04-03T00:51:54.000Z | 2019-05-16T10:33:44.000Z | srblib/__init__.py | srbcheema1/srblib | 26146cb0d5586548da5f97a9fe3af355cd97f3ca | [
"MIT"
] | null | null | null | srblib/__init__.py | srbcheema1/srblib | 26146cb0d5586548da5f97a9fe3af355cd97f3ca | [
"MIT"
] | null | null | null | __version__ = '0.1.6'
__mod_name__ = 'srblib'
from .colour import Colour # A class with color names and a static print function which prints coloured output to stderr
from .debugger import debug # a boolean whose value can be changed in ~/.config/srblib/debug.json
from .debugger import on_appveyor # a boolean value which is true if code is running on appveyor
from .debugger import on_ci # a boolean value which is true if code is running on CI
from .debugger import on_srbpc # a boolean value which is true if it is my PC i.e. srb-pc
from .debugger import on_travis # a boolean value which is true if code is running on travis
from .dependency import install_arg_complete # A function to append line of argcomplete in ~/.bashrc
from .dependency import install_dependencies # A function that takes a special data-template to install dependencies
from .dependency import install_dependencies_pkg # similar but based on package-managers (Recommended)
from .dependency import is_installed # checks if the following application is installed on machine or not
from .dependency import remove_dependencies # Opposite of install_dependencies
from .dependency import remove_dependencies_pkg # Opposite of install_dependencies_pkg
from .email import email # a function
from .email import Email # a class to send email
from .files import file_extension # returns the extension of a file from filepath, may return '' if no ext
from .files import file_name # returns filename from a filepath
from .files import remove # removes a path recursively. it deletes all files and folders under that path
from .files import verify_file # verify that a file exists. if not it will create one. also creates parents if needed
from .files import verify_folder # verify that a folder exists. creates one if not there. also creates parents if needed
from .file_importer import Module # a class to import modules
# one can't declare more attributes in a frozen class
from .frozen import FrozenClass # A class to be inherited to make a class frozen. i.e. no more attributes can be added.
from .path import abs_path # returns absolute path of a path given. works on windows as well as linux.
from .path import is_child_of # returns if a given path is child(direct/indirect) of the second path given.
from .path import parent_dir # returns Nth parent of a path. default it returns 1st parent
from .path import relative_path # returns relative path if given absolute path
from .requests import debug_res # print debug output of response.
from .srb_bank import SrbBank # A class to store things for later use of a program. can act as a database
from .srb_json import SrbJson # A class to use json file more easily
from .srb_hash import path_hash # get hash of full path (recursively)
from .srb_hash import str_hash # get hash of string
from .soup import Soup # A class to make scraping easier
from .system import get_os_name # returns OS name. values are windows, linux or mac
from .system import os_name # value of get_os_name
from .system import on_windows # True if system is windows OS
from .tabular import Tabular # A class to process tabular data
from .util import line_adder # append a line if not present in a given file
from .util import show_dependency_error_and_exit # display missing dependency error and exit
from .util import similarity # returns percentage of similarity of two strings
from .util import top # first element of list or set or dict(first key)
from .util import dump_output # variable containing string value ` > /dev/null 2>&1 ` or ` > nul 2>&1 `.
| 60.474576 | 120 | 0.7912 | 597 | 3,568 | 4.638191 | 0.340034 | 0.019502 | 0.020224 | 0.028891 | 0.156013 | 0.071506 | 0.049837 | 0.049837 | 0.029614 | 0.029614 | 0 | 0.002695 | 0.168161 | 3,568 | 58 | 121 | 61.517241 | 0.930256 | 0.597534 | 0 | 0 | 0 | 0 | 0.007891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.952381 | 0 | 0.952381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f0c3581446d85c97a243b1adbc5978391a877b8e | 269 | py | Python | src/app/buy_btc.py | simondorfman/hello_cb_pro | cdb96ee1390d22753630e24dac9bfdc5e47e788d | [
"MIT"
] | null | null | null | src/app/buy_btc.py | simondorfman/hello_cb_pro | cdb96ee1390d22753630e24dac9bfdc5e47e788d | [
"MIT"
] | null | null | null | src/app/buy_btc.py | simondorfman/hello_cb_pro | cdb96ee1390d22753630e24dac9bfdc5e47e788d | [
"MIT"
] | null | null | null | import os
from cbt.private_client import PrivateClient
from cbt.auth import get_new_private_connection
if __name__ == "__main__":
usd = os.getenv("USD_BUY")
auth = get_new_private_connection()
client = PrivateClient(auth)
client.market_buy_btc(usd)
| 20.692308 | 47 | 0.754647 | 37 | 269 | 5 | 0.513514 | 0.075676 | 0.140541 | 0.248649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163569 | 269 | 12 | 48 | 22.416667 | 0.822222 | 0 | 0 | 0 | 0 | 0 | 0.055762 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f0ca3d609391dc32aa46d1c4b4ec4ee3f9a34e0a | 448 | py | Python | cbandits/core/bayesian_nn.py | AlliedToasters/dev_bandits | 7e3655bd5a91854951a52d0f037ee06aefb2922c | [
"MIT"
] | null | null | null | cbandits/core/bayesian_nn.py | AlliedToasters/dev_bandits | 7e3655bd5a91854951a52d0f037ee06aefb2922c | [
"MIT"
] | null | null | null | cbandits/core/bayesian_nn.py | AlliedToasters/dev_bandits | 7e3655bd5a91854951a52d0f037ee06aefb2922c | [
"MIT"
] | null | null | null | """Define the abstract class for Bayesian Neural Networks."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
class BayesianNN(object):
"""A Bayesian neural network keeps a distribution over neural nets."""
def __init__(self, optimizer):
pass
def build_model(self):
pass
def train(self, data):
pass
def sample(self, steps):
pass | 21.333333 | 74 | 0.694196 | 55 | 448 | 5.309091 | 0.618182 | 0.10274 | 0.164384 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.234375 | 448 | 21 | 75 | 21.333333 | 0.851312 | 0.267857 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.333333 | 0.25 | 0 | 0.666667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f0d87d7b32c42472be81003f06fe5e9c4bf5e20f | 1,781 | py | Python | e_secretary/migrations/0010_auto_20190329_2219.py | tsitsikas96/e-secretary | bdda95e17093da730af33acf4b15ed03331c7643 | [
"MIT"
] | null | null | null | e_secretary/migrations/0010_auto_20190329_2219.py | tsitsikas96/e-secretary | bdda95e17093da730af33acf4b15ed03331c7643 | [
"MIT"
] | null | null | null | e_secretary/migrations/0010_auto_20190329_2219.py | tsitsikas96/e-secretary | bdda95e17093da730af33acf4b15ed03331c7643 | [
"MIT"
] | 1 | 2020-03-08T16:12:34.000Z | 2020-03-08T16:12:34.000Z | # Generated by Django 2.1.7 on 2019-03-29 20:19
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('e_secretary', '0009_profile'),
]
operations = [
migrations.AlterModelOptions(
name='professor',
options={'ordering': ['title']},
),
migrations.RemoveField(
model_name='professor',
name='email',
),
migrations.RemoveField(
model_name='professor',
name='fname',
),
migrations.RemoveField(
model_name='professor',
name='lname',
),
migrations.RemoveField(
model_name='student',
name='email',
),
migrations.RemoveField(
model_name='student',
name='fname',
),
migrations.RemoveField(
model_name='student',
name='lname',
),
migrations.AddField(
model_name='profile',
name='email',
field=models.EmailField(default='test@email.com', max_length=254, null=True),
),
migrations.AddField(
model_name='profile',
name='fname',
field=models.CharField(default='First', help_text='First Name', max_length=50),
),
migrations.AddField(
model_name='profile',
name='lname',
field=models.CharField(default='Last', help_text='Last Name', max_length=50),
),
migrations.AlterField(
model_name='profile',
name='grammateia',
field=models.BooleanField(default=False),
),
migrations.DeleteModel(
name='Grammateia',
),
]
| 27.4 | 91 | 0.522179 | 150 | 1,781 | 6.086667 | 0.393333 | 0.098576 | 0.170865 | 0.197152 | 0.466594 | 0.422782 | 0 | 0 | 0 | 0 | 0 | 0.022589 | 0.353734 | 1,781 | 64 | 92 | 27.828125 | 0.770634 | 0.025267 | 0 | 0.689655 | 1 | 0 | 0.131488 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017241 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f0db46fd26b0c7315a9b0cf93b8d1fbaf8362e97 | 2,743 | py | Python | gender_converter/logger_aegender.py | roebel/DeepGC | 03eee63ff9d9f4daa34435ddca530b262f097ea6 | [
"MIT"
] | null | null | null | gender_converter/logger_aegender.py | roebel/DeepGC | 03eee63ff9d9f4daa34435ddca530b262f097ea6 | [
"MIT"
] | 1 | 2021-08-11T06:41:56.000Z | 2021-08-11T06:41:56.000Z | gender_converter/logger_aegender.py | roebel/DeepGC | 03eee63ff9d9f4daa34435ddca530b262f097ea6 | [
"MIT"
] | null | null | null | import random
from plotting_utils import plot_spectrogram_to_numpy, image_for_logger, plot_to_image
import numpy as np
import tensorflow as tf
class GParrotLogger():
def __init__(self, logdir, ali_path='ali'):
# super(ParrotLogger, self).__init__(logdir)
self.writer = tf.summary.create_file_writer(logdir)
def log_training(self, train_loss, loss_list, accuracy_list, grad_norm, learning_rate, duration, iteration):
(speaker_encoder_loss, gender_autoencoder_loss, gender_classification_loss, gender_adv_loss,
gender_autoencoder_destandardized_loss) = loss_list
speaker_encoder_acc, gender_classification_acc = accuracy_list
with self.writer.as_default():
tf.summary.scalar("training.loss", train_loss, iteration)
tf.summary.scalar("training.loss.spenc", speaker_encoder_loss, iteration)
tf.summary.scalar("training.loss.gauto", gender_autoencoder_loss, iteration)
tf.summary.scalar("training.loss.gautotrue", gender_autoencoder_destandardized_loss, iteration)
tf.summary.scalar("training.loss.gcla", gender_classification_loss, iteration)
tf.summary.scalar("training.loss.gadv", gender_adv_loss, iteration)
tf.summary.scalar('training.acc.spenc', speaker_encoder_acc, iteration)
tf.summary.scalar('training.acc.gcla', gender_classification_acc, iteration)
tf.summary.scalar("grad.norm", grad_norm, iteration)
tf.summary.scalar("learning.rate", learning_rate, iteration)
tf.summary.scalar("duration", duration, iteration)
self.writer.flush()
def log_validation(self, val_loss, loss_list, accuracy_list, iteration):
(speaker_encoder_loss, gender_autoencoder_loss, gender_classification_loss, gender_adv_loss,
gender_autoencoder_destandardized_loss) = loss_list
speaker_encoder_acc, gender_classification_acc = accuracy_list
with self.writer.as_default():
tf.summary.scalar("validation.loss", val_loss, iteration)
tf.summary.scalar("validation.loss.spenc", speaker_encoder_loss, iteration)
tf.summary.scalar("validation.loss.gauto", gender_autoencoder_loss, iteration)
tf.summary.scalar("validation.loss.gautotrue", gender_autoencoder_destandardized_loss, iteration)
tf.summary.scalar("validation.loss.gcla", gender_classification_loss, iteration)
tf.summary.scalar("validation.loss.gadv", gender_adv_loss, iteration)
tf.summary.scalar('validation.acc.spenc', speaker_encoder_acc, iteration)
tf.summary.scalar('validation.acc.gcla', gender_classification_acc, iteration)
self.writer.flush()
| 55.979592 | 112 | 0.728035 | 321 | 2,743 | 5.912773 | 0.190031 | 0.094837 | 0.150158 | 0.214963 | 0.758166 | 0.711275 | 0.651212 | 0.574289 | 0.574289 | 0.305585 | 0 | 0 | 0.176085 | 2,743 | 48 | 113 | 57.145833 | 0.839823 | 0.015312 | 0 | 0.263158 | 0 | 0 | 0.125602 | 0.033346 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.105263 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f0dc6c0f1ad89845b0333162183c190359534d22 | 906 | py | Python | texel/keys.py | Xen0byte/texel | 9dcfba163c66e9da5e9b0757c4e587f297b0cfcb | [
"MIT"
] | 119 | 2022-02-06T21:47:55.000Z | 2022-03-21T23:14:30.000Z | texel/keys.py | Xen0byte/texel | 9dcfba163c66e9da5e9b0757c4e587f297b0cfcb | [
"MIT"
] | 3 | 2022-02-07T08:47:20.000Z | 2022-02-09T09:07:17.000Z | texel/keys.py | Xen0byte/texel | 9dcfba163c66e9da5e9b0757c4e587f297b0cfcb | [
"MIT"
] | 5 | 2022-02-07T08:13:11.000Z | 2022-02-12T22:31:37.000Z | import curses
class Key:
def __init__(self, *values):
self.values = values
self._hash = hash(values)
self._keyset = set(values)
def __eq__(self, other):
return self._hash == other._hash
def __hash__(self):
return self._hash
class Keys:
ESC = Key(27)
TAB = Key(ord("\t"), ord("n"))
SHIFT_TAB = Key(353, ord("N"))
VISUAL = Key(ord("v"), ord("V"))
COPY = Key(ord("c"), ord("y"))
QUIT = Key(ord("q"))
UP = Key(curses.KEY_UP, ord("k"))
DOWN = Key(curses.KEY_DOWN, ord("j"))
LEFT = Key(curses.KEY_LEFT, ord("h"))
RIGHT = Key(curses.KEY_RIGHT, ord("l"))
HELP = Key(ord("?"))
ALL = [ESC, TAB, SHIFT_TAB, VISUAL, COPY, QUIT, UP, DOWN, LEFT, RIGHT, HELP]
_id_to_key = {id: key for key in ALL for id in key.values}
@staticmethod
def to_key(key: int) -> Key:
return Keys._id_to_key.get(key)
| 25.885714 | 80 | 0.572848 | 139 | 906 | 3.517986 | 0.33813 | 0.06135 | 0.09816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007396 | 0.253863 | 906 | 34 | 81 | 26.647059 | 0.715976 | 0 | 0 | 0 | 0 | 0 | 0.015453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.148148 | false | 0 | 0.037037 | 0.111111 | 0.851852 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
f0e7d6bd1974b95ea4a8abd9c52fb010ef93328b | 475 | py | Python | app/converter/nsl/substance_cv_converter.py | c0d3m0nkey/xml-to-json-converter | 9cf040b591f45031c80dc5bc64d6fbb2c4665d25 | [
"BSD-2-Clause"
] | null | null | null | app/converter/nsl/substance_cv_converter.py | c0d3m0nkey/xml-to-json-converter | 9cf040b591f45031c80dc5bc64d6fbb2c4665d25 | [
"BSD-2-Clause"
] | null | null | null | app/converter/nsl/substance_cv_converter.py | c0d3m0nkey/xml-to-json-converter | 9cf040b591f45031c80dc5bc64d6fbb2c4665d25 | [
"BSD-2-Clause"
] | null | null | null | from lxml import objectify, etree
from operator import itemgetter
from ..xml_converter import XmlConverter
class SubstanceCVConverter(XmlConverter):
def convert(self, xml):
item = {}
item["term_english_equiv"] = str(xml.attrib["term-english-equiv"])
item["term_id"] = str(xml.attrib["term-id"])
item["term_lang"] = str(xml.attrib["term-lang"])
item["term_revision_num"] = str(xml.attrib["term-revision-num"])
return item
| 36.538462 | 74 | 0.671579 | 60 | 475 | 5.2 | 0.433333 | 0.102564 | 0.153846 | 0.205128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.187368 | 475 | 12 | 75 | 39.583333 | 0.80829 | 0 | 0 | 0 | 0 | 0 | 0.214737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f0f5116f620313599917f1b146e0c00251125aed | 728 | py | Python | prickly-pufferfish/python_questions/merge_ranges.py | Vthechamp22/summer-code-jam-2021 | 0a8bf1f22f6c73300891fd779da36efd8e1304c1 | [
"MIT"
] | 40 | 2020-08-02T07:38:22.000Z | 2021-07-26T01:46:50.000Z | prickly-pufferfish/python_questions/merge_ranges.py | Vthechamp22/summer-code-jam-2021 | 0a8bf1f22f6c73300891fd779da36efd8e1304c1 | [
"MIT"
] | 134 | 2020-07-31T12:15:45.000Z | 2020-12-13T04:42:19.000Z | prickly-pufferfish/python_questions/merge_ranges.py | Vthechamp22/summer-code-jam-2021 | 0a8bf1f22f6c73300891fd779da36efd8e1304c1 | [
"MIT"
] | 101 | 2020-07-31T12:00:47.000Z | 2021-11-01T09:06:58.000Z | """
In HiCal, a meeting is stored as tuples of integers (start_time, end_time). /
These integers represent the number of 30-minute blocks past 9:00am. /
For example: /
(2, 3) # meeting from 10:00 - 10:30 am /
(6, 9) # meeting from 12:00 - 1:30 pm /
Write a function merge_ranges() that /
takes a list of meeting time ranges as a parameter /
and returns a list of condensed ranges. /
>>> merge_ranges([(3, 5), (4, 8), (10, 12), (9, 10), (0, 1)]) /
[(0, 1), (3, 8), (9, 12)] /
>>> merge_ranges([(0, 3), (3, 5), (4, 8), (10, 12), (9, 10)]) /
[(0, 8), (9, 12)] /
>>> merge_ranges([(0, 3), (3, 5)]) /
[(0, 5)] /
>>> merge_ranges([(0, 3), (3, 5), (7, 8)]) /
[(0, 5), (7, 8)] /
>>> merge_ranges([(1, 5), (2, 3)]) /
[(1, 5)] /
"""
| 28 | 77 | 0.539835 | 131 | 728 | 2.938931 | 0.389313 | 0.171429 | 0.093506 | 0.101299 | 0.194805 | 0.194805 | 0.155844 | 0.155844 | 0.155844 | 0 | 0 | 0.14433 | 0.200549 | 728 | 25 | 78 | 29.12 | 0.517182 | 0.987637 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0b04b5673e663cf9c8fc8da92d1c7bd8d879657a | 1,152 | py | Python | Neural_Networks/Multilayer/Neural_Multi_pybrain.py | jeffreire/Deep_Learning | 960142080dc63ea103b326ea3d6d17bd44ae0f51 | [
"MIT"
] | null | null | null | Neural_Networks/Multilayer/Neural_Multi_pybrain.py | jeffreire/Deep_Learning | 960142080dc63ea103b326ea3d6d17bd44ae0f51 | [
"MIT"
] | null | null | null | Neural_Networks/Multilayer/Neural_Multi_pybrain.py | jeffreire/Deep_Learning | 960142080dc63ea103b326ea3d6d17bd44ae0f51 | [
"MIT"
] | null | null | null | # from pybrain.datasets import SupervisedDataSet
# from pybrain.supervised.trainers import BackpropTrainer
# from pybrain.structure.modules import SoftmaxLayer
# from pybrain.structure.modules import SigmoidLayer
# from pybrain.tools.shortcuts import buildNetwork
# (x,y,z): x is the size of the input layer, y the size of the hidden layer, z the size of the output layer
# rede = buildNetwork(3,3,1)
# (x,y) sao os previsores, dois atributos e uma class
# base = SupervisedDataSet(2, 1)
# queremos dizer que a entrada sera de (0,0) e queremos obter uma saida de (0, )
# base.addSample((0,0),(0, ))
# base.addSample((0,1),(1, ))
# base.addSample((1,0),(0, ))
# base.addSample((1,1),(0, ))
# print(base['input'])
# treinamos a nossa rede passando por parametro rede, basa, taxa de aprendizagem e momento
# treinamento = BackpropTrainer(rede, dataset = base, learningrate = 0.01,
# momentum = 0.06)
# o for é quantas epocas iremos calcular os nossos pesos
# for i in range(1, 30000):
# erro = treinamento.train()
# if i % 1000 == 0:
# print("Erro: %s" % erro)
| 39.724138 | 123 | 0.664931 | 165 | 1,152 | 4.642424 | 0.484848 | 0.071802 | 0.046997 | 0.05483 | 0.138381 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04102 | 0.217014 | 1,152 | 29 | 124 | 39.724138 | 0.808204 | 0.923611 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
0b08ce05031842d1b1da5e73875c7ace68953124 | 6,040 | py | Python | mydb/test_postgres.py | dappsunilabs/DB4SCI | 54bdd03aaa12957e622c921b263e187740a8b2ae | [
"Apache-2.0"
] | 7 | 2018-12-05T19:18:20.000Z | 2020-11-21T07:27:54.000Z | mydb/test_postgres.py | dappsunilabs/DB4SCI | 54bdd03aaa12957e622c921b263e187740a8b2ae | [
"Apache-2.0"
] | 8 | 2018-04-25T06:02:41.000Z | 2020-09-08T21:55:56.000Z | mydb/test_postgres.py | FredHutch/DB4SCI | cc950a36b6b678fe16c1c91925ec402581636fc0 | [
"Apache-2.0"
] | 2 | 2019-11-14T02:09:09.000Z | 2021-12-28T19:05:51.000Z | #!/usr/bin/python
import time
import psycopg2
import argparse
import postgres_util
import container_util
import admin_db
import volumes
from send_mail import send_mail
from config import Config
def full_test(params):
admin_db.init_db()
con_name = params['dbname']
dbtype = params['dbtype']
print('Starting %s Test; Container Name: %s' % (dbtype, con_name))
if container_util.container_exists(con_name):
print(' Duplicate container: KILLING')
result = container_util.kill_con(con_name,
Config.accounts[dbtype]['admin'],
Config.accounts[dbtype]['admin_pass'],
params['username'])
time.sleep(5)
print(result)
print(' removing old directories')
volumes.cleanup_dirs(con_name)
print(' Create container')
result = postgres_util.create(params)
print(' Create result: %s' % result)
port = params['port']
#
# Admin DB checking
#
print(' Check Admin DB log for "create"')
admin_db.display_container_log(limit=1)
print(' Check Admin DB for State entry')
info = admin_db.get_container_state(con_name)
    print('  Name: %s ' % info.name, end=' ')
    print('State: %s ' % info.state, end=' ')
    print('TS: %s ' % info.ts, end=' ')
    print('CID: %d' % info.c_id)
print(' Check Admin DB for Container Info')
info = admin_db.display_container_info(con_name)
print('Info: %s' % info)
print(' Postgres Show All')
postgres_util.showall(params)
print("\n=========")
print(" - Test Accounts\n")
print("=========")
admin_user = Config.accounts[dbtype]['admin']
admin_pass = Config.accounts[dbtype]['admin_pass']
test_user = Config.accounts['test_user']['admin']
test_pass = Config.accounts['test_user']['admin_pass']
for dbuser, dbuserpass in [[test_user, test_pass],
['svc_'+test_user, params['longpass']],
[admin_user, admin_pass]]:
auth = postgres_util.auth_check(dbuser,
dbuserpass,
port)
if auth:
print('User %s verified!' % dbuser)
else:
print('user account not valid: %s' % dbuser)
print(" - Test Complete")
def populate(params):
dbTestName = 'testdb'
dbtype = params['dbtype']
conn_string = "dbname='%s' " % params['dbname']
conn_string += "user='%s' " % Config.accounts[dbtype]['admin']
conn_string += "host='%s' " % Config.container_host
conn_string += "port='%d' " % params['port']
conn_string += "password='%s'" % Config.accounts[dbtype]['admin_pass']
print(' - Populate with test data: ')
try:
conn = psycopg2.connect(conn_string)
    except psycopg2.Error:
        print("I am unable to connect to the database")
        raise
conn.set_isolation_level(0)
cur = conn.cursor()
print(' - Create DB: ' + dbTestName)
cur.execute("CREATE TABLE t1 (id serial PRIMARY KEY, num integer, data varchar);")
cur.execute("INSERT INTO t1 (num, data) VALUES (%s, %s)",
(100, "table t1 in Primary database"))
cur.execute("CREATE DATABASE " + dbTestName)
conn.close()
target = "dbname='%s'" % params['dbname']
testdb = "dbname='%s'" % dbTestName
conn2 = conn_string.replace(target, testdb)
print(' - Connect to new DB: ' + conn2)
conn = psycopg2.connect(conn2)
cur = conn.cursor()
print(' - Create Table and Insert ')
cur.execute("CREATE TABLE t2 (id serial PRIMARY KEY, num integer, data varchar);")
cur.execute("INSERT INTO t2 (num, data) VALUES (%s, %s)",
(100, "Important test data in t2"))
conn.commit()
cur.close()
print(' - Populate Success')
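As an aside, the DSN that `populate()` assembles by repeated string concatenation can be built in one step. A hypothetical standalone sketch (the values shown are made up, not taken from `Config`):

```python
# Hypothetical helper: build the same psycopg2-style DSN that populate()
# assembles, but with a single format template. Values below are made up.
def build_dsn(dbname, user, host, port, password):
    return ("dbname='%s' user='%s' host='%s' port='%d' password='%s'"
            % (dbname, user, host, port, password))

print(build_dsn('testdb', 'admin', 'localhost', 5432, 'secret'))
```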
def delete_test_container(dbtype, con_name):
    print("\n=========")
    print(" - Removing Container")
    print("=========")
    result = container_util.kill_con(con_name,
                                     Config.accounts[dbtype]['admin'],
                                     Config.accounts[dbtype]['admin_pass'])
    print(result)
def setup(dbtype, con_name):
    params = {'dbname': con_name,
              'dbuser': Config.accounts['test_user']['admin'],
              'dbtype': dbtype,
              'dbuserpass': Config.accounts['test_user']['admin_pass'],
              'support': 'Basic',
              'owner': Config.accounts['test_user']['owner'],
              'description': 'Test the Dev',
              'contact': Config.accounts['test_user']['contact'],
              'life': 'medium',
              'backup_type': 'User',
              'backup_freq': 'Daily',
              'backup_life': '6',
              'backup_window': 'any',
              'pitr': 'n',
              'maintain': 'standard',
              'phi': 'No',
              'username': Config.accounts['test_user']['admin'],
              'image': Config.info[dbtype]['images'][1][1],
              'db_vol': '/mydb/dbs_data',
              }
    return params
if __name__ == "__main__":
    dbtype = 'Postgres'
    con_name = 'postgres-test'
    params = setup(dbtype, con_name)
    # params['db_vol'] = '/mydb/encrypt'
    parser = argparse.ArgumentParser(prog='test_postgres.py',
                                     description='Test %s routines' % dbtype)
    parser.add_argument('--purge', '-d', action='store_true', default=False,
                        help='Delete test container')
    parser.add_argument('--backup', '-b', action='store_true', default=False,
                        help='backup %s' % params['dbname'])
    args = parser.parse_args()
    if args.purge:
        delete_test_container(dbtype, con_name)
    elif args.backup:
        (cmd, mesg) = postgres_util.backup(params)
        print("Command: %s\nBackup result: %s" % (cmd, mesg))
    else:
        full_test(params)
        populate(params)
        postgres_util.backup(params)
    print('- Tests Complete!')

# ==== src/matlab2cpp/node/__init__.py (neilferg/matlab2cpp, BSD-3-Clause) ====
"""
The module contains the following submodules.
"""
from .frontend import Node
__all__ = ["Node"]
if __name__ == "__main__":
    import doctest
    doctest.testmod()

# ==== gitrack/exceptions.py (AuHau/giTrack, MIT) ====
class GitrackException(Exception):
    """
    General giTrack's exception
    """
    pass


class ConfigException(GitrackException):
    """
    Exception related to Config functionality.
    """
    pass


class InitializedRepoException(GitrackException):
    """
    Raised when the user tries to initialize a repo that has already been initialized before.
    """
    pass


class UninitializedRepoException(GitrackException):
    """
    Raised when giTrack is invoked in a Git repository that has not been initialized.
    """
    pass


class UnknownShell(GitrackException):
    pass


class PromptException(GitrackException):
    pass


class ProviderException(GitrackException):
    def __init__(self, provider_name, message, *args, **kwargs):
        self.message = message
        self.provider_name = provider_name
        super().__init__(*args, **kwargs)

    def __str__(self):
        return 'Provider \'{}\': {}'.format(self.provider_name, self.message)


class RunningEntry(ProviderException):
    pass
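A minimal sketch of how calling code might use this hierarchy; the two relevant classes are restated here so the snippet runs on its own:

```python
# Sketch: funnel every giTrack error through one handler. The classes mirror
# the hierarchy above so the example is self-contained.
class GitrackException(Exception):
    pass

class ProviderException(GitrackException):
    def __init__(self, provider_name, message, *args, **kwargs):
        self.message = message
        self.provider_name = provider_name
        super().__init__(*args, **kwargs)

    def __str__(self):
        return 'Provider \'{}\': {}'.format(self.provider_name, self.message)

def report(exc):
    # A broad `except GitrackException` catches every subclass, so new
    # error types do not require new handlers.
    try:
        raise exc
    except GitrackException as e:
        return str(e)

print(report(ProviderException('toggl', 'rate limited')))  # Provider 'toggl': rate limited
```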

# ==== oembed/migrations/0001_initial.py (EightMedia/djangoembed, MIT) ====
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.conf import settings
from django.db import models
class Migration(SchemaMigration):
    def forwards(self, orm):
        # Adding model 'StoredOEmbed'
        db.create_table('oembed_storedoembed', (
            ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
            ('match', self.gf('django.db.models.fields.TextField')()),
            ('response_json', self.gf('django.db.models.fields.TextField')()),
            ('resource_type', self.gf('django.db.models.fields.CharField')(max_length=8)),
            ('date_added', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
            ('date_expires', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
            ('maxwidth', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
            ('maxheight', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
            ('object_id', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
            ('content_type', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='related_storedoembed', null=True, to=orm['contenttypes.ContentType'])),
        ))
        db.send_create_signal('oembed', ['StoredOEmbed'])

        # Adding unique constraint on 'StoredOEmbed', fields ['match', 'maxwidth', 'maxheight']
        if 'mysql' not in settings.DATABASES['default']['ENGINE']:
            db.create_unique('oembed_storedoembed', ['match', 'maxwidth', 'maxheight'])

        # Adding model 'StoredProvider'
        db.create_table('oembed_storedprovider', (
            ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
            ('endpoint_url', self.gf('django.db.models.fields.CharField')(max_length=255)),
            ('regex', self.gf('django.db.models.fields.CharField')(max_length=255)),
            ('wildcard_regex', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
            ('resource_type', self.gf('django.db.models.fields.CharField')(max_length=8)),
            ('active', self.gf('django.db.models.fields.BooleanField')(default=False, blank=True)),
            ('provides', self.gf('django.db.models.fields.BooleanField')(default=False, blank=True)),
            ('scheme_url', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
        ))
        db.send_create_signal('oembed', ['StoredProvider'])

        # Adding model 'AggregateMedia'
        db.create_table('oembed_aggregatemedia', (
            ('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
            ('url', self.gf('django.db.models.fields.TextField')()),
            ('object_id', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
            ('content_type', self.gf('django.db.models.fields.related.ForeignKey')(blank=True, related_name='aggregate_media', null=True, to=orm['contenttypes.ContentType'])),
        ))
        db.send_create_signal('oembed', ['AggregateMedia'])
    def backwards(self, orm):
        # Deleting model 'StoredOEmbed'
        db.delete_table('oembed_storedoembed')

        # Removing unique constraint on 'StoredOEmbed', fields ['match', 'maxwidth', 'maxheight']
        if 'mysql' not in settings.DATABASES['default']['ENGINE']:
            db.delete_unique('oembed_storedoembed', ['match', 'maxwidth', 'maxheight'])

        # Deleting model 'StoredProvider'
        db.delete_table('oembed_storedprovider')

        # Deleting model 'AggregateMedia'
        db.delete_table('oembed_aggregatemedia')
    models = {
        'contenttypes.contenttype': {
            'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
            'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
            'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
        },
        'oembed.aggregatemedia': {
            'Meta': {'object_name': 'AggregateMedia'},
            'content_type': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'aggregate_media'", 'null': 'True', 'to': "orm['contenttypes.ContentType']"}),
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'object_id': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
            'url': ('django.db.models.fields.TextField', [], {})
        },
        'oembed.storedoembed': {
            'Meta': {'ordering': "('-date_added',)", 'unique_together': "(('match', 'maxwidth', 'maxheight'),)", 'object_name': 'StoredOEmbed'},
            'content_type': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'related_storedoembed'", 'null': 'True', 'to': "orm['contenttypes.ContentType']"}),
            'date_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
            'date_expires': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'match': ('django.db.models.fields.TextField', [], {}),
            'maxheight': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
            'maxwidth': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
            'object_id': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
            'resource_type': ('django.db.models.fields.CharField', [], {'max_length': '8'}),
            'response_json': ('django.db.models.fields.TextField', [], {})
        },
        'oembed.storedprovider': {
            'Meta': {'ordering': "('endpoint_url', 'resource_type', 'wildcard_regex')", 'object_name': 'StoredProvider'},
            'active': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'blank': 'True'}),
            'endpoint_url': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
            'provides': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'blank': 'True'}),
            'regex': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
            'resource_type': ('django.db.models.fields.CharField', [], {'max_length': '8'}),
            'scheme_url': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
            'wildcard_regex': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
        }
    }

    complete_apps = ['oembed']

# ==== source/bazel/deps/libevent/get.bzl (luxe/CodeLang-compiler, MIT) ====
# It was auto-generated by: code/programs/reflexivity/reflexive_refresh
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
def libevent():
http_archive(
name = "libevent",
build_file = "//bazel/deps/libevent:build.BUILD",
sha256 = "9b436b404793be621c6e01cea573e1a06b5db26dad25a11c6a8c6f8526ed264c",
strip_prefix = "libevent-eee26deed38fc7a6b6780b54628b007a2810efcd",
urls = [
"https://github.com/Unilang/libevent/archive/eee26deed38fc7a6b6780b54628b007a2810efcd.tar.gz",
],
patches = [
"//bazel/deps/libevent/patches:p1.patch",
],
patch_args = [
"-p1",
],
patch_cmds = [
"find . -type f -name '*.c' -exec sed -i 's/#include <stdlib.h>/#include <stdlib.h>\n#include <stdint.h>\n/g' {} \\;",
],
)
| 37.192308 | 130 | 0.623578 | 100 | 967 | 5.91 | 0.6 | 0.030457 | 0.047377 | 0.064298 | 0.145516 | 0.145516 | 0.145516 | 0.145516 | 0.145516 | 0.145516 | 0 | 0.117962 | 0.228542 | 967 | 25 | 131 | 38.68 | 0.674263 | 0.104447 | 0 | 0.190476 | 1 | 0.047619 | 0.590962 | 0.31518 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | true | 0 | 0 | 0 | 0.047619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |

# ==== segitiga/hitung-luas-segitiga.py (Yurimahendra/latihan-big-data, MIT) ====
# number of triangles
n = 123
# base length of one triangle
alas = 30
# height of one triangle
tinggi = 18
# compute the area of one triangle
luas = alas * tinggi * 1/2
# compute the total area
luastotal = n * luas
print('total area : ', luastotal, 'square units')
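The script above can be condensed into a reusable function; a standalone sketch:

```python
# A compact equivalent of the script above, wrapped in a function for reuse.
def total_triangle_area(count, base, height):
    # Area of one triangle is base * height / 2.
    return count * base * height / 2

print(total_triangle_area(123, 30, 18))  # 33210.0
```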

# ==== kolejka/judge/tasks/list_files.py (zielinskit/kolejka-judge, MIT) ====
import itertools
from functools import partial
from typing import Tuple, Optional
from kolejka.judge.tasks.base import TaskBase
class ListFiles(TaskBase):
def __init__(self, *args, variable_name):
self.files = list(args)
self.variable_name = variable_name
def execute(self, environment) -> Tuple[Optional[str], Optional[object]]:
files = list(itertools.chain.from_iterable(map(partial(glob.glob, recursive=True), self.files)))
environment.set_variable(self.variable_name, files)
return None, None
| 29.578947 | 104 | 0.729537 | 71 | 562 | 5.633803 | 0.507042 | 0.12 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174377 | 562 | 18 | 105 | 31.222222 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.384615 | 0 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |

# ==== ExIt/Expert/BaseExpert.py (LarsChrWiik/Expert-Iteration, MIT) ====
from ExIt.Apprentice import BaseApprentice
from Games.GameLogic import BaseGame
class BaseExpert:
    """ Class for the tree search algorithm used for policy improvement """

    def __init__(self):
        self.__name__ = type(self).__name__

    def search(self, state: BaseGame, predictor: BaseApprentice, search_time, use_exploration_policy):
        """ Do policy improvement for a given state.
        :return: a_explore, a_optimal, soft-z """
        raise NotImplementedError("Please Implement this method")

# ==== monasca_common/kafka_lib/partitioner/base.py (zhangjm12/monasca-common, Apache-2.0) ====
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class Partitioner(object):
"""
Base class for a partitioner
"""
def __init__(self, partitions):
"""
Initialize the partitioner
Arguments:
partitions: A list of available partitions (during startup)
"""
self.partitions = partitions
def partition(self, key, partitions=None):
"""
Takes a string key and num_partitions as argument and returns
a partition to be used for the message
Arguments:
key: the key to use for partitioning
partitions: (optional) a list of partitions.
"""
raise NotImplementedError('partition function has to be implemented')
| 33.378378 | 77 | 0.680972 | 160 | 1,235 | 5.225 | 0.56875 | 0.07177 | 0.0311 | 0.038278 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004334 | 0.252632 | 1,235 | 36 | 78 | 34.305556 | 0.901408 | 0.690688 | 0 | 0 | 0 | 0 | 0.152672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |

# ==== Python/CodingBat/string_times.py (dvt32/cpp-journey, MIT) ====
def string_times(str, n):
result = ""
for i in range(0, n):
result += str
return result
| 14.3 | 35 | 0.601399 | 21 | 143 | 4.047619 | 0.809524 | 0.164706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066038 | 0.258741 | 143 | 9 | 36 | 15.888889 | 0.735849 | 0.230769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |

# ==== recipe_server/recipeView.py (Shouyin/Recipe, MIT) ====
from django import http
from django.shortcuts import render
from django.shortcuts import render_to_response
from . import settings
from .scripts import logic
import os
import urllib
import json
def recipe_html(request):
    print(os.path.join(settings.BASE_DIR, "recipe_server/static"))
    return render_to_response("recipe.html")
def recipe_api(request):
    fridge = request.GET["fridge"]
    reconstructed_string = ""
    for i in json.loads(fridge):
        reconstructed_string += i + "\n"
    recipe = request.GET["recipe"]
    print(reconstructed_string)
    return http.HttpResponse(logic.main(reconstructed_string, recipe))

# ==== ua_project_transfer/wf_steps_template.py (UACoreFacilitiesIT/UA-Project_Transfer, MIT) ====
# NOTE: Create a json-like dictionary in the form of:
# WF_STEPS = {
# env1: {
# 1st condition defined in next_steps: {
# 2nd condition defined in next_steps: (Workflow Name, Step Name),
# },
# },
# }
WF_STEPS = {}
| 24.666667 | 76 | 0.608108 | 40 | 296 | 4.4 | 0.625 | 0.056818 | 0.102273 | 0.25 | 0.306818 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013761 | 0.263514 | 296 | 11 | 77 | 26.909091 | 0.793578 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |

# ==== tools/exploitation_tools.py (LucaRibeiro/pentestools, MIT) ====
list = ["Armitage", "Backdoor Factory", "BeEF","cisco-auditing-tool",
"cisco-global-exploiter","cisco-ocs","cisco-torch","Commix","crackle",
"exploitdb","jboss-autopwn","Linux Exploit Suggester","Maltego Teeth",
"Metasploit Framework","MSFPC","RouterSploit","SET","ShellNoob","sqlmap",
"THC-IPV6","Yersinia"]
| 41.125 | 73 | 0.714286 | 38 | 329 | 6.184211 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00639 | 0.048632 | 329 | 7 | 74 | 47 | 0.744409 | 0.051672 | 0 | 0 | 0 | 0 | 0.742765 | 0.07074 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |

# ==== web/api/serializer/foodComment.py (bounswe/bounswe2016group2, Apache-2.0) ====
from api.model.foodComment import FoodComment
from api.model.food import Food
from django.contrib.auth.models import User
from api.serializer.user import UserSerializer
class FoodCommentSerializer(serializers.ModelSerializer):
comment = serializers.CharField(max_length=255)
photo = serializers.CharField(max_length=255, allow_null=True, required=False)
user = serializers.PrimaryKeyRelatedField(queryset=User.objects.all())
food = serializers.PrimaryKeyRelatedField(queryset=Food.objects.all())
class Meta:
model = FoodComment
fields = '__all__'
depth = 1
class FoodCommentReadSerializer(serializers.ModelSerializer):
user = UserSerializer()
class Meta:
model = FoodComment
fields = '__all__'
depth = 1
class FoodCommentPureSerializer(serializers.ModelSerializer):
user = serializers.PrimaryKeyRelatedField(queryset=User.objects.all())
food = serializers.PrimaryKeyRelatedField(queryset=Food.objects.all())
class Meta:
model = FoodComment
fields = ('comment', 'user', 'food')
depth = 1
| 27.428571 | 82 | 0.730903 | 115 | 1,152 | 7.217391 | 0.347826 | 0.077108 | 0.19759 | 0.090361 | 0.507229 | 0.43012 | 0.43012 | 0.43012 | 0.43012 | 0.359036 | 0 | 0.009554 | 0.182292 | 1,152 | 41 | 83 | 28.097561 | 0.87155 | 0 | 0 | 0.555556 | 0 | 0 | 0.025174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.185185 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |

# ==== solutions/problem_122.py (ksvr444/daily-coding-problem, MIT) ====
cval = matrix[crow][ccol]
if crow == rows - 1 and ccol == cols - 1:
return cval
down, right = cval, cval
if crow < rows - 1:
down += get_max_coins_helper(
matrix, crow + 1, ccol, rows, cols)
if ccol < cols - 1:
right += get_max_coins_helper(
matrix, crow, ccol + 1, rows, cols)
return max(down, right)
def get_max_coins(matrix):
if matrix:
return get_max_coins_helper(
matrix, 0, 0, len(matrix), len(matrix[0]))
coins = [[0, 3, 1, 1],
[2, 0, 0, 4],
[1, 5, 3, 1]]
assert get_max_coins(coins) == 12
coins = [[0, 3, 1, 1],
[2, 8, 9, 4],
[1, 5, 3, 1]]
assert get_max_coins(coins) == 25
| 23.636364 | 57 | 0.534615 | 120 | 780 | 3.325 | 0.233333 | 0.105263 | 0.192982 | 0.170426 | 0.466165 | 0.408521 | 0.290727 | 0.135338 | 0.135338 | 0.135338 | 0 | 0.069811 | 0.320513 | 780 | 32 | 58 | 24.375 | 0.683019 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.083333 | false | 0 | 0 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |

# ==== toughradius/common/event_common.py (geosson/GSRadius, Apache-2.0) ====
# coding:utf-8
from toughlib import dispatch
"""触发邮件,短信发送公共方法"""
def trigger_notify(obj, user_info, **kwargs):
if int(obj.get_param_value("webhook_notify_enable", 0)) > 0 and kwargs.get('webhook_notify'):
dispatch.pub(kwargs['webhook_notify'], user_info, async=False)
if int(obj.get_param_value("mail_notify_enable", 0)) > 0:
if obj.get_param_value("mail_mode", 'smtp') == 'toughcloud' and \
obj.get_param_value("toughcloud_license", None) and kwargs.get('toughcloud_mail'):
dispatch.pub(kwargs['toughcloud_mail'], user_info, async=False)
if obj.get_param_value("mail_mode", 'smtp') == 'smtp' and kwargs.get('smtp_mail'):
dispatch.pub(kwargs['smtp_mail'], user_info, async=False)
if int(obj.get_param_value("sms_notify_enable", 0)) > 0 and \
obj.get_param_value("toughcloud_license", None) and kwargs.get('toughcloud_sms'):
dispatch.pub(kwargs['toughcloud_sms'], user_info, async=False)
| 39 | 98 | 0.68146 | 143 | 1,014 | 4.566434 | 0.272727 | 0.064319 | 0.117917 | 0.171516 | 0.526799 | 0.473201 | 0.401225 | 0.401225 | 0.309342 | 0.309342 | 0 | 0.008304 | 0.168639 | 1,014 | 25 | 99 | 40.56 | 0.766311 | 0.032544 | 0 | 0 | 0 | 0 | 0.246347 | 0.021921 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.076923 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
ac94d7600e1dceeb8d2024e62d34864ff7ca1d58 | 2,041 | py | Python | app/models/admin.py | ShuaiGao/mini-shop-server | 8a72b2d457bba8778e97637027ffa82bfa11e8a9 | [
"MIT"
] | null | null | null | app/models/admin.py | ShuaiGao/mini-shop-server | 8a72b2d457bba8778e97637027ffa82bfa11e8a9 | [
"MIT"
] | 1 | 2019-07-08T12:32:29.000Z | 2019-07-08T12:32:29.000Z | app/models/admin.py | ShuaiGao/mini-shop-server | 8a72b2d457bba8778e97637027ffa82bfa11e8a9 | [
"MIT"
] | null | null | null | # _*_ coding: utf-8 _*_
"""
Created by Allen7D on 2018/6/16.
"""
import os.path as op

from flask_admin import Admin, BaseView, expose
from flask import render_template, redirect, url_for
from flask_admin.contrib.sqla import ModelView
from flask_admin.contrib.fileadmin import FileAdmin
from flask_admin import form

from app.models.base import db
from app.models.user import User
from app.models.banner import BannerView
from app.models.user_address import UserAddressView
from app.models.product import ProductView
from app.models.category import CategoryView

__author__ = 'Allen7D'

from wtforms.fields import SelectField


class HomeView(BaseView):
    @expose('/')
    def index(self):
        return self.render("admin.html")


class MyView(ModelView):
    # Disable model creation
    # can_create = False
    can_delete = False
    # Override displayed fields
    column_exclude_list = ['delete_time', 'update_time', 'create_time', 'status']
    column_list = ('email', 'nickname', 'auth')
    column_labels = {
        'email': u"邮件",
        'nickname': u"头像",
        'auth': u"权限"
    }
    form_extra_fields = {
        'auth': form.Select2Field('权限', choices=[('1', '权限1'), ('2', '权限2')])
    }
    # form_overrides = dict(auth=SelectField)
    # form_args = dict(
    #     # Pass the choices to the `SelectField`
    #     auth=dict(
    #         choices=[(1, '超级管理员'), (10, '普通管理员'), (100, '普通用户')]
    #     ))

    def __init__(self, session, **kwargs):
        # You can pass name and other parameters if you want to
        super(MyView, self).__init__(User, session, **kwargs)

    # @expose("/new/", methods=("GET", "POST"))
    # def create_view(self):
    #     return self.render("create_user.html")


def CreateAdminView(admin):
    path = op.join(op.dirname(__file__), u'../static')
    admin.add_view(FileAdmin(path, u'/static', name='文件管理'))
    admin.add_view(BannerView(db.session, name=u'轮播图'))
    admin.add_view(MyView(db.session, name=u'用户管理'))
    admin.add_view(ProductView(db.session, name=u'商品管理'))
    admin.add_view(CategoryView(db.session, name=u'商品分类'))
    admin.add_view(UserAddressView(db.session, name=u'地址管理'))
ac9516fa184eda7179ab54abc6a3a63da22f79c3 | MKCommand.py | the-snowwhite/Machinekit-Workbench | MIT
# Classes implementing the different commands that can be sent to MK
# The implemented classes do not cover the complete functional set of MK
# but are what is required to implement a basic UI.
import enum
import machinetalk.protobuf.message_pb2 as MESSAGE
import machinetalk.protobuf.status_pb2 as STATUS
import machinetalk.protobuf.types_pb2 as TYPES
class MKCommandStatus(enum.Enum):
    '''An enumeration used to track a command through its entire lifetime.'''
    Created = 0
    Sent = 1
    Executed = 2
    Completed = 3
    Obsolete = 4


class MKCommand(object):
    '''Base class for all commands implementing the general framework.'''

    def __init__(self, command):
        self.msg = MESSAGE.Container()
        self.msg.type = command
        self.state = MKCommandStatus.Created

    def __str__(self):
        return self.__class__.__name__

    def expectsResponses(self):
        '''Override and return False if the specific command does not get a response message.
        Most commands do get a response so the default is to return True.'''
        return True

    def serializeToString(self):
        return self.msg.SerializeToString()

    def msgSent(self):
        '''Called by the framework when the command was sent to MK'''
        self.state = MKCommandStatus.Sent

    def msgExecuted(self):
        '''Called by the framework when the command was executed by MK'''
        self.state = MKCommandStatus.Executed

    def msgCompleted(self):
        '''Called by the framework when the command has completed'''
        self.state = MKCommandStatus.Completed

    def msgObsolete(self):
        '''Called by the framework when the command has become obsolete'''
        self.state = MKCommandStatus.Obsolete

    def isExecuted(self):
        '''Returns True if the command has been executed by MK'''
        return self.state in [MKCommandStatus.Executed, MKCommandStatus.Completed]

    def isCompleted(self):
        '''Returns True if the command has completed'''
        return self.state == MKCommandStatus.Completed

    def isObsolete(self):
        '''Returns True if the command is obsolete and can be removed'''
        return self.state == MKCommandStatus.Obsolete

    def statusString(self):
        '''Return command's status as string.'''
        return self.state.name


class MKCommandExecute(MKCommand):
    '''Base class for all commands sent to the 'execute' interpreter.'''

    def __init__(self, command):
        MKCommand.__init__(self, command)
        self.msg.interp_name = 'execute'


class MKCommandPreview(MKCommand):
    '''Base class for all commands sent to the 'preview' interpreter.'''

    def __init__(self, command):
        MKCommand.__init__(self, command)
        self.msg.interp_name = 'preview'


class MKCommandTaskSetState(MKCommandExecute):
    '''Base class for setting the state of task variables.'''

    def __init__(self, state):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_SET_STATE)
        self.msg.emc_command_params.task_state = state


class MKCommandEstop(MKCommandTaskSetState):
    '''Command to engage or disengage the E-Stop.
    on=True means the E-Stop is pressed and MK will ignore all other commands.'''

    def __init__(self, on):
        MKCommandTaskSetState.__init__(self, STATUS.EMC_TASK_STATE_ESTOP if on else STATUS.EMC_TASK_STATE_ESTOP_RESET)


class MKCommandPower(MKCommandTaskSetState):
    '''Command to power MK on or off.'''

    def __init__(self, on):
        MKCommandTaskSetState.__init__(self, STATUS.EMC_TASK_STATE_ON if on else STATUS.EMC_TASK_STATE_OFF)


class MKCommandOpenFile(MKCommand):
    '''Command to open a file, either for 'executing' it or for 'previewing' it.'''

    def __init__(self, filename, preview):
        if preview:
            MKCommandPreview.__init__(self, TYPES.MT_EMC_TASK_PLAN_OPEN)
        else:
            MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_OPEN)
        self.msg.emc_command_params.path = filename


class MKCommandTaskRun(MKCommand):
    '''Command to start execution of the currently opened file - or to display its preview.'''

    def __init__(self, preview, line=0):
        if preview:
            MKCommandPreview.__init__(self, TYPES.MT_EMC_TASK_PLAN_RUN)
        else:
            MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_RUN)
        self.msg.emc_command_params.line_number = line
        self.preview = preview

    def expectsResponses(self):
        return not self.preview


class MKCommandTaskStep(MKCommandExecute):
    '''Command to execute a single step of the current task (from its current line).'''

    def __init__(self):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_STEP)


class MKCommandTaskPause(MKCommandExecute):
    '''Command to pause execution of the current task.'''

    def __init__(self):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_PAUSE)


class MKCommandTaskResume(MKCommandExecute):
    '''Command to resume a currently paused task.'''

    def __init__(self):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_RESUME)


class MKCommandTaskReset(MKCommandExecute):
    '''Command to reset task execution. This clears any paused state and resets progress to line 0.'''

    def __init__(self, preview):
        if preview:
            MKCommandPreview.__init__(self, TYPES.MT_EMC_TASK_PLAN_INIT)
        else:
            MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_INIT)


class MKCommandAxisHome(MKCommand):
    '''Command to initiate homing (or unhoming) of a given axis.
    The homing itself is done by MK without any need of interaction. The staging and sequencing
    of homing multiple axes has to be orchestrated by the UI though.'''

    def __init__(self, index, home=True):
        MKCommand.__init__(self, TYPES.MT_EMC_AXIS_HOME if home else TYPES.MT_EMC_AXIS_UNHOME)
        self.msg.emc_command_params.index = index

    def __str__(self):
        return "MKCommandAxisHome[%d]" % (self.msg.emc_command_params.index)


class MKCommandTaskExecute(MKCommandExecute):
    '''Command for executing arbitrary commands and command sequences.'''

    def __init__(self, cmd):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_PLAN_EXECUTE)
        self.msg.emc_command_params.command = cmd


class MKCommandTaskSetMode(MKCommandExecute):
    '''Command to set a specific task mode. Valid modes are:
    * STATUS.EmcTaskModeType.EMC_TASK_MODE_AUTO   ... required for the execute interpreter to take control
    * STATUS.EmcTaskModeType.EMC_TASK_MODE_MDI    ... required to issue individual g-code commands
    * STATUS.EmcTaskModeType.EMC_TASK_MODE_MANUAL ... required for jogging
    '''

    def __init__(self, mode):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_SET_MODE)
        self.msg.emc_command_params.task_mode = mode


class MKCommandTaskAbort(MKCommandExecute):
    '''Command to abort the current task.'''

    def __init__(self):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_TASK_ABORT)


class MKCommandAxisAbort(MKCommandExecute):
    '''Command to abort the current axis command - mostly used to stop the active jogging command.'''

    def __init__(self, index):
        MKCommandExecute.__init__(self, TYPES.MT_EMC_AXIS_ABORT)
        self.msg.emc_command_params.index = index


class MKCommandAxisJog(MKCommandExecute):
    '''Command to initiate jogging.
    There are two different types of jog, distance and incremental.
    Incremental jogging initiates the jog which will continue until either
    a new jog command is sent or a MKCommandAxisAbort command is sent. This puts some
    requirements on the UI's reliability and capability of sending that termination
    command.
    Distance jogging is marginally safer because MK will silently ignore
    a distance jog if it exceeds the axis' limit. There is no indication that the
    command was not executed and the tool is still at the same position as it was
    before, making the next jog a risky maneuver. This is important for scripted jog
    sequences like a contour around the task's boundaries.
    It is the UI's responsibility to extract proper values for velocity and distance.
    '''

    def __init__(self, index, velocity, distance=None):
        self.index = index
        self.velocity = velocity
        self.distance = distance
        if distance is None:
            MKCommandExecute.__init__(self, TYPES.MT_EMC_AXIS_JOG)
        else:
            MKCommandExecute.__init__(self, TYPES.MT_EMC_AXIS_INCR_JOG)
            self.msg.emc_command_params.distance = distance
        self.msg.emc_command_params.index = index
        self.msg.emc_command_params.velocity = velocity

    def __str__(self):
        if self.distance:
            return "AxisJog(%d, %.2f, %.2f)" % (self.index, self.velocity, self.distance)
        return "AxisJog(%d, %.2f, -)" % (self.index, self.velocity)


class MKCommandTrajSetScale(MKCommand):
    '''Command to overwrite the feed rate or rapid speed of the tool bit. scale is a multiplier of the configured speed.'''

    def __init__(self, scale, rapid=False):
        if rapid:
            MKCommand.__init__(self, TYPES.MT_EMC_TRAJ_SET_RAPID_SCALE)
        else:
            MKCommand.__init__(self, TYPES.MT_EMC_TRAJ_SET_SCALE)
        self.msg.emc_command_params.scale = scale
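Every command above is walked through the Created → Sent → Executed → Completed states by the framework. A stripped-down, self-contained sketch of that lifecycle (the machinetalk protobuf container is omitted here, so this is illustrative only):

```python
import enum

class Status(enum.Enum):
    Created = 0
    Sent = 1
    Executed = 2
    Completed = 3
    Obsolete = 4

class Command:
    """Tracks a command's progress through the same states as MKCommand."""
    def __init__(self):
        self.state = Status.Created
    def msgSent(self):
        self.state = Status.Sent
    def msgExecuted(self):
        self.state = Status.Executed
    def msgCompleted(self):
        self.state = Status.Completed
    def isExecuted(self):
        # A completed command still counts as executed, matching MKCommand.
        return self.state in (Status.Executed, Status.Completed)
    def isCompleted(self):
        return self.state == Status.Completed

cmd = Command()
cmd.msgSent()
cmd.msgExecuted()
print(cmd.isExecuted(), cmd.isCompleted())  # True False
cmd.msgCompleted()
print(cmd.isCompleted())  # True
```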
ac97d43733100708fa2b0d168201fd6612736104 | GearC/material.py | cfernandesFEUP/Gear-Calculation | Unlicense
## LIBRARY OF MATERIALS #######################################################
def matp(mat, Tbulk, NL):
    import numpy as np
    E, v, cpg, kg, rohg, sigmaHlim, sigmaFlim = [np.zeros(2) for _ in range(7)]
    for i in range(len(E)):
        if mat[i] == 'POM':
            E[i] = 3.2e9          # 2900 MPa (min) - 3500 MPa (max)
            v[i] = 0.35
            cpg[i] = 1465         # J/kg.K
            kg[i] = 0.3           # W/m.K (0.23 (min) 0.37 (max))
            rohg[i] = 1415        # 1410 (min) - 1420 (max)
            sigmaHlim[i] = 36 - 0.0012*Tbulk**2 + (1000 - 0.025*Tbulk**2)*NL**(-0.21)
            sigmaFlim[i] = 26 - 0.0025*Tbulk**2 + 400*NL**(-0.2)
        elif mat[i] == 'PEEK':
            E[i] = 3.65e9
            v[i] = 0.38
            cpg[i] = 1472         # 1443 - 1501
            kg[i] = 0.25          # W/m.K
            rohg[i] = 1320
            sigmaHlim[i] = 36 - 0.0012*Tbulk**2 + (1000 - 0.025*Tbulk**2)*NL**(-0.21)  # Nylon (PA66)
            sigmaFlim[i] = 30 - 0.22*Tbulk + (4600 - 900*Tbulk**0.3)*NL**(-1/3)        # Nylon (PA66)
        elif mat[i] == 'PA66':
            E[i] = 1.85e9         # 1700 MPa (min) - 2000 MPa (max)
            v[i] = 0.3            # 0.25 - 0.35
            cpg[i] = 1670         # J/kg.K
            kg[i] = 0.26          # W/m.K (0.25 (min) 0.27 (max))
            rohg[i] = 1140        # 1130 (min) - 1150 (max)
            sigmaHlim[i] = 36 - 0.0012*Tbulk**2 + (1000 - 0.025*Tbulk**2)*NL**(-0.21)
            sigmaFlim[i] = 30 - 0.22*Tbulk + (4600 - 900*Tbulk**0.3)*NL**(-1/3)
        elif mat[i] == 'ADI':
            E[i] = 210e9
            v[i] = 0.26           # 0.22 (min) 0.30 (max)
            cpg[i] = 460.548
            kg[i] = 55            # W/m.K
            rohg[i] = 7850
            sigmaHlim[i] = 1500
            sigmaFlim[i] = 430
        elif mat[i] == 'STEEL':
            E[i] = 206e9
            v[i] = 0.3            # 0.22 (min) 0.30 (max)
            cpg[i] = 465
            kg[i] = 46            # W/m.K
            rohg[i] = 7830
            sigmaHlim[i] = 1500
            sigmaFlim[i] = 430
    return E, v, cpg, kg, rohg, sigmaHlim, sigmaFlim
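The polymer fatigue limits above depend on bulk temperature and load-cycle count. As a quick sanity check, the POM contact-fatigue expression can be evaluated directly; the 40 °C bulk temperature and 1e6 cycles below are assumed example values, not values from the source.

```python
# Evaluate the POM contact-fatigue limit sigma_Hlim from matp() at
# assumed example conditions: Tbulk = 40 (deg C), NL = 1e6 load cycles.
Tbulk = 40.0
NL = 1e6
sigmaHlim = 36 - 0.0012 * Tbulk**2 + (1000 - 0.025 * Tbulk**2) * NL**(-0.21)
print(round(sigmaHlim, 2))  # limit in MPa
```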
aca120dbe457333e15ac5e2607f6815fc7e3bb5a | deepchem/data/tests/test_shape.py | deloragaskins/deepchem | MIT
import deepchem as dc
import numpy as np
import os
def test_numpy_dataset_get_shape():
    """Test that get_shape works for numpy datasets."""
    num_datapoints = 100
    num_features = 10
    num_tasks = 10
    # Generate data
    X = np.random.rand(num_datapoints, num_features)
    y = np.random.randint(2, size=(num_datapoints, num_tasks))
    w = np.random.randint(2, size=(num_datapoints, num_tasks))
    ids = np.array(["id"] * num_datapoints)
    dataset = dc.data.NumpyDataset(X, y, w, ids)
    X_shape, y_shape, w_shape, ids_shape = dataset.get_shape()
    assert X_shape == X.shape
    assert y_shape == y.shape
    assert w_shape == w.shape
    assert ids_shape == ids.shape


def test_disk_dataset_get_shape_single_shard():
    """Test that get_shape works for disk dataset."""
    num_datapoints = 100
    num_features = 10
    num_tasks = 10
    # Generate data
    X = np.random.rand(num_datapoints, num_features)
    y = np.random.randint(2, size=(num_datapoints, num_tasks))
    w = np.random.randint(2, size=(num_datapoints, num_tasks))
    ids = np.array(["id"] * num_datapoints)
    dataset = dc.data.DiskDataset.from_numpy(X, y, w, ids)
    X_shape, y_shape, w_shape, ids_shape = dataset.get_shape()
    assert X_shape == X.shape
    assert y_shape == y.shape
    assert w_shape == w.shape
    assert ids_shape == ids.shape


def test_disk_dataset_get_shape_multishard():
    """Test that get_shape works for multisharded disk dataset."""
    num_datapoints = 100
    num_features = 10
    num_tasks = 10
    # Generate data
    X = np.random.rand(num_datapoints, num_features)
    y = np.random.randint(2, size=(num_datapoints, num_tasks))
    w = np.random.randint(2, size=(num_datapoints, num_tasks))
    ids = np.array(["id"] * num_datapoints)
    dataset = dc.data.DiskDataset.from_numpy(X, y, w, ids)
    # Should now have 10 shards
    dataset.reshard(shard_size=10)
    X_shape, y_shape, w_shape, ids_shape = dataset.get_shape()
    assert X_shape == X.shape
    assert y_shape == y.shape
    assert w_shape == w.shape
    assert ids_shape == ids.shape


def test_disk_dataset_get_legacy_shape_single_shard():
    """Test that get_shape works for legacy disk dataset."""
    # This is the shape of legacy_data
    num_datapoints = 100
    num_features = 10
    num_tasks = 10
    current_dir = os.path.dirname(os.path.abspath(__file__))
    # legacy_dataset is a dataset in the legacy format kept around for testing
    # purposes.
    data_dir = os.path.join(current_dir, "legacy_dataset")
    dataset = dc.data.DiskDataset(data_dir)
    X_shape, y_shape, w_shape, ids_shape = dataset.get_shape()
    assert X_shape == (num_datapoints, num_features)
    assert y_shape == (num_datapoints, num_tasks)
    assert w_shape == (num_datapoints, num_tasks)
    assert ids_shape == (num_datapoints,)


def test_disk_dataset_get_legacy_shape_multishard():
    """Test that get_shape works for multisharded legacy disk dataset."""
    # This is the shape of legacy_data_reshard
    num_datapoints = 100
    num_features = 10
    num_tasks = 10
    # legacy_dataset_reshard is a sharded dataset in the legacy format kept
    # around for testing
    current_dir = os.path.dirname(os.path.abspath(__file__))
    data_dir = os.path.join(current_dir, "legacy_dataset_reshard")
    dataset = dc.data.DiskDataset(data_dir)
    # Should now have 10 shards
    assert dataset.get_number_shards() == 10
    X_shape, y_shape, w_shape, ids_shape = dataset.get_shape()
    assert X_shape == (num_datapoints, num_features)
    assert y_shape == (num_datapoints, num_tasks)
    assert w_shape == (num_datapoints, num_tasks)
    assert ids_shape == (num_datapoints,)


def test_get_shard_size():
    """
    Test that using ids for getting the shard size does not break the method.

    The issue arises when attempting to load a dataset that does not have a labels
    column. The create_dataset method of the DataLoader class sets the y to None
    in this case, which causes the existing implementation of the get_shard_size()
    method to fail, as it relies on the dataset having a not None y column. This
    consequently breaks all methods depending on this, like the splitters for
    example.

    Note
    ----
    DiskDatasets without labels cannot be resharded!
    """
    current_dir = os.path.dirname(os.path.abspath(__file__))
    file_path = os.path.join(current_dir, "reaction_smiles.csv")
    featurizer = dc.feat.DummyFeaturizer()
    loader = dc.data.CSVLoader(
        tasks=[], feature_field="reactions", featurizer=featurizer)
    dataset = loader.create_dataset(file_path)
    assert dataset.get_shard_size() == 4
acad836bb967db6d3ec59df7b4fb252d32176a06 | src/the_teleop/test_popout.py | NuenoB/TheTeleop | BSD-3-Clause
#! /usr/bin/env python
import os
import rospy
import rospkg
from readbag import restore
from qt_gui.plugin import Plugin
from python_qt_binding.QtCore import Qt
from python_qt_binding import loadUi
from python_qt_binding.QtGui import QFileDialog, QGraphicsView, QIcon, QWidget
from PyQt4 import QtGui, QtCore
from example_ui import *
from PyQt4 import QtGui
from v2 import Ui_addbag
class Form1(QtGui.QWidget, Ui_addbag):
    def __init__(self, parent=None):
        QtGui.QWidget.__init__(self, parent)
        self.setupUi(self)
        self.pushButton_2.clicked.connect(self.handleButton)
        self.window2 = None

    def handleButton(self):
        if self.window2 is None:
            self.window2 = Form1(self)
        self.window2.show()
        self.hide()


def pop():
    import sys
    app = QtGui.QApplication(sys.argv)
    window = Form1()
    window.show()
    sys.exit(app.exec_())
acb17cfb85ffc305e2395079620b49264e4e9636 | 377 | py | Python | active/setup.py | jordan-schneider/value-alignment-verification | f2c877b16dfefa7cd8089b7aa3fe084ab907235e | [
"MIT"
] | null | null | null | active/setup.py | jordan-schneider/value-alignment-verification | f2c877b16dfefa7cd8089b7aa3fe084ab907235e | [
"MIT"
] | 2 | 2020-05-25T14:50:11.000Z | 2021-01-18T20:23:30.000Z | active/setup.py | jordan-schneider/batch-active-preference-based-learning | f2c877b16dfefa7cd8089b7aa3fe084ab907235e | [
"MIT"
] | 1 | 2021-08-24T18:22:13.000Z | 2021-08-24T18:22:13.000Z | from distutils.core import setup
from pathlib import Path
# TODO(joschnei): Add typing info
setup(
    name="active",
    version="0.1",
    packages=["active"],
    install_requires=[
        "scipy",
        "numpy",
        "driver @ git+https://github.com/jordan-schneider/driver-env.git#egg=driver",
    ],
    package_data={
        'active': ['py.typed'],
    },
)
ace2b1a29a3abb15aedb474de4948707e3d81eeb | erpnext_feature_board/hook_events/review_request.py | akurungadam/erpnext_feature_board | MIT
import frappe
def delete_approved_build_requests():
    """
    Scheduled hook to delete approved Review Requests for changing site deployments.
    """
    approved_build_requests = frappe.get_all(
        "Review Request",
        filters={
            "request_type": ["in", ["Build", "Upgrade", "Delete"]],
            "request_status": "Approved",
        },
    )
    for request in approved_build_requests:
        frappe.delete_doc("Review Request", request.name)
ace7ab1c03480ac4b4f41e3fb954c1d488666de5 | 247 | py | Python | trustpayments/models/failure_category.py | TrustPayments/python-sdk | 6fde6eb8cfce270c3612a2903a845c13018c3bb9 | [
"Apache-2.0"
] | 2 | 2020-01-16T13:24:06.000Z | 2020-11-21T17:40:17.000Z | postfinancecheckout/models/failure_category.py | pfpayments/python-sdk | b8ef159ea3c843a8d0361d1e0b122a9958adbcb4 | [
"Apache-2.0"
] | 4 | 2019-10-14T17:33:23.000Z | 2021-10-01T14:49:11.000Z | postfinancecheckout/models/failure_category.py | pfpayments/python-sdk | b8ef159ea3c843a8d0361d1e0b122a9958adbcb4 | [
"Apache-2.0"
] | 2 | 2019-10-15T14:17:10.000Z | 2021-09-17T13:07:09.000Z | # coding: utf-8
from enum import Enum, unique
@unique
class FailureCategory(Enum):
    TEMPORARY_ISSUE = "TEMPORARY_ISSUE"
    INTERNAL = "INTERNAL"
    END_USER = "END_USER"
    CONFIGURATION = "CONFIGURATION"
    DEVELOPER = "DEVELOPER"
acee87269de38c5afcc9577b696b2d9e96852134 | 149 | py | Python | Questoes/b1_q09_piso.py | viniciusm0raes/python | c4d4f1a08d1e4de105109e1f67fae9fcc20d7fce | [
"MIT"
] | null | null | null | Questoes/b1_q09_piso.py | viniciusm0raes/python | c4d4f1a08d1e4de105109e1f67fae9fcc20d7fce | [
"MIT"
] | null | null | null | Questoes/b1_q09_piso.py | viniciusm0raes/python | c4d4f1a08d1e4de105109e1f67fae9fcc20d7fce | [
"MIT"
] | null | null | null | metros = float(input('Quantos metros de piso vc deseja? '))
preco = 70  # price per metre of flooring (R$)
total = metros * preco
print('O preço total do pedido é: R$ %.2f' % total)  # "The total order price is: R$ ..."
acf995ba4adee5652bf497dcac8aaaa0df89b254 | tests/test_day22.py | arcadecoffee/advent-2021 | MIT
"""
Tests for Day 22
"""
from day22.module import part_1, part_2, \
FULL_INPUT_FILE, TEST_INPUT_FILE_1, TEST_INPUT_FILE_2, TEST_INPUT_FILE_3
def test_part_1_1():
    result = part_1(TEST_INPUT_FILE_1)
    assert result == 39


def test_part_1_2():
    result = part_1(TEST_INPUT_FILE_2)
    assert result == 590784


def test_part_1_3():
    result = part_1(TEST_INPUT_FILE_3)
    assert result == 474140


def test_part_1_full():
    result = part_1(FULL_INPUT_FILE)
    assert result == 546724


def test_part_2():
    result = part_2(TEST_INPUT_FILE_3)
    assert result == 2758514936282235


def test_part_2_full():
    result = part_2(FULL_INPUT_FILE)
    assert result == 1346544039176841
4a21f3279034131e287608aa7f238be08a6231f6 | project4github/largest_digit.py | chinkaih319/SC101 | MIT
"""
File: largest_digit.py
Name:
----------------------------------
This file recursively prints the biggest digit in
5 different integers, 12345, 281, 6, -111, -9453
If your implementation is correct, you should see
5, 8, 6, 1, 9 on Console.
"""
def main():
    print(find_largest_digit(12345))   # 5
    print(find_largest_digit(281))     # 8
    print(find_largest_digit(6))       # 6
    print(find_largest_digit(-111))    # 1
    print(find_largest_digit(-9453))   # 9


def find_largest_digit(n):
    """
    :param n: int, the integer to search (may be negative)
    :return: int, the largest digit appearing in n
    """
    time = 0
    bs = 0
    return helper(n, time, bs)


def helper(n, time, bs):
    if 0 <= n < 10:  # single digit left (was `<= 10`, which mishandled n == 10)
        return n
    else:
        if n < 10 ** (time + 1):
            if n < 0:
                return helper(-n, time, bs)
            else:
                first = n // (10 ** time)
                if first > bs:
                    return first
                else:
                    return bs
        else:
            sq = n // (10 ** time) - (n // (10 ** (time + 1))) * 10
            if sq > bs:
                bs = sq
            time += 1
            return helper(n, time, bs)


if __name__ == '__main__':
    main()
c57cdef697b1ae7480e0770028e9b4a5e38b5778 | stock/migrations/0020_stockproductcds_stockproductdis.py | unicefburundi/paludisme | MIT
# -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-09-10 20:53
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
("bdiadmin", "0013_auto_20170319_1415"),
("stock", "0019_stockproductprov"),
]
operations = [
migrations.CreateModel(
name="StockProductCDS",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("year", models.PositiveIntegerField(default=2017)),
("week", models.CharField(max_length=3)),
("product", models.CharField(max_length=50)),
("quantity", models.FloatField(default=0.0)),
(
"cds",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE, to="bdiadmin.CDS"
),
),
],
),
migrations.CreateModel(
name="StockProductDis",
fields=[
(
"id",
models.AutoField(
auto_created=True,
primary_key=True,
serialize=False,
verbose_name="ID",
),
),
("year", models.PositiveIntegerField(default=2017)),
("week", models.CharField(max_length=3)),
("product", models.CharField(max_length=50)),
("quantity", models.FloatField(default=0.0)),
(
"district",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to="bdiadmin.District",
),
),
],
),
]
# File: KivyApp/login.py
# Repo: yeltayzhastay/jadenapp (MIT license)
import pandas as pd
import numpy as np
import pickle
from sklearn.metrics.pairwise import linear_kernel
from sklearn.feature_extraction.text import TfidfVectorizer
import csv
from kivy.app import App
from kivy.uix.gridlayout import GridLayout
from kivy.uix.label import Label
from kivy.uix.textinput import TextInput
class Jaden:
_model = None
_vector = None
_vocabulary = None
def __init__(self):
self._model = pickle.load(open('_model.sav', 'rb'))
self._vector = pickle.load(open('_vectorized.sav', 'rb'))
with open('dataset/tarih.csv', newline='', encoding='utf8') as f:
reader = csv.reader(f)
_vocabulary = list(reader)
self._vocabulary = _vocabulary
def find_answer(self, question):
        _cos_sim = linear_kernel(self._model.transform([question]), self._vector).flatten()
        _cos_sim = np.argsort(-_cos_sim)[:5]
_result = []
for i in _cos_sim:
_result.append(self._vocabulary[i+1][1])
return _result
class LoginScreen(GridLayout):
def __init__(self, **kwargs):
super(LoginScreen, self).__init__(**kwargs)
self.cols = 2
self.add_widget(Label(text='User Name'))
self.username = TextInput(multiline=False)
self.add_widget(self.username)
self.add_widget(Label(text='password'))
self.password = TextInput(password=True, multiline=False)
self.add_widget(self.password)
class MyApp(App):
def build(self):
return LoginScreen()
if __name__ == '__main__':
    MyApp().run()
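The retrieval in `Jaden.find_answer` ranks stored answers by cosine similarity against a tf-idf matrix. A numpy-only sketch of the same ranking idea, using a hypothetical three-line corpus in place of the pickled vectorizer and CSV vocabulary:

```python
import numpy as np

# Hypothetical toy corpus standing in for the pickled model and CSV data.
corpus = ["capital of france", "height of eiffel tower", "date of french revolution"]
vocab = sorted({w for text in corpus for w in text.split()})
col = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    # Plain bag-of-words counts over the corpus vocabulary.
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in col:
            v[col[w]] += 1.0
    return v

def top_matches(question, k=2):
    q = vectorize(question)
    rows = np.array([vectorize(t) for t in corpus])
    # Cosine similarity: normalised dot products, ranked highest first.
    sims = rows @ q / (np.linalg.norm(rows, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k]

print(top_matches("capital of france"))  # index 0 ranks first
```

`linear_kernel` on L2-normalised tf-idf rows computes the same normalised dot products in one call.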
# File: 1 - Beginner/1079.py
# Repo: andrematte/uri-submissions (MIT license)
# URI Online Judge 1079
N = int(input())
for i in range(0,N):
    Numbers = input().split()
    num1, num2, num3 = (float(x) for x in Numbers[:3])
    print(round((2 * num1 + 3 * num2 + 5 * num3) / 10, 1))
# File: viewcount.py
# Repo: Peace1-zhwiki/MOSIW (MIT license)
import pywikibot
from pywikibot import pagegenerators
from urllib.request import urlopen
import urllib.parse
import regex as re #use this rather than "re" to avoid the "look-behind requires fixed-width pattern" error
site = pywikibot.Site('zh','wikipedia')
cat = pywikibot.Category(site,'Category:連結格式不正確的條目')
page_to_write = pywikibot.Page(site, u"User:和平奮鬥救地球/MOSIW")
gen = pagegenerators.CategorizedPageGenerator(cat, recurse=True)
ilh='(?<!\{\{(Advtranslation|Plant\-translation|Translate|Translating|Translation[ _]+WIP|Translation|Trans|Tran|Voltranslation|Wptranslation|正在翻(譯|译)|(翻)?(譯|译)(中)?)[^\}]*)\[\[\:(aa|ab|ace|ady|af|ak|als|am|an|ang|ar|arc|arz|as|ast|av|ay|az|azb|ba|bar|bat-smg|bcl|be|be-tarask|be-x-old|bg|bh|bi|bjn|bm|bn|bo|bpy|br|bs|bug|bxr|ca|cbk-zam|cdo|ce|ceb|ch|cho|chr|chy|ckb|co|cr|crh|cs|csb|cu|cv|cy|da|de|diq|dsb|dv|dz|ee|egl|eml|el|en|eo|es|et|eu|ext|fa|ff|fi|fiu-vro|fj|fo|fr|frp|frr|fur|fy|ga|gag|gan|gd|gl|glk|gn|gom|got|gsw|als|gu|gv|ha|hak|haw|he|hi|hif|ho|hr|hsb|ht|hu|hy|hz|ia|id|ie|ig|ii|ik|ilo|io|is|it|iu|ja|jp|jam|jbo|jv|ka|kaa|kab|kbd|kg|ki|kj|kk|kl|km|kn|ko|koi|kr|krc|ks|ksh|ku|kv|kw|ky|la|lad|lb|lbe|lez|lg|li|lij|lmo|ln|lo|lrc|lt|ltg|lv|lzh|zh-classical|mai|map-bms|mdf|mg|mh|mhr|mi|min|mk|ml|mn|mo|mr|mrj|ms|mt|mus|mwl|my|myv|mzn|na|nah|nan|zh-min-nan|nap|nb|no|nds|nds-nl|ne|ne|new|ng|nl|nn|no|nov|nrm|nso|nv|ny|oc|olo|om|or|os|pa|pag|pam|pap|pcd|pdc|pfl|pi|pih|pl|pms|pnb|pnt|ps|pt|qu|rm|rmy|rn|ro|roa-rup|roa-tara|ru|rue|rup|rw|sa|sah|sc|scn|sco|sd|se|sg|sgs|sh|si|simple|sk|sl|sm|sn|so|sq|sr|srn|ss|st|stq|su|sv|sw|szl|ta|tcy|te|tet|tg|th|ti|tk|tl|tn|to|tpi|tr|ts|tt|tum|tw|ty|tyv|udm|ug|uk|ur|uz|ve|vec|vep|vi|vls|vo|vro|wa|war|wo|wuu|xal|xh|xmf|yi|yo|yue|zh-yue|za|zea|zu)\:(?!(wiktionary|wikt|wikinews|n|wikibooks|b|wikiquote|q|wikisource|s|oldwikisource|species|wikispecies|wikiversity|v|betawikiversity|wikimedia|foundation|wmf|wikivoyage|voy|commons|c|meta|metawikipedia|m|strategy|incubator|mediawikiwiki|mw|mediawiki|quality|otrswiki|otrs|ticket|phabricator|bugzilla|mediazilla|phab|nost|testwiki|wikidata|d|outreach|outreachwiki|toollabs|wikitech|dbdump|download|gerrit|mail|mailarchive|rev|spcom|sulutil|svn|tools|tswiki|wm2016|wm2017|wmania|User|Wikipedia|MediaWiki|File|Image|WP|Project|Template|Help|Special|U|利用者)\:)|\[\[(JP|JA|EN)\:\:'
viewcount = 0
arts = []
views = []
ilh_count = []
edit_num = []
page_size = []
count = 0
html_start = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/zh.wikipedia/all-access/user/"
html_end = "/monthly/2020100100/2020110100"
tot_num = len(list(cat.articles(namespaces=0,recurse=True)))
print(tot_num)
for page in gen:
count+=1
percentage = 100*count/tot_num
art_name = page.title()
html_url = html_start + urllib.parse.quote(art_name).replace('/','%2F') + html_end
try:
urlopen(html_url)
except:
continue
html = urlopen(html_url).read()
strhtml = str(html)
viewcount = strhtml[strhtml.find('views')+7:-4]
if(int(viewcount)<1000): continue
art_txt = page.text
ilh_num = len(re.findall(ilh,art_txt,re.I))
print(format(percentage, '0.3f'),'%:',art_name,viewcount,ilh_num,page.revision_count(),len(page.text.encode("utf8")))
arts.append(art_name)
views.append(int(viewcount))
ilh_count.append(ilh_num)
edit_num.append(page.revision_count())
page_size.append(len(page.text.encode("utf8")))
for i in range(len(views)):
for j in range(len(views)-i-1):
if views[j]<views[j+1]:
views[j], views[j+1] = views[j+1], views[j]
arts[j], arts[j+1] = arts[j+1], arts[j]
ilh_count[j], ilh_count[j+1] = ilh_count[j+1], ilh_count[j]
edit_num[j], edit_num[j+1] = edit_num[j+1], edit_num[j]
page_size[j], page_size[j+1] = page_size[j+1], page_size[j]
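# The nested loops above are a bubble sort that keeps five parallel lists
# aligned while ordering by view count; the same permutation can be sketched
# with `sorted` over indices (toy data here, not the live Wikipedia lists):

```python
# Toy data; in viewcount.py these are views/arts/ilh_count/edit_num/page_size.
views = [120, 560, 340]
arts = ["A", "B", "C"]

# One index permutation, applied to every parallel list (descending by views).
order = sorted(range(len(views)), key=views.__getitem__, reverse=True)
views = [views[i] for i in order]
arts = [arts[i] for i in order]
print(views, arts)  # [560, 340, 120] ['B', 'C', 'A']
```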
writestr = '[[:Category:連結格式不正確的條目]]當中前1,000高瀏覽量(2020年10月份數據)之條目\n\n'
writestr += '最後更新時間:~~~~~\n\n'
writestr += '{| class="wikitable sortable"\n! 條目名 !! 瀏覽量 !! 不合規跨語言連結總數(粗估) !! 頁面編輯次數 !! 頁面長度(位元組)\n'
for i in range(len(views)):
if i>=1000: break
print(arts[i],views[i])
writestr += '|-\n|[[' + arts[i] + ']]||' + str(views[i]) + '||' + str(ilh_count[i]) + '||' + str(edit_num[i]) + '||' + str(page_size[i]) + '\n'
writestr += '|}'
page_to_write.text = writestr
page_to_write.save(u"使用[[mw:Manual:Pywikibot/zh|Pywikibot]]更新數據")
print('Done')
# File: base/migrations/0006_profile_history.py
# Repo: polarity-cf/arugo (MIT license)
# Generated by Django 3.2.9 on 2021-11-13 14:43
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0005_authquery_password'),
]
operations = [
migrations.AddField(
model_name='profile',
name='history',
field=models.CharField(default='[]', max_length=1000),
),
]
#! /usr/bin/env python
# File: models/van_der_waals.py
# Repo: HARSHAL-IITB/spa-design-tool (MIT license)
# The MIT License (MIT)
#
# Copyright (c) 2015, EPFL Reconfigurable Robotics Laboratory,
# Philip Moseley, philip.moseley@gmail.com
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import numpy as np
#--------------------------------------------------------------------------------
# Material model name.
#--------------------------------------------------------------------------------
def name(): return 'vdw'
def pname(): return 'Van der Waals'
def params(): return 'mu lambda_m alpha beta'
def descr(): return 'Van der Waals Model.'
#--------------------------------------------------------------------------------
# Function defining the uniaxial stress given strain.
#--------------------------------------------------------------------------------
def stressU(x, u, Lm, a, B):
L = 1.0+x
I1 = np.power(L,2.0) + 2.0*np.power(L,-1.0)
I2 = np.power(L,-2.0) + 2.0*L
I = (1.0-B)*I1 + B*I2
n = np.sqrt((I-3.0)/(np.power(Lm,2.0)-3.0))
t1 = (1.0/(1.0-n)) - a * np.sqrt(0.5*(I-3.0))
t2 = L*(1.0-B) + B
return u*(1.0-np.power(L,-3.0)) * t1 * t2
#--------------------------------------------------------------------------------
# Function defining the biaxial stress given strain.
#--------------------------------------------------------------------------------
def stressB(x, u, Lm, a, B):
L = 1.0+x
I1 = 2.0*np.power(L,2.0) + np.power(L,-4.0)
I2 = 2.0*np.power(L,-2.0) + np.power(L,4.0)
I = (1.0-B)*I1 + B*I2
n = np.sqrt((I-3.0)/(np.power(Lm,2.0)-3.0))
t1 = (1.0/(1.0-n)) - a * np.sqrt(0.5*(I-3.0))
t2 = 1.0 - B + B*np.power(L,2.0)
return u*(L-np.power(L,-5.0)) * t1 * t2
#--------------------------------------------------------------------------------
# Function defining the planar stress given strain.
#--------------------------------------------------------------------------------
def stressP(x, u, Lm, a, B):
L = 1.0+x
I1 = np.power(L,2.0)+np.power(L,-2.0) + 1.0
I2 = I1
I = (1.0-B)*I1 + B*I2
n = np.sqrt((I-3.0)/(np.power(Lm,2.0)-3.0))
t1 = (1.0/(1.0-n)) - a * np.sqrt(0.5*(I-3.0))
return u*(L-np.power(L,-3.0)) * t1
#--------------------------------------------------------------------------------
# Calculate the Ds
#--------------------------------------------------------------------------------
def compressibility(v, u, Lm, a, B):
u0 = u
D1 = 3.0*(1.0-2.0*v) / (u0*(1.0+v))
return [D1]
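# A quick sanity check of the uniaxial stress function above (parameter values
# here are illustrative, not from any calibration): at zero strain the stretch
# is 1, both invariants equal 3, and the stress vanishes.

```python
import numpy as np

def stressU(x, u, Lm, a, B):
    # Mirrors the module's uniaxial Van der Waals stress definition.
    L = 1.0 + x
    I1 = np.power(L, 2.0) + 2.0 * np.power(L, -1.0)
    I2 = np.power(L, -2.0) + 2.0 * L
    I = (1.0 - B) * I1 + B * I2
    n = np.sqrt((I - 3.0) / (np.power(Lm, 2.0) - 3.0))
    t1 = (1.0 / (1.0 - n)) - a * np.sqrt(0.5 * (I - 3.0))
    t2 = L * (1.0 - B) + B
    return u * (1.0 - np.power(L, -3.0)) * t1 * t2

print(stressU(0.0, 1.0, 10.0, 0.1, 0.5))   # zero stress at zero strain
print(stressU(0.1, 1.0, 10.0, 0.1, 0.5))   # small positive tensile stress
```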
# -*- coding: utf-8 -*-
# File: idfy_rest_client/models/person_person_information.py
# Repo: dealflowteam/Idfy (MIT license)
"""
idfy_rest_client.models.person_person_information
This file was automatically generated for Idfy by APIMATIC v2.0 ( https://apimatic.io )
"""
from idfy_rest_client.api_helper import APIHelper
class PersonPersonInformation(object):
"""Implementation of the 'Person.PersonInformation' model.
TODO: type model description here.
Attributes:
firstname (string): TODO: type description here.
middlename (string): TODO: type description here.
lastname (string): TODO: type description here.
date_of_birth (string): TODO: type description here.
address (string): TODO: type description here.
zip_code (string): TODO: type description here.
city (string): TODO: type description here.
mobile (string): TODO: type description here.
phone (string): TODO: type description here.
gender (string): TODO: type description here.
raw_json (string): TODO: type description here.
request_id (string): TODO: type description here.
dead (datetime): TODO: type description here.
source (string): TODO: type description here.
"""
# Create a mapping from Model property names to API property names
_names = {
"firstname":'Firstname',
"middlename":'Middlename',
"lastname":'Lastname',
"date_of_birth":'DateOfBirth',
"address":'Address',
"zip_code":'ZipCode',
"city":'City',
"mobile":'Mobile',
"phone":'Phone',
"gender":'Gender',
"raw_json":'RawJson',
"request_id":'RequestId',
"dead":'Dead',
"source":'Source'
}
def __init__(self,
firstname=None,
middlename=None,
lastname=None,
date_of_birth=None,
address=None,
zip_code=None,
city=None,
mobile=None,
phone=None,
gender=None,
raw_json=None,
request_id=None,
dead=None,
source=None,
additional_properties = {}):
"""Constructor for the PersonPersonInformation class"""
# Initialize members of the class
self.firstname = firstname
self.middlename = middlename
self.lastname = lastname
self.date_of_birth = date_of_birth
self.address = address
self.zip_code = zip_code
self.city = city
self.mobile = mobile
self.phone = phone
self.gender = gender
self.raw_json = raw_json
self.request_id = request_id
self.dead = APIHelper.RFC3339DateTime(dead) if dead else None
self.source = source
# Add additional model properties to the instance
self.additional_properties = additional_properties
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
firstname = dictionary.get('Firstname')
middlename = dictionary.get('Middlename')
lastname = dictionary.get('Lastname')
date_of_birth = dictionary.get('DateOfBirth')
address = dictionary.get('Address')
zip_code = dictionary.get('ZipCode')
city = dictionary.get('City')
mobile = dictionary.get('Mobile')
phone = dictionary.get('Phone')
gender = dictionary.get('Gender')
raw_json = dictionary.get('RawJson')
request_id = dictionary.get('RequestId')
dead = APIHelper.RFC3339DateTime.from_value(dictionary.get("Dead")).datetime if dictionary.get("Dead") else None
source = dictionary.get('Source')
# Clean out expected properties from dictionary
for key in cls._names.values():
if key in dictionary:
del dictionary[key]
# Return an object of this model
return cls(firstname,
middlename,
lastname,
date_of_birth,
address,
zip_code,
city,
mobile,
phone,
gender,
raw_json,
request_id,
dead,
source,
dictionary)
# -*- coding: utf-8 -*-
# File: odym/modules/test/DSM_test_known_results.py
# Repo: DominikWiedenhofer/ODYM (MIT license)
"""
Created on Mon Aug 11 16:19:39 2014
"""
import os
import sys
import imp
# Put the location of the ODYM modules folder on the system path:
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..\\..')) + '\\modules') # add ODYM module directory to system path
#NOTE: The hidden variable __file__ must be known to the script for the directory structure to work.
# Therefore: When first using the model, run the entire script with F5 so that the __file__ variable can be created.
import dynamic_stock_model as dsm # remove and import the class manually if this unit test is run as standalone script
imp.reload(dsm)
import numpy as np
import unittest
###############################################################################
"""My Input for fixed lifetime"""
Time_T_FixedLT = np.arange(0,10)
Inflow_T_FixedLT = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
lifetime_FixedLT = {'Type': 'Fixed', 'Mean': np.array([5])}
lifetime_FixedLT0 = {'Type': 'Fixed', 'Mean': np.array([0])}
#lifetime_FixedLT = {'Type': 'Fixed', 'Mean': np.array([5,5,5,5,5,5,5,5,5,5])}
lifetime_NormLT = {'Type': 'Normal', 'Mean': np.array([5]), 'StdDev': np.array([1.5])}
lifetime_NormLT0 = {'Type': 'Normal', 'Mean': np.array([0]), 'StdDev': np.array([1.5])}
###############################################################################
"""My Output for fixed lifetime"""
Outflow_T_FixedLT = np.array([0, 0, 0, 0, 0, 1, 2, 3, 4, 5])
Outflow_TC_FixedLT = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 3, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 4, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 5, 0, 0, 0, 0, 0]])
Stock_T_FixedLT = np.array([1, 3, 6, 10, 15, 20, 25, 30, 35, 40])
StockChange_T_FixedLT = np.array([1, 2, 3, 4, 5, 5, 5, 5, 5, 5])
Stock_TC_FixedLT = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 2, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 2, 3, 0, 0, 0, 0, 0, 0, 0],
[1, 2, 3, 4, 0, 0, 0, 0, 0, 0],
[1, 2, 3, 4, 5, 0, 0, 0, 0, 0],
[0, 2, 3, 4, 5, 6, 0, 0, 0, 0],
[0, 0, 3, 4, 5, 6, 7, 0, 0, 0],
[0, 0, 0, 4, 5, 6, 7, 8, 0, 0],
[0, 0, 0, 0, 5, 6, 7, 8, 9, 0],
[0, 0, 0, 0, 0, 6, 7, 8, 9, 10]])
Bal = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
"""My Output for normally distributed lifetime"""
Stock_TC_NormLT = np.array([[ 9.99570940e-01, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 9.96169619e-01, 1.99914188e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 9.77249868e-01, 1.99233924e+00, 2.99871282e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 9.08788780e-01, 1.95449974e+00, 2.98850886e+00,
3.99828376e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 7.47507462e-01, 1.81757756e+00, 2.93174960e+00,
3.98467848e+00, 4.99785470e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 5.00000000e-01, 1.49501492e+00, 2.72636634e+00,
3.90899947e+00, 4.98084810e+00, 5.99742564e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 2.52492538e-01, 1.00000000e+00, 2.24252239e+00,
3.63515512e+00, 4.88624934e+00, 5.97701772e+00,
6.99699658e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 9.12112197e-02, 5.04985075e-01, 1.50000000e+00,
2.99002985e+00, 4.54394390e+00, 5.86349921e+00,
6.97318734e+00, 7.99656752e+00, 0.00000000e+00,
0.00000000e+00],
[ 2.27501319e-02, 1.82422439e-01, 7.57477613e-01,
2.00000000e+00, 3.73753731e+00, 5.45273268e+00,
6.84074908e+00, 7.96935696e+00, 8.99613846e+00,
0.00000000e+00],
[ 3.83038057e-03, 4.55002639e-02, 2.73633659e-01,
1.00997015e+00, 2.50000000e+00, 4.48504477e+00,
6.36152146e+00, 7.81799894e+00, 8.96552657e+00,
9.99570940e+00]])
Stock_T_NormLT = np.array([ 0.99957094, 2.9953115 , 5.96830193, 9.85008113,
14.4793678 , 19.60865447, 24.99043368, 30.46342411,
35.95916467, 41.45873561])
Outflow_T_NormLT = np.array([ 4.29060333e-04, 4.25944090e-03, 2.70095728e-02,
1.18220793e-01, 3.70713330e-01, 8.70713330e-01,
1.61822079e+00, 2.52700957e+00, 3.50425944e+00,
4.50042906e+00])
Outflow_TC_NormLT = np.array([[ 4.29060333e-04, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00],
[ 3.40132023e-03, 8.58120666e-04, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00],
[ 1.89197514e-02, 6.80264047e-03, 1.28718100e-03,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00],
[ 6.84610878e-02, 3.78395028e-02, 1.02039607e-02,
1.71624133e-03, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00],
[ 1.61281318e-01, 1.36922176e-01, 5.67592541e-02,
1.36052809e-02, 2.14530167e-03, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00],
[ 2.47507462e-01, 3.22562636e-01, 2.05383263e-01,
7.56790055e-02, 1.70066012e-02, 2.57436200e-03,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00],
[ 2.47507462e-01, 4.95014925e-01, 4.83843953e-01,
2.73844351e-01, 9.45987569e-02, 2.04079214e-02,
3.00342233e-03, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00],
[ 1.61281318e-01, 4.95014925e-01, 7.42522387e-01,
6.45125271e-01, 3.42305439e-01, 1.13518508e-01,
2.38092416e-02, 3.43248267e-03, -0.00000000e+00,
-0.00000000e+00],
[ 6.84610878e-02, 3.22562636e-01, 7.42522387e-01,
9.90029850e-01, 8.06406589e-01, 4.10766527e-01,
1.32438260e-01, 2.72105619e-02, 3.86154300e-03,
-0.00000000e+00],
[ 1.89197514e-02, 1.36922176e-01, 4.83843953e-01,
9.90029850e-01, 1.23753731e+00, 9.67687907e-01,
4.79227614e-01, 1.51358011e-01, 3.06118821e-02,
4.29060333e-03]])
StockChange_T_NormLT = np.array([ 0.99957094, 1.99574056, 2.97299043, 3.88177921, 4.62928667,
5.12928667, 5.38177921, 5.47299043, 5.49574056, 5.49957094])
"""My Output for Weibull-distributed lifetime"""
Stock_TC_WeibullLT = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0], # computed with Excel and taken from there
[0.367879441, 2, 0, 0, 0, 0, 0, 0, 0, 0],
[0.100520187, 0.735758882, 3, 0, 0, 0, 0, 0, 0, 0],
[0.023820879, 0.201040373, 1.103638324, 4, 0, 0, 0, 0, 0, 0],
[0.005102464, 0.047641758, 0.30156056, 1.471517765,5, 0, 0, 0, 0, 0],
[0.001009149, 0.010204929, 0.071462637, 0.402080746,1.839397206, 6, 0, 0, 0, 0],
[0.000186736, 0.002018297, 0.015307393, 0.095283516, 0.502600933, 2.207276647, 7, 0, 0, 0],
[3.26256E-05, 0.000373472, 0.003027446, 0.020409858, 0.119104394, 0.60312112, 2.575156088, 8, 0, 0],
[5.41828E-06, 6.52513E-05, 0.000560208, 0.004036594, 0.025512322, 0.142925273, 0.703641306, 2.943035529, 9, 0],
[8.59762E-07, 1.08366E-05, 9.78769E-05, 0.000746944, 0.005045743, 0.030614786, 0.166746152, 0.804161493, 3.310914971, 10]])
Stock_T_WeibullLT = np.array([1,2.367879441,3.836279069,5.328499576,6.825822547,8.324154666,9.822673522,11.321225,12.8197819,14.31833966])
Outflow_T_WeibullLT = np.array([0,0.632120559,1.531600372,2.507779493,3.502677029,4.50166788,5.501481144,6.501448519,7.5014431,8.501442241])
Outflow_TC_WeibullLT = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0.632120559, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0.267359255, 1.264241118, 0, 0, 0, 0, 0, 0, 0, 0],
[0.076699308, 0.534718509, 1.896361676, 0, 0, 0, 0, 0, 0, 0],
[0.018718414, 0.153398615, 0.802077764, 2.528482235, 0, 0, 0, 0, 0, 0],
[0.004093316, 0.037436829, 0.230097923, 1.069437018, 3.160602794, 0, 0, 0, 0, 0],
[0.000822413, 0.008186632, 0.056155243, 0.306797231, 1.336796273, 3.792723353, 0, 0, 0, 0],
[0.00015411, 0.001644825, 0.012279947, 0.074873658, 0.383496539, 1.604155527, 4.424843912, 0, 0, 0],
[2.72074E-05, 0.000308221, 0.002467238, 0.016373263, 0.093592072, 0.460195846, 1.871514782, 5.056964471, 0, 0],
[4.55852E-06, 5.44147E-05 , 0.000462331 , 0.00328965, 0.020466579, 0.112310487, 0.536895154, 2.138874037, 5.689085029, 0]])
StockChange_T_WeibullLT = np.array([1,1.367879441,1.468399628,1.492220507,1.497322971,1.49833212,1.498518856,1.498551481,1.4985569,1.498557759])
lifetime_WeibullLT = {'Type': 'Weibull', 'Shape': np.array([1.2]), 'Scale': np.array([1])}
InitialStock_WB = np.array([0.01, 0.01, 0.08, 0.2, 0.2, 2, 2, 3, 4, 7.50])
Inflow_WB = np.array([11631.1250671964, 1845.6048709861, 2452.0593141014, 1071.0305279511, 198.1868742385, 391.9674590243, 83.9599583940, 29.8447516023, 10.8731273138, 7.5000000000])
# We need 10 digits AFTER the . to get a 9 digits after the . overlap with np.testing.
# The total number of counting digits is higher, because there are up to 5 digits before the .
# For the stock-driven model with initial stock, colculated with Excel
Sc_InitialStock_2_Ref = np.array([[ 3.29968072, 0. , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ],
[ 3.28845263, 5.1142035 , 0. , 0. , 0. ,
0. , 0. , 0. , 0. ],
[ 3.2259967 , 5.09680099, 2.0068288 , 0. , 0. ,
0. , 0. , 0. , 0. ],
[ 3. , 5. , 2. , 4. , 0. ,
0. , 0. , 0. , 0. ],
[ 2.46759471, 4.64972578, 1.962015 , 3.98638888, 4.93427563,
0. , 0. , 0. , 0. ],
[ 1.65054855, 3.82454624, 1.82456634, 3.91067739, 4.91748538,
3.8721761 , 0. , 0. , 0. ],
[ 0.83350238, 2.55819937, 1.50076342, 3.63671549, 4.82409004,
3.85899993, 2.78772936, 0. , 0. ],
[ 0.30109709, 1.2918525 , 1.00384511, 2.9913133 , 4.48613916,
3.78570788, 2.77824333, 3.36180162, 0. ],
[ 0.07510039, 0.46667297, 0.5069268 , 2.00085849, 3.68999109,
3.5205007 , 2.72547754, 3.35036215, 3.66410986]])
Sc_InitialStock_2_Ref_Sum = np.array([ 3.29968072, 8.40265614, 10.32962649, 14. ,
18. , 20. , 20. , 20. , 20. ])
Oc_InitialStock_2_Ref = np.array([[ 1.41636982e-03, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 1.12280883e-02, 2.19524375e-03, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00],
[ 6.24559363e-02, 1.74025106e-02, 8.61420234e-04,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00],
[ 2.25996698e-01, 9.68009922e-02, 6.82879736e-03,
1.71697802e-03, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00],
[ 5.32405289e-01, 3.50274224e-01, 3.79849998e-02,
1.36111209e-02, 2.11801070e-03, -0.00000000e+00,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00],
[ 8.17046165e-01, 8.25179532e-01, 1.37448656e-01,
7.57114903e-02, 1.67902556e-02, 1.66211031e-03,
-0.00000000e+00, -0.00000000e+00, -0.00000000e+00],
[ 8.17046165e-01, 1.26634687e+00, 3.23802924e-01,
2.73961897e-01, 9.33953405e-02, 1.31761643e-02,
1.19661751e-03, -0.00000000e+00, -0.00000000e+00],
[ 5.32405289e-01, 1.26634687e+00, 4.96918311e-01,
6.45402188e-01, 3.37950879e-01, 7.32920558e-02,
9.48603036e-03, 1.44303487e-03, -0.00000000e+00],
[ 2.25996698e-01, 8.25179532e-01, 4.96918311e-01,
9.90454815e-01, 7.96148072e-01, 2.65207178e-01,
5.27657861e-02, 1.14394721e-02, 1.57279902e-03]])
I_InitialStock_2_Ref = np.array([ 3.30109709, 5.11639875, 2.00769022, 4.00171698, 4.93639364, 3.87383821, 2.78892598, 3.36324466, 3.66568266])
""" Test case with fixed lifetime for initial stock"""
Time_T_FixedLT_X = np.arange(1, 9, 1)
lifetime_FixedLT_X = {'Type': 'Fixed', 'Mean': np.array([5])}
InitialStock_X = np.array([0, 0, 0, 7, 5, 4, 3, 2])
Inflow_X = np.array([0, 0, 0, 7, 5, 4, 3, 2])
Time_T_FixedLT_XX = np.arange(1, 11, 1)
lifetime_NormLT_X = {'Type': 'Normal', 'Mean': np.array([5]), 'StdDev': np.array([1.5])}
InitialStock_XX = np.array([0.01, 0.01, 0.08, 0.2, 0.2, 2, 2, 3, 4, 7.50])
Inflow_XX = np.array([ 2.61070664, 0.43955789, 0.87708508, 0.79210262, 0.4,
2.67555857, 2.20073139, 3.06983925, 4.01538044, 7.50321933])
""" Test case with normally distributed lifetime for initial stock and stock-driven model"""
Time_T_FixedLT_2 = np.arange(1, 10, 1)
lifetime_NormLT_2 = {'Type': 'Normal', 'Mean': np.array([5]), 'StdDev': np.array([1.5])}
InitialStock_2 = np.array([3,5,2,4])
FutureStock_2 = np.array([0,0,0,0,18,20,20,20,20])
ThisSwitchTime = 5 # First year with future stock curve, start counting from 1.
Inflow_2 = np.array([3.541625588, 5.227890554,2.01531097,4])
###############################################################################
"""Create Dynamic Stock Models and hand over the pre-defined values."""
# For zero lifetime: border case
myDSM0 = dsm.DynamicStockModel(t=Time_T_FixedLT, i=Inflow_T_FixedLT, lt=lifetime_FixedLT0)
# For fixed LT
myDSM = dsm.DynamicStockModel(t=Time_T_FixedLT, i=Inflow_T_FixedLT, lt=lifetime_FixedLT)
myDSM2 = dsm.DynamicStockModel(t=Time_T_FixedLT, s=Stock_T_FixedLT, lt=lifetime_FixedLT)
myDSMx = dsm.DynamicStockModel(t=Time_T_FixedLT_X, lt=lifetime_FixedLT_X)
TestInflow_X = myDSMx.compute_i_from_s(InitialStock=InitialStock_X)
myDSMxy = dsm.DynamicStockModel(t=Time_T_FixedLT_X, i=TestInflow_X, lt=lifetime_FixedLT_X)
# For zero normally distributed lifetime: border case
myDSM0n = dsm.DynamicStockModel(t=Time_T_FixedLT, i=Inflow_T_FixedLT, lt=lifetime_NormLT0)
# For normally distributed Lt
myDSM3 = dsm.DynamicStockModel(t=Time_T_FixedLT, i=Inflow_T_FixedLT, lt=lifetime_NormLT)
myDSM4 = dsm.DynamicStockModel(t=Time_T_FixedLT, s=Stock_T_NormLT, lt=lifetime_NormLT)
myDSMX = dsm.DynamicStockModel(t=Time_T_FixedLT_XX, lt=lifetime_NormLT_X)
TestInflow_XX = myDSMX.compute_i_from_s(InitialStock=InitialStock_XX)
myDSMXY = dsm.DynamicStockModel(t=Time_T_FixedLT_XX, i=TestInflow_XX, lt=lifetime_NormLT_X)
# Test compute_stock_driven_model_initialstock:
TestDSM_InitialStock = dsm.DynamicStockModel(t=Time_T_FixedLT_2, s=FutureStock_2, lt=lifetime_NormLT_2)
Sc_InitialStock_2, Oc_InitialStock_2, I_InitialStock_2 = TestDSM_InitialStock.compute_stock_driven_model_initialstock(InitialStock=InitialStock_2, SwitchTime=ThisSwitchTime)
# Compute stock back from inflow
TestDSM_InitialStock_Verify = dsm.DynamicStockModel(t=Time_T_FixedLT_2, i=I_InitialStock_2, lt=lifetime_NormLT_2)
Sc_Stock_2 = TestDSM_InitialStock_Verify.compute_s_c_inflow_driven()
Sc_Stock_2_Sum = Sc_Stock_2.sum(axis=1)
Sc_Stock_Sum = TestDSM_InitialStock_Verify.compute_stock_total()
Sc_Outflow_t_c = TestDSM_InitialStock_Verify.compute_o_c_from_s_c()
# For Weibull-distributed Lt
myDSMWB1 = dsm.DynamicStockModel(t=Time_T_FixedLT, i=Inflow_T_FixedLT, lt=lifetime_WeibullLT)
myDSMWB2 = dsm.DynamicStockModel(t=Time_T_FixedLT, s=Stock_T_WeibullLT, lt=lifetime_WeibullLT)
myDSMWB3 = dsm.DynamicStockModel(t=Time_T_FixedLT_XX, lt=lifetime_WeibullLT)
TestInflow_WB = myDSMWB3.compute_i_from_s(InitialStock=InitialStock_XX)
myDSMWB4 = dsm.DynamicStockModel(t=Time_T_FixedLT_XX, i=TestInflow_WB, lt=lifetime_WeibullLT)
# Compute full stock model in correct order
###############################################################################
"""Unit Test Class"""
class KnownResultsTestCase(unittest.TestCase):
def test_inflow_driven_model_fixedLifetime_0(self):
"""Test Inflow Driven Model with Fixed product lifetime of 0."""
np.testing.assert_array_equal(myDSM0.compute_s_c_inflow_driven(), np.zeros(Stock_TC_FixedLT.shape))
np.testing.assert_array_equal(myDSM0.compute_stock_total(), np.zeros((Stock_TC_FixedLT.shape[0])))
np.testing.assert_array_equal(myDSM0.compute_stock_change(), np.zeros((Stock_TC_FixedLT.shape[0])))
np.testing.assert_array_equal(myDSM0.compute_outflow_mb(), Inflow_T_FixedLT)
np.testing.assert_array_equal(myDSM0.check_stock_balance(), Bal.transpose())
def test_inflow_driven_model_fixedLifetime(self):
"""Test Inflow Driven Model with Fixed product lifetime."""
np.testing.assert_array_equal(myDSM.compute_s_c_inflow_driven(), Stock_TC_FixedLT)
np.testing.assert_array_equal(myDSM.compute_stock_total(),Stock_T_FixedLT)
np.testing.assert_array_equal(myDSM.compute_o_c_from_s_c(), Outflow_TC_FixedLT)
np.testing.assert_array_equal(myDSM.compute_outflow_total(), Outflow_T_FixedLT)
np.testing.assert_array_equal(myDSM.compute_stock_change(), StockChange_T_FixedLT)
np.testing.assert_array_equal(myDSM.check_stock_balance(), Bal.transpose())
def test_stock_driven_model_fixedLifetime(self):
"""Test Stock Driven Model with Fixed product lifetime."""
np.testing.assert_array_equal(myDSM2.compute_stock_driven_model()[0], Stock_TC_FixedLT)
np.testing.assert_array_equal(myDSM2.compute_stock_driven_model()[1], Outflow_TC_FixedLT)
np.testing.assert_array_equal(myDSM2.compute_stock_driven_model()[2], Inflow_T_FixedLT)
np.testing.assert_array_equal(myDSM2.compute_outflow_total(), Outflow_T_FixedLT)
np.testing.assert_array_equal(myDSM2.compute_stock_change(), StockChange_T_FixedLT)
np.testing.assert_array_equal(myDSM2.check_stock_balance(), Bal.transpose())
    def test_inflow_driven_model_normallyDistrLifetime_0(self):
        """Test Inflow Driven Model with normally distributed product lifetime of 0."""
        np.testing.assert_array_equal(myDSM0n.compute_s_c_inflow_driven(), np.zeros(Stock_TC_FixedLT.shape))
        np.testing.assert_array_equal(myDSM0n.compute_stock_total(), np.zeros((Stock_TC_FixedLT.shape[0])))
        np.testing.assert_array_equal(myDSM0n.compute_stock_change(), np.zeros((Stock_TC_FixedLT.shape[0])))
        np.testing.assert_array_equal(myDSM0n.compute_outflow_mb(), Inflow_T_FixedLT)
        np.testing.assert_array_equal(myDSM0n.check_stock_balance(), Bal.transpose())
def test_inflow_driven_model_normallyDistLifetime(self):
"""Test Inflow Driven Model with normally distributed product lifetime."""
np.testing.assert_array_almost_equal(myDSM3.compute_s_c_inflow_driven(), Stock_TC_NormLT, 8)
np.testing.assert_array_almost_equal(myDSM3.compute_stock_total(), Stock_T_NormLT, 8)
np.testing.assert_array_almost_equal(myDSM3.compute_o_c_from_s_c(), Outflow_TC_NormLT, 8)
np.testing.assert_array_almost_equal(myDSM3.compute_outflow_total(), Outflow_T_NormLT, 8)
np.testing.assert_array_almost_equal(myDSM3.compute_stock_change(), StockChange_T_NormLT, 8)
np.testing.assert_array_almost_equal(myDSM3.check_stock_balance(), Bal.transpose(), 12)
def test_stock_driven_model_normallyDistLifetime(self):
"""Test Stock Driven Model with normally distributed product lifetime."""
np.testing.assert_array_almost_equal(
myDSM4.compute_stock_driven_model()[0], Stock_TC_NormLT, 8)
np.testing.assert_array_almost_equal(
myDSM4.compute_stock_driven_model()[1], Outflow_TC_NormLT, 8)
np.testing.assert_array_almost_equal(
myDSM4.compute_stock_driven_model()[2], Inflow_T_FixedLT, 8)
np.testing.assert_array_almost_equal(myDSM4.compute_outflow_total(), Outflow_T_NormLT, 8)
np.testing.assert_array_almost_equal(
myDSM4.compute_stock_change(), StockChange_T_NormLT, 8)
np.testing.assert_array_almost_equal(myDSM4.check_stock_balance(), Bal.transpose(), 12)
def test_inflow_driven_model_WeibullDistLifetime(self):
"""Test Inflow Driven Model with Weibull-distributed product lifetime."""
np.testing.assert_array_almost_equal(
myDSMWB1.compute_s_c_inflow_driven(), Stock_TC_WeibullLT, 9)
np.testing.assert_array_almost_equal(myDSMWB1.compute_stock_total(), Stock_T_WeibullLT, 8)
np.testing.assert_array_almost_equal(myDSMWB1.compute_o_c_from_s_c(), Outflow_TC_WeibullLT, 9)
np.testing.assert_array_almost_equal(myDSMWB1.compute_outflow_total(), Outflow_T_WeibullLT, 9)
np.testing.assert_array_almost_equal(
myDSMWB1.compute_stock_change(), StockChange_T_WeibullLT, 9)
np.testing.assert_array_almost_equal(myDSMWB1.check_stock_balance(), Bal.transpose(), 12)
    def test_stock_driven_model_WeibullDistLifetime(self):
        """Test Stock Driven Model with Weibull-distributed product lifetime."""
        # myDSMWB2 is the stock-driven Weibull model (constructed with s=Stock_T_WeibullLT)
        np.testing.assert_array_almost_equal(
            myDSMWB2.compute_stock_driven_model()[0], Stock_TC_WeibullLT, 8)
        np.testing.assert_array_almost_equal(
            myDSMWB2.compute_stock_driven_model()[1], Outflow_TC_WeibullLT, 8)
        np.testing.assert_array_almost_equal(
            myDSMWB2.compute_stock_driven_model()[2], Inflow_T_FixedLT, 8)
        np.testing.assert_array_almost_equal(myDSMWB2.compute_outflow_total(), Outflow_T_WeibullLT, 9)
        np.testing.assert_array_almost_equal(
            myDSMWB2.compute_stock_change(), StockChange_T_WeibullLT, 8)
        np.testing.assert_array_almost_equal(myDSMWB2.check_stock_balance(), Bal.transpose(), 12)
def test_inflow_from_stock_fixedLifetime(self):
"""Test computation of inflow from stock with Fixed product lifetime."""
np.testing.assert_array_equal(TestInflow_X, Inflow_X)
np.testing.assert_array_equal(myDSMxy.compute_s_c_inflow_driven()[-1, :], InitialStock_X)
def test_inflow_from_stock_normallyDistLifetime(self):
"""Test computation of inflow from stock with normally distributed product lifetime."""
np.testing.assert_array_almost_equal(TestInflow_XX, Inflow_XX, 8)
np.testing.assert_array_almost_equal(myDSMXY.compute_s_c_inflow_driven()[-1, :], InitialStock_XX, 9)
def test_inflow_from_stock_WeibullDistLifetime(self):
"""Test computation of inflow from stock with Weibull-distributed product lifetime."""
np.testing.assert_array_almost_equal(TestInflow_WB, Inflow_WB, 9)
np.testing.assert_array_almost_equal(myDSMWB4.compute_s_c_inflow_driven()[-1, :], InitialStock_WB, 9)
def test_compute_stock_driven_model_initialstock(self):
"""Test stock-driven model with initial stock given."""
np.testing.assert_array_almost_equal(I_InitialStock_2, I_InitialStock_2_Ref, 8)
np.testing.assert_array_almost_equal(Sc_InitialStock_2, Sc_InitialStock_2_Ref, 8)
        np.testing.assert_array_almost_equal(Sc_InitialStock_2.sum(axis=1), Sc_InitialStock_2_Ref_Sum, 8)
np.testing.assert_array_almost_equal(Oc_InitialStock_2, Oc_InitialStock_2_Ref, 8)
if __name__ == '__main__':
unittest.main()
# File: HLTrigger/Configuration/python/HLT_75e33/paths/HLT_DoublePFPuppiJets128_DoublePFPuppiBTagDeepCSV_2p4_cfi.py
# Repo: PKUfudawei/cmssw | License: Apache-2.0
import FWCore.ParameterSet.Config as cms
from ..modules.hltBTagPFPuppiDeepCSV0p865DoubleEta2p4_cfi import *
from ..modules.hltDoublePFPuppiJets128Eta2p4MaxDeta1p6_cfi import *
from ..modules.hltDoublePFPuppiJets128MaxEta2p4_cfi import *
from ..modules.l1tDoublePFPuppiJet112offMaxEta2p4_cfi import *
from ..modules.l1tDoublePFPuppiJets112offMaxDeta1p6_cfi import *
from ..sequences.HLTAK4PFPuppiJetsReconstruction_cfi import *
from ..sequences.HLTBeginSequence_cfi import *
from ..sequences.HLTBtagDeepCSVSequencePFPuppiModEta2p4_cfi import *
from ..sequences.HLTEndSequence_cfi import *
from ..sequences.HLTParticleFlowSequence_cfi import *
HLT_DoublePFPuppiJets128_DoublePFPuppiBTagDeepCSV_2p4 = cms.Path(
HLTBeginSequence +
l1tDoublePFPuppiJet112offMaxEta2p4 +
l1tDoublePFPuppiJets112offMaxDeta1p6 +
HLTParticleFlowSequence +
HLTAK4PFPuppiJetsReconstruction +
hltDoublePFPuppiJets128MaxEta2p4 +
hltDoublePFPuppiJets128Eta2p4MaxDeta1p6 +
HLTBtagDeepCSVSequencePFPuppiModEta2p4 +
hltBTagPFPuppiDeepCSV0p865DoubleEta2p4 +
HLTEndSequence
)
# File: alertmanager_telegram/config.py
# Repo: medeirosjrm/alertmanager-telegram | License: Apache-2.0
import os
TELEGRAM_CHAT_ID = os.environ.get("TELEGRAM_CHAT_ID")
if not TELEGRAM_CHAT_ID:
raise ValueError("No TELEGRAM_CHAT_ID set for application")
TELEGRAM_TOKEN = os.environ.get("TELEGRAM_TOKEN")
if not TELEGRAM_TOKEN:
raise ValueError("No TELEGRAM_TOKEN set for application")
TEMPLATES_AUTO_RELOAD = True
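The two lookup-and-validate blocks above repeat the same pattern; it could be factored into a small helper. This is only a sketch — `require_env` is not part of the original module:

```python
import os

def require_env(name):
    """Return the value of an environment variable, or fail loudly if unset."""
    value = os.environ.get(name)
    if not value:
        raise ValueError("No %s set for application" % name)
    return value

# Hypothetical usage replacing the blocks above:
# TELEGRAM_CHAT_ID = require_env("TELEGRAM_CHAT_ID")
# TELEGRAM_TOKEN = require_env("TELEGRAM_TOKEN")
```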
# File: appexemple/__main__.py
# Repo: yoannmos/Inupdater-AppExemple | License: MIT
import sys
from pathlib import Path
from appexemple import __version__
print(
f"""
Hello you are in App Exemple version {__version__}\n
sys.argv[-1] : {sys.argv[-1]}\n
Path().cwd() : {Path().cwd()}\n
Path(__file__) : {Path(__file__)},\n
"""
)
input("Press [Enter] to quit.")
# File: venv/lib/python3.7/site-packages/webdriver_manager/microsoft.py
# Repo: wayshon/pylogin | License: Apache-2.0
from webdriver_manager.driver import EdgeDriver, IEDriver
from webdriver_manager.manager import DriverManager
from webdriver_manager import utils
class EdgeDriverManager(DriverManager):
def __init__(self, version=None,
os_type=utils.os_name()):
super(EdgeDriverManager, self).__init__()
self.driver = EdgeDriver(version=version,
os_type=os_type)
def install(self, path=None):
# type: () -> str
return self._file_manager.download_binary(self.driver, path).path
class IEDriverManager(DriverManager):
def __init__(self, version=None, os_type=utils.os_type()):
super(IEDriverManager, self).__init__()
self.driver = IEDriver(version=version, os_type=os_type)
def install(self, path=None):
# type: () -> str
return self._file_manager.download_driver(self.driver, path).path
# File: Python/sir_cost.py
# Repo: Wasim5620/SIRmodel | License: MIT
# cost function for the SIR model for python 2.7
# Marisa Eisenberg (marisae@umich.edu)
# Yu-Han Kao (kaoyh@umich.edu) -7-9-17
import numpy as np
import sir_ode
from scipy.stats import poisson
from scipy.stats import norm
from scipy.integrate import odeint as ode
def NLL(params, data, times): #negative log likelihood
params = np.abs(params)
data = np.array(data)
res = ode(sir_ode.model, sir_ode.x0fcn(params,data), times, args=(params,))
y = sir_ode.yfcn(res, params)
nll = sum(y) - sum(data*np.log(y))
# note this is a slightly shortened version--there's an additive constant term missing but it
# makes calculation faster and won't alter the threshold. Alternatively, can do:
# nll = -sum(np.log(poisson.pmf(np.round(data),np.round(y)))) # the round is b/c Poisson is for (integer) count data
# this can also barf if data and y are too far apart because the dpois will be ~0, which makes the log angry
# ML using normally distributed measurement error (least squares)
# nll = -sum(np.log(norm.pdf(data,y,0.1*np.mean(data)))) # example WLS assuming sigma = 0.1*mean(data)
# nll = sum((y - data)**2) # alternatively can do OLS but note this will mess with the thresholds
# for the profile! This version of OLS is off by a scaling factor from
# actual LL units.
return nll
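The comment in `NLL` claims the shortened Poisson likelihood drops only an additive constant (the `log(data!)` term, which does not depend on the model output). That claim can be checked self-contained with toy numbers, using plain `math` instead of scipy; this check is not part of the original script:

```python
import math

def nll_short(data, y):
    # sum(y) - sum(data * log(y)), as in NLL above
    return sum(y) - sum(d * math.log(v) for d, v in zip(data, y))

def nll_full(data, y):
    # -sum(log Poisson pmf) = sum(y - d*log(y) + log(d!)), with log(d!) = lgamma(d + 1)
    return sum(v - d * math.log(v) + math.lgamma(d + 1) for d, v in zip(data, y))

data = [3.0, 5.0, 2.0]
const = sum(math.lgamma(d + 1) for d in data)
for y in ([2.5, 4.5, 2.2], [3.1, 5.2, 1.8]):
    # the difference is the same additive constant for any model output y,
    # so minima and likelihood-ratio thresholds are unaffected
    assert abs((nll_full(data, y) - nll_short(data, y)) - const) < 1e-12
```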
# File: fhirbug/constants.py
# Repo: VerdantAI/fhirbug | License: MIT
# Audit Event Outcomes
AUDIT_SUCCESS = "0"
AUDIT_MINOR_FAILURE = "4"
AUDIT_SERIOUS_FAILURE = "8"
AUDIT_MAJOR_FAILURE = "12"
# File: task_queue/management/commands/run_scheduler.py
# Repo: 2600box/harvest | License: Apache-2.0
import asyncio
from django.core.management.base import BaseCommand
from Harvest.utils import get_logger
from task_queue.scheduler import QueueScheduler
logger = get_logger(__name__)
class Command(BaseCommand):
help = "Run the queue consumer"
def handle(self, *args, **options):
QueueScheduler().run()
# File: examples/example2.py
# Repo: alenaizan/resp | License: BSD-3-Clause
import psi4
import resp
# Initialize two different conformations of ethanol
geometry = """C 0.00000000 0.00000000 0.00000000
C 1.48805540 -0.00728176 0.39653260
O 2.04971655 1.37648153 0.25604810
H 3.06429978 1.37151670 0.52641124
H 1.58679428 -0.33618761 1.43102358
H 2.03441010 -0.68906454 -0.25521028
H -0.40814044 -1.00553466 0.10208540
H -0.54635470 0.68178278 0.65174288
H -0.09873888 0.32890585 -1.03449097
"""
mol1 = psi4.geometry(geometry)
mol1.update_geometry()
mol1.set_name('conformer1')
geometry = """C 0.00000000 0.00000000 0.00000000
C 1.48013500 -0.00724300 0.39442200
O 2.00696300 1.29224100 0.26232800
H 2.91547900 1.25572900 0.50972300
H 1.61500700 -0.32678000 1.45587700
H 2.07197500 -0.68695100 -0.26493400
H -0.32500012 1.02293415 -0.30034094
H -0.18892141 -0.68463906 -0.85893815
H -0.64257065 -0.32709111 0.84987482
"""
mol2 = psi4.geometry(geometry)
mol2.update_geometry()
mol2.set_name('conformer2')
molecules = [mol1, mol2]
# Specify options
options = {'VDW_SCALE_FACTORS' : [1.4, 1.6, 1.8, 2.0],
'VDW_POINT_DENSITY' : 1.0,
'RESP_A' : 0.0005,
'RESP_B' : 0.1,
'RESTRAINT' : True,
'IHFREE' : False,
'WEIGHT' : [1, 1],
}
# Call for first stage fit
charges1 = resp.resp(molecules, options)
print("Restrained Electrostatic Potential Charges")
print(charges1[1])
options['RESP_A'] = 0.001
resp.set_stage2_constraint(molecules[0], charges1[1], options)
# Add constraint for atoms fixed in second stage fit
options['grid'] = []
options['esp'] = []
for mol in range(len(molecules)):
options['grid'].append('%i_%s_grid.dat' %(mol+1, molecules[mol].name()))
options['esp'].append('%i_%s_grid_esp.dat' %(mol+1, molecules[mol].name()))
# Call for second stage fit
charges2 = resp.resp(molecules, options)
print("\nStage Two\n")
print("RESP Charges")
print(charges2[1])
# File: bot/audio_trial2.py
# Repo: Nova-Striker/discord-bot | License: Apache-2.0
## incomplete YouTube tutorial
##import discord
##import json
##import asyncio
##import youtube_dl
##import shell
##import os
##from discord.utils import get
##from discord.ext import commands
##
##@client.command(pass_context=True)
##async def join(ctx):
## global voice
## channel=ctx.message.author.voice.channel
## voice=get(client.voice_clients,guild=ctx.guild)
##
## if voice and voice.is_connected():
## await voice.move_to(channel)
## else:
## voice=await channel.connect()
## await ctx.send(f"Joined {channel}")
##
##@client.command(pass_context=True)
##async def leave(ctx):
## channel=ctx.message.author.voice.channel
## voice=get(client.voice_clients,guild=ctx.guild)
##
## if voice and voice.is_connected():
## await voice.disconnect()
## await ctx.send(f"Left {channel}")
##
##@client.command(pass_context=True,aliases=["p"])
##async def play(ctx,url:str):
## def check_queue():
## Queue_infile=os.path.isdir("./Queue")
## if Queue_infile is True:
## DIR =os.path.abspath(os.path.realpath("Queue"))
## length=len(os.
##
# File: algs4/max_pq.py
# Repo: dumpmemory/algs4-py | License: MIT
class MaxPQ:
def __init__(self):
self.pq = []
def insert(self, v):
self.pq.append(v)
self.swim(len(self.pq) - 1)
def max(self):
return self.pq[0]
def del_max(self, ):
m = self.pq[0]
self.pq[0], self.pq[-1] = self.pq[-1], self.pq[0]
self.pq = self.pq[:-1]
self.sink(0)
return m
def is_empty(self, ):
return not self.pq
def size(self, ):
return len(self.pq)
    def swim(self, k):
        while k > 0 and self.pq[(k - 1) // 2] < self.pq[k]:
            # exchange k with its parent at (k - 1) // 2 (0-indexed heap)
            self.pq[k], self.pq[(k - 1) // 2] = self.pq[(k - 1) // 2], self.pq[k]
            k = (k - 1) // 2
def sink(self, k):
N = len(self.pq)
while 2 * k + 1 <= N - 1:
j = 2 * k + 1
if j < N - 1 and self.pq[j] < self.pq[j + 1]:
j += 1
if self.pq[k] > self.pq[j]:
break
self.pq[k], self.pq[j] = self.pq[j], self.pq[k]
k = j
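For reference, the behaviour `MaxPQ` aims for can be cross-checked against the standard library's `heapq`, which provides a min-heap (values are negated to get max-first ordering). This is an independent sanity-check sketch, not part of the `algs4` package:

```python
import heapq

class HeapqMaxPQ:
    """Max-priority queue built on heapq's min-heap via negation."""

    def __init__(self):
        self._h = []

    def insert(self, v):
        heapq.heappush(self._h, -v)

    def max(self):
        return -self._h[0]

    def del_max(self):
        return -heapq.heappop(self._h)

    def is_empty(self):
        return not self._h

    def size(self):
        return len(self._h)

pq = HeapqMaxPQ()
for v in [3, 1, 4, 1, 5, 9, 2, 6]:
    pq.insert(v)
drained = [pq.del_max() for _ in range(pq.size())]
# drained == [9, 6, 5, 4, 3, 2, 1, 1] — elements come out in descending order
```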
# File: tests/test_python.py
# Repo: sfeltman/libsmf | License: BSD-2-Clause
import os
import unittest
import tempfile
from gi.repository import Smf
class Test(unittest.TestCase):
def setUp(self):
self.path = os.path.dirname(__file__)
def compare_smf_files(self, a, b):
self.assertEqual(a.format, b.format)
self.assertEqual(a.ppqn, b.ppqn)
self.assertEqual(a.frames_per_second, b.frames_per_second)
self.assertEqual(a.resolution, b.resolution)
self.assertEqual(a.number_of_tracks, b.number_of_tracks)
self.assertEqual(len(a.tracks_array), len(b.tracks_array))
for i in range(a.number_of_tracks):
tracka = a.tracks_array[i]
trackb = b.tracks_array[i]
self.assertEqual(tracka.smf, a)
self.assertEqual(trackb.smf, b)
self.assertEqual(tracka.track_number, trackb.track_number)
self.assertEqual(tracka.number_of_events, trackb.number_of_events)
self.assertEqual(tracka.file_buffer_length, trackb.file_buffer_length)
self.assertEqual(tracka.last_status, trackb.last_status)
self.assertEqual(tracka.next_event_offset, trackb.next_event_offset)
#self.assertEqual(tracka.next_event_number, trackb.next_event_number)
#self.assertEqual(tracka.time_of_next_event, trackb.time_of_next_event)
tracka_events = tracka.events_array
trackb_events = trackb.events_array
for j in range(tracka.number_of_events):
eventa = tracka_events[j]
eventb = trackb_events[j]
self.assertEqual(tracka, eventa.track)
self.assertEqual(trackb, eventb.track)
self.assertEqual(eventa.event_number, eventb.event_number)
self.assertEqual(eventa.delta_time_pulses, eventb.delta_time_pulses)
self.assertEqual(eventa.time_pulses, eventb.time_pulses)
self.assertEqual(eventa.time_seconds, eventb.time_seconds)
self.assertEqual(eventa.track_number, eventb.track_number)
self.assertEqual(eventa.midi_buffer_length, eventb.midi_buffer_length)
self.assertEqual(eventa.get_buffer(), eventb.get_buffer())
@unittest.expectedFailure
def test_tempo_ref_counts(self):
bach = Smf.File.load(os.path.join(self.path, 'chpn_op53.mid'))
tempo = bach.get_last_tempo()
#self.assertEqual(tempo.ref_count, 2)
bach.remove_tempo(tempo)
self.assertEqual(tempo.ref_count, 1)
def test_file_ref_count(self):
pass
def test_bach_read_write_read_compare(self):
orig = Smf.File.load(os.path.join(self.path, 'chpn_op53.mid'))
handle, temp_filename = tempfile.mkstemp('mid')
os.close(handle)
orig.save(temp_filename)
new = Smf.File.load(temp_filename)
self.compare_smf_files(orig, new)
if __name__ == '__main__':
unittest.main()
# File: examples/ADT.py
# Repo: SophiaZhyrovetska/Music_analizer | License: MIT
class Song:
"A class for representing a song"
def __init__(self, name, singer):
"""
Initialize a new song with its name and singer
:param name: str
:param singer: str
"""
self.name = name
self.singer = singer
self.mood = self.mood()  # note: this assignment shadows the mood() method with its return value
def text(self):
"""
Returns the text of the song
:return: str
"""
pass
def mood(self):
"""
Returns the mood of the song
:return: str
"""
pass
def theme(self):
"""
Returns the theme of the song
:return: str
"""
pass
def key_words(self):
"""
Returns the key words of the song
:return: list
"""
pass
class Singer:
"A class for representing a singer"
def __init__(self, name):
"""
Initialize a new singer with its name
:param name: str
"""
self.name = name
class Discography:
"A class for representing a discography of a singer. Uses Singer() and Song() instances"
def __init__(self, singer):
"""
Initialize a new discography
:param singer: Singer() instance
"""
self.singer = singer
self.songs = []
def add_song(self, song):
"""
Adds a song to discography (self.songs)
:param song: Song() instance
:return: None
"""
pass
def number_of_songs(self):
"""
Returns the number of songs in this discography
:return: int
"""
pass
def mood(self):
"""
Returns a dictionary with moods as keys and numbers of songs as values
:return: dict
"""
pass
def themes(self):
"""
Returns most popular themes of songs in this discography
:return: list
"""
pass
| 20.462366 | 92 | 0.504467 | 217 | 1,903 | 4.35023 | 0.230415 | 0.081568 | 0.063559 | 0.055085 | 0.247881 | 0.177966 | 0.073093 | 0 | 0 | 0 | 0 | 0 | 0.405675 | 1,903 | 92 | 93 | 20.684783 | 0.83466 | 0.426169 | 0 | 0.451613 | 0 | 0 | 0.169109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.354839 | false | 0.258065 | 0 | 0 | 0.451613 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
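Every method body in `ADT.py` is left as `pass`; a hypothetical fill-in for `add_song`, `number_of_songs`, and `mood` that matches the docstrings (the `Counter`-based tallying is an assumption, not part of the original file):

```python
from collections import Counter

class Song:
    def __init__(self, name, singer, mood=None):
        self.name = name
        self.singer = singer
        self.mood = mood

class Discography:
    def __init__(self, singer):
        self.singer = singer
        self.songs = []

    def add_song(self, song):
        # Adds a song to the discography (self.songs)
        self.songs.append(song)

    def number_of_songs(self):
        return len(self.songs)

    def mood(self):
        # moods as keys, number of songs as values
        return dict(Counter(s.mood for s in self.songs))

d = Discography("Queen")
d.add_song(Song("Bohemian Rhapsody", "Queen", mood="dramatic"))
d.add_song(Song("Don't Stop Me Now", "Queen", mood="happy"))
d.add_song(Song("Somebody to Love", "Queen", mood="happy"))
assert d.number_of_songs() == 3
assert d.mood() == {"dramatic": 1, "happy": 2}
```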
a87a0c58e2acdc88f9a0c4132a845c88665fa4ac | 1,531 | py | Python | stubs.min/Autodesk/Revit/DB/__init___parts/BRepBuilderGeometryId.py | denfromufa/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | 1 | 2017-07-07T11:15:45.000Z | 2017-07-07T11:15:45.000Z | stubs.min/Autodesk/Revit/DB/__init___parts/BRepBuilderGeometryId.py | hdm-dt-fb/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | null | null | null | stubs.min/Autodesk/Revit/DB/__init___parts/BRepBuilderGeometryId.py | hdm-dt-fb/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | null | null | null | class BRepBuilderGeometryId(object,IDisposable):
"""
This class is used by the BRepBuilder class to identify objects it creates (faces, edges, etc.).
BRepBuilderGeometryId(other: BRepBuilderGeometryId)
"""
def Dispose(self):
""" Dispose(self: BRepBuilderGeometryId) """
pass
@staticmethod
def InvalidGeometryId():
"""
InvalidGeometryId() -> BRepBuilderGeometryId
Returns an invalid BRepBuilderGeometryId, used as a return value to indicate an
error.
"""
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: BRepBuilderGeometryId,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,other):
""" __new__(cls: type,other: BRepBuilderGeometryId) """
pass
def __repr__(self,*args):
""" __repr__(self: object) -> str """
pass
IsValidObject=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Specifies whether the .NET object represents a valid Revit entity.
Get: IsValidObject(self: BRepBuilderGeometryId) -> bool
"""
| 33.282609 | 215 | 0.701502 | 165 | 1,531 | 6.054545 | 0.412121 | 0.035035 | 0.048048 | 0.057057 | 0.113113 | 0.113113 | 0.113113 | 0.113113 | 0.113113 | 0.113113 | 0 | 0 | 0.171783 | 1,531 | 45 | 216 | 34.022222 | 0.787855 | 0.525147 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0.4 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
a8803c766a451bb61117713eb202084c40d7750f | 1,435 | py | Python | students/k3343/laboratory_works/Berezhnova_Marina/laboratory_work_1/django_project_flights/flights_app/models.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 10 | 2020-03-20T09:06:12.000Z | 2021-07-27T13:06:02.000Z | students/k3343/laboratory_works/Berezhnova_Marina/laboratory_work_1/django_project_flights/flights_app/models.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 134 | 2020-03-23T09:47:48.000Z | 2022-03-12T01:05:19.000Z | students/k3343/laboratory_works/Berezhnova_Marina/laboratory_work_1/django_project_flights/flights_app/models.py | TonikX/ITMO_ICT_-WebProgramming_2020 | ba566c1b3ab04585665c69860b713741906935a0 | [
"MIT"
] | 71 | 2020-03-20T12:45:56.000Z | 2021-10-31T19:22:25.000Z | from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Companies(models.Model):
name = models.CharField(max_length=30)
def __str__(self):
return "{}".format(self.name)
class Gates(models.Model):
name = models.CharField(max_length=30)
def __str__(self):
return "{}".format(self.name)
class Flights(models.Model):
company = models.ForeignKey(Companies, on_delete=models.CASCADE)
gate = models.ForeignKey(Gates, on_delete=models.CASCADE)
def __str__(self):
return "Company: {} | Gate: {}".format(self.company, self.gate)
class FlightActivities(models.Model):
ACTIVITY = [
('0', 'arrival'),
('1', 'departure')
]
flight = models.ForeignKey(Flights, on_delete=models.CASCADE)
activity = models.CharField(choices=ACTIVITY, default='0', max_length=1)
time = models.DateField()
def __str__(self):
return "{} | Arrival/departure: {} | Date {}".format(self.flight, self.get_activity_display(), self.time)
class FlightComments(models.Model):
flight = models.ForeignKey(FlightActivities, on_delete=models.CASCADE)
COMMENT_TYPE = [
('0', 'Gate changing'),
('1', 'Lateness'),
('2', 'Other')
]
com_type = models.CharField(choices=COMMENT_TYPE, default='0', max_length=1)
com_text = models.CharField(max_length=1024)
author = models.ForeignKey(User, on_delete=models.CASCADE)
| 27.075472 | 110 | 0.687108 | 175 | 1,435 | 5.451429 | 0.325714 | 0.057652 | 0.073375 | 0.110063 | 0.197065 | 0.159329 | 0.159329 | 0.159329 | 0.159329 | 0.159329 | 0 | 0.014202 | 0.165854 | 1,435 | 52 | 111 | 27.596154 | 0.78279 | 0.016725 | 0 | 0.228571 | 0 | 0 | 0.078779 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.057143 | 0.114286 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
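The `ACTIVITY` and `COMMENT_TYPE` fields use Django's `choices`, and `get_activity_display()` in `__str__` resolves the stored one-character value to its human-readable label. The underlying lookup can be sketched in plain Python (no Django runtime needed):

```python
ACTIVITY = [
    ("0", "arrival"),
    ("1", "departure"),
]

labels = dict(ACTIVITY)  # stored value -> human-readable label

def get_activity_display(stored_value, default="unknown"):
    # Mirrors the mapping behind FlightActivities.get_activity_display()
    # (the `default` fallback is an addition for illustration).
    return labels.get(stored_value, default)

assert get_activity_display("0") == "arrival"
assert get_activity_display("1") == "departure"
assert get_activity_display("9") == "unknown"
```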
a883398d1013f82065fdcd6cb3f64c0a32a024f7 | 743 | py | Python | membership/urls.py | kay-han/building-blog | 2bdbee68b484193c636ed869b2de605df67b2a48 | [
"Unlicense"
] | null | null | null | membership/urls.py | kay-han/building-blog | 2bdbee68b484193c636ed869b2de605df67b2a48 | [
"Unlicense"
] | null | null | null | membership/urls.py | kay-han/building-blog | 2bdbee68b484193c636ed869b2de605df67b2a48 | [
"Unlicense"
] | null | null | null | from django.urls import path
from .views import UserRegisterView, UserEditView, PasswordsChangeView
from django.contrib.auth import views as auth_views  # allows using views that come with Django's built-in authentication system
from . import views
urlpatterns = [
path('registeration/', UserRegisterView.as_view(), name='registeration'),
path('edit_profile/', UserEditView.as_view(), name='edit-profile'),
#path('password/', auth_views.PasswordsChangeView.as_view(template_name='registration/change-password.html')),
path('password/', PasswordsChangeView.as_view(template_name='registration/change-password.html')),
path('password_success/', views.password_success, name='password_success.html'),
]
| 57.153846 | 146 | 0.776581 | 90 | 743 | 6.277778 | 0.4 | 0.042478 | 0.035398 | 0.116814 | 0.279646 | 0.279646 | 0.279646 | 0.279646 | 0.279646 | 0.279646 | 0 | 0 | 0.106326 | 743 | 12 | 147 | 61.916667 | 0.850904 | 0.270525 | 0 | 0 | 0 | 0 | 0.244444 | 0.1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.3 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
a89d5ea301daab707e4e307ee463a9e25963e7c1 | 7,626 | py | Python | tests/epc_schemes/test_giai.py | nedap/retail-epcpy | f5a454f2a06053f64bc42e6c6411fbd6cb47e745 | [
"MIT"
] | 2 | 2022-03-21T08:22:30.000Z | 2022-03-22T12:32:29.000Z | tests/epc_schemes/test_giai.py | nedap/retail-epcpy | f5a454f2a06053f64bc42e6c6411fbd6cb47e745 | [
"MIT"
] | 1 | 2022-03-28T14:48:52.000Z | 2022-03-28T14:48:52.000Z | tests/epc_schemes/test_giai.py | nedap/retail-epcpy | f5a454f2a06053f64bc42e6c6411fbd6cb47e745 | [
"MIT"
] | null | null | null | import unittest
from epcpy.epc_schemes.giai import GIAI, GIAIFilterValue
from tests.epc_schemes.test_base_scheme import (
TestEPCSchemeInitMeta,
TestGS1ElementMeta,
TestTagEncodableMeta,
)
class TestGIAIInit(
unittest.TestCase,
metaclass=TestEPCSchemeInitMeta,
scheme=GIAI,
valid_data=[
{
"name": "test_valid_giai_1",
"uri": "urn:epc:id:giai:0614141.12345400",
},
{
"name": "test_valid_giai_2",
"uri": "urn:epc:id:giai:0614141.0",
},
{
"name": "test_valid_giai_3",
"uri": "urn:epc:id:giai:0614141.1ABc%2FD",
},
{
"name": "test_valid_giai_4",
"uri": "urn:epc:id:giai:061411.01ABc%2FD",
},
{
"name": "test_valid_giai_5",
"uri": "urn:epc:id:giai:012345.012345678901234567890123",
},
{
"name": "test_valid_giai_6",
"uri": "urn:epc:id:giai:012345678901.012345678901234567",
},
],
invalid_data=[
{
"name": "test_invalid_giai_identifier",
"uri": "urn:epc:id:gai:061411.01ABc%2FD",
},
{
"name": "test_invalid_giai_company_prefix_1",
"uri": "urn:epc:id:giai:06141.1ABc%2FD",
},
{
"name": "test_invalid_giai_company_prefix_2",
"uri": "urn:epc:id:giai:0614111111111.1ABc%2FD",
},
{
"name": "test_invalid_giai_serial_too_long_1",
"uri": "urn:epc:id:giai:012345.0123456789012345678901234",
},
{
"name": "test_invalid_giai_serial_too_long_2",
"uri": "urn:epc:id:giai:012345678901.0123456789012345678",
},
],
):
pass
class TestGIAIGS1Key(
unittest.TestCase,
metaclass=TestGS1ElementMeta,
scheme=GIAI,
valid_data=[
{
"name": "test_valid_giai_gs1_key_1",
"uri": "urn:epc:id:giai:0614141.12345400",
"gs1_key": "061414112345400",
"gs1_element_string": "(8004)061414112345400",
"company_prefix_length": 7,
},
{
"name": "test_valid_giai_gs1_key_2",
"uri": "urn:epc:id:giai:0614141.0",
"gs1_key": "06141410",
"gs1_element_string": "(8004)06141410",
"company_prefix_length": 7,
},
{
"name": "test_valid_giai_gs1_key_3",
"uri": "urn:epc:id:giai:0614141.1ABc%2FD",
"gs1_key": "06141411ABc/D",
"gs1_element_string": "(8004)06141411ABc/D",
"company_prefix_length": 7,
},
{
"name": "test_valid_giai_gs1_key_4",
"uri": "urn:epc:id:giai:061411.01ABc%2FD",
"gs1_key": "06141101ABc/D",
"gs1_element_string": "(8004)06141101ABc/D",
"company_prefix_length": 6,
},
{
"name": "test_valid_giai_gs1_key_5",
"uri": "urn:epc:id:giai:012345.012345678901234567890123",
"gs1_key": "012345012345678901234567890123",
"gs1_element_string": "(8004)012345012345678901234567890123",
"company_prefix_length": 6,
},
{
"name": "test_valid_giai_gs1_key_6",
"uri": "urn:epc:id:giai:012345678901.012345678901234567",
"gs1_key": "012345678901012345678901234567",
"gs1_element_string": "(8004)012345678901012345678901234567",
"company_prefix_length": 12,
},
],
invalid_data=[],
):
pass
class TestGIAITagEncodable(
unittest.TestCase,
metaclass=TestTagEncodableMeta,
scheme=GIAI,
valid_data=[
{
"name": "test_valid_giai_tag_encodable_1",
"uri": "urn:epc:id:giai:0614141.12345400",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_202,
"filter_value": GIAIFilterValue.RAIL_VEHICLE,
},
"tag_uri": "urn:epc:tag:giai-202:1.0614141.12345400",
"hex": "3834257BF58B266D1AB460C00000000000000000000000000000",
},
{
"name": "test_valid_giai_tag_encodable_2",
"uri": "urn:epc:id:giai:0614141.0",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_96,
"filter_value": GIAIFilterValue.RAIL_VEHICLE,
},
"tag_uri": "urn:epc:tag:giai-96:1.0614141.0",
"hex": "3434257BF400000000000000",
},
{
"name": "test_valid_giai_tag_encodable_3",
"uri": "urn:epc:id:giai:0614141.1ABc%2FD",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_202,
"filter_value": GIAIFilterValue.ALL_OTHERS,
},
"tag_uri": "urn:epc:tag:giai-202:0.0614141.1ABc%2FD",
"hex": "3814257BF58C1858D7C400000000000000000000000000000000",
},
{
"name": "test_valid_giai_tag_encodable_4",
"uri": "urn:epc:id:giai:061411.01ABc%2FD",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_202,
"filter_value": GIAIFilterValue.RESERVED_4,
},
"tag_uri": "urn:epc:tag:giai-202:4.061411.01ABc%2FD",
"hex": "38983BF8D831830B1AF880000000000000000000000000000000",
},
{
"name": "test_valid_giai_tag_encodable_5",
"uri": "urn:epc:id:giai:012345.012345678901234567890123",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_202,
"filter_value": GIAIFilterValue.RAIL_VEHICLE,
},
"tag_uri": "urn:epc:tag:giai-202:1.012345.012345678901234567890123",
"hex": "38380C0E583164CDA356CDDC3960C593368D5B3770E583164CC0",
},
{
"name": "test_valid_giai_tag_encodable_6",
"uri": "urn:epc:id:giai:0614141.12345400",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_96,
"filter_value": GIAIFilterValue.RAIL_VEHICLE,
},
"tag_uri": "urn:epc:tag:giai-96:1.0614141.12345400",
"hex": "3434257BF400000000BC6038",
},
{
"name": "test_valid_giai_tag_encodable_7",
"uri": "urn:epc:id:giai:0614141.02",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_202,
"filter_value": GIAIFilterValue.RAIL_VEHICLE,
},
"tag_uri": "urn:epc:tag:giai-202:1.0614141.02",
"hex": "3834257BF5832000000000000000000000000000000000000000",
},
],
invalid_data=[
{
"name": "test_invalid_giai_tag_encodable_invalid_serial_1",
"uri": "urn:epc:id:giai:0614141.02",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_96,
"filter_value": GIAIFilterValue.RAIL_VEHICLE,
},
},
{
"name": "test_invalid_giai_tag_encodable_invalid_serial_2",
"uri": "urn:epc:id:giai:061411.11ABc%2FD",
"kwargs": {
"binary_coding_scheme": GIAI.BinaryCodingScheme.GIAI_96,
"filter_value": GIAIFilterValue.RESERVED_4,
},
},
],
):
pass
| 34.663636 | 80 | 0.550747 | 720 | 7,626 | 5.527778 | 0.134722 | 0.051256 | 0.076884 | 0.074623 | 0.682915 | 0.656533 | 0.574372 | 0.545477 | 0.403769 | 0.36005 | 0 | 0.216465 | 0.31668 | 7,626 | 219 | 81 | 34.821918 | 0.547304 | 0 | 0 | 0.363208 | 0 | 0 | 0.450433 | 0.325334 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.014151 | 0.014151 | 0 | 0.028302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
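All of the valid URIs in these tests follow `urn:epc:id:giai:<companyPrefix>.<serial>`, and each expected `gs1_key` is just the two parts concatenated with percent-escapes decoded. A regex sketch that reproduces the expected keys — an illustration derived from the test data, not the `epcpy` implementation:

```python
import re
from urllib.parse import unquote

# Company prefixes in the valid cases are 6-12 digits; 5 and 13 are rejected.
GIAI_URI = re.compile(r"^urn:epc:id:giai:(\d{6,12})\.(.+)$")

def gs1_key(uri: str) -> str:
    match = GIAI_URI.match(uri)
    if match is None:
        raise ValueError(f"not a GIAI URI: {uri}")
    company_prefix, serial = match.groups()
    # GS1 key = company prefix + serial, with %2F etc. decoded
    return company_prefix + unquote(serial)

assert gs1_key("urn:epc:id:giai:0614141.12345400") == "061414112345400"
assert gs1_key("urn:epc:id:giai:0614141.1ABc%2FD") == "06141411ABc/D"
```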
a89fa94a38be0f7ff83779a36c140bcbf11011b7 | 1,422 | py | Python | publicacion/migrations/0002_remove_publicacion_user_publicacion_autor_and_more.py | chelocastillo1/test | b783e64dbd3071c3ed074e9ce23da047e9bad97d | [
"CC0-1.0"
] | 1 | 2021-12-12T22:27:52.000Z | 2021-12-12T22:27:52.000Z | publicacion/migrations/0002_remove_publicacion_user_publicacion_autor_and_more.py | chelocastillo1/test | b783e64dbd3071c3ed074e9ce23da047e9bad97d | [
"CC0-1.0"
] | null | null | null | publicacion/migrations/0002_remove_publicacion_user_publicacion_autor_and_more.py | chelocastillo1/test | b783e64dbd3071c3ed074e9ce23da047e9bad97d | [
"CC0-1.0"
] | null | null | null | # Generated by Django 4.0 on 2021-12-15 02:51
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('cuenta', '0001_initial'),
('publicacion', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='publicacion',
name='user',
),
migrations.AddField(
model_name='publicacion',
name='autor',
field=models.ForeignKey(default=0, on_delete=django.db.models.deletion.DO_NOTHING, to='cuenta.usuario'),
),
migrations.AddField(
model_name='publicacion',
name='destacado',
field=models.BooleanField(default=False),
),
migrations.AddField(
model_name='publicacion',
name='imagen',
field=models.ImageField(default=None, upload_to=''),
),
migrations.AlterField(
model_name='publicacion',
name='fechaCreacion',
field=models.DateTimeField(),
),
migrations.AlterField(
model_name='publicacion',
name='fechaEdicion',
field=models.DateTimeField(),
),
migrations.AlterField(
model_name='publicacion',
name='titulo',
field=models.CharField(max_length=100),
),
]
| 28.44 | 116 | 0.561181 | 123 | 1,422 | 6.382114 | 0.447154 | 0.080255 | 0.178344 | 0.214013 | 0.389809 | 0.389809 | 0.173248 | 0.173248 | 0.173248 | 0 | 0 | 0.026887 | 0.319972 | 1,422 | 49 | 117 | 29.020408 | 0.784902 | 0.030239 | 0 | 0.511628 | 1 | 0 | 0.135802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.046512 | 0 | 0.116279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
a8a48dba10dc8bcece98d956d127819f587eacf1 | 13,796 | py | Python | src/main/java/nl/Ipsen5Server/Service/kik-bot-api-unofficial/examples/kik_unofficial/protobuf/common/v2/model_pb2.py | anthonyscheeres/Ipen5BackendGroep11 | e2675c2ac6580f0a6f1d9e5f755f19405d17e514 | [
"Apache-2.0"
] | null | null | null | src/main/java/nl/Ipsen5Server/Service/kik-bot-api-unofficial/examples/kik_unofficial/protobuf/common/v2/model_pb2.py | anthonyscheeres/Ipen5BackendGroep11 | e2675c2ac6580f0a6f1d9e5f755f19405d17e514 | [
"Apache-2.0"
] | null | null | null | src/main/java/nl/Ipsen5Server/Service/kik-bot-api-unofficial/examples/kik_unofficial/protobuf/common/v2/model_pb2.py | anthonyscheeres/Ipen5BackendGroep11 | e2675c2ac6580f0a6f1d9e5f755f19405d17e514 | [
"Apache-2.0"
] | null | null | null | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: common/v2/model.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
import kik_unofficial.protobuf.kik_options_pb2 as kik__options__pb2
import kik_unofficial.protobuf.protobuf_validation_pb2 as protobuf__validation__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='common/v2/model.proto',
package='common.v2',
syntax='proto3',
serialized_pb=_b('\n\x15\x63ommon/v2/model.proto\x12\tcommon.v2\x1a\x11kik_options.proto\x1a\x19protobuf_validation.proto\"K\n\tAccountId\x12>\n\nlocal_part\x18\x01 \x01(\tB*\xca\x9d%&\x08\x01\x12\"^[a-z_0-9\\.]{2,30}(_[a-z0-9]{3})?$\")\n\tPersonaId\x12\x1c\n\traw_value\x18\x01 \x01(\x0c\x42\t\xca\x9d%\x05\x08\x01\x30\x80\x01\"(\n\x06\x43hatId\x12\x1e\n\traw_value\x18\x01 \x01(\x0c\x42\x0b\xca\x9d%\x07\x08\x01(\x01\x30\x80\x04\"A\n\nOneToOneId\x12\x33\n\x08personas\x18\x01 \x03(\x0b\x32\x14.common.v2.PersonaIdB\x0b\xca\x9d%\x07\x08\x01x\x02\x80\x01\x02\"/\n\x10\x43lientInstanceId\x12\x1b\n\traw_value\x18\x01 \x01(\x0c\x42\x08\xca\x9d%\x04\x08\x01\x30\x64\"%\n\x04Uuid\x12\x1d\n\traw_value\x18\x01 \x01(\x0c\x42\n\xca\x9d%\x06\x08\x01(\x10\x30\x10\"\x82\x01\n\x05\x45mail\x12y\n\x05\x65mail\x18\x01 \x01(\tBj\xca\x9d%f\x08\x01\x12_^[\\w\\-+]+(\\.[\\w\\-+]+)*@[A-Za-z0-9][A-Za-z0-9\\-]*(\\.[A-Za-z0-9][A-Za-z0-9\\-]*)*(\\.[A-Za-z]{2,})$0\xf8\x07\"4\n\x08Username\x12(\n\x08username\x18\x02 \x01(\tB\x16\xca\x9d%\x12\x08\x01\x12\x0e^[\\w\\.]{2,30}$B~\n\x15\x63om.kik.gen.common.v2P\x01ZLgithub.com/kikinteractive/xiphias-model-common/generated/go/common/v2;common\xa0\x01\x01\xa2\x02\x0bKPBCommonV2\xaa\xa3*\x02\x08\x01\x62\x06proto3')
,
dependencies=[kik__options__pb2.DESCRIPTOR,protobuf__validation__pb2.DESCRIPTOR,])
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_ACCOUNTID = _descriptor.Descriptor(
name='AccountId',
full_name='common.v2.AccountId',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='local_part', full_name='common.v2.AccountId.local_part', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%&\010\001\022\"^[a-z_0-9\\.]{2,30}(_[a-z0-9]{3})?$'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=82,
serialized_end=157,
)
_PERSONAID = _descriptor.Descriptor(
name='PersonaId',
full_name='common.v2.PersonaId',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='raw_value', full_name='common.v2.PersonaId.raw_value', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\005\010\0010\200\001'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=159,
serialized_end=200,
)
_CHATID = _descriptor.Descriptor(
name='ChatId',
full_name='common.v2.ChatId',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='raw_value', full_name='common.v2.ChatId.raw_value', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\007\010\001(\0010\200\004'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=202,
serialized_end=242,
)
_ONETOONEID = _descriptor.Descriptor(
name='OneToOneId',
full_name='common.v2.OneToOneId',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='personas', full_name='common.v2.OneToOneId.personas', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\007\010\001x\002\200\001\002'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=244,
serialized_end=309,
)
_CLIENTINSTANCEID = _descriptor.Descriptor(
name='ClientInstanceId',
full_name='common.v2.ClientInstanceId',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='raw_value', full_name='common.v2.ClientInstanceId.raw_value', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\004\010\0010d'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=311,
serialized_end=358,
)
_UUID = _descriptor.Descriptor(
name='Uuid',
full_name='common.v2.Uuid',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='raw_value', full_name='common.v2.Uuid.raw_value', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\006\010\001(\0200\020'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=360,
serialized_end=397,
)
_EMAIL = _descriptor.Descriptor(
name='Email',
full_name='common.v2.Email',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='email', full_name='common.v2.Email.email', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%f\010\001\022_^[\\w\\-+]+(\\.[\\w\\-+]+)*@[A-Za-z0-9][A-Za-z0-9\\-]*(\\.[A-Za-z0-9][A-Za-z0-9\\-]*)*(\\.[A-Za-z]{2,})$0\370\007'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=400,
serialized_end=530,
)
_USERNAME = _descriptor.Descriptor(
name='Username',
full_name='common.v2.Username',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='username', full_name='common.v2.Username.username', index=0,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\022\010\001\022\016^[\\w\\.]{2,30}$'))),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=532,
serialized_end=584,
)
_ONETOONEID.fields_by_name['personas'].message_type = _PERSONAID
DESCRIPTOR.message_types_by_name['AccountId'] = _ACCOUNTID
DESCRIPTOR.message_types_by_name['PersonaId'] = _PERSONAID
DESCRIPTOR.message_types_by_name['ChatId'] = _CHATID
DESCRIPTOR.message_types_by_name['OneToOneId'] = _ONETOONEID
DESCRIPTOR.message_types_by_name['ClientInstanceId'] = _CLIENTINSTANCEID
DESCRIPTOR.message_types_by_name['Uuid'] = _UUID
DESCRIPTOR.message_types_by_name['Email'] = _EMAIL
DESCRIPTOR.message_types_by_name['Username'] = _USERNAME
AccountId = _reflection.GeneratedProtocolMessageType('AccountId', (_message.Message,), dict(
DESCRIPTOR = _ACCOUNTID,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.AccountId)
))
_sym_db.RegisterMessage(AccountId)
PersonaId = _reflection.GeneratedProtocolMessageType('PersonaId', (_message.Message,), dict(
DESCRIPTOR = _PERSONAID,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.PersonaId)
))
_sym_db.RegisterMessage(PersonaId)
ChatId = _reflection.GeneratedProtocolMessageType('ChatId', (_message.Message,), dict(
DESCRIPTOR = _CHATID,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.ChatId)
))
_sym_db.RegisterMessage(ChatId)
OneToOneId = _reflection.GeneratedProtocolMessageType('OneToOneId', (_message.Message,), dict(
DESCRIPTOR = _ONETOONEID,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.OneToOneId)
))
_sym_db.RegisterMessage(OneToOneId)
ClientInstanceId = _reflection.GeneratedProtocolMessageType('ClientInstanceId', (_message.Message,), dict(
DESCRIPTOR = _CLIENTINSTANCEID,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.ClientInstanceId)
))
_sym_db.RegisterMessage(ClientInstanceId)
Uuid = _reflection.GeneratedProtocolMessageType('Uuid', (_message.Message,), dict(
DESCRIPTOR = _UUID,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.Uuid)
))
_sym_db.RegisterMessage(Uuid)
Email = _reflection.GeneratedProtocolMessageType('Email', (_message.Message,), dict(
DESCRIPTOR = _EMAIL,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.Email)
))
_sym_db.RegisterMessage(Email)
Username = _reflection.GeneratedProtocolMessageType('Username', (_message.Message,), dict(
DESCRIPTOR = _USERNAME,
__module__ = 'common.v2.model_pb2'
# @@protoc_insertion_point(class_scope:common.v2.Username)
))
_sym_db.RegisterMessage(Username)
DESCRIPTOR.has_options = True
DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('\n\025com.kik.gen.common.v2P\001ZLgithub.com/kikinteractive/xiphias-model-common/generated/go/common/v2;common\240\001\001\242\002\013KPBCommonV2\252\243*\002\010\001'))
_ACCOUNTID.fields_by_name['local_part'].has_options = True
_ACCOUNTID.fields_by_name['local_part']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%&\010\001\022\"^[a-z_0-9\\.]{2,30}(_[a-z0-9]{3})?$'))
_PERSONAID.fields_by_name['raw_value'].has_options = True
_PERSONAID.fields_by_name['raw_value']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\005\010\0010\200\001'))
_CHATID.fields_by_name['raw_value'].has_options = True
_CHATID.fields_by_name['raw_value']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\007\010\001(\0010\200\004'))
_ONETOONEID.fields_by_name['personas'].has_options = True
_ONETOONEID.fields_by_name['personas']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\007\010\001x\002\200\001\002'))
_CLIENTINSTANCEID.fields_by_name['raw_value'].has_options = True
_CLIENTINSTANCEID.fields_by_name['raw_value']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\004\010\0010d'))
_UUID.fields_by_name['raw_value'].has_options = True
_UUID.fields_by_name['raw_value']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\006\010\001(\0200\020'))
_EMAIL.fields_by_name['email'].has_options = True
_EMAIL.fields_by_name['email']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%f\010\001\022_^[\\w\\-+]+(\\.[\\w\\-+]+)*@[A-Za-z0-9][A-Za-z0-9\\-]*(\\.[A-Za-z0-9][A-Za-z0-9\\-]*)*(\\.[A-Za-z]{2,})$0\370\007'))
_USERNAME.fields_by_name['username'].has_options = True
_USERNAME.fields_by_name['username']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\312\235%\022\010\001\022\016^[\\w\\.]{2,30}$'))
# @@protoc_insertion_point(module_scope)
| 37.79726 | 1,243 | 0.711293 | 1,781 | 13,796 | 5.211679 | 0.130264 | 0.032752 | 0.021978 | 0.071429 | 0.65794 | 0.577031 | 0.551605 | 0.538246 | 0.512821 | 0.512821 | 0 | 0.072892 | 0.125906 | 13,796 | 364 | 1,244 | 37.901099 | 0.696824 | 0.044796 | 0 | 0.574132 | 1 | 0.025237 | 0.25123 | 0.190688 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025237 | 0 | 0.025237 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
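The `AccountId.local_part` field in the descriptor above embeds a validation pattern, `^[a-z_0-9\.]{2,30}(_[a-z0-9]{3})?$`, in its field options. The constraint can be exercised directly with Python's `re` (a sketch; the real enforcement happens in the protobuf validation plugin, not in this generated module):

```python
import re

# Validation regex taken verbatim from the AccountId.local_part field options.
LOCAL_PART = re.compile(r"^[a-z_0-9\.]{2,30}(_[a-z0-9]{3})?$")

def is_valid_local_part(value: str) -> bool:
    return LOCAL_PART.match(value) is not None

assert is_valid_local_part("alice")
assert is_valid_local_part("alice_a1b")  # optional _xxx suffix
assert not is_valid_local_part("a")      # too short: minimum 2 characters
assert not is_valid_local_part("Alice")  # uppercase not allowed
```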
a8a830fc1bf61dcb27f3d46222c49301867517cd | 1,216 | py | Python | woodstock/auth_backends.py | allink/woodstock | afafecb7c4454f96e51c051044ed8ed74853c048 | [
"BSD-3-Clause"
] | null | null | null | woodstock/auth_backends.py | allink/woodstock | afafecb7c4454f96e51c051044ed8ed74853c048 | [
"BSD-3-Clause"
] | null | null | null | woodstock/auth_backends.py | allink/woodstock | afafecb7c4454f96e51c051044ed8ed74853c048 | [
"BSD-3-Clause"
] | null | null | null | from woodstock.models import Invitee, Participant
from woodstock import settings
from django.core.exceptions import ObjectDoesNotExist
class PersonBackend(object):
supports_object_permissions = False
supports_anonymous_user = False
supports_inactive_user = False
    def authenticate(self, username=None, password=None, user=None):
        if user:
            return user
        try:
            if settings.USERNAME_FIELD:
                get_filter = {settings.USERNAME_FIELD: username}
                user = self.model.objects.get(**get_filter)
                if user.check_password(password):
                    return user
                return None  # known user but wrong password: reject explicitly
            else:
                # no username field configured: look the person up by
                # password alone (the password acts as a login token)
                return self.model.objects.get(password=password)
        except ObjectDoesNotExist:
            return None
def get_user(self, user_id):
try:
return self.model.objects.get(pk=user_id)
except ObjectDoesNotExist:
return None
class InviteeBackend(PersonBackend):
"""
Authenticates against woodstock.models.Invitee.
"""
model = Invitee
class ParticipantBackend(PersonBackend):
"""
Authenticates against woodstock.models.Participant.
"""
model = Participant
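For context, backends like these are enabled through Django's `AUTHENTICATION_BACKENDS` setting, which Django tries in order until one returns a user. A sketch, assuming the module is importable as `woodstock.auth_backends` (the dotted paths are an assumption based on the file path above):

```python
# Hypothetical settings.py fragment: register both backends so that
# authenticate() consults invitees first, then participants.
AUTHENTICATION_BACKENDS = [
    "woodstock.auth_backends.InviteeBackend",
    "woodstock.auth_backends.ParticipantBackend",
]
```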
# File: deep_disfluency/corpus/tree_pos_map_writer.py (repo: askender/deep_disfluency, license: MIT)
# -*- coding: utf-8 -*-
from collections import defaultdict
import re
from nltk import tree
from swda import CorpusReader
from tree_pos_map import TreeMapCorpus
from tree_pos_map import POSMapCorpus
possibleMistranscription = [("its", "it's"),
("Its", "It's"),
("it's", "its"),
("It's", "Its"),
("whose", "who's"),
("Whose", "Who's"),
("who's", "whose"),
("Who's", "Whose"),
("you're", "your"),
("You're", "Your"),
("your", "you're"),
("Your", "You're"),
("their", "they're"),
("Their", "They're"),
("they're", "their"),
("They're", "Their"),
("programme", "program"),
("program", "programme"),
("centre", "center"),
("center", "centre"),
("travelling", "traveling"),
("traveling", "travelling"),
("colouring", "coloring"),
("coloring", "colouring")]
class TreeMapWriter:
"""Object which writes mappings from the words in utterances
to the nodes of the corresponding trees in a treebank
"""
def __init__(self, corpus_path="../swda",
metadata_path="swda-metadata.csv",
target_folder_path="Maps",
ranges=None,
errorLog=None):
print "started TreeMapWriting"
self.write_to_file(corpus_path,
metadata_path,
target_folder_path,
ranges,
errorLog)
def write_to_file(self, corpus_path,
metadata_path,
target_folder_path,
ranges,
errorLog):
"""Writes files to a target folder with the mappings
from words in utterances to tree nodes in trees.
"""
if errorLog:
errorLog = open(errorLog, 'w')
corpus = CorpusReader(corpus_path, metadata_path)
# Iterate through all transcripts
incorrectTrees = 0
folder = None
corpus_file = None
for trans in corpus.iter_transcripts():
# print "iterating",trans.conversation_no
if not trans.has_pos():
continue
# print "has pos"
            if ranges and trans.conversation_no not in ranges:
continue
# print "in range"
            # just look at transcripts WITH trees, as complement to the
            # models below
if not trans.has_trees():
continue
end = trans.swda_filename.rfind("/")
start = trans.swda_filename.rfind("/", 0, end)
c_folder = trans.swda_filename[start + 1:end]
if c_folder != folder:
# for now splitting the maps by folder
folder = c_folder
if corpus_file:
corpus_file.close()
corpus_file = open(target_folder_path +
"/Tree_map_{0}.csv.text".format(folder), 'w')
wordTreeMapList = TreeMapCorpus(False, errorLog)
print "new map for folder", folder
translist = trans.utterances
translength = len(translist)
count = 0
# iterating through transcript utterance by utterance
# create list of tuples i.e. map from word to the index(ices)
# (possibly multiple or null) of the relevant leaf/ves
# of a given tree i.e. utt.tree[0].leaves[0] would be a pair (0,0))
while count < translength:
utt = trans.utterances[count]
words = utt.text_words()
wordTreeMap = [] # [((word), (List of LeafIndices))]
forwardtrack = 0
backtrack = 0
continued = False
# print "\n COUNT" + str(count)
# print utt.damsl_act_tag()
if len(utt.trees) == 0 or utt.damsl_act_tag() == "x":
wordTreeMap.append((utt, [])) # just dummy value
# errormessage = "WARNING: NO TREE for file/utt: " +\
# str(utt.swda_filename) + " " + utt.caller + "." + \
# str(utt.utterance_index) + "." + \
#str(utt.subutterance_index) + " " + utt.text
# print(errormessage)
count += 1
continue
# raw_input()
# indices for which tree and leaf we're at:
i = 0 # tree
j = 0 # leaf
# initialise pairs of trees and ptb pairs
trees = []
for l in range(0, len(utt.trees)):
trees.append(
(utt.ptb_treenumbers[l], count, l, utt.trees[l]))
# print "TREES = "
# for tree in trees:
# print tree
origtrees = list(trees)
origcount = count
# overcoming the problem of previous utterances contributing
# to the tree at this utterance, we need to add the words from
# the previous utt add in all the words from previous utterance
# with a dialogue act tag/or the same tree?
# check that the last tree in the previous utterance
# is the same as the previous one
previousUttSame = trans.previous_utt_same_speaker(utt)
# print previousUttSame
lastTreeMap = None
if previousUttSame:
# print "search for previous full act utt
# for " + str(utt.swda_filename) +
# str(utt.transcript_index)
lastTreeMap = wordTreeMapList.get_treemap(
trans,
previousUttSame)
if ((not lastTreeMap) or (len(lastTreeMap) == 0) or
(len(lastTreeMap) == 1 and lastTreeMap[0][1] == [])):
# print "no last tree map, backwards searching"
while previousUttSame and \
((not lastTreeMap) or (len(lastTreeMap) == 0) or
(len(lastTreeMap) == 1 and lastTreeMap[0][1] == [])):
previousUttSame = trans.previous_utt_same_speaker(
previousUttSame) # go back one more
lastTreeMap = wordTreeMapList.get_treemap(trans,
previousUttSame)
if previousUttSame:
pass
# print previousUttSame.transcript_index
if not lastTreeMap:
pass
# print "no last treemap found for:"
# print utt.swda_filename
# print utt.transcript_index
if lastTreeMap and \
(utt.damsl_act_tag() == "+" or
(len(lastTreeMap.treebank_numbers) > 0
and lastTreeMap.treebank_numbers[-1] ==
utt.ptb_treenumbers[0])):
continued = True
# might have to backtrack
# now checking for wrong trees
lastPTB = lastTreeMap.treebank_numbers
lastIndexes = lastTreeMap.transcript_numbers
lastTreesTemp = lastTreeMap.get_trees(trans)
lastTrees = []
                    # use a fresh loop index here so the tree cursor i
                    # set above is not clobbered before the matching loop
                    for idx in range(0, len(lastPTB)):
                        lastTrees.append([lastPTB[idx], lastIndexes[idx][0],
                                          lastIndexes[idx][1],
                                          lastTreesTemp[idx]])
if not (lastPTB[-1] == utt.ptb_treenumbers[0]):
# print "not same, need to correct!"
# print words
# print trees
# print "last one"
# print previousUttSame.text_words()
# print lastTrees
if utt.ptb_treenumbers[0] - lastPTB[-1] > 1:
# backtrack and redo the antecedent
count = count - (count - lastIndexes[-1][0])
utt = previousUttSame
words = utt.text_words()
mytrees = []
                            # again avoid reusing i, the tree cursor
                            for idx in range(0, len(lastTrees) - 1):
                                mytrees.append(lastTrees[idx])
trees = mytrees + [origtrees[0]]
# print "\n(1)backtrack to with new trees:"
backtrack = 1
# print utt.transcript_index
# print words
# print trees
# raw_input()
# alternately, this utt's tree may be further back
# than its antecdent's, rare mistake
elif utt.ptb_treenumbers[0] < lastTrees[-1][0]:
# continue with this utterance and trees
# (if there are any), but replace its first
# tree with its antecdents last one
forwardtrack = 1
trees = [lastTrees[-1]] + origtrees[1:]
# print "\n(2)replacing first one to lasttreemap's:"
# print words
# print trees
# raw_input()
if backtrack != 1: # we should have no match
found_treemap = False
# resetting
# for t in wordTreeMapList.keys():
# print t
# print wordTreeMapList[t]
for t in range(len(lastTreeMap) - 1, -1, -1):
# print lastTreeMap[t][1]
# if there is a leafIndices for the
# word being looked at, gets last mapped one
if len(lastTreeMap[t][1]) > 0:
# print "last treemapping of last
# caller utterance =
# " + str(lastTreeMap[t][1][-1])
j = lastTreeMap[t][1][-1][1] + 1
found_treemap = True
# print "found last mapping, j -1 = " + str(j-1)
# raw_input()
break
if not found_treemap:
pass
# print "NO matched last TREEMAP found for \
# previous Utt Same Speaker of " + \
# str(trans.swda_filename) + " " + \
# str(utt.transcript_index)
# print lastTreeMap
# for tmap in wordTreeMapList.keys():
# print tmap
# print wordTreeMapList[tmap]
# raw_input()
possibleComment = False # can have comments, flag
mistranscribe = False
LeafIndices = [] # possibly empty list of leaf indices
word = words[0]
# loop until no more words left to be matched in utterance
while len(words) > 0:
# print "top WORD:" + word
if not mistranscribe:
wordtest = re.sub(r"[\.\,\?\"\!]", "", word)
wordtest = wordtest.replace("(", "").replace(")", "")
match = False
LeafIndices = [] # possibly empty list of leaf indices
if (possibleComment
or word[0:1] in ["{", "}", "-"]
or word in ["/", ".", ",", "]"]
or wordtest == ""
or any([x in word for x in ["<", ">", "*", "[", "+", "]]",
"...", "#", "="]])):
# no tree equivalent for {D } type annotations
if (word[0:1] == "-" or
any([x in word for x in
["*", "<<", "<+", "[[", "<"]])) \
and not possibleComment:
possibleComment = True
if possibleComment:
#print("match COMMENT!:" + word)
# raw_input()
LeafIndices = []
match = True
#wordTreeMap.append((word, LeafIndices))
if any([x in word for x in [">>", "]]", ">"]]) or \
word[0] == "-": # turn off comment
possibleComment = False
#del words[0]
# LeadIndices will be null here
wordTreeMap.append((word, LeafIndices))
LeafIndices = []
match = True
# print "match annotation!:" + word
del words[0] # word is consumed, should always be one
if len(words) > 0:
word = words[0]
wordtest = re.sub(r"[\.\,\?\/\)\(\"\!]", "", word)
wordtest = wordtest.replace("(", "")
wordtest = wordtest.replace(")", "")
else:
break
continue
# carry on to next word without updating indices?
else:
while i < len(trees):
# print "i number of trees :" + str(len(utt.trees))
# print "i tree number :" + str(i)
# print "i loop word :" + word
tree = trees[i][3]
# print "looking at ptb number " + str(trees[i][0])
# print "looking at index number " \
#+ str(trees[i][1])+","+str(trees[i][2])
while j < len(tree.leaves()):
leaf = tree.leaves()[j]
# print "j number of leaves : " \
#+ str(len(tree.leaves()))
# print "j loop word : " + word
# print "j loop wordtest : " + wordtest
# print "j leaf : " + str(j) + " " + leaf
breaker = False
# exact match
if wordtest == leaf or word == leaf:
LeafIndices.append((i, j))
wordTreeMap.append((word, LeafIndices))
# print("match!:" + word + " " + \
# str(utt.swda_filename) + " " + \
# utt.caller + "." + \
# str(utt.utterance_index) + \
# "." + str(utt.subutterance_index))
del words[0] # word is consumed
if len(words) > 0:
word = words[0] # next word
wordtest = re.sub(
r"[\.\,\?\/\)\(\"\!]", "", word)
wordtest = wordtest.replace("(", "")
wordtest = wordtest.replace(")", "")
LeafIndices = []
j += 1 # increment loop to next leaf
match = True
breaker = True
# raw_input()
break
elif leaf in wordtest or \
leaf in word and not leaf == ",":
testleaf = leaf
LeafIndices.append((i, j))
j += 1
for k in range(j, j + 3): # 3 beyond
if (k >= len(tree.leaves())):
j = 0
i += 1
#breaker = True
breaker = True
break # got to next tree
if (testleaf + tree.leaves()[k]) \
in wordtest or (testleaf +
tree.leaves()[k])\
in word:
testleaf += tree.leaves()[k]
LeafIndices.append((i, k))
j += 1
# concatenation
if testleaf == wordtest or \
testleaf == word: # word matched
wordTreeMap.append((word,
LeafIndices))
del words[0] # remove word
# print "match!:" + word +\
#str(utt.swda_filename) + " "\
# + utt.caller + "." + \
# str(utt.utterance_index) +\
# "." + \
# str(utt.subutterance_index))
if len(words) > 0:
word = words[0]
wordtest = re.sub(
r"[\.\,\?\/\)\(\"\!]",
"", word)
wordtest = wordtest.\
replace("(", "")
wordtest = wordtest.\
replace(")", "")
# reinitialise leaves
LeafIndices = []
j = k + 1
match = True
breaker = True
# raw_input()
break
else:
# otherwise go on
j += 1
if breaker:
break
if match:
break
if j >= len(tree.leaves()):
j = 0
i += 1
if match:
break
# could not match word! try mistranscriptions first:
if not match:
if not mistranscribe: # one final stab at matching!
mistranscribe = True
for pair in possibleMistranscription:
if pair[0] == wordtest:
wordtest = pair[1]
if len(wordTreeMap) > 0:
if len(wordTreeMap[-1][1]) > 0:
i = wordTreeMap[-1][1][-1][0]
j = wordTreeMap[-1][1][-1][1]
else:
# go back to beginning of
# tree search
i = 0
j = 0
else:
i = 0 # go back to beginning
j = 0
break # matched
elif continued:
# possible lack of matching up of words in
# previous utterance same caller and same
# tree// not always within same tree!!
errormessage = "Possible bad start for \
CONTINUED UTT ''" + words[0] + "'' in file/utt: "\
+ str(utt.swda_filename) + "\n " + utt.caller + \
"." + str(utt.utterance_index) + "." + \
str(utt.subutterance_index) + \
"POSSIBLE COMMENT = " + str(possibleComment)
# print errormessage
                                if errorLog is not None:
errorLog.write(errormessage + "\n")
# raw_input()
if backtrack == 1:
backtrack += 1
elif backtrack == 2:
# i.e. we've done two loops and
# still haven't found it, try the other way
count = origcount
utt = trans.utterances[count]
words = utt.text_words()
word = words[0]
trees = [lastTrees[-1]] + origtrees[1:]
# print "\nSECOND PASS(2)replacing \
# first one to lasttreemap's:"
# print words
# print trees
backtrack += 1
# mistranscribe = False #TODO perhaps needed
wordTreeMap = []
# switch to forward track this is
# the only time we want to try
# from the previous mapped leaf in the
# other tree
foundTreemap = False
for t in range(len(lastTreeMap) - 1, -1, -1):
# backwards iteration through words
# print lastTreeMap[t][1]
if len(lastTreeMap[t][1]) > 0:
# print "last treemapping of last \
# caller utterance = " + \
# str(lastTreeMap[t][1][-1])
j = lastTreeMap[t][1][-1][1] + 1
foundTreemap = True
# print "found last mapping, j = " \
#+ str(j)
# raw_input()
# break when last tree
# mapped word from this caller is found
break
if not foundTreemap:
# print "NO matched last TREEMAP found\
# for previous Utt Same Speaker of " + \
# str(utt.swda_filename) + " " + \
# utt.caller + "." + \
# str(utt.utterance_index) + "." +\
# str(utt.subutterance_index)
j = 0
# for tmap in wordTreeMapList.keys():
# print tmap
# print wordTreeMapList[tmap]
# raw_input()
i = 0 # go back to first tree
continue
elif forwardtrack == 1:
forwardtrack += 1
elif forwardtrack == 2:
count = count - (count - lastIndexes[-1][0])
utt = previousUttSame
words = utt.text_words()
word = words[0]
mytrees = []
                                    # avoid reusing i, the tree cursor
                                    for idx in range(0, len(lastTrees) - 1):
                                        mytrees.append(lastTrees[idx])
trees = mytrees + [origtrees[0]]
# print "\nSECOND PASS(1)backtrack to \
# with new trees:"
# print utt.transcript_index
# print words
# print trees
forwardtrack += 1
# mistranscribe = False #TODO maybe needed
wordTreeMap = []
# raw_input()
elif forwardtrack == 3 or backtrack == 3:
# if this hasn't worked reset to old trees
# print "trying final reset"
count = origcount
utt = trans.utterances[count]
words = utt.text_words()
word = words[0]
trees = origtrees
forwardtrack = 0
backtrack = 0
# mistranscribe = False #TODO maybe needed
wordTreeMap = []
# raw_input()
else:
pass
# print "resetting search"
# raw_input()
# unless forward tracking now,
# just go back to beginning
i = 0 # go back to beginning of tree search
j = 0
else:
mistranscribe = False
LeafIndices = []
wordTreeMap.append((word, LeafIndices))
errormessage = "WARNING: 440 no/partial tree \
mapping for ''" + words[0] + "'' in file/utt: "\
+ str(utt.swda_filename) + " \n" + utt.caller\
+ "." + str(utt.utterance_index) + "." + \
str(utt.subutterance_index) + \
"POSSIBLE COMMENT = " + str(possibleComment)
# print utt.text_words()
del words[0] # remove word
# for trip in wordTreeMap:
# print "t",trip
if len(words) > 0:
word = words[0]
wordtest = re.sub(r"[\.\,\?\/\)\(\"\!]", "",
word)
wordtest = wordtest.replace("(", "")
wordtest = wordtest.replace(")", "")
# print errormessage
if errorLog:
errorLog.write("possible wrong tree mapping:"
+ errormessage + "\n")
raw_input()
# end of while loop (words)
mytreenumbers = []
for treemap in trees:
# the whole list but the tree
mytreenumbers.append(treemap[:-1])
if not len(utt.text_words()) == len(wordTreeMap):
print "ERROR. uneven lengths!"
print utt.text_words()
print wordTreeMap
print trans.swda_filename
print utt.transcript_index
raw_input()
count += 1
continue
# add the treemap
wordTreeMapList.append(trans.conversation_no,
utt.transcript_index,
tuple(mytreenumbers),
tuple(wordTreeMap))
count += 1
# rewrite after each transcript
filedict = defaultdict(str)
for key in wordTreeMapList.keys():
csv_string = '"' + str(list(wordTreeMapList[key])) + '"'
mytreenumbers = wordTreeMapList[key].transcript_numbers
myptbnumbers = wordTreeMapList[key].treebank_numbers
tree_list_string = '"'
for i in range(0, len(mytreenumbers)):
treemap = [myptbnumbers[i]] + mytreenumbers[i]
tree_list_string += str(treemap) + ";"
tree_list_string = tree_list_string[:-1] + '"'
filename = '"' + key[0:key.rfind(':')] + '"'
transindex = key[key.rfind(':') + 1:]
filedict[int(transindex)] = filename \
+ "\t" + transindex + '\t' + csv_string + "\t" \
+ tree_list_string + "\n"
for key in sorted(filedict.keys()):
corpus_file.write(filedict[key])
wordTreeMapList = TreeMapCorpus(False, errorLog) # reset each time
print "\n" + str(incorrectTrees) + " incorrect trees"
corpus_file.close()
        if errorLog is not None:
errorLog.close()
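The leaf-matching strategy used above (an exact leaf match first, then a concatenation of a few consecutive leaves, so that e.g. "gonna" can map to the leaves "gon" + "na") can be sketched standalone; the function name and signature are illustrative only, not part of the module:

```python
import re

def match_word_to_leaves(word, leaves, start=0, lookahead=3):
    """Return the list of leaf indices that word maps to, or None.

    Tries an exact match against each leaf from `start` onwards, then a
    concatenation of up to `lookahead` following leaves, mirroring the
    matching loop in TreeMapWriter.write_to_file."""
    cleaned = re.sub(r"[\.\,\?\"\!]", "", word)  # strip transcript punctuation
    for j in range(start, len(leaves)):
        if leaves[j] in (word, cleaned):  # exact leaf match
            return [j]
        candidate, indices = leaves[j], [j]
        for k in range(j + 1, min(j + 1 + lookahead, len(leaves))):
            candidate += leaves[k]  # try concatenated leaves
            indices.append(k)
            if candidate in (word, cleaned):
                return indices
    return None
```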
class POSMapWriter:
"""Object which writes mappings from the words in utterances
to the corresponding POS tags.
"""
def __init__(self, corpus_path="../swda",
metadata_path="swda-metadata.csv",
target_folder_path="Maps",
ranges=None,
errorLog=None):
print "started MapWriting"
self.write_to_file(corpus_path,
metadata_path,
target_folder_path,
ranges,
errorLog)
def write_to_file(self, corpus_path,
metadata_path,
target_folder_path,
ranges,
errorLog):
"""Writes files to a target folder with the mappings
from words in utterances to corresponding POS tags.
"""
if errorLog:
errorLog = open(errorLog, 'w')
corpus = CorpusReader(corpus_path, metadata_path)
folder = None
corpus_file = None
for trans in corpus.iter_transcripts():
# print "iterating",trans.conversation_no
if not trans.has_pos():
continue
# print "has pos"
            if ranges and trans.conversation_no not in ranges:
continue
# print "in range"
            # just look at transcripts WITHOUT trees, as complement to the
            # models above
if trans.has_trees():
continue
end = trans.swda_filename.rfind("/")
start = trans.swda_filename.rfind("/", 0, end)
c_folder = trans.swda_filename[start + 1:end]
if c_folder != folder:
# for now splitting the maps by folder
folder = c_folder
if corpus_file:
corpus_file.close()
corpus_file = open(target_folder_path +
"/POS_map_{0}.csv.text".format(folder), 'w')
wordPOSMapList = POSMapCorpus(False, errorLog)
print "new map for folder", folder
translist = trans.utterances
translength = len(translist)
count = 0
# iterating through transcript utterance by utterance
while count < translength:
utt = trans.utterances[count]
words = utt.text_words()
wordPOSMap = []
if len(utt.pos) == 0: # no POS
wordPOSMap.append((utt, [])) # just dummy value
wordPOSMapList.append(trans.conversation_no,
utt.transcript_index,
list(wordPOSMap))
errormessage = "WARNING: NO POS for file/utt: " +\
str(utt.swda_filename) + " " + utt.caller + "." + \
str(utt.utterance_index) + "." + \
str(utt.subutterance_index) + " " + utt.text
# print errormessage
# raw_input()
else:
# indices for which POS we're at
j = 0
possibleComment = False # can have comments, flag
mistranscribe = False
word = words[0]
# loop until no more words left to be matched in utterance
while len(words) > 0:
word = words[0]
# print "top WORD:" + word
if not mistranscribe:
wordtest = re.sub(r"[\.\,\?\/\)\(\"\!\\]", "",
word)
wordtest = wordtest.replace("(", "").\
replace(")", "").replace("/", "")
match = False
POSIndices = []
if (possibleComment
or word[0:1] in ["{", "}", "-"]
or word in ["/", ".", ",", "]"]
or wordtest == ""
or any([x in word for x in
["<", ">", "*", "[", "+", "]]",
"...", "#", "="]])):
# no tree equivalent for {D } type annotations
if (word[0:1] == "-" or
any([x in word for x in
["*", "<<", "<+", "[[", "<"]])) \
and not possibleComment:
possibleComment = True
if possibleComment:
# print "match COMMENT!:" + word
# raw_input()
POSIndices = []
match = True
if (any([x in word for x in [">>", "]]", "))",
">"]]) or
word[0] == "-") \
and not word == "->":
# turn off comment
possibleComment = False
if (">>" in word or "]]" in word or "))"
in word or ">" in word and
not word == "->"): # turn off comment
possibleComment = False
#del words[0]
wordPOSMap.append((word, POSIndices))
POSIndices = []
match = True
# print "match annotation!:" + word
del words[0] # word is consumed
if len(words) > 0:
word = words[0]
wordtest = re.sub(r"[\.\,\?\/\)\(\"\!\\]",
"", word)
wordtest = wordtest.replace("(", "")
wordtest = wordtest.replace(")", "")
else:
break
continue # carry on to next word
else:
myPOS = utt.regularize_pos_lemmas()
while j < len(myPOS):
pos = myPOS[j][0] # pair of (word,POS)
# print "j number of pos : " + str(len(myPOS))
# print "j loop word : " + word
# print "j loop wordtest : " + wordtest
# print "j pos : " + str(j) + " " + str(pos)
# raw_input()
breaker = False
if wordtest == pos or word == pos: # exact match
POSIndices.append(j)
wordPOSMap.append((word, POSIndices))
# print "match!:" + word + " in file/utt: "\
# + str(utt.swda_filename) + \
# str(utt.transcript_index))
del words[0] # word is consumed
if len(words) > 0:
word = words[0] # next word
wordtest = re.sub(
r"[\.\,\?\/\)\(\"\!\\]",
"", word)
wordtest = wordtest.replace("(", "").\
replace(")", "").replace("/", "")
POSIndices = []
j += 1 # increment lead number
match = True
breaker = True
# raw_input()
break
elif (pos in wordtest or pos in word) \
and not pos in [",", "."]:
# substring relation
testpos = pos
POSIndices.append(j)
j += 1
if wordtest[-1] == "-" and \
pos == wordtest[0:-1]:
wordPOSMap.append((word, POSIndices))
del words[0] # remove word
# print "match!:" + word + " in \
# file/utt: " + str(utt.swda_filename) \
#+ str(utt.transcript_index)
if len(words) > 0:
word = words[0]
wordtest = re.sub(
r"[\.\,\?\/\)\(\"\!\\]",
"", word)
wordtest = wordtest.\
replace("(", "").\
replace(")", "").\
replace("/", "")
POSIndices = []
match = True
breaker = True
break
for k in range(j, j + 3):
if (k >= len(myPOS)):
breaker = True
break
if (testpos + myPOS[k][0]) in wordtest\
or (testpos + myPOS[k][0]) in word:
testpos += myPOS[k][0]
POSIndices.append(k)
j += 1
# concatenation
if testpos == wordtest or \
testpos == word: # matched
wordPOSMap.append((word,
POSIndices))
del words[0] # remove word
# print "match!:" +\
# word + " in file/utt: " + \
# str(utt.swda_filename) +\
# str(utt.transcript_index))
if len(words) > 0:
word = words[0]
wordtest = re.sub(
r"[\.\,\?\/\)\(\"\!\\]",
"", word)
wordtest = wordtest.\
replace("(", "")
wordtest = wordtest.\
replace(")", "")
POSIndices = []
j = k + 1
match = True
breaker = True
break
else:
j += 1 # otherwise go on
if breaker:
break
if match:
break
                            # could not match word! could be a mistranscription
if not match:
# print "false checking other options"
# print j
# print word
# print wordtest
if not mistranscribe:
mistranscribe = True
for pair in possibleMistranscription:
if pair[0] == wordtest:
wordtest = pair[1]
break # matched
if wordtest[-1] == "-": # partial words
wordtest = wordtest[0:-1]
if "'" in wordtest:
wordtest = wordtest.replace("'", "")
if len(wordPOSMap) > 0:
found = False
for n in range(
len(wordPOSMap) - 1, -1, -1):
if len(wordPOSMap[n][1]) > 0:
j = wordPOSMap[n][1][-1] + 1
# print j
found = True
break
if not found:
# if not possible go back to
# the beginning!
j = 0
else:
j = 0
# print j
else:
mistranscribe = False
wordPOSMap.append((word, POSIndices))
errormessage = "WARNING: no/partial POS \
mapping for ''" + words[0] + "'' in file/utt:"\
+ str(utt.swda_filename) + "-" + \
str(utt.transcript_index) + \
"POSSIBLE COMMENT = " + \
str(possibleComment)
del words[0] # remove word
if len(words) > 0:
word = words[0]
wordtest = re.sub(r"[\.\,\?\/\)\(\"\!\\]",
"", word)
wordtest = wordtest.replace("(", "").\
replace(")", "").replace("/", "")
# print errormessage
if errorLog:
errorLog.write("possible wrong POS : " +
errormessage + "\n")
# raw_input()
# end of while loop (words)
if not len(wordPOSMap) == len(utt.text_words()):
print "Error "
print "Length mismatch in file/utt: " + \
str(utt.swda_filename) + str(utt.transcript_index)
print utt.text_words()
print wordPOSMap
raw_input()
wordPOSMapList.append(trans.conversation_no,
str(utt.transcript_index),
list(wordPOSMap))
# print "\nadded POSmap " + str(trans.swda_filename) + \
#"." + str(utt.transcript_index) + "\n"
csv_string = '"' + str(wordPOSMap) + '"'
corpus_file.write('"' + str(utt.conversation_no) +
'"\t' + str(utt.transcript_index) +
'\t' + csv_string + "\n")
count += 1
corpus_file.close()
if errorLog:
errorLog.close()
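Both writers repeat the same word-cleanup step before comparing transcript words against tree leaves or POS tokens; a standalone sketch of that normalization (the helper name is illustrative):

```python
import re

def normalize_word(word):
    """Strip the punctuation and bracket characters that SWDA transcript
    words carry but tree leaves / POS tokens do not, matching the
    re.sub + replace chain repeated in the writers above."""
    cleaned = re.sub(r"[\.\,\?\/\)\(\"\!\\]", "", word)
    return cleaned.replace("(", "").replace(")", "").replace("/", "")
```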
if __name__ == '__main__':
t = TreeMapWriter()
# File: base/NeighborResult.py (repo: Holly-Jiang/QCTSA, license: MIT)
class NeighborResult:
def __init__(self):
self.solutions = []
self.choose_path = []
self.current_num = 0
self.curr_solved_gates = []
# File: machine/EventCase.py (repo: technosvitman/sm_gene, license: MIT)
'''
@brief this class reflect action decision regarding condition
'''
class EventAction():
'''
@brief build event action
@param cond the conditions to perform the action
@param to the target state if any
@param job the job to do if any
'''
def __init__(self, cond="", to="", job=""):
self.__to = to
self.__job = job
self.__cond = cond
'''
@brief get action state target
@return state name
'''
def getState(self) :
return self.__to
'''
@brief has transition condition
@return true if not empty
'''
def hasCond(self) :
return ( self.__cond != "" )
'''
@brief get action conditions
@return condition
'''
def getCond(self) :
return self.__cond
'''
@brief get action job
@return job
'''
def getJob(self) :
return self.__job
'''
@brief string represtation for state action
@return the string
'''
def __str__(self):
return "Act( %s, %s, %s )"%(self.__to, self.__job, self.__cond)
'''
@brief this class reflect the output switch on event received regarding condition and action to perform
'''
class EventCase():
'''
@brief build event case
@param event the event title
'''
def __init__(self, event):
self.__event = event
self.__acts = []
'''
@brief get iterator
'''
def __iter__(self):
return iter(self.__acts)
'''
@brief equality implementation
@param other the other element to compare with
'''
def __eq__(self, other):
if isinstance(other, str):
return self.__event == other
if not isinstance(other, EventCase):
return False
if self.__event != other.getEvent():
return False
return True
'''
@brief get action event
@return event
'''
def getEvent(self) :
return self.__event
'''
@brief add action
@param act the new action
'''
def addAct(self, act) :
if act not in self.__acts:
self.__acts.append(act)
'''
@brief string represtation for state action
@return the string
'''
def __str__(self):
output = "Event( %s ) { "%self.__event
if len(self.__acts):
output += "\n"
for act in self.__acts:
output += "%s\n"%str(act)
return output + "}"
'''
@brief this class store all event case for a state
'''
class EventCaseList():
'''
@brief build event case list
'''
def __init__(self):
self.__events = []
'''
@brief get iterator
'''
def __iter__(self):
return iter(self.__events)
'''
@brief append from StateAction
@param act the state action
'''
def append(self, act):
for cond in act.getConds():
evt = None
a = EventAction(cond=cond.getCond(),\
to=act.getState(),\
job=act.getJob())
for e in self.__events:
if e == cond.getEvent():
evt = e
break
if not evt:
evt = EventCase(cond.getEvent())
self.__events.append(evt)
evt.addAct(a)
'''
@brief append from State
@param state the state
'''
def appendState(self, state):
for act in state.getActions():
self.append(act)
'''
@brief string represtation for state action
@return the string
'''
def __str__(self):
output = "{ "
if len(self.__events):
output += "\n"
for e in self.__events:
output += "%s\n"%str(e)
return output + "}" | 25.189024 | 107 | 0.487775 | 422 | 4,131 | 4.561611 | 0.206161 | 0.041558 | 0.036364 | 0.028052 | 0.204675 | 0.188052 | 0.188052 | 0.154805 | 0.154805 | 0.112208 | 0 | 0 | 0.41007 | 4,131 | 164 | 108 | 25.189024 | 0.789906 | 0.068748 | 0 | 0.185714 | 0 | 0 | 0.018643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.242857 | false | 0 | 0 | 0.114286 | 0.485714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
# File: lib/ezdxf/sections/abstract.py (repo: tapnair/DXFer, license: MIT)
# Purpose: entity section
# Created: 13.03.2011
# Copyright (C) 2011, Manfred Moitzi
# License: MIT License
from __future__ import unicode_literals
__author__ = "mozman <mozman@gmx.at>"

from itertools import islice

from ..lldxf.tags import TagGroups, DXFStructureError
from ..lldxf.classifiedtags import ClassifiedTags, get_tags_linker
from ..query import EntityQuery


class AbstractSection(object):
    name = 'abstract'

    def __init__(self, entity_space, tags, drawing):
        self._entity_space = entity_space
        self.drawing = drawing
        if tags is not None:
            self._build(tags)

    @property
    def dxffactory(self):
        return self.drawing.dxffactory

    @property
    def entitydb(self):
        return self.drawing.entitydb

    def get_entity_space(self):
        return self._entity_space

    def _build(self, tags):
        if tags[0] != (0, 'SECTION') or tags[1] != (2, self.name.upper()) or tags[-1] != (0, 'ENDSEC'):
            raise DXFStructureError("Critical structure error in {} section.".format(self.name.upper()))
        if len(tags) == 3:  # empty entities section
            return
        linked_tags = get_tags_linker()
        store_tags = self._entity_space.store_tags
        entitydb = self.entitydb
        fix_tags = self.dxffactory.modify_tags
        for group in TagGroups(islice(tags, 2, len(tags) - 1)):
            tags = ClassifiedTags(group)
            fix_tags(tags)  # post-read tags fixer for VERTEX!
            handle = entitydb.add_tags(tags)
            if not linked_tags(tags, handle):  # also creates the link structure as a side effect
                store_tags(tags)  # add to entity space

    def write(self, stream):
        stream.write(" 0\nSECTION\n 2\n%s\n" % self.name.upper())
        self._entity_space.write(stream)
        stream.write(" 0\nENDSEC\n")

    def create_new_dxf_entity(self, _type, dxfattribs):
        """ Create a new DXF entity, add it to the entity database and add it to the entity space.
        """
        dxf_entity = self.dxffactory.create_db_entry(_type, dxfattribs)
        self._entity_space.add_handle(dxf_entity.dxf.handle)
        return dxf_entity

    def add_handle(self, handle):
        self._entity_space.add_handle(handle)

    def remove_handle(self, handle):
        self._entity_space.remove(handle)

    def delete_entity(self, entity):
        self.remove_handle(entity.dxf.handle)
        self.entitydb.delete_entity(entity)

    # start of public interface

    def __len__(self):
        return len(self._entity_space)

    def __contains__(self, handle):
        return handle in self._entity_space

    def query(self, query='*'):
        return EntityQuery(iter(self), query)

    def delete_all_entities(self):
        """ Delete all entities. """
        self._entity_space.delete_all_entities()

    # end of public interface
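The structural check at the top of `_build` can be exercised on its own; a minimal sketch (the `looks_like_section` helper is hypothetical, mirroring the condition above on plain `(code, value)` tag tuples):

```python
def looks_like_section(tags, name):
    # hypothetical helper mirroring AbstractSection._build's frame check:
    # a section must be framed as (0, 'SECTION') (2, NAME) ... (0, 'ENDSEC')
    return (tags[0] == (0, 'SECTION')
            and tags[1] == (2, name.upper())
            and tags[-1] == (0, 'ENDSEC'))


# a three-tag section is exactly the "empty entities section" early-return case
empty = [(0, 'SECTION'), (2, 'ENTITIES'), (0, 'ENDSEC')]
print(looks_like_section(empty, 'entities'))  # -> True
print(looks_like_section(empty, 'tables'))    # -> False
```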


# file: lessweb/__init__.py (qorzj/lessweb, MIT)
"""lessweb: create web apps in the most Python-3 way"""
__version__ = '0.3.3'
__author__ = [
    'qorzj <inull@qq.com>',
]
__license__ = "MIT"
# from . import application, context, model, storage, webapi
from .application import interceptor, Application
from .context import Context, Request, Response
from .storage import Storage
from .bridge import uint, ParamStr, MultipartFile, Jsonizable
from .webapi import BadParamError, NotFoundError, Cookie, HttpStatus, ResponseStatus
from .utils import _nil, eafp
from .client import Client
from .service import Service


# file: web/adoption_stories/adopteeStories/models.py (CuriousG102/Chinese-Adoption, MIT)
from django.db import models
from django.utils.encoding import force_text
from django.utils.translation import ugettext_lazy as _
from embed_video.fields import EmbedYoutubeField, EmbedSoundcloudField

from .custom_model_fields import RestrictedImageField
from .default_settings import ADOPTEE_STORIES_CONFIG as config


class NamesToStringMixin():
    NAME_ATTRIBUTES = ['english_name', 'pinyin_name', 'chinese_name']

    @property
    def name(self):
        s = []
        for name in self.NAME_ATTRIBUTES:
            name_string = getattr(self, name, None)
            if name_string:
                s.append(name_string)
        return ' '.join(s)


class Adoptee(models.Model, NamesToStringMixin):
    # english_name must have a value || (pinyin_name && chinese_name)
    # must have a value; implemented at form level
    english_name = models.CharField(max_length=150, null=True, blank=True,
                                    # Translators: Name of a field in the admin page
                                    db_index=True, verbose_name=_('English Name'))
    pinyin_name = models.CharField(max_length=150, null=True, blank=True,
                                   # Translators: Name of a field in the admin page
                                   db_index=True, verbose_name=_('Pinyin Name'))
    chinese_name = models.CharField(max_length=50, null=True, blank=True,
                                    # Translators: Name of a field in the admin page
                                    db_index=True, verbose_name=_('Chinese Name'))
    photo_front_story = RestrictedImageField(maximum_size=config['PHOTO_FRONT_STORY_MAX_SIZE'],
                                             required_width=config['PHOTO_FRONT_STORY_WIDTH'],
                                             required_height=config['PHOTO_FRONT_STORY_HEIGHT'],
                                             required_formats=config['FORMATS'],
                                             null=True, blank=True,
                                             # Translators: Name of a field in the admin page
                                             verbose_name=_('Photo Front Story'))
    # Translators: Name of a field in the admin page
    front_story = models.ForeignKey('StoryTeller', null=True, verbose_name=_('Front Story'), blank=True,
                                    limit_choices_to={'approved': True})
    # Translators: Name of a field in the admin page
    created = models.DateTimeField(auto_now_add=True, verbose_name=_('Created At'))
    # Translators: Name of a field in the admin page
    updated = models.DateTimeField(auto_now=True, verbose_name=_('Updated At'))

    class Meta:
        ordering = ['-created']
        # Translators: Name of a field in the admin page
        verbose_name = _('Adoptee')
        # Translators: Name of a field in the admin page
        verbose_name_plural = _('Adoptees')

    def __str__(self):
        string = ' '.join([force_text(self._meta.verbose_name), self.name])
        return string


class MultimediaItem(models.Model):
    # english_caption || chinese_caption must have a value; implemented
    # at form level
    english_caption = models.CharField(max_length=200, null=True, blank=True,
                                       # Translators: Name of a field in the admin page
                                       verbose_name=_('English Caption'))
    chinese_caption = models.CharField(max_length=200, null=True, blank=True,
                                       # Translators: Name of a field in the admin page
                                       verbose_name=_('Chinese Caption'))
    # Translators: Name of a field in the admin page
    approved = models.BooleanField(default=False, verbose_name=_('Approved'))
    # Translators: Name of a field in the admin page
    story_teller = models.ForeignKey('StoryTeller', null=True, verbose_name=_('Story Teller'))
    # Translators: Name of a field in the admin page
    created = models.DateTimeField(auto_now_add=True, verbose_name=_('Created At'))
    # Translators: Name of a field in the admin page
    updated = models.DateTimeField(auto_now=True, verbose_name=_('Updated At'))

    class Meta:
        verbose_name = _('Multimedia Item')
        abstract = True
        ordering = ['-created']

    def __str__(self):
        return ' '.join([force_text(self._meta.verbose_name), self.story_teller.name, force_text(self.created)])


class Audio(MultimediaItem):
    # Translators: name of field in the admin page
    audio = EmbedSoundcloudField(verbose_name=_('Audio Soundcloud Embed'))

    class Meta(MultimediaItem.Meta):
        abstract = False
        # Translators: Name of a field in the admin page
        verbose_name = _('Audio item')
        # Translators: Name of a field in the admin page
        verbose_name_plural = _('Audio items')


class Video(MultimediaItem):
    # Translators: name of field in the admin page
    video = EmbedYoutubeField(verbose_name=_('Video Youtube Embed'))

    class Meta(MultimediaItem.Meta):
        abstract = False
        # Translators: Name of a field in the admin page
        verbose_name = _('Video item')
        # Translators: Name of a field in the admin page
        verbose_name_plural = _('Video items')


class Photo(MultimediaItem):
    # file size and type checking added at form level
    # Translators: Name of a field in the admin page
    photo_file = models.ImageField(verbose_name=_('Photo File'))

    class Meta(MultimediaItem.Meta):
        abstract = False
        # Translators: Name of a field in the admin page
        verbose_name = _('Photo')
        # Translators: Name of a field in the admin page
        verbose_name_plural = _('Photos')


class RelationshipCategory(models.Model, NamesToStringMixin):
    # english_name must have a value || chinese_name must have a value at first,
    # but to publish both must have a value, or all stories with an untranslated
    # category must only show up on the English side/Chinese side
    # Translators: Name of a field in the admin page
    english_name = models.CharField(max_length=30, null=True, verbose_name=_('English Name'),
                                    blank=True)
    # Translators: Name of a field in the admin page
    chinese_name = models.CharField(max_length=30, null=True, verbose_name=_('Chinese Name'),
                                    blank=True)
    # Translators: Name of a field in the admin page
    approved = models.BooleanField(default=False, verbose_name=_('Approved'))
    # Translators: Name of a field in the admin page
    created = models.DateTimeField(auto_now_add=True,
                                   verbose_name=_('Created At'))
    # Translators: Name of a field in the admin page
    updated = models.DateTimeField(auto_now=True,
                                   verbose_name=_('Updated At'))
    # Translators: Label for the number determining the order of the relationship category for admins
    order = models.IntegerField(null=True, blank=True, verbose_name=_('Position of relationship category'))

    class Meta:
        ordering = ['order']
        # Translators: Name of a field in the admin page
        verbose_name = _('Relationship Category')
        # Translators: Name of a field in the admin page
        verbose_name_plural = _('Relationship Categories')

    def __str__(self):
        string = ' '.join([force_text(self._meta.verbose_name), self.name])
        return string


class StoryTeller(models.Model, NamesToStringMixin):
    relationship_to_story = models.ForeignKey('RelationshipCategory',
                                              # Translators: Name of a field in the admin page
                                              verbose_name=_('Relationship to Story'))
    # One version of the story text, because an adoptee's story should not differ depending on who is viewing it
    # Translators: Name of a field in the admin page
    story_text = models.TextField(verbose_name=_('Story Text'))
    # Translators: Name of a field in the admin page
    email = models.EmailField(verbose_name=_('Email'))
    # Translators: Name of a field in the admin page
    approved = models.BooleanField(default=False, verbose_name=_('Approved'))
    related_adoptee = models.ForeignKey('Adoptee', related_name='stories',
                                        # Translators: Name of a field in the admin page
                                        verbose_name=_('Related Adoptee'))
    # english_name must have a value || (pinyin_name && chinese_name)
    # must have a value; implemented at form level
    english_name = models.CharField(max_length=150, null=True,
                                    # Translators: Name of a field in the admin page
                                    verbose_name=_('English Name'),
                                    blank=True)
    chinese_name = models.CharField(max_length=50, null=True,
                                    # Translators: Name of a field in the admin page
                                    verbose_name=_('Chinese Name'),
                                    blank=True)
    pinyin_name = models.CharField(max_length=150, null=True,
                                   # Translators: Name of a field in the admin page
                                   verbose_name=_('Pinyin Name'),
                                   blank=True)
    created = models.DateTimeField(auto_now_add=True,
                                   # Translators: Name of a field in the admin page
                                   verbose_name=_('Created At'))
    updated = models.DateTimeField(auto_now=True,
                                   # Translators: Name of a field in the admin page
                                   verbose_name=_('Updated At'))

    class Meta:
        ordering = ['-updated', '-created']
        # Translators: Name of a field in the admin page
        verbose_name = _('Story Teller')
        # Translators: Name of a field in the admin page
        verbose_name_plural = _('Story Tellers')

    def __str__(self):
        string = ' '.join([force_text(self._meta.verbose_name), self.name])
        return string


class AboutPerson(models.Model, NamesToStringMixin):
    photo = RestrictedImageField(maximum_size=config['PHOTO_FRONT_STORY_MAX_SIZE'],
                                 required_height=config['ABOUT_PHOTO_HEIGHT'],
                                 required_width=config['ABOUT_PHOTO_WIDTH'],
                                 required_formats=config['FORMATS'],
                                 verbose_name=_('Picture of person on about page'))
    english_caption = models.CharField(max_length=200, null=True, blank=True,
                                       # Translators: Name of a field in the admin page
                                       verbose_name=_('English Caption'))
    chinese_caption = models.CharField(max_length=200, null=True, blank=True,
                                       # Translators: Name of a field in the admin page
                                       verbose_name=_('Chinese Caption'))
    about_text_english = models.TextField(verbose_name=_('About text for that person in English.'),
                                          help_text=_('Should include paragraph markup:'
                                                      'e.g. <p>This is a paragraph</p>'
                                                      '<p>This is a different paragraph</p>'),
                                          null=True, blank=True)
    about_text_chinese = models.TextField(verbose_name=_('About text for that person in Chinese.'),
                                          help_text=_('Should include paragraph markup:'
                                                      'e.g. <p>This is a paragraph</p>'
                                                      '<p>This is a different paragraph</p>'),
                                          null=True, blank=True)
    published = models.BooleanField(verbose_name=_('Published status'))
    english_name = models.CharField(max_length=150, null=True,
                                    # Translators: Name of a field in the admin page
                                    verbose_name=_('English Name'),
                                    blank=True)
    chinese_name = models.CharField(max_length=50, null=True,
                                    # Translators: Name of a field in the admin page
                                    verbose_name=_('Chinese Name'),
                                    blank=True)
    pinyin_name = models.CharField(max_length=150, null=True,
                                   # Translators: Name of a field in the admin page
                                   verbose_name=_('Pinyin Name'),
                                   blank=True)
    order = models.IntegerField(verbose_name=_('Position of person in about page'))

    class Meta:
        ordering = ['order']
        verbose_name = _('About Person')
        verbose_name_plural = _('About People')

    def __str__(self):
        string = ' '.join([force_text(self._meta.verbose_name), self.name])
        return string


# file: core/utils/gpu_check.py (andregri/keras-segmentation, Apache-2.0)
import tensorflow as tf


def check_gpu():
    n_gpus = len(tf.config.experimental.list_physical_devices('GPU'))
    print("Num GPUs Available: ", n_gpus)


check_gpu()


# file: cavoke_server/tasks.py (cavoke-project/cavoke_server, MIT)
# from celery.schedules import crontab
# from celery.task import periodic_task
# from django.utils import timezone
# from cavoke_app.models import GameSession
#
#
# @periodic_task(run_every=crontab(minute='*/1'))
# def delete_old_foos():
#     # Query all the foos in our database
#     gss = GameSession.objects.all()
#
#     # Iterate through them
#     for gs in gss:
#
#         # If the expiration date is in the past, delete it
#         if gs.expiresOn < timezone.now():
#             gs.delete()
#     # log deletion
#     return "completed deleting foos at {}".format(timezone.now())
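The commented-out task boils down to "delete every session whose `expiresOn` is in the past". A framework-free sketch of that logic, with plain objects in place of `GameSession` and a fixed clock in place of `timezone.now()` (all names here are hypothetical):

```python
from datetime import datetime, timedelta


class FakeSession:
    # hypothetical stand-in for the GameSession model
    def __init__(self, name, expires_on):
        self.name = name
        self.expiresOn = expires_on


now = datetime(2020, 1, 6, 12, 0, 0)
sessions = [
    FakeSession("old", now - timedelta(minutes=5)),
    FakeSession("live", now + timedelta(minutes=5)),
]

# keep only sessions that have not expired yet (the task deletes the rest)
kept = [s for s in sessions if s.expiresOn >= now]
print([s.name for s in kept])  # -> ['live']
```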


# file: packages/models-library/src/models_library/__init__.py (elisabettai/osparc-simcore, MIT)
""" osparc's service models library
"""
#
# NOTE:
# - "examples" = [ ...] keyword and NOT "example". See https://json-schema.org/understanding-json-schema/reference/generic.html#annotations
#
import pkg_resources
__version__: str = pkg_resources.get_distribution("simcore-models-library").version


# file: src/waldur_ansible/python_management/migrations/0004_removed_virtual_env_field_from_global_requests.py (opennode/waldur-ansible, MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.9 on 2018-03-27 14:01
from __future__ import unicode_literals

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('python_management', '0003_added_unique_constraint'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='pythonmanagementdeleterequest',
            name='virtual_env_name',
        ),
        migrations.RemoveField(
            model_name='pythonmanagementfindvirtualenvsrequest',
            name='virtual_env_name',
        ),
    ]


# file: api-reference-examples/python/pytx/pytx/threat_descriptor.py (b-bold/ThreatExchange, BSD-3-Clause)
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from .common import Common
from .vocabulary import ThreatDescriptor as td
from .vocabulary import ThreatExchange as t


class ThreatDescriptor(Common):

    _URL = t.URL + t.VERSION + t.THREAT_DESCRIPTORS
    _DETAILS = t.URL + t.VERSION
    _RELATED = t.URL + t.VERSION

    _fields = [
        td.ADDED_ON,
        td.CONFIDENCE,
        td.DESCRIPTION,
        td.EXPIRED_ON,
        td.FIRST_ACTIVE,
        td.ID,
        td.INDICATOR,
        td.LAST_ACTIVE,
        td.LAST_UPDATED,
        td.METADATA,
        td.MY_REACTIONS,
        td.OWNER,
        td.PRECISION,
        td.PRIVACY_MEMBERS,
        td.PRIVACY_TYPE,
        td.RAW_INDICATOR,
        td.REVIEW_STATUS,
        td.SEVERITY,
        td.SHARE_LEVEL,
        td.SOURCE_URI,
        td.STATUS,
        td.TAGS,
        td.TYPE,
    ]

    _default_fields = [
        td.ADDED_ON,
        td.CONFIDENCE,
        td.DESCRIPTION,
        td.EXPIRED_ON,
        td.FIRST_ACTIVE,
        td.ID,
        td.INDICATOR,
        td.LAST_ACTIVE,
        td.LAST_UPDATED,
        td.METADATA,
        td.MY_REACTIONS,
        td.OWNER,
        td.PRECISION,
        td.RAW_INDICATOR,
        td.REVIEW_STATUS,
        td.SEVERITY,
        td.SHARE_LEVEL,
        td.SOURCE_URI,
        td.STATUS,
        td.TAGS,
        td.TYPE,
    ]

    _connections = [
    ]

    _unique = [
    ]
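Field lists such as `_default_fields` are typically joined into a comma-separated `fields` request parameter; a sketch with hypothetical literal field names standing in for the `td.*` vocabulary constants:

```python
# hypothetical values; the real ones come from pytx's vocabulary module
default_fields = ["added_on", "confidence", "indicator", "status"]

# Graph-API-style request parameters built from the field list
params = {"fields": ",".join(default_fields)}
print(params["fields"])  # -> added_on,confidence,indicator,status
```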


# file: src/integ_test_resources/common/platforms.py (kaichengyan/amplify-ci-support, Apache-2.0)
from enum import Enum


class Platform(Enum):
    IOS = "ios"
    ANDROID = "android"
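Members of an `Enum` defined like this can be looked up both by value and by name, which is the usual way such a platform flag is parsed; a small self-contained demo:

```python
from enum import Enum


class Platform(Enum):
    # same members as above
    IOS = "ios"
    ANDROID = "android"


# lookup by value (e.g. from a CLI argument) and by member name
assert Platform("android") is Platform.ANDROID
assert Platform["IOS"].value == "ios"
print(list(Platform))  # -> [<Platform.IOS: 'ios'>, <Platform.ANDROID: 'android'>]
```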


# file: examples/ivis_job/docker_build.py (smartarch/qoscloud, MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
This script builds the docker images of image client and recognizer server and pushes them to dockerhub.
"""
from subprocess import call
print("Building the default docker image")
call("docker build -t d3srepo/qoscloud-default -f Dockerfile ../..", shell=True)
print("Pushing images to DockerHub")
call("docker push d3srepo/qoscloud-default", shell=True)
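`call` returns the command's exit status, which the script above discards; checking it (or using `check_call`, which raises on failure) makes a broken `docker build` visible. A sketch substituting a harmless Python subprocess for `docker`:

```python
import sys
from subprocess import call, check_call, CalledProcessError

# call() returns the exit code
rc = call([sys.executable, "-c", "pass"])
print(rc)  # -> 0

# check_call() raises CalledProcessError on a non-zero exit code
try:
    check_call([sys.executable, "-c", "raise SystemExit(1)"])
    failed = False
except CalledProcessError as exc:
    failed = exc.returncode == 1
print(failed)  # -> True
```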


# file: Semana 07/frequencia.py (heltonricardo/grupo-estudos-maratonas-programacao, MIT)
n = int(input())
v = []
for i in range(n): v.append(int(input()))
s = sorted(set(v))
for i in s: print(f'{i} aparece {v.count(i)} vez (es)')
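The loop prints, for each distinct value, how many times it appears (`aparece … vez (es)` is Portuguese for "appears … time(s)"). Calling `v.count(i)` inside the loop makes this O(n²); `collections.Counter` does the same counting in one pass. A sketch with fixed data in place of `input()`:

```python
from collections import Counter

v = [3, 1, 3, 2, 3, 1]
freqs = sorted(Counter(v).items())
for value, freq in freqs:
    print(f'{value} appears {freq} time(s)')
# -> 1 appears 2 time(s)
#    2 appears 1 time(s)
#    3 appears 3 time(s)
```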


# file: tests/core/test_base_component.py (strickvl/zenml, Apache-2.0)
# Copyright (c) ZenML GmbH 2021. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
from typing import Text

from zenml.core.base_component import BaseComponent


class MockComponent(BaseComponent):
    """Mocking the base component for testing."""

    tmp_path: str

    def get_serialization_dir(self) -> Text:
        """Mock serialization dir"""
        return self.tmp_path


def test_base_component_serialization_logic(tmp_path):
    """Tests the UUID serialization logic of BaseComponent"""
    # Application of the monkeypatch to replace Path.home
    # with the behavior of mockreturn defined above.
    # mc = MockComponent(tmp_path=str(tmp_path))
    # Calling getssh() will use mockreturn in place of Path.home
    # for this test with the monkeypatch.
    # print(mc.get_serialization_dir())
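The commented-out body sketches pytest's `monkeypatch` replacing `Path.home`. The same patching idea can be shown with the standard library's `unittest.mock`, independent of zenml and pytest (the `serialization_dir` helper here is hypothetical):

```python
from pathlib import Path
from unittest import mock


def serialization_dir():
    # hypothetical: resolve a config dir under the user's home
    return Path.home() / ".zenml"


# while patched, Path.home() returns the fake directory
with mock.patch.object(Path, "home", return_value=Path("/tmp/fake_home")):
    patched = serialization_dir()

assert patched == Path("/tmp/fake_home") / ".zenml"
```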


# file: courses/migrations/0009_alter_skills_program_duration_and_more.py (sisekelohub/sisekelo, MIT)
# Generated by Django 4.0 on 2022-01-02 21:51
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('courses', '0008_alter_learnership_duration'),
    ]

    operations = [
        migrations.AlterField(
            model_name='skills_program',
            name='duration',
            field=models.CharField(max_length=50, null=True),
        ),
        migrations.AlterField(
            model_name='specialized_course',
            name='duration',
            field=models.CharField(max_length=50, null=True),
        ),
    ]


# file: ocdskingfisher/sources/digiwhist_germany.py (odscjames/lhs-alpha, BSD-3-Clause)
from ocdskingfisher.sources.digiwhist_base import DigiwhistBaseSource


class DigiwhistGermanyRepublicSource(DigiwhistBaseSource):
    publisher_name = 'Digiwhist Germany'
    url = 'https://opentender.eu/download'
    source_id = 'digiwhist_germany'

    def get_data_url(self):
        return 'https://opentender.eu/data/files/DE_ocds_data.json.tar.gz'


# file: reagent/core/fb_checker.py (alexnikulkov/ReAgent, BSD-3-Clause)
#!/usr/bin/env python3
import importlib.util
def is_fb_environment():
    # True when the "fblearner" package is importable, i.e. we are inside the FB environment.
    return importlib.util.find_spec("fblearner") is not None
IS_FB_ENVIRONMENT = is_fb_environment()
| 17.416667 | 57 | 0.727273 | 30 | 209 | 4.833333 | 0.666667 | 0.082759 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005848 | 0.181818 | 209 | 11 | 58 | 19 | 0.842105 | 0.100478 | 0 | 0 | 0 | 0 | 0.048128 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
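The `fb_checker.py` file above detects its environment by probing whether a package is importable, without actually importing it. A minimal standalone sketch of that `importlib.util.find_spec` pattern (the helper name `module_available` is my own, not from the file):

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` could be imported, without importing it."""
    return importlib.util.find_spec(name) is not None


# Stdlib modules resolve; a made-up name returns None from find_spec.
print(module_available("json"))                  # True
print(module_available("no_such_package_xyz"))   # False
```

Using `find_spec` instead of a try/except around `import` avoids running the target package's import-time side effects, which is why it suits a module-level flag like `IS_FB_ENVIRONMENT`.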
4f7749d7fdeb213a71c50fac2777773a4cae4cde | 982 | py | Python | sdk/ml/azure-ai-ml/azure/ai/ml/_schema/_sweep/search_space/randint.py | dubiety/azure-sdk-for-python | 62ffa839f5d753594cf0fe63668f454a9d87a346 | [
"MIT"
] | 1 | 2022-02-01T18:50:12.000Z | 2022-02-01T18:50:12.000Z | sdk/ml/azure-ai-ml/azure/ai/ml/_schema/_sweep/search_space/randint.py | ellhe-blaster/azure-sdk-for-python | 82193ba5e81cc5e5e5a5239bba58abe62e86f469 | [
"MIT"
] | null | null | null | sdk/ml/azure-ai-ml/azure/ai/ml/_schema/_sweep/search_space/randint.py | ellhe-blaster/azure-sdk-for-python | 82193ba5e81cc5e5e5a5239bba58abe62e86f469 | [
"MIT"
] | null | null | null | # ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from azure.ai.ml.constants import SearchSpace
from marshmallow import fields, post_load, pre_dump, ValidationError
from azure.ai.ml._schema.core.fields import StringTransformedEnum
from azure.ai.ml._schema.core.schema import PatchedSchemaMeta
class RandintSchema(metaclass=PatchedSchemaMeta):
type = StringTransformedEnum(required=True, allowed_values=SearchSpace.RANDINT)
upper = fields.Integer(required=True)
@post_load
def make(self, data, **kwargs):
from azure.ai.ml.sweep import Randint
return Randint(**data)
@pre_dump
def predump(self, data, **kwargs):
from azure.ai.ml.sweep import Randint
if not isinstance(data, Randint):
raise ValidationError("Cannot dump non-Randint object into RandintSchema")
return data
| 35.071429 | 86 | 0.649695 | 105 | 982 | 6.009524 | 0.495238 | 0.071315 | 0.087163 | 0.103011 | 0.215531 | 0.215531 | 0.142631 | 0.142631 | 0.142631 | 0.142631 | 0 | 0 | 0.158859 | 982 | 27 | 87 | 36.37037 | 0.763923 | 0.176171 | 0 | 0.117647 | 0 | 0 | 0.06087 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.352941 | 0 | 0.764706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |