hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
06e9ddbd5d41c9f96f73c520c1781fa771359353 | 307 | py | Python | django/pactf_web/ctflex_helpers.py | milesmcc/pactf-2018 | cfd9d94a7b6828259220f52ab3c5893a28429c62 | [
"MIT"
] | 10 | 2016-02-29T00:30:54.000Z | 2021-04-18T17:47:54.000Z | django/pactf_web/ctflex_helpers.py | milesmcc/pactf-2018 | cfd9d94a7b6828259220f52ab3c5893a28429c62 | [
"MIT"
] | 1 | 2018-06-21T21:55:33.000Z | 2018-06-25T12:49:59.000Z | django/pactf_web/ctflex_helpers.py | milesmcc/pactf-2018 | cfd9d94a7b6828259220f52ab3c5893a28429c62 | [
"MIT"
] | 8 | 2016-04-01T14:57:05.000Z | 2019-04-01T03:37:21.000Z | """Define objects for CTFlex"""
# def eligible(team):
# """Determine eligibility of team
#
# This function is used by CTFlex.
# """
# return (team.standing == team.GOOD_STANDING
# and team.country == team.US_COUNTRY
# and team.background == team.SCHOOL_BACKGROUND)
| 25.583333 | 60 | 0.618893 | 35 | 307 | 5.342857 | 0.657143 | 0.074866 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257329 | 307 | 11 | 61 | 27.909091 | 0.820175 | 0.921824 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
06eed773166c195e6fef5822c122e9bbd71d88e9 | 1,207 | py | Python | purbeurre_project/settings/dev.py | etiennody/purbeurre-v2 | cee10b5ad3ccee6535f197070cd4ee80f2bad5d0 | [
"MIT"
] | null | null | null | purbeurre_project/settings/dev.py | etiennody/purbeurre-v2 | cee10b5ad3ccee6535f197070cd4ee80f2bad5d0 | [
"MIT"
] | 3 | 2020-10-12T13:58:38.000Z | 2020-11-12T01:02:14.000Z | purbeurre_project/settings/dev.py | etiennody/purbeurre-v2 | cee10b5ad3ccee6535f197070cd4ee80f2bad5d0 | [
"MIT"
] | 1 | 2021-02-03T18:49:31.000Z | 2021-02-03T18:49:31.000Z | """
Django settings for purbeurre_project project.
Generated by 'django-admin startproject' using Django 3.0.8.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
from .base import *
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get("PURBEURRE_SECRET_KEY", "idsqn%qsm!dfihzq@zml-fpvn9s_qdf")
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": os.environ.get("PURBEURRE_DBNAME", "purbeurre_dbname"),
"USER": os.environ.get("PURBEURRE_DBUSER", "purbeurre_dbuser"),
"PASSWORD": os.environ.get("PURBEURRE_DBPASSWD", "purbeurre_dbpasswd"),
"HOST": "localhost",
"PORT": 5432,
}
}
| 28.738095 | 86 | 0.714167 | 164 | 1,207 | 5.182927 | 0.536585 | 0.011765 | 0.103529 | 0.117647 | 0.172941 | 0.172941 | 0.172941 | 0.172941 | 0.094118 | 0 | 0 | 0.021359 | 0.146645 | 1,207 | 41 | 87 | 29.439024 | 0.803884 | 0.529412 | 0 | 0 | 1 | 0 | 0.440433 | 0.108303 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
06faa994d32ecf945ca2ab1aed8e72b2c0d2fc89 | 1,188 | py | Python | python/factualaudio/plot.py | factualaudio/factualaudio | 8b9b824e98710470d9d09931688277f667b85a6e | [
"CC-BY-4.0"
] | 4 | 2017-11-20T21:25:24.000Z | 2019-12-13T06:52:51.000Z | python/factualaudio/plot.py | factualaudio/factualaudio | 8b9b824e98710470d9d09931688277f667b85a6e | [
"CC-BY-4.0"
] | null | null | null | python/factualaudio/plot.py | factualaudio/factualaudio | 8b9b824e98710470d9d09931688277f667b85a6e | [
"CC-BY-4.0"
] | null | null | null | from factualaudio.data import noise
from factualaudio.decibel import to_decibels
import numpy as np
def waveform(axes, wave, sample_rate, *args, **kwargs):
return axes.plot(np.arange(0, wave.size) * (1000 / sample_rate), wave, *args, **kwargs)
# Equivalent to axes.amplitude_spectrum(), but plots on an RMS amplitude scale.
# (i.e. an input sine wave of RMS amplitude X will show up as X on the plot)
def rms_amplitude_spectrum(axes, wave, noise_level=1e-14, *args, **kwargs):
kwargs.setdefault("window", np.ones(wave.size))
kwargs.setdefault("scale", "dB")
# Add some noise to avoid numerical issues when converting to dB
wave += noise(wave.size) * noise_level
return axes.magnitude_spectrum(wave * np.sqrt(2), *args, **kwargs)
def transfer_function_gain(axes, transfer_function, corner_frequency=1000):
x = np.linspace(0, 20000, num=1000)
return axes.semilogx(x, to_decibels(np.absolute(transfer_function(x * (1j / corner_frequency)))))
def transfer_function_phase(axes, transfer_function, corner_frequency=1000):
x = np.linspace(0, 20000, num=1000)
return axes.semilogx(x, np.angle(transfer_function(x * (1j / corner_frequency)), deg=True))
| 49.5 | 101 | 0.736532 | 179 | 1,188 | 4.765363 | 0.424581 | 0.112544 | 0.044549 | 0.060961 | 0.271981 | 0.271981 | 0.192263 | 0.192263 | 0.192263 | 0.192263 | 0 | 0.038198 | 0.140572 | 1,188 | 23 | 102 | 51.652174 | 0.797258 | 0.180976 | 0 | 0.125 | 0 | 0 | 0.013416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.1875 | 0.0625 | 0.6875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
06fdc9a1f105198d36e30385ea61aa831b5f5d77 | 1,239 | py | Python | gslides/__init__.py | michael-gracie/gslides | 6400184454630e05559b20e8c6e6f52f42175d9c | [
"MIT"
] | 17 | 2021-08-12T14:18:18.000Z | 2022-02-20T18:54:35.000Z | gslides/__init__.py | michael-gracie/gslides | 6400184454630e05559b20e8c6e6f52f42175d9c | [
"MIT"
] | 17 | 2021-05-01T02:05:30.000Z | 2022-03-04T06:09:02.000Z | gslides/__init__.py | michael-gracie/gslides | 6400184454630e05559b20e8c6e6f52f42175d9c | [
"MIT"
] | 3 | 2021-08-29T20:23:47.000Z | 2021-12-09T18:36:40.000Z | # -*- coding: utf-8 -*-
"""Top-level package for gslides."""
__author__ = """Michael Gracie"""
__email__ = ""
__version__ = "0.1.1"
from typing import Optional
from google.oauth2.credentials import Credentials
from .config import CHART_PARAMS, Creds, Font, PackagePalette
creds = Creds()
package_font = Font()
package_palette = PackagePalette()
def initialize_credentials(credentials: Optional[Credentials]) -> None:
"""Intializes credentials for all classes in the package.
:param credentials: Credentials to build api connection
    :type credentials: google.oauth2.credentials.Credentials
"""
creds.set_credentials(credentials)
def set_font(font: str) -> None:
"""Sets the font for all objects
:param font: Font
:type font: str
"""
package_font.set_font(font)
def set_palette(palette: str) -> None:
"""Sets the palette for all charts
:param palette: The palette to use
:type palette: str
"""
package_palette.set_palette(palette)
from .chart import Chart, Series # noqa
from .colors import Palette # noqa
from .frame import Frame # noqa
from .presentation import Presentation # noqa
from .spreadsheet import Spreadsheet # noqa
from .table import Table # noqa
| 23.377358 | 71 | 0.716707 | 153 | 1,239 | 5.653595 | 0.372549 | 0.046243 | 0.025434 | 0.03237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005941 | 0.184826 | 1,239 | 52 | 72 | 23.826923 | 0.850495 | 0.325262 | 0 | 0 | 0 | 0 | 0.024675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.428571 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
6606d949f05e809b7a1b6806050e0cbdb985a4ab | 226 | py | Python | venv/lib/python3.8/site-packages/crispy_forms/templates/bootstrap/uni_form.html.py | Solurix/Flashcards-Django | 03c863f6722936093927785a2b20b6b668bb743d | [
"MIT"
] | 1 | 2021-05-16T03:20:23.000Z | 2021-05-16T03:20:23.000Z | venv/lib/python3.8/site-packages/crispy_forms/templates/bootstrap/uni_form.html.py | Solurix/Flashcards-Django | 03c863f6722936093927785a2b20b6b668bb743d | [
"MIT"
] | 4 | 2021-03-30T14:06:09.000Z | 2021-09-22T19:26:31.000Z | venv/lib/python3.8/site-packages/crispy_forms/templates/bootstrap/uni_form.html.py | Solurix/Flashcards-Django | 03c863f6722936093927785a2b20b6b668bb743d | [
"MIT"
] | null | null | null | BBBB BBBBBBBBBBBBBBBBBB
BBBBBBBBBBBBBBBB
BB BBBBBBBBBBBBBBBBBB
BB BBBBBBBBBBBBBBBB
BBBBBBB BBBBBBBBBBBBBBBBBBBBBBB
BBBBB
BBB BBBBB BB BBBB
BBBBBBB BBBBBBBBBBBBBB
BBBBBB
BBBBBBBBBBBBBBBBBBB
| 18.833333 | 39 | 0.756637 | 18 | 226 | 9.5 | 0.611111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.243363 | 226 | 11 | 40 | 20.545455 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
66240156d89516ae659c3ee7adc4f28dbb7842a2 | 3,487 | py | Python | mva_tools/build_roc_simple.py | kpedro88/koza4ok | 32eaad4c7f352c53c4455e89123585607d4f4f74 | [
"MIT"
] | 11 | 2016-01-12T17:38:59.000Z | 2021-07-12T09:05:09.000Z | mva_tools/build_roc_simple.py | kpedro88/koza4ok | 32eaad4c7f352c53c4455e89123585607d4f4f74 | [
"MIT"
] | 9 | 2015-12-15T23:10:18.000Z | 2016-11-12T10:59:39.000Z | mva_tools/build_roc_simple.py | kpedro88/koza4ok | 32eaad4c7f352c53c4455e89123585607d4f4f74 | [
"MIT"
] | 7 | 2016-02-15T04:10:26.000Z | 2021-12-27T05:52:25.000Z | import sys
from array import array
import numpy as np
import ROOT
from ROOT import TH1F
xrange_bins = lambda nbins: range(1, nbins + 1)
def build_roc(h_sig, h_bkg, verbose=0):
nbins_sig = h_sig.GetXaxis().GetNbins()
nbins_bkg = h_bkg.GetXaxis().GetNbins()
nbins = nbins_sig if nbins_sig == nbins_bkg else -1
#print nbins
if nbins < 0.0:
sys.exit("Error: nbins_sig != nbins_bkg")
min_val = h_sig.GetXaxis().GetXmin()
max_val = h_sig.GetXaxis().GetXmax()
step = float(max_val - min_val)/(nbins-1)
sig_eff = array('f', [])
bkg_rej = array('f', [])
total_sig = h_sig.Integral()
total_bkg = h_bkg.Integral()
    print(total_sig)
    print(total_bkg)
sig_rejected = 0.0
bkg_rejected = 0.0
for i in xrange_bins(nbins):
sig_rejected += h_sig.GetBinContent(i)
#print sig_rejected
bkg_rejected += h_bkg.GetBinContent(i)
#print bkg_rejected
seff = float(total_sig-sig_rejected)/total_sig
brej = float(bkg_rejected)/total_bkg
#print seff, brej
sig_eff.append(seff)
bkg_rej.append(brej)
if verbose == 1:
bdt_score = min_val + i * step
            print("bdt score =", bdt_score, "sig_eff =", seff)
#bin_sig = h_sig.GetBinContent()
    print("Overflow =", h_sig.GetBinContent(nbins_sig+1))
    print("Underflow =", h_sig.GetBinContent(0))
g = ROOT.TGraph(nbins, sig_eff, bkg_rej)
g.GetXaxis().SetRangeUser(0.0,1.0)
g.GetYaxis().SetRangeUser(0.0,1.0)
#g.Draw("AC")
#g.SetLineColor(ROOT.kRed)
g.SetTitle("ROC curve")
return g
#print nbins_bkg
def column_or_1d(y):
shape = np.shape(y)
if len(shape) == 1:
        return np.ravel(y)
if len(shape) == 2 and shape[1] == 1:
return np.ravel(y)
def roc_curve_sk(y_test, sk_y_predicted, step = 0.001):
pos_label = 1 # prompt lepton
neg_label = 0 # non-prompt lepton
y_test = column_or_1d(y_test)
assert y_test.size == sk_y_predicted.size, "Error: len(y_test) != len(sk_y_predicted)"
assert np.unique(y_test).size == 2, "Error: number of classes is %i. Expected 2 classes only." % np.unique(y_test).size
# calculate number of prompt and non-prompt leptons
num_prompts = np.sum(y_test == pos_label)
num_nonprompts = y_test.size - num_prompts
# array of probability cut values
# minimum probability is 0, maximum is one,
    # and step is defined above
cuts = np.arange(0, 1, step)
tpr = []
fpr = []
for cut in cuts:
prompts_passed = 0
nonprompts_passed = 0
indxs = np.where(sk_y_predicted >= cut)
y_test_passed = np.take(y_test, indxs)
prompts_passed = np.sum(y_test_passed == pos_label)
nonprompts_passed = np.sum(y_test_passed == neg_label)
tpr.append(float(prompts_passed)/num_prompts)
fpr.append(float(nonprompts_passed)/num_nonprompts)
return np.array(fpr), np.array(tpr), 0
if __name__ == "__main__":
path = "/Users/musthero/Documents/Yura/Applications/tmva_local/BDT_score_distributions_electrons.root"
histo_sk_sig = "histo_sk_sig"
histo_sk_bkg = "histo_sk_bkg"
rootfile = ROOT.TFile.Open(path)
if rootfile.IsZombie():
        print("Root file is corrupt")
h_sig = rootfile.Get(histo_sk_sig)
h_bkg = rootfile.Get(histo_sk_bkg)
g = build_roc(h_sig, h_bkg)
ll = [g]
g.Draw("AL") # draw TGraph with no marker dots
| 26.618321 | 123 | 0.641239 | 526 | 3,487 | 4.003802 | 0.269962 | 0.030864 | 0.032289 | 0.014245 | 0.081197 | 0.052232 | 0 | 0 | 0 | 0 | 0 | 0.0163 | 0.243476 | 3,487 | 130 | 124 | 26.823077 | 0.782032 | 0.103527 | 0 | 0 | 0 | 0 | 0.104569 | 0.029923 | 0 | 0 | 0 | 0 | 0.025316 | 0 | null | null | 0.088608 | 0.063291 | null | null | 0.075949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
664f2d91c6b8a25d0d209116cc1196d2264e5f1b | 73 | py | Python | homeassistant/components/senz/const.py | jlmaners/core | 9d016dd4346ec776da40f816764a5be441e34a3b | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | homeassistant/components/senz/const.py | jlmaners/core | 9d016dd4346ec776da40f816764a5be441e34a3b | [
"Apache-2.0"
] | 24,710 | 2016-04-13T08:27:26.000Z | 2020-03-02T12:59:13.000Z | homeassistant/components/senz/const.py | jlmaners/core | 9d016dd4346ec776da40f816764a5be441e34a3b | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """Constants for the nVent RAYCHEM SENZ integration."""
DOMAIN = "senz"
| 18.25 | 55 | 0.712329 | 9 | 73 | 5.777778 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150685 | 73 | 3 | 56 | 24.333333 | 0.83871 | 0.671233 | 0 | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
664ffef782860b15cb2f82900081a3097e48b4db | 7,546 | py | Python | src/py42/clients/orgs.py | unparalleled-js/py42 | 8c6b054ddd8c2bfea92bf77b0d648af76f1efcf1 | [
"MIT"
] | 1 | 2020-08-18T22:00:22.000Z | 2020-08-18T22:00:22.000Z | src/py42/clients/orgs.py | unparalleled-js/py42 | 8c6b054ddd8c2bfea92bf77b0d648af76f1efcf1 | [
"MIT"
] | null | null | null | src/py42/clients/orgs.py | unparalleled-js/py42 | 8c6b054ddd8c2bfea92bf77b0d648af76f1efcf1 | [
"MIT"
] | 1 | 2021-05-10T23:33:34.000Z | 2021-05-10T23:33:34.000Z | import json
from py42 import settings
from py42.clients import BaseClient
from py42.clients.util import get_all_pages
class OrgClient(BaseClient):
"""A client for interacting with Code42 organization APIs.
Use the OrgClient to create and retrieve organizations. You can also use it to block and
deactivate organizations.
"""
def create_org(self, org_name, org_ext_ref=None, notes=None, parent_org_uid=None):
"""Creates a new organization.
`REST Documentation <https://console.us.code42.com/apidocviewer/#Org-post>`__
Args:
org_name (str): The name of the new organization.
org_ext_ref (str, optional): External reference information,
such as a serial number, asset tag, employee ID, or help desk issue ID. Defaults to
None.
notes (str, optional): Descriptive information about the organization. Defaults to None.
parent_org_uid (int, optional): The org UID for the parent organization. Defaults to
None.
Returns:
:class:`py42.response.Py42Response`
"""
uri = u"/api/Org/"
data = {
u"orgName": org_name,
u"orgExtRef": org_ext_ref,
u"notes": notes,
u"parentOrgUid": parent_org_uid,
}
return self._session.post(uri, data=json.dumps(data))
def get_by_id(self, org_id, **kwargs):
"""Gets the organization with the given ID.
`REST Documentation <https://console.us.code42.com/apidocviewer/#Org-get>`__
Args:
org_id (int): An ID for an organization.
Returns:
:class:`py42.response.Py42Response`: A response containing the organization.
"""
uri = u"/api/Org/{}".format(org_id)
return self._session.get(uri, params=kwargs)
def get_by_uid(self, org_uid, **kwargs):
"""Gets the organization with the given UID.
`REST Documentation <https://console.us.code42.com/apidocviewer/#Org-get>`__
Args:
org_uid (str): A UID for an organization.
Returns:
:class:`py42.response.Py42Response`: A response containing the organization.
"""
uri = u"/api/Org/{}".format(org_uid)
params = dict(idType=u"orgUid", **kwargs)
return self._session.get(uri, params=params)
def get_page(self, page_num, page_size=None, **kwargs):
"""Gets an individual page of organizations.
`REST Documentation <https://console.us.code42.com/apidocviewer/#Org-get>`__
Args:
page_num (int): The page number to request.
page_size (int, optional): The number of organizations to return per page.
Defaults to `py42.settings.items_per_page`.
kwargs (dict, optional): Additional advanced-user arguments. Defaults to None.
Returns:
:class:`py42.response.Py42Response`
"""
page_size = page_size or settings.items_per_page
uri = u"/api/Org"
params = dict(pgNum=page_num, pgSize=page_size, **kwargs)
return self._session.get(uri, params=params)
def get_all(self, **kwargs):
"""Gets all organizations.
`REST Documentation <https://console.us.code42.com/apidocviewer/#Org-get>`__
Returns:
generator: An object that iterates over :class:`py42.response.Py42Response` objects
that each contain a page of organizations.
"""
return get_all_pages(self.get_page, u"orgs", **kwargs)
def block(self, org_id):
"""Blocks the organization with the given org ID as well as its child organizations. A
blocked organization will not allow any of its users or devices to log in. New
registrations will be rejected and all currently logged in clients will be logged out.
Backups continue for any devices that are still active.
`Rest Documentation <https://console.us.code42.com/apidocviewer/#OrgBlock-put>`__
Args:
org_id (int): An ID for an organization.
Returns:
:class:`py42.response.Py42Response`
"""
uri = u"/api/OrgBlock/{}".format(org_id)
return self._session.put(uri)
def unblock(self, org_id):
"""Removes a block, if one exists, on an organization and its descendants with the given
ID. All users in the organization remain blocked until they are unblocked individually.
`REST Documentation <https://console.us.code42.com/apidocviewer/#OrgBlock-delete>`__
Args:
org_id (int): An ID for an organization.
Returns:
:class:`py42.response.Py42Response`
"""
uri = u"/api/OrgBlock/{}".format(org_id)
return self._session.delete(uri)
def deactivate(self, org_id):
"""Deactivates the organization with the given ID, including all users, plans, and
devices. Backups stop and archives move to cold storage.
`REST Documentation <https://console.us.code42.com/apidocviewer/#OrgDeactivation-put>`__
Args:
org_id (int): An ID for an organization.
Returns:
:class:`py42.response.Py42Response`
"""
uri = u"/api/OrgDeactivation/{}".format(org_id)
return self._session.put(uri)
def reactivate(self, org_id):
"""Reactivates the organization with the given ID. Backups are *not* restarted
automatically.
`REST Documentation <https://console.us.code42.com/apidocviewer/#OrgDeactivation-delete>`__
Args:
org_id (int): An ID for an organization.
Returns:
:class:`py42.response.Py42Response`
"""
uri = u"/api/OrgDeactivation/{}".format(org_id)
return self._session.delete(uri)
def get_current(self, **kwargs):
"""Gets the organization for the currently signed-in user.
`REST Documentation <https://console.us.code42.com/apidocviewer/#Org-get>`__
Returns:
:class:`py42.response.Py42Response`: A response containing the organization for the
currently signed-in user.
"""
uri = u"/api/Org/my"
return self._session.get(uri, params=kwargs)
def get_agent_state(self, orgId, property_name):
"""Gets the agent state of the devices in the org.
`REST Documentation <https://console.us.code42.com/swagger/index.html?urls.primaryName=v14#/agent-state/AgentState_ViewByDeviceGuid>`__
Args:
orgId (str): The org's identifier.
property_name (str): The name of the property to retrieve (e.g. `fullDiskAccess`).
Returns:
:class:`py42.response.Py42Response`: A response containing settings information.
"""
uri = u"/api/v14/agent-state/view-by-organization-id"
params = {u"orgId": orgId, u"propertyName": property_name}
return self._session.get(uri, params=params)
def get_agent_full_disk_access_states(self, guid):
"""Gets the full disk access status for devices in an org.
`REST Documentation <https://console.us.code42.com/swagger/index.html?urls.primaryName=v14#/agent-state/AgentState_ViewByDeviceGuid>`__
Args:
            guid (str): The org's identifier.
Returns:
:class:`py42.response.Py42Response`: A response containing settings information.
"""
return self.get_agent_state(guid, u"fullDiskAccess")
| 39.715789 | 147 | 0.633978 | 924 | 7,546 | 5.062771 | 0.219697 | 0.017101 | 0.056434 | 0.074391 | 0.532065 | 0.526293 | 0.504917 | 0.489098 | 0.451475 | 0.35015 | 0 | 0.015833 | 0.263451 | 7,546 | 189 | 148 | 39.925926 | 0.825837 | 0.587464 | 0 | 0.265306 | 0 | 0 | 0.1078 | 0.039439 | 0 | 0 | 0 | 0 | 0 | 1 | 0.244898 | false | 0 | 0.081633 | 0 | 0.591837 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b07e2566915be2e011fe1f35222fbd0ceed58b08 | 467 | py | Python | 9.py | manuelsousa7/ia-labs | 81a7cda3478e174ca2c9dad40b4ef68471931004 | [
"MIT"
] | null | null | null | 9.py | manuelsousa7/ia-labs | 81a7cda3478e174ca2c9dad40b4ef68471931004 | [
"MIT"
] | null | null | null | 9.py | manuelsousa7/ia-labs | 81a7cda3478e174ca2c9dad40b4ef68471931004 | [
"MIT"
] | null | null | null | class Agente_corredor_ex_9:
    def __init__(self):
        self.cur = 1
    def invoca(self):
        pos = 0
        if pos == 0:
            if self.cur == 0:
                return "fica parado"
            if self.cur > pos and self.cur > 1:
                self.cur = self.cur - 1
                return "andar-"
            if self.cur < pos and self.cur < 8:
                self.cur = self.cur + 1
                return "andar+"
        return ""
asd = Agente_corredor_ex_9() | 24.578947 | 44 | 0.475375 | 62 | 467 | 3.419355 | 0.322581 | 0.330189 | 0.150943 | 0.160377 | 0.45283 | 0.45283 | 0.45283 | 0 | 0 | 0 | 0 | 0.035971 | 0.404711 | 467 | 19 | 45 | 24.578947 | 0.726619 | 0 | 0 | 0 | 0 | 0 | 0.049145 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b07ff0f4df821323ec4de2bef89169f8ba5dfd61 | 365 | py | Python | mmdet/models/recognizer/__init__.py | liangxiaoyun/mmdetection-1.1.0-pse-sar | d4f80368949ba74dad13d0a4744a53c82f89577d | [
"Apache-2.0"
] | null | null | null | mmdet/models/recognizer/__init__.py | liangxiaoyun/mmdetection-1.1.0-pse-sar | d4f80368949ba74dad13d0a4744a53c82f89577d | [
"Apache-2.0"
] | 1 | 2021-05-25T07:16:12.000Z | 2021-09-01T07:35:01.000Z | mmdet/models/recognizer/__init__.py | liangxiaoyun/mmdetection-1.1.0-pse-sar | d4f80368949ba74dad13d0a4744a53c82f89577d | [
"Apache-2.0"
] | null | null | null | from .base import BaseRecognizer
from .decoder import Decoder
from .encoder import Encoder
from .sar import SAR
from .sar_resnet import SAR_ResNet, BasicBlock
from .str_lable_converter_for_attention import strLabelConverterForAttention
__all__ = [
'BaseRecognizer', 'Decoder', 'Encoder', 'SAR', 'strLabelConverterForAttention',
'BasicBlock', 'SAR_ResNet'
] | 33.181818 | 83 | 0.79726 | 40 | 365 | 7 | 0.4 | 0.096429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.120548 | 365 | 11 | 84 | 33.181818 | 0.872274 | 0 | 0 | 0 | 0 | 0 | 0.218579 | 0.079235 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b08646958e478b3ea9759f646bf21bc6c9578d01 | 3,367 | py | Python | chainer_graphics/image/warp.py | Idein/chainer-graphics | 3646fd961003297ff7e3f5efb71360c16d5eb9f5 | [
"MIT"
] | 3 | 2019-07-01T04:38:50.000Z | 2021-12-03T06:22:58.000Z | chainer_graphics/image/warp.py | Idein/chainer-graphics | 3646fd961003297ff7e3f5efb71360c16d5eb9f5 | [
"MIT"
] | null | null | null | chainer_graphics/image/warp.py | Idein/chainer-graphics | 3646fd961003297ff7e3f5efb71360c16d5eb9f5 | [
"MIT"
] | 1 | 2021-12-03T06:22:59.000Z | 2021-12-03T06:22:59.000Z | from chainer import backend
import chainer.functions as F
from chainer_graphics.image import *
def affine(A, t, x):
"""Compute Ax+t
Args:
A (:class `~chainer.Variable` or :ref:`ndarray`):
A 3-D array of shape `(B, M, M)`
t (:class `~chainer.Variable` or :ref:`ndarray`):
A 2-D array of shape `(B, M)`
x (:class `~chainer.Variable` or :ref:`ndarray`):
A 2-D array of shape `(B, M, N)`
Returns:
~chainer.Variable:
A 2-D array of shape `(B, M, N)`
"""
return A @ x + F.expand_dims(t, axis=2)
def inverse_affine(A, t, x):
"""Compute A^-1(x-t)
Args:
A (:class `~chainer.Variable` or :ref:`ndarray`):
A 3-D array of shape `(B, M, M)`
t (:class `~chainer.Variable` or :ref:`ndarray`):
A 2-D array of shape `(B, M)`
x (:class `~chainer.Variable` or :ref:`ndarray`):
A 2-D array of shape `(B, M, N)`
Returns:
~chainer.Variable:
A 2-D array of shape `(B, M, N)`
"""
return F.batch_inv(A) @ (x - F.expand_dims(t, axis=2))
def warp_dense(image, ps):
"""Dense image warping
Args:
image (:class `~chainer.Variable` or :ref:`ndarray`):
A 4-D array of shape `(B, C, H, W)`.
ps (:class `~chainer.Variable` or :ref:`ndarray`):
A 4-D array of shape `(B, 2, H, W)`
Pixel coordinates in source images.
Returns:
~chainer.Variable:
Warped image.
A 4-D array of shape `(B, C, H, W)`.
"""
xp = backend.get_array_module(image)
B, _, H, W = image.shape
ps = 2 * ps / xp.array([W-1, H-1]).reshape(-1, 2, 1, 1) - 1
ps = ps.reshape(B, 2, H, W)
return F.spatial_transformer_sampler(image, ps)
def warp_affine(image, mat):
"""Warp images with affine transformation
Args:
image (:class `~chainer.Variable` or :ref:`ndarray`):
A 4-D array of shape `(B, C, H, W)`.
mat (:class `~chainer.Variable` or :ref:`ndarray`):
Affine transformation matrices [[a, b, tx], [c, d, ty]].
A 3-D array of shape `(B, 2, 3)`.
Returns:
~chainer.Variable:
Warped image.
A 4-D array of shape `(B, C, H, W)`.
"""
xp = backend.get_array_module(image)
B, _, H, W = image.shape
ps1 = pixel_coords(xp, H, W, mat.dtype).reshape(1, 2, -1)
ps0 = inverse_affine(mat[:, :, :2], mat[:, :, 2], ps1)
    return warp_dense(image, ps0.reshape(B, 2, H, W))
def warp_perspective(image, mat):
"""Warp images with perspective transformation
Args:
image (:class `~chainer.Variable` or :ref:`ndarray`):
A 4-D array of shape `(B, C, H, W)`.
mat (:class `~chainer.Variable` or :ref:`ndarray`):
Perspective transformaion matrices.
A 3-D array of shape `(B, 3, 3)`.
Returns:
~chainer.Variable:
Warped image.
A 4-D array of shape `(B, C, H, W)`.
"""
xp = backend.get_array_module(image)
B, _, H, W = image.shape
ps1 = pixel_coords(xp, H, W, mat.dtype).reshape(1, 2, -1)
num = affine(mat[:,:2,:2], mat[:,:2,2], ps1)
denom = affine(mat[:,2,:2].reshape(-1, 1, 2), mat[:,2,2].reshape(-1, 1), ps1)
ps0 = num / denom
return warp_dense(image, ps0.reshape(1, 2, H, W))
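`inverse_affine` is meant to undo `affine` for the same `A` and `t`. A dependency-free sketch of that round trip for a single 2x2 batch element — plain Python lists stand in for the chainer arrays, and the `_2d` helper names are illustrative, not part of chainer_graphics:

```python
def affine_2d(A, t, x):
    # y = A @ x + t for one 2x2 matrix and 2-vector
    return [A[0][0] * x[0] + A[0][1] * x[1] + t[0],
            A[1][0] * x[0] + A[1][1] * x[1] + t[1]]

def inverse_affine_2d(A, t, y):
    # x = A^-1 (y - t), inverting the 2x2 matrix explicitly
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    dx, dy = y[0] - t[0], y[1] - t[1]
    return [(A[1][1] * dx - A[0][1] * dy) / det,
            (-A[1][0] * dx + A[0][0] * dy) / det]

A, t, x = [[2.0, 0.5], [0.0, 1.0]], [3.0, -1.0], [1.0, 2.0]
assert inverse_affine_2d(A, t, affine_2d(A, t, x)) == x
```

The batched chainer versions do the same arithmetic per batch element, with `F.batch_inv` replacing the hand-written 2x2 inverse.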
b08e28184309528059d04d9b444f7192dc24209c | 396 | py | Python | Data Preparetion/1.3 Metrics Module/A.3.14.1.roc_auc_togther.py | mosalaheg/sklearn | d275bdfaf0db4b01f45231889a37a8e004e2f77c | [
"MIT"
] | null | null | null | Data Preparetion/1.3 Metrics Module/A.3.14.1.roc_auc_togther.py | mosalaheg/sklearn | d275bdfaf0db4b01f45231889a37a8e004e2f77c | [
"MIT"
] | null | null | null | Data Preparetion/1.3 Metrics Module/A.3.14.1.roc_auc_togther.py | mosalaheg/sklearn | d275bdfaf0db4b01f45231889a37a8e004e2f77c | [
"MIT"
] | null | null | null |
# Import Libraries
from sklearn.metrics import roc_auc_score
#----------------------------------------------------
# Calculating the ROC AUC score in a single statement:
# roc_auc_score(y_true, y_score, average='macro', sample_weight=None, max_fpr=None)
# y_test and y_pred are assumed to be defined earlier in the pipeline.
ROCAUCScore = roc_auc_score(y_test, y_pred, average='micro')  # average can be: micro, macro, weighted, samples
#print('ROCAUC Score : ', ROCAUCScore)
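`roc_auc_score` equals the probability that a randomly chosen positive is ranked above a randomly chosen negative (ties counted as half). A standard-library sketch of that definition, using made-up scores rather than the undefined `y_test`/`y_pred` above:

```python
def roc_auc(y_true, y_score):
    # pairwise "rank" formulation of ROC AUC for binary labels
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

assert roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) == 0.75
```

For large arrays this O(P*N) loop is impractical, which is why the vectorized sklearn call is used instead.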
b0908c633f0709f6002f5e1414100bb8b82462ba | 3,494 | py | Python | scripts/mkcpio.py | comptuzus/alchemy | 4e9423193422d212b718c0f95dc31e2fc6d004a3 | [
"BSD-3-Clause"
] | 17 | 2015-11-06T10:00:04.000Z | 2021-11-14T03:43:29.000Z | scripts/mkcpio.py | comptuzus/alchemy | 4e9423193422d212b718c0f95dc31e2fc6d004a3 | [
"BSD-3-Clause"
] | 5 | 2015-11-06T09:59:20.000Z | 2021-10-04T07:42:50.000Z | scripts/mkcpio.py | comptuzus/alchemy | 4e9423193422d212b718c0f95dc31e2fc6d004a3 | [
"BSD-3-Clause"
] | 20 | 2015-11-03T06:12:49.000Z | 2021-07-07T21:43:22.000Z |
#===============================================================================
# Generate a file system image in 'cpio' format.
#===============================================================================
import os
import stat


#===============================================================================
#===============================================================================
class Cpio(object):

    def __init__(self, fout):
        self.writeLen = 0
        self.inode = 0
        self.fout = fout

    def align4(self):
        for _ in range(0, (4 - self.writeLen % 4) % 4):
            self.fout.write(b"\0")
            self.writeLen += 1

    def align512(self):
        for _ in range(0, (512 - self.writeLen % 512) % 512):
            self.fout.write(b"\0")
            self.writeLen += 1

    def write(self, buf):
        self.fout.write(buf)
        self.writeLen += len(buf)

    def writeHeader(self, entry):
        buf = ("%s%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x" % (
            "070701",
            self.inode,
            entry.st.st_mode,
            entry.st.st_uid,
            entry.st.st_gid,
            1,
            int(entry.st.st_mtime),
            entry.dataSize,
            0,
            0,
            os.major(entry.st.st_dev),
            os.minor(entry.st.st_dev),
            len(entry.filePath) + 1, 0))
        self.write(buf.encode("UTF-8"))
        self.write((entry.filePath + "\0").encode("UTF-8"))
        self.align4()
        self.inode += 1

    def writeTrailer(self):
        filePath = "TRAILER!!!"
        buf = ("%s%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x" % (
            "070701",
            self.inode,
            0,
            0,
            0,
            1,
            0,
            0,
            0,
            0,
            0,
            0,
            len(filePath) + 1, 0))
        self.write(buf.encode("UTF-8"))
        self.write((filePath + "\0").encode("UTF-8"))
        self.align512()
        self.inode += 1


#===============================================================================
#===============================================================================
def processTree(cpio, tree):
    for child in tree.children.values():
        if stat.S_IFMT(child.st.st_mode) == stat.S_IFDIR:
            # No data, only header, do recursion
            cpio.writeHeader(child)
            processTree(cpio, child)
        elif stat.S_IFMT(child.st.st_mode) == stat.S_IFREG:
            # Write file header and file content
            cpio.writeHeader(child)
            cpio.write(child.getData())
            cpio.align4()
        elif stat.S_IFMT(child.st.st_mode) == stat.S_IFLNK:
            # Data is link target
            cpio.writeHeader(child)
            cpio.write(child.getData())
            cpio.align4()
        elif stat.S_IFMT(child.st.st_mode) == stat.S_IFBLK:
            # No data, only header
            cpio.writeHeader(child)
        elif stat.S_IFMT(child.st.st_mode) == stat.S_IFCHR:
            # No data, only header
            cpio.writeHeader(child)


#===============================================================================
#===============================================================================
def genImage(image, root):
    cpio = Cpio(image.fout)
    processTree(cpio, root)
    cpio.writeTrailer()
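`writeHeader` emits the "newc" cpio header: the ASCII magic `070701` followed by thirteen zero-padded 8-digit hex fields, for a fixed 110-character header. A standalone sketch of just that packing (field order here follows the common newc layout; the `newc_header` helper is illustrative, not part of the script above):

```python
def newc_header(inode, mode, uid, gid, nlink, mtime, filesize,
                devmajor, devminor, rdevmajor, rdevminor, namesize, check=0):
    # magic "070701" followed by thirteen 8-digit lowercase hex fields
    fields = (inode, mode, uid, gid, nlink, mtime, filesize,
              devmajor, devminor, rdevmajor, rdevminor, namesize, check)
    return "070701" + "".join("%08x" % f for f in fields)

hdr = newc_header(inode=0, mode=0o100644, uid=0, gid=0, nlink=1, mtime=0,
                  filesize=12, devmajor=8, devminor=1, rdevmajor=0, rdevminor=0,
                  namesize=len("hello.txt") + 1)
assert len(hdr) == 110  # 6-byte magic + 13 * 8 hex digits
```

The filename (NUL-terminated) follows the header, and the combined length is padded to a 4-byte boundary, which is what `align4` handles above.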
b0bdd8302542f715c9fdd53b53ab2d4c294af373 | 300 | py | Python | 2019-08-10/08/zad8_Maria.py | SzczecinTech/zadania-rekrutacyjne | fad5d7a6ab7ae77e567610c56fbbcd4a38f8b2c2 | [
"MIT"
] | 1 | 2019-09-05T20:35:07.000Z | 2019-09-05T20:35:07.000Z | 2019-08-10/08/zad8_Maria.py | SzczecinTech/zadania-rekrutacyjne | fad5d7a6ab7ae77e567610c56fbbcd4a38f8b2c2 | [
"MIT"
] | 1 | 2020-03-20T15:27:10.000Z | 2020-03-20T15:27:10.000Z | 2019-08-10/08/zad8_Maria.py | SzczecinTech/zadania-rekrutacyjne | fad5d7a6ab7ae77e567610c56fbbcd4a38f8b2c2 | [
"MIT"
] | 10 | 2019-08-10T08:26:09.000Z | 2019-08-15T13:50:48.000Z |
import re
def is_palindrome(arg):
    string = str(arg)
    # strip all non-word characters, then compare case-insensitively
    string = re.sub(r'[^\w]', '', string).lower()
    return reverse(string) == string


def reverse(string):
    string = string[::-1]
    return string


print(is_palindrome('kajak'))
print(is_palindrome('Kajak, kajak'))
| 20 | 48 | 0.63 | 39 | 300 | 4.769231 | 0.435897 | 0.193548 | 0.204301 | 0.236559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003984 | 0.163333 | 300 | 14 | 49 | 21.428571 | 0.737052 | 0 | 0 | 0 | 0 | 0 | 0.103333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.454545 | 0.181818 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b0c908495552632d6217bd5dee2235f82f8f74cb | 224 | py | Python | webServer/input_DB.py | CSID-DGU/2021-2-OSSP2-Coconut-1 | d910af8760ee5b8e29f3bde61a0695656deadd9d | [
"MIT"
] | null | null | null | webServer/input_DB.py | CSID-DGU/2021-2-OSSP2-Coconut-1 | d910af8760ee5b8e29f3bde61a0695656deadd9d | [
"MIT"
] | null | null | null | webServer/input_DB.py | CSID-DGU/2021-2-OSSP2-Coconut-1 | d910af8760ee5b8e29f3bde61a0695656deadd9d | [
"MIT"
] | null | null | null |
import pandas as pd
data = pd.read_csv("./2021-2-OSSP2-Coconut-1/algorithm/matrix_231x201.csv")
for row in range(len(data)):
    for col in range(len(data.iloc[0]) - 2):
        # the length is computed but discarded; the loop only walks the matrix
        len(data.iloc[row])
print(data.iloc[0, 1])
b0d33894d5d22c28cd9b48a75c6ac91817248637 | 1,190 | py | Python | auth_service/entities/user.py | rahungria/clean_arquitecture_auth | 1d0276c2fd5b83705805dfe33f6781dde3bc064a | [
"MIT"
] | 1 | 2022-03-11T21:33:36.000Z | 2022-03-11T21:33:36.000Z | auth_service/entities/user.py | rahungria/clean_arquitecture_auth | 1d0276c2fd5b83705805dfe33f6781dde3bc064a | [
"MIT"
] | null | null | null | auth_service/entities/user.py | rahungria/clean_arquitecture_auth | 1d0276c2fd5b83705805dfe33f6781dde3bc064a | [
"MIT"
] | null | null | null |
from datetime import datetime
class User:
    '''
    User model

    name -- string
    password -- hashed hex (bytes)
    created -- time of creation
    last_access -- last time of any use_case by user
    '''
    def __init__(
        self,
        username: str,
        password: bytes,
        created: datetime,
        last_access: datetime
    ):
        self.username = username
        self.password = password
        self.created = created
        self.last_access = last_access
        self.validate()

    # TODO validate
    def validate(self):
        pass

    def to_dict(self):
        return dict(
            username=self.username,
            created=self.created.timestamp(),
            last_access=self.last_access.timestamp()
        )


def build_make_user():
    '''
    Injects dependencies (in kwargs) into user model factory
    '''
    def make_user(
        username: str,
        password: bytes,
        created: datetime,
        last_access: datetime
    ) -> User:
        return User(
            username=username,
            password=password,
            created=created,
            last_access=last_access
        )
    return make_user
9fd3402e164c4c99693bbf3c5e5f4252e5113069 | 480 | py | Python | app/__init__.py | pedroermarinho/Dominik | 3be83039f4622c52ee43e5f1e6ea277d4c0fb484 | [
"MIT"
] | null | null | null | app/__init__.py | pedroermarinho/Dominik | 3be83039f4622c52ee43e5f1e6ea277d4c0fb484 | [
"MIT"
] | null | null | null | app/__init__.py | pedroermarinho/Dominik | 3be83039f4622c52ee43e5f1e6ea277d4c0fb484 | [
"MIT"
] | null | null | null |
# -*- coding:utf-8 -*-
from flask import Flask

from app.modules import init_app
from app.controllers.database import init_db
from app.modules.chat_bot import init_chat_bot
from app.modules.arduino import init_arduino
from app.modules.services import init_services


def create_app():
    app: Flask = Flask(__name__)
    app.config.from_object('configFlask')
    init_db(app)
    init_app(app)
    init_services(app)
    init_chat_bot(app)
    init_arduino(app)
    return app
9fe625f50dffc1408e4e691f6448288f446c9cf1 | 1,112 | py | Python | robot/migrations/0011_auto_20200105_1702.py | Misschl/wechat | 8ce76dae32b1086bb83ee6e3fe64cf84845012c0 | [
"Apache-2.0"
] | 1 | 2020-01-07T06:51:19.000Z | 2020-01-07T06:51:19.000Z | robot/migrations/0011_auto_20200105_1702.py | Misschl/wechat | 8ce76dae32b1086bb83ee6e3fe64cf84845012c0 | [
"Apache-2.0"
] | null | null | null | robot/migrations/0011_auto_20200105_1702.py | Misschl/wechat | 8ce76dae32b1086bb83ee6e3fe64cf84845012c0 | [
"Apache-2.0"
] | null | null | null |
# Generated by Django 2.1.5 on 2020-01-05 17:02
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('robot', '0010_auto_20200105_1342'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='appmodel',
            options={'verbose_name': '对接应用', 'verbose_name_plural': '对接应用'},
        ),
        migrations.AlterField(
            model_name='message',
            name='is_at',
            field=models.BooleanField(default=False, null=True),
        ),
        migrations.AlterField(
            model_name='wxuser',
            name='friend',
            field=models.BooleanField(null=True, verbose_name='我的好友'),
        ),
        migrations.AlterField(
            model_name='wxuser',
            name='is_friend',
            field=models.BooleanField(null=True, verbose_name='和我有好友关系'),
        ),
        migrations.AlterField(
            model_name='wxuser',
            name='puid',
            field=models.CharField(help_text='微信用户的外键', max_length=15, primary_key=True, serialize=False),
        ),
    ]
b000d7e32a3360c54c1b81b0de380d94f4b300e6 | 128 | py | Python | Mundos/Mundo 1/Aulas/Aula 7/ex05.py | NicolasdeLimaAlves/Curso-de-Python | 4987a2c8075a76f676aa69bfd968fdf8d1c7fa52 | [
"MIT"
] | null | null | null | Mundos/Mundo 1/Aulas/Aula 7/ex05.py | NicolasdeLimaAlves/Curso-de-Python | 4987a2c8075a76f676aa69bfd968fdf8d1c7fa52 | [
"MIT"
] | null | null | null | Mundos/Mundo 1/Aulas/Aula 7/ex05.py | NicolasdeLimaAlves/Curso-de-Python | 4987a2c8075a76f676aa69bfd968fdf8d1c7fa52 | [
"MIT"
] | null | null | null |
nu = int(input('Enter a number: '))
an = nu - 1
su = nu + 1
print('The predecessor of {} is {} and\nthe successor is {}'.format(nu, an, su))
b003f0adb7332186a94fe1cb096faeb8062e78b6 | 2,773 | py | Python | dayu_ffmpeg/network/node/codec.py | phenom-films/dayu_ffmpeg | 981c00ff3354c881d2cef8ae5e5b9f1880bc9a36 | [
"MIT"
] | 31 | 2018-10-11T07:44:15.000Z | 2022-03-30T19:36:08.000Z | dayu_ffmpeg/network/node/codec.py | phenom-films/dayu_ffmpeg | 981c00ff3354c881d2cef8ae5e5b9f1880bc9a36 | [
"MIT"
] | 6 | 2019-03-22T02:43:12.000Z | 2019-11-20T14:43:28.000Z | dayu_ffmpeg/network/node/codec.py | phenom-films/dayu_ffmpeg | 981c00ff3354c881d2cef8ae5e5b9f1880bc9a36 | [
"MIT"
] | 12 | 2019-01-07T09:13:04.000Z | 2022-02-16T13:42:37.000Z | #!/usr/bin/env python
# -*- encoding: utf-8 -*-
__author__ = 'andyguo'

from base import BaseNode
from dayu_ffmpeg.config import CODEC_ORDER_SCORE


class BaseCodecNode(BaseNode):
    type = 'base_code_node'
    order_score = CODEC_ORDER_SCORE

    def simple_cmd_string(self):
        return self.type

    def complex_cmd_string(self):
        return '{stream_in}{cmd}{stream_out}'.format(
            stream_in=''.join(['[{}]'.format(x) for x in self.stream_in_num]),
            cmd=self.simple_cmd_string(),
            stream_out='[{}]'.format(self.stream_out_num))


class Codec(BaseCodecNode):
    type = 'codec'

    def __init__(self, video='prores_ks', audio=None, **kwargs):
        self.video = video
        self.audio = audio
        super(Codec, self).__init__(**kwargs)

    def simple_cmd_string(self):
        self._cmd = u'-codec:v {video}'.format(video=self.video)
        if self.audio:
            self._cmd += u' -codec:a {audio}'.format(audio=self.audio)
        return self._cmd

    def complex_cmd_string(self):
        # codec flags carry no stream labels, so only the simple string is emitted
        return '{cmd}'.format(cmd=self.simple_cmd_string())


class WriteTimecode(BaseCodecNode):
    type = 'timecode'

    def __init__(self, timecode=None, **kwargs):
        self.timecode = timecode
        super(WriteTimecode, self).__init__(**kwargs)

    def simple_cmd_string(self):
        self._cmd = u'-timecode {tc}'.format(tc=self.timecode)
        return self._cmd


class WriteReel(BaseCodecNode):
    type = 'metadata'

    def __init__(self, reel=None, **kwargs):
        self.reel = reel
        super(WriteReel, self).__init__(**kwargs)

    def simple_cmd_string(self):
        self._cmd = u'-metadata:s:v:0 reel_name={reel}'.format(reel=self.reel)
        return self._cmd


class Quality(BaseCodecNode):
    _name = 'qscale'

    def __init__(self, qscale=2, **kwargs):
        self.qscale = qscale
        super(Quality, self).__init__(**kwargs)

    def simple_cmd_string(self):
        self._cmd = u'-qscale:v {qscale}'.format(qscale=self.qscale) if self.qscale else u''
        return self._cmd


class PixelFormat(BaseCodecNode):
    type = 'pix_fmt'

    def __init__(self, pixel_format='yuv422p10le', profile=2, **kwargs):
        self.pixel_format = pixel_format
        self.profile = profile
        super(PixelFormat, self).__init__(**kwargs)

    def simple_cmd_string(self):
        self._cmd = u'{pix}{profile}'.format(
            pix='-pix_fmt {}'.format(self.pixel_format) if self.pixel_format else '',
            profile=' -profile:v {}'.format(self.profile) if self.profile else '')
        return self._cmd
b0041e9cd05d1f7d1d72e22934e2919b7e364887 | 1,900 | py | Python | src/app/forms.py | deadlock-delegate/arkdelegates | 8a5262f51b519ba3bc10094756c8866fc550df65 | [
"MIT"
] | 2 | 2018-05-22T13:47:09.000Z | 2018-05-23T12:45:05.000Z | src/app/forms.py | deadlock-delegate/arkdelegates | 8a5262f51b519ba3bc10094756c8866fc550df65 | [
"MIT"
] | 21 | 2018-05-08T12:56:46.000Z | 2020-06-05T18:59:38.000Z | src/app/forms.py | deadlock-delegate/arkdelegates | 8a5262f51b519ba3bc10094756c8866fc550df65 | [
"MIT"
] | 4 | 2018-05-04T15:00:59.000Z | 2019-02-13T02:39:07.000Z |
import json
from django import forms
from django.contrib.auth import password_validation
from django.core.validators import validate_email

from app.models import Contribution, Delegate, Node, StatusUpdate


class ClaimAccountForm(forms.Form):
    message_json = forms.CharField(widget=forms.Textarea(attrs={"rows": 2}))
    email = forms.CharField(widget=forms.EmailInput)
    password = forms.CharField(widget=forms.PasswordInput)

    def clean_email(self):
        email = self.cleaned_data.get("email")
        validate_email(email)
        return email

    def clean_password(self):
        password = self.cleaned_data.get("password")
        password_validation.validate_password(password)
        return password

    def clean_message_json(self):
        data = self.cleaned_data["message_json"]
        try:
            json_data = json.loads(data)
        except:  # noqa
            raise forms.ValidationError("Invalid data")
        if not (json_data.get("message") and json_data.get("signature")):
            raise forms.ValidationError("JSON does not have an expected structure")
        return json_data


class LoginForm(forms.Form):
    email = forms.CharField(widget=forms.EmailInput)
    password = forms.CharField(widget=forms.PasswordInput)

    def clean_email(self):
        email = self.cleaned_data.get("email")
        validate_email(email)
        return email


class ContributionForm(forms.ModelForm):
    class Meta:
        model = Contribution
        fields = ["title", "description"]


class NodeForm(forms.ModelForm):
    class Meta:
        model = Node
        fields = ["network", "cpu", "memory", "is_dedicated", "is_backup"]


class ProposalForm(forms.ModelForm):
    class Meta:
        model = Delegate
        fields = ["proposal"]


class StatusUpdateForm(forms.ModelForm):
    class Meta:
        model = StatusUpdate
        fields = ["message"]
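`clean_message_json` performs two checks: the field must parse as JSON, and the payload must carry both `message` and `signature` keys. A Django-free sketch of the same shape check (the `validate_claim` name is illustrative; no signature verification happens here, just as none happens in the form itself):

```python
import json

def validate_claim(raw):
    # mirror of clean_message_json: parse, then require both keys
    try:
        data = json.loads(raw)
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        raise ValueError("Invalid data")
    if not (data.get("message") and data.get("signature")):
        raise ValueError("JSON does not have an expected structure")
    return data

ok = validate_claim('{"message": "m", "signature": "s"}')
assert ok["message"] == "m"
```

Note that `data.get(...)` being truthy also rejects empty strings, not just missing keys.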
b0106280867ea69e20fdcb60880b704341cadc13 | 495 | py | Python | js_rss_feed/cms_apps.py | compoundpartners/js-rss-feed | 5dc5184e907008de73b1244b6d5e6443d775f4c0 | [
"BSD-3-Clause"
] | null | null | null | js_rss_feed/cms_apps.py | compoundpartners/js-rss-feed | 5dc5184e907008de73b1244b6d5e6443d775f4c0 | [
"BSD-3-Clause"
] | null | null | null | js_rss_feed/cms_apps.py | compoundpartners/js-rss-feed | 5dc5184e907008de73b1244b6d5e6443d775f4c0 | [
"BSD-3-Clause"
] | null | null | null |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from aldryn_apphooks_config.app_base import CMSConfigApp
from cms.apphook_pool import apphook_pool
from django.utils.translation import ugettext_lazy as _

from .cms_appconfig import RSSFeedConfig


class RSSFeedApp(CMSConfigApp):
    name = _('RSS Feed')
    app_name = 'js_rss_feed'
    app_config = RSSFeedConfig

    def get_urls(self, *args, **kwargs):
        return ['js_rss_feed.urls']


apphook_pool.register(RSSFeedApp)
b012ad0c971c267943ac3d2a61abc9a5cd4191c0 | 1,264 | py | Python | tests/test_headers.py | gezpage/chocs | cf64a792989e3f23dc7f400045898761511a229a | [
"MIT"
] | null | null | null | tests/test_headers.py | gezpage/chocs | cf64a792989e3f23dc7f400045898761511a229a | [
"MIT"
] | null | null | null | tests/test_headers.py | gezpage/chocs | cf64a792989e3f23dc7f400045898761511a229a | [
"MIT"
] | null | null | null |
import pytest
from chocs.headers import Headers


def test_can_instantiate():
    headers = Headers()
    assert isinstance(headers, Headers)


def test_normalize_wsgi_headers():
    headers = Headers({"HTTP_USER_AGENT": "Test Agent", "HTTP_ACCEPT": "plain/text"})
    assert headers["User-Agent"] == "Test Agent"
    assert headers["HTTP_USER_AGENT"] == "Test Agent"
    assert headers["USER_AGENT"] == "Test Agent"
    assert headers["USER-AGENT"] == "Test Agent"
    assert headers.get("User-Agent") == "Test Agent"


def test_add_header():
    headers = Headers()
    headers.set("USER_AGENT", "Test Agent")
    assert headers["HTTP_USER_AGENT"] == "Test Agent"
    assert headers["USER_AGENT"] == "Test Agent"
    assert headers["USER-AGENT"] == "Test Agent"
    assert headers.get("User-Agent") == "Test Agent"


def test_non_unique_headers():
    headers = Headers()
    headers.set("Set-Cookie", "123")
    headers.set("Set-Cookie", "456")
    headers.set("Set-Cookie", "789")

    assert headers["Set-Cookie"] == [
        "123",
        "456",
        "789",
    ]

    # test items view
    assert [(key, value) for key, value in headers.items()] == [
        ("set-cookie", "123"),
        ("set-cookie", "456"),
        ("set-cookie", "789"),
    ]
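The behaviour these tests expect — `HTTP_USER_AGENT`, `USER_AGENT` and `User-Agent` all naming one header — comes down to a single normalisation rule. A minimal standalone sketch of that rule (not the actual chocs implementation):

```python
def normalize(name):
    # lowercase, drop the WSGI "http_" prefix, map "_" to "-"
    name = name.lower()
    if name.startswith("http_"):
        name = name[len("http_"):]
    return name.replace("_", "-")

assert normalize("HTTP_USER_AGENT") == "user-agent"
assert normalize("USER_AGENT") == "user-agent"
assert normalize("User-Agent") == "user-agent"
```

With every lookup key passed through such a function, all spellings collapse to the same dictionary entry, which also explains the lowercased keys in the `items()` view above.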
b016d60c38d3c0a566ff51537431da767f666349 | 918 | py | Python | pipeline/schemas/friend_invite.py | mystic-ai/pipeline | 487c5e755a862a12c90572b0eff170853ecb3790 | [
"Apache-2.0"
] | 7 | 2022-01-28T20:27:50.000Z | 2022-02-22T15:30:00.000Z | pipeline/schemas/friend_invite.py | mystic-ai/pipeline | 487c5e755a862a12c90572b0eff170853ecb3790 | [
"Apache-2.0"
] | 17 | 2022-01-11T12:05:38.000Z | 2022-03-25T15:29:43.000Z | pipeline/schemas/friend_invite.py | neuro-ai-dev/pipeline | c7edcc83576158062fe48f266dfaea62d754e761 | [
"Apache-2.0"
] | null | null | null |
from enum import Enum
from .base import BaseModel


class FriendInviteStatus(Enum):
    CREATED = "created"
    #: Friend has accepted the invite
    ACCEPTED = "accepted"
    #: The friend is still under trial period
    TRIAL = "trial"
    #: Cycle complete, the inviter has been credited
    COMPLETE = "complete"


class FriendInviteBase(BaseModel):
    #: The ID of the User who sent the invite
    inviter_id: str
    #: The email the invite is to be sent to
    invitee_email: str


class FriendInviteCreate(FriendInviteBase):
    """Create an invitation for a friend to join"""

    pass


class FriendInviteGet(FriendInviteBase):
    """View of an invitation for a friend to join"""

    #: The ID of this invite
    id: str
    #: The status of the invite
    status: FriendInviteStatus


class FriendInvitePatch(BaseModel):
    """Patch the status of a friend invitation"""

    status: FriendInviteStatus
b01c04404aef4086cf534109b8de74aa4cb5ab99 | 5,083 | py | Python | ccontrol/utils/run.py | PierreMsy/DRL_continuous_control | 40a443ea2c7415a5dd7185114da0f06902122c4b | [
"MIT"
] | null | null | null | ccontrol/utils/run.py | PierreMsy/DRL_continuous_control | 40a443ea2c7415a5dd7185114da0f06902122c4b | [
"MIT"
] | null | null | null | ccontrol/utils/run.py | PierreMsy/DRL_continuous_control | 40a443ea2c7415a5dd7185114da0f06902122c4b | [
"MIT"
] | null | null | null |
import os
import numpy as np
from collections import deque

from ccontrol.utils import save_scores, save_AC_models, save_configuration


class Runner:

    def __init__(self) -> None:
        file_location = os.path.dirname(__file__)
        self.path_score = os.path.join(file_location, r'./../../output/score')
        self.path_model = os.path.join(file_location, r'./../../output/model')
        self.path_config = os.path.join(file_location, r'./../../output/configuration')

    def run(self, agent, env, brain_name, nb_episodes, key,
            average_on=10, target_score=None, target_over=100,
            save_score=True, save_config=True, save_weights=False, save_interaction=False):

        scores = deque()
        scores_target = deque(maxlen=target_over)
        scores_window = deque(maxlen=average_on)
        is_solved = ''

        for episode in range(1, nb_episodes + 1):

            env_info = env.reset(train_mode=True)[brain_name]
            states = env_info.vector_observations
            score = 0

            while True:
                actions = agent.act(states, noise=True)
                env_info = env.step(actions)[brain_name]
                next_states = env_info.vector_observations
                rewards = env_info.rewards
                dones = env_info.local_done

                for state, action, reward, next_state, done in zip(states, actions, rewards, next_states, dones):
                    agent.step(state, action, reward, next_state, done)

                states = next_states
                score += np.mean(rewards)

                if np.any(dones):
                    scores.append(score)
                    scores_target.append(score)
                    scores_window.append(score)
                    score_averaged = np.mean(list(scores_window))
                    print(f"\rEpisode {episode} Score: {score_averaged}{is_solved}",
                          end='\r')
                    if target_score:
                        if (len(is_solved) == 0) & (np.mean(list(scores_target)) > target_score):
                            is_solved = f' -> Solved in {episode} episodes'
                    break

        print(f"\nLast score: {round(score_averaged, 5)} {is_solved}")

        if save_score:
            save_scores(scores, key, self.path_score)
        if save_config:
            save_configuration(agent, key, self.path_config)
        if save_weights:
            save_AC_models(agent, key, self.path_model)
        if save_interaction:
            raise Exception('not implemented yet')

    def run_single_agent(self, agent, env, brain_name, nb_episodes, key,
                         average_on=10, target_score=None, target_over=100,
                         save_score=True, save_config=True, save_weights=False, save_interaction=False):

        scores = deque()
        scores_target = deque(maxlen=target_over)
        scores_window = deque(maxlen=average_on)
        is_solved = ''

        for episode in range(1, nb_episodes + 1):

            env_info = env.reset(train_mode=True)[brain_name]
            state = env_info.vector_observations[0]
            score = 0

            while True:
                action = agent.act(state, noise=True)
                env_info = env.step(action)[brain_name]
                next_state = env_info.vector_observations[0]
                reward = env_info.rewards[0]
                done = env_info.local_done[0]

                agent.step(state, action, reward, next_state, done)
                state = next_state
                score += reward

                if done:
                    scores.append(score)
                    scores_target.append(score)
                    scores_window.append(score)
                    score_averaged = np.mean(list(scores_window))
                    print(f"\rEpisode {episode} Score: {score_averaged}{is_solved}",
                          end='\r')
                    if target_score:
                        if (len(is_solved) == 0) & (np.mean(list(scores_target)) > target_score):
                            is_solved = f' -> Solved in {episode} episodes'
                    break

        print(f"\rLast score: {round(score_averaged, 5)} {is_solved}")

        if save_score:
            save_scores(scores, key, self.path_score)
        if save_config:
            save_configuration(agent, key, self.path_config)
        if save_weights:
            save_AC_models(agent, key, self.path_model)
        if save_interaction:
            raise Exception('not implemented yet')
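Both runners track a short moving average with `deque(maxlen=...)`: appending to a full deque silently evicts the oldest score, so the mean over the deque is always the mean of the last N episodes. A standalone sketch with toy scores:

```python
from collections import deque

scores_window = deque(maxlen=3)
for score in [1.0, 2.0, 3.0, 10.0]:
    scores_window.append(score)  # the 4th append evicts 1.0

assert list(scores_window) == [2.0, 3.0, 10.0]
assert sum(scores_window) / len(scores_window) == 5.0  # mean of last 3 only
```

This is why `scores_target` can use a different `maxlen` (`target_over`) than the display average (`average_on`) without any manual bookkeeping.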
b02767cc5efab8bcfaef8c88a74cc6262d4114af | 8,325 | py | Python | pyd3d/input/SedMor.py | JulesBlm/pyDelft3D-FLOW | 6cd8578ce3b64242b26b3829f6eec6094abf261a | [
"BSD-3-Clause"
] | 3 | 2020-09-30T22:35:39.000Z | 2022-02-04T16:10:22.000Z | pyd3d/input/SedMor.py | JulesBlm/pyDelft3D-FLOW | 6cd8578ce3b64242b26b3829f6eec6094abf261a | [
"BSD-3-Clause"
] | null | null | null | pyd3d/input/SedMor.py | JulesBlm/pyDelft3D-FLOW | 6cd8578ce3b64242b26b3829f6eec6094abf261a | [
"BSD-3-Clause"
] | 1 | 2021-08-31T17:02:32.000Z | 2021-08-31T17:02:32.000Z |
import re
from collections import OrderedDict
from datetime import date
import json
class Mor(object):
    '''
    Read and write a Delft3D morphology (*.mor) file.

    Parameters
    ----------
    filename: str
        Filename of the morphology file that is read.

    TODO
    ----
    * Maybe split into separate Mor and Sed subclasses?
    * Add descriptions with units from the FLOW manual and display those when
      a sed or mor file is loaded.
    * Check against a hardcoded dict to see if options are valid and show
      possible options and units.
    * Same for loading descriptions of keywords.
    * Parse number values.
    '''

    def __init__(self, filename=None):
        self.filename = filename
        self.data = OrderedDict()
        self.read(filename)

    def __repr__(self):
        return json.dumps(self.data, indent=4)

    def read(self, filename=None):
        valid_extension = ".mor"
        mor_header_names = ["[MorphologyFileInformation]", "[Morphology]",
                            "[Underlayer]", "[Output]"]
        if not filename:
            raise Exception("No file name supplied!")
        elif not filename.endswith(valid_extension):
            raise Exception("Filename does not end with .mor!")
        with open(filename, "r") as mor_file:
            data = OrderedDict()
            header_name = None
            for line in mor_file.readlines():
                stripped_line = line.strip()
                if stripped_line in mor_header_names:
                    header_name = stripped_line[1:-1]  # remove square brackets
                    data[header_name] = OrderedDict()
                elif "=" in line and header_name is not None:
                    keyword, values = line.split("=", 1)
                    keyword = keyword.strip()
                    # split the value from any trailing comment on 2+ spaces
                    list_of_values = re.split(r'\s{2,}', values.strip())
                    data[header_name][keyword] = list_of_values[0]
        self.data = data
        return self.data

    def write(self, filename=None, exclude=None):
        """Write OrderedDict to a Delft3D-FLOW *.mor file.

        To ignore a keyword, pass a list via the keyword argument 'exclude'.

        Parameters
        ----------
        filename: str
            Path to the mor file to write.
        exclude: list
            List of keywords to exclude from writing.

        Examples
        --------
        >>> mor.write('5050.mor', exclude=['NearBedRefConcentration'])
        """
        if not filename:
            raise Exception("No filename provided")
        exclude = exclude or []  # avoid a mutable default argument
        data = self.data
        # update file info; use datetime so the creation time is not 00:00:00,
        # which is what date.today() would always produce for %H:%M:%S
        from datetime import datetime
        creation_date_string = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
        info_header = "MorphologyFileInformation"
        data[info_header]['FileCreatedBy'] = "pyDelft3D-FLOW v ????"  # TODO add global version here
        data[info_header]['FileCreationDate'] = creation_date_string
        with open(filename, 'w') as new_file:
            for header_name in data:
                new_file.write(f"[{header_name}]\n")
                for keyword in data[header_name]:
                    if keyword not in exclude:
                        new_file.write(f" {keyword.ljust(16)} = {data[header_name][keyword]}\n")
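`Mor.read` keys on bracketed section headers and splits each value from its inline comment on runs of two or more spaces. A stripped-down, standalone sketch of that parsing scheme (names here are illustrative, not part of the pyd3d API):

```python
import re
from collections import OrderedDict

def parse_sections(lines, headers):
    """Parse '[Header]' sections containing 'Keyword = value  comment' lines.

    Values are separated from trailing comments by splitting on runs of
    two or more spaces, mirroring Mor.read above.
    """
    data = OrderedDict()
    section = None
    for line in lines:
        stripped = line.strip()
        if stripped in headers:
            section = stripped[1:-1]  # drop the square brackets
            data[section] = OrderedDict()
        elif "=" in line and section is not None:
            keyword, values = line.split("=", 1)
            first_value = re.split(r"\s{2,}", values.strip())[0]
            data[section][keyword.strip()] = first_value
    return data
```

The 2+-space split is what lets a line like `MorFac = 1.0  morphological factor` keep only `1.0` as the value.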
class Sed(object):
    '''Read and write a Delft3D sediment (*.sed) file.'''

    def __init__(self, filename=None):
        self.names = []
        self.filename = filename
        self.data = OrderedDict()
        self.read(filename)

    def __repr__(self):
        return json.dumps(self.data, indent=4)

    def read(self, filename=None):
        valid_extension = ".sed"
        sed_header_names = ["[SedimentFileInformation]", "[SedimentOverall]", "[Sediment]"]
        if not filename:
            raise Exception("No file name supplied!")
        elif not filename.endswith(valid_extension):
            raise Exception("Filename does not end with .sed!")
        with open(filename, "r") as sed_file:
            data = OrderedDict()
            header_name = None
            for line in sed_file.readlines():
                stripped_line = line.strip()
                if stripped_line in sed_header_names:
                    header_name = stripped_line[1:-1]  # remove square brackets
                    # number the sediment sections to prevent overwriting
                    # the previous sediment
                    if header_name == 'Sediment':
                        sed_nr = len(self.names)
                        header_name = header_name + str(sed_nr)
                    data[header_name] = OrderedDict()
                elif "=" in line and header_name is not None:
                    keyword, values = line.split("=", 1)
                    keyword = keyword.strip()
                    list_of_values = re.split(r'\s{2,}', values.strip())  # split on 2+ spaces
                    if keyword == "Name":
                        # add sediment to the list of sediment names (strip quotes)
                        self.names.append(list_of_values[0][1:-1])
                    data[header_name][keyword] = list_of_values[0]
        self.data = data
        return self.data

    def write(self, filename=None, exclude=None):
        """Write OrderedDict to a Delft3D-FLOW *.sed file.

        To ignore a keyword, pass a list via the keyword argument 'exclude'.

        Parameters
        ----------
        filename: str
            Path to the sed file to write.
        exclude: list
            List of keywords to exclude from writing.

        Examples
        --------
        >>> sed.write('5050.sed', exclude=['NearBedRefConcentration'])
        """
        if not filename:
            raise Exception("No filename provided")
        exclude = exclude or []  # avoid a mutable default argument
        data = self.data
        # update file info; use datetime so the creation time is not 00:00:00
        from datetime import datetime
        creation_date_string = datetime.now().strftime("%m/%d/%Y, %H:%M:%S")
        info_header = "SedimentFileInformation"
        data[info_header]['FileCreatedBy'] = "pyDelft3D-FLOW v ?"  # TODO add global version here
        data[info_header]['FileCreationDate'] = creation_date_string
        with open(filename, 'w') as new_file:
            for header_name in data:
                # remove the sediment number again when writing to file
                if re.search(r'Sediment[0-9]*$', header_name):
                    new_file.write("[Sediment]\n")
                else:
                    new_file.write(f"[{header_name}]\n")
                for keyword in data[header_name]:
                    if keyword not in exclude:
                        new_file.write(f" {keyword.ljust(16)} = {data[header_name][keyword]}\n")
# TODO:
# Make this a hardcoded dict to match values against
# mor_string_keywords = ['FileCreatedBy', 'FileCreationDate', 'FileVersion', 'MorUpd', 'IHidExp', 'ISlope', 'BcFil', 'IBedCond', 'ICmpCond', 'IUnderLyr', 'TTLForm', 'ThTrLyr', 'UpdBaseLyr', 'IniComp']
# mor_bool_keywords = ['NeuBcSand', 'NeuBcMud', 'DensIn', 'MorUpd', 'BedUpd', 'CmpUpd', 'NeglectEntrainment', 'EqmBc', 'UpdInf', 'Multi', 'UpwindBedload']
# mor_bool_output_keywords = ['VelocAtZeta', 'VelocMagAtZeta', 'VelocZAtZeta', 'ShearVeloc','MaximumWaterdepth','BedTranspAtFlux',
# 'BedTranspDueToCurrentsAtZeta','BedTranspDueToCurrentsAtFlux','BedTranspDueToWavesAtZeta','BedTranspDueToWavesAtFlux',
# 'SuspTranspDueToWavesAtZeta','SuspTranspDueToWavesAtFlux','SuspTranspAtFlux','NearBedTranspCorrAtFlux','NearBedRefConcentration',
# 'EquilibriumConcentration','SettlingVelocity','SourceSinkTerms','Bedslope', 'Taurat','Bedforms','Dm','Dg',
# 'Frac','MudFrac','FixFac','HidExp','Percentiles','CumNetSedimentationFlux','BedLayerSedimentMass','BedLayerDepth',
# 'BedLayerVolumeFractions','BedLayerPorosity','StatWaterDepth',
# ]
# bool_keywords = mor_bool_keywords + mor_bool_output_keywords
#
# string_keywords = bool_keywords + mor_string_keywords # temporary hack
# elif filename.endswith(".sed"):
# string_keywords = ['FileCreatedBy', 'VERSION', 'IopSus', 'SedTyp', 'Name']
# bool_keywords = []

# File: algorithm/greedy/luck_balance/solution.py (delaanthonio/hackerrank, MIT)
#!/usr/bin/env python3
"""
Luck Balance
:author: Dela Anthonio
:hackerrank: https://hackerrank.com/delaanthonio
:problem: https://www.hackerrank.com/challenges/luck-balance
"""
import sys
from typing import List, Tuple
def luck_balance(contests: List[Tuple[int, int]], important: int) -> int:
    important_contests = sorted((l for l, i in contests if i), reverse=True)
    luck = 0
    luck += sum(important_contests[:important])
    luck -= sum(important_contests[important:])
    luck += sum(l for l, i in contests if not i)
    return luck


def main():
    _, important = [int(x) for x in input().split()]
    contests = [tuple(int(x) for x in line.split()) for line in sys.stdin]
    luck = luck_balance(contests, important)
    print(luck)


if __name__ == '__main__':
    main()
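The greedy choice above (lose every unimportant contest, and among the important ones lose only the luckiest `important` allowed losses) is easiest to check on the well-known HackerRank sample. A standalone re-sketch of the same computation, with an illustrative name:

```python
def luck_balance_sketch(contests, k):
    """Greedy luck balance: losing a contest adds its luck, winning subtracts it.

    We may lose at most k important contests, so we lose the k luckiest
    important ones and win the rest; unimportant contests are always lost.
    """
    important = sorted((luck for luck, imp in contests if imp), reverse=True)
    unimportant_luck = sum(luck for luck, imp in contests if not imp)
    # lose the k luckiest important contests (+), win the remainder (-)
    return unimportant_luck + sum(important[:k]) - sum(important[k:])
```

For the sample `[(5,1), (2,1), (1,1), (8,1), (10,0), (5,0)]` with `k = 3`, the important lucks sorted descending are `[8, 5, 2, 1]`; losing the top three and winning the last gives `15 + 15 - 1 = 29`.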
# File: django_attachments/fields.py (mireq/django-attachments, BSD-2-Clause-FreeBSD)
# -*- coding: utf-8 -*-
from django import forms
from django.db import models
class LibraryFormField(forms.ModelChoiceField):
    pass


class GalleryFormField(LibraryFormField):
    pass


class LibraryField(models.ForeignKey):
    description = "Attachments"

    def __init__(self, to='django_attachments.Library', *args, **kwargs):
        super().__init__(to, *args, **kwargs)

    def formfield(self, **kwargs):
        defaults = {'form_class': LibraryFormField}
        defaults.update(kwargs)
        return super().formfield(**defaults)


class GalleryField(LibraryField):
    description = "Gallery"

    def formfield(self, **kwargs):
        defaults = {'form_class': GalleryFormField}
        defaults.update(kwargs)
        return super().formfield(**defaults)
# File: uri/1103.py (italo-batista/problems-solving, MIT)
while True:
    h1, m1, h2, m2 = map(int, input().split())
    if h1 == 0 and h2 == 0 and m1 == 0 and m2 == 0:
        break
    if h1 == h2:
        if m1 < m2:
            print(m2 - m1)
        elif m1 > m2:
            print(24 * 60 - (m1 - m2))
        else:
            print(0)
    elif h1 < h2:
        print((h2 - h1) * 60 - m1 + m2)
    else:
        print((24 - h1) * 60 - m1 + (h2 * 60) + m2)
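All five branches above compute the same thing: the elapsed minutes from the first clock time to the second, wrapping past midnight. The case analysis collapses into one modular subtraction on minutes-since-midnight (function name is illustrative):

```python
def minutes_between(h1, m1, h2, m2):
    """Minutes from h1:m1 to h2:m2, wrapping past midnight.

    Equal times yield 0, matching the branch structure above.
    """
    start = h1 * 60 + m1
    end = h2 * 60 + m2
    # modular subtraction handles both same-day and past-midnight spans
    return (end - start) % (24 * 60)
```

Checking each branch against this form (e.g. `h1 > h2` gives `(24 - h1)*60 - m1 + h2*60 + m2`, which is exactly `(end - start) mod 1440`) confirms the equivalence.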
# File: src/ralph/deployment/tests/utils.py (vi4m/ralph, Apache-2.0)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import factory
from factory.django import DjangoModelFactory
from ralph.deployment.models import MassDeployment, Preboot, Deployment
from ralph.discovery.tests.util import DeviceFactory
class DeploymentFactory(DjangoModelFactory):
    FACTORY_FOR = Deployment

    device = factory.SubFactory(DeviceFactory)
    mac = "000000000000"
    ip = ""


class MassDeploymentFactory(DjangoModelFactory):
    FACTORY_FOR = MassDeployment


class PrebootFactory(DjangoModelFactory):
    FACTORY_FOR = Preboot
# File: sdk/python/pulumi_aws/s3/__init__.py (sibuthomasmathew/pulumi-aws, ECL-2.0/Apache-2.0)
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
# Export this package's modules as members:
from ._enums import *
from .access_point import *
from .account_public_access_block import *
from .analytics_configuration import *
from .bucket import *
from .bucket_metric import *
from .bucket_notification import *
from .bucket_object import *
from .bucket_ownership_controls import *
from .bucket_policy import *
from .bucket_public_access_block import *
from .get_bucket import *
from .get_bucket_object import *
from .get_bucket_objects import *
from .get_canonical_user_id import *
from .inventory import *
from .object_copy import *
from ._inputs import *
from . import outputs
def _register_module():
    import pulumi
    from .. import _utilities

    class Module(pulumi.runtime.ResourceModule):
        _version = _utilities.get_semver_version()

        def version(self):
            return Module._version

        def construct(self, name: str, typ: str, urn: str) -> pulumi.Resource:
            if typ == "aws:s3/accessPoint:AccessPoint":
                return AccessPoint(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/accountPublicAccessBlock:AccountPublicAccessBlock":
                return AccountPublicAccessBlock(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/analyticsConfiguration:AnalyticsConfiguration":
                return AnalyticsConfiguration(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucket:Bucket":
                return Bucket(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucketMetric:BucketMetric":
                return BucketMetric(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucketNotification:BucketNotification":
                return BucketNotification(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucketObject:BucketObject":
                return BucketObject(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucketOwnershipControls:BucketOwnershipControls":
                return BucketOwnershipControls(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucketPolicy:BucketPolicy":
                return BucketPolicy(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/bucketPublicAccessBlock:BucketPublicAccessBlock":
                return BucketPublicAccessBlock(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/inventory:Inventory":
                return Inventory(name, pulumi.ResourceOptions(urn=urn))
            elif typ == "aws:s3/objectCopy:ObjectCopy":
                return ObjectCopy(name, pulumi.ResourceOptions(urn=urn))
            else:
                raise Exception(f"unknown resource type {typ}")

    _module_instance = Module()
    pulumi.runtime.register_resource_module("aws", "s3/accessPoint", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/accountPublicAccessBlock", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/analyticsConfiguration", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucket", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucketMetric", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucketNotification", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucketObject", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucketOwnershipControls", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucketPolicy", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/bucketPublicAccessBlock", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/inventory", _module_instance)
    pulumi.runtime.register_resource_module("aws", "s3/objectCopy", _module_instance)
_register_module()
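The long `if`/`elif` chain in `construct()` is a dispatch on a type token. The same shape can be written table-driven, which keeps additions to one line per resource. A standalone sketch of that pattern (names are illustrative, not the Pulumi API):

```python
class UnknownResourceError(Exception):
    """Raised for a type token with no registered factory."""

def build_construct(registry):
    """Return a construct(typ) dispatcher over a token-to-factory mapping.

    This mirrors Module.construct above: known tokens build a resource,
    unknown tokens raise, but the branching lives in a dict lookup.
    """
    def construct(typ):
        try:
            factory = registry[typ]
        except KeyError:
            raise UnknownResourceError(f"unknown resource type {typ}")
        return factory()
    return construct
```

Generated SDKs often keep the explicit chain because it is produced mechanically, but the dict form is the idiomatic hand-written equivalent.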
# File: model-optimizer/mo/front/common/extractors/utils.py (monroid/openvino, Apache-2.0)
# Copyright (C) 2018-2021 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import numpy as np
def layout_attrs():
    return {
        'spatial_dims': np.array([2, 3], dtype=np.int64),
        'channel_dims': np.array([1], dtype=np.int64),
        'batch_dims': np.array([0], dtype=np.int64),
        'layout': 'NCHW'
    }
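The dict above encodes the NCHW convention: axis 0 is batch, axis 1 is channels, axes 2 and 3 are spatial. A small pure-Python sketch (no NumPy, illustrative name) of how those index lists pick apart a shape:

```python
def shape_parts(shape, batch_dims=(0,), channel_dims=(1,), spatial_dims=(2, 3)):
    """Split an NCHW shape into batch/channel/spatial extents using the
    same axis-index conventions as layout_attrs() above."""
    return {
        "batch": [shape[i] for i in batch_dims],
        "channels": [shape[i] for i in channel_dims],
        "spatial": [shape[i] for i in spatial_dims],
    }
```

For an image batch shaped `(8, 3, 224, 224)` this yields batch 8, channels 3, and a 224x224 spatial extent.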
# File: {{cookiecutter.project_name}}/{{cookiecutter.app_name}}/extensions.py (T8840/FlaskSkeleton, MIT)
"""Extensions module. Each extension is initialized in the app factory in app.py."""
from flask_bcrypt import Bcrypt
from flask_caching import Cache
from flask_cors import CORS
from flask_debugtoolbar import DebugToolbarExtension
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from flask_marshmallow import Marshmallow
from {{cookiecutter.app_name}}.commons.apispec import APISpecExt
bcrypt = Bcrypt()
db = SQLAlchemy()
migrate = Migrate(db=db)
cache = Cache()
cors = CORS()
apispec = APISpecExt()
debug_toolbar = DebugToolbarExtension()
serialize = Marshmallow()
# File: main/views.py (marc-marquez/issue-tracker, MIT)
"""
Creates the views for the:
- index page
- FAQ page
- About page
- Contact Us page
"""
from django.shortcuts import render
def get_index(request):
    """
    Returns the index page.

    :param request: The request type
    :return: index page
    """
    return render(request, 'index.html')


def get_faq(request):
    """
    Returns the FAQ page.

    :param request: The request type
    :return: FAQ page
    """
    return render(request, 'faq.html')


def get_about(request):
    """
    Returns the About Us page.

    :param request: The request type
    :return: About Us page
    """
    return render(request, 'about.html')


def get_contact(request):
    """
    Returns the Contact Us page.

    :param request: The request type
    :return: Contact Us page
    """
    return render(request, 'contact.html')
# File: Code/config.py (CerpStern/CapStone, MIT)
import os
import json
import datetime
from flask import Flask, url_for, redirect, \
render_template, session, request
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, login_required, login_user, \
logout_user, current_user, UserMixin
from requests_oauthlib import OAuth2Session
from requests.exceptions import HTTPError
basedir = os.path.abspath(os.path.dirname(__file__))
"""App Configuration"""
class Auth:
    """Google Project Credentials"""
    CLIENT_ID = ('486646633132-nc0hlcn0vt7khirhmhkh518d84omqjea'
                 '.apps.googleusercontent.com')
    CLIENT_SECRET = 'NNKF51kaBY-5RIMpOW1S2bjd'
    REDIRECT_URI = 'https://localhost:5000/gCallback'
    AUTH_URI = 'https://accounts.google.com/o/oauth2/auth'
    TOKEN_URI = 'https://accounts.google.com/o/oauth2/token'
    USER_INFO = 'https://www.googleapis.com/userinfo/v2/me'
    SCOPE = ['profile', 'email']


class Config:
    """Base config"""
    APP_NAME = "SMS"
    SECRET_KEY = "NNKF51kaBY-5RIMpOW1S2bjd"


class DevConfig(Config):
    """Dev config"""
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, "test.db")
    SQLALCHEMY_TRACK_MODIFICATIONS = False


class ProdConfig(Config):
    """Production config"""
    DEBUG = False
    SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, "test.db")
    SQLALCHEMY_TRACK_MODIFICATIONS = False


config = {
    "dev": DevConfig,
    "prod": ProdConfig,
    "default": DevConfig
}
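The `config` dict above maps environment names to config classes so the app factory can resolve one by key with a safe fallback. A standalone sketch of that selection pattern (class and function names are illustrative, not from this project):

```python
import os

class BaseSketch:
    DEBUG = False

class DevSketch(BaseSketch):
    DEBUG = True

class ProdSketch(BaseSketch):
    DEBUG = False

_configs = {"dev": DevSketch, "prod": ProdSketch, "default": DevSketch}

def pick_config(name=None):
    """Resolve a config class by name (or an APP_ENV variable), falling
    back to 'default' for unknown keys, like the `config` dict above."""
    key = name or os.environ.get("APP_ENV", "default")
    return _configs.get(key, _configs["default"])
```

An app factory would then call something like `app.config.from_object(pick_config("dev"))`.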
class Template:
    basic = '<h3>Intro to Blah Blah Blah</h3><p>Meeting Time: XX:XX - XX:XX Day1, Day2</p><p>Meeting Place: Room XXX Foo Hall</p><p>Course Website:</p><p>Instructor Name: Bob Loblaw</p><p>Office Hours: XX:XX - XX:XX Day1, Day2 Room XXX Building</p><p>Required Materials if any:</p><p>Prerequisites if any:</p>'
    description = 'Course Description Goes Here'
    topics = 'Topics Covered Go Here'
    outcomes = 'Learning Outcomes Go Here'
    grading = 'Grading Policy Goes Here'
    schedule = 'Planned Schedule Goes Here'
    honesty = 'Cheating means to misrepresent the source, nature, or other conditions of your academic work (e.g., tests, papers, projects, assignments) so as to get undeserved credit. The use of the intellectual property of others without giving them appropriate credit is a serious academic offense. The University considers cheating and plagiarism very serious offenses and provides for sanctions up to and including dismissal from the University or revocation of a degree. The University’s administrative policy and procedures regarding student cheating and plagiarism can be found in the <a href="https://www.kent.edu/policyreg/administrative-policy-regarding-student-cheating-and-plagiarism" target="_blank" rel="noopener noreferrer">Administrative Policy, 3-01.8</a>. By submitting any material in this (or any other class) you are certifying that it is free of plagiarism.'
    deadlines = 'Students have the responsibility to ensure they are properly enrolled in classes. You are advised to review your official class schedule (using Student Tools in FlashLine) during the first two weeks of the semester to ensure you are properly enrolled in this class and section. Should you find an error in your class schedule, you have until the cut-off date provided by the Undergraduate Office to correct the error with your advising office. If registration errors are not corrected by the cut-off date and you continue to attend and participate in classes for which you are not officially enrolled, you are advised now that you will not receive a grade at the conclusion of the semester for any class in which you are not properly registered.'
    accessibility = 'University policy 3342-3-01.3 requires that students with disabilities be provided reasonable accommodations to ensure their equal access to course content. If you have a documented disability and require accommodations, please contact the instructor at the beginning of the semester to make arrangements for necessary classroom adjustments. Please note, you must first verify your eligibility for these through the Student Accessibility Services (contact 330-672-3391 or visit <a href="http://www.kent.edu/sas" target="_blank" rel="noopener noreferrer">www.kent.edu/sas</a> for more information on registration procedures).'
    keywords = 'Learning'
# File: Data Structures/Queue.py (Royals-Aeo-Gamer/MyPyMods, MIT)
class queue:
    def __init__(self):
        self.main = []
        self.max_c = 10

    def deque(self):
        if self.main:  # guard: popping an empty queue would raise IndexError
            del self.main[-1]

    def enque(self, val):
        # when full, evict the oldest item before inserting the new one
        if len(self.main) == self.max_c:
            self.deque()
        self.main.insert(0, val)

    def show(self):
        return self.main

    def __getitem__(self, index):
        return self.main[index]

    def __setitem__(self, *args):
        raise TypeError("queue does not support item assignment")

    def __delitem__(self, index):
        del self.main[index]
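The class above implements a bounded FIFO by hand; the standard library's `collections.deque(maxlen=...)` gives the same drop-the-oldest-when-full behaviour without the explicit capacity check. A sketch of the equivalent (the class name here is illustrative):

```python
from collections import deque

class BoundedQueue:
    """FIFO with a fixed capacity: enqueueing past `maxlen` silently drops
    the oldest item, matching enque() in the class above."""

    def __init__(self, maxlen=10):
        self._items = deque(maxlen=maxlen)

    def enque(self, val):
        # deque(maxlen=...) evicts from the opposite end when full
        self._items.appendleft(val)

    def deque(self):
        if self._items:  # popping an empty deque would raise IndexError
            self._items.pop()

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)
```

Using `deque` also makes both ends O(1), whereas `list.insert(0, val)` is O(n) per enqueue.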
# File: pyfftlog/__init__.py (ShazAlvi/pyfftlog, CC0-1.0)
from datetime import datetime
from pyfftlog.pyfftlog import fhti, fftl, fht, fhtq, krgood
__all__ = ['fhti', 'fftl', 'fht', 'fhtq', 'krgood']
# Version
try:
    # - Released versions just tags: 1.10.0
    # - GitHub commits add .dev#+hash: 1.10.1.dev3+g973038c
    # - Uncommitted changes add timestamp: 1.10.1.dev3+g973038c.d20191022
    from .version import version as __version__
except ImportError:
    # If it was not installed, then we don't know the version. We could throw
    # a warning here, but this case *should* be rare. pyfftlog should be
    # installed properly!
    __version__ = 'unknown-' + datetime.today().strftime('%Y%m%d')
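The try/except above is a common packaging pattern: prefer the version string written at install time, and fall back to a date-stamped placeholder. A standalone sketch of that fallback logic (the function name is illustrative):

```python
from datetime import datetime

def resolve_version(installed_version=None):
    """Return the installed version string when available; otherwise stamp
    an 'unknown-YYYYMMDD' placeholder, as in the except branch above."""
    if installed_version is not None:
        return installed_version
    return "unknown-" + datetime.today().strftime("%Y%m%d")
```

The date stamp at least tells a bug reporter roughly which snapshot of the code they were running.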
# File: src/wallet/puzzles/p2_puzzle_hash.py (reghacker/chia-blockchain, Apache-2.0)
"""
Pay to puzzle hash
In this puzzle program, the solution must be a reveal of the puzzle with the given
hash along with its solution.
"""
from clvm_tools import binutils
from src.types.program import Program
"""
solution: (puzzle_reveal . solution_to_puzzle)
(if (= (sha256 (wrap puzzle_reveal)) puzzle_hash) ((c puzzle_reveal solution_to_puzzle (a))) (x))
((c (i (= (sha256 (wrap puzzle_reveal)) puzzle_hash) (q (e (f (a)) (r (a)))) (q (x))) (a)))
((c (i (= (sha256 (wrap (f (a)))) CONST) (q (e (f (a)) (r (a)))) (q (x))) (a)))
"""
def puzzle_for_puzzle_hash(underlying_puzzle_hash):
    TEMPLATE = "((c (i (= (sha256tree (f (a))) (q 0x%s)) (q ((c (f (a)) (f (r (a)))))) (q (x))) (a)))"
    return Program.to(binutils.assemble(TEMPLATE % underlying_puzzle_hash.hex()))


def solution_for_puzzle_and_solution(underlying_puzzle, underlying_solution):
    underlying_puzzle_hash = underlying_puzzle.get_hash()
    puzzle_program = puzzle_for_puzzle_hash(underlying_puzzle_hash)
    return Program.to([puzzle_program, underlying_solution])
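The clvm template above only runs the revealed puzzle when its sha256 matches the committed hash. The core of that pay-to-hash idea is a hash-commitment check, sketched here in plain Python (this is an illustration of the concept, not the on-chain tree hash, which hashes the program's structure rather than raw bytes):

```python
import hashlib

def check_reveal(committed_hash, revealed_preimage):
    """Accept a reveal only when its sha256 matches the committed hash,
    the same gate the clvm template enforces before executing the puzzle."""
    return hashlib.sha256(revealed_preimage).hexdigest() == committed_hash

# commit to a secret now; anyone can later verify a claimed reveal
secret = b"example puzzle"
commitment = hashlib.sha256(secret).hexdigest()
```

Only the exact preimage passes the check, so publishing `commitment` binds the committer to `secret` without revealing it.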
| 32.75 | 102 | 0.685115 | 160 | 1,048 | 4.2625 | 0.3 | 0.131965 | 0.117302 | 0.017595 | 0.324047 | 0.234604 | 0.140762 | 0.026393 | 0.026393 | 0 | 0 | 0.014541 | 0.146947 | 1,048 | 31 | 103 | 33.806452 | 0.748322 | 0.125954 | 0 | 0 | 0 | 0.111111 | 0.146299 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
c68152cd50cdad7a8061d4413c8cf518b5742931 | 1,153 | py | Python | plaso/containers/errors.py | ir4n6/plaso | 010f9cbdfc82e21ed6658657fd09a7b44115c464 | [
"Apache-2.0"
] | null | null | null | plaso/containers/errors.py | ir4n6/plaso | 010f9cbdfc82e21ed6658657fd09a7b44115c464 | [
"Apache-2.0"
] | null | null | null | plaso/containers/errors.py | ir4n6/plaso | 010f9cbdfc82e21ed6658657fd09a7b44115c464 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""Error attribute containers."""
from __future__ import unicode_literals
from plaso.containers import interface
from plaso.containers import manager
# TODO: add AnalysisError.
class ExtractionError(interface.AttributeContainer):
"""Extraction error attribute container.
Attributes:
message (str): error message.
parser_chain (str): parser chain to which the error applies.
path_spec (dfvfs.PathSpec):
path specification of the file entry to which the error applies.
"""
CONTAINER_TYPE = 'extraction_error'
def __init__(self, message=None, parser_chain=None, path_spec=None):
"""Initializes a parse error.
Args:
message (Optional[str]): error message.
parser_chain (Optional[str]): parser chain to which the error applies.
path_spec (Optional[dfvfs.PathSpec]):
path specification of the file entry to which the error applies.
"""
super(ExtractionError, self).__init__()
self.message = message
self.parser_chain = parser_chain
self.path_spec = path_spec
manager.AttributeContainersManager.RegisterAttributeContainer(ExtractionError)
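The `RegisterAttributeContainer` call above stores the class in a registry keyed by its `CONTAINER_TYPE`. A stripped-down sketch of that registry pattern in plain Python (these are not the actual plaso interfaces, and `GetContainerClass` is a made-up accessor for this sketch):

```python
class AttributeContainer(object):
    """Base class: subclasses declare a CONTAINER_TYPE identifier."""
    CONTAINER_TYPE = None

class AttributeContainersManager(object):
    """Maps container-type strings to container classes."""
    _classes = {}

    @classmethod
    def RegisterAttributeContainer(cls, container_class):
        # Key the class by its declared container type.
        cls._classes[container_class.CONTAINER_TYPE] = container_class

    @classmethod
    def GetContainerClass(cls, container_type):
        return cls._classes[container_type]

class ExtractionErrorSketch(AttributeContainer):
    CONTAINER_TYPE = 'extraction_error'

AttributeContainersManager.RegisterAttributeContainer(ExtractionErrorSketch)
print(AttributeContainersManager.GetContainerClass('extraction_error').__name__)
```

Registering at module import time, as the file above does, means every container type is known as soon as the package is loaded.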
| 28.825 | 78 | 0.732871 | 135 | 1,153 | 6.074074 | 0.385185 | 0.093902 | 0.04878 | 0.073171 | 0.331707 | 0.268293 | 0.268293 | 0.268293 | 0.268293 | 0.268293 | 0 | 0.001057 | 0.179532 | 1,153 | 39 | 79 | 29.564103 | 0.865751 | 0.50477 | 0 | 0 | 0 | 0 | 0.031008 | 0 | 0 | 0 | 0 | 0.025641 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
c6a614791b278e999b67693c9a9be6e1817e38df | 156 | py | Python | pfr_api/__init__.py | aadamson/pfr-api | b5cab2763db71e57e231507e03747fc0922cdba3 | [
"MIT"
] | 1 | 2021-10-12T01:39:04.000Z | 2021-10-12T01:39:04.000Z | pfr_api/__init__.py | aadamson/pfr-api | b5cab2763db71e57e231507e03747fc0922cdba3 | [
"MIT"
] | null | null | null | pfr_api/__init__.py | aadamson/pfr-api | b5cab2763db71e57e231507e03747fc0922cdba3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Top-level package for pfr_api."""
__author__ = """Alex Adamson"""
__email__ = 'alex.b.adamson@gmail.com'
__version__ = '0.1.0'
| 19.5 | 38 | 0.634615 | 22 | 156 | 3.909091 | 0.863636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02963 | 0.134615 | 156 | 7 | 39 | 22.285714 | 0.607407 | 0.339744 | 0 | 0 | 0 | 0 | 0.42268 | 0.247423 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
c6c8367ac4444b6d37311239e2cb319cf37c2360 | 211 | py | Python | CursoEmVideo/Aula10/ex031.py | lucashsouza/Desafios-Python | abb5b11ebdfd4c232b4f0427ef41fd96013f2802 | [
"MIT"
] | null | null | null | CursoEmVideo/Aula10/ex031.py | lucashsouza/Desafios-Python | abb5b11ebdfd4c232b4f0427ef41fd96013f2802 | [
"MIT"
] | null | null | null | CursoEmVideo/Aula10/ex031.py | lucashsouza/Desafios-Python | abb5b11ebdfd4c232b4f0427ef41fd96013f2802 | [
"MIT"
] | null | null | null | km = float(input('What will the trip distance be: '))
if km <= 200:
    print('The amount to pay is: R${:.2f}'.format(km * 0.50))
else:
    print('The amount to pay is: R${:.2f}'.format(km * 0.45)) | 42.2 | 65 | 0.582938 | 42 | 211 | 2.928571 | 0.595238 | 0.097561 | 0.178862 | 0.195122 | 0.552846 | 0.552846 | 0.552846 | 0.552846 | 0.552846 | 0.552846 | 0 | 0.066667 | 0.218009 | 211 | 5 | 66 | 42.2 | 0.678788 | 0 | 0 | 0 | 0 | 0 | 0.490385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
c6e362bf6a246f7a2ad3404554bd7be6976605a3 | 3,102 | py | Python | torch_glow/tests/nodes/trig_ops_test.py | YaronBenAtar/glow | a13706a4239fa7eaf059c670dc573e3eb0768f86 | [
"Apache-2.0"
] | 2,838 | 2018-05-02T16:57:22.000Z | 2022-03-31T14:35:26.000Z | torch_glow/tests/nodes/trig_ops_test.py | YaronBenAtar/glow | a13706a4239fa7eaf059c670dc573e3eb0768f86 | [
"Apache-2.0"
] | 4,149 | 2018-05-02T17:50:14.000Z | 2022-03-31T23:56:43.000Z | torch_glow/tests/nodes/trig_ops_test.py | LaudateCorpus1/glow-1 | cda5383b1609ebad1a3631ca77b41b8a863443d4 | [
"Apache-2.0"
] | 685 | 2018-05-02T16:54:09.000Z | 2022-03-24T01:12:24.000Z | # Copyright (c) Glow Contributors. See CONTRIBUTORS file.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import torch
from tests import utils
class SimpleCosModule(torch.nn.Module):
def __init__(self):
super(SimpleCosModule, self).__init__()
def forward(self, a):
return torch.cos(a + a)
class SimpleSinModule(torch.nn.Module):
def __init__(self):
super(SimpleSinModule, self).__init__()
def forward(self, a):
return torch.sin(a + a)
class SimpleACosModule(torch.nn.Module):
def __init__(self):
super(SimpleACosModule, self).__init__()
def forward(self, a):
return torch.acos(a + a)
class SimpleASinModule(torch.nn.Module):
def __init__(self):
super(SimpleASinModule, self).__init__()
def forward(self, a):
return torch.asin(a + a)
class SimpleATanModule(torch.nn.Module):
def __init__(self):
super(SimpleATanModule, self).__init__()
def forward(self, a):
return torch.atan(a + a)
class TestCos(utils.TorchGlowTestCase):
def test_cos(self, skip_to_glow=False):
# Ensures range is in [-2*pi, 2*pi]
x = 4 * np.pi * (torch.rand(2, 3, 4) - 0.5)
utils.compare_tracing_methods(
SimpleCosModule(), x, fusible_ops={"aten::cos"}, skip_to_glow=skip_to_glow
)
class TestSin(utils.TorchGlowTestCase):
def test_sin(self, skip_to_glow=False):
# Ensures range is in [-2*pi, 2*pi]
x = 4 * np.pi * (torch.rand(2, 3, 4) - 0.5)
utils.compare_tracing_methods(
SimpleSinModule(), x, fusible_ops={"aten::sin"}, skip_to_glow=skip_to_glow
)
class TestACos(utils.TorchGlowTestCase):
def test_acos(self, skip_to_glow=False):
x = torch.rand(2, 3, 4) - 0.5 # Ensures range is in [-1,1]
utils.compare_tracing_methods(
SimpleACosModule(), x, fusible_ops={"aten::acos"}, skip_to_glow=skip_to_glow
)
class TestASin(utils.TorchGlowTestCase):
def test_asin(self, skip_to_glow=False):
x = torch.rand(2, 3, 4) - 0.5 # Ensures range is in [-1,1]
utils.compare_tracing_methods(
SimpleASinModule(), x, fusible_ops={"aten::asin"}, skip_to_glow=skip_to_glow
)
class TestATan(utils.TorchGlowTestCase):
def test_atan(self, skip_to_glow=False):
x = torch.randn(2, 3, 4)
utils.compare_tracing_methods(
SimpleATanModule(), x, fusible_ops={"aten::atan"}, skip_to_glow=skip_to_glow
)
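The `4 * np.pi * (torch.rand(2, 3, 4) - 0.5)` expressions in the cos/sin tests scale uniform samples from [0, 1) into [-2π, 2π). The same arithmetic checked with the stdlib (assuming only that `random.random()` is uniform on [0, 1), like `torch.rand`):

```python
import math
import random

# Map x in [0, 1) to 4*pi*(x - 0.5), i.e. the half-open interval [-2*pi, 2*pi).
samples = [4 * math.pi * (random.random() - 0.5) for _ in range(10_000)]
print(all(-2 * math.pi <= s < 2 * math.pi for s in samples))  # True
```

The acos/asin tests use the narrower `torch.rand(...) - 0.5`, which lands in [-0.5, 0.5) — a subset of the [-1, 1] domain those functions require.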
| 30.411765 | 88 | 0.670535 | 430 | 3,102 | 4.611628 | 0.281395 | 0.045386 | 0.075643 | 0.040343 | 0.398386 | 0.398386 | 0.388301 | 0.252143 | 0.166415 | 0.166415 | 0 | 0.01522 | 0.216312 | 3,102 | 101 | 89 | 30.712871 | 0.800494 | 0.225338 | 0 | 0.322034 | 0 | 0 | 0.020126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.254237 | false | 0 | 0.067797 | 0.084746 | 0.576271 | 0.016949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
c6ede76c080acb8d7b0d7af65bbda56a035368b5 | 1,061 | py | Python | scripts/data_preparation.py | hupili/hk_census_explorer_2011 | c130cf02a14c05bad5770d8477489fff78f32411 | [
"MIT"
] | 14 | 2015-04-13T16:30:45.000Z | 2017-03-30T02:54:13.000Z | scripts/data_preparation.py | hupili/hk_census_explorer_2011 | c130cf02a14c05bad5770d8477489fff78f32411 | [
"MIT"
] | 5 | 2015-03-04T13:44:41.000Z | 2017-10-22T16:03:18.000Z | scripts/data_preparation.py | hupili/hk_census_explorer_2011 | c130cf02a14c05bad5770d8477489fff78f32411 | [
"MIT"
] | 4 | 2015-01-23T02:56:40.000Z | 2018-03-05T11:42:02.000Z | import sh
from log import logger
import config
import download_constituency_area_data
import extract_data_from_xls
import geo_naming
import combine_json
import public_facilities
import translation_for_i18next
print(config.DIR_DATA_PREFIX)
sh.mkdir('-p', config.DIR_DATA_PREFIX)
logger.info('Start data preparation')
logger.info('Start to download data')
download_constituency_area_data.main()
logger.info('Start to extract data from xls to JSON')
extract_data_from_xls.main()
logger.info('Generate unified geo-naming information')
geo_naming.main()
logger.info('Combine JSONs to single CSV')
combine_json.main()
logger.info('Appending public facility data')
public_facilities.main()
# Data preparation pipeline is for general purpose.
# The following generates translation maps for our frontend use,
# and the output is under version control.
# So it is removed from the pipeline.
#
# Translation maintainer should manually execute it and commit new files.
#
#logger.info('Convert translation dicts to i18next format')
#translation_for_i18next.main()
| 25.878049 | 73 | 0.812441 | 155 | 1,061 | 5.393548 | 0.43871 | 0.083732 | 0.066986 | 0.064593 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006376 | 0.113101 | 1,061 | 40 | 74 | 26.525 | 0.88204 | 0.329877 | 0 | 0 | 0 | 0 | 0.256776 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.409091 | 0 | 0.409091 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
c6eff97b974ce0ef264ee680260232ccbd58f94b | 3,705 | py | Python | top.py | hao707822882/Bichon | 54092e69c9316ee592ee392dc85e1f7fd0c47b68 | [
"Apache-2.0"
] | null | null | null | top.py | hao707822882/Bichon | 54092e69c9316ee592ee392dc85e1f7fd0c47b68 | [
"Apache-2.0"
] | null | null | null | top.py | hao707822882/Bichon | 54092e69c9316ee592ee392dc85e1f7fd0c47b68 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
A clone of top / htop.
Author: Giampaolo Rodola' <g.rodola@gmail.com>
$ python examples/top.py
CPU0 [| ] 4.9%
CPU1 [||| ] 7.8%
CPU2 [ ] 2.0%
CPU3 [||||| ] 13.9%
Mem [||||||||||||||||||| ] 49.8% 4920M/9888M
Swap [ ] 0.0% 0M/0M
Processes: 287 (running=1 sleeping=286)
Load average: 0.34 0.54 0.46 Uptime: 3 days, 10:16:37
PID USER NI VIRT RES CPU% MEM% TIME+ NAME
------------------------------------------------------------
989 giampaol 0 66M 12M 7.4 0.1 0:00.61 python
2083 root 0 506M 159M 6.5 1.6 0:29.26 Xorg
4503 giampaol 0 599M 25M 6.5 0.3 3:32.60 gnome-terminal
3868 giampaol 0 358M 8M 2.8 0.1 23:12.60 pulseaudio
3936 giampaol 0 1G 111M 2.8 1.1 33:41.67 compiz
4401 giampaol 0 536M 141M 2.8 1.4 35:42.73 skype
4047 giampaol 0 743M 76M 1.8 0.8 42:03.33 unity-panel-service
13155 giampaol 0 1G 280M 1.8 2.8 41:57.34 chrome
10 root 0 0B 0B 0.9 0.0 4:01.81 rcu_sched
339 giampaol 0 1G 113M 0.9 1.1 8:15.73 chrome
...
"""
from datetime import datetime
from datetime import timedelta
import atexit
import os
import sys
import time
import psutil
lineno = 0
def bytes2human(n):
"""
>>> bytes2human(10000)
'9K'
>>> bytes2human(100001221)
'95M'
"""
symbols = ('K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y')
prefix = {}
for i, s in enumerate(symbols):
prefix[s] = 1 << (i + 1) * 10
for s in reversed(symbols):
if n >= prefix[s]:
value = int(float(n) / prefix[s])
return '%s%s' % (value, s)
return "%sB" % n
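The `prefix` table built inside `bytes2human` maps each symbol to the matching power of 1024 via bit shifts (`1 << 10` is 1024, and the exponent grows by 10 per symbol). The first few entries, spelled out:

```python
symbols = ('K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y')
# 1 << (i + 1) * 10 gives K=2**10, M=2**20, G=2**30, ...
prefix = {s: 1 << (i + 1) * 10 for i, s in enumerate(symbols)}
print(prefix['K'])  # 1024
print(prefix['M'])  # 1048576
print(prefix['G'])  # 1073741824
```

This is why the doctest maps 10000 to `'9K'`: 10000 // 1024 truncates to 9.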
# def poll(interval):
# sleep some time
# # return processes sorted by CPU percent usage
# processes = sorted(procs, key=lambda p: p.dict['cpu_percent'],
# reverse=True)
# return processes
def print_header(procs_status, num_procs):
"""Print system-related info, above the process list."""
def refresh_window(procs):
    """Print results on screen."""
    print("---------------------------------------")
#for p in procs:
# TIME+ column shows process CPU cumulative time and it
# is expressed as: "mm:ss.ms"
#print p.dict['cpu_percent']
def main():
try:
interval = 1
for p in psutil.process_iter():
try:
p.dict = p.as_dict(['username', 'nice', 'memory_info',
'memory_percent', 'cpu_percent',
'cpu_times', 'name', 'status'])
                print(p.dict["cpu_percent"])
except psutil.NoSuchProcess:
pass
        print("----------------------------------")
time.sleep(interval)
for p in psutil.process_iter():
try:
p.dict = p.as_dict(['username', 'nice', 'memory_info',
'memory_percent', 'cpu_percent',
'cpu_times', 'name', 'status'])
                print(p.dict["cpu_percent"])
except psutil.NoSuchProcess:
pass
except (KeyboardInterrupt, SystemExit):
pass
if __name__ == '__main__':
main() | 29.173228 | 75 | 0.48556 | 468 | 3,705 | 3.782051 | 0.49359 | 0.040678 | 0.018079 | 0.033898 | 0.180791 | 0.169492 | 0.169492 | 0.169492 | 0.169492 | 0.169492 | 0 | 0.113859 | 0.367072 | 3,705 | 127 | 76 | 29.173228 | 0.640938 | 0.134143 | 0 | 0.391304 | 0 | 0 | 0.152174 | 0.044082 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.065217 | 0.152174 | null | null | 0.108696 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
c6f489678bdd92fad0aa56de553d0c9827d2a18f | 910 | py | Python | phangsPipeline/__init__.py | low-sky/phangs_imaging_scripts | 7d60e8ef4d70e81442f46c844829ab70cbc62500 | [
"MIT"
] | null | null | null | phangsPipeline/__init__.py | low-sky/phangs_imaging_scripts | 7d60e8ef4d70e81442f46c844829ab70cbc62500 | [
"MIT"
] | null | null | null | phangsPipeline/__init__.py | low-sky/phangs_imaging_scripts | 7d60e8ef4d70e81442f46c844829ab70cbc62500 | [
"MIT"
] | null | null | null | # Licensed under a MIT license - see LICENSE.rst
# Packages may add whatever they like to this file, but
# should keep this content at the top.
# ----------------------------------------------------------------------------
from ._astropy_init import * # noqa
# ----------------------------------------------------------------------------
from .casa_check import is_casa_installed
casa_enabled = is_casa_installed()
from .phangsLogger import setup_logger
from .handlerKeys import KeyHandler
from .handlerSingleDish import SingleDishHandler
from .handlerVis import VisHandler
from .handlerPostprocess import PostProcessHandler
from .handlerDerived import DerivedHandler
if casa_enabled:
from .handlerImaging import ImagingHandler
__all__ = ["setup_logger", "KeyHandler", "SingleDishHandler", "VisHandler", "PostProcessHandler", "DerivedHandler"]
if casa_enabled:
__all__.append("ImagingHandler")
| 35 | 115 | 0.665934 | 88 | 910 | 6.659091 | 0.579545 | 0.056314 | 0.051195 | 0.09215 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112088 | 910 | 25 | 116 | 36.4 | 0.725248 | 0.325275 | 0 | 0.142857 | 0 | 0 | 0.156507 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.642857 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
c6fcc9cbca56f5df041a4ed9ea2f7598577810cf | 7,168 | py | Python | nova/tests/unit/fake_processutils.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/tests/unit/fake_processutils.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | null | null | null | nova/tests/unit/fake_processutils.py | bopopescu/nova-token | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | [
"Apache-2.0"
] | 2 | 2017-07-20T17:31:34.000Z | 2020-07-24T02:42:19.000Z | begin_unit
comment|'# Copyright (c) 2011 Citrix Systems, Inc.'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# Licensed under the Apache License, Version 2.0 (the "License"); you may'
nl|'\n'
comment|'# not use this file except in compliance with the License. You may obtain'
nl|'\n'
comment|'# a copy of the License at'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# http://www.apache.org/licenses/LICENSE-2.0'
nl|'\n'
comment|'#'
nl|'\n'
comment|'# Unless required by applicable law or agreed to in writing, software'
nl|'\n'
comment|'# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT'
nl|'\n'
comment|'# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the'
nl|'\n'
comment|'# License for the specific language governing permissions and limitations'
nl|'\n'
comment|'# under the License.'
nl|'\n'
nl|'\n'
string|'"""This modules stubs out functions in oslo_concurrency.processutils."""'
newline|'\n'
nl|'\n'
name|'import'
name|'re'
newline|'\n'
nl|'\n'
name|'from'
name|'eventlet'
name|'import'
name|'greenthread'
newline|'\n'
name|'from'
name|'oslo_concurrency'
name|'import'
name|'processutils'
newline|'\n'
name|'from'
name|'oslo_log'
name|'import'
name|'log'
name|'as'
name|'logging'
newline|'\n'
name|'import'
name|'six'
newline|'\n'
nl|'\n'
DECL|variable|LOG
name|'LOG'
op|'='
name|'logging'
op|'.'
name|'getLogger'
op|'('
name|'__name__'
op|')'
newline|'\n'
nl|'\n'
DECL|variable|_fake_execute_repliers
name|'_fake_execute_repliers'
op|'='
op|'['
op|']'
newline|'\n'
DECL|variable|_fake_execute_log
name|'_fake_execute_log'
op|'='
op|'['
op|']'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|fake_execute_get_log
name|'def'
name|'fake_execute_get_log'
op|'('
op|')'
op|':'
newline|'\n'
indent|' '
name|'return'
name|'_fake_execute_log'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|fake_execute_clear_log
dedent|''
name|'def'
name|'fake_execute_clear_log'
op|'('
op|')'
op|':'
newline|'\n'
indent|' '
name|'global'
name|'_fake_execute_log'
newline|'\n'
name|'_fake_execute_log'
op|'='
op|'['
op|']'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|fake_execute_set_repliers
dedent|''
name|'def'
name|'fake_execute_set_repliers'
op|'('
name|'repliers'
op|')'
op|':'
newline|'\n'
indent|' '
string|'"""Allows the client to configure replies to commands."""'
newline|'\n'
name|'global'
name|'_fake_execute_repliers'
newline|'\n'
name|'_fake_execute_repliers'
op|'='
name|'repliers'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|fake_execute_default_reply_handler
dedent|''
name|'def'
name|'fake_execute_default_reply_handler'
op|'('
op|'*'
name|'ignore_args'
op|','
op|'**'
name|'ignore_kwargs'
op|')'
op|':'
newline|'\n'
indent|' '
string|'"""A reply handler for commands that haven\'t been added to the reply list.\n\n Returns empty strings for stdout and stderr.\n\n """'
newline|'\n'
name|'return'
string|"''"
op|','
string|"''"
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|fake_execute
dedent|''
name|'def'
name|'fake_execute'
op|'('
op|'*'
name|'cmd_parts'
op|','
op|'**'
name|'kwargs'
op|')'
op|':'
newline|'\n'
indent|' '
string|'"""This function stubs out execute.\n\n It optionally executes a preconfigued function to return expected data.\n\n """'
newline|'\n'
name|'global'
name|'_fake_execute_repliers'
newline|'\n'
nl|'\n'
name|'process_input'
op|'='
name|'kwargs'
op|'.'
name|'get'
op|'('
string|"'process_input'"
op|','
name|'None'
op|')'
newline|'\n'
name|'check_exit_code'
op|'='
name|'kwargs'
op|'.'
name|'get'
op|'('
string|"'check_exit_code'"
op|','
number|'0'
op|')'
newline|'\n'
name|'delay_on_retry'
op|'='
name|'kwargs'
op|'.'
name|'get'
op|'('
string|"'delay_on_retry'"
op|','
name|'True'
op|')'
newline|'\n'
name|'attempts'
op|'='
name|'kwargs'
op|'.'
name|'get'
op|'('
string|"'attempts'"
op|','
number|'1'
op|')'
newline|'\n'
name|'run_as_root'
op|'='
name|'kwargs'
op|'.'
name|'get'
op|'('
string|"'run_as_root'"
op|','
name|'False'
op|')'
newline|'\n'
name|'cmd_str'
op|'='
string|"' '"
op|'.'
name|'join'
op|'('
name|'str'
op|'('
name|'part'
op|')'
name|'for'
name|'part'
name|'in'
name|'cmd_parts'
op|')'
newline|'\n'
nl|'\n'
name|'LOG'
op|'.'
name|'debug'
op|'('
string|'"Faking execution of cmd (subprocess): %s"'
op|','
name|'cmd_str'
op|')'
newline|'\n'
name|'_fake_execute_log'
op|'.'
name|'append'
op|'('
name|'cmd_str'
op|')'
newline|'\n'
nl|'\n'
name|'reply_handler'
op|'='
name|'fake_execute_default_reply_handler'
newline|'\n'
nl|'\n'
name|'for'
name|'fake_replier'
name|'in'
name|'_fake_execute_repliers'
op|':'
newline|'\n'
indent|' '
name|'if'
name|'re'
op|'.'
name|'match'
op|'('
name|'fake_replier'
op|'['
number|'0'
op|']'
op|','
name|'cmd_str'
op|')'
op|':'
newline|'\n'
indent|' '
name|'reply_handler'
op|'='
name|'fake_replier'
op|'['
number|'1'
op|']'
newline|'\n'
name|'LOG'
op|'.'
name|'debug'
op|'('
string|"'Faked command matched %s'"
op|','
name|'fake_replier'
op|'['
number|'0'
op|']'
op|')'
newline|'\n'
name|'break'
newline|'\n'
nl|'\n'
dedent|''
dedent|''
name|'if'
name|'isinstance'
op|'('
name|'reply_handler'
op|','
name|'six'
op|'.'
name|'string_types'
op|')'
op|':'
newline|'\n'
comment|'# If the reply handler is a string, return it as stdout'
nl|'\n'
indent|' '
name|'reply'
op|'='
name|'reply_handler'
op|','
string|"''"
newline|'\n'
dedent|''
name|'else'
op|':'
newline|'\n'
indent|' '
name|'try'
op|':'
newline|'\n'
comment|'# Alternative is a function, so call it'
nl|'\n'
indent|' '
name|'reply'
op|'='
name|'reply_handler'
op|'('
name|'cmd_parts'
op|','
nl|'\n'
name|'process_input'
op|'='
name|'process_input'
op|','
nl|'\n'
name|'delay_on_retry'
op|'='
name|'delay_on_retry'
op|','
nl|'\n'
name|'attempts'
op|'='
name|'attempts'
op|','
nl|'\n'
name|'run_as_root'
op|'='
name|'run_as_root'
op|','
nl|'\n'
name|'check_exit_code'
op|'='
name|'check_exit_code'
op|')'
newline|'\n'
dedent|''
name|'except'
name|'processutils'
op|'.'
name|'ProcessExecutionError'
name|'as'
name|'e'
op|':'
newline|'\n'
indent|' '
name|'LOG'
op|'.'
name|'debug'
op|'('
string|"'Faked command raised an exception %s'"
op|','
name|'e'
op|')'
newline|'\n'
name|'raise'
newline|'\n'
nl|'\n'
dedent|''
dedent|''
name|'LOG'
op|'.'
name|'debug'
op|'('
string|'"Reply to faked command is stdout=\'%(stdout)s\' "'
nl|'\n'
string|'"stderr=\'%(stderr)s\'"'
op|','
op|'{'
string|"'stdout'"
op|':'
name|'reply'
op|'['
number|'0'
op|']'
op|','
string|"'stderr'"
op|':'
name|'reply'
op|'['
number|'1'
op|']'
op|'}'
op|')'
newline|'\n'
nl|'\n'
comment|'# Replicate the sleep call in the real function'
nl|'\n'
name|'greenthread'
op|'.'
name|'sleep'
op|'('
number|'0'
op|')'
newline|'\n'
name|'return'
name|'reply'
newline|'\n'
nl|'\n'
nl|'\n'
DECL|function|stub_out_processutils_execute
dedent|''
name|'def'
name|'stub_out_processutils_execute'
op|'('
name|'stubs'
op|')'
op|':'
newline|'\n'
indent|' '
name|'fake_execute_set_repliers'
op|'('
op|'['
op|']'
op|')'
newline|'\n'
name|'fake_execute_clear_log'
op|'('
op|')'
newline|'\n'
name|'stubs'
op|'.'
name|'Set'
op|'('
name|'processutils'
op|','
string|"'execute'"
op|','
name|'fake_execute'
op|')'
newline|'\n'
dedent|''
endmarker|''
end_unit
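The row above stores its source as a `kind|'text'` token stream rather than plain text. A rough sketch of producing a similar stream with the stdlib `tokenize` module (the exact tool that generated these rows is unknown; this only illustrates the idea, and the output kinds will not match the row's `name`/`op`/`comment` labels exactly):

```python
import io
import tokenize

def token_stream(source):
    # Emit "kind|'text'" entries, similar in spirit to the rows above.
    entries = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        kind = tokenize.tok_name[tok.type].lower()
        entries.append("%s|%r" % (kind, tok.string))
    return entries

for entry in token_stream("import re\n"):
    print(entry)
```

For `"import re\n"` this yields entries such as `name|'import'` and a final `endmarker|''`, which is recognizably the same shape as the dump above.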
| 14.964509 | 147 | 0.637695 | 1,085 | 7,168 | 4.086636 | 0.168664 | 0.079838 | 0.074425 | 0.042174 | 0.541046 | 0.41723 | 0.294317 | 0.179522 | 0.122463 | 0.091791 | 0 | 0.002506 | 0.109375 | 7,168 | 478 | 148 | 14.995816 | 0.692043 | 0 | 0 | 0.851464 | 0 | 0 | 0.435965 | 0.053153 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.01046 | 0 | 0.01046 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
05a09c59f42d79343103ac5e44a06cea594a1201 | 675 | py | Python | top2vec/configuration_top2vec.py | shpotes/top2vec | edf36fc5a81e03235cbffa2aa17c91937410b29f | [
"Apache-2.0"
] | null | null | null | top2vec/configuration_top2vec.py | shpotes/top2vec | edf36fc5a81e03235cbffa2aa17c91937410b29f | [
"Apache-2.0"
] | null | null | null | top2vec/configuration_top2vec.py | shpotes/top2vec | edf36fc5a81e03235cbffa2aa17c91937410b29f | [
"Apache-2.0"
] | null | null | null | from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
class Top2VecConfig(PretrainedConfig):
r"""
:class:`Top2VecConfig` is the configuration class to store the configuration of a
:class:`~Top2VecModel`. It is used to instantiate a Top2Vec model according to the
    specified arguments, defining the model architecture.
Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be
used to control the model outputs. Read the documentation from
:class:`~transformers.PretrainedConfig` for more information.
"""
pass | 42.1875 | 89 | 0.777778 | 80 | 675 | 6.4875 | 0.5375 | 0.061657 | 0.080925 | 0.142582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007067 | 0.161481 | 675 | 16 | 90 | 42.1875 | 0.909894 | 0.657778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.166667 | 0.333333 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
05a1ba3132402582cd7e2a98818df2839f7bfe6e | 436 | py | Python | 201214/step_01.py | kurganITteacher/python-basics | a398c0e90ee76969d3fa4b8a0aa0a9cdd43f7af0 | [
"Apache-2.0"
] | null | null | null | 201214/step_01.py | kurganITteacher/python-basics | a398c0e90ee76969d3fa4b8a0aa0a9cdd43f7af0 | [
"Apache-2.0"
] | null | null | null | 201214/step_01.py | kurganITteacher/python-basics | a398c0e90ee76969d3fa4b8a0aa0a9cdd43f7af0 | [
"Apache-2.0"
] | 1 | 2020-10-11T16:05:00.000Z | 2020-10-11T16:05:00.000Z | def show_user(user):
    print('first name:', user[0])
    print('surname:', user[2])
    print('age:', user[3])
user_1 = ['Иван', 'Иванович', 'Иванов', 21, 'г.Курган']
# 0 1 2 3 4
user_2 = ['Петр', 'Петрович', 'Петров', 19, 'г.Далматово']
user_3 = ['Сидорова', 'Валерьевна', 'Оксана', 23, 'г.Шадринск']
# show_user(user_1)
# show_user(user_2)
# show_user(user_3)
for el in user_1:
print(el)
| 24.222222 | 63 | 0.557339 | 64 | 436 | 3.625 | 0.484375 | 0.137931 | 0.206897 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063444 | 0.240826 | 436 | 17 | 64 | 25.647059 | 0.637462 | 0.211009 | 0 | 0 | 0 | 0 | 0.331307 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.111111 | 0.444444 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
05ba3a011d75ac8d190937c8a318b1cb85575559 | 1,324 | py | Python | tensorflow/python/ipu/ops/experimental/popfloat_ops_grad.py | chenzhengda/tensorflow | 8debb698097670458b5f21d728bc6f734a7b5a53 | [
"Apache-2.0"
] | 74 | 2020-07-06T17:11:39.000Z | 2022-01-28T06:31:28.000Z | tensorflow/python/ipu/ops/experimental/popfloat_ops_grad.py | chenzhengda/tensorflow | 8debb698097670458b5f21d728bc6f734a7b5a53 | [
"Apache-2.0"
] | 9 | 2020-10-13T23:25:29.000Z | 2022-02-10T06:54:48.000Z | tensorflow/python/ipu/ops/experimental/popfloat_ops_grad.py | chenzhengda/tensorflow | 8debb698097670458b5f21d728bc6f734a7b5a53 | [
"Apache-2.0"
] | 12 | 2020-07-08T07:27:17.000Z | 2021-12-27T08:54:27.000Z | # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Gradients for Popfloat operators."""
from tensorflow.python.framework import ops
"""
These gradient function should *never* be called directly.
"""
@ops.RegisterGradient("CalcGfloatParams")
def _calc_gfloat_params_backward(op, *grads):
"""Gradients for the CalcGfloatParams op."""
return None
@ops.RegisterGradient("CastNativeToGfloat")
def _cast_native_to_gfloat_backward(op, *grads):
"""Gradients for the CastToGfloat op."""
return [grads[0], None]
@ops.RegisterGradient("CastGfloatToNative")
def _cast_gfloat_to_native_backward(op, *grads):
"""Gradients for the CastFromGfloat op."""
return [grads[0], None]
| 35.783784 | 80 | 0.713746 | 169 | 1,324 | 5.508876 | 0.585799 | 0.064447 | 0.048335 | 0.077336 | 0.135338 | 0.09667 | 0 | 0 | 0 | 0 | 0 | 0.008734 | 0.135196 | 1,324 | 36 | 81 | 36.777778 | 0.804367 | 0.608761 | 0 | 0.2 | 0 | 0 | 0.125604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.1 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
05c2eb28dd4d34747e85c37c6545aa3547c74bfe | 775 | py | Python | bindings/python/tests/varargs.py | mewbak/dragonffi | 2a205dbe4dd980d5dd53026c871514795573a7fb | [
"Apache-2.0"
] | null | null | null | bindings/python/tests/varargs.py | mewbak/dragonffi | 2a205dbe4dd980d5dd53026c871514795573a7fb | [
"Apache-2.0"
] | null | null | null | bindings/python/tests/varargs.py | mewbak/dragonffi | 2a205dbe4dd980d5dd53026c871514795573a7fb | [
"Apache-2.0"
] | null | null | null | # RUN: "%python" "%s" | "%FileCheck" "%s"
#
import pydffi
F = pydffi.FFI()
CU = F.cdef('''
#include <stdarg.h>
#include <stdio.h>
#include <stdint.h>
void print(const char* prefix, ...)
{
va_list args;
va_start(args, prefix);
while (1) {
int16_t v = va_arg(args, int16_t);
if (v == 0) break;
printf("%s: %d\\n", prefix, v);
}
va_end(args);
}
''')
print_ = getattr(CU.funcs, "print")
# CHECK: pref: 1
print_("pref", F.Int16Ty(1), F.Int16Ty(0))
# CHECK: pref: -1
print_("pref", F.Int16Ty(-1), F.Int16Ty(0))
# CHECK: pref: 1
# CHECK: pref: -1
print_("pref", F.Int16Ty(1), F.Int16Ty(0))
print_("pref", F.Int16Ty(-1), F.Int16Ty(0))
# CHECK: pref: 1
# CHECK: pref: -1
print_("pref", F.Int16Ty(1), F.Int16Ty(0))
print_("pref", F.Int16Ty(-1), F.Int16Ty(0))
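For comparison, the control flow of the variadic C `print` above — consume arguments until a zero sentinel — in plain Python (illustrative only; this is unrelated to the pydffi API, and `print_prefixed` is a made-up name):

```python
def print_prefixed(prefix, *values):
    # Walk the argument list like the C va_arg loop, stopping at 0.
    lines = []
    for v in values:
        if v == 0:
            break
        lines.append("%s: %d" % (prefix, v))
    return lines

for line in print_prefixed("pref", 1, -1, 0):
    print(line)
```

Python's `*values` carries its own length, so the 0 sentinel is only needed here to mirror the C convention, where `va_arg` has no way to know when the arguments end.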
| 20.394737 | 43 | 0.596129 | 128 | 775 | 3.507813 | 0.328125 | 0.213808 | 0.13363 | 0.227171 | 0.494432 | 0.494432 | 0.494432 | 0.494432 | 0.494432 | 0.494432 | 0 | 0.073846 | 0.16129 | 775 | 37 | 44 | 20.945946 | 0.616923 | 0.170323 | 0 | 0.24 | 0 | 0 | 0.479495 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.04 | 0.36 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
05c30aaeb0dcbe5f7b955869b6f67650a025a8c9 | 3,620 | py | Python | uvicore/database/OBSOLETE/db_OLD_sync.py | coboyoshi/uvicore | 9cfdeeac83000b156fe48f068b4658edaf51c8de | [
"MIT"
] | 11 | 2021-03-22T22:07:49.000Z | 2022-03-08T16:18:33.000Z | uvicore/database/OBSOLETE/db_OLD_sync.py | coboyoshi/uvicore | 9cfdeeac83000b156fe48f068b4658edaf51c8de | [
"MIT"
] | 12 | 2021-03-04T05:51:24.000Z | 2021-09-22T05:16:18.000Z | uvicore/database/OBSOLETE/db_OLD_sync.py | coboyoshi/uvicore | 9cfdeeac83000b156fe48f068b4658edaf51c8de | [
"MIT"
] | 2 | 2021-03-25T14:49:56.000Z | 2021-11-17T23:20:29.000Z | import uvicore
from typing import Dict, List
from uvicore.contracts import Connection
from uvicore.contracts import Database as DatabaseInterface
from uvicore.support.dumper import dd, dump
import sqlalchemy as sa
# from sqlalchemy import MetaData as SaMetaData
# from sqlalchemy.engine import Engine as SaEngine
# from sqlalchemy.engine import Connection as SaConnection


class _Db(DatabaseInterface):

    @property
    def default(self) -> str:
        return self._default

    @property
    def connections(self) -> Dict[str, Connection]:
        return self._connections

    @property
    def engines(self) -> Dict[str, sa.engine.Engine]:
        return self._engines

    @property
    def metadatas(self) -> Dict[str, sa.MetaData]:
        return self._metadatas

    @default.setter
    def default(self, value: str) -> None:
        self._default = value

    def __init__(self) -> None:
        self._default = None
        self._connections = {}
        self._engines = {}
        self._metadatas = {}

    def init(self, default: str, connections: List[Connection]) -> None:
        self._default = default
        for connection in connections:
            self._connections[connection.name] = connection
            self._engines[connection.metakey] = sa.create_engine(connection.url)
            self._metadatas[connection.metakey] = sa.MetaData()

    def packages(self, connection: str = None, metakey: str = None) -> List:
        """Get all packages with the metakey derived from the connection name
        or the passed-in metakey.
        """
        if not metakey:
            if not connection: connection = self.default
            metakey = self.connection(connection).metakey
        packages = []
        for package in uvicore.app.packages.values():
            for conn in package.connections:
                if conn.metakey == metakey:
                    packages.append(package)
        return packages

    def connection(self, connection: str = None) -> Connection:
        """Get one connection by connection name"""
        if not connection: connection = self.default
        return self.connections.get(connection)

    def metadata(self, connection: str = None, metakey: str = None) -> sa.MetaData:
        """Get one metadata by connection name or metakey"""
        if metakey:
            return self.metadatas.get(metakey)
        else:
            if not connection: connection = self.default
            metakey = self.connection(connection).metakey
            return self.metadatas.get(metakey)

    def engine(self, connection: str = None, metakey: str = None) -> sa.engine.Engine:
        """Get one engine by connection name or metakey"""
        if metakey:
            return self.engines.get(metakey)
        else:
            if not connection: connection = self.default
            metakey = self.connection(connection).metakey
            return self.engines.get(metakey)

    def connect(self, connection: str = None, metakey: str = None) -> sa.engine.Connection:
        """Connect to one engine by connection name or metakey"""
        if metakey:
            return self.engine(metakey=metakey).connect()
        else:
            if not connection: connection = self.default
            return self.engine(connection).connect()

    def execute(self, entity, *args, **kwargs):
        conn = self.connect(entity.__connection__)
        return conn.execute(*args, **kwargs)

    def fetchone(self, entity, query):
        return self.execute(entity, query).fetchone()

    def fetchall(self, entity, query):
        return self.execute(entity, query).fetchall()
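`_Db` above is essentially a keyed registry (connections by name, engines/metadata by metakey) with a default fallback. A dependency-free sketch of just that lookup pattern — the class name and the URLs here are hypothetical, not part of uvicore:

```python
class ConnectionRegistry:
    """Toy stand-in for _Db's name -> connection lookup with a default."""

    def __init__(self):
        self.default = None
        self.connections = {}

    def init(self, default, connections):
        self.default = default
        for name, url in connections:
            self.connections[name] = url

    def connection(self, name=None):
        # fall back to the default connection when no name is given
        return self.connections.get(name or self.default)


reg = ConnectionRegistry()
reg.init("app", [("app", "sqlite:///app.db"), ("cache", "sqlite:///cache.db")])
print(reg.connection())         # sqlite:///app.db
print(reg.connection("cache"))  # sqlite:///cache.db
```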
| 34.807692 | 91 | 0.643094 | 403 | 3,620 | 5.719603 | 0.173697 | 0.056399 | 0.036876 | 0.045553 | 0.352278 | 0.352278 | 0.329718 | 0.312798 | 0.219523 | 0.163124 | 0 | 0 | 0.26326 | 3,620 | 103 | 92 | 35.145631 | 0.864267 | 0.116851 | 0 | 0.30137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205479 | false | 0 | 0.082192 | 0.082192 | 0.506849 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
05cb3e4162d39f094fca41216679d69629de7c8d | 68 | py | Python | BOJ17174.py | INYEONGKIM/BOJ | 5e83d77a92d18b0d20d26645c7cfe4ba3e2d25bc | [
"MIT"
] | 2 | 2019-03-05T15:42:46.000Z | 2019-07-24T15:52:36.000Z | BOJ17174.py | INYEONGKIM/BOJ | 5e83d77a92d18b0d20d26645c7cfe4ba3e2d25bc | [
"MIT"
] | null | null | null | BOJ17174.py | INYEONGKIM/BOJ | 5e83d77a92d18b0d20d26645c7cfe4ba3e2d25bc | [
"MIT"
] | null | null | null | n,m=map(int,input().split());r=0
while n>0:
    r+=n;n//=m
print(r)
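The same loop wrapped in a function for illustration (the name `series_sum` is mine, not from the problem): it accumulates n + n//m + n//m² + … until the quotient reaches zero.

```python
def series_sum(n, m):
    # accumulate n, n//m, n//m**2, ... until the term hits 0
    r = 0
    while n > 0:
        r += n
        n //= m
    return r


print(series_sum(10, 3))  # 10 + 3 + 1 = 14
```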
| 13.6 | 32 | 0.544118 | 17 | 68 | 2.176471 | 0.588235 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033898 | 0.132353 | 68 | 4 | 33 | 17 | 0.59322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
05de904e37453461597a7ec7188e2d84c93dc859 | 2,806 | py | Python | ws_icra2022/src/my_pkg/ur5_eyeInhand/frompitoangle.py | yanseim/Vision-Based-Control | 4a92103d99703ac2a45d4ad8d01a663e29c0aa7d | [
"MIT"
] | null | null | null | ws_icra2022/src/my_pkg/ur5_eyeInhand/frompitoangle.py | yanseim/Vision-Based-Control | 4a92103d99703ac2a45d4ad8d01a663e29c0aa7d | [
"MIT"
] | null | null | null | ws_icra2022/src/my_pkg/ur5_eyeInhand/frompitoangle.py | yanseim/Vision-Based-Control | 4a92103d99703ac2a45d4ad8d01a663e29c0aa7d | [
"MIT"
] | null | null | null | #!/usr/bin/env python
def getpi(listb):
    lista=[]
    listcc=[]
    for i in listb:
        temp=i/180*3.14
        lista.append((temp,i))
        listcc.append(temp)
    return lista

def getangle(listb):
    lista=[]
    for i in listb:
        temp=i/3.14*180
        lista.append((i,temp))
    return lista

def getangle_new(listb):
    lista=[]
    for i in listb:
        temp=i/3.14*180
        lista.append(temp)
    return lista

def display(listc):
    listpi=[]
    listangle=[]
    for i in listc:
        listpi.append(i[0])
        listangle.append(i[1])
        #print(i)
    print("pi====:",listpi)
    print("angle==:",listangle)
    return listpi

def getpi_for_py(listc):
    listpi=[]
    listangle=[]
    for i in listc:
        listpi.append(i[0])
        listangle.append(i[1])
        #print(i)
    #print("pi====:",listpi)
    #print("angle==:",listangle)
    return listpi

if __name__=="__main__":
    #joint_positions_inpi = [-1.565429989491598, -1.6473701635943812, 0.05049753189086914, -1.4097726980792444,-1.14049534956561487, -0.8895475069154912]
    #kk=getangle(joint_positions_inpi)
    #joint_position_inangle=[-89.73802487531454, -94.43523230795815, 2.894762974635811, -80.81499543129426, -65.37871430630913, -50.993169186238354]
    #aa=getpi(joint_position_inangle)
    #display(kk)
    reslut=[]
    #display(aa)
    Q0=[-0.69,-100.70,102.06,-174.24,-90.16,-45.35]
    Q1=[-0.69,-91.22,62.51,-151.14,-90.16,-45.36]
    #Q0=[-14.49,-49.08,69.91,-203.91,-75.70,-65.06]
    #Q1=[-0.12,-50.14,69.91,-199.52,-89.52,-65.07]
    Q2=[14.27,-44.13,32.03,-165.34,-100.79,-65.07]
    #Q23 = [8.24, -40.93, 15.51, -155.41, -97.56, -65.07]
    Q3=[8.24,-40.93,6.51,-146.41,-97.56,-65.07]
    Q4=[-0.55,-41.01,6.27,-151.46,-93.72,-65.07]
    Q5=[-15.41,-42.01,6.29,-144.16,-73.85,-65.07]
    Q6=[-16.84,-54.61,56.91,-186.31,-73.86,-65.08]
    Q7=[-3.04,-52.20,49.33,-180.78,-84.35,-65.07]
    Q8=[84.81, -124.65, -78.10, 99.59, -96.62, 89.99]
    Q9=[1.4794633333333334, -2.17445, -1.3624111111111112, 1.7372922222222222, -1.6854822222222223, 1.5698255555555556]
    Q10=[5.,-139.,-79.,-51.,91.,177.]
    # print getpi(Q10)
    display(getpi(Q10))
    # display(getangle(Q9))
    #display(getpi(Q0))
    #reslut.append(display(getpi(Q8)))
    #display(getpi(Q1))
    #reslut.append(display(getpi(Q1)))
    # display(getpi(Q2))
    # reslut.append(display(getpi(Q2)))
    # #display(getpi(Q23))
    # #reslut.append(display(getpi(Q23)))
    # display(getpi(Q3))
    # reslut.append(display(getpi(Q3)))
    # display(getpi(Q4))
    # reslut.append(display(getpi(Q4)))
    # display(getpi(Q5))
    # reslut.append(display(getpi(Q5)))
    # display(getpi(Q6))
    # reslut.append(display(getpi(Q6)))
    # display(getpi(Q7))
    # reslut.append(display(getpi(Q7)))
    # print(reslut)
| 32.252874 | 153 | 0.600143 | 428 | 2,806 | 3.890187 | 0.345794 | 0.136937 | 0.102703 | 0.12973 | 0.253453 | 0.195796 | 0.186186 | 0.186186 | 0.186186 | 0.186186 | 0 | 0.289093 | 0.186386 | 2,806 | 86 | 154 | 32.627907 | 0.44021 | 0.40449 | 0 | 0.46 | 0 | 0 | 0.014085 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.2 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
05e7e76d6e1fca6bc6bee14678cf8bc1619a9c09 | 1,484 | py | Python | pirate/symbolic_regression/util.py | 212726320/PIRATE-1 | eac2d090286e0a5c13be4829259ea12cbda2f75c | [
"MIT"
] | null | null | null | pirate/symbolic_regression/util.py | 212726320/PIRATE-1 | eac2d090286e0a5c13be4829259ea12cbda2f75c | [
"MIT"
] | null | null | null | pirate/symbolic_regression/util.py | 212726320/PIRATE-1 | eac2d090286e0a5c13be4829259ea12cbda2f75c | [
"MIT"
] | 1 | 2022-01-27T22:34:45.000Z | 2022-01-27T22:34:45.000Z | # File: util
# File Created: Wednesday, 20th November 2019 3:56:01 pm
# Author: Steven Atkinson (212726320@ge.com)

from typing import Callable

import deap.gp
import torch

from ..data.experiment import Experiment
from ..function import Function


def get_residual_function(
    op: Callable, experiment: Experiment, pset: deap.gp.PrimitiveSet
) -> Function:
    """
    Create the parametric residual function r(x; Theta)

    :param op: Operator over functions, aka a graph as a compiled function
    :return: Callable with signature r(x, theta_0=val, ...theta_m-1=val)
    """
    # TODO would like to make it easier to see how many parameters "op" expects
    def residual(x, **parameters):
        # First, evaluate the operator over functions and parameters:
        func = op(
            *[experiment.left_hand_side[key] for key in pset.arguments], **parameters
        )
        # Then, subtract the inhomogeneous function
        if experiment.inhomogeneous is not None:
            func = func - experiment.inhomogeneous
        return func(x)

    return residual


def tensor_to_parameter_dict(x: torch.Tensor) -> dict:
    """
    Take an array of parameter values and restructure it as a dict that's a
    valid input for the **parameters kwarg for a residual function returned by
    `get_residual_function()`

    :param x: 1D array of parameters
    :return: (dict) parameter specification
    """
    return {"theta_%i" % i: val for i, val in enumerate(x)}
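The dict shape produced by `tensor_to_parameter_dict()` is easy to see with a plain list standing in for the 1D tensor, since `enumerate` behaves the same on both (the helper name here is illustrative, not part of the library):

```python
def to_parameter_dict(values):
    # same comprehension as tensor_to_parameter_dict, minus the torch dependency
    return {"theta_%i" % i: val for i, val in enumerate(values)}


print(to_parameter_dict([0.5, -1.0]))  # {'theta_0': 0.5, 'theta_1': -1.0}
```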
| 28.538462 | 85 | 0.686658 | 201 | 1,484 | 5.00995 | 0.522388 | 0.063555 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020105 | 0.229111 | 1,484 | 51 | 86 | 29.098039 | 0.86014 | 0.49124 | 0 | 0 | 0 | 0 | 0.011511 | 0 | 0 | 0 | 0 | 0.019608 | 0 | 1 | 0.166667 | false | 0 | 0.277778 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
af0e6ebdb9c748e825743a36b9af270de0a88c0a | 1,511 | py | Python | baseq/snv/docs.py | basedata10/baseq | 0f1786c3392a51a6ec7cb0f32355cd28eaa5df29 | [
"MIT"
] | 1 | 2018-08-30T20:29:17.000Z | 2018-08-30T20:29:17.000Z | baseq/snv/docs.py | basedata10/baseq | 0f1786c3392a51a6ec7cb0f32355cd28eaa5df29 | [
"MIT"
] | null | null | null | baseq/snv/docs.py | basedata10/baseq | 0f1786c3392a51a6ec7cb0f32355cd28eaa5df29 | [
"MIT"
] | null | null | null | doc="""
#Enrich Quality
baseq-SNV qc_enrich ./bam ./bedfile ./out

#Alignment
baseq-SNV run_bwa -1 Read1M.P457.1.fq.gz -2 Read1M.P457.2.fq.gz -g hg38 -n Test -o Test.bam -t 10

#MarkDuplicate
baseq-SNV run_markdup -b Test.bam -m Test.marked.bam

#bqsr
baseq-SNV run_bqsr -m Test.marked.bam -g hg38 -q Test.marked.bqsr.bam

#call variants
baseq-SNV run_callvar -q Test.marked.bqsr.bam -r Test.raw.indel.snp.vcf -g hg38

#select variants
baseq-SNV run_selectvar -r Test.raw.indel.snp.vcf -s Test.raw.snp.vcf -f Test.filtered.snp.vcf -g hg38

#annovar annotation
baseq-SNV run_annovar -g hg38 -n Test -f Test.filtered.snp.vcf -a Test.snps.avinput

#run gatk pipeline
baseq-SNV run_gatkpipe -1 Read1M.P457.1.fq.gz -2 Read1M.P457.2.fq.gz -n Test -g hg38 -d ./

#run gatk pipeline from bam file
baseq-SNV run_gatkpipe -m

#create PoN files
baseq-SNV create_pon -p /mnt/gpfs/Users/wufan/p12_HEC/GATK/baseq_mutect_test/ -l /mnt/gpfs/Users/wufan/p12_HEC/GATK/28wuchen/mutect_call.txt -L /mnt/gpfs/Users/wufan/p12_HEC/GATK/28wuchen/merge.all.target.1.list -g hg37

#single mutect: /mnt/gpfs/Users/wufan/p12_HEC/GATK/28wuchen/N506/
baseq-SNV run_mutect2 -g hg37 -n N506 -N N506_marked_bqsr.bam -t T506 -T T506_marked_bqsr.bam -o ./

#filter mutect call
baseq-SNV filter_mutect_call -r /mnt/gpfs/Users/wufan/p12_HEC/GATK/resources/ref_b37/small_exac_common_3_b37.vcf.gz -s /mnt/gpfs/Users/wufan/p12_HEC/GATK/mutect_test/T506.vcf.gz -T /mnt/gpfs/Users/wufan/p12_HEC/GATK/28wuchen/T506/T506_marked_bqsr.bam -o ./ -t T506
""" | 40.837838 | 264 | 0.762409 | 295 | 1,511 | 3.786441 | 0.294915 | 0.085944 | 0.08863 | 0.106535 | 0.389436 | 0.290958 | 0.256938 | 0.184423 | 0.121755 | 0.057296 | 0 | 0.072218 | 0.101919 | 1,511 | 37 | 265 | 40.837838 | 0.750921 | 0 | 0 | 0 | 0 | 0.346154 | 0.992725 | 0.35119 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
af285478a423ef756d66218779af643e917bf77d | 246 | py | Python | Python/Errors-and-Exceptions/incorrect-regex.py | ekant1999/HackerRank | 084d4550b4eaf130837ab26a4efdbcaf8b667cdc | [
"MIT"
] | 9 | 2017-03-19T16:27:31.000Z | 2022-02-17T11:42:21.000Z | Python/Errors-and-Exceptions/incorrect-regex.py | ekant1999/HackerRank | 084d4550b4eaf130837ab26a4efdbcaf8b667cdc | [
"MIT"
] | null | null | null | Python/Errors-and-Exceptions/incorrect-regex.py | ekant1999/HackerRank | 084d4550b4eaf130837ab26a4efdbcaf8b667cdc | [
"MIT"
] | 6 | 2019-02-18T11:26:24.000Z | 2022-03-21T14:13:15.000Z | # Enter your code here. Read input from STDIN. Print output to STDOUT
import re

t = int(raw_input())
for i in range(t):
    try:
        x = re.compile(raw_input())
        if x:
            print True
    except:
        print False | 20.5 | 70 | 0.556911 | 36 | 246 | 3.75 | 0.777778 | 0.118519 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.361789 | 246 | 12 | 71 | 20.5 | 0.859873 | 0.272358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | null | null | 0 | 0.111111 | null | null | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
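A Python 3 rendering of the same check, catching the specific `re.error` rather than a bare `except` (the function name is illustrative):

```python
import re


def is_valid_regex(pattern):
    # re.compile raises re.error for a malformed pattern
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False


print(is_valid_regex("a+"))  # True
print(is_valid_regex("*"))   # False: nothing to repeat
```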
af4195bc1ea026174c050f1c4d310f8ae19a4317 | 1,325 | py | Python | src/ovals.py | CTimmerman/PyPico8 | a68c83ae5a9dc53221ab39d6e55bb68bb5a1e479 | [
"MIT"
] | null | null | null | src/ovals.py | CTimmerman/PyPico8 | a68c83ae5a9dc53221ab39d6e55bb68bb5a1e479 | [
"MIT"
] | null | null | null | src/ovals.py | CTimmerman/PyPico8 | a68c83ae5a9dc53221ab39d6e55bb68bb5a1e479 | [
"MIT"
] | null | null | null | """Ovals ported from https://www.lexaloffle.com/bbs/?tid=38665

TODO: Fix character patterns.
"""
# flake8:noqa
from pypico8 import *

printh(
    pico8_to_python(
        r"""
pattern={[0]=
…,∧,░,⧗,▤,✽,★,✽,
ˇ,░,▤,♪,░,✽,★,☉,
░,▤,♪,░,✽,★,☉,…,
∧,░,⧗,▤,✽,★,✽,★
}
function _draw()
cls(1)
for i=0,31/32,1/32 do
local x=64+cos(i+t()/8)*48
local y=64+sin(i+t()/8)*44
local w=8+cos(i*2+t()/2)*6
local h=8+sin(i*3+t()/2)*6
fillp(pattern[i*32])
ovalfill(x-w,y-h,x+w,y+h,
(i*32)%8+8)
end
print("pico-8 0.2.1",40,62,13)
end
"""
    )
)


def _init():
    global pattern
    # fmt: off
    pattern = [
        "…","∧","░","⧗","▤","✽","★","✽",
        "ˇ","░","▤","♪","░","✽","★","☉",
        "░","▤","♪","░","✽","★","☉","…",
        "∧","░","⧗","▤","✽","★","✽","★",
    ]
    # fmt: on


def _update():
    pass


def _draw():
    cls(1)
    i = 0
    while i <= 31 / 32:
        x = 64 + cos(i + t() / 8) * 48
        y = 64 + sin(i + t() / 8) * 44
        w = 8 + cos(i * 2 + t() / 2) * 6
        h = 8 + sin(i * 3 + t() / 2) * 6
        fillp(pattern[int(i * 32)])
        ovalfill(x - w, y - h, x + w, y + h, (i * 32) % 8 + 8)
        i += 1 / 32
    print("PICO-8 0.2.1", 40, 62, 13)
run(_init, _update, _draw) | 18.928571 | 63 | 0.366038 | 250 | 1,325 | 2.184 | 0.312 | 0.03663 | 0.03663 | 0.043956 | 0.487179 | 0.487179 | 0.487179 | 0.406593 | 0.369963 | 0.300366 | 0 | 0.107103 | 0.330566 | 1,325 | 70 | 64 | 18.928571 | 0.429538 | 0.089811 | 0 | 0 | 0 | 0 | 0.058124 | 0 | 0 | 0 | 0 | 0.014286 | 0 | 1 | 0.103448 | false | 0.034483 | 0.034483 | 0 | 0.137931 | 0.068966 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
af516fa3ba27b060e07e80c89c7397cde58dac91 | 1,863 | py | Python | dfvo/libs/geometry/reprojection.py | best-of-acrv/toposlam | 3ec1dda210722d86bf77f101dca57ba27baa5833 | [
"BSD-3-Clause"
] | 26 | 2021-06-21T09:31:30.000Z | 2022-03-22T12:08:58.000Z | dfvo/libs/geometry/reprojection.py | best-of-acrv/toposlam | 3ec1dda210722d86bf77f101dca57ba27baa5833 | [
"BSD-3-Clause"
] | 1 | 2021-06-22T12:47:13.000Z | 2021-06-30T23:17:57.000Z | dfvo/libs/geometry/reprojection.py | best-of-acrv/toposlam | 3ec1dda210722d86bf77f101dca57ba27baa5833 | [
"BSD-3-Clause"
] | 2 | 2021-09-02T06:04:12.000Z | 2021-12-17T05:44:25.000Z | ''''''
'''
@Author: Huangying Zhan (huangying.zhan.work@gmail.com)
@Date: 2019-09-01
@Copyright: Copyright (C) Huangying Zhan 2020. All rights reserved. Please refer to the license file.
@LastEditTime: 2020-05-27
@LastEditors: Huangying Zhan
@Description: Layer to transform pixel coordinates from one view to another view via
backprojection, transformation in 3D, and projection
'''

import torch
import torch.nn as nn

from dfvo.libs.geometry.backprojection import Backprojection
from dfvo.libs.geometry.transformation3d import Transformation3D
from dfvo.libs.geometry.projection import Projection


class Reprojection(nn.Module):
    """Layer to transform pixel coordinates from one view to another view via
    backprojection, transformation in 3D, and projection
    """
    def __init__(self, height, width):
        """
        Args:
            height (int): image height
            width (int): image width
        """
        super(Reprojection, self).__init__()

        # layers
        self.backproj = Backprojection(height, width)
        self.transform = Transformation3D()
        self.project = Projection(height, width)

    def forward(self, depth, T, K, inv_K, normalized=True):
        """Forward pass

        Args:
            depth (tensor, [Nx1xHxW]): depth map
            T (tensor, [Nx4x4]): transformation matrices
            K (tensor, [Nx4x4]): camera intrinsics
            inv_K (tensor, [Nx4x4]): inverse camera intrinsics
            normalized (bool):
                - True: normalized to [-1, 1]
                - False: [0, W-1] and [0, H-1]

        Returns:
            xy (NxHxWx2): pixel coordinates
        """
        points3d = self.backproj(depth, inv_K)
        points3d_trans = self.transform(points3d, T)
        xy = self.project(points3d_trans, K, normalized)
        return xy
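For intuition, the backproject → transform → project chain can be sketched for a single pixel with plain floats. The pinhole intrinsics `fx, fy, cx, cy` and the pure-translation motion are illustrative assumptions; the real layer operates on batched tensors with full 4x4 matrices:

```python
def reproject(u, v, depth, fx, fy, cx, cy, t):
    # backproject: lift pixel (u, v) at the given depth to a 3D point
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    z = depth
    # transform: a pure translation t = (tx, ty, tz) for simplicity
    x, y, z = x + t[0], y + t[1], z + t[2]
    # project: back to (non-normalized) pixel coordinates
    return fx * x / z + cx, fy * y / z + cy


u2, v2 = reproject(64, 64, 2.0, 100.0, 100.0, 64.0, 64.0, (0.0, 0.0, 0.0))
print(u2, v2)  # identity motion maps the pixel onto itself: 64.0 64.0
```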
| 32.684211 | 101 | 0.634461 | 211 | 1,863 | 5.540284 | 0.440758 | 0.044482 | 0.030796 | 0.051326 | 0.17793 | 0.17793 | 0.17793 | 0.17793 | 0.17793 | 0.17793 | 0 | 0.031548 | 0.268384 | 1,863 | 56 | 102 | 33.267857 | 0.826119 | 0.301127 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.3125 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
af5befdb12fe17c1adb267429dd5b06cccac3012 | 327 | py | Python | Hackerrank/Stock Maximize/Stock Maximize.py | rahil-1407/Data-Structure-and-Algorithms | ea3eb9849aeb2716ef5812a0b5621a28120b1880 | [
"MIT"
] | 51 | 2021-01-14T04:05:55.000Z | 2022-01-25T11:25:37.000Z | Hackerrank/Stock Maximize/Stock Maximize.py | rahil-1407/Data-Structure-and-Algorithms | ea3eb9849aeb2716ef5812a0b5621a28120b1880 | [
"MIT"
] | 638 | 2020-12-27T18:49:53.000Z | 2021-11-21T05:22:52.000Z | Hackerrank/Stock Maximize/Stock Maximize.py | rahil-1407/Data-Structure-and-Algorithms | ea3eb9849aeb2716ef5812a0b5621a28120b1880 | [
"MIT"
] | 124 | 2021-01-30T06:40:20.000Z | 2021-11-21T15:14:40.000Z | def stockmax(p):
    ind_max = p.index(max(p))  #find the max price
    inv = sum(p[:ind_max])  #split the array before and after max price
    pf = len(p[:ind_max])*p[ind_max] - inv  #buy all stocks before max price
    if len(p[ind_max+1:]) > 0:
        pf += stockmax(p[ind_max+1:])  #then sell them at max price
    return pf
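A standalone restatement with sample runs (renamed `stockmax_demo` here so it does not shadow the original; same greedy logic — buy every day before the running maximum, sell at the maximum, then recurse on the remainder):

```python
def stockmax_demo(prices):
    # total profit: buy on every day before the maximum, sell everything at it
    if not prices:
        return 0
    ind_max = prices.index(max(prices))
    profit = ind_max * prices[ind_max] - sum(prices[:ind_max])
    return profit + stockmax_demo(prices[ind_max + 1:])


print(stockmax_demo([1, 2, 100]))  # buy on days 1-2, sell on day 3 -> 197
```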
| 40.875 | 76 | 0.633028 | 62 | 327 | 3.241935 | 0.467742 | 0.119403 | 0.208955 | 0.149254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011905 | 0.229358 | 327 | 7 | 77 | 46.714286 | 0.785714 | 0.363914 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
af6d9918ed41195425420db27e38e038e07c86e8 | 1,096 | py | Python | tests/test_doc_upload.py | blue-yonder/devpi-acceptancetests | 32d59c4948960d5471c7d10851e80f14d186a330 | [
"BSD-3-Clause"
] | null | null | null | tests/test_doc_upload.py | blue-yonder/devpi-acceptancetests | 32d59c4948960d5471c7d10851e80f14d186a330 | [
"BSD-3-Clause"
] | 20 | 2015-11-20T12:48:52.000Z | 2021-03-16T00:15:29.000Z | tests/test_doc_upload.py | blue-yonder/devpi-acceptancetests | 32d59c4948960d5471c7d10851e80f14d186a330 | [
"BSD-3-Clause"
] | 2 | 2016-03-09T13:25:39.000Z | 2020-11-06T09:34:37.000Z | import requests
from twitter.common.contextutil import pushd
import unittest

from devpi_plumber.server import TestServer

from tests.config import NATIVE_PASSWORD, NATIVE_USER
from tests.fixture import PACKAGE_VERSION, SOURCE_DIR
from tests.utils import wait_until


class DocUploadTests(unittest.TestCase):

    def test_upload(self):
        users = {NATIVE_USER: {'password': NATIVE_PASSWORD}}
        indices = {NATIVE_USER + '/index': {}}

        with TestServer(users=users, indices=indices) as devpi:
            devpi.use(NATIVE_USER, 'index')
            devpi.login(NATIVE_USER, NATIVE_PASSWORD)

            with pushd(SOURCE_DIR):
                devpi.upload(path=None, with_docs=True)

            def doc_present(version=PACKAGE_VERSION):
                # Note: the original had a trailing comma after `== 200`, which made
                # this return a (always truthy) 1-tuple instead of a bool.
                return requests.get(
                    devpi.server_url + "/{}/index/test-package/{}/+d/index.html".format(NATIVE_USER, version),
                ).status_code == 200

            wait_until(doc_present, maxloop=300)
            self.assertTrue(doc_present('+latest'))
            self.assertTrue(doc_present('+stable'))
af71b19523456cf3eb111125d5b90857585c2b94 | 309 | py | Python | problems/linkedlist/Solution206.py | akalu/cs-problems-python | 9b1bd8e3932be62135a38a77f955ded9a766b654 | [
"MIT"
] | null | null | null | problems/linkedlist/Solution206.py | akalu/cs-problems-python | 9b1bd8e3932be62135a38a77f955ded9a766b654 | [
"MIT"
] | null | null | null | problems/linkedlist/Solution206.py | akalu/cs-problems-python | 9b1bd8e3932be62135a38a77f955ded9a766b654 | [
"MIT"
] | null | null | null | """
Reverse a singly linked list.
Example:
Input: 1->2->3->4->5->NULL Output: 5->4->3->2->1->NULL
// 4 -> 5 -> 6
//prev cur nextTemp
// 4 -> 5 -> 6
// prev cur nextTemp
"""
class Solution206:
pass
| 17.166667 | 57 | 0.375405 | 35 | 309 | 3.314286 | 0.6 | 0.051724 | 0.051724 | 0.12069 | 0.310345 | 0.310345 | 0 | 0 | 0 | 0 | 0 | 0.116564 | 0.472492 | 309 | 17 | 58 | 18.176471 | 0.595092 | 0.744337 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
af83923c9100dc6ae4a0fe867d6f85a13ad53bef | 2,396 | py | Python | week_two_quiz.py | Ronlin1/cit_quiz_two | afb9cd676b833caf7f9f65a16ba8c857d6d72b6b | [
"MIT"
] | null | null | null | week_two_quiz.py | Ronlin1/cit_quiz_two | afb9cd676b833caf7f9f65a16ba8c857d6d72b6b | [
"MIT"
] | null | null | null | week_two_quiz.py | Ronlin1/cit_quiz_two | afb9cd676b833caf7f9f65a16ba8c857d6d72b6b | [
"MIT"
] | null | null | null | """
WEEK TWO QUIZ AS AT 05 MARCH 2020 ___PYTHON-BLOCKCHAIN-AWS-CIT-CLASS 2021___
"""
# WELCOME SCREEN
print('------------------------- WEEK TWO QUIZ --------------------------')
# TODO: QUIZ ANSWERS
# Global variable --->End of question separator in the ---- terminal ----
end_of_answer = '------------------------------------------------------------------'
# 1 --> Identified, created and re-fixed a few errors.
# 2 --> UNICODE? Converting a sting to unicode -UTF8
"""
Unicode is a universal character encoding standard that assigns a code to every
character and symbol in every language in the world.
The difference between ASCII and Unicode is that ASCII represents lowercase letters (a-z),
uppercase letters (A-Z), digits (0–9) and symbols such as punctuation marks while Unicode
represents letters of English, Arabic, Greek etc....
UTF-8 is the most widely used way to represent Unicode text in web pages
UTF-8 is a Unicode Transformation Format that uses 8-bit blocks to represent
a character and is essential for software localization.
"""
# In Python 3 and higher, all strings are already in unicode; however, we can decode every letter
# using the functions ord() and chr()
my_string = "Hello World!"
print(f'My String: {my_string}')
my_string_decoded = []
for letter in my_string:
    my_string_decoded.append(ord(letter))
print(f'My String decoded: {my_string_decoded}')
# Note : Unicode characters include space, comma, punctuations etc...
# To convert back , we use chr()
my_string_encoded_again = []
for i in my_string_decoded:
    my_string_encoded_again.append(chr(i))
print(f'Encoded: {my_string_encoded_again}')
print('So we know H=72, e=101, l=108 and so on ... ')
print(end_of_answer)
print("\U0001F606")
print("\N{grinning face}")
# EMOJI CHALLENGE
# importing emoji module
import emoji
# simple emoji print-outs
print(emoji.emojize("Here we got :grinning_face_with_big_eyes: and :winking_face_with_tongue: and :zipper-mouth_face:"))
# without import -> Use of \N and {} specifies Unicode
print("\N{cherry blossom}")
print("\N{slightly smiling face}")
print("\N{winking face}")
# We can use shortcodes too
print("\U0001f600 , \U0001F606 , \U0001F923, \U0001F601, \U0001F609, \U0001F60B, \U0001F60D, \U0001F680")
print(end_of_answer)
print(emoji.emojize("Yeah I think Python is :thumbs_up: and my favourite emoji is :nerd_face:"))
| 35.761194 | 120 | 0.699499 | 350 | 2,396 | 4.671429 | 0.514286 | 0.058716 | 0.045872 | 0.029358 | 0.081957 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043222 | 0.15025 | 2,396 | 66 | 121 | 36.30303 | 0.759332 | 0.273372 | 0 | 0.083333 | 0 | 0.041667 | 0.571946 | 0.178281 | 0 | 0 | 0 | 0.015152 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0.625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
af89ecde19b8ea0aa6f7eec4d787b4af1c5c28e9 | 2,564 | py | Python | VMD 3D Pose Baseline Multi-Objects/applications/VmdWriter.py | kyapp69/OpenMMD | 795d4dd660cf7e537ceb599fdb038c5388b33390 | [
"MIT"
] | 717 | 2018-10-31T16:52:42.000Z | 2022-03-31T16:13:47.000Z | VMD 3D Pose Baseline Multi-Objects/applications/VmdWriter.py | Pixis5566/OpenMMD | 795d4dd660cf7e537ceb599fdb038c5388b33390 | [
"MIT"
] | 48 | 2018-11-08T12:16:43.000Z | 2020-08-10T00:24:50.000Z | VMD 3D Pose Baseline Multi-Objects/applications/VmdWriter.py | Pixis5566/OpenMMD | 795d4dd660cf7e537ceb599fdb038c5388b33390 | [
"MIT"
] | 180 | 2018-10-31T18:41:33.000Z | 2022-03-27T23:49:06.000Z | # -*- coding: utf-8 -*-
import struct
from PyQt5.QtGui import QQuaternion, QVector3D


class VmdBoneFrame():
    def __init__(self, frame=0):
        self.name = ''
        self.frame = frame
        self.position = QVector3D(0, 0, 0)
        self.rotation = QQuaternion()

    def write(self, fout):
        fout.write(self.name)
        fout.write(bytearray([0 for i in range(len(self.name), 15)]))  # pad the 15-byte bone name field with \0
        fout.write(struct.pack('<L', self.frame))
        fout.write(struct.pack('<f', self.position.x()))
        fout.write(struct.pack('<f', self.position.y()))
        fout.write(struct.pack('<f', self.position.z()))
        v = self.rotation.toVector4D()
        fout.write(struct.pack('<f', v.x()))
        fout.write(struct.pack('<f', v.y()))
        fout.write(struct.pack('<f', v.z()))
        fout.write(struct.pack('<f', v.w()))
        fout.write(bytearray([0 for i in range(0, 64)]))  # interpolation parameters (64 bytes)


class VmdInfoIk():
    def __init__(self, name='', onoff=0):
        self.name = name
        self.onoff = onoff


class VmdShowIkFrame():
    def __init__(self):
        self.frame = 0
        self.show = 0
        self.ik = []

    def write(self, fout):
        fout.write(struct.pack('<L', self.frame))
        fout.write(struct.pack('b', self.show))
        fout.write(struct.pack('<L', len(self.ik)))
        for k in (self.ik):
            fout.write(k.name)
            fout.write(bytearray([0 for i in range(len(k.name), 20)]))  # pad the 20-byte IK bone name field with \0
            fout.write(struct.pack('b', k.onoff))


class VmdWriter():
    def __init__(self):
        pass

    def write_vmd_file(self, filename, bone_frames, showik_frames):
        """Write VMD data to a file"""
        fout = open(filename, "wb")
        # header
        fout.write(b'Vocaloid Motion Data 0002\x00\x00\x00\x00\x00')
        fout.write(b'Dummy Model Name ')
        # bone frames
        fout.write(struct.pack('<L', len(bone_frames)))  # number of bone keyframes
        for bf in bone_frames:
            bf.write(fout)
        fout.write(struct.pack('<L', 0))  # number of morph keyframes
        fout.write(struct.pack('<L', 0))  # number of camera keyframes
        fout.write(struct.pack('<L', 0))  # number of lighting keyframes
        fout.write(struct.pack('<L', 0))  # number of self-shadow keyframes
        if showik_frames == None:
            fout.write(struct.pack('<L', 0))  # number of model-visibility / IK on-off keyframes
        else:
            fout.write(struct.pack('<L', len(showik_frames)))  # number of model-visibility / IK on-off keyframes
            for sf in showik_frames:
                sf.write(fout)
        fout.close()
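The fixed-width record layout that `VmdBoneFrame.write()` emits can be checked with `struct` alone. This hypothetical helper packs one bone-frame record from plain floats, without the PyQt5 vector types (name must fit in 15 bytes):

```python
import struct


def pack_bone_frame(name, frame, pos, rot):
    # Mirror of VmdBoneFrame.write(): name(15) + frame(4) + pos(12) + rot(16) + interp(64)
    buf = bytearray()
    buf += name
    buf += bytes(15 - len(name))   # pad bone name field to 15 bytes with \0
    buf += struct.pack('<L', frame)
    buf += struct.pack('<3f', *pos)
    buf += struct.pack('<4f', *rot)
    buf += bytes(64)               # interpolation parameters, zeroed
    return bytes(buf)


record = pack_bone_frame(b'center', 0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))
print(len(record))  # 15 + 4 + 12 + 16 + 64 = 111
```

Each bone keyframe is therefore a fixed 111-byte record in the VMD stream.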
| 34.648649 | 95 | 0.565523 | 343 | 2,564 | 4.16035 | 0.253644 | 0.16398 | 0.19972 | 0.252978 | 0.459005 | 0.40925 | 0.194114 | 0.140154 | 0.119131 | 0.119131 | 0 | 0.026161 | 0.269501 | 2,564 | 73 | 96 | 35.123288 | 0.73465 | 0.087363 | 0 | 0.186441 | 0 | 0 | 0.044358 | 0.010336 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118644 | false | 0.016949 | 0.033898 | 0 | 0.220339 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
af93a6ee9fb8f56c875213959c0b84e1cd07aefc | 8,710 | py | Python | mlflow/protos/databricks_artifacts_pb2.py | abhiramr/mlflow | 2bbdc20f2d90d551fb7d40f982f2f799da9feca8 | [
"Apache-2.0"
] | null | null | null | mlflow/protos/databricks_artifacts_pb2.py | abhiramr/mlflow | 2bbdc20f2d90d551fb7d40f982f2f799da9feca8 | [
"Apache-2.0"
] | null | null | null | mlflow/protos/databricks_artifacts_pb2.py | abhiramr/mlflow | 2bbdc20f2d90d551fb7d40f982f2f799da9feca8 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: databricks_artifacts.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import service as _service
from google.protobuf import service_reflection
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from .scalapb import scalapb_pb2 as scalapb_dot_scalapb__pb2
from . import databricks_pb2 as databricks__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1a\x64\x61tabricks_artifacts.proto\x12\x06mlflow\x1a\x15scalapb/scalapb.proto\x1a\x10\x64\x61tabricks.proto\"\xdf\x01\n\x16\x41rtifactCredentialInfo\x12\x0e\n\x06run_id\x18\x01 \x01(\t\x12\x0c\n\x04path\x18\x02 \x01(\t\x12\x12\n\nsigned_uri\x18\x03 \x01(\t\x12:\n\x07headers\x18\x04 \x03(\x0b\x32).mlflow.ArtifactCredentialInfo.HttpHeader\x12,\n\x04type\x18\x05 \x01(\x0e\x32\x1e.mlflow.ArtifactCredentialType\x1a)\n\nHttpHeader\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t\"\x95\x02\n\x15GetCredentialsForRead\x12\x14\n\x06run_id\x18\x01 \x01(\tB\x04\xf8\x86\x19\x01\x12\x0c\n\x04path\x18\x02 \x03(\t\x12\x12\n\npage_token\x18\x03 \x01(\t\x1a\x63\n\x08Response\x12\x38\n\x10\x63redential_infos\x18\x02 \x03(\x0b\x32\x1e.mlflow.ArtifactCredentialInfo\x12\x17\n\x0fnext_page_token\x18\x03 \x01(\tJ\x04\x08\x01\x10\x02:_\xe2?(\n&com.databricks.rpc.RPC[$this.Response]\xe2?1\n/com.databricks.mlflow.api.MlflowTrackingMessage\"\x96\x02\n\x16GetCredentialsForWrite\x12\x14\n\x06run_id\x18\x01 \x01(\tB\x04\xf8\x86\x19\x01\x12\x0c\n\x04path\x18\x02 \x03(\t\x12\x12\n\npage_token\x18\x03 \x01(\t\x1a\x63\n\x08Response\x12\x38\n\x10\x63redential_infos\x18\x02 \x03(\x0b\x32\x1e.mlflow.ArtifactCredentialInfo\x12\x17\n\x0fnext_page_token\x18\x03 \x01(\tJ\x04\x08\x01\x10\x02:_\xe2?(\n&com.databricks.rpc.RPC[$this.Response]\xe2?1\n/com.databricks.mlflow.api.MlflowTrackingMessage*V\n\x16\x41rtifactCredentialType\x12\x11\n\rAZURE_SAS_URI\x10\x01\x12\x15\n\x11\x41WS_PRESIGNED_URL\x10\x02\x12\x12\n\x0eGCP_SIGNED_URL\x10\x03\x32\xe4\x02\n DatabricksMlflowArtifactsService\x12\x9c\x01\n\x15getCredentialsForRead\x12\x1d.mlflow.GetCredentialsForRead\x1a&.mlflow.GetCredentialsForRead.Response\"<\xf2\x86\x19\x38\n4\n\x04POST\x12&/mlflow/artifacts/credentials-for-read\x1a\x04\x08\x02\x10\x00\x10\x03\x12\xa0\x01\n\x16getCredentialsForWrite\x12\x1e.mlflow.GetCredentialsForWrite\x1a\'.mlflow.GetCredentialsForWrite.Response\"=\xf2\x86\x19\x39\n5\n\x04POST\x12\'/mlflow/artifacts/credentials-for-write\x1a\x04\x08\x02\x10\x00\x10\x03\x42,\n\x1f\x63om.databricks.api.proto.mlflow\x90\x01\x01\xa0\x01\x01\xe2?\x02\x10\x01')
_ARTIFACTCREDENTIALTYPE = DESCRIPTOR.enum_types_by_name['ArtifactCredentialType']
ArtifactCredentialType = enum_type_wrapper.EnumTypeWrapper(_ARTIFACTCREDENTIALTYPE)
AZURE_SAS_URI = 1
AWS_PRESIGNED_URL = 2
GCP_SIGNED_URL = 3
_ARTIFACTCREDENTIALINFO = DESCRIPTOR.message_types_by_name['ArtifactCredentialInfo']
_ARTIFACTCREDENTIALINFO_HTTPHEADER = _ARTIFACTCREDENTIALINFO.nested_types_by_name['HttpHeader']
_GETCREDENTIALSFORREAD = DESCRIPTOR.message_types_by_name['GetCredentialsForRead']
_GETCREDENTIALSFORREAD_RESPONSE = _GETCREDENTIALSFORREAD.nested_types_by_name['Response']
_GETCREDENTIALSFORWRITE = DESCRIPTOR.message_types_by_name['GetCredentialsForWrite']
_GETCREDENTIALSFORWRITE_RESPONSE = _GETCREDENTIALSFORWRITE.nested_types_by_name['Response']
ArtifactCredentialInfo = _reflection.GeneratedProtocolMessageType('ArtifactCredentialInfo', (_message.Message,), {
'HttpHeader' : _reflection.GeneratedProtocolMessageType('HttpHeader', (_message.Message,), {
'DESCRIPTOR' : _ARTIFACTCREDENTIALINFO_HTTPHEADER,
'__module__' : 'databricks_artifacts_pb2'
# @@protoc_insertion_point(class_scope:mlflow.ArtifactCredentialInfo.HttpHeader)
})
,
'DESCRIPTOR' : _ARTIFACTCREDENTIALINFO,
'__module__' : 'databricks_artifacts_pb2'
# @@protoc_insertion_point(class_scope:mlflow.ArtifactCredentialInfo)
})
_sym_db.RegisterMessage(ArtifactCredentialInfo)
_sym_db.RegisterMessage(ArtifactCredentialInfo.HttpHeader)
GetCredentialsForRead = _reflection.GeneratedProtocolMessageType('GetCredentialsForRead', (_message.Message,), {
'Response' : _reflection.GeneratedProtocolMessageType('Response', (_message.Message,), {
'DESCRIPTOR' : _GETCREDENTIALSFORREAD_RESPONSE,
'__module__' : 'databricks_artifacts_pb2'
# @@protoc_insertion_point(class_scope:mlflow.GetCredentialsForRead.Response)
})
,
'DESCRIPTOR' : _GETCREDENTIALSFORREAD,
'__module__' : 'databricks_artifacts_pb2'
# @@protoc_insertion_point(class_scope:mlflow.GetCredentialsForRead)
})
_sym_db.RegisterMessage(GetCredentialsForRead)
_sym_db.RegisterMessage(GetCredentialsForRead.Response)
GetCredentialsForWrite = _reflection.GeneratedProtocolMessageType('GetCredentialsForWrite', (_message.Message,), {
'Response' : _reflection.GeneratedProtocolMessageType('Response', (_message.Message,), {
'DESCRIPTOR' : _GETCREDENTIALSFORWRITE_RESPONSE,
'__module__' : 'databricks_artifacts_pb2'
# @@protoc_insertion_point(class_scope:mlflow.GetCredentialsForWrite.Response)
})
,
'DESCRIPTOR' : _GETCREDENTIALSFORWRITE,
'__module__' : 'databricks_artifacts_pb2'
# @@protoc_insertion_point(class_scope:mlflow.GetCredentialsForWrite)
})
_sym_db.RegisterMessage(GetCredentialsForWrite)
_sym_db.RegisterMessage(GetCredentialsForWrite.Response)
_DATABRICKSMLFLOWARTIFACTSSERVICE = DESCRIPTOR.services_by_name['DatabricksMlflowArtifactsService']
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\037com.databricks.api.proto.mlflow\220\001\001\240\001\001\342?\002\020\001'
_GETCREDENTIALSFORREAD.fields_by_name['run_id']._options = None
_GETCREDENTIALSFORREAD.fields_by_name['run_id']._serialized_options = b'\370\206\031\001'
_GETCREDENTIALSFORREAD._options = None
_GETCREDENTIALSFORREAD._serialized_options = b'\342?(\n&com.databricks.rpc.RPC[$this.Response]\342?1\n/com.databricks.mlflow.api.MlflowTrackingMessage'
_GETCREDENTIALSFORWRITE.fields_by_name['run_id']._options = None
_GETCREDENTIALSFORWRITE.fields_by_name['run_id']._serialized_options = b'\370\206\031\001'
_GETCREDENTIALSFORWRITE._options = None
_GETCREDENTIALSFORWRITE._serialized_options = b'\342?(\n&com.databricks.rpc.RPC[$this.Response]\342?1\n/com.databricks.mlflow.api.MlflowTrackingMessage'
_DATABRICKSMLFLOWARTIFACTSSERVICE.methods_by_name['getCredentialsForRead']._options = None
_DATABRICKSMLFLOWARTIFACTSSERVICE.methods_by_name['getCredentialsForRead']._serialized_options = b'\362\206\0318\n4\n\004POST\022&/mlflow/artifacts/credentials-for-read\032\004\010\002\020\000\020\003'
_DATABRICKSMLFLOWARTIFACTSSERVICE.methods_by_name['getCredentialsForWrite']._options = None
_DATABRICKSMLFLOWARTIFACTSSERVICE.methods_by_name['getCredentialsForWrite']._serialized_options = b'\362\206\0319\n5\n\004POST\022\'/mlflow/artifacts/credentials-for-write\032\004\010\002\020\000\020\003'
_ARTIFACTCREDENTIALTYPE._serialized_start=866
_ARTIFACTCREDENTIALTYPE._serialized_end=952
_ARTIFACTCREDENTIALINFO._serialized_start=80
_ARTIFACTCREDENTIALINFO._serialized_end=303
_ARTIFACTCREDENTIALINFO_HTTPHEADER._serialized_start=262
_ARTIFACTCREDENTIALINFO_HTTPHEADER._serialized_end=303
_GETCREDENTIALSFORREAD._serialized_start=306
_GETCREDENTIALSFORREAD._serialized_end=583
_GETCREDENTIALSFORREAD_RESPONSE._serialized_start=387
_GETCREDENTIALSFORREAD_RESPONSE._serialized_end=486
_GETCREDENTIALSFORWRITE._serialized_start=586
_GETCREDENTIALSFORWRITE._serialized_end=864
_GETCREDENTIALSFORWRITE_RESPONSE._serialized_start=387
_GETCREDENTIALSFORWRITE_RESPONSE._serialized_end=486
_DATABRICKSMLFLOWARTIFACTSSERVICE._serialized_start=955
_DATABRICKSMLFLOWARTIFACTSSERVICE._serialized_end=1311
DatabricksMlflowArtifactsService = service_reflection.GeneratedServiceType('DatabricksMlflowArtifactsService', (_service.Service,), dict(
DESCRIPTOR = _DATABRICKSMLFLOWARTIFACTSSERVICE,
__module__ = 'databricks_artifacts_pb2'
))
DatabricksMlflowArtifactsService_Stub = service_reflection.GeneratedServiceStubType('DatabricksMlflowArtifactsService_Stub', (DatabricksMlflowArtifactsService,), dict(
DESCRIPTOR = _DATABRICKSMLFLOWARTIFACTSSERVICE,
__module__ = 'databricks_artifacts_pb2'
))
# @@protoc_insertion_point(module_scope)
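# The generated module exposes ArtifactCredentialType through an EnumTypeWrapper,
# with AZURE_SAS_URI = 1, AWS_PRESIGNED_URL = 2, GCP_SIGNED_URL = 3 as declared in
# the serialized descriptor above. A self-contained sketch of the value-to-name
# lookup that wrapper provides (the helper is illustrative, not part of the
# generated API):

```python
# Value-to-name mapping for ArtifactCredentialType, copied from the descriptor.
# The real EnumTypeWrapper exposes the same lookups via .Name() and .Value().
ARTIFACT_CREDENTIAL_TYPE = {
    1: "AZURE_SAS_URI",
    2: "AWS_PRESIGNED_URL",
    3: "GCP_SIGNED_URL",
}

def credential_type_name(value):
    # Mirrors ArtifactCredentialType.Name(value): unknown values raise.
    if value not in ARTIFACT_CREDENTIAL_TYPE:
        raise ValueError("unknown ArtifactCredentialType value: %d" % value)
    return ARTIFACT_CREDENTIAL_TYPE[value]

print(credential_type_name(2))  # AWS_PRESIGNED_URL
```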
| 68.582677 | 2,193 | 0.828817 | 1,003 | 8,710 | 6.881356 | 0.208375 | 0.013909 | 0.020864 | 0.032454 | 0.452912 | 0.34135 | 0.307157 | 0.251521 | 0.237612 | 0.210953 | 0 | 0.079133 | 0.056946 | 8,710 | 126 | 2,194 | 69.126984 | 0.76114 | 0.075086 | 0 | 0.208333 | 1 | 0.0625 | 0.404726 | 0.370896 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.104167 | 0 | 0.104167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
af93e031b575f53e9e8a2efce8d1de3819e957dd | 969 | py | Python | buildroot/support/testing/tests/package/test_python_txaio.py | TonyApuzzo/hassos | bb201fb84209a1bb5cf0611bd09e3610701d737d | [
"Apache-2.0"
] | 1 | 2019-02-12T06:53:47.000Z | 2019-02-12T06:53:47.000Z | buildroot/support/testing/tests/package/test_python_txaio.py | berg/hassos | 30b599acc6fda01e6a07181d01e8e03b365424f4 | [
"Apache-2.0"
] | null | null | null | buildroot/support/testing/tests/package/test_python_txaio.py | berg/hassos | 30b599acc6fda01e6a07181d01e8e03b365424f4 | [
"Apache-2.0"
] | null | null | null | from tests.package.test_python import TestPythonBase
class TestPythonPy2Txaio(TestPythonBase):
config = TestPythonBase.config + \
"""
BR2_PACKAGE_PYTHON=y
BR2_PACKAGE_PYTHON_TXAIO=y
BR2_PACKAGE_PYTHON_TWISTED=y
"""
def test_run(self):
self.login()
cmd = self.interpreter + " -c 'import txaio;"
cmd += "txaio.use_twisted();"
cmd += "f0 = txaio.create_future()'"
_, exit_code = self.emulator.run(cmd)
self.assertEqual(exit_code, 0)
class TestPythonPy3Txaio(TestPythonBase):
config = TestPythonBase.config + \
"""
BR2_PACKAGE_PYTHON3=y
BR2_PACKAGE_PYTHON_TXAIO=y
"""
def test_run(self):
self.login()
cmd = self.interpreter + " -c 'import txaio;"
cmd += "txaio.use_asyncio();"
cmd += "f0 = txaio.create_future()'"
_, exit_code = self.emulator.run(cmd)
self.assertEqual(exit_code, 0)
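# Both test classes drive the target the same way: they concatenate a one-shot
# `python -c` command string and run it through the emulator. A sketch of the
# string the Py3 variant builds (the interpreter path here is a stand-in; in the
# real test it comes from `self.interpreter`):

```python
interpreter = "python3"  # stand-in; TestPythonBase supplies the real target path
cmd = interpreter + " -c 'import txaio;"
cmd += "txaio.use_asyncio();"
cmd += "f0 = txaio.create_future()'"
# The concatenation yields a single shell command wrapping a one-line script.
assert cmd == "python3 -c 'import txaio;txaio.use_asyncio();f0 = txaio.create_future()'"
```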
| 27.685714 | 53 | 0.605779 | 106 | 969 | 5.292453 | 0.311321 | 0.089127 | 0.114082 | 0.090909 | 0.746881 | 0.746881 | 0.488414 | 0.488414 | 0.488414 | 0.488414 | 0 | 0.017094 | 0.275542 | 969 | 34 | 54 | 28.5 | 0.782051 | 0 | 0 | 0.736842 | 0 | 0 | 0.168176 | 0.056921 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.105263 | false | 0 | 0.157895 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
af9ffa2afa6f222eae7f47b0d9c40faec1abf95b | 763 | py | Python | app/adaptation/strategies/__init__.py | ManCla/TRAPP | 415dd302de0cf573584b57b6718cfed64380d353 | [
"MIT"
] | null | null | null | app/adaptation/strategies/__init__.py | ManCla/TRAPP | 415dd302de0cf573584b57b6718cfed64380d353 | [
"MIT"
] | null | null | null | app/adaptation/strategies/__init__.py | ManCla/TRAPP | 415dd302de0cf573584b57b6718cfed64380d353 | [
"MIT"
] | null | null | null | import app.Config as Config
from app.adaptation.strategies.AvoidOverloadedStreets import AvoidOverLoadedStreets
from app.adaptation.strategies.LoadBalancing import LoadBalancing
from app.adaptation.strategies.TunePlanningResolution import TunePlanningResolution
from app.adaptation.strategies.NoAdaptation import NoAdaptation
def get_adaptation_stategy(tick):
if Config.adaptation_strategy == "load_balancing":
return LoadBalancing(tick)
elif Config.adaptation_strategy == "avoid_overloaded_streets":
return AvoidOverLoadedStreets(tick)
elif Config.adaptation_strategy == "tune_planning_resolution":
return TunePlanningResolution(tick)
elif Config.adaptation_strategy == "no_adaptation":
return NoAdaptation(tick) | 47.6875 | 83 | 0.812582 | 76 | 763 | 8 | 0.368421 | 0.046053 | 0.111842 | 0.177632 | 0.157895 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124509 | 763 | 16 | 84 | 47.6875 | 0.91018 | 0 | 0 | 0 | 0 | 0 | 0.098168 | 0.062827 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.357143 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
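# The if/elif chain above maps Config.adaptation_strategy to a strategy class.
# The same dispatch can be sketched as a lookup table, which also makes an
# unknown strategy name an explicit error; the classes below are stubs standing
# in for the imported strategies, and the function name is illustrative (the
# module's own function is get_adaptation_stategy):

```python
class _Strategy:
    # Stub base standing in for the real strategy classes imported above.
    def __init__(self, tick):
        self.tick = tick

class LoadBalancing(_Strategy): pass
class AvoidOverLoadedStreets(_Strategy): pass
class TunePlanningResolution(_Strategy): pass
class NoAdaptation(_Strategy): pass

STRATEGIES = {
    "load_balancing": LoadBalancing,
    "avoid_overloaded_streets": AvoidOverLoadedStreets,
    "tune_planning_resolution": TunePlanningResolution,
    "no_adaptation": NoAdaptation,
}

def get_adaptation_strategy(name, tick):
    # Table lookup replaces the if/elif chain; unknown names fail loudly
    # instead of silently returning None.
    try:
        return STRATEGIES[name](tick)
    except KeyError:
        raise ValueError("unknown adaptation strategy: %s" % name)
```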
afa008fa3636a4fda9344f5f425591dcd3ce2b35 | 2,250 | py | Python | robobet/bet_placer/site_routers.py | Toffooo/robobet | e3522be66b963bec1439a0e06dde3ae2edbcf22c | [
"MIT"
] | null | null | null | robobet/bet_placer/site_routers.py | Toffooo/robobet | e3522be66b963bec1439a0e06dde3ae2edbcf22c | [
"MIT"
] | null | null | null | robobet/bet_placer/site_routers.py | Toffooo/robobet | e3522be66b963bec1439a0e06dde3ae2edbcf22c | [
"MIT"
] | null | null | null | import time
from abc import ABC, abstractmethod
from contextlib import contextmanager
from selenium import webdriver
from settings import Bet1X
from .schemas import RouterResponse, create_router_response
class AbstractRouter(ABC):
def __init__(self, username: int, password: str) -> None:
self.username = username
self.password = password
@abstractmethod
def get_balance(self) -> RouterResponse:
pass
class Bet1XRouter(AbstractRouter):
def __init__(self, username: int, password: str) -> None:
super().__init__(username, password)
@contextmanager
def login(self):
driver = webdriver.Firefox()
try:
driver.get(Bet1X.base_url)
drop_login_btn = driver.find_element_by_id("curLoginForm")
drop_login_btn.click()
auth_id_email = driver.find_element_by_id("auth_id_email")
auth_id_email.send_keys(self.username)
auth_form_password = driver.find_element_by_id("auth-form-password")
auth_form_password.send_keys(self.password)
login_btn = driver.find_element_by_class_name("auth-button")
login_btn.click()
            time.sleep(3)  # TODO: replace this sleep with an explicit wait (e.g. WebDriverWait)
yield driver
finally:
exit_btn = driver.find_element_by_class_name("exitLink")
exit_btn.click()
driver.close()
def get_balance(self) -> RouterResponse:
response = create_router_response(
path=Bet1X.get_balance,
linter={
"type": "ELEMENT",
"tag": "p",
"attrs": {"class": "top-b-acc__amount"},
},
)
return response
def get_live_matches(self) -> RouterResponse:
response = create_router_response(
path=Bet1X.base_url,
linter={
"type": "LIST",
"tag": "div",
"attrs": {"data-name": "dashboard-champ-content"},
"children": {
"type": "LIST",
"tag": "div",
"attrs": {"class": "c-events__item"},
},
},
)
return response
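# The login() contextmanager above always clicks the exit link and closes the
# driver, even when the body raises, because the cleanup sits in a finally
# clause after the yield. A generic sketch of that acquire/yield/finally shape
# (FakeDriver is a stand-in for webdriver.Firefox):

```python
from contextlib import contextmanager

class FakeDriver:
    # Stand-in for webdriver.Firefox(): records whether cleanup ran.
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

@contextmanager
def session(make_driver):
    driver = make_driver()
    try:
        yield driver      # body runs here (navigation, clicks, ...)
    finally:
        driver.close()    # mirrors the exitLink click + driver.close()

driver_ref = None
try:
    with session(FakeDriver) as d:
        driver_ref = d
        raise RuntimeError("page blew up")
except RuntimeError:
    pass
print(driver_ref.closed)  # True: cleanup ran despite the exception
```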
| 28.125 | 80 | 0.572889 | 230 | 2,250 | 5.313043 | 0.386957 | 0.040917 | 0.069558 | 0.077741 | 0.337152 | 0.268412 | 0.201309 | 0.150573 | 0 | 0 | 0 | 0.003953 | 0.325333 | 2,250 | 79 | 81 | 28.481013 | 0.801054 | 0.018222 | 0 | 0.233333 | 0 | 0 | 0.091074 | 0.010421 | 0 | 0 | 0 | 0.012658 | 0 | 1 | 0.1 | false | 0.116667 | 0.1 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
afa38416a46ffbadd01ef6f9767c7d7dcd6cff8c | 728 | py | Python | test_autoarray/plot/mat_wrap/test_visuals.py | jonathanfrawley/PyAutoArray_copy | c21e8859bdb20737352147b9904797ac99985b73 | [
"MIT"
] | 5 | 2019-09-26T02:18:25.000Z | 2021-12-11T16:29:20.000Z | test_autoarray/plot/mat_wrap/test_visuals.py | jonathanfrawley/PyAutoArray_copy | c21e8859bdb20737352147b9904797ac99985b73 | [
"MIT"
] | 3 | 2020-03-30T14:25:57.000Z | 2021-12-21T17:10:55.000Z | test_autoarray/plot/mat_wrap/test_visuals.py | jonathanfrawley/PyAutoArray_copy | c21e8859bdb20737352147b9904797ac99985b73 | [
"MIT"
] | 4 | 2020-03-03T11:35:41.000Z | 2022-01-21T17:37:35.000Z | import autoarray.plot as aplt
class TestAbstractVisuals:
def test__add_visuals_together__replaces_nones(self):
visuals_1 = aplt.Visuals2D(mask=1)
visuals_0 = aplt.Visuals2D(border=10)
visuals = visuals_0 + visuals_1
assert visuals.mask == 1
assert visuals.border == 10
assert visuals_1.mask == 1
assert visuals_1.border == 10
assert visuals_0.border == 10
assert visuals_0.mask == None
visuals_0 = aplt.Visuals2D(mask=1)
visuals_1 = aplt.Visuals2D(mask=2)
visuals = visuals_1 + visuals_0
assert visuals.mask == 1
assert visuals.border == None
assert visuals_1.mask == 2
| 26.962963 | 58 | 0.615385 | 90 | 728 | 4.755556 | 0.266667 | 0.273364 | 0.130841 | 0.126168 | 0.453271 | 0.172897 | 0.172897 | 0 | 0 | 0 | 0 | 0.063492 | 0.307692 | 728 | 26 | 59 | 28 | 0.785714 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
afb4f71db275429df6cd541025699f17931c91f6 | 791 | py | Python | hypernet/src/thermophysicalModels/specie/partitionFun/basic.py | christian-jacobsen/hypernet | 9f62e1531eb152cc08af0b0c6b09d6fde8d42400 | [
"Apache-2.0"
] | null | null | null | hypernet/src/thermophysicalModels/specie/partitionFun/basic.py | christian-jacobsen/hypernet | 9f62e1531eb152cc08af0b0c6b09d6fde8d42400 | [
"Apache-2.0"
] | null | null | null | hypernet/src/thermophysicalModels/specie/partitionFun/basic.py | christian-jacobsen/hypernet | 9f62e1531eb152cc08af0b0c6b09d6fde8d42400 | [
"Apache-2.0"
] | null | null | null | import abc
class Basic(object):
# Initialization
###########################################################################
def __init__(
self,
specie,
*args,
**kwargs
):
# Specie Properties
self.specie = specie
self.Q = 1.
self.dQdT = 0.
# Methods
###########################################################################
# Update ------------------------------------------------------------------
def update(self, T):
self.Q = self.Q_(T)
self.dQdT = self.dQdT_(T)
# Partition functions -----------------------------------------------------
@abc.abstractmethod
def Q_(self, T):
pass
@abc.abstractmethod
def dQdT_(self, T):
pass
| 22.6 | 79 | 0.324905 | 55 | 791 | 4.527273 | 0.436364 | 0.060241 | 0.160643 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003466 | 0.270544 | 791 | 34 | 80 | 23.264706 | 0.428076 | 0.237674 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0.1 | 0.05 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
afd6664432eb837b50bee769a174cb19c16d821f | 329 | py | Python | server/bumf/api/views/auth.py | bumfiness/bumf | 71c404c0a8f804b8f0e127df3de6d8916db4c660 | [
"Apache-2.0"
] | 6 | 2017-01-07T17:59:46.000Z | 2017-02-10T13:19:46.000Z | server/bumf/api/views/auth.py | rixx/bumf | 71c404c0a8f804b8f0e127df3de6d8916db4c660 | [
"Apache-2.0"
] | null | null | null | server/bumf/api/views/auth.py | rixx/bumf | 71c404c0a8f804b8f0e127df3de6d8916db4c660 | [
"Apache-2.0"
] | null | null | null | from rest_framework import mixins, permissions, viewsets
from bumf.api.serializers import UserSerializer
from bumf.core.models import User
class UserView(mixins.CreateModelMixin, viewsets.GenericViewSet):
queryset = User.objects.none()
permission_classes = [permissions.AllowAny]
serializer_class = UserSerializer
| 29.909091 | 65 | 0.808511 | 36 | 329 | 7.305556 | 0.694444 | 0.060837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12462 | 329 | 10 | 66 | 32.9 | 0.913194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.428571 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
bb68e821c0d22516b0ff182dda7205c9ac77a256 | 763 | py | Python | huawei/temp.py | chengchengXu/try_python | 45d4b450d4503ee37072a1471bceddbfb48601ae | [
"MIT"
] | null | null | null | huawei/temp.py | chengchengXu/try_python | 45d4b450d4503ee37072a1471bceddbfb48601ae | [
"MIT"
] | null | null | null | huawei/temp.py | chengchengXu/try_python | 45d4b450d4503ee37072a1471bceddbfb48601ae | [
"MIT"
] | null | null | null | import requests
url = "https://iam.cn-south-1.myhuaweicloud.com/v3/auth/tokens"
payload="{\r\n\t\"auth\": {\r\n\t\t\"identity\": {\r\n\t\t\t\"methods\": [\"password\"],\r\n\t\t\t\"password\": {\r\n\t\t\t\t\"user\": {\r\n\t\t\t\t\t\"name\": \"hw13258492\",\r\n\t\t\t\t\t\"password\": \"Dq20201020\",\r\n\t\t\t\t\t\"domain\": {\r\n\t\t\t\t\t\t\"name\": \"hw13258492\"\r\n\t\t\t\t\t}\r\n\t\t\t\t}\r\n\t\t\t}\r\n\t\t},\r\n\t\t\"scope\": {\r\n\t\t\t\"domain\": {\r\n\t\t\t\t\"name\": \"hw13258492\"\r\n\t\t\t}\r\n\t\t}\r\n\t}\r\n }"
headers = {
'Content-Type': 'application/json;charset=utf8',
'Cookie': 'HWWAFSESID=77fa97f18e6595ee97; HWWAFSESTIME=1608188573994'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
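# The backslash-escaped payload string above is hard to read and easy to break.
# Building the same JSON body with json.dumps from a plain dict is equivalent
# and far less fragile; this sketch reuses the field values from the request
# above:

```python
import json

# Same auth body as the escaped string literal, expressed as a dict.
auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "hw13258492",
                    "password": "Dq20201020",
                    "domain": {"name": "hw13258492"},
                }
            },
        },
        "scope": {"domain": {"name": "hw13258492"}},
    }
}
payload = json.dumps(auth_body)  # pass this as data= to requests.request
print(json.loads(payload)["auth"]["scope"]["domain"]["name"])  # hw13258492
```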
| 54.5 | 448 | 0.605505 | 158 | 763 | 2.924051 | 0.278481 | 0.190476 | 0.175325 | 0.147186 | 0.354978 | 0.339827 | 0.279221 | 0.266234 | 0.266234 | 0.253247 | 0 | 0.083449 | 0.057667 | 763 | 13 | 449 | 58.692308 | 0.55911 | 0 | 0 | 0 | 0 | 0.222222 | 0.568807 | 0.228047 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.111111 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
bb933016aedc5ef107f7ebf8c513626667bcb53b | 2,259 | py | Python | app/models.py | Rahmatullina/FinalYearProject | 326f521b9f600dbbc7ace2223bd5aafc79b2267c | [
"Apache-2.0"
] | null | null | null | app/models.py | Rahmatullina/FinalYearProject | 326f521b9f600dbbc7ace2223bd5aafc79b2267c | [
"Apache-2.0"
] | 9 | 2020-09-26T01:09:35.000Z | 2022-02-10T01:32:30.000Z | app/models.py | Rahmatullina/FinalYearProject | 326f521b9f600dbbc7ace2223bd5aafc79b2267c | [
"Apache-2.0"
] | null | null | null | from django.contrib.auth.models import User
from django.db import models
# class Group(models.Model):
# name = models.CharField(max_length=10, unique=True)
#
# def __str__(self):
# return self.name
class Student(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
middle_name = models.CharField(max_length=100, default='default')
group = models.CharField(max_length=100)
degree = models.CharField(
max_length=20,
choices=[('UG','Студент Бакалавриата'),('G','Магистрант')],
default='UG',
)
entry_year = models.IntegerField()
graduate_year = models.IntegerField()
study_form = models.CharField(
max_length=20,
choices=[('FT','Очная форма обучения'),('PT','Заочная форма обучения')],
default='FT',
)
def __str__(self):
return self.user.first_name + " " + self.user.last_name + " " + self.group
class Educator(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
middle_name = models.CharField(max_length=100, default='default')
departament = models.CharField(max_length=100)
position = models.CharField(max_length=100)
def __str__(self):
return self.user.first_name + " " + self.user.last_name + " " + self.middle_name + " " + self.position + " " + self.departament
class Timetable(models.Model):
id = models.AutoField(primary_key=True)
group = models.CharField(max_length=100)
start = models.CharField(default='2020-03-02T09:30:00', max_length=25, blank=True)
end = models.CharField(default='2020-03-03T09:30:00', max_length=25, blank=True)
subject = models.CharField(max_length=500)
educator = models.ForeignKey('Educator', on_delete=models.CASCADE)
def __str__(self):
return str(self.start) + " " + self.group + " " + self.subject
class Attendance(models.Model):
timetable = models.ForeignKey('Timetable', on_delete=models.CASCADE)
student = models.ForeignKey('Student', on_delete=models.CASCADE)
attended = models.BooleanField()
emotion = models.CharField(max_length=200, blank=True)
def __str__(self):
return str(self.timetable.start) + " " + str(self.student) + " " + self.timetable.subject | 36.435484 | 137 | 0.681718 | 277 | 2,259 | 5.382671 | 0.281588 | 0.130785 | 0.132797 | 0.177062 | 0.480215 | 0.36888 | 0.250838 | 0.218645 | 0.218645 | 0.218645 | 0 | 0.033441 | 0.179283 | 2,259 | 62 | 138 | 36.435484 | 0.770766 | 0.057548 | 0 | 0.27907 | 0 | 0 | 0.079567 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.093023 | false | 0 | 0.046512 | 0.093023 | 0.813953 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
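# The degree and study_form fields above use Django's (value, label) choice
# tuples, from which Django derives helpers like get_degree_display(). A plain-
# Python sketch of that lookup, using the choices declared on Student (no
# Django required; display_label is illustrative, not a Django API):

```python
# (value, label) pairs as declared on Student.degree above.
DEGREE_CHOICES = [("UG", "Студент Бакалавриата"), ("G", "Магистрант")]

def display_label(choices, value):
    # Mirrors Django's get_FOO_display(): unknown values fall back to the
    # raw stored value.
    return dict(choices).get(value, value)

print(display_label(DEGREE_CHOICES, "G"))  # Магистрант
```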
bba41de40149cd72d66cc91246bf3368afe15ae5 | 45,688 | py | Python | pysnmp/SNA-SDLC-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/SNA-SDLC-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/SNA-SDLC-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module SNA-SDLC-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/SNA-SDLC-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 16:52:15 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, Integer, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "OctetString", "Integer", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ValueRangeConstraint, ConstraintsUnion, ValueSizeConstraint, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ValueRangeConstraint", "ConstraintsUnion", "ValueSizeConstraint", "SingleValueConstraint")
ifOperStatus, ifAdminStatus, ifIndex = mibBuilder.importSymbols("IF-MIB", "ifOperStatus", "ifAdminStatus", "ifIndex")
ModuleCompliance, ObjectGroup, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "ObjectGroup", "NotificationGroup")
TimeTicks, IpAddress, NotificationType, MibIdentifier, Counter32, ModuleIdentity, ObjectIdentity, mib_2, Integer32, Bits, MibScalar, MibTable, MibTableRow, MibTableColumn, Counter64, iso, Gauge32, Unsigned32 = mibBuilder.importSymbols("SNMPv2-SMI", "TimeTicks", "IpAddress", "NotificationType", "MibIdentifier", "Counter32", "ModuleIdentity", "ObjectIdentity", "mib-2", "Integer32", "Bits", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Counter64", "iso", "Gauge32", "Unsigned32")
RowStatus, DisplayString, TimeInterval, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "RowStatus", "DisplayString", "TimeInterval", "TextualConvention")
snaDLC = ModuleIdentity((1, 3, 6, 1, 2, 1, 41))
if mibBuilder.loadTexts: snaDLC.setLastUpdated('9411150000Z')
if mibBuilder.loadTexts: snaDLC.setOrganization('IETF SNA DLC MIB Working Group')
sdlc = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1))
sdlcPortGroup = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 1))
sdlcLSGroup = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 2))
sdlcPortAdminTable = MibTable((1, 3, 6, 1, 2, 1, 41, 1, 1, 1), )
if mibBuilder.loadTexts: sdlcPortAdminTable.setStatus('current')
sdlcPortAdminEntry = MibTableRow((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: sdlcPortAdminEntry.setStatus('current')
sdlcPortAdminName = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 10))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminName.setStatus('current')
sdlcPortAdminRole = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("primary", 1), ("secondary", 2), ("negotiable", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminRole.setStatus('current')
sdlcPortAdminType = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("leased", 1), ("switched", 2))).clone('leased')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminType.setStatus('current')
sdlcPortAdminTopology = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("pointToPoint", 1), ("multipoint", 2))).clone('pointToPoint')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminTopology.setStatus('current')
sdlcPortAdminISTATUS = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("inactive", 1), ("active", 2))).clone('active')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminISTATUS.setStatus('current')
sdlcPortAdminACTIVTO = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 6), TimeInterval()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminACTIVTO.setStatus('current')
sdlcPortAdminPAUSE = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 7), TimeInterval().clone(200)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminPAUSE.setStatus('current')
sdlcPortAdminSERVLIM = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 8), Integer32().clone(20)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminSERVLIM.setStatus('current')
sdlcPortAdminSlowPollTimer = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 1, 1, 9), TimeInterval().clone(2000)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: sdlcPortAdminSlowPollTimer.setStatus('current')
sdlcPortOperTable = MibTable((1, 3, 6, 1, 2, 1, 41, 1, 1, 2), )
if mibBuilder.loadTexts: sdlcPortOperTable.setStatus('current')
sdlcPortOperEntry = MibTableRow((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: sdlcPortOperEntry.setStatus('current')
sdlcPortOperName = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 8))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperName.setStatus('current')
sdlcPortOperRole = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("primary", 1), ("secondary", 2), ("undefined", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperRole.setStatus('current')
sdlcPortOperType = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("leased", 1), ("switched", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperType.setStatus('current')
sdlcPortOperTopology = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("pointToPoint", 1), ("multipoint", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperTopology.setStatus('current')
sdlcPortOperISTATUS = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("inactive", 1), ("active", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperISTATUS.setStatus('current')
sdlcPortOperACTIVTO = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 6), TimeInterval()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperACTIVTO.setStatus('current')
sdlcPortOperPAUSE = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 7), TimeInterval()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperPAUSE.setStatus('current')
sdlcPortOperSlowPollMethod = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("servlim", 1), ("pollpause", 2), ("other", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperSlowPollMethod.setStatus('current')
sdlcPortOperSERVLIM = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 9), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperSERVLIM.setStatus('current')
sdlcPortOperSlowPollTimer = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 10), TimeInterval()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperSlowPollTimer.setStatus('current')
sdlcPortOperLastModifyTime = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 11), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperLastModifyTime.setStatus('current')
sdlcPortOperLastFailTime = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 12), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperLastFailTime.setStatus('current')
sdlcPortOperLastFailCause = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 2, 1, 13), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("undefined", 1), ("physical", 2))).clone('undefined')).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortOperLastFailCause.setStatus('current')
sdlcPortStatsTable = MibTable((1, 3, 6, 1, 2, 1, 41, 1, 1, 3), )
if mibBuilder.loadTexts: sdlcPortStatsTable.setStatus('current')
sdlcPortStatsEntry = MibTableRow((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: sdlcPortStatsEntry.setStatus('current')
sdlcPortStatsPhysicalFailures = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsPhysicalFailures.setStatus('current')
sdlcPortStatsInvalidAddresses = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsInvalidAddresses.setStatus('current')
sdlcPortStatsDwarfFrames = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsDwarfFrames.setStatus('current')
sdlcPortStatsPollsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsPollsIn.setStatus('current')
sdlcPortStatsPollsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsPollsOut.setStatus('current')
sdlcPortStatsPollRspsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsPollRspsIn.setStatus('current')
sdlcPortStatsPollRspsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 7), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsPollRspsOut.setStatus('current')
sdlcPortStatsLocalBusies = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 8), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsLocalBusies.setStatus('current')
sdlcPortStatsRemoteBusies = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsRemoteBusies.setStatus('current')
sdlcPortStatsIFramesIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsIFramesIn.setStatus('current')
sdlcPortStatsIFramesOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 11), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsIFramesOut.setStatus('current')
sdlcPortStatsOctetsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsOctetsIn.setStatus('current')
sdlcPortStatsOctetsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 13), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsOctetsOut.setStatus('current')
sdlcPortStatsProtocolErrs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsProtocolErrs.setStatus('current')
sdlcPortStatsActivityTOs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 15), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsActivityTOs.setStatus('current')
sdlcPortStatsRNRLIMITs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 16), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsRNRLIMITs.setStatus('current')
sdlcPortStatsRetriesExps = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 17), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsRetriesExps.setStatus('current')
sdlcPortStatsRetransmitsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 18), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsRetransmitsIn.setStatus('current')
sdlcPortStatsRetransmitsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 1, 3, 1, 19), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcPortStatsRetransmitsOut.setStatus('current')
sdlcLSAdminTable = MibTable((1, 3, 6, 1, 2, 1, 41, 1, 2, 1), )
if mibBuilder.loadTexts: sdlcLSAdminTable.setStatus('current')
sdlcLSAdminEntry = MibTableRow((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"), (0, "SNA-SDLC-MIB", "sdlcLSAddress"))
if mibBuilder.loadTexts: sdlcLSAdminEntry.setStatus('current')
sdlcLSAddress = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAddress.setStatus('current')
sdlcLSAdminName = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 2), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 10))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminName.setStatus('current')
sdlcLSAdminState = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("inactive", 1), ("active", 2))).clone('active')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminState.setStatus('current')
sdlcLSAdminISTATUS = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("inactive", 1), ("active", 2))).clone('active')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminISTATUS.setStatus('current')
sdlcLSAdminMAXDATASend = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 5), Integer32()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminMAXDATASend.setStatus('current')
sdlcLSAdminMAXDATARcv = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 6), Integer32()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminMAXDATARcv.setStatus('current')
sdlcLSAdminREPLYTO = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 7), TimeInterval().clone(100)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminREPLYTO.setStatus('current')
sdlcLSAdminMAXIN = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 127)).clone(7)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminMAXIN.setStatus('current')
sdlcLSAdminMAXOUT = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 127)).clone(1)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminMAXOUT.setStatus('current')
sdlcLSAdminMODULO = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 10), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(8, 128))).clone(namedValues=NamedValues(("eight", 8), ("onetwentyeight", 128))).clone('eight')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminMODULO.setStatus('current')
sdlcLSAdminRETRIESm = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 128)).clone(15)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminRETRIESm.setStatus('current')
sdlcLSAdminRETRIESt = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 12), TimeInterval()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminRETRIESt.setStatus('current')
sdlcLSAdminRETRIESn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 13), Integer32()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminRETRIESn.setStatus('current')
sdlcLSAdminRNRLIMIT = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 14), TimeInterval().clone(18000)).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminRNRLIMIT.setStatus('current')
sdlcLSAdminDATMODE = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 15), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("half", 1), ("full", 2))).clone('half')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminDATMODE.setStatus('current')
sdlcLSAdminGPoll = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 16), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 254))).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminGPoll.setStatus('current')
sdlcLSAdminSimRim = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 17), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("no", 1), ("yes", 2))).clone('no')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminSimRim.setStatus('current')
sdlcLSAdminXmitRcvCap = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 18), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("twa", 1), ("tws", 2))).clone('twa')).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminXmitRcvCap.setStatus('current')
sdlcLSAdminRowStatus = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 1, 1, 19), RowStatus()).setMaxAccess("readcreate")
if mibBuilder.loadTexts: sdlcLSAdminRowStatus.setStatus('current')
sdlcLSOperTable = MibTable((1, 3, 6, 1, 2, 1, 41, 1, 2, 2), )
if mibBuilder.loadTexts: sdlcLSOperTable.setStatus('current')
sdlcLSOperEntry = MibTableRow((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"), (0, "SNA-SDLC-MIB", "sdlcLSAddress"))
if mibBuilder.loadTexts: sdlcLSOperEntry.setStatus('current')
sdlcLSOperName = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 1), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(1, 10))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperName.setStatus('current')
sdlcLSOperRole = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("primary", 1), ("secondary", 2), ("undefined", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperRole.setStatus('current')
sdlcLSOperState = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("discontacted", 1), ("contactPending", 2), ("contacted", 3), ("discontactPending", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperState.setStatus('current')
sdlcLSOperMAXDATASend = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 4), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperMAXDATASend.setStatus('current')
sdlcLSOperREPLYTO = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 5), TimeInterval()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperREPLYTO.setStatus('current')
sdlcLSOperMAXIN = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 127))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperMAXIN.setStatus('current')
sdlcLSOperMAXOUT = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 127))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperMAXOUT.setStatus('current')
sdlcLSOperMODULO = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 8), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(8, 128))).clone(namedValues=NamedValues(("eight", 8), ("onetwentyeight", 128))).clone('eight')).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperMODULO.setStatus('current')
sdlcLSOperRETRIESm = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 128))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperRETRIESm.setStatus('current')
sdlcLSOperRETRIESt = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 10), TimeInterval()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperRETRIESt.setStatus('current')
sdlcLSOperRETRIESn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 127))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperRETRIESn.setStatus('current')
sdlcLSOperRNRLIMIT = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 12), TimeInterval()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperRNRLIMIT.setStatus('current')
sdlcLSOperDATMODE = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 13), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("half", 1), ("full", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperDATMODE.setStatus('current')
sdlcLSOperLastModifyTime = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 14), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastModifyTime.setStatus('current')
sdlcLSOperLastFailTime = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 15), TimeTicks()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastFailTime.setStatus('current')
sdlcLSOperLastFailCause = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 16), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8))).clone(namedValues=NamedValues(("undefined", 1), ("rxFRMR", 2), ("txFRMR", 3), ("noResponse", 4), ("protocolErr", 5), ("noActivity", 6), ("rnrLimit", 7), ("retriesExpired", 8))).clone('undefined')).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastFailCause.setStatus('current')
sdlcLSOperLastFailCtrlIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 17), OctetString().subtype(subtypeSpec=ValueSizeConstraint(1, 2))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastFailCtrlIn.setStatus('current')
sdlcLSOperLastFailCtrlOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 18), OctetString().subtype(subtypeSpec=ValueSizeConstraint(1, 2))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastFailCtrlOut.setStatus('current')
sdlcLSOperLastFailFRMRInfo = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 19), OctetString().subtype(subtypeSpec=ValueSizeConstraint(3, 3)).setFixedLength(3)).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastFailFRMRInfo.setStatus('current')
sdlcLSOperLastFailREPLYTOs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 20), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperLastFailREPLYTOs.setStatus('current')
sdlcLSOperEcho = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 21), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("no", 1), ("yes", 2))).clone('no')).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperEcho.setStatus('current')
sdlcLSOperGPoll = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 22), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 254))).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperGPoll.setStatus('current')
sdlcLSOperSimRim = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 23), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("no", 1), ("yes", 2))).clone('no')).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperSimRim.setStatus('current')
sdlcLSOperXmitRcvCap = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 2, 1, 24), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("twa", 1), ("tws", 2))).clone('twa')).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSOperXmitRcvCap.setStatus('current')
sdlcLSStatsTable = MibTable((1, 3, 6, 1, 2, 1, 41, 1, 2, 3), )
if mibBuilder.loadTexts: sdlcLSStatsTable.setStatus('current')
sdlcLSStatsEntry = MibTableRow((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1), ).setIndexNames((0, "IF-MIB", "ifIndex"), (0, "SNA-SDLC-MIB", "sdlcLSAddress"))
if mibBuilder.loadTexts: sdlcLSStatsEntry.setStatus('current')
sdlcLSStatsBLUsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsBLUsIn.setStatus('current')
sdlcLSStatsBLUsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsBLUsOut.setStatus('current')
sdlcLSStatsOctetsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsOctetsIn.setStatus('current')
sdlcLSStatsOctetsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsOctetsOut.setStatus('current')
sdlcLSStatsPollsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsPollsIn.setStatus('current')
sdlcLSStatsPollsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsPollsOut.setStatus('current')
sdlcLSStatsPollRspsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 7), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsPollRspsOut.setStatus('current')
sdlcLSStatsPollRspsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 8), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsPollRspsIn.setStatus('current')
sdlcLSStatsLocalBusies = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsLocalBusies.setStatus('current')
sdlcLSStatsRemoteBusies = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 10), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRemoteBusies.setStatus('current')
sdlcLSStatsIFramesIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 11), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsIFramesIn.setStatus('current')
sdlcLSStatsIFramesOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsIFramesOut.setStatus('current')
sdlcLSStatsUIFramesIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 13), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsUIFramesIn.setStatus('current')
sdlcLSStatsUIFramesOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsUIFramesOut.setStatus('current')
sdlcLSStatsXIDsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 15), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsXIDsIn.setStatus('current')
sdlcLSStatsXIDsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 16), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsXIDsOut.setStatus('current')
sdlcLSStatsTESTsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 17), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsTESTsIn.setStatus('current')
sdlcLSStatsTESTsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 18), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsTESTsOut.setStatus('current')
sdlcLSStatsREJsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 19), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsREJsIn.setStatus('current')
sdlcLSStatsREJsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 20), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsREJsOut.setStatus('current')
sdlcLSStatsFRMRsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 21), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsFRMRsIn.setStatus('current')
sdlcLSStatsFRMRsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 22), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsFRMRsOut.setStatus('current')
sdlcLSStatsSIMsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 23), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsSIMsIn.setStatus('current')
sdlcLSStatsSIMsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 24), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsSIMsOut.setStatus('current')
sdlcLSStatsRIMsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 25), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRIMsIn.setStatus('current')
sdlcLSStatsRIMsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 26), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRIMsOut.setStatus('current')
sdlcLSStatsDISCIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 27), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsDISCIn.setStatus('current')
sdlcLSStatsDISCOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 28), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsDISCOut.setStatus('current')
sdlcLSStatsUAIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 29), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsUAIn.setStatus('current')
sdlcLSStatsUAOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 30), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsUAOut.setStatus('current')
sdlcLSStatsDMIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 31), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsDMIn.setStatus('current')
sdlcLSStatsDMOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 32), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsDMOut.setStatus('current')
sdlcLSStatsSNRMIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 33), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsSNRMIn.setStatus('current')
sdlcLSStatsSNRMOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 34), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsSNRMOut.setStatus('current')
sdlcLSStatsProtocolErrs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 35), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsProtocolErrs.setStatus('current')
sdlcLSStatsActivityTOs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 36), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsActivityTOs.setStatus('current')
sdlcLSStatsRNRLIMITs = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 37), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRNRLIMITs.setStatus('current')
sdlcLSStatsRetriesExps = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 38), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRetriesExps.setStatus('current')
sdlcLSStatsRetransmitsIn = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 39), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRetransmitsIn.setStatus('current')
sdlcLSStatsRetransmitsOut = MibTableColumn((1, 3, 6, 1, 2, 1, 41, 1, 2, 3, 1, 40), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: sdlcLSStatsRetransmitsOut.setStatus('current')
sdlcTraps = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 3))
sdlcPortStatusChange = NotificationType((1, 3, 6, 1, 2, 1, 41, 1, 3, 1)).setObjects(("IF-MIB", "ifIndex"), ("IF-MIB", "ifAdminStatus"), ("IF-MIB", "ifOperStatus"), ("SNA-SDLC-MIB", "sdlcPortOperLastFailTime"), ("SNA-SDLC-MIB", "sdlcPortOperLastFailCause"))
if mibBuilder.loadTexts: sdlcPortStatusChange.setStatus('current')
sdlcLSStatusChange = NotificationType((1, 3, 6, 1, 2, 1, 41, 1, 3, 2)).setObjects(("IF-MIB", "ifIndex"), ("SNA-SDLC-MIB", "sdlcLSAddress"), ("SNA-SDLC-MIB", "sdlcLSOperState"), ("SNA-SDLC-MIB", "sdlcLSAdminState"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailTime"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailCause"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailFRMRInfo"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailCtrlIn"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailCtrlOut"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailREPLYTOs"))
if mibBuilder.loadTexts: sdlcLSStatusChange.setStatus('current')
sdlcConformance = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 4))
sdlcCompliances = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 4, 1))
sdlcGroups = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 4, 2))
sdlcCoreCompliance = ModuleCompliance((1, 3, 6, 1, 2, 1, 41, 1, 4, 1, 1)).setObjects(("SNA-SDLC-MIB", "sdlcCorePortAdminGroup"), ("SNA-SDLC-MIB", "sdlcCorePortOperGroup"), ("SNA-SDLC-MIB", "sdlcCorePortStatsGroup"), ("SNA-SDLC-MIB", "sdlcCoreLSAdminGroup"), ("SNA-SDLC-MIB", "sdlcCoreLSOperGroup"), ("SNA-SDLC-MIB", "sdlcCoreLSStatsGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCoreCompliance = sdlcCoreCompliance.setStatus('current')
sdlcPrimaryCompliance = ModuleCompliance((1, 3, 6, 1, 2, 1, 41, 1, 4, 1, 2)).setObjects(("SNA-SDLC-MIB", "sdlcPrimaryGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcPrimaryCompliance = sdlcPrimaryCompliance.setStatus('current')
sdlcPrimaryMultipointCompliance = ModuleCompliance((1, 3, 6, 1, 2, 1, 41, 1, 4, 1, 3)).setObjects(("SNA-SDLC-MIB", "sdlcPrimaryMultipointGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcPrimaryMultipointCompliance = sdlcPrimaryMultipointCompliance.setStatus('current')
sdlcCoreGroups = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1))
sdlcCorePortAdminGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1, 1)).setObjects(("SNA-SDLC-MIB", "sdlcPortAdminName"), ("SNA-SDLC-MIB", "sdlcPortAdminRole"), ("SNA-SDLC-MIB", "sdlcPortAdminType"), ("SNA-SDLC-MIB", "sdlcPortAdminTopology"), ("SNA-SDLC-MIB", "sdlcPortAdminISTATUS"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCorePortAdminGroup = sdlcCorePortAdminGroup.setStatus('current')
sdlcCorePortOperGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1, 2)).setObjects(("SNA-SDLC-MIB", "sdlcPortOperName"), ("SNA-SDLC-MIB", "sdlcPortOperRole"), ("SNA-SDLC-MIB", "sdlcPortOperType"), ("SNA-SDLC-MIB", "sdlcPortOperTopology"), ("SNA-SDLC-MIB", "sdlcPortOperISTATUS"), ("SNA-SDLC-MIB", "sdlcPortOperACTIVTO"), ("SNA-SDLC-MIB", "sdlcPortOperLastFailTime"), ("SNA-SDLC-MIB", "sdlcPortOperLastFailCause"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCorePortOperGroup = sdlcCorePortOperGroup.setStatus('current')
sdlcCorePortStatsGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1, 3)).setObjects(("SNA-SDLC-MIB", "sdlcPortStatsPhysicalFailures"), ("SNA-SDLC-MIB", "sdlcPortStatsInvalidAddresses"), ("SNA-SDLC-MIB", "sdlcPortStatsDwarfFrames"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCorePortStatsGroup = sdlcCorePortStatsGroup.setStatus('current')
sdlcCoreLSAdminGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1, 4)).setObjects(("SNA-SDLC-MIB", "sdlcLSAddress"), ("SNA-SDLC-MIB", "sdlcLSAdminName"), ("SNA-SDLC-MIB", "sdlcLSAdminState"), ("SNA-SDLC-MIB", "sdlcLSAdminISTATUS"), ("SNA-SDLC-MIB", "sdlcLSAdminMAXDATASend"), ("SNA-SDLC-MIB", "sdlcLSAdminMAXDATARcv"), ("SNA-SDLC-MIB", "sdlcLSAdminMAXIN"), ("SNA-SDLC-MIB", "sdlcLSAdminMAXOUT"), ("SNA-SDLC-MIB", "sdlcLSAdminMODULO"), ("SNA-SDLC-MIB", "sdlcLSAdminRETRIESm"), ("SNA-SDLC-MIB", "sdlcLSAdminRETRIESt"), ("SNA-SDLC-MIB", "sdlcLSAdminRETRIESn"), ("SNA-SDLC-MIB", "sdlcLSAdminRNRLIMIT"), ("SNA-SDLC-MIB", "sdlcLSAdminDATMODE"), ("SNA-SDLC-MIB", "sdlcLSAdminGPoll"), ("SNA-SDLC-MIB", "sdlcLSAdminSimRim"), ("SNA-SDLC-MIB", "sdlcLSAdminRowStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCoreLSAdminGroup = sdlcCoreLSAdminGroup.setStatus('current')
sdlcCoreLSOperGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1, 5)).setObjects(("SNA-SDLC-MIB", "sdlcLSOperRole"), ("SNA-SDLC-MIB", "sdlcLSOperState"), ("SNA-SDLC-MIB", "sdlcLSOperMAXDATASend"), ("SNA-SDLC-MIB", "sdlcLSOperMAXIN"), ("SNA-SDLC-MIB", "sdlcLSOperMAXOUT"), ("SNA-SDLC-MIB", "sdlcLSOperMODULO"), ("SNA-SDLC-MIB", "sdlcLSOperRETRIESm"), ("SNA-SDLC-MIB", "sdlcLSOperRETRIESt"), ("SNA-SDLC-MIB", "sdlcLSOperRETRIESn"), ("SNA-SDLC-MIB", "sdlcLSOperRNRLIMIT"), ("SNA-SDLC-MIB", "sdlcLSOperDATMODE"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailTime"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailCause"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailCtrlIn"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailCtrlOut"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailFRMRInfo"), ("SNA-SDLC-MIB", "sdlcLSOperLastFailREPLYTOs"), ("SNA-SDLC-MIB", "sdlcLSOperEcho"), ("SNA-SDLC-MIB", "sdlcLSOperGPoll"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCoreLSOperGroup = sdlcCoreLSOperGroup.setStatus('current')
sdlcCoreLSStatsGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 1, 6)).setObjects(("SNA-SDLC-MIB", "sdlcLSStatsBLUsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsBLUsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsOctetsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsOctetsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsPollsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsPollsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsPollRspsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsPollRspsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsLocalBusies"), ("SNA-SDLC-MIB", "sdlcLSStatsRemoteBusies"), ("SNA-SDLC-MIB", "sdlcLSStatsIFramesIn"), ("SNA-SDLC-MIB", "sdlcLSStatsIFramesOut"), ("SNA-SDLC-MIB", "sdlcLSStatsRetransmitsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsRetransmitsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsUIFramesIn"), ("SNA-SDLC-MIB", "sdlcLSStatsUIFramesOut"), ("SNA-SDLC-MIB", "sdlcLSStatsXIDsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsXIDsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsTESTsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsTESTsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsREJsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsREJsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsFRMRsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsFRMRsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsSIMsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsSIMsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsRIMsIn"), ("SNA-SDLC-MIB", "sdlcLSStatsRIMsOut"), ("SNA-SDLC-MIB", "sdlcLSStatsProtocolErrs"), ("SNA-SDLC-MIB", "sdlcLSStatsRNRLIMITs"), ("SNA-SDLC-MIB", "sdlcLSStatsRetriesExps"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcCoreLSStatsGroup = sdlcCoreLSStatsGroup.setStatus('current')
sdlcPrimaryGroups = MibIdentifier((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 2))
sdlcPrimaryGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 2, 1)).setObjects(("SNA-SDLC-MIB", "sdlcPortAdminPAUSE"), ("SNA-SDLC-MIB", "sdlcPortOperPAUSE"), ("SNA-SDLC-MIB", "sdlcLSAdminREPLYTO"), ("SNA-SDLC-MIB", "sdlcLSOperREPLYTO"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcPrimaryGroup = sdlcPrimaryGroup.setStatus('current')
sdlcPrimaryMultipointGroup = ObjectGroup((1, 3, 6, 1, 2, 1, 41, 1, 4, 2, 2, 2)).setObjects(("SNA-SDLC-MIB", "sdlcPortAdminSERVLIM"), ("SNA-SDLC-MIB", "sdlcPortAdminSlowPollTimer"), ("SNA-SDLC-MIB", "sdlcPortOperSlowPollMethod"), ("SNA-SDLC-MIB", "sdlcPortOperSERVLIM"), ("SNA-SDLC-MIB", "sdlcPortOperSlowPollTimer"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    sdlcPrimaryMultipointGroup = sdlcPrimaryMultipointGroup.setStatus('current')
mibBuilder.exportSymbols("SNA-SDLC-MIB", sdlcPortStatsRetriesExps=sdlcPortStatsRetriesExps, sdlcLSAdminTable=sdlcLSAdminTable, sdlcLSStatsXIDsOut=sdlcLSStatsXIDsOut, sdlcLSOperLastFailFRMRInfo=sdlcLSOperLastFailFRMRInfo, sdlcPortOperSlowPollMethod=sdlcPortOperSlowPollMethod, sdlcLSOperEcho=sdlcLSOperEcho, sdlcPortStatsRetransmitsIn=sdlcPortStatsRetransmitsIn, sdlcPortOperLastModifyTime=sdlcPortOperLastModifyTime, sdlcLSAdminState=sdlcLSAdminState, sdlcLSOperREPLYTO=sdlcLSOperREPLYTO, sdlcCoreLSAdminGroup=sdlcCoreLSAdminGroup, sdlcLSOperMAXIN=sdlcLSOperMAXIN, sdlcLSStatsRIMsIn=sdlcLSStatsRIMsIn, sdlcPortStatsOctetsIn=sdlcPortStatsOctetsIn, sdlcPortOperTopology=sdlcPortOperTopology, sdlcPortStatsInvalidAddresses=sdlcPortStatsInvalidAddresses, sdlcLSStatsDISCIn=sdlcLSStatsDISCIn, sdlcLSStatsRetransmitsIn=sdlcLSStatsRetransmitsIn, sdlcLSStatsDISCOut=sdlcLSStatsDISCOut, sdlcLSStatusChange=sdlcLSStatusChange, sdlcLSAdminREPLYTO=sdlcLSAdminREPLYTO, sdlcPortStatsRetransmitsOut=sdlcPortStatsRetransmitsOut, sdlcLSAdminRETRIESm=sdlcLSAdminRETRIESm, sdlcLSOperRETRIESt=sdlcLSOperRETRIESt, sdlcLSOperGPoll=sdlcLSOperGPoll, sdlcLSStatsTESTsOut=sdlcLSStatsTESTsOut, sdlcLSStatsFRMRsOut=sdlcLSStatsFRMRsOut, sdlcLSStatsUIFramesIn=sdlcLSStatsUIFramesIn, sdlcLSAdminEntry=sdlcLSAdminEntry, sdlcPortStatsIFramesIn=sdlcPortStatsIFramesIn, sdlcPortAdminType=sdlcPortAdminType, sdlcLSStatsXIDsIn=sdlcLSStatsXIDsIn, sdlcLSStatsPollsOut=sdlcLSStatsPollsOut, sdlcPortAdminACTIVTO=sdlcPortAdminACTIVTO, sdlcLSOperLastModifyTime=sdlcLSOperLastModifyTime, sdlcLSStatsREJsIn=sdlcLSStatsREJsIn, sdlcLSOperState=sdlcLSOperState, sdlcLSStatsOctetsOut=sdlcLSStatsOctetsOut, sdlcLSOperDATMODE=sdlcLSOperDATMODE, sdlcLSStatsRetransmitsOut=sdlcLSStatsRetransmitsOut, sdlcPrimaryMultipointGroup=sdlcPrimaryMultipointGroup, sdlcLSAddress=sdlcLSAddress, sdlcPrimaryCompliance=sdlcPrimaryCompliance, sdlcPortStatsEntry=sdlcPortStatsEntry, sdlcLSOperMAXDATASend=sdlcLSOperMAXDATASend, 
sdlcCoreLSStatsGroup=sdlcCoreLSStatsGroup, sdlcPortOperACTIVTO=sdlcPortOperACTIVTO, sdlcLSOperMODULO=sdlcLSOperMODULO, sdlcLSStatsUAIn=sdlcLSStatsUAIn, sdlcPortAdminSlowPollTimer=sdlcPortAdminSlowPollTimer, sdlcPrimaryMultipointCompliance=sdlcPrimaryMultipointCompliance, sdlcConformance=sdlcConformance, sdlcLSOperXmitRcvCap=sdlcLSOperXmitRcvCap, sdlcCoreGroups=sdlcCoreGroups, sdlcLSStatsUIFramesOut=sdlcLSStatsUIFramesOut, sdlcPortOperLastFailCause=sdlcPortOperLastFailCause, sdlcLSStatsOctetsIn=sdlcLSStatsOctetsIn, sdlcLSOperLastFailCtrlIn=sdlcLSOperLastFailCtrlIn, sdlcLSStatsBLUsOut=sdlcLSStatsBLUsOut, sdlcPortOperTable=sdlcPortOperTable, sdlcLSOperSimRim=sdlcLSOperSimRim, sdlcLSStatsEntry=sdlcLSStatsEntry, sdlcPortAdminSERVLIM=sdlcPortAdminSERVLIM, sdlcLSStatsRNRLIMITs=sdlcLSStatsRNRLIMITs, sdlcLSStatsTESTsIn=sdlcLSStatsTESTsIn, sdlcGroups=sdlcGroups, sdlcPortOperRole=sdlcPortOperRole, sdlcLSStatsTable=sdlcLSStatsTable, sdlcLSStatsFRMRsIn=sdlcLSStatsFRMRsIn, sdlcCoreLSOperGroup=sdlcCoreLSOperGroup, sdlcPortStatsPollsOut=sdlcPortStatsPollsOut, sdlcLSOperRole=sdlcLSOperRole, sdlcLSStatsRIMsOut=sdlcLSStatsRIMsOut, sdlcPortOperLastFailTime=sdlcPortOperLastFailTime, sdlcLSOperLastFailTime=sdlcLSOperLastFailTime, sdlcPortStatsProtocolErrs=sdlcPortStatsProtocolErrs, sdlcLSAdminMAXDATASend=sdlcLSAdminMAXDATASend, sdlcLSStatsProtocolErrs=sdlcLSStatsProtocolErrs, sdlcTraps=sdlcTraps, sdlcLSOperEntry=sdlcLSOperEntry, sdlcLSOperRETRIESm=sdlcLSOperRETRIESm, sdlcPrimaryGroups=sdlcPrimaryGroups, sdlcLSAdminISTATUS=sdlcLSAdminISTATUS, sdlcLSAdminSimRim=sdlcLSAdminSimRim, sdlcLSStatsSNRMIn=sdlcLSStatsSNRMIn, sdlcPortAdminTable=sdlcPortAdminTable, snaDLC=snaDLC, sdlcPortStatsPhysicalFailures=sdlcPortStatsPhysicalFailures, sdlcLSOperTable=sdlcLSOperTable, sdlcPortGroup=sdlcPortGroup, sdlcPortOperSlowPollTimer=sdlcPortOperSlowPollTimer, sdlcLSStatsPollsIn=sdlcLSStatsPollsIn, sdlcPortStatsLocalBusies=sdlcPortStatsLocalBusies, sdlcPortStatsRemoteBusies=sdlcPortStatsRemoteBusies, 
sdlcPortAdminName=sdlcPortAdminName, sdlcPortAdminTopology=sdlcPortAdminTopology, sdlcLSStatsIFramesOut=sdlcLSStatsIFramesOut, sdlcLSStatsUAOut=sdlcLSStatsUAOut, sdlcLSStatsRetriesExps=sdlcLSStatsRetriesExps, sdlcPortOperType=sdlcPortOperType, sdlcLSStatsREJsOut=sdlcLSStatsREJsOut, sdlcPortAdminEntry=sdlcPortAdminEntry, sdlcPortStatsOctetsOut=sdlcPortStatsOctetsOut, sdlcPortAdminRole=sdlcPortAdminRole, sdlcCoreCompliance=sdlcCoreCompliance, sdlcPortAdminISTATUS=sdlcPortAdminISTATUS, sdlcPrimaryGroup=sdlcPrimaryGroup, sdlcPortOperEntry=sdlcPortOperEntry, sdlcLSStatsRemoteBusies=sdlcLSStatsRemoteBusies, sdlcLSAdminMODULO=sdlcLSAdminMODULO, sdlcCorePortOperGroup=sdlcCorePortOperGroup, sdlcLSStatsDMIn=sdlcLSStatsDMIn, sdlcLSAdminXmitRcvCap=sdlcLSAdminXmitRcvCap, sdlcLSAdminRETRIESn=sdlcLSAdminRETRIESn, sdlcLSAdminRNRLIMIT=sdlcLSAdminRNRLIMIT, sdlcLSStatsDMOut=sdlcLSStatsDMOut, PYSNMP_MODULE_ID=snaDLC, sdlcLSAdminName=sdlcLSAdminName, sdlcCompliances=sdlcCompliances, sdlcLSOperLastFailCtrlOut=sdlcLSOperLastFailCtrlOut, sdlcLSStatsSIMsIn=sdlcLSStatsSIMsIn, sdlcPortOperISTATUS=sdlcPortOperISTATUS, sdlcLSAdminRETRIESt=sdlcLSAdminRETRIESt, sdlcLSAdminMAXOUT=sdlcLSAdminMAXOUT, sdlcLSOperName=sdlcLSOperName, sdlcLSOperMAXOUT=sdlcLSOperMAXOUT, sdlcLSStatsLocalBusies=sdlcLSStatsLocalBusies, sdlcLSAdminDATMODE=sdlcLSAdminDATMODE, sdlcLSAdminMAXIN=sdlcLSAdminMAXIN, sdlcPortOperName=sdlcPortOperName, sdlcLSGroup=sdlcLSGroup, sdlcLSAdminGPoll=sdlcLSAdminGPoll, sdlcPortStatsPollRspsOut=sdlcPortStatsPollRspsOut, sdlcLSStatsPollRspsIn=sdlcLSStatsPollRspsIn, sdlcLSAdminMAXDATARcv=sdlcLSAdminMAXDATARcv, sdlcPortStatsTable=sdlcPortStatsTable, sdlcLSOperLastFailCause=sdlcLSOperLastFailCause, sdlcPortStatsActivityTOs=sdlcPortStatsActivityTOs, sdlcLSOperRETRIESn=sdlcLSOperRETRIESn, sdlcPortAdminPAUSE=sdlcPortAdminPAUSE, sdlcLSAdminRowStatus=sdlcLSAdminRowStatus, sdlcPortOperPAUSE=sdlcPortOperPAUSE, sdlcPortStatsPollRspsIn=sdlcPortStatsPollRspsIn, sdlcLSStatsBLUsIn=sdlcLSStatsBLUsIn, 
sdlcPortOperSERVLIM=sdlcPortOperSERVLIM, sdlcLSStatsIFramesIn=sdlcLSStatsIFramesIn, sdlcPortStatsIFramesOut=sdlcPortStatsIFramesOut, sdlcCorePortAdminGroup=sdlcCorePortAdminGroup, sdlcLSOperRNRLIMIT=sdlcLSOperRNRLIMIT, sdlcLSStatsSNRMOut=sdlcLSStatsSNRMOut, sdlc=sdlc, sdlcPortStatsPollsIn=sdlcPortStatsPollsIn, sdlcPortStatusChange=sdlcPortStatusChange, sdlcPortStatsDwarfFrames=sdlcPortStatsDwarfFrames, sdlcLSStatsActivityTOs=sdlcLSStatsActivityTOs, sdlcLSOperLastFailREPLYTOs=sdlcLSOperLastFailREPLYTOs, sdlcCorePortStatsGroup=sdlcCorePortStatsGroup, sdlcLSStatsPollRspsOut=sdlcLSStatsPollRspsOut, sdlcLSStatsSIMsOut=sdlcLSStatsSIMsOut, sdlcPortStatsRNRLIMITs=sdlcPortStatsRNRLIMITs)
| 134.376471 | 6,632 | 0.757179 | 5,035 | 45,688 | 6.870109 | 0.068719 | 0.017114 | 0.016825 | 0.018386 | 0.450117 | 0.406435 | 0.344512 | 0.322512 | 0.317944 | 0.241566 | 0 | 0.062923 | 0.078905 | 45,688 | 339 | 6,633 | 134.772861 | 0.759048 | 0.00696 | 0 | 0.033435 | 0 | 0 | 0.154273 | 0.022376 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021277 | 0 | 0.021277 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
bbcedb79e863608779477209f699fa7b175b6341 | 511 | py | Python | jenkinscli/cli/main.py | bernardoVale/jenkins-cli | fe2507e1c9cfaee3eee05d7802857313071c9d5f | [
"Apache-2.0"
] | null | null | null | jenkinscli/cli/main.py | bernardoVale/jenkins-cli | fe2507e1c9cfaee3eee05d7802857313071c9d5f | [
"Apache-2.0"
] | 2 | 2017-04-23T00:12:13.000Z | 2017-04-23T21:28:25.000Z | jenkinscli/cli/main.py | bernardoVale/jenkins-cli | fe2507e1c9cfaee3eee05d7802857313071c9d5f | [
"Apache-2.0"
] | null | null | null | import sys
import jenkins
from .parser import parser
# Tries to fetch configuration from the environment
try:
from . import DEFAULT_CONFIG
url, username, password = DEFAULT_CONFIG
except Exception:
    print("Cannot collect configuration from the environment")
    sys.exit(1)
def main():
server = jenkins.Jenkins(url, username=username, password=password, timeout=240)
    if len(sys.argv) <= 1:
        parser.print_help()
        parser.exit()
    args = parser.parse_args()
args.func(args, server)
| 20.44 | 84 | 0.696673 | 65 | 511 | 5.415385 | 0.538462 | 0.096591 | 0.113636 | 0.176136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012407 | 0.21135 | 511 | 24 | 85 | 21.291667 | 0.861042 | 0.09589 | 0 | 0 | 0 | 0 | 0.106522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.125 | 0.25 | 0 | 0.3125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
bbd54565cd953fb1907ec1f41bd0d9c23b09d825 | 280 | py | Python | tests.py | esoergel/submission_api_example | bdeb5633b1139f565a74ef578ee566115fecbf01 | [
"Apache-2.0"
] | null | null | null | tests.py | esoergel/submission_api_example | bdeb5633b1139f565a74ef578ee566115fecbf01 | [
"Apache-2.0"
] | 1 | 2021-12-06T20:29:54.000Z | 2021-12-13T20:32:37.000Z | tests.py | esoergel/submission_api_example | bdeb5633b1139f565a74ef578ee566115fecbf01 | [
"Apache-2.0"
] | 1 | 2021-12-06T20:27:00.000Z | 2021-12-06T20:27:00.000Z | #!/usr/bin/env python3
import doctest
import unittest
import submit_data
class DocTests(unittest.TestCase):
def test_doctests(self):
results = doctest.testmod(submit_data)
self.assertEqual(results.failed, 0)
if __name__ == '__main__':
unittest.main()
| 17.5 | 46 | 0.714286 | 34 | 280 | 5.558824 | 0.676471 | 0.10582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008734 | 0.182143 | 280 | 15 | 47 | 18.666667 | 0.816594 | 0.075 | 0 | 0 | 0 | 0 | 0.031008 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
bbec070aa2ff91436f601475c91ff9c89d4a0f38 | 328 | py | Python | Projetos/exercicio_mes_por_extenso.py | anderson-br-ti/python | d65d851f0934267dff9256dfdac09b100efb3b45 | [
"MIT"
] | null | null | null | Projetos/exercicio_mes_por_extenso.py | anderson-br-ti/python | d65d851f0934267dff9256dfdac09b100efb3b45 | [
"MIT"
] | null | null | null | Projetos/exercicio_mes_por_extenso.py | anderson-br-ti/python | d65d851f0934267dff9256dfdac09b100efb3b45 | [
"MIT"
] | null | null | null | #dia, mês, ano = input ('Qual é a data do seu nascimento (no formato dd/mm/aaaa)? ').split('/')
#meses = '''janeiro fevereiro março abril maio junho julho
# agosto setembro outubro novembro dezembro'''.split()
#print (dia, 'de', meses [int (mês) -1], 'de', ano)
for letra in 'aeiou':
print (letra)
| 23.428571 | 95 | 0.609756 | 46 | 328 | 4.347826 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003968 | 0.231707 | 328 | 13 | 96 | 25.230769 | 0.789683 | 0.832317 | 0 | 0 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
a566fd8c43c862cab7e4250aec18c8b8495f8ba7 | 84 | py | Python | Python/if-else.py | bunny8469/Hello-World | 722b5961cbcd9b2c2eec2cb6aa700eaa451e008b | [
"MIT"
] | 133 | 2021-01-15T16:29:40.000Z | 2022-03-21T16:35:42.000Z | Python/if-else.py | bunny8469/Hello-World | 722b5961cbcd9b2c2eec2cb6aa700eaa451e008b | [
"MIT"
] | 117 | 2021-01-17T08:54:22.000Z | 2022-01-17T16:38:11.000Z | Python/if-else.py | bunny8469/Hello-World | 722b5961cbcd9b2c2eec2cb6aa700eaa451e008b | [
"MIT"
] | 146 | 2021-01-15T12:57:19.000Z | 2022-03-15T20:10:23.000Z | t=int(input())
if t >= 1:
    print("greater than or equal to 1")
else:
    print("smaller than 1")
a586943fd221daa122c231bb2dd76459100f8c10 | 1,720 | py | Python | lib/dataprovider.py | mycarcoach/engine | bfc777c5ff26acb58716fe53da28f1d5dca548a0 | [
"MIT"
] | null | null | null | lib/dataprovider.py | mycarcoach/engine | bfc777c5ff26acb58716fe53da28f1d5dca548a0 | [
"MIT"
] | 1 | 2019-03-09T19:50:31.000Z | 2019-03-09T19:50:31.000Z | lib/dataprovider.py | mycarcoach/engine | bfc777c5ff26acb58716fe53da28f1d5dca548a0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from enum import Enum
import json
import queue
import threading
import paho.mqtt.client as mqtt
# module-level queues: paho-mqtt's callback API only takes plain functions, so shared state lives here
speedQueue = queue.Queue()
accelQueue = queue.Queue()
breakePressQueue = queue.Queue()
signals = ["ESP_v_Signal", "ESP_Laengsbeschl", "ESP_Bremsdruck"]
def on_connect(client, userdata, flags, rc):
print("Connected with result code "+str(rc))
for signal in signals:
print(f"subscribing to /signal/{signal}")
client.subscribe(f"/signal/{signal}")
def on_message(client, userdata, msg):
    if msg.topic == "/signal/ESP_v_Signal":
        speedQueue.put(msg)
    elif msg.topic == "/signal/ESP_Laengsbeschl":
        accelQueue.put(msg)
    elif msg.topic == "/signal/ESP_Bremsdruck":
breakePressQueue.put(msg)
else:
print("other topic")
# provides the data acquired from the websocket server
class DataProvider():
def __init__(self):
self.t = threading.Thread(target=self.inputThread)
self.t.start()
def inputThread(self):
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("46.101.168.60", 1883, 60)
client.loop_forever()
def getSpeed(self):
return speedQueue.get()
def getAccel(self):
return accelQueue.get()
def getBreakePress(self):
return breakePressQueue.get()
def getSpeedSize(self):
return speedQueue.qsize()
def accelQueueSize(self):
return accelQueue.qsize()
def breakePressQueueSize(self):
return breakePressQueue.qsize()
| 26.875 | 65 | 0.647093 | 202 | 1,720 | 5.415842 | 0.435644 | 0.054845 | 0.018282 | 0.02925 | 0.034735 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012336 | 0.24593 | 1,720 | 63 | 66 | 27.301587 | 0.831149 | 0.07907 | 0 | 0 | 0 | 0 | 0.135705 | 0.030303 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0.133333 | 0.488889 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
a591cc8ee2915d9acf7cce2ee91c62003443ed0b | 178 | py | Python | Python/FizzBuzz(O(n)).py | Hinal-Srivastava/Hacktoberfest-2020-FizzBuzz | f49d658f90b462daab01b0be6d9356f42f2f7b92 | [
"Unlicense"
] | null | null | null | Python/FizzBuzz(O(n)).py | Hinal-Srivastava/Hacktoberfest-2020-FizzBuzz | f49d658f90b462daab01b0be6d9356f42f2f7b92 | [
"Unlicense"
] | null | null | null | Python/FizzBuzz(O(n)).py | Hinal-Srivastava/Hacktoberfest-2020-FizzBuzz | f49d658f90b462daab01b0be6d9356f42f2f7b92 | [
"Unlicense"
] | null | null | null | for i in range(1,101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
print(i) | 19.777778 | 26 | 0.432584 | 31 | 178 | 2.483871 | 0.516129 | 0.116883 | 0.103896 | 0.12987 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102564 | 0.342697 | 178 | 9 | 27 | 19.777778 | 0.555556 | 0 | 0 | 0 | 0 | 0 | 0.089385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.444444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
a5a3ae23d259efe022e464d250946d6077202b1c | 134 | py | Python | hotkey/clicker.py | Saevon/Recipes | ab8ca9b5244805d545da2dd1d80d249f1ec6057d | [
"MIT"
] | null | null | null | hotkey/clicker.py | Saevon/Recipes | ab8ca9b5244805d545da2dd1d80d249f1ec6057d | [
"MIT"
] | null | null | null | hotkey/clicker.py | Saevon/Recipes | ab8ca9b5244805d545da2dd1d80d249f1ec6057d | [
"MIT"
] | null | null | null | from key_events import *
import os
os.nice(40)
pos = position()
for x in range(1000):
#time.sleep(.25)
mouseclick(*pos)
| 9.571429 | 24 | 0.641791 | 21 | 134 | 4.047619 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 0.223881 | 134 | 13 | 25 | 10.307692 | 0.740385 | 0.11194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
3c20bb728cd9dc2786ccc7d0797ccf4b518033f1 | 10,398 | py | Python | src/investigations/TFCorrectedFeatureAssociations.py | willgdjones/GTEx | c56a5d548978545ab8a98e74236d52343113e9e6 | [
"MIT"
] | 2 | 2019-02-21T13:05:31.000Z | 2020-02-02T14:37:29.000Z | src/investigations/TFCorrectedFeatureAssociations.py | willgdjones/GTEx | c56a5d548978545ab8a98e74236d52343113e9e6 | [
"MIT"
] | null | null | null | src/investigations/TFCorrectedFeatureAssociations.py | willgdjones/GTEx | c56a5d548978545ab8a98e74236d52343113e9e6 | [
"MIT"
] | null | null | null | import os
import sys
import pickle
import matplotlib.pyplot as plt
import numpy as np
import h5py
import argparse
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from matplotlib.colors import Normalize
sys.path.insert(0, os.getcwd())
from src.utils.helpers import *
import statsmodels.stats.multitest as smm
from gprofiler import GProfiler
import multiprocess as mp
from tqdm import tqdm
import time
from pebble import ProcessPool, ProcessExpired
from concurrent.futures import TimeoutError
GTEx_directory = '/hps/nobackup/research/stegle/users/willj/GTEx'
parser = argparse.ArgumentParser(description='Collection of experiments. Runs on the cluster.')
parser.add_argument('-g', '--group', help='Experiment group', required=True)
parser.add_argument('-n', '--name', help='Experiment name', required=True)
parser.add_argument('-p', '--params', help='Parameters')
parser.add_argument('-l', '--parallel', help='Parallel')
args = vars(parser.parse_args())
group = args['group']
name = args['name']
parameter_key = args['params']
parallel = args['parallel']
class TFCorrectedFeatureAssociations():
@staticmethod
def compute_pvalues():
os.makedirs(GTEx_directory + '/intermediate_results/{}'.format(group), exist_ok=True)
M = 2000
k = 1
print('Computing technical factor corrected associations')
t, a, m, s = parameter_key.split('_')
Y, X, dIDs, filt_tIDs, tfs, ths, t_idx = filter_and_correct_expression_and_image_features(t, m, a, s, M, k, pc_correction=False, tf_correction=True)
N = Y.shape[1]
print('Computing {} x {} = {} associations for: '.format(N, M, N*M), t, a, m, s)
print ("Normalising data")
n_Y = np.zeros_like(Y)
for i in range(1024):
original_feature = Y[:,i]
normalized_feature = normalize_feature(original_feature)
n_Y[:,i] = normalized_feature
res = compute_pearsonR(n_Y, X)
results = [res, filt_tIDs]
pickle.dump(results, open(GTEx_directory + '/intermediate_results/{group}/{name}_{key}.pickle'.format(group=group, name=name, key=parameter_key), 'wb'))
Rs_real, pvs_real, pvs_1 = res
Rs_real[np.isnan(Rs_real)] = 0
sorted_idx = np.argsort(Rs_real.flatten()**2)[::-1]
top10associations = []
for i in range(10):
position = sorted_idx[i]
expected_R = Rs_real.flatten()[sorted_idx[i]]
f, t = np.argwhere(Rs_real == expected_R)[0]
print (f,t)
feature = n_Y[:,f]
transcript = X[:,t]
R, pv = pearsonr(feature, transcript)
assert R == expected_R
transcript_name = get_gene_name(filt_tIDs[t])
association_data = [feature, f, transcript, transcript_name, pv, R]
top10associations.append(association_data)
pickle.dump(top10associations, open(GTEx_directory + '/results/{group}/top10associations_{key}.pickle'.format(group=group, key=parameter_key), 'wb'))
@staticmethod
def top10associations():
t, a, m, s = parameter_key.split('_')
results = pickle.load(open(GTEx_directory + '/intermediate_results/{group}/compute_pvalues_{key}.pickle'.format(group=group, key=parameter_key), 'rb'))
res, filt_tIDs = results
Rs_real, pvs_real, pvs_1 = res
Rs_real[np.isnan(Rs_real)] = 0
sorted_idx = np.argsort(Rs_real.flatten()**2)[::-1]
top10associations = []
for i in range(10):
position = sorted_idx[i]
expected_R = Rs_real.flatten()[sorted_idx[i]]
f, t = np.argwhere(Rs_real == expected_R)[0]
print (f,t)
feature = n_Y[:,f]
transcript = X[:,t]
R, pv = pearsonr(feature, transcript)
assert R == expected_R
transcript_name = get_gene_name(filt_tIDs[t])
association_data = [feature, f, transcript, transcript_name, pv, R]
top10associations.append(association_data)
pickle.dump(top10associations, open(GTEx_directory + '/results/{group}/top10associations_{key}.pickle'.format(group=group, key=parameter_key), 'wb'))
@staticmethod
def associations_across_patchsizes():
import statsmodels.stats.multitest as smm
os.makedirs(GTEx_directory + '/results/{}'.format(group), exist_ok=True)
print ("Loading association data")
association_results, most_varying_feature_idx, filt_transcriptIDs = pickle.load(open(GTEx_directory + '/intermediate_results/TFCorrectedFeatureAssociations/corrected_pvalues.pickle', 'rb'))
SIZES = [128, 256, 512, 1024, 2048, 4096]
        ALPHAS = [0.01, 0.001, 0.0001, 0.00001]
print ("Calculating Bonferroni significant associations:")
all_counts = []
for alph in ALPHAS:
print ("Alpha: ", alph)
size_counts = []
for s in SIZES:
print ("Patch size: ", s)
pvalues = association_results['{}_{}_{}_{}'.format('Lung','mean','retrained',s)][1].flatten()
counts = sum(smm.multipletests(pvalues, method='bonferroni',alpha=alph)[0])
size_counts.append(counts)
all_counts.append(size_counts)
print ("Saving results")
pickle.dump(all_counts, open(GTEx_directory + '/results/{group}/{name}.pickle'.format(group=group, name=name), 'wb'))
@staticmethod
def associations_raw_vs_retrained():
import statsmodels.stats.multitest as smm
os.makedirs(GTEx_directory + '/results/{}'.format(group), exist_ok=True)
print ("Loading association data")
association_results, most_varying_feature_idx, filt_transcriptIDs = pickle.load(open(GTEx_directory + '/intermediate_results/TFCorrectedFeatureAssociations/corrected_pvalues.pickle', 'rb'))
alpha = 0.0001
SIZES = [128, 256, 512, 1024, 2048, 4096]
MODELS = ['retrained', 'raw']
print ("Calculating Bonferroni significant associations:")
all_counts = []
for m in MODELS:
print ("Model: ", m)
model_counts = []
for s in SIZES:
print ("Patch size: ", s)
pvalues = association_results['{}_{}_{}_{}'.format('Lung','mean',m,s)][1].flatten()
counts = sum(smm.multipletests(pvalues, method='bonferroni',alpha=alpha)[0])
model_counts.append(counts)
all_counts.append(model_counts)
print ("Saving results")
pickle.dump(all_counts, open(GTEx_directory + '/results/{group}/{name}.pickle'.format(group=group, name=name), 'wb'))
@staticmethod
def associations_mean_vs_median():
import statsmodels.stats.multitest as smm
os.makedirs(GTEx_directory + '/results/{}'.format(group), exist_ok=True)
print ("Loading association data")
association_results, most_varying_feature_idx, filt_transcriptIDs = pickle.load(open(GTEx_directory + '/intermediate_results/TFCorrectedFeatureAssociations/corrected_pvalues.pickle', 'rb'))
alpha = 0.0001
SIZES = [128, 256, 512, 1024, 2048, 4096]
AGGREGATIONS = ['mean', 'median']
print ("Calculating Bonferroni significant associations:")
all_counts = []
for a in AGGREGATIONS:
print ("Aggregation: ", a)
aggregation_counts = []
for s in SIZES:
print ("Patch size: ", s)
pvalues = association_results['{}_{}_{}_{}'.format('Lung',a,'retrained',s)][1].flatten()
counts = sum(smm.multipletests(pvalues, method='bonferroni',alpha=alpha)[0])
aggregation_counts.append(counts)
all_counts.append(aggregation_counts)
print ("Saving results")
pickle.dump(all_counts, open(GTEx_directory + '/results/{group}/{name}.pickle'.format(group=group, name=name), 'wb'))
@staticmethod
def features_with_significant_transcripts():
import statsmodels.stats.multitest as smm
os.makedirs(GTEx_directory + '/results/{}'.format(group), exist_ok=True)
print ("Loading association data")
association_results, most_varying_feature_idx, filt_transcriptIDs = pickle.load(open(GTEx_directory + '/intermediate_results/TFCorrectedFeatureAssociations/corrected_pvalues.pickle', 'rb'))
alpha = 0.0001
SIZES = [128, 256, 512, 1024, 2048, 4096]
print ("Calculating Bonferroni significant associations:")
size_counts = []
for s in SIZES:
print ("Patch size: ", s)
pvalues = association_results['{}_{}_{}_{}'.format('Lung','mean','retrained',s)][1]
original_shape = pvalues.shape
counts = sum(np.sum(smm.multipletests(pvalues.flatten(),method='bonferroni',alpha=alpha)[0].reshape(original_shape),axis=1) > 0)
size_counts.append(counts)
print ("Saving results")
pickle.dump(size_counts, open(GTEx_directory + '/results/{group}/{name}.pickle'.format(group=group, name=name), 'wb'))
@staticmethod
def transcripts_with_significant_features():
import statsmodels.stats.multitest as smm
os.makedirs(GTEx_directory + '/results/{}'.format(group), exist_ok=True)
print ("Loading association data")
association_results, most_varying_feature_idx, filt_transcriptIDs = pickle.load(open(GTEx_directory + '/intermediate_results/TFCorrectedFeatureAssociations/corrected_pvalues.pickle', 'rb'))
alpha = 0.0001
SIZES = [128, 256, 512, 1024, 2048, 4096]
print ("Calculating Bonferroni significant associations:")
size_counts = []
for s in SIZES:
print ("Patch size: ", s)
pvalues = association_results['{}_{}_{}_{}'.format('Lung','mean','retrained',s)][1]
original_shape = pvalues.shape
counts = sum(np.sum(smm.multipletests(pvalues.flatten(),method='bonferroni',alpha=alpha)[0].reshape(original_shape),axis=0) > 0)
size_counts.append(counts)
print ("Saving results")
pickle.dump(size_counts, open(GTEx_directory + '/results/{group}/{name}.pickle'.format(group=group, name=name), 'wb'))
if __name__ == '__main__':
eval(group + '().' + name + '()')
| 37.537906 | 197 | 0.642624 | 1,201 | 10,398 | 5.378018 | 0.17985 | 0.042267 | 0.036848 | 0.030655 | 0.735872 | 0.725654 | 0.688497 | 0.670847 | 0.636321 | 0.636321 | 0 | 0.024984 | 0.226294 | 10,398 | 276 | 198 | 37.673913 | 0.777874 | 0 | 0 | 0.585492 | 0 | 0 | 0.181189 | 0.077515 | 0 | 0 | 0 | 0 | 0.010363 | 1 | 0.036269 | false | 0 | 0.124352 | 0 | 0.165803 | 0.145078 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3c275e44b313e5315243ebf3ab70ee8d232f76e8 | 378 | py | Python | manage.py | dragoon/prestashop-sync | 7106909ba7a1ce7d0368afd1b41be6462c6d2310 | [
"MIT"
] | 4 | 2016-01-19T11:35:28.000Z | 2019-03-25T20:06:22.000Z | manage.py | dragoon/prestashop-sync | 7106909ba7a1ce7d0368afd1b41be6462c6d2310 | [
"MIT"
] | null | null | null | manage.py | dragoon/prestashop-sync | 7106909ba7a1ce7d0368afd1b41be6462c6d2310 | [
"MIT"
] | 2 | 2019-03-25T20:06:24.000Z | 2021-08-31T08:53:52.000Z | #!/usr/bin/env python
import os
import sys
proj_dir = os.path.abspath(os.path.dirname(__file__))
external_dir = os.path.join(proj_dir, 'external')
sys.path.insert(0, external_dir)
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
| 29.076923 | 64 | 0.772487 | 56 | 378 | 4.785714 | 0.571429 | 0.067164 | 0.067164 | 0.164179 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002959 | 0.10582 | 378 | 12 | 65 | 31.5 | 0.789941 | 0.05291 | 0 | 0 | 0 | 0 | 0.128852 | 0.061625 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
3c450644985aeac8aba6404d0d9b48c5deb59426 | 210 | py | Python | service.py | nricklin/roadmapper9000 | 2f351a6a6c010a925448cf27a680cc4a3e4de31f | [
"MIT"
] | null | null | null | service.py | nricklin/roadmapper9000 | 2f351a6a6c010a925448cf27a680cc4a3e4de31f | [
"MIT"
] | null | null | null | service.py | nricklin/roadmapper9000 | 2f351a6a6c010a925448cf27a680cc4a3e4de31f | [
"MIT"
] | null | null | null | import roadmapper
def handler(event, context):
    print(event)
    print(context)
try:
roadmapper.run()
except Exception as e:
        print(str(e))
return str(e)
return "{\"msg\":\"looks like something worked!\"}" | 16.153846 | 52 | 0.695238 | 29 | 210 | 5.034483 | 0.689655 | 0.054795 | 0.136986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17619 | 210 | 13 | 52 | 16.153846 | 0.843931 | 0 | 0 | 0 | 0 | 0 | 0.023697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.1 | null | null | 0.3 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3c460df0f7a01efd9383c07408ff16c08c69be3f | 427 | py | Python | simple_exercises/profiti/sum_of_two_matrices.py | ilante/programming_immanuela_englander | 45d51c99b09ae335a67e03ac5ea79fc775bdf0bd | [
"MIT"
] | null | null | null | simple_exercises/profiti/sum_of_two_matrices.py | ilante/programming_immanuela_englander | 45d51c99b09ae335a67e03ac5ea79fc775bdf0bd | [
"MIT"
] | null | null | null | simple_exercises/profiti/sum_of_two_matrices.py | ilante/programming_immanuela_englander | 45d51c99b09ae335a67e03ac5ea79fc775bdf0bd | [
"MIT"
] | null | null | null | A=[[1,2, 8], [3, 7,4]]
B=[[5,6, 9], [7,6 ,0]]
c=[]
for i in range(len(A)):  # range of len gives me indices
    c.append([])
    for j in range(len(A[i])):
        total = A[i][j] + B[i][j]
        c[i].append(total)
print(c)
# for i in range(len(A)):  # range of len gives me indices
#     for j in range(len(A[i])):  # again indices
#         total = A[i][j] + B[i][j]
#         rowlist.append(total)
#     print(rowlist)
#     rowlist = []
| 25.117647 | 56 | 0.503513 | 83 | 427 | 2.590361 | 0.349398 | 0.130233 | 0.186047 | 0.204651 | 0.539535 | 0.539535 | 0.539535 | 0.306977 | 0.306977 | 0.306977 | 0 | 0.037975 | 0.259953 | 427 | 16 | 57 | 26.6875 | 0.642405 | 0.47541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3c4ec73cc65108ee20c2a64c2347e068d28407dc | 223 | py | Python | weather/forms.py | Arvind-4/Weather-app-using-Django | c57382e32d08835afcb93c7d212448a0c8f30324 | [
"MIT"
] | null | null | null | weather/forms.py | Arvind-4/Weather-app-using-Django | c57382e32d08835afcb93c7d212448a0c8f30324 | [
"MIT"
] | null | null | null | weather/forms.py | Arvind-4/Weather-app-using-Django | c57382e32d08835afcb93c7d212448a0c8f30324 | [
"MIT"
] | null | null | null | from django import forms
class WeatherForm(forms.Form):
name = forms.CharField(max_length=100, label=False,
widget=forms.TextInput(attrs={
'placeholder': 'Enter a Valid Place ...',
'class': 'form-control',
})) | 27.875 | 52 | 0.695067 | 28 | 223 | 5.5 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015789 | 0.147982 | 223 | 8 | 53 | 27.875 | 0.794737 | 0 | 0 | 0 | 0 | 0 | 0.227679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3c561f143b1ba09d2b7c2115bc4452774b7deb85 | 268 | py | Python | acceptance_tests/04-change_photon_energy.py | nsls-ii-esm-2/profile_collection | 2684967eda34049d64e64b1b7fd4e62b1992c386 | [
"BSD-3-Clause"
] | null | null | null | acceptance_tests/04-change_photon_energy.py | nsls-ii-esm-2/profile_collection | 2684967eda34049d64e64b1b7fd4e62b1992c386 | [
"BSD-3-Clause"
] | null | null | null | acceptance_tests/04-change_photon_energy.py | nsls-ii-esm-2/profile_collection | 2684967eda34049d64e64b1b7fd4e62b1992c386 | [
"BSD-3-Clause"
] | null | null | null | from ophyd.utils.errors import ReadOnlyError
print("This should raise an error because EPU is not writeable during shutdown.")
try:
RE(Eph.move_to(90, grating='600', EPU='105', shutter='open'))
except ReadOnlyError:
print("The test raised the expected error.") | 44.666667 | 81 | 0.75 | 40 | 268 | 5 | 0.875 | 0.18 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.134328 | 268 | 6 | 82 | 44.666667 | 0.827586 | 0 | 0 | 0 | 0 | 0 | 0.434944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3c5877f21d017f6f67fb87a52e0d120e2f6e0f24 | 2,608 | py | Python | modules/admin_account/code/login_views.py | xuhuiliang-maybe/ace_office | 07fae18676a193206802e8fb9aa32a805b1da24c | [
"Apache-2.0"
] | 1 | 2018-11-27T08:08:07.000Z | 2018-11-27T08:08:07.000Z | modules/admin_account/code/login_views.py | xuhuiliang-maybe/ace_office | 07fae18676a193206802e8fb9aa32a805b1da24c | [
"Apache-2.0"
] | null | null | null | modules/admin_account/code/login_views.py | xuhuiliang-maybe/ace_office | 07fae18676a193206802e8fb9aa32a805b1da24c | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
from django.contrib import messages
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.contrib.auth import authenticate, login, logout
from django.contrib.sites.shortcuts import get_current_site
from django.http import HttpResponseRedirect
from django.shortcuts import resolve_url
from django.template.response import TemplateResponse
from django.utils.http import is_safe_url
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_protect
from django.views.decorators.debug import sensitive_post_parameters
from ace_office import settings
@sensitive_post_parameters()
@csrf_protect
@never_cache
def user_login(request, template_name='login.html', redirect_field_name=REDIRECT_FIELD_NAME, current_app=None):
    """Display the login form and handle the login action.

    :param request: the request object
    :param template_name: name of the template to render
    :param redirect_field_name: name of the redirect field
    :param current_app: the current application
    :return:
    """
redirect_to = request.POST.get(redirect_field_name, request.GET.get(redirect_field_name, '/'))
remember_me = request.POST.get('remember_me', 0)
username, password = "", ""
if request.method == "POST":
username = request.POST.get("username", "")
password = request.POST.get("password", "")
user = authenticate(username=username, password=password)
        print("login user ==", user)
        if user:
            print("is_staff ==", user.is_staff)
            print("is_active ==", user.is_active)
        if user and user.is_staff and user.is_active:
            # Ensure the user-originating redirect URL is safe.
            if not is_safe_url(url=redirect_to, host=request.get_host()):
                redirect_to = resolve_url(settings.LOGIN_REDIRECT_URL)
            # Okay, security check complete; log the user in.
            else:
                login(request, user)
                # Check whether to remember the session
                if remember_me:
                    expiry_time = 14 * 24 * 3600  # expire the login after 14 days
                    request.session.set_expiry(expiry_time)
                return HttpResponseRedirect(redirect_to)
        elif user and not user.is_staff:
            messages.warning(request, u"You do not have permission to log in")
        elif user and not user.is_active:
            messages.warning(request, u"You have left the company and have no login permission")
        else:
            messages.warning(request, u"Incorrect username or password; login failed")
current_site = get_current_site(request)
context = {
redirect_field_name: redirect_to,
'site': current_site,
'site_name': current_site.name,
"username": username,
"password": password,
}
if current_app:
request.current_app = current_app
return TemplateResponse(request, template_name, context)
def user_logout(request):
def go_back():
        # Admin console login
if '/login' in request.META.get('HTTP_REFERER', ""):
redirect_to = "/login/"
        # Other referrers
else:
redirect_to = '/'
return HttpResponseRedirect(redirect_to)
logout(request)
return go_back()
| 28.977778 | 111 | 0.759586 | 352 | 2,608 | 5.414773 | 0.284091 | 0.057712 | 0.062434 | 0.039349 | 0.049318 | 0.020986 | 0 | 0 | 0 | 0 | 0 | 0.005324 | 0.135736 | 2,608 | 89 | 112 | 29.303371 | 0.840284 | 0.029141 | 0 | 0.080645 | 0 | 0 | 0.067647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.064516 | 0.193548 | null | null | 0.048387 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
3c776eec43754dd26c87acf3c198f54133d72f26 | 3,939 | py | Python | automergetool/solvers/java_imports.py | xgouchet/AutoMergeTool | d63c057440a99e868e5eb25720f8d89640112f04 | [
"Apache-2.0"
] | 41 | 2017-04-10T10:12:32.000Z | 2022-02-11T09:34:43.000Z | automergetool/solvers/java_imports.py | xgouchet/AutoMergeTool | d63c057440a99e868e5eb25720f8d89640112f04 | [
"Apache-2.0"
] | 14 | 2017-02-17T09:58:57.000Z | 2018-02-12T14:38:51.000Z | automergetool/solvers/java_imports.py | xgouchet/ArachneMergeTool | d63c057440a99e868e5eb25720f8d89640112f04 | [
"Apache-2.0"
] | 5 | 2017-04-11T13:03:20.000Z | 2021-06-23T08:41:10.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-

from argparse import ArgumentParser, Namespace
import re
import sys

from automergetool.amt_import_solver import ImportsSolver

IMPORT_REGEX = re.compile(r'^\s*import\s+(static\s+)?(.*)\s*;\s*$')
EMPTY_REGEX = re.compile(r'^[\s\n]*$')

IMPORT_GROUPS_ORDER_ANDROID = [("import android.", 0), ("import com.", 1), ("import junit.", 2),
                               ("import net.", 3), ("import org.", 4), ("import java.", 5),
                               ("import javax.", 6), ("import ", 7), ("import static ", 8)]
IMPORT_GROUPS_ORDER_IJ_IDEA = [("import ", 0), ("import javax.", 1), ("import java.", 2),
                               ("import static ", 3)]
IMPORT_GROUPS_ORDER_ECLIPSE = [("import static ", 0), ("import java.", 1), ("import javax.", 2),
                               ("import org.", 3), ("import com.", 4), ("import ", 5)]

ORDER_ANDROID = "android"
ORDER_IJ_IDEA = "idea"
ORDER_ECLIPSE = "eclipse"

IMPORT_PRESETS = {
    ORDER_ANDROID: IMPORT_GROUPS_ORDER_ANDROID,
    ORDER_IJ_IDEA: IMPORT_GROUPS_ORDER_IJ_IDEA,
    ORDER_ECLIPSE: IMPORT_GROUPS_ORDER_ECLIPSE,
}


class JavaImportSolver(ImportsSolver):
    # TODO allow custom order configuration

    def __init__(self, order: str = None):
        super().__init__()
        if order is not None:
            self.set_import_groups(order)
        else:
            self.import_groups = []

    def is_allowed_within_import_section(self, line: str) -> bool:
        return re.match(EMPTY_REGEX, line) is not None

    def is_import_line(self, line: str) -> bool:
        return re.match(IMPORT_REGEX, line) is not None

    def set_import_groups(self, preset: str = None):
        """
        Sets the preferred ordering for imports

        preset -- one of "android", "eclipse", "idea"
        """
        if preset is not None:
            if preset in IMPORT_PRESETS:
                groups = IMPORT_PRESETS[preset]
                self.import_groups = sorted(groups, key=lambda grp: len(grp[0]), reverse=True)

    def get_import_group(self, imp: str):
        """
        Returns the group index the import belongs to

        imp -- the import line
        """
        for group in self.import_groups:
            if imp.startswith(group[0]):
                return group[1]
        return len(self.import_groups)

    def are_imports_the_same(self, imp: str, other_imp: str):
        match = re.match(IMPORT_REGEX, imp)
        other_match = re.match(IMPORT_REGEX, other_imp)
        canonical = match.group(2).replace(" ", "").replace("\t", "")
        other_canonical = other_match.group(2).replace(" ", "").replace("\t", "")
        if match.group(1) is not None:
            static = match.group(1).replace(" ", "").replace("\t", "")
        else:
            static = ""
        if other_match.group(1) is not None:
            other_static = other_match.group(1).replace(" ", "").replace("\t", "")
        else:
            other_static = ""
        return (canonical == other_canonical) and (static == other_static)


def parse_arguments(args: list) -> Namespace:
    """Parses the arguments passed on invocation in a dict and returns it"""
    parser = ArgumentParser(description="A tool to combine multiple merge tools")
    parser.add_argument('-b', '--base', required=True)
    parser.add_argument('-l', '--local', required=True)
    parser.add_argument('-r', '--remote', required=True)
    parser.add_argument('-m', '--merged', required=True)
    parser.add_argument(
        '-o', '--order', choices=[ORDER_ECLIPSE, ORDER_IJ_IDEA, ORDER_ANDROID], required=False)
    return parser.parse_args(args)


if __name__ == '__main__':
    args = parse_arguments(sys.argv[1:])
    solver = JavaImportSolver(args.order)
    if solver.solve_import_conflicts(args.base, args.local, args.remote, args.merged):
        sys.exit(0)
    else:
        sys.exit(1)
| 35.486486 | 96 | 0.602437 | 486 | 3,939 | 4.679012 | 0.286008 | 0.063325 | 0.052331 | 0.036939 | 0.158311 | 0.066843 | 0.026385 | 0 | 0 | 0 | 0 | 0.011209 | 0.252602 | 3,939 | 110 | 97 | 35.809091 | 0.761209 | 0.076923 | 0 | 0.066667 | 0 | 0 | 0.109086 | 0.010376 | 0 | 0 | 0 | 0.009091 | 0 | 1 | 0.093333 | false | 0 | 0.453333 | 0.013333 | 0.653333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
3c7dff9d63589468ace1a9dda5b7f3225be4754f | 1,639 | py | Python | src/daily-coding-problem/medium/sort-singly-linked-lists/test_sort_singly_linked_lists.py | nwthomas/code-challenges | 49c2532ff597495474e67b13f2ed9b9ad93d40b5 | [
"MIT"
] | 1 | 2020-12-11T05:54:59.000Z | 2020-12-11T05:54:59.000Z | src/daily-coding-problem/medium/sort-singly-linked-lists/test_sort_singly_linked_lists.py | nwthomas/code-challenges | 49c2532ff597495474e67b13f2ed9b9ad93d40b5 | [
"MIT"
] | 1 | 2021-04-10T06:53:30.000Z | 2021-04-10T06:53:30.000Z | src/daily-coding-problem/medium/sort-singly-linked-lists/test_sort_singly_linked_lists.py | nwthomas/code-challenges | 49c2532ff597495474e67b13f2ed9b9ad93d40b5 | [
"MIT"
] | 7 | 2019-11-24T12:10:35.000Z | 2020-12-14T22:36:31.000Z | from sort_singly_linked_lists import Node, sort_singly_linked_lists
import unittest


class TestSortSinglyLinkedLists(unittest.TestCase):
    def create_singly_linked_list(self, values_list):
        """Creates a new SinglyLinkedList class and adds all values to it"""
        new_singly_linked_list = None
        for value in values_list:
            if not new_singly_linked_list:
                new_singly_linked_list = Node(value)
            else:
                new_singly_linked_list.add_node(value)
        return new_singly_linked_list

    def get_sorted_singly_linked_lists_values(self, sorted_singly_linked_list):
        """Takes in a sorted singly linked list and returns the values from it"""
        current = sorted_singly_linked_list
        final = [current.value]
        while current.next:
            current = current.next
            final.append(current.value)
        return final

    def test_returns_single_sorted_list(self):
        """Tests that two separate sorted singly linked lists are returned as one sorted one"""
        s1 = self.create_singly_linked_list([1, 3, 4, 6, 8, 40, 89, 28, 90, 100])
        s2 = self.create_singly_linked_list([2, 5, 8, 11, 12, 100, 567, 1000])
        s3 = self.create_singly_linked_list([1, 4, 6, 9, 10, 34, 162873, 126381672389, 123961987263])
        result = sort_singly_linked_lists(s1, s2, s3)
        self.assertEqual(self.get_sorted_singly_linked_lists_values(result), [1, 1, 2, 3, 4, 4, 5, 6, 6, 8, 8, 9, 10, 11, 12, 28, 34, 40, 89, 90, 100, 100, 567, 1000, 162873, 123961987263, 126381672389])


if __name__ == "__main__":
unittest.main() | 43.131579 | 203 | 0.670531 | 232 | 1,639 | 4.448276 | 0.357759 | 0.209302 | 0.186047 | 0.092054 | 0.19186 | 0.114341 | 0 | 0 | 0 | 0 | 0 | 0.11878 | 0.23978 | 1,639 | 38 | 204 | 43.131579 | 0.70947 | 0.129347 | 0 | 0 | 0 | 0 | 0.005666 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.115385 | false | 0 | 0.076923 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
3c8687b7693dba00ca1c9fbc84868f02f8aa0663 | 584 | py | Python | datatypes/binary/Binary.py | goeckslab/jbrowse-archive-creator | 438557136c9dd4eb0db89835e5d253e44b50a7a3 | [
"AFL-3.0"
] | 2 | 2017-07-07T21:24:17.000Z | 2021-01-15T02:58:12.000Z | datatypes/binary/Binary.py | goeckslab/jbrowse-archive-creator | 438557136c9dd4eb0db89835e5d253e44b50a7a3 | [
"AFL-3.0"
] | 1 | 2017-06-26T13:58:05.000Z | 2017-06-26T13:58:05.000Z | datatypes/binary/Binary.py | Yating-L/jbrowse-archive-creator | c97fd2f545cf7b3115d5ddda5da7bb5e0936e73b | [
"AFL-3.0"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf8 -*-

"""
Super Class of the managed datatype
"""

import os
import tempfile
import collections
import shutil

import util
from datatypes.Datatype import Datatype


class Binary(Datatype):
    def __init__(self):
        super(Binary, self).__init__()

    def initSettings(self):
        super(Binary, self).initSettings()
        self.trackDataURL = os.path.join(self.myBinaryFolderPath, self.trackName)

    def createTrack(self):
        shutil.copy(self.inputFile, self.trackDataURL)
| 15.783784 | 81 | 0.631849 | 61 | 584 | 5.918033 | 0.52459 | 0.077562 | 0.083102 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002336 | 0.267123 | 584 | 37 | 82 | 15.783784 | 0.841122 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.214286 | false | 0 | 0.428571 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
3c910df74e09b655f2bb6a2ea90498ee844cabc2 | 532 | py | Python | funcaoParametros.py | renatabezerratech/Exerc-cios-Iniciais-Python | 14eae25213542e68e01b1ad2ee74ce3f610c3281 | [
"MIT"
] | null | null | null | funcaoParametros.py | renatabezerratech/Exerc-cios-Iniciais-Python | 14eae25213542e68e01b1ad2ee74ce3f610c3281 | [
"MIT"
] | null | null | null | funcaoParametros.py | renatabezerratech/Exerc-cios-Iniciais-Python | 14eae25213542e68e01b1ad2ee74ce3f610c3281 | [
"MIT"
] | null | null | null | # Created a function that receives parameters.
def funcaoParametros(Par1, Par2):
    print(Par1 + " " + "is the mother of" + " " + Par2)

# Call the function, passing the values of the parameters:
funcaoParametros("Renata", "Rafaela")

# Created a new function with a parameter that returns a value.
def funcaoMultiplica(x):
    return x * x

# Call the function, assign its return value to a variable and print the variable:
f = funcaoMultiplica(5)
print(f)

# It can also be printed directly, calling the function with a value:
print(funcaoMultiplica(2))
| 22.166667 | 80 | 0.744361 | 78 | 532 | 5.076923 | 0.589744 | 0.05303 | 0.050505 | 0.080808 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013483 | 0.163534 | 532 | 23 | 81 | 23.130435 | 0.876404 | 0.56203 | 0 | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.125 | 0.375 | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
3c92fff51d24d5fb34c572e52b233bd86eaa8941 | 4,691 | py | Python | PrognosAIs/IO/utils.py | Svdvoort/prognosais | f5f144a2aac5610ea2450392853aa37bce311085 | [
"Apache-2.0"
] | 3 | 2020-11-10T09:21:51.000Z | 2021-01-12T15:14:19.000Z | PrognosAIs/IO/utils.py | Svdvoort/prognosais | f5f144a2aac5610ea2450392853aa37bce311085 | [
"Apache-2.0"
] | 38 | 2020-11-18T14:19:25.000Z | 2022-03-17T09:28:48.000Z | PrognosAIs/IO/utils.py | Svdvoort/prognosais | f5f144a2aac5610ea2450392853aa37bce311085 | [
"Apache-2.0"
] | null | null | null | import importlib
import os
import shutil
import logging
import sys
from pathlib import Path

import numba.cuda
import psutil
import tensorflow as tf
from slurmpie import slurmpie


def create_directory(file_path, exist_ok=True):
    if not os.path.exists(file_path):
        os.makedirs(file_path, exist_ok=exist_ok)


def delete_directory(file_path):
    if os.path.exists(file_path):
        shutil.rmtree(file_path)


def copy_directory(original_directory, out_directory):
    shutil.copytree(original_directory, out_directory, dirs_exist_ok=True)


def get_root_name(file_path):
    return os.path.basename(os.path.normpath(file_path))


def get_file_name_from_full_path(file_path):
    return os.path.basename(os.path.normpath(file_path))


def get_file_name(file_path, file_extension):
    root_name = get_root_name(file_path)
    if file_extension[0] != ".":
        file_extension = "." + file_extension
    return root_name.split(file_extension)[0]


def find_files_with_extension(file_path, file_extension):
    if file_extension[0] == ".":
        file_extension = file_extension[1:]
    return sorted(
        f_path.path
        for f_path in os.scandir(file_path)
        if (f_path.is_file() and file_extension in "".join(Path(f_path.name).suffixes))
    )


def get_parent_directory(file_path):
    return os.path.dirname(os.path.normpath(os.path.abspath(file_path)))


def get_file_path(file_path):
    file_name = get_root_name(file_path)
    file_path = file_path.split(file_name)[0]
    return normalize_path(file_path)


def normalize_path(path):
    if path[-1] == os.sep:
        path = path[:-1]
    return path


def get_number_of_cpus():
    return len(os.sched_getaffinity(0))


def get_subdirectories(root_dir: str) -> list:
    return [f_path.path for f_path in os.scandir(root_dir) if f_path.is_dir()]


def get_available_ram(used_memory: int = 0) -> int:
    """
    Get the available RAM in bytes.

    Returns:
        int: available RAM in bytes
    """
    slurm_mem = slurmpie.System().get_job_memory()
    if slurm_mem is None:
        available_ram = psutil.virtual_memory().available
    else:
        # Convert from megabytes to bytes (*1024*1024)
        slurm_mem *= 1048576
        available_ram = slurm_mem - used_memory

    return available_ram


def get_dir_size(root_dir):
    """Returns total size of all files in dir (and subdirs)"""
    root_directory = Path(os.path.normpath(root_dir))
    return sum(f.stat().st_size for f in root_directory.glob("**/*") if f.is_file())


def get_gpu_compute_capability(gpu: tf.config.PhysicalDevice) -> tuple:
    try:
        gpu_number = int(gpu.name.split(":")[-1])
        cuda_device = numba.cuda.select_device(gpu_number)
        cuda_capability = cuda_device.compute_capability
        cuda_device.reset()
    except numba.cuda.cudadrv.error.CudaSupportError:
        # We do not actually have a cuda device
        cuda_capability = (0, 0)

    return cuda_capability


def gpu_supports_float16(gpu: tf.config.PhysicalDevice) -> bool:
    gpu_compute_capability = get_gpu_compute_capability(gpu)
    # Float16 is supported with at least compute capability 5.3
    supports_float16 = (gpu_compute_capability[0] == 5 and gpu_compute_capability[1] >= 3) or (
        gpu_compute_capability[0] > 5
    )
    return supports_float16


def gpu_supports_mixed_precision(gpu: tf.config.PhysicalDevice) -> bool:
    gpu_compute_capability = get_gpu_compute_capability(gpu)
    # Mixed precision has benefits on compute 7.5 and higher
    return gpu_compute_capability[0] >= 7 and gpu_compute_capability[1] >= 5


def get_gpu_devices() -> list:
    return tf.config.list_physical_devices("GPU")


def get_number_of_gpu_devices() -> int:
    return len(tf.config.list_physical_devices("GPU"))


def get_cpu_devices() -> list:
    return tf.config.list_physical_devices("CPU")


def get_number_of_slurm_nodes() -> int:
    if "SLURM_JOB_NUM_NODES" in os.environ:
        number_of_slurm_nodes = int(os.environ["SLURM_JOB_NUM_NODES"])
    else:
        number_of_slurm_nodes = 0

    return number_of_slurm_nodes


def load_module_from_file(module_path):
    if module_path is None:
        return None
    class_name = get_root_name(module_path).split(".")[0]
    module_file_spec = importlib.util.spec_from_file_location(class_name, module_path)
    module = importlib.util.module_from_spec(module_file_spec)
    module_file_spec.loader.exec_module(module)
    return module


def setup_logger():
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s prognosais %(levelname)-1s %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        stream=sys.stdout,
    )
| 27.922619 | 95 | 0.71456 | 687 | 4,691 | 4.582242 | 0.245997 | 0.055909 | 0.063532 | 0.022872 | 0.269377 | 0.181385 | 0.148348 | 0.148348 | 0.106099 | 0.08831 | 0 | 0.013566 | 0.182903 | 4,691 | 167 | 96 | 28.08982 | 0.807722 | 0.0712 | 0 | 0.057143 | 0 | 0 | 0.028439 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.219048 | false | 0 | 0.114286 | 0.07619 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
3c940b5aac25806702b3d23e26467910178789cb | 17,717 | py | Python | google-cloud-sdk/lib/third_party/cloud_ml_engine_sdk/dataflow/_ml_functions.py | bopopescu/searchparty | afdc2805cb1b77bd5ac9fdd1a76217f4841f0ea6 | [
"Apache-2.0"
] | null | null | null | google-cloud-sdk/lib/third_party/cloud_ml_engine_sdk/dataflow/_ml_functions.py | bopopescu/searchparty | afdc2805cb1b77bd5ac9fdd1a76217f4841f0ea6 | [
"Apache-2.0"
] | null | null | null | google-cloud-sdk/lib/third_party/cloud_ml_engine_sdk/dataflow/_ml_functions.py | bopopescu/searchparty | afdc2805cb1b77bd5ac9fdd1a76217f4841f0ea6 | [
"Apache-2.0"
] | 3 | 2017-07-27T18:44:13.000Z | 2020-07-25T17:48:53.000Z | # Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Private Cloud ML dataflow transforms and functions.
"""

import datetime
import logging
import os
import subprocess
import time

import apache_beam as beam

from google.cloud.ml.io.coders import TrainingJobResult
from google.cloud.ml.util import _file as ml_file
from google.cloud.ml.util._api import ApiBeta


class TrainingJobDo(beam.DoFn):
  """A DoFn that submits a training job and waits for it to finish.

  The input PCollection should be a PCollection of TrainingJobRequest.
  """

  def __init__(self, api_class=ApiBeta, sleep_func=time.sleep):
    """Construct a DoFn to train a model.

    Args:
      api_class: (Optional intended for testing only) A subclass of ApiBase
        that acts as a client library for the Cloud ML Apis.
      sleep_func: (Optional intended for testing only) A function to call to
        wait some amount of time.
    """
    self._api = None
    # Handle to the function to use to sleep between calls.
    # Makes it easy to inject a mock during testing
    self._sleep_func = sleep_func
    # This bit of indirection is intended to make it easy to inject a mock Api
    # during testing.
    self._api_class = api_class

  # TODO(user): Remove the try catch after sdk update
  def process(self, train_spec):
    try:
      train_spec = train_spec.element
    except AttributeError:
      pass
    logging.info('Setting endpoint to %s', train_spec.endpoint)
    self._api = self._api_class(project_id=train_spec.project,
                                endpoint=train_spec.endpoint)

    # Check if the job already exists.
    training_job = self._api.get_job(train_spec.job_name)
    if not training_job:
      print 'Submitting training job', train_spec.job_name
      logging.info('Submitting training job %s.', train_spec.job_name)
      training_job = self._api.submit_training_job(
          name=train_spec.job_name, package_uris=train_spec.package_uris,
          python_module=train_spec.python_module, args=train_spec.job_args,
          hyperparameters=train_spec.hyperparameters,
          region=train_spec.region, scale_tier=train_spec.scale_tier,
          master_type=train_spec.master_type,
          worker_type=train_spec.worker_type,
          ps_type=train_spec.ps_type, worker_count=train_spec.worker_count,
          ps_count=train_spec.ps_count,
          runtime_version=train_spec.runtime_version)
    else:
      logging.info('The training job %s already exists.',
                   train_spec.job_name)

    timeout = train_spec.timeout
    if 'createTime' in training_job:
      # Adjust our timeout based on when the job was actually created,
      # in case dataflow is retrying this operation.
      # createTime is formatted like 2016-10-05T17:27:53Z
      start_time = datetime.datetime.strptime(training_job['createTime'],
                                              '%Y-%m-%dT%H:%M:%SZ')
      already_ran = datetime.datetime.now() - start_time
      timeout -= already_ran
      if timeout.total_seconds() < 0:
        # Don't allow a negative timeout. We'll always check the job status
        # at least once before timing out.
        timeout = datetime.timedelta(seconds=0)

    logging.info('Waiting for Cloud ML training job: %s', training_job)
    # Print as well, so that users can see the job id during local runs.
    print 'Waiting for Cloud ML training job:', training_job
    final_job = self._api.wait_for_job(
        train_spec.job_name,
        timeout=timeout,
        polling_interval=train_spec.polling_interval)

    result = TrainingJobResult()
    result.training_request = train_spec
    result.training_job_metadata = final_job
    result.training_job_result = final_job.get('trainingOutput', None)
    result.error = final_job.get('errorMessage', None)

    if final_job.get('state', None) not in [
        'SUCCEEDED', 'FAILED', 'CANCELLED', 'CANCELLING'
    ]:
      msg = ('The training job {0} did not complete in the time '
             'allotted.').format(training_job)
      logging.error(msg)
      # Cancel the job ourselves and raise this as an error.
      self._api.cancel_job(train_spec.job_name)
      raise RuntimeError(msg)
    else:
      # The job completed. So now we need to check whether it completed
      # successfully or with an error.
      if final_job.get('errorMessage', None) is not None:
        # The job finished with an error.
        msg = 'The training job {0} finished with error: {1}.'.format(
            training_job, final_job['errorMessage'])
        logging.error(msg)
        raise RuntimeError(msg)

    return [result]


class _TrainingJobLocalDo(beam.DoFn):
  """A DoFn that runs training locally.

  The input PCollection should be a PCollection of TrainingJobRequest.

  Training runs locally by running TensorFlow in a container as opposed to
  firing off a CloudML job.
  The input PCollection should be a PCollection of TrainingJobRequest.
  The value of trainer_uri should be the docker image to use.

  This requires docker is installed locally. As a result, it will not work
  when running on the Dataflow service.
  """

  # TODO(user): Remove the try catch after sdk update
  def process(self, train_spec):
    try:
      train_spec = train_spec.element
    except AttributeError:
      pass
    args = train_spec.job_args or []
    if not train_spec.python_module:
      raise ValueError('python_module must be provided.')
    command = ['python', '-m', train_spec.python_module]
    command.extend(args)
    logging.info('Running command: %s', ' '.join(command))
    subprocess.check_call(command)

    # TODO(user): Is there other data we should be outputting?
    result = TrainingJobResult()
    result.training_request = train_spec
    return [result]


class PredictionJobRequest(object):
  """This class contains the parameters for running a batch prediction job.
  """

  def __init__(self,
               project_id=None,
               job_name=None,
               input_uri=None,
               output_uri=None,
               region=None,
               data_format='TEXT',
               timeout=datetime.timedelta(hours=1),
               polling_interval=datetime.timedelta(seconds=30),
               endpoint=None,
               runtime_version=None):
    """Construct an instance of PredictionJobRequest.

    Args:
      project_id: The id of the project, used as credentials for the API
      job_name: A job name. This must be unique within the project.
      input_uri: A URI to input files to do prediction on.
      output_uri: The output directory where the results of the job will be
        written
      region: Cloud region in which to run the request.
      data_format: The data format for the prediction api call. Either TEXT or
        TF_RECORD.
      timeout: A datetime.timedelta expressing the amount of time to wait before
        giving up. The timeout applies to a single invocation of the process
        method in TrainModelDo. A DoFn can be retried several times before a
        pipeline fails.
      polling_interval: A datetime.timedelta to represent the amount of time to
        wait between requests polling for the files.
      endpoint: (Optional) The endpoint for the Cloud ML API.
      runtime_version: (Optional) the Google Cloud ML runtime version to use.
    """
    # Parent, model_name, and version_id get set by _AugmentPredictArgsDo
    self.parent = None
    self.model_name = None
    self.version_id = None
    self.project_id = project_id
    self.job_name = job_name
    self.input_uri = input_uri
    self.output_uri = output_uri
    self.region = region
    self.data_format = data_format
    self.endpoint = endpoint
    self.timeout = timeout
    self.polling_interval = polling_interval
    self.runtime_version = runtime_version

  @property
  def project(self):
    return self.parent

  def copy(self):
    """Return a copy of the object."""
    r = PredictionJobRequest()
    r.__dict__.update(self.__dict__)
    return r

  def __eq__(self, o):
    for f in ['parent', 'model_name', 'version_id', 'project_id', 'job_name',
              'input_uri', 'output_uri', 'region', 'data_format', 'timeout',
              'polling_interval', 'endpoint', 'runtime_version']:
      if getattr(self, f) != getattr(o, f):
        return False
    return True

  def __ne__(self, o):
    return not self == o

  def __repr__(self):
    fields = []
    for k, v in self.__dict__.iteritems():
      fields.append('{0}={1}'.format(k, v))
    return 'PredictionJobRequest({0})'.format(', '.join(fields))


class _AugmentPredictArgsDo(beam.DoFn):

  # TODO(user): Remove the try catch after sdk update
  def process(self, element, deployed_model):
    try:
      predict_request = element.element.copy()
    except AttributeError:
      predict_request = element.copy()
    if len(deployed_model) > 1:
      msg = ('The ml Predict PTransform was called with multiple models. Only 1'
             ' deployed model is currently supported per Predict call.')
      logging.error(msg)
      raise RuntimeError(msg)
    (predict_request.model_name, predict_request.version_id) = deployed_model[0]
    parent = '/projects/%s/models/%s/' % (predict_request.project_id,
                                          predict_request.model_name)
    if predict_request.version_id:
      parent = os.path.join(parent, 'versions', predict_request.version_id)
    predict_request.parent = parent
    logging.info('model dir: %s', parent)
    return [predict_request]


class PredictionJobResult(object):
  """Result of running batch prediction on a model."""

  def __init__(self):
    # A copy of the prediction request that created the job.
    self.prediction_request = None

    # At most one of error and prediction_job_result will be specified.
    # These fields will only be supplied if the job completed.
    # prediction_job_result will be provided if the job completed successfully
    # and error will be supplied otherwise.
    self.error = None
    self.prediction_job_result = None

  def __eq__(self, o):
    for f in ['prediction_request', 'error', 'prediction_job_result']:
      if getattr(self, f) != getattr(o, f):
        return False
    return True

  def __ne__(self, o):
    return not self == o

  def __repr__(self):
    fields = []
    for k, v in self.__dict__.iteritems():
      fields.append('{0}={1}'.format(k, v))
    return 'PredictionJobResult({0})'.format(', '.join(fields))


class BatchPredictionJobDo(beam.DoFn):
  """A DoFn that submits a batch prediction job and waits for it to finish.

  The input PCollection should be a PCollection of PredictionJobRequest.
  """

  def __init__(self, api_class=ApiBeta, sleep_func=time.sleep):
    """Construct a DoFn and submit to the API.

    Args:
      api_class: (Optional intended for testing only) A subclass of ApiBase
        that acts as a client library for the Cloud ML Apis.
      sleep_func: (Optional intended for testing only) A function to call to
        wait some amount of time.
    """
    self._api = None
    # Handle to the function to use to sleep between calls.
    # Makes it easy to inject a mock during testing
    self._sleep_func = sleep_func
    # This bit of indirection is intended to make it easy to inject a mock Api
    # during testing.
    self._api_class = api_class

  # TODO(user): Remove the try catch after sdk update
  def process(self, prediction_spec, input_files=None):
    try:
      prediction_spec = prediction_spec.element
    except AttributeError:
      pass
    print 'Job Name:', prediction_spec.job_name
    print 'Input files:', prediction_spec.input_uri
    print 'Output Files:', prediction_spec.output_uri
    self._api = self._api_class(project_id=prediction_spec.project_id,
                                endpoint=prediction_spec.endpoint)
    logging.info('Running Job %s', prediction_spec.job_name)
    logging.info('Input files %s', prediction_spec.input_uri)
    logging.info('Output Files %s', prediction_spec.output_uri)

    # Check if the job already exists.
    prediction_job = self._api.get_job(prediction_spec.job_name)
    if not prediction_job:
      prediction_job = self._api.submit_batch_prediction_job(
          name=prediction_spec.job_name,
          input_paths=prediction_spec.input_uri,
          output_path=prediction_spec.output_uri,
          model_name=prediction_spec.model_name,
          version_name=prediction_spec.version_id,
          data_format=prediction_spec.data_format,
          region=prediction_spec.region,
          runtime_version=prediction_spec.runtime_version)
    else:
      logging.info('The prediction job %s already exists.',
                   prediction_spec.job_name)

    logging.info('Waiting for Cloud ML prediction job: %s', prediction_job)
    final_job = self._api.wait_for_job(
        prediction_spec.job_name,
        timeout=prediction_spec.timeout,
        polling_interval=prediction_spec.polling_interval)

    result = PredictionJobResult()
    result.prediction_request = prediction_spec
    result.prediction_job_result = final_job.get('response', None)
    result.error = final_job.get('errorMessage', None)

    if final_job.get('state', None) not in [
        'SUCCEEDED', 'FAILED', 'CANCELLED', 'CANCELLING'
    ]:
      msg = ('The batch prediction job {0} did not complete in the time '
             'allotted.').format(prediction_job)
      logging.error(msg)
      # Cancel the job ourselves and raise this as an error.
      self._api.cancel_job(prediction_spec.job_name)
      raise RuntimeError(msg)
    else:
      # The job completed. So now we need to check whether it completed
      # successfully or with an error.
      if final_job.get('errorMessage', None) is not None:
        # The job finished with an error.
        msg = ('The batch prediction job {0} finished with'
               ' error: {1}.'.format(prediction_job, final_job['errorMessage']))
        logging.error(msg)
        raise RuntimeError(msg)

    print 'Batch Prediction Job Completed succesfully'
    logging.info('Batch Prediction Job Completed succesfully')
    return [result]


def stage_packages(packages, staging_location):
  """Stage packages to GCS.

  Args:
    packages: List of local paths to stage to GCS.
    staging_location: Location on GCS where packages should be staged.

  Returns:
    gcs_uris: A, possibly empty, list of gcs uris to which the packages were
      staged.

  Raises:
    ValueError: If the inputs are invalid.
  """
  if not packages:
    return []
  staged_pip_packages = []
  staging_location = staging_location.rstrip('/')
  # We only allow files which are likely to be pip packages.
  valid_suffixes = ['.tar', '.tar.gz', '.zip']
  for package_path in packages:
    if package_path.startswith('gs://'):
      logging.info('Package %s is already on GCS', package_path)
      staged_pip_packages.append(package_path)
      continue
    # Only allow packages that are likely to be pip packages based on the
    # file extension.
    if not any(package_path.endswith(s) for s in valid_suffixes):
      logging.info('Skipping package %s because its not of type %s',
                   package_path, valid_suffixes)
      continue
    rpath = package_path.rstrip('/')
    # We use the full relative path of the local file because we want to allow
    # for the case where we have two local files with the same name in
    # different directories.
    gcs_location = staging_location + '/' + rpath
    logging.info('Staging %s to %s', package_path, gcs_location)
    ml_file.copy_file(package_path, gcs_location)
    staged_pip_packages.append(gcs_location)

  return staged_pip_packages


class _WrapCallable(beam.PTransform):
  """Wraps a callable as a PTransform."""

  def __init__(self, fn, *args):
    super(_WrapCallable, self).__init__()
    self.fn = fn
    self.args = args

  # TODO(b/33677990): Remove apply method.
  def apply(self, input_var):
    return self.expand(input_var)

  def expand(self, input_var):
    return self.fn(input_var, *self.args)


# Because of b/29179299 AugmentTrainArgsDo is its own DoFn as opposed to a
# function that we pass to a MapFn.
class _AugmentTrainArgsDo(beam.DoFn):

  def __init__(self, spec):
    self.tf_main_spec = spec

  # TODO(user): Remove the try catch after sdk update
  def process(self, element, train_files, test_files, output_dir,
              metadata_path):
    try:
      train_request = element.element.copy()
    except AttributeError:
      train_request = element.copy()
    # Force the train/test files into a list, to ensure they are extracted
    # from their Emulated Iterator.
    files = []
    files.extend(train_files)
    files = []
    files.extend(test_files)
    train_request.job_args = train_request.job_args or []
    train_request.job_args += self.tf_main_spec.construct_io_args(train_files,
                                                                  test_files,
                                                                  output_dir,
                                                                  metadata_path)
    return [train_request]

# tests/test_utils.py (narnikgamarnikus/django-referrals, MIT)
from test_plus.test import TestCase
from referrals import utils
from uuid import uuid4


class TestValidateUUID4(TestCase):

    def setUp(self):
        self.valid_uuid = str(uuid4())
        self.invalid_uuid = 'FAKE_STRING'

    def test_with_valid_uuid4(self):
        self.assertTrue(utils.validate_uuid4(self.valid_uuid))

    def test_with_invalid_uuid4(self):
        self.assertFalse(utils.validate_uuid4(self.invalid_uuid))

    def test_without_uuid4(self):
        self.assertFalse(utils.validate_uuid4(''))
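The `referrals.utils` module under test is not included in this snapshot; its actual implementation may differ, but a `validate_uuid4` that satisfies all three tests can be sketched with the standard-library `UUID` constructor:

```python
from uuid import UUID, uuid4


def validate_uuid4(value):
    """Return True only if `value` is the canonical string form of a v4 UUID."""
    try:
        parsed = UUID(value, version=4)
    except (ValueError, AttributeError, TypeError):
        # Not parseable as a UUID at all (covers '' and 'FAKE_STRING').
        return False
    # UUID(..., version=4) forces the version bits, so comparing the
    # normalized string back to the input rejects non-v4 inputs.
    return str(parsed) == value


assert validate_uuid4(str(uuid4()))        # a real v4 UUID passes
assert not validate_uuid4('FAKE_STRING')   # garbage fails
assert not validate_uuid4('')              # empty string fails
```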

# Pjono/Signal/signal.py (Xp-op/Pjono, MIT)
from Pjono.Response import Http_File
from Pjono.Server import PjonoApp
import os

__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))


def _get_path(path: str):
    return os.path.join(__location__, path)


class ClientEvent:
    """
    Class for adding and getting events.

    ```py
    app = PjonoApp()
    client = ClientEvent(app)
    ```
    """
    def __init__(self, server: PjonoApp):
        self.app = server
        self.app.add_file("/Pjono/signal.js",
                          Http_File(_get_path("signal.js"), "application/javascript"))
        self.events = {}

    def addEvent(self, name: str):
        """
        Decorator function for adding a new event or overwriting an existing one.

        ```py
        @client.addEvent("yell")
        def yell(msg):
            return msg.upper() + "!!!"

        @app.register("/")
        def index(request):
            signal, event = client.getEvent(request)
            if signal and event == "yell":
                return signal
            return HTML("index.html")
        ```

        To fire the event, we use signal.js:

        ```html
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta http-equiv="X-UA-Compatible" content="IE=edge">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>index</title>
            <script src="/Pjono/signal.js"></script>
        </head>
        <body>
            <div id="root">
                <input name="msg" type="text" id="inp-1"><br><br>
                <button onclick="yell()">yell</button>
            </div>
            <script>
                const Signal = new PjSignal();
                function yell(){
                    Signal.fireEvent("yell", document.getElementById("inp-1").value, function(response, status){
                        var ele = document.createElement("h2");
                        ele.textContent = response;
                        document.getElementById("root").appendChild(ele);
                    });
                }
            </script>
        </body>
        </html>
        ```
        """
        def inner(_func):
            self.events[name] = _func
            # Return the function so the decorated name is not rebound to None.
            return _func
        return inner

    def getEvent(self, request: dict):
        """
        Get an event together with its message.
        Returns (None, None) if no event happened.

        ```py
        @app.register("/")
        def index(request):
            signal, event = client.getEvent(request)
            if signal and event == "onclick":
                return signal
            return HTML("index.html")
        ```
        """
        if "PjEvent" in request["Headers"] and "Pjmsg" in request["Headers"]:
            event = request["Headers"]["PjEvent"]
            msg = request["Headers"]["Pjmsg"]
            if event in self.events:
                return self.events[event](msg), event
        return None, None


class SignalCode:
    """
    SignalCode

    Tells the server about the current event status.

    ### Example:
    ```py
    @client.addEvent("user_id")
    def get_user_id(id):
        if id in user:
            return user[id]["name"]
        return SignalCode("Not Found", 404)

    @app.register("/")
    def index(request):
        signal, event = client.getEvent(request)
        if signal and event == "user_id":
            if signal == 404:
                return "User Not Found"
            return signal
        return HTML("html/index.html")
    ```
    """
    def __init__(self, name: str, code: int):
        self.code = code
        self.name = name

    def __repr__(self) -> str:
        return f"<{self.name}:{self.code}>"

    def __eq__(self, o: int) -> bool:
        return self.code == o
5901f60ebbe05967f5545f30d19613948f0dcb43 | 233 | py | Python | Standard Library/logging/1. Real Python/06_basic_config.py | shubhamnag14/Python-Documents | d3fee0ad90232b413f6ac1b562588fb255b79e42 | [
"Apache-2.0"
] | 2 | 2020-11-27T13:21:05.000Z | 2021-04-19T21:14:21.000Z | Standard Library/logging/1. Real Python/06_basic_config.py | shubhamnag14/Python-Documents | d3fee0ad90232b413f6ac1b562588fb255b79e42 | [
"Apache-2.0"
] | null | null | null | Standard Library/logging/1. Real Python/06_basic_config.py | shubhamnag14/Python-Documents | d3fee0ad90232b413f6ac1b562588fb255b79e42 | [
"Apache-2.0"
] | 1 | 2021-06-27T20:31:42.000Z | 2021-06-27T20:31:42.000Z | import logging
import logging.config
logging.config.fileConfig(fname='file.conf', disable_existing_loggers=False)
# Get the logger specified in the file
logger = logging.getLogger(__name__)
logger.debug('This is a debug message')
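`fileConfig` expects an ini-style configuration file. The actual `file.conf` is not included in this snapshot; a minimal version consistent with this script (the handler and formatter names below are illustrative) would look like:

```ini
[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
```

Passing `disable_existing_loggers=False` keeps loggers created before the `fileConfig` call (such as ones created at import time in other modules) enabled.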

# src/DNASignText.py (Toonerz/libdna, MIT)
from pandac.PandaModules import *
from panda3d.core import *
from DNASignGraphic import DNASignGraphic


class DNASignText(DNASignGraphic):

    def __init__(self):
        DNASignGraphic.__init__(self)
        self.letters = ''

    def setLetters(self, letters):
        self.letters = letters

    def getLetters(self):
        return self.letters
5910bd31955fb8d5ba7ebe7d4e588d6a90229782 | 562 | py | Python | src/librekpi/utils.py | LibreKPI/librekpi | 07bfbf18ff9f99a4b3347060b25699cb09f6f6b6 | [
"MIT"
] | null | null | null | src/librekpi/utils.py | LibreKPI/librekpi | 07bfbf18ff9f99a4b3347060b25699cb09f6f6b6 | [
"MIT"
] | 10 | 2015-01-12T20:49:21.000Z | 2015-03-12T17:20:18.000Z | src/librekpi/utils.py | LibreKPI/librekpi | 07bfbf18ff9f99a4b3347060b25699cb09f6f6b6 | [
"MIT"
] | 1 | 2015-01-11T23:54:09.000Z | 2015-01-11T23:54:09.000Z | """Flask-like routes for Tornado
Taken from https://github.com/quadloops/loopchat/blob/master/utils.py
"""
import tornado.web


class routes(object):
    _routes = []

    def __init__(self, uri, name=None):
        self._uri = uri
        self.name = name

    def __call__(self, _handler):
        """Gets called when we class-decorate."""
        name = self.name or _handler.__name__
        self._routes.append(tornado.web.url(self._uri, _handler, name=name))
        return _handler

    @classmethod
    def get_routes(cls):
        return cls._routes
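The registry pattern can be exercised without a running Tornado server. In the self-contained sketch below, `tornado.web.url(...)` is replaced by a plain tuple so the example has no dependencies, and the handler class is hypothetical:

```python
class routes:
    """Self-contained analogue of the decorator above: class-decorating a
    handler appends (uri, handler, name) to a shared class-level registry."""
    _routes = []

    def __init__(self, uri, name=None):
        self._uri = uri
        self.name = name

    def __call__(self, _handler):
        name = self.name or _handler.__name__
        # The real implementation appends tornado.web.url(...) here.
        routes._routes.append((self._uri, _handler, name))
        return _handler

    @classmethod
    def get_routes(cls):
        return cls._routes


@routes(r"/hello", name="hello")
class HelloHandler:
    pass
```

In a real application, `tornado.web.Application(routes.get_routes())` would consume the accumulated registry.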

#!/usr/bin/env python
# pxsshUtil.py (yun-percy/OpenSSHClient, Apache-2.0)
# coding=utf-8
from pexpect import pxssh
import getpass


def pxssh_cmd(addr, port, passwd, cmd):
    try:
        # Call the constructor to create a pxssh object.
        s = pxssh.pxssh()
        # Get the ssh host name specified by the user.
        # print(addr.split())
        hostname = addr.split('@')[-1]
        # Get the ssh user name specified by the user.
        username = addr.split('@')[0]
        # Get the ssh password specified by the user.
        # password = getpass.getpass('password: ')
        # Log in via the login method of the pxssh class; the original
        # prompt is '$', '#' or '>'.
        s.login(hostname, username, passwd, port=port, original_prompt='[$#>]')
        # Send the command, e.g. 'ls -l'.
        s.sendline(cmd)
        # Match the prompt.
        s.prompt()
        # Everything printed before the prompt is the command's output.
        ret = s.before
        # Close the ssh session.
        s.logout()
        return ret
    except pxssh.ExceptionPxssh as e:
        print("pxssh failed on login.")
        return str(e)
5914e4aaae3b77fcd11b51dc646c1566a7b5b152 | 196 | py | Python | demo_app/demo/settings/__init__.py | A-tiantian/xadmin_bugfix | 025cdf38245fcaf234ebcf785976b50919ba641c | [
"BSD-3-Clause"
] | 120 | 2018-04-19T12:03:17.000Z | 2021-11-18T10:29:54.000Z | demo_app/demo/settings/__init__.py | A-tiantian/xadmin_bugfix | 025cdf38245fcaf234ebcf785976b50919ba641c | [
"BSD-3-Clause"
] | 9 | 2018-12-10T14:52:16.000Z | 2021-11-24T14:07:06.000Z | demo_app/demo/settings/__init__.py | A-tiantian/xadmin_bugfix | 025cdf38245fcaf234ebcf785976b50919ba641c | [
"BSD-3-Clause"
] | 68 | 2018-05-28T09:46:47.000Z | 2022-02-23T14:39:25.000Z | from .base import *

DEBUG = True

if DEBUG is True:
    from .dev import *
else:
    from .production import *

# Site name
SITE_NAME = 'manage'

# Admin home page
SITE_PAGE = '/%s/service/article/' % SITE_NAME