# mail/models.py (drscream/kumquat @ 7bd3d84, MIT license)
from django.db import models
from django.utils.translation import ugettext_lazy as _
from passlib.hash import sha512_crypt
from kumquat.models import Domain
default_length = 255
class Account(models.Model):
    name = models.CharField(max_length=default_length)
    domain = models.ForeignKey(Domain, related_name='mail_accounts', on_delete=models.CASCADE)
    password = models.CharField(max_length=default_length)
    subaddress = models.BooleanField(verbose_name=_('Subaddress extension'), help_text=_('Enable subaddress extension (e.g. primary+sub@example.com)'), default=False)

    def set_password(self, password):
        # sha512_crypt.hash() is the current passlib API; encrypt() is deprecated
        self.password = sha512_crypt.hash(password)

    def __str__(self):
        return str(self.name) + '@' + str(self.domain)

    def save(self, **kwargs):
        self.name = self.name.lower()
        super().save(**kwargs)

    class Meta:
        unique_together = (('name', 'domain'),)


class Redirect(models.Model):
    name = models.CharField(max_length=default_length)
    domain = models.ForeignKey(Domain, on_delete=models.CASCADE)
    to = models.TextField()

    def __str__(self):
        return self.name + '@' + str(self.domain)

    def save(self, **kwargs):
        self.name = self.name.lower()
        self.to = self.to.lower()
        super().save(**kwargs)

    class Meta:
        unique_together = (('name', 'domain'),)
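Both models lowercase their address parts in save() so that the unique_together constraint behaves case-insensitively. A minimal, framework-free sketch of that normalization idea (the MailStore class and its names are hypothetical illustrations, not part of the models above):

```python
class MailStore:
    """Toy stand-in for the Account table: enforces the same
    lowercase-before-save rule as Account.save() above."""

    def __init__(self):
        # Mimics unique_together(('name', 'domain')) with a set of pairs
        self.accounts = set()

    def add(self, name, domain):
        key = (name.lower(), domain.lower())  # normalize like save()
        if key in self.accounts:
            raise ValueError("duplicate account: %s@%s" % key)
        self.accounts.add(key)
        return key

store = MailStore()
print(store.add("Alice", "Example.COM"))  # ('alice', 'example.com')
```

Because normalization happens before the uniqueness check, "Alice@example.com" and "ALICE@EXAMPLE.COM" collide, just as they would under the Django models.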
# src/schema/create_minio_bucket.py (mdpham/minio-loompy-graphene @ 11bad23, BSD-3-Clause license)
from graphene import Schema, Mutation, String, Field, ID, List
from minio import Minio
from minio.error import ResponseError
from .minio_bucket import MinioBucket
from minio_client.client import minio_client
class CreateMinioBucket(Mutation):
    # Use minio bucket type definition to be returned when created
    Output = MinioBucket

    # Subclass for describing what arguments mutation takes
    class Arguments:
        bucket_name = String()

    # Resolver function with arguments
    def mutate(root, info, bucket_name):
        try:
            minio_client.make_bucket(bucket_name)
            return {'bucket_name': bucket_name}
        except ResponseError as err:
            # On failure, log the error; the mutation then implicitly returns None
            print(err)
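The mutate() resolver above wraps make_bucket in a try/except so a failed creation is reported rather than raised to the GraphQL layer. The same control flow can be exercised without a live MinIO server using a stub client (FakeMinioClient, BucketExists, and create_bucket here are illustrative stand-ins, not part of the minio or graphene APIs):

```python
class BucketExists(Exception):
    """Stands in for minio's ResponseError in this sketch."""

class FakeMinioClient:
    def __init__(self):
        self.buckets = set()

    def make_bucket(self, name):
        # Real MinIO rejects duplicate bucket names; emulate that here
        if name in self.buckets:
            raise BucketExists("bucket %r already exists" % name)
        self.buckets.add(name)

def create_bucket(client, bucket_name):
    # Mirrors CreateMinioBucket.mutate: dict on success, None on error
    try:
        client.make_bucket(bucket_name)
        return {'bucket_name': bucket_name}
    except BucketExists as err:
        print(err)
        return None

client = FakeMinioClient()
print(create_bucket(client, "loom-files"))  # {'bucket_name': 'loom-files'}
print(create_bucket(client, "loom-files"))  # duplicate: prints the error, then None
```

Note that swallowing the error means the GraphQL client sees a null result rather than an error entry; re-raising inside mutate() would surface it in the response's errors list instead.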
# tests/test_tree.py (tgragnato/geneva, BSD-3-Clause license)
import logging
import os
from scapy.all import IP, TCP
import actions.tree
import actions.drop
import actions.tamper
import actions.duplicate
import actions.utils
# Imported explicitly because the tests reference these submodules directly
import actions.action
import actions.trigger

import layers.packet
def test_init():
    """
    Tests initialization
    """
    print(actions.action.Action.get_actions("out"))


def test_count_leaves():
    """
    Tests leaf count is correct.
    """
    a = actions.tree.ActionTree("out")
    logger = logging.getLogger("test")
    assert not a.parse("TCP:reserved:0tamper{TCP:flags:replace:S}-|", logger), "Tree parsed malformed DNA"
    a.parse("[TCP:reserved:0]-tamper{TCP:flags:replace:S}-|", logger)
    duplicate = actions.duplicate.DuplicateAction()
    duplicate2 = actions.duplicate.DuplicateAction()
    drop = actions.drop.DropAction()
    assert a.count_leaves() == 1
    assert a.remove_one()
    a.add_action(duplicate)
    assert a.count_leaves() == 1
    duplicate.left = duplicate2
    assert a.count_leaves() == 1
    duplicate.right = drop
    assert a.count_leaves() == 2
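count_leaves() walks the binary action tree and counts nodes with no children; as the test shows, a branching action only gains a second leaf once both slots are filled. A self-contained sketch of that recursion (Node and count_leaves here are illustrative, not Geneva's classes):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.left = None
        self.right = None

def count_leaves(node):
    # A node with no children is a leaf; otherwise sum over existing children
    if node is None:
        return 0
    if node.left is None and node.right is None:
        return 1
    return count_leaves(node.left) + count_leaves(node.right)

root = Node("duplicate")
root.left = Node("duplicate2")
print(count_leaves(root))   # 1: only the left chain ends in a leaf
root.right = Node("drop")
print(count_leaves(root))   # 2: both branches now end in leaves
```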
def test_check():
    """
    Tests action tree check function.
    """
    a = actions.tree.ActionTree("out")
    logger = logging.getLogger("test")
    a.parse("[TCP:flags:RA]-tamper{TCP:flags:replace:S}-|", logger)
    p = layers.packet.Packet(IP()/TCP(flags="A"))
    assert not a.check(p, logger)
    p = layers.packet.Packet(IP(ttl=64)/TCP(flags="RA"))
    assert a.check(p, logger)
    assert a.remove_one()
    assert a.check(p, logger)
    a.parse("[TCP:reserved:0]-tamper{TCP:flags:replace:S}-|", logger)
    assert a.check(p, logger)
    a.parse("[IP:ttl:64]-tamper{TCP:flags:replace:S}-|", logger)
    assert a.check(p, logger)
    p = layers.packet.Packet(IP(ttl=15)/TCP(flags="RA"))
    assert not a.check(p, logger)


def test_scapy():
    """
    Tests misc. scapy aspects relevant to strategies.
    """
    a = actions.tree.ActionTree("out")
    logger = logging.getLogger("test")
    a.parse("[TCP:reserved:0]-tamper{TCP:flags:replace:S}-|", logger)
    p = layers.packet.Packet(IP()/TCP(flags="A"))
    assert a.check(p, logger)
    packets = a.run(p, logger)
    assert packets[0][TCP].flags == "S"
    p = layers.packet.Packet(IP()/TCP(flags="A"))
    assert a.check(p, logger)
    a.parse("[TCP:reserved:0]-tamper{TCP:chksum:corrupt}-|", logger)
    packets = a.run(p, logger)
    assert packets[0][TCP].chksum
    assert a.check(p, logger)


def test_str():
    """
    Tests string representation.
    """
    logger = logging.getLogger("test")
    t = actions.trigger.Trigger("field", "flags", "TCP")
    a = actions.tree.ActionTree("out", trigger=t)
    assert str(a).strip() == "[%s]-|" % str(t)
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    assert a.add_action(tamper)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}-|"
    # Tree will not add a duplicate action
    assert not a.add_action(tamper)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}-|"
    assert a.add_action(tamper2)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R},)-|"
    assert a.add_action(actions.duplicate.DuplicateAction())
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R}(duplicate,),)-|"
    drop = actions.drop.DropAction()
    assert a.add_action(drop)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R}(duplicate(drop,),),)-|" or \
           str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R}(duplicate(,drop),),)-|"
    assert a.remove_action(drop)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R}(duplicate,),)-|"
    # Cannot remove action that is not present
    assert not a.remove_action(drop)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R}(duplicate,),)-|"
    a = actions.tree.ActionTree("out", trigger=t)
    orig = "[TCP:urgptr:15963]-duplicate(,drop)-|"
    a.parse(orig, logger)
    assert a.remove_one()
    assert orig != str(a)
    assert str(a) in ["[TCP:urgptr:15963]-drop-|", "[TCP:urgptr:15963]-duplicate-|"]
def test_pretty_print_send():
    t = actions.trigger.Trigger("field", "flags", "TCP")
    a = actions.tree.ActionTree("out", trigger=t)
    duplicate = actions.duplicate.DuplicateAction()
    a.add_action(duplicate)
    correct_string = "TCP:flags:0\nduplicate\n├── ===> \n└── ===> "
    assert a.pretty_print() == correct_string


def test_pretty_print(logger):
    """
    Print complex tree, although difficult to test
    """
    t = actions.trigger.Trigger("field", "flags", "TCP")
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    duplicate = actions.duplicate.DuplicateAction()
    duplicate2 = actions.duplicate.DuplicateAction()
    duplicate3 = actions.duplicate.DuplicateAction()
    duplicate4 = actions.duplicate.DuplicateAction()
    duplicate5 = actions.duplicate.DuplicateAction()
    drop = actions.drop.DropAction()
    drop2 = actions.drop.DropAction()
    drop3 = actions.drop.DropAction()
    drop4 = actions.drop.DropAction()
    duplicate.left = duplicate2
    duplicate.right = duplicate3
    duplicate2.left = tamper
    duplicate2.right = drop
    duplicate3.left = duplicate4
    duplicate3.right = drop2
    duplicate4.left = duplicate5
    duplicate4.right = drop3
    duplicate5.left = drop4
    duplicate5.right = tamper2
    a.add_action(duplicate)
    correct_string = "TCP:flags:0\nduplicate\n├── duplicate\n│   ├── tamper{TCP:flags:replace:S}\n│   │   └── ===> \n│   └── drop\n└── duplicate\n    ├── duplicate\n    │   ├── duplicate\n    │   │   ├── drop\n    │   │   └── tamper{TCP:flags:replace:R}\n    │   │       └── ===> \n    │   └── drop\n    └── drop"
    assert a.pretty_print() == correct_string
    assert a.pretty_print(visual=True)
    assert os.path.exists("tree.png")
    os.remove("tree.png")
    a.parse("[TCP:flags:0]-|", logger)
    a.pretty_print(visual=True)  # Empty action tree
    assert not os.path.exists("tree.png")


def test_pretty_print_order():
    """
    Tests the left/right ordering by reading in a new tree
    """
    logger = logging.getLogger("test")
    a = actions.tree.ActionTree("out")
    assert a.parse("[TCP:flags:A]-duplicate(tamper{TCP:flags:replace:R}(tamper{TCP:chksum:replace:14239},),duplicate(tamper{TCP:flags:replace:S}(tamper{TCP:chksum:replace:14239},),))-|", logger)
    correct_pretty_print = "TCP:flags:A\nduplicate\n├── tamper{TCP:flags:replace:R}\n│   └── tamper{TCP:chksum:replace:14239}\n│       └── ===> \n└── duplicate\n    ├── tamper{TCP:flags:replace:S}\n    │   └── tamper{TCP:chksum:replace:14239}\n    │       └── ===> \n    └── ===> "
    assert a.pretty_print() == correct_pretty_print
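pretty_print() renders the action tree with box-drawing connectors (`├──`, `└──`, `│`). A minimal recursive renderer producing the same style of output (render and the dict-based tree are illustrative helpers, not Geneva's implementation):

```python
def render(node, children_of):
    """Return the lines of an ASCII tree rooted at node.

    children_of maps a node name to the ordered list of its children.
    """
    lines = [node]
    kids = children_of.get(node, [])
    for i, kid in enumerate(kids):
        last = (i == len(kids) - 1)
        head = "└── " if last else "├── "    # connector for the child itself
        pad = "    " if last else "│   "     # continuation for its subtree
        sub = render(kid, children_of)
        lines.append(head + sub[0])
        lines.extend(pad + line for line in sub[1:])
    return lines

tree = {"duplicate": ["tamper", "drop"], "tamper": ["drop2"]}
print("\n".join(render("duplicate", tree)))
# duplicate
# ├── tamper
# │   └── drop2
# └── drop
```

The key detail, visible in the expected strings above, is that a last child switches both its connector (`└──`) and every continuation line beneath it (spaces instead of `│`).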
def test_parse():
    """
    Tests string parsing.
    """
    logger = logging.getLogger("test")
    t = actions.trigger.Trigger("field", "flags", "TCP")
    a = actions.tree.ActionTree("out", trigger=t)
    base_t = actions.trigger.Trigger("field", "flags", "TCP")
    base_a = actions.tree.ActionTree("out", trigger=base_t)
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    tamper3 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper4 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    a.parse("[TCP:flags:0]-|", logger)
    assert str(a) == str(base_a)
    assert len(a) == 0
    base_a.add_action(tamper)
    assert a.parse("[TCP:flags:0]-tamper{TCP:flags:replace:S}-|", logger)
    assert str(a) == str(base_a)
    assert len(a) == 1
    assert a.parse("[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R},)-|", logging.getLogger("test"))
    base_a.add_action(tamper2)
    assert str(a) == str(base_a)
    assert len(a) == 2
    base_a.add_action(tamper3)
    base_a.add_action(tamper4)
    assert a.parse("[TCP:flags:0]-tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R}(tamper{TCP:flags:replace:S}(tamper{TCP:flags:replace:R},),),)-|", logging.getLogger("test"))
    assert str(a) == str(base_a)
    assert len(a) == 4
    base_t = actions.trigger.Trigger("field", "flags", "TCP")
    base_a = actions.tree.ActionTree("out", trigger=base_t)
    duplicate = actions.duplicate.DuplicateAction()
    assert a.parse("[TCP:flags:0]-duplicate-|", logger)
    base_a.add_action(duplicate)
    assert str(a) == str(base_a)
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    tamper3 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="A")
    tamper4 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    duplicate.left = tamper
    assert a.parse("[TCP:flags:0]-duplicate(tamper{TCP:flags:replace:S},)-|", logger)
    assert str(a) == str(base_a)
    duplicate.right = tamper2
    assert a.parse("[TCP:flags:0]-duplicate(tamper{TCP:flags:replace:S},tamper{TCP:flags:replace:R})-|", logger)
    assert str(a) == str(base_a)
    tamper2.left = tamper3
    assert a.parse("[TCP:flags:0]-duplicate(tamper{TCP:flags:replace:S},tamper{TCP:flags:replace:R}(tamper{TCP:flags:replace:A},))-|", logger)
    assert str(a) == str(base_a)
    strategy = actions.utils.parse("[TCP:flags:0]-duplicate(tamper{TCP:flags:replace:S},tamper{TCP:flags:replace:R})-| \/", logger)
    assert strategy
    assert len(strategy.out_actions[0]) == 3
    assert len(strategy.in_actions) == 0
    assert not a.parse("[]", logger)  # No valid trigger
    assert not a.parse("[TCP:flags:0]-", logger)  # No valid ending "|"
    assert not a.parse("[TCP:]-|", logger)  # invalid trigger
    assert not a.parse("[TCP:flags:0]-foo-|", logger)  # Non-existent action
    assert not a.parse("[TCP:flags:0]--|", logger)  # Empty action
    assert not a.parse("[TCP:flags:0]-duplicate(,,,)-|", logger)  # Bad tree
    assert not a.parse("[TCP:flags:0]-duplicate()))-|", logger)  # Bad tree
    assert not a.parse("[TCP:flags:0]-duplicate(((()-|", logger)  # Bad tree
    assert not a.parse("[TCP:flags:0]-duplicate(,))))-|", logger)  # Bad tree
    assert not a.parse("[TCP:flags:0]-drop(duplicate,)-|", logger)  # Terminal action with children
    assert not a.parse("[TCP:flags:0]-drop(duplicate,duplicate)-|", logger)  # Terminal action with children
    assert not a.parse("[TCP:flags:0]-tamper{TCP:flags:replace:S}(,duplicate)-|", logger)  # Non-branching action with right child
    assert not a.parse("[TCP:flags:0]-tamper{TCP:flags:replace:S}(drop,duplicate)-|", logger)  # Non-branching action with children
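The strategy DNA exercised above follows the shape `[proto:field:value]-action{args}(left,right)-|`, where the parenthesized pair encodes a binary tree and either child may be empty, as in `duplicate(,drop)`. A toy recursive-descent parser for just the action-tree part (parse_node and unparse are illustrative sketches, not Geneva's parser):

```python
def parse_node(s, i=0):
    """Parse 'name(left,right)' into (name, left, right) tuples.

    Returns (node, next_index); an empty name with no children is None.
    """
    j = i
    while j < len(s) and s[j] not in "(),":  # action name, including {args}
        j += 1
    name = s[i:j]
    left = right = None
    if j < len(s) and s[j] == "(":
        left, j = parse_node(s, j + 1)
        assert s[j] == ",", "expected ',' between children"
        right, j = parse_node(s, j + 1)
        assert s[j] == ")", "expected ')' after children"
        j += 1
    node = (name, left, right) if name else None
    return node, j

def unparse(node):
    """Serialize a node back to DNA form; empty children stay empty."""
    if node is None:
        return ""
    name, left, right = node
    if left is None and right is None:
        return name
    return "%s(%s,%s)" % (name, unparse(left), unparse(right))

dna = "duplicate(tamper{TCP:flags:replace:S},duplicate(,drop))"
tree, _ = parse_node(dna)
print(tree[0])               # duplicate
print(unparse(tree) == dna)  # True
```

A real parser would additionally reject the malformed inputs the tests enumerate (unbalanced parentheses, terminal actions with children, and so on); this sketch only shows why the round-trip through str() and parse() is expected to be lossless.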
def test_tree():
    """
    Tests basic tree functionality.
    """
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction()
    tamper2 = actions.tamper.TamperAction()
    duplicate = actions.duplicate.DuplicateAction()
    a.add_action(None)
    a.add_action(tamper)
    assert a.get_slots() == 1
    a.add_action(tamper2)
    assert a.get_slots() == 1
    a.add_action(duplicate)
    assert a.get_slots() == 2
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    drop = actions.drop.DropAction()
    a.add_action(drop)
    assert a.get_slots() == 0
    add_success = a.add_action(tamper)
    assert not add_success
    assert a.get_slots() == 0
    rep = ""
    for s in a.string_repr(a.action_root):
        rep += s
    assert rep == "drop"
    print(str(a))
    assert a.parse("[TCP:flags:A]-duplicate(tamper{TCP:seq:corrupt},)-|", logging.getLogger("test"))
    for act in a:
        print(str(a))
    assert len(a) == 2
    assert a.get_slots() == 2
    for _ in range(100):
        assert str(a.get_rand_action("out", request="DropAction")) == "drop"


def test_remove():
    """
    Tests remove
    """
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction()
    tamper2 = actions.tamper.TamperAction()
    tamper3 = actions.tamper.TamperAction()
    assert not a.remove_action(tamper)
    a.add_action(tamper)
    assert a.remove_action(tamper)
    a.add_action(tamper)
    a.add_action(tamper2)
    a.add_action(tamper3)
    assert a.remove_action(tamper2)
    assert tamper2 not in a
    assert tamper.left == tamper3
    assert not tamper.right
    assert len(a) == 2
    a = actions.tree.ActionTree("out", trigger=t)
    duplicate = actions.duplicate.DuplicateAction()
    tamper = actions.tamper.TamperAction()
    tamper2 = actions.tamper.TamperAction()
    tamper3 = actions.tamper.TamperAction()
    a.add_action(tamper)
    assert a.action_root == tamper
    duplicate.left = tamper2
    duplicate.right = tamper3
    a.add_action(duplicate)
    assert len(a) == 4
    assert a.remove_action(duplicate)
    assert duplicate not in a
    assert tamper.left == tamper2
    assert not tamper.right
    assert len(a) == 2
    a.parse("[TCP:flags:A]-|", logging.getLogger("test"))
    assert not a.remove_one(), "Cannot remove one with no action root"


def test_len():
    """
    Tests length calculation.
    """
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction()
    tamper2 = actions.tamper.TamperAction()
    assert len(a) == 0, "__len__ returned wrong length"
    a.add_action(tamper)
    assert len(a) == 1, "__len__ returned wrong length"
    a.add_action(tamper)
    assert len(a) == 1, "__len__ returned wrong length"
    a.add_action(tamper2)
    assert len(a) == 2, "__len__ returned wrong length"
    duplicate = actions.duplicate.DuplicateAction()
    a.add_action(duplicate)
    assert len(a) == 3, "__len__ returned wrong length"


def test_contains():
    """
    Tests contains method
    """
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction()
    tamper2 = actions.tamper.TamperAction()
    tamper3 = actions.tamper.TamperAction()
    assert not a.contains(tamper), "contains incorrect behavior"
    assert not a.contains(tamper2), "contains incorrect behavior"
    a.add_action(tamper)
    assert a.contains(tamper), "contains incorrect behavior"
    assert not a.contains(tamper2), "contains incorrect behavior"
    add_success = a.add_action(tamper)
    assert not add_success, "added duplicate action"
    assert a.contains(tamper), "contains incorrect behavior"
    assert not a.contains(tamper2), "contains incorrect behavior"
    a.add_action(tamper2)
    assert a.contains(tamper), "contains incorrect behavior"
    assert a.contains(tamper2), "contains incorrect behavior"
    a.remove_action(tamper2)
    assert a.contains(tamper), "contains incorrect behavior"
    assert not a.contains(tamper2), "contains incorrect behavior"
    a.add_action(tamper2)
    assert a.contains(tamper), "contains incorrect behavior"
    assert a.contains(tamper2), "contains incorrect behavior"
    remove_success = a.remove_action(tamper)
    assert remove_success
    assert not a.contains(tamper), "contains incorrect behavior"
    assert a.contains(tamper2), "contains incorrect behavior"
    a.add_action(tamper3)
    assert a.contains(tamper3), "contains incorrect behavior"
    assert len(a) == 2, "len incorrect return"
    remove_success = a.remove_action(tamper2)
    assert remove_success


def test_iter():
    """
    Tests iterator.
    """
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    assert a.add_action(tamper)
    assert a.add_action(tamper2)
    assert not a.add_action(tamper)
    for node in a:
        print(node)
def test_run():
    """
    Tests running packets through the chain.
    """
    logger = logging.getLogger("test")
    t = actions.trigger.Trigger(None, None, None)
    a = actions.tree.ActionTree("out", trigger=t)
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    duplicate = actions.duplicate.DuplicateAction()
    duplicate2 = actions.duplicate.DuplicateAction()
    drop = actions.drop.DropAction()
    packet = layers.packet.Packet(IP()/TCP())
    a.add_action(tamper)
    packets = a.run(packet, logging.getLogger("test"))
    assert len(packets) == 1
    assert None not in packets
    assert packets[0].get("TCP", "flags") == "S"
    a.add_action(tamper2)
    print(str(a))
    packet = layers.packet.Packet(IP()/TCP())
    assert not a.add_action(tamper), "tree added duplicate action"
    packets = a.run(packet, logging.getLogger("test"))
    assert len(packets) == 1
    assert None not in packets
    assert packets[0].get("TCP", "flags") == "R"
    print(str(a))
    a.remove_action(tamper2)
    a.remove_action(tamper)
    a.add_action(duplicate)
    packet = layers.packet.Packet(IP()/TCP(flags="RA"))
    packets = a.run(packet, logging.getLogger("test"))
    assert len(packets) == 2
    assert None not in packets
    assert packets[0][TCP].flags == "RA"
    assert packets[1][TCP].flags == "RA"
    print(str(a))
    duplicate.left = tamper
    duplicate.right = tamper2
    packet = layers.packet.Packet(IP()/TCP(flags="RA"))
    print("ABOUT TO RUN")
    packets = a.run(packet, logging.getLogger("test"))
    assert len(packets) == 2
    assert None not in packets
    print(str(a))
    print(str(packets[0]))
    print(str(packets[1]))
    assert packets[0][TCP].flags == "S"
    assert packets[1][TCP].flags == "R"
    print(str(a))
    tamper.left = duplicate2
    packet = layers.packet.Packet(IP()/TCP(flags="RA"))
    packets = a.run(packet, logging.getLogger("test"))
    assert len(packets) == 3
    assert None not in packets
    assert packets[0][TCP].flags == "S"
    assert packets[1][TCP].flags == "S"
    assert packets[2][TCP].flags == "R"
    print(str(a))
    tamper2.left = drop
    packet = layers.packet.Packet(IP()/TCP(flags="RA"))
    packets = a.run(packet, logging.getLogger("test"))
    assert len(packets) == 2
    assert None not in packets
    assert packets[0][TCP].flags == "S"
    assert packets[1][TCP].flags == "S"
    print(str(a))
    assert a.remove_action(duplicate2)
    tamper.left = actions.drop.DropAction()
    packet = layers.packet.Packet(IP()/TCP(flags="RA"))
    packets = a.run(packet, logger)
    assert len(packets) == 0
    print(str(a))
    a.parse("[TCP:flags:A]-duplicate(tamper{TCP:flags:replace:R}(tamper{TCP:chksum:replace:14239},),duplicate(tamper{TCP:flags:replace:S},))-|", logger)
    packet = layers.packet.Packet(IP()/TCP(flags="A"))
    assert a.check(packet, logger)
    packets = a.run(packet, logger)
    assert len(packets) == 3
    assert packets[0][TCP].flags == "R"
    assert packets[1][TCP].flags == "S"
    assert packets[2][TCP].flags == "A"


def test_index():
    """
    Tests index
    """
    a = actions.tree.ActionTree("out")
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    tamper3 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="F")
    assert a.add_action(tamper)
    assert a[0] == tamper
    assert not a[1]
    assert a.add_action(tamper2)
    assert a[0] == tamper
    assert a[1] == tamper2
    assert a[-1] == tamper2
    assert not a[10]
    assert a.add_action(tamper3)
    assert a[-1] == tamper3
    assert not a[-11]
def test_mate():
    """
    Tests mate primitive
    """
    logger = logging.getLogger("test")
    t = actions.trigger.Trigger("field", "flags", "TCP")
    a = actions.tree.ActionTree("out", trigger=t)
    assert not a.choose_one()
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    duplicate = actions.duplicate.DuplicateAction()
    duplicate2 = actions.duplicate.DuplicateAction()
    drop = actions.drop.DropAction()
    other_a = actions.tree.ActionTree("out", trigger=t)
    assert not a.mate(other_a), "Can't mate empty trees"
    assert a.add_action(tamper)
    assert other_a.add_action(tamper2)
    assert a.choose_one() == tamper
    assert other_a.choose_one() == tamper2
    assert a.get_parent(tamper) == (None, None)
    assert other_a.get_parent(tamper2) == (None, None)
    assert a.add_action(duplicate)
    assert a.get_parent(duplicate) == (tamper, "left")
    duplicate.right = drop
    assert a.get_parent(drop) == (duplicate, "right")
    assert other_a.add_action(duplicate2)
    # Test mating a full tree with a full tree
    assert str(a) == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(duplicate(,drop),)-|"
    assert str(other_a) == "[TCP:flags:0]-tamper{TCP:flags:replace:R}(duplicate,)-|"
    assert a.swap(duplicate, other_a, duplicate2)
    assert str(a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(duplicate,)-|"
    assert str(other_a).strip() == "[TCP:flags:0]-tamper{TCP:flags:replace:R}(duplicate(,drop),)-|"
    assert len(a) == 2
    assert len(other_a) == 3
    assert duplicate2 not in other_a
    assert duplicate not in a
    assert tamper.left == duplicate2
    assert tamper2.left == duplicate
    assert other_a.get_parent(duplicate) == (tamper2, "left")
    assert a.get_parent(duplicate2) == (tamper, "left")
    assert other_a.get_parent(drop) == (duplicate, "right")
    assert a.get_parent(None) == (None, None)
    # Test mating two trees with just root nodes
    t = actions.trigger.Trigger("field", "flags", "TCP")
    a = actions.tree.ActionTree("out", trigger=t)
    assert not a.choose_one()
    tamper = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="S")
    tamper2 = actions.tamper.TamperAction(field="flags", tamper_type="replace", tamper_value="R")
    duplicate = actions.duplicate.DuplicateAction()
    duplicate2 = actions.duplicate.DuplicateAction()
    drop = actions.drop.DropAction()
    other_a = actions.tree.ActionTree("out", trigger=t)
    assert not a.mate(other_a)
    assert a.add_action(duplicate)
    assert other_a.add_action(duplicate2)
    assert a.mate(other_a)
    assert a.action_root == duplicate2
    assert other_a.action_root == duplicate
    assert not duplicate.left and not duplicate.right
    assert not duplicate2.left and not duplicate2.right
    # Confirm that no nodes have been aliased or connected between the trees
    for node in a:
        for other_node in other_a:
            assert not node.left == other_node
            assert not node.right == other_node
    # Test mating two trees where one is empty
    assert a.remove_action(duplicate2)
    # This should swap the duplicate action to be the action root of the other tree
    assert str(a) == "[TCP:flags:0]-|"
    assert str(other_a) == "[TCP:flags:0]-duplicate-|"
    assert a.mate(other_a)
    assert not other_a.action_root
    assert a.action_root == duplicate
    assert len(a) == 1
    assert len(other_a) == 0
    # Confirm that no nodes have been aliased or connected between the trees
    for node in a:
        for other_node in other_a:
            if other_node:
                assert not node.left == other_node
                assert not node.right == other_node
    assert a.parse("[TCP:flags:0]-tamper{TCP:flags:replace:S}(duplicate(,drop),)-|", logger)
    drop = a.action_root.left.right
    assert str(drop) == "drop"
    # Note that this will return a valid ActionTree, but because it is empty,
    # it is technically a False-y value, as its length is 0
    assert other_a.parse("[TCP:flags:0]-|", logger) == other_a
    a.swap(drop, other_a, None)
    assert other_a.action_root == drop
    assert not a.action_root.left.right
    assert str(other_a) == "[TCP:flags:0]-drop-|"
    assert str(a) == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(duplicate,)-|"
    other_a.swap(drop, a, a.action_root.left)
    # Confirm that no nodes have been aliased or connected between the trees
    for node in a:
        for other_node in other_a:
            if other_node:
                assert not node.left == other_node
                assert not node.right == other_node
    assert str(other_a) == "[TCP:flags:0]-duplicate-|"
    assert str(a) == "[TCP:flags:0]-tamper{TCP:flags:replace:S}(drop,)-|"
    a.parse("[TCP:flags:0]-drop-|", logger)
    other_a.parse("[TCP:flags:0]-duplicate(drop,drop)-|", logger)
    a_drop = a.action_root
    other_duplicate = other_a.action_root
    a.swap(a_drop, other_a, other_duplicate)
    print(str(a))
    print(str(other_a))
    assert str(other_a) == "[TCP:flags:0]-drop-|"
    assert str(a) == "[TCP:flags:0]-duplicate(drop,drop)-|"
    duplicate = actions.duplicate.DuplicateAction()
    duplicate2 = actions.duplicate.DuplicateAction()
    drop = actions.drop.DropAction()
    drop2 = actions.drop.DropAction()
    drop3 = actions.drop.DropAction()
    a = actions.tree.ActionTree("out", trigger=t)
    a.add_action(duplicate)
    a.add_action(drop)
    a.add_action(drop2)
    assert str(a) == "[TCP:flags:0]-duplicate(drop,drop)-|"
    assert a.get_slots() == 0
    other_a = actions.tree.ActionTree("out", trigger=t)
    other_a.add_action(drop3)
    a.swap(drop, other_a, drop3)
    assert str(a) == "[TCP:flags:0]-duplicate(drop,drop)-|"
    a.swap(drop3, other_a, drop)
    assert str(a) == "[TCP:flags:0]-duplicate(drop,drop)-|"
    assert a.mate(other_a)


def test_choose_one():
    """
    Tests choose_one functionality
    """
    a = actions.tree.ActionTree("out")
    drop = actions.drop.DropAction()
    assert not a.choose_one()
    assert a.add_action(drop)
    assert a.choose_one() == drop
    assert a.remove_action(drop)
    assert not a.choose_one()
    duplicate = actions.duplicate.DuplicateAction()
    a.add_action(duplicate)
    assert a.choose_one() == duplicate
    duplicate.left = drop
    assert a.choose_one() in [duplicate, drop]
    # Make sure that both actions get chosen
chosen = set()
for i in range(0, 10000):
act = a.choose_one()
chosen.add(act)
assert chosen == set([duplicate, drop])
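The `choose_one` coverage check above relies on uniform sampling over every node in the tree; a minimal self-contained sketch of the same pattern (`Node`, `iter_nodes`, and `choose_one` here are illustrative stand-ins, not the actual `actions` API):

```python
import random

class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def iter_nodes(node):
    # Pre-order walk over the binary action tree
    if node is None:
        return
    yield node
    yield from iter_nodes(node.left)
    yield from iter_nodes(node.right)

def choose_one(root):
    nodes = list(iter_nodes(root))
    return random.choice(nodes) if nodes else None

# With two nodes, 1000 draws should hit both
root = Node(left=Node())
chosen = {choose_one(root) for _ in range(1000)}
```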
| 39.501458 | 315 | 0.665104 | 3,749 | 27,098 | 4.746332 | 0.058682 | 0.060245 | 0.031471 | 0.060189 | 0.78105 | 0.717433 | 0.655502 | 0.617231 | 0.586771 | 0.551085 | 0 | 0.013152 | 0.177873 | 27,098 | 685 | 316 | 39.559124 | 0.781892 | 0.052919 | 0 | 0.59854 | 0 | 0.034672 | 0.200095 | 0.124335 | 0 | 0 | 0 | 0 | 0.45073 | 1 | 0.032847 | false | 0 | 0.016423 | 0 | 0.04927 | 0.047445 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
caa0e695442da8dc639ed3c9061223b76d6ae4f6 | 579 | py | Python | db_models/deckentry.py | Teplitsa/false-security-1 | 9e5cc23c8bf324d923965bb2624cac4994891154 | [
"MIT"
] | 1 | 2020-10-01T17:44:26.000Z | 2020-10-01T17:44:26.000Z | db_models/deckentry.py | Teplitsa/false-security-1 | 9e5cc23c8bf324d923965bb2624cac4994891154 | [
"MIT"
] | null | null | null | db_models/deckentry.py | Teplitsa/false-security-1 | 9e5cc23c8bf324d923965bb2624cac4994891154 | [
"MIT"
] | 1 | 2021-10-05T12:09:07.000Z | 2021-10-05T12:09:07.000Z | from globals import db
import db_models.game
from db_models.card import Card
class DeckEntry(db.Model):
#__table_args__ = {'extend_existing': True}
__tablename__ = 'deckentry'
id = db.Column(db.Integer, primary_key=True)
# TODO: Undo nullable
cardId = db.Column(db.Integer, db.ForeignKey('card.id'), nullable=True)
gameId = db.Column(db.Integer, db.ForeignKey('game.id'), nullable=False)
card = db.relationship('Card', lazy=False)
game = db.relationship('Game', back_populates='deck', lazy=False)
order = db.Column(db.Integer, nullable=False) | 41.357143 | 76 | 0.713299 | 80 | 579 | 4.9875 | 0.425 | 0.080201 | 0.100251 | 0.170426 | 0.145363 | 0.145363 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148532 | 579 | 14 | 77 | 41.357143 | 0.809331 | 0.107081 | 0 | 0 | 0 | 0 | 0.067829 | 0 | 0 | 0 | 0 | 0.071429 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
caa108c99289e504df2d4967b77c333b2d533a6d | 1,482 | py | Python | fisher_py/data/file_error.py | abdelq/fisher_py | befb98732ba7c4e57858d158c68cda09ed829d66 | [
"MIT"
] | 3 | 2021-11-03T20:55:45.000Z | 2022-02-01T10:11:47.000Z | fisher_py/data/file_error.py | abdelq/fisher_py | befb98732ba7c4e57858d158c68cda09ed829d66 | [
"MIT"
] | 2 | 2022-01-28T02:04:21.000Z | 2022-01-29T01:29:14.000Z | fisher_py/data/file_error.py | abdelq/fisher_py | befb98732ba7c4e57858d158c68cda09ed829d66 | [
"MIT"
] | 1 | 2022-01-26T23:30:37.000Z | 2022-01-26T23:30:37.000Z | from fisher_py.net_wrapping import NetWrapperBase
class FileError(NetWrapperBase):
@property
def has_error(self) -> bool:
"""
Gets a value indicating whether this file has detected an error. If this is false:
Other error properties in this interface have no meaning. Applications should
not continue with processing data from any file which indicates an error.
"""
return self._get_wrapped_object_().HasError
@property
def has_warning(self) -> bool:
"""
Gets a value indicating whether this file has detected a warning. If this is
false: Other warning properties in this interface have no meaning.
"""
return self._get_wrapped_object_().HasWarning
@property
def error_code(self) -> int:
"""
Gets the error code number. Typically this is a windows system error number.
The lowest valid windows error is: 0x00030200 Errors detected within our files
will have codes below 100.
"""
return self._get_wrapped_object_().ErrorCode
@property
def error_message(self) -> str:
"""
Gets the error message. For "unknown exceptions" this may include a stack trace.
"""
return self._get_wrapped_object_().ErrorMessage
@property
def warning_message(self) -> str:
"""
Gets the warning message.
"""
return self._get_wrapped_object_().WarningMessage
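Per the `has_error` docstring, callers are expected to stop processing before reading any data; a hedged sketch of that guard (the stub class below is an illustrative stand-in for a real wrapped `FileError`):

```python
class StubFileError:
    """Illustrative stand-in for a wrapped FileError instance."""
    has_error = False
    has_warning = True
    error_code = 0
    error_message = ""
    warning_message = "some scans missing"

def check_file(err):
    # Stop entirely on error; warnings are merely reported
    if err.has_error:
        raise RuntimeError(f"file error {err.error_code}: {err.error_message}")
    if err.has_warning:
        return f"warning: {err.warning_message}"
    return "ok"
```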
| 32.217391 | 90 | 0.654521 | 179 | 1,482 | 5.268156 | 0.452514 | 0.058325 | 0.068929 | 0.106045 | 0.415695 | 0.195122 | 0.195122 | 0.114528 | 0.114528 | 0.114528 | 0 | 0.011215 | 0.278003 | 1,482 | 45 | 91 | 32.933333 | 0.870093 | 0.450742 | 0 | 0.294118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.058824 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
cab43f37667bf4a445a87188b43c261b7be4262e | 1,250 | py | Python | URLSHORT/lib.py | its-mr-monday/Url-Shortener | bca0f9c1a8d6c5a686704d7a410351b9ce31439b | [
"MIT"
] | null | null | null | URLSHORT/lib.py | its-mr-monday/Url-Shortener | bca0f9c1a8d6c5a686704d7a410351b9ce31439b | [
"MIT"
] | null | null | null | URLSHORT/lib.py | its-mr-monday/Url-Shortener | bca0f9c1a8d6c5a686704d7a410351b9ce31439b | [
"MIT"
] | null | null | null | import random
import string
import requests
def SQL_SYNTAX_CHECK(text: str) -> bool:
    bad_chars = ['*', ';', 'SELECT ', ' FROM ', ' TRUE ', ' WHERE ']
    for token in bad_chars:
        if token in text:
            return False
    return True
def validateRegistration(name, uname, email, password, confirm):
if len(name) < 1 or len(name) > 45:
return "Error invalid name"
if len(uname) < 1 or len(uname) > 20:
return "Error invalid username"
if len(email) < 1 or len(email) > 100:
return "Error invalid email"
if len(password) < 1:
return "Error invalid password"
if password != confirm:
return "Error passwords do not match"
return "Success"
def validate_email(email: str):
    # A valid address contains exactly one '@'
    return email.count("@") == 1
def validate_link(link: str):
    try:
        req = requests.get(link)
    except requests.RequestException:
        return False
    return req.status_code == 200
def generate_link() -> str:
    letters = string.ascii_letters + string.digits
    return ''.join(random.choice(letters) for _ in range(10))
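For URL slugs, the standard library's `secrets` module is generally preferable to `random`, which is not cryptographically secure; a sketch of an alternative generator (not part of this repo's module):

```python
import secrets
import string

def generate_link_secure(length: int = 10) -> str:
    # Cryptographically stronger alternative to random.choice
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```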
cae2b6216f4f43c83ea66c2eb2462e3bd35c9bfd | 1,607 | py | Python | stats.py | warppoint42/Mahjong221 | dac82c726927730e11112e2a62b500587717b7ed | [
"MIT"
] | null | null | null | stats.py | warppoint42/Mahjong221 | dac82c726927730e11112e2a62b500587717b7ed | [
"MIT"
] | null | null | null | stats.py | warppoint42/Mahjong221 | dac82c726927730e11112e2a62b500587717b7ed | [
"MIT"
] | null | null | null | import csv
stats = dict()
with open('unifiedroundlog.csv', 'r') as csvfile:
reader = csv.DictReader(csvfile, ('AI', 'gameID', 'end', 'win', 'feed', 'riichi'))
for row in reader:
if row['AI'] not in stats:
stats[row['AI']] = dict()
stats[row['AI']].setdefault('n', 0)
stats[row['AI']].setdefault('end', 0)
stats[row['AI']].setdefault('win', 0)
stats[row['AI']].setdefault('feed', 0)
stats[row['AI']]['n'] += 1
if row['end'] == '1':
stats[row['AI']]['end'] += 1
if row['win'] == '1':
stats[row['AI']]['win'] += 1
if row['feed'] == '1':
stats[row['AI']]['feed'] += 1
with open('owari.csv', 'r') as csvfile:
reader = csv.DictReader(csvfile, ('AI', 'gameID', '1', '2', '3', '4'))
for row in reader:
stats[row['AI']].setdefault('1', 0)
stats[row['AI']].setdefault('2', 0)
stats[row['AI']].setdefault('3', 0)
stats[row['AI']].setdefault('4', 0)
stats[row['AI']].setdefault('totp', 0)
stats[row['AI']].setdefault('totn', 0)
stats[row['AI']]['totn'] += 1
if 'Name' in row['1']:
stats[row['AI']]['1'] += 1
stats[row['AI']]['totp'] += 1
if 'Name' in row['2']:
stats[row['AI']]['2'] += 1
stats[row['AI']]['totp'] += 2
if 'Name' in row['3']:
stats[row['AI']]['3'] += 1
stats[row['AI']]['totp'] += 3
if 'Name' in row['4']:
stats[row['AI']]['4'] += 1
stats[row['AI']]['totp'] += 4
print(stats)
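The per-AI counters collected above lend themselves to simple derived rates; an illustrative post-processing step (field names follow the script, the sample numbers are made up):

```python
sample = {"SomeAI": {"n": 8, "end": 5, "win": 2, "feed": 1, "totp": 20, "totn": 8}}

rates = {
    ai: {
        "win_rate": s["win"] / s["n"],
        "feed_rate": s["feed"] / s["n"],
        "avg_placement": s["totp"] / s["totn"],
    }
    for ai, s in sample.items()
}
```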
| 32.14 | 86 | 0.450529 | 215 | 1,607 | 3.367442 | 0.162791 | 0.172652 | 0.331492 | 0.276243 | 0.477901 | 0.129834 | 0.129834 | 0.129834 | 0.129834 | 0.129834 | 0 | 0.036778 | 0.289359 | 1,607 | 49 | 87 | 32.795918 | 0.597198 | 0 | 0 | 0.04878 | 0 | 0 | 0.129052 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02439 | 0 | 0.02439 | 0.02439 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
caef5559516f4fd218345acf0eae5ad95febdf7a | 1,470 | py | Python | BERT_Custom/bert_sequence_tagger/metrics.py | gp201/BERT | 0479bfb4faf7fb107acb38bd1a0e27a23719aa92 | [
"MIT"
] | 1 | 2020-03-20T09:58:58.000Z | 2020-03-20T09:58:58.000Z | BERT_Custom/bert_sequence_tagger/metrics.py | gp201/BERT | 0479bfb4faf7fb107acb38bd1a0e27a23719aa92 | [
"MIT"
] | null | null | null | BERT_Custom/bert_sequence_tagger/metrics.py | gp201/BERT | 0479bfb4faf7fb107acb38bd1a0e27a23719aa92 | [
"MIT"
] | null | null | null | import itertools
from sklearn.metrics import f1_score as f1_score_sklearn
from seqeval.metrics import f1_score
from sklearn.metrics import classification_report
def f1_entity_level(*args, **kwargs):
return f1_score(*args, **kwargs)
def f1_token_level(true_labels, predictions):
true_labels = list(itertools.chain(*true_labels))
predictions = list(itertools.chain(*predictions))
labels = list(set(true_labels) - {'[PAD]', 'O'})
return f1_score_sklearn(true_labels,
predictions,
average='micro',
labels=labels)
def f1_per_token(true_labels, predictions):
true_labels = list(itertools.chain(*true_labels))
predictions = list(itertools.chain(*predictions))
labels = list(set(true_labels) - {'[PAD]', 'O'})
    return classification_report(true_labels,
                                 predictions,
                                 labels=labels)
def f1_per_token_plot(true_labels, predictions):
true_labels = list(itertools.chain(*true_labels))
predictions = list(itertools.chain(*predictions))
labels = list(set(true_labels) - {'[PAD]', 'O'})
    return classification_report(true_labels,
                                 predictions,
                                 labels=labels,
                                 output_dict=True)
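All four helpers share the same preprocessing: flatten the per-sentence label lists and drop padding/outside tags before scoring. That step in isolation (the BIO tags below are invented sample data):

```python
import itertools

true_labels = [["O", "B-PER", "I-PER"], ["O", "B-LOC"]]
predictions = [["O", "B-PER", "O"], ["B-LOC", "B-LOC"]]

flat_true = list(itertools.chain(*true_labels))
flat_pred = list(itertools.chain(*predictions))

# Score only real entity tags, ignoring padding and 'O'
labels = sorted(set(flat_true) - {"[PAD]", "O"})
```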
| 35 | 57 | 0.572109 | 144 | 1,470 | 5.597222 | 0.222222 | 0.186104 | 0.182382 | 0.093052 | 0.669975 | 0.669975 | 0.622829 | 0.622829 | 0.622829 | 0.622829 | 0 | 0.009212 | 0.335374 | 1,470 | 42 | 58 | 35 | 0.815763 | 0 | 0 | 0.517241 | 0 | 0 | 0.016084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.137931 | 0.034483 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1b069eee129e69f7b50529f8283d98c09f05a638 | 836 | py | Python | architecture/structures/fsm.py | TEKERone/VuelaBot | c333ba213a91fd2297b5bd2ee393226dcbb39c01 | [
"MIT"
] | null | null | null | architecture/structures/fsm.py | TEKERone/VuelaBot | c333ba213a91fd2297b5bd2ee393226dcbb39c01 | [
"MIT"
] | null | null | null | architecture/structures/fsm.py | TEKERone/VuelaBot | c333ba213a91fd2297b5bd2ee393226dcbb39c01 | [
"MIT"
] | null | null | null | #! /usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from architecture.structures.states import State
from architecture.sensors.parsing import NLParser
class FSM:
"""Máquina de estados
"""
def __init__(self, current_state, intent_data='data/intents2.csv'):
        # compatibility: ensure we were handed a State
assert isinstance(current_state, State)
        # current state
self.current_state = current_state
        # intent parser
self.parser = NLParser(intent_data)
def run(self):
self.current_state.run()
def transition(self, user_input):
self.current_state = self.current_state.get_next_state(user_input,
self.parser)
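Stripped of the NLP parser, the pattern is just a transition table keyed by user input; a self-contained sketch (the `State` below is a stand-in for the project's class, and the "takeoff"/"land" inputs are made up):

```python
class State:
    def __init__(self, name):
        self.name = name
        self.transitions = {}

    def get_next_state(self, user_input, parser=None):
        # Stay in place on unknown input
        return self.transitions.get(user_input, self)

class MiniFSM:
    def __init__(self, current_state):
        self.current_state = current_state

    def transition(self, user_input):
        self.current_state = self.current_state.get_next_state(user_input)

idle, flying = State("idle"), State("flying")
idle.transitions["takeoff"] = flying
flying.transitions["land"] = idle
```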
| 27.866667 | 75 | 0.657895 | 94 | 836 | 5.521277 | 0.5 | 0.16185 | 0.154143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003247 | 0.263158 | 836 | 29 | 76 | 28.827586 | 0.839286 | 0.141148 | 0 | 0 | 0 | 0 | 0.024011 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.2 | false | 0 | 0.333333 | 0 | 0.6 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
1b0d9662f2b05d2ad83bc0fe148544efb0eb5d9a | 2,492 | py | Python | 04-Working-With-Dataframes/4.Exercise_ Distinct Articles.py | RodriGonca/DP-203-Data-Engineer | ef8e81bd4bda1e285c2e43714368d46be3ad041b | [
"MIT"
] | null | null | null | 04-Working-With-Dataframes/4.Exercise_ Distinct Articles.py | RodriGonca/DP-203-Data-Engineer | ef8e81bd4bda1e285c2e43714368d46be3ad041b | [
"MIT"
] | null | null | null | 04-Working-With-Dataframes/4.Exercise_ Distinct Articles.py | RodriGonca/DP-203-Data-Engineer | ef8e81bd4bda1e285c2e43714368d46be3ad041b | [
"MIT"
] | null | null | null | # Databricks notebook source
# MAGIC %md
# MAGIC # Introduction to DataFrames Lab
# MAGIC ## Distinct Articles
# COMMAND ----------
# MAGIC %md
# MAGIC ## Instructions
# MAGIC
# MAGIC In the cell provided below, write the code necessary to count the number of distinct articles in our data set.
# MAGIC 0. Copy and paste all you like from the previous notebook.
# MAGIC 0. Read in our parquet files.
# MAGIC 0. Apply the necessary transformations.
# MAGIC 0. Assign the count to the variable `totalArticles`
# MAGIC 0. Run the last cell to verify that the data was loaded correctly.
# MAGIC
# MAGIC **Bonus**
# MAGIC
# MAGIC If you recall from the beginning of the previous notebook, the act of reading in our parquet files will trigger a job.
# MAGIC 0. Define a schema that matches the data we are working with.
# MAGIC 0. Update the read operation to use the schema.
# COMMAND ----------
# MAGIC %md
# MAGIC ## Getting Started
# MAGIC
# MAGIC Run the following cell to configure our "classroom."
# COMMAND ----------
# MAGIC %run "./Includes/Classroom-Setup"
# COMMAND ----------
# MAGIC %md
# MAGIC ## Show Your Work
# COMMAND ----------
(source, sasEntity, sasToken) = getAzureDataSource()
spark.conf.set(sasEntity, sasToken)
path = source + "/wikipedia/pagecounts/staging_parquet_en_only_clean/"
# COMMAND ----------
# TODO
# Replace <<FILL_IN>> with your code.
df = (spark # Our SparkSession & Entry Point
.read # Our DataFrameReader
<<FILL_IN>> # Read in the parquet files
<<FILL_IN>> # Reduce the columns to just the one
<<FILL_IN>> # Produce a unique set of values
)
totalArticles = df.<<FILL_IN>> # Identify the total number of records remaining.
print("Distinct Articles: {0:,}".format(totalArticles))
# COMMAND ----------
# MAGIC %md
# MAGIC ## Verify Your Work
# MAGIC Run the following cell to verify that your `DataFrame` was created properly.
# COMMAND ----------
expected = 1783138
assert totalArticles == expected, "Expected the total to be " + str(expected) + " but found " + str(totalArticles)
| 34.611111 | 126 | 0.6874 | 337 | 2,492 | 5.032641 | 0.391691 | 0.024764 | 0.035377 | 0.044811 | 0.238208 | 0.238208 | 0.207547 | 0.207547 | 0.207547 | 0.207547 | 0 | 0.013327 | 0.186998 | 2,492 | 71 | 127 | 35.098592 | 0.823791 | 0.735554 | 0 | 0.230769 | 0 | 0 | 0.186047 | 0.086379 | 0 | 0 | 0 | 0.014085 | 0.076923 | 0 | null | null | 0 | 0 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1b2ad0ae61c78643d62af25d4ac99dd6250e8bf1 | 1,655 | py | Python | jupiter/use_cases/prm/person/remove.py | horia141/jupiter | 2c721d1d44e1cd2607ad9936e54a20ea254741dc | [
"MIT"
] | 15 | 2019-05-05T14:34:58.000Z | 2022-02-25T09:57:28.000Z | jupiter/use_cases/prm/person/remove.py | horia141/jupiter | 2c721d1d44e1cd2607ad9936e54a20ea254741dc | [
"MIT"
] | 3 | 2020-02-22T16:09:39.000Z | 2021-12-18T21:33:06.000Z | jupiter/use_cases/prm/person/remove.py | horia141/jupiter | 2c721d1d44e1cd2607ad9936e54a20ea254741dc | [
"MIT"
] | null | null | null | """Remove a person."""
import logging
from typing import Final
from jupiter.domain.inbox_tasks.infra.inbox_task_notion_manager import InboxTaskNotionManager
from jupiter.domain.prm.infra.prm_notion_manager import PrmNotionManager
from jupiter.domain.prm.service.remove_service import PersonRemoveService
from jupiter.domain.storage_engine import StorageEngine
from jupiter.framework.base.entity_id import EntityId
from jupiter.framework.use_case import UseCase
from jupiter.utils.time_provider import TimeProvider
LOGGER = logging.getLogger(__name__)
class PersonRemoveUseCase(UseCase[EntityId, None]):
"""The command for removing a person."""
_time_provider: Final[TimeProvider]
_storage_engine: Final[StorageEngine]
_inbox_task_notion_manager: Final[InboxTaskNotionManager]
_prm_notion_manager: Final[PrmNotionManager]
def __init__(
self, time_provider: TimeProvider, storage_engine: StorageEngine,
inbox_task_notion_manager: InboxTaskNotionManager,
prm_notion_manager: PrmNotionManager) -> None:
"""Constructor."""
self._time_provider = time_provider
self._storage_engine = storage_engine
self._inbox_task_notion_manager = inbox_task_notion_manager
self._prm_notion_manager = prm_notion_manager
def execute(self, args: EntityId) -> None:
"""Execute the command's action."""
with self._storage_engine.get_unit_of_work() as uow:
person = uow.person_repository.load_by_id(args)
PersonRemoveService(self._storage_engine, self._prm_notion_manager, self._inbox_task_notion_manager)\
.do_it(person)
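`get_unit_of_work()` is used as a context manager so that the load and the removal succeed or fail together; a generic sketch of that shape with `contextlib` (the names are illustrative, not the jupiter API):

```python
from contextlib import contextmanager

class FakeSession:
    def __init__(self):
        self.committed = False
        self.rolled_back = False

@contextmanager
def unit_of_work():
    session = FakeSession()
    try:
        yield session
        session.committed = True    # commit on clean exit
    except Exception:
        session.rolled_back = True  # roll back on error
        raise
```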
| 40.365854 | 109 | 0.767372 | 194 | 1,655 | 6.170103 | 0.340206 | 0.130326 | 0.075188 | 0.110276 | 0.101921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.159517 | 1,655 | 40 | 110 | 41.375 | 0.860532 | 0.056798 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.321429 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
1b3af065e11ddefc9d69f8908fab887ea05f1e93 | 1,233 | py | Python | plugins_/open_package.py | calculuswhiz/PackageDev | 76fe412eefbc775f647591fbd2c526391aea98fc | [
"MIT"
] | null | null | null | plugins_/open_package.py | calculuswhiz/PackageDev | 76fe412eefbc775f647591fbd2c526391aea98fc | [
"MIT"
] | null | null | null | plugins_/open_package.py | calculuswhiz/PackageDev | 76fe412eefbc775f647591fbd2c526391aea98fc | [
"MIT"
] | null | null | null | import glob
import os
import sublime
import sublime_plugin
from .create_package import _open_folder_in_st, _is_override_package
__all__ = ('PackagedevOpenPackageCommand',)
OVERRIDE_SUFFIX = " [*Override*]"
def _list_normal_packages():
pkgspath = sublime.packages_path()
folders = glob.glob(os.path.join(pkgspath, "*/", ""))
names = (os.path.basename(fold.strip("\\/")) for fold in folders)
for name in names:
yield (name, _is_override_package(name))
class NameInputHandler(sublime_plugin.ListInputHandler):
def placeholder(self):
return "Package"
def list_items(self):
packages = list(sorted(_list_normal_packages()))
print(packages)
items = [name + (OVERRIDE_SUFFIX if override else "")
for name, override in packages]
return items
class PackagedevOpenPackageCommand(sublime_plugin.WindowCommand):
def input(self, args):
return NameInputHandler()
def run(self, name):
if not name:
return
name = name.split(OVERRIDE_SUFFIX)[0]
path = os.path.join(sublime.packages_path(), name)
# TODO find a .sublime-project file and open that instead?
_open_folder_in_st(path)
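The override marker is round-tripped through the quick panel as a plain string suffix; the strip step on its own:

```python
OVERRIDE_SUFFIX = " [*Override*]"

def strip_override(label: str) -> str:
    # split() leaves names without the suffix untouched
    return label.split(OVERRIDE_SUFFIX)[0]
```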
| 26.234043 | 69 | 0.675588 | 144 | 1,233 | 5.548611 | 0.402778 | 0.048811 | 0.030038 | 0.035044 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001044 | 0.223033 | 1,233 | 46 | 70 | 26.804348 | 0.832985 | 0.045418 | 0 | 0 | 0 | 0 | 0.045106 | 0.02383 | 0 | 0 | 0 | 0.021739 | 0 | 1 | 0.16129 | false | 0 | 0.16129 | 0.064516 | 0.516129 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1b5dfc54272c0e15842823c60d77faea8d0bda76 | 2,955 | py | Python | tests/generate_new_site/utilities/test_tables.py | aychen99/Excavating-Occaneechi-Town | 6e864ca69ff1881554eb4c88aebed236bafbeaf4 | [
"MIT"
] | 1 | 2020-10-01T01:07:11.000Z | 2020-10-01T01:07:11.000Z | tests/generate_new_site/utilities/test_tables.py | aychen99/Excavating-Occaneechi-Town | 6e864ca69ff1881554eb4c88aebed236bafbeaf4 | [
"MIT"
] | null | null | null | tests/generate_new_site/utilities/test_tables.py | aychen99/Excavating-Occaneechi-Town | 6e864ca69ff1881554eb4c88aebed236bafbeaf4 | [
"MIT"
] | null | null | null | from unittest import mock
from unittest.mock import patch
from src.generate_new_site.utilities import tables
from pathlib import Path
###############################
# PathTable integration tests #
###############################
def test_pathtable_register_and_gets():
pathtable = tables.PathTable()
test_objs = [{
'entity': "{}".format(i) if i < 5 else None,
'old_path': Path("{}old".format(i)),
'new_path': Path("{}old".format(i))
} for i in range(10)]
for test_obj in test_objs:
pathtable.register(
old_path=test_obj['old_path'],
new_path=test_obj['new_path'],
entity=test_obj['entity']
)
for test_obj in test_objs:
assert pathtable.get_entity(test_obj['old_path']) == test_obj['entity']
assert pathtable.get_path(test_obj['old_path']) == test_obj['new_path']
###############################
# PageTable integration tests #
###############################
def test_pagetable_register_and_gets():
def mock_page_num_to_arabic(page_num):
if page_num.isdigit():
return page_num
elif page_num == "i":
return "1"
elif page_num == "ii":
return "2"
elif page_num == "iii":
return "3"
elif page_num == "vi":
return "6"
return page_num
pagetable = tables.PageTable()
page_nums = ["1", "2", "3", "6", "i", "ii", "iii", "vi"]
test_pages = {num: Path("page{}".format(num)) for num in page_nums}
with patch('src.generate_new_site.utilities.tables.page_num_to_arabic', mock_page_num_to_arabic):
for num, path in test_pages.items():
pagetable.register(
page_num=num,
path=path
)
# Test get_path_path
for num, path in test_pages.items():
assert pagetable.get_page_path(num) == path
# Test get_prev/next_page_path
# 1
assert pagetable.get_prev_page_path("1") is None
assert pagetable.get_next_page_path("1") == test_pages["2"]
# 2
assert pagetable.get_prev_page_path("2") == test_pages["1"]
assert pagetable.get_next_page_path("2") == test_pages["3"]
# 3
assert pagetable.get_prev_page_path("3") == test_pages["2"]
assert pagetable.get_next_page_path("3") is None
# 6
assert pagetable.get_prev_page_path("6") is None
assert pagetable.get_next_page_path("6") is None
# i
assert pagetable.get_prev_page_path("i") is None
assert pagetable.get_next_page_path("i") == test_pages["ii"]
# ii
assert pagetable.get_prev_page_path("ii") == test_pages["i"]
assert pagetable.get_next_page_path("ii") == test_pages["iii"]
# iii
assert pagetable.get_prev_page_path("iii") == test_pages["ii"]
assert pagetable.get_next_page_path("iii") is None
# vi
assert pagetable.get_prev_page_path("vi") is None
assert pagetable.get_next_page_path("vi") is None
| 33.579545 | 101 | 0.609475 | 404 | 2,955 | 4.153465 | 0.14604 | 0.085816 | 0.18236 | 0.104887 | 0.520262 | 0.380215 | 0.116806 | 0.085816 | 0 | 0 | 0 | 0.011832 | 0.22775 | 2,955 | 87 | 102 | 33.965517 | 0.723488 | 0.042301 | 0 | 0.1 | 1 | 0 | 0.077666 | 0.021182 | 0 | 0 | 0 | 0 | 0.316667 | 1 | 0.05 | false | 0 | 0.066667 | 0 | 0.216667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1b6d044699c6c19b255bab4c846b102b0a2b1aae | 7,604 | py | Python | test/basemodule_test.py | nktankta/PytorchCNNModules | bc1469ceb37477d3f60062f14a750f272e7ceeb0 | [
"MIT"
] | null | null | null | test/basemodule_test.py | nktankta/PytorchCNNModules | bc1469ceb37477d3f60062f14a750f272e7ceeb0 | [
"MIT"
] | null | null | null | test/basemodule_test.py | nktankta/PytorchCNNModules | bc1469ceb37477d3f60062f14a750f272e7ceeb0 | [
"MIT"
] | null | null | null | import pytest
import torch
import torch.nn as nn
from module_list import get_test_module
from PytorchCNNModules.modules.base_module import BaseModule,SEmodule
class CNN(BaseModule):
def __init__(self,in_feature,out_featue,stride=1):
super(CNN,self).__init__(in_feature,out_featue,stride)
self.cnn = nn.Conv2d(in_feature,out_featue,3,stride,1)
def _forward(self,x):
return self.cnn(x)
class Identity(BaseModule):
def __init__(self,in_feature,out_featue,stride=1,**kwargs):
super(Identity,self).__init__(in_feature,out_featue,stride,norm_layer = nn.Identity,**kwargs)
def _forward(self,x,*args):
if self.stride!=1:
x = nn.AvgPool2d(1,stride=self.stride)(x)
return x
test_modules = get_test_module()
normal_test = [
((2, 3, 5, 5), (2, 10, 5, 5), 3, 10, 1),
((1, 10, 10, 10), (1, 20, 10, 10), 10, 20, 1),
((5, 8, 20, 20), (5, 16, 20, 20), 8, 16, 1),
((2, 3, 10, 10), (2, 10, 5, 5), 3, 10, 2),
((2, 3, 5, 5), (2, 10, 3, 3), 3, 10, 2)
]
residual_test = [
((2, 10, 5, 5), (2, 10, 5, 5), 10, 10, 1),
((1, 17, 10, 10), (1, 17, 10, 10), 17, 17, 1),
((2, 10, 5, 5), (2, 10, 3, 3), 10, 10, 2),
((1, 17, 10, 10), (1, 17, 5, 5), 17, 17, 2),
]
dense_test = [
((2, 3, 5, 5), (2, 13, 5, 5), 3, 10, 1),
((1, 10, 10, 10), (1, 30, 10, 10), 10, 20, 1),
((5, 8, 20, 20), (5, 24, 20, 20), 8, 16, 1),
((2, 3, 10, 10), (2, 13, 5, 5), 3, 10, 2),
((2, 3, 5, 5), (2, 13, 3, 3), 3, 10, 2)
]
def test_residual_featuresize_exception():
with pytest.raises(AssertionError,match="[residual feature size error]"):
CNN(10,5,1).to_residual()
def test_residual_aggregation_error():
with pytest.raises(NotImplementedError):
CNN(10,10,2).to_residual(aggregation="test")
def test_dense_downsample_error():
with pytest.raises(NotImplementedError):
CNN(10, 10, 2).to_dense(downsample="test")
def test_residual_activation_bool():
inp = torch.randn((2,10,20,20))
module = Identity(10,10,1).to_residual(activation=True)
out = module(inp)
assert torch.min(out).item()>=0
def test_residual_activation():
inp = torch.ones((2,10,20,20))*10
module = Identity(10,10,1).to_residual(activation=nn.ReLU6)
out = module(inp)
assert torch.max(out).item()<=6
def test_residual_preactivation_bool():
inp = -torch.ones((2,10,20,20))
module = Identity(10,10,1).to_residual(pre_activation=True)
out = module(inp)
assert torch.min(out).item()>=-1
def test_residual_preactivation():
inp = torch.ones((2,10,20,20))*10
module = Identity(10,10,1).to_residual(pre_activation=nn.ReLU6)
out = module(inp)
assert torch.max(out).item()<=16
def test_residual_random_drop():
inp = torch.ones((2,10,20,20))*1
module = Identity(10,10,1).to_residual(drop_rate=1)
out = module(inp)
assert torch.max(out).item()<=1
def test_semodule():
inp = torch.randn((5, 31, 11, 11))
module = SEmodule(31)
out = module(inp)
assert out.shape == inp.shape
def test_semodule_enable():
inp = torch.ones((2,10,20,20))
out_normal = torch.empty((2,10,20,20))
out_dense = torch.empty((2,20,20,20))
module = Identity(10,10,1,use_SEmodule=True)
out = module(inp)
assert out.shape == out_normal.shape
module.to_residual()
out = module(inp)
assert out.shape == out_normal.shape
module.to_dense()
out = module(inp)
assert out.shape == out_dense.shape
def test_multi_input():
inp1 = torch.randn((2,10,20,20))
inp2 = torch.randn((2,10,20,20))
dense_out = torch.empty((2,20,20,20))
module = Identity(10,10,1)
out = module(inp1,inp2)
assert out.shape == inp1.shape
module.to_residual()
out = module(inp1,inp2)
assert out.shape == inp1.shape
module.to_dense()
out = module(inp1,inp2)
assert out.shape == dense_out.shape
@pytest.mark.parametrize("input_shape", [(2,10,20,20),(2,10,5,5)])
@pytest.mark.parametrize("output_feature", [10,20])
@pytest.mark.parametrize("downsample", ["conv","max","avg"])
def test_residual_downsample_add(input_shape,output_feature,downsample):
n,c,w,h = input_shape
inp = torch.randn(input_shape)
downsample_out = (n,output_feature,(w-1)//2+1,(h-1)//2+1)
module = CNN(10,output_feature,2).to_residual(aggregation="add",downsample=downsample)
out = module(inp)
assert out.shape == torch.empty(downsample_out).shape
@pytest.mark.parametrize("input_shape,output_shape", [((2,10,20,20),(2,20,10,10)),((2,20,5,5),(2,40,3,3))])
@pytest.mark.parametrize("downsample", ["conv","max","avg"])
def test_residual_downsample_conc(input_shape,output_shape,downsample):
inp = torch.randn(input_shape)
module = CNN(input_shape[1],input_shape[1],2).to_residual(aggregation="concatenate",downsample=downsample)
out = module(inp)
assert out.shape == torch.empty(output_shape).shape
@pytest.mark.parametrize("input_shape,output_shape", [((2,10,20,20),(2,20,10,10)),((2,20,5,5),(2,40,3,3))])
@pytest.mark.parametrize("downsample", ["conv","max","avg"])
def test_dense_downsample(input_shape,output_shape,downsample):
inp = torch.randn(input_shape)
module = CNN(input_shape[1],input_shape[1],2).to_dense(downsample)
out = module(inp)
assert out.shape == torch.empty(output_shape).shape
@pytest.mark.parametrize("input_shape,output_shape,channel_in,channel_out,stride", normal_test)
def test_module(input_shape,output_shape,channel_in,channel_out,stride):
input = torch.randn(input_shape)
module = CNN(channel_in,channel_out,stride)
output = module(input)
assert output.shape == torch.empty(output_shape).shape
@pytest.mark.parametrize("input_shape,output_shape,channel_in,channel_out,stride", normal_test)
def test_cuda_module(input_shape,output_shape,channel_in,channel_out,stride):
input = torch.randn(input_shape).cuda()
module = CNN(channel_in,channel_out,stride).cuda()
output = module(input)
assert output.shape == torch.empty(output_shape).shape
@pytest.mark.parametrize("input_shape,output_shape,channel_in,channel_out,stride", residual_test)
def test_residual_add(input_shape,output_shape,channel_in,channel_out,stride):
    input = torch.randn(input_shape)
    module = CNN(channel_in, channel_out, stride).to_residual(aggregation="add")
    output = module(input)
    assert output.shape == torch.empty(output_shape).shape

@pytest.mark.parametrize("input_shape,output_shape,channel_in,channel_out,stride", dense_test)
def test_residual_concat(input_shape,output_shape,channel_in,channel_out,stride):
    input = torch.randn(input_shape)
    module = CNN(channel_in, channel_out, stride).to_residual(aggregation="concatenate")
    output = module(input)
    assert output.shape == torch.empty(output_shape).shape

@pytest.mark.parametrize("input_shape,output_shape,channel_in,channel_out,stride", dense_test)
@pytest.mark.parametrize("downsample", ["conv","avg","max"])
def test_dense(input_shape,output_shape,channel_in,channel_out,stride,downsample):
    input = torch.randn(input_shape)
    # pass the parametrized downsample through; it was silently dropped before
    module = CNN(channel_in, channel_out, stride).to_dense(downsample)
    output = module(input)
    assert output.shape == torch.empty(output_shape).shape

@pytest.mark.parametrize("input_shape,output_shape,channel_in,channel_out,stride", normal_test)
def test_backward(input_shape,output_shape,channel_in,channel_out,stride):
    input = torch.randn(input_shape, requires_grad=True)
    module = CNN(channel_in, channel_out, stride)
    output = module(input)
    torch.sum(output).backward()
assert input.grad.shape == input.shape | 37.458128 | 110 | 0.687138 | 1,183 | 7,604 | 4.23415 | 0.094675 | 0.065881 | 0.057497 | 0.068277 | 0.729088 | 0.708924 | 0.673787 | 0.618087 | 0.611499 | 0.601118 | 0 | 0.071847 | 0.147028 | 7,604 | 203 | 111 | 37.458128 | 0.700432 | 0 | 0 | 0.355422 | 0 | 0 | 0.071269 | 0.048915 | 0 | 0 | 0 | 0 | 0.13253 | 1 | 0.144578 | false | 0 | 0.03012 | 0.006024 | 0.198795 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1b8d1a7f7c749a047b0eddbcadab57bb15964022 | 4,517 | py | Python | unsupervised_meta_learning/_nbdev.py | ojss/c3lr | a018c5a793a2c9eedc3f0fefcca0970f0be35ffc | [
"Apache-2.0"
] | 3 | 2022-02-24T07:02:12.000Z | 2022-03-20T18:33:58.000Z | unsupervised_meta_learning/_nbdev.py | ojss/c3lr | a018c5a793a2c9eedc3f0fefcca0970f0be35ffc | [
"Apache-2.0"
] | null | null | null | unsupervised_meta_learning/_nbdev.py | ojss/c3lr | a018c5a793a2c9eedc3f0fefcca0970f0be35ffc | [
"Apache-2.0"
] | null | null | null | # AUTOGENERATED BY NBDEV! DO NOT EDIT!
__all__ = ["index", "modules", "custom_doc_links", "git_url"]
index = {"c_imshow": "01_nn_utils.ipynb",
"Flatten": "01_nn_utils.ipynb",
"conv3x3": "01_nn_utils.ipynb",
"get_proto_accuracy": "01_nn_utils.ipynb",
"get_accuracy": "02_maml_pl.ipynb",
"collate_task": "01b_data_loaders_pl.ipynb",
"collate_task_batch": "01b_data_loaders_pl.ipynb",
"get_episode_loader": "01b_data_loaders_pl.ipynb",
"UnlabelledDataset": "01b_data_loaders_pl.ipynb",
"get_cub_default_transform": "01b_data_loaders_pl.ipynb",
"get_simCLR_transform": "01b_data_loaders_pl.ipynb",
"get_omniglot_transform": "01b_data_loaders_pl.ipynb",
"get_custom_transform": "01b_data_loaders_pl.ipynb",
"identity_transform": "01b_data_loaders_pl.ipynb",
"UnlabelledDataModule": "01b_data_loaders_pl.ipynb",
"OmniglotDataModule": "01b_data_loaders_pl.ipynb",
"MiniImagenetDataModule": "01b_data_loaders_pl.ipynb",
"cg": "01c_grad_utils.ipynb",
"cat_list_to_tensor": "01c_grad_utils.ipynb",
"reverse_unroll": "01c_grad_utils.ipynb",
"reverse": "01c_grad_utils.ipynb",
"fixed_point": "01c_grad_utils.ipynb",
"CG": "01c_grad_utils.ipynb",
"CG_normaleq": "01c_grad_utils.ipynb",
"neumann": "01c_grad_utils.ipynb",
"exact": "01c_grad_utils.ipynb",
"grd": "01c_grad_utils.ipynb",
"list_dot": "01c_grad_utils.ipynb",
"jvp": "01c_grad_utils.ipynb",
"get_outer_gradients": "01c_grad_utils.ipynb",
"update_tensor_grads": "01c_grad_utils.ipynb",
"grad_unused_zero": "01c_grad_utils.ipynb",
"DifferentiableOptimizer": "01c_grad_utils.ipynb",
"HeavyBall": "01c_grad_utils.ipynb",
"Momentum": "01c_grad_utils.ipynb",
"GradientDescent": "01c_grad_utils.ipynb",
"gd_step": "01c_grad_utils.ipynb",
"heavy_ball_step": "01c_grad_utils.ipynb",
"torch_momentum_step": "01c_grad_utils.ipynb",
"euclidean_distance": "01d_proto_utils.ipynb",
"cosine_similarity": "01d_proto_utils.ipynb",
"get_num_samples": "01d_proto_utils.ipynb",
"get_prototypes": "01d_proto_utils.ipynb",
"prototypical_loss": "01d_proto_utils.ipynb",
"clusterer": "01d_proto_utils.ipynb",
"cluster_diff_loss": "01d_proto_utils.ipynb",
"CNN_4Layer": "01d_proto_utils.ipynb",
"Encoder": "01d_proto_utils.ipynb",
"Decoder": "01d_proto_utils.ipynb",
"CAE": "01d_proto_utils.ipynb",
"Encoder4L": "01d_proto_utils.ipynb",
"Decoder4L": "01d_proto_utils.ipynb",
"Decoder4L4Mini": "01d_proto_utils.ipynb",
"CAE4L": "01d_proto_utils.ipynb",
"get_images_labels_from_dl": "01d_proto_utils.ipynb",
"logger": "02_maml_pl.ipynb",
"ConvolutionalNeuralNetwork": "02_maml_pl.ipynb",
"MAML": "02_maml_pl.ipynb",
"UMTRA": "02_maml_pl.ipynb",
"cg_solve": "02b_iMAML.ipynb",
"iMAML": "02b_iMAML.ipynb",
"PrototypicalNetwork": "03_protonet_pl.ipynb",
"CactusPrototypicalModel": "03_protonet_pl.ipynb",
"ProtoModule": "03_protonet_pl.ipynb",
"Classifier": "03b_ProtoCLR.ipynb",
"get_train_images": "03b_ProtoCLR.ipynb",
"WandbImageCallback": "03b_ProtoCLR.ipynb",
"TensorBoardImageCallback": "03b_ProtoCLR.ipynb",
"ConfidenceIntervalCallback": "03b_ProtoCLR.ipynb",
"UMAPCallback": "03b_ProtoCLR.ipynb",
"UMAPClusteringCallback": "03b_ProtoCLR.ipynb",
"PCACallback": "03b_ProtoCLR.ipynb",
"ProtoCLR": "03b_ProtoCLR.ipynb",
"Partition": "04_cactus.ipynb",
"CactusTaskDataset": "04_cactus.ipynb",
"get_partitions_kmeans": "04_cactus.ipynb",
"DataOpt": "04_cactus.ipynb",
"LoaderOpt": "04_cactus.ipynb",
"load": "04_cactus.ipynb",
"CactusDataModule": "04_cactus.ipynb"}
modules = ["nn_utils.py",
"pl_dataloaders.py",
"hypergrad.py",
"proto_utils.py",
"maml.py",
"imaml.py",
"protonets.py",
"protoclr.py",
"cactus.py"]
doc_url = "https://ojss.github.io/unsupervised_meta_learning/"
git_url = "https://github.com/ojss/unsupervised_meta_learning/tree/main/"
def custom_doc_links(name): return None
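The `index` mapping above is what nbdev uses to resolve a symbol to the notebook that defines it. A small standalone sketch of looking a symbol up, and of inverting the mapping to group symbols per notebook (`sample_index` is a trimmed three-entry copy; the helper names are hypothetical):

```python
sample_index = {
    "cg": "01c_grad_utils.ipynb",
    "MAML": "02_maml_pl.ipynb",
    "iMAML": "02b_iMAML.ipynb",
}

def notebook_for(symbol, index):
    # None when the symbol is not documented anywhere
    return index.get(symbol)

# invert: notebook -> list of symbols it defines
by_notebook = {}
for symbol, nb in sample_index.items():
    by_notebook.setdefault(nb, []).append(symbol)
```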
| 44.722772 | 73 | 0.636263 | 513 | 4,517 | 5.159844 | 0.298246 | 0.15867 | 0.099736 | 0.141292 | 0.234605 | 0.096335 | 0.037401 | 0 | 0 | 0 | 0 | 0.047782 | 0.221607 | 4,517 | 100 | 74 | 45.17 | 0.705063 | 0.00797 | 0 | 0 | 1 | 0 | 0.643894 | 0.199821 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010753 | false | 0 | 0 | 0.010753 | 0.010753 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1b986bdba6c54856abc4ac0745d2acf81a33c07d | 1,356 | py | Python | example/schema/tests/unittest/toys.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 18 | 2015-04-07T14:28:39.000Z | 2020-02-08T14:03:38.000Z | example/schema/tests/unittest/toys.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 7 | 2016-10-05T05:14:06.000Z | 2021-05-20T02:07:22.000Z | example/schema/tests/unittest/toys.py | donghak-shin/dp-tornado | 095bb293661af35cce5f917d8a2228d273489496 | [
"MIT"
] | 11 | 2015-12-15T09:49:39.000Z | 2021-09-06T18:38:21.000Z | # -*- coding: utf-8 -*-
from dp_tornado.engine.schema import Table as dpTable
from dp_tornado.engine.schema import Schema as dpSchema
from dp_tornado.engine.schema import Attribute as dpAttribute
class ToysSchema(dpTable):
    __table_name__ = 'toys'
    __engine__ = 'InnoDB'
    __charset__ = 'euckr'

    toy_id = dpAttribute.field(dpAttribute.DataType.BIGINT, ai=True, pk=True, nn=True, un=True, comment='Toy ID')
    toy_cd = dpAttribute.field(dpAttribute.DataType.BIGINT(20), uq=True, nn=True, zf=True, un=True, name='toy_code', comment='Toy Code')
    toy_name = dpAttribute.field(dpAttribute.DataType.VARCHAR(128), nn=True, comment='Toy Name')
    toy_summary = dpAttribute.field(dpAttribute.DataType.TEXT, nn=True, comment='Toy Summary')
    toy_description = dpAttribute.field(dpAttribute.DataType.LONGTEXT, nn=True, comment='Toy Description')

    primary_key = dpAttribute.index(dpAttribute.IndexType.PRIMARY, 'toy_id')
    idx_toys_toy_name = dpAttribute.index(dpAttribute.IndexType.INDEX, 'toy_name')

    __dummy_data__ = [
        {'toy_id': 1, 'toy_code': 1000, 'toy_name': 'Lego', 'toy_summary': 'Lego Limited Edition', 'toy_description': 'Lego Limited Edition.'},
        {'toy_id': 2, 'toy_code': 2000, 'toy_name': 'Teddy Bear', 'toy_summary': 'Teddy Bear Limited Edition', 'toy_description': 'Teddy Bear Limited Edition.'}
    ]
| 46.758621 | 160 | 0.725664 | 180 | 1,356 | 5.227778 | 0.333333 | 0.044633 | 0.143464 | 0.185972 | 0.185972 | 0.098831 | 0 | 0 | 0 | 0 | 0 | 0.013687 | 0.137906 | 1,356 | 28 | 161 | 48.428571 | 0.791275 | 0.015487 | 0 | 0 | 0 | 0 | 0.216804 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1b9b5b9e063185961a0c23077b632b02171d2fdb | 1,233 | py | Python | source/guides/migrations/0002_auto_20170123_1809.py | OpenNews/opennews-source | 71b557275bc5d03c75eb471fc3293efa492c0ac7 | [
"MIT"
] | 6 | 2017-01-05T00:51:48.000Z | 2021-11-08T10:26:04.000Z | source/guides/migrations/0002_auto_20170123_1809.py | OpenNews/opennews-source | 71b557275bc5d03c75eb471fc3293efa492c0ac7 | [
"MIT"
] | 146 | 2017-01-03T16:06:43.000Z | 2022-03-11T23:25:43.000Z | source/guides/migrations/0002_auto_20170123_1809.py | OpenNews/opennews-source | 71b557275bc5d03c75eb471fc3293efa492c0ac7 | [
"MIT"
] | 3 | 2017-02-16T22:52:47.000Z | 2019-08-15T16:49:47.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2017-01-23 18:09
from __future__ import unicode_literals
from django.db import migrations, models
import sorl.thumbnail.fields
class Migration(migrations.Migration):
    dependencies = [
        ('guides', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='guide',
            name='author_bio',
            field=models.TextField(blank=True),
        ),
        migrations.AddField(
            model_name='guide',
            name='author_name',
            field=models.CharField(blank=True, max_length=128),
        ),
        migrations.AddField(
            model_name='guide',
            name='author_photo',
            field=sorl.thumbnail.fields.ImageField(blank=True, null=True, upload_to='img/uploads/guide_author_images'),
        ),
        migrations.AddField(
            model_name='guidearticle',
            name='external_author_name',
            field=models.CharField(blank=True, max_length=128),
        ),
        migrations.AddField(
            model_name='guidearticle',
            name='external_organization_name',
            field=models.CharField(blank=True, max_length=128),
        ),
    ]
| 29.357143 | 119 | 0.596107 | 126 | 1,233 | 5.642857 | 0.452381 | 0.126582 | 0.161744 | 0.189873 | 0.518987 | 0.518987 | 0.518987 | 0.2827 | 0.2827 | 0.219409 | 0 | 0.034169 | 0.287916 | 1,233 | 41 | 120 | 30.073171 | 0.775626 | 0.05515 | 0 | 0.529412 | 1 | 0 | 0.143718 | 0.049053 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.088235 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
1ba07eaa9253946b4f345408b4a379ec82d0ede6 | 375 | py | Python | recip/util/Address.py | anthonybuckle/Reciprocity-Core | 3254073f44e8fe2222aed9879885a2e609d4044a | [
"MIT"
] | null | null | null | recip/util/Address.py | anthonybuckle/Reciprocity-Core | 3254073f44e8fe2222aed9879885a2e609d4044a | [
"MIT"
] | null | null | null | recip/util/Address.py | anthonybuckle/Reciprocity-Core | 3254073f44e8fe2222aed9879885a2e609d4044a | [
"MIT"
] | null | null | null | from recip.util import DataType
from recip.util import Validator
def toAddressBytes(address):
    if address.startswith('0x'):
        address = address[2:]
    return DataType.fromHex(address)
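These helpers delegate the actual conversion to the project's `DataType` module. A standalone sketch of the same `0x`-prefix handling using only the stdlib, assuming `DataType.fromHex`/`toHex` are thin wrappers over the hex codec (the snake_case names here are hypothetical):

```python
import binascii

def to_address_bytes(address):
    if address.startswith('0x'):
        address = address[2:]          # strip the 0x prefix, as above
    return binascii.unhexlify(address)

def to_0x_address(address_bytes):
    return "0x" + binascii.hexlify(address_bytes).decode('ascii')

round_trip = to_0x_address(to_address_bytes('0xdeadbeef'))
```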
def toAddressStr(addressBytes):
    return DataType.toHex(addressBytes)

def to0xAddress(addressBytes):
    address = toAddressStr(addressBytes)
return "0x{0}".format(address) | 26.785714 | 40 | 0.746667 | 42 | 375 | 6.666667 | 0.5 | 0.064286 | 0.092857 | 0.135714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015773 | 0.154667 | 375 | 14 | 41 | 26.785714 | 0.867508 | 0 | 0 | 0 | 0 | 0 | 0.018617 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.181818 | 0.090909 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
1bd1d5cfc2e0f45a350b9178b490c86e26fca79d | 8,346 | py | Python | splunk-cluster/splunk_setup.py | outcoldman/docker-splunk-cluster | 5b3feb8131197b1a0a574dfea9ec4f20703c189b | [
"MIT"
] | 34 | 2016-07-22T16:37:49.000Z | 2021-11-19T22:32:30.000Z | splunk-cluster/splunk_setup.py | mhassan2/docker-splunk-cluster | 5b3feb8131197b1a0a574dfea9ec4f20703c189b | [
"MIT"
] | 5 | 2016-07-25T16:02:29.000Z | 2017-02-17T19:17:45.000Z | splunk-cluster/splunk_setup.py | outcoldman/docker-splunk-cluster | 5b3feb8131197b1a0a574dfea9ec4f20703c189b | [
"MIT"
] | 20 | 2016-07-26T01:02:47.000Z | 2019-09-20T03:01:52.000Z | import os
import sys
import json
import time
import socket
import re
import glob
import subprocess
import requests
import splunk.clilib.cli_common
import splunk.util
var_expandvars_re = re.compile(r'\AENV\((.*)\)$')
var_shell_re = re.compile(r'\ASHELL\((.*)\)$')
def main():
"""
Initialize node. Can run before splunk started and after splunk started
"""
if sys.argv[1] == "--configure":
configure()
elif sys.argv[1] == "--wait-splunk":
wait_splunk(sys.argv[2], sys.argv[3:])
elif sys.argv[1] == "--add-licenses":
add_licenses(sys.argv[2])
elif sys.argv[1] == "--shc-autobootstrap":
shc_autobootstrap(int(sys.argv[2]), sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6], sys.argv[7], sys.argv[8])
else:
exit(1)
def configure():
"""
using CONF__ notation you can define any configuration, examples
CONF__[{location_under_splunk_home}__]{conf_file}__{stanza}__{key}=value
If location_under_splunk_home is not specified - system is used.
"""
# Allow to set any configurations with this
conf_updates = {}
for env, val in os.environ.iteritems():
if env.startswith("CONF__"):
parts = env.split("__")[1:]
conf_file_name = None
parent = None
conf_folder = "system"
if len(parts) == 4:
conf_folder = parts[0]
parts = parts[1:]
conf_folder_full = __get_conf_folder_full(conf_folder, parent)
file_name = parts[0]
if file_name == "meta":
file_name = "local.meta"
subfolder = "metadata"
else:
file_name = file_name + ".conf"
subfolder = "local"
conf_file = os.path.join(conf_folder_full, subfolder, file_name)
conf_updates.setdefault(conf_file, {}).setdefault(parts[1], {})[parts[2]] = __get_value(val)
for conf_file, conf_update in conf_updates.iteritems():
conf = splunk.clilib.cli_common.readConfFile(conf_file) if os.path.exists(conf_file) else {}
for stanza, values in conf_update.iteritems():
dest_stanza = conf.setdefault(stanza, {})
dest_stanza.update(values)
if "default" in conf and not conf["default"]:
del conf["default"]
folder = os.path.dirname(conf_file)
if not os.path.isdir(folder):
os.makedirs(folder)
splunk.clilib.cli_common.writeConfFile(conf_file, conf)
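The `CONF__` environment-variable naming scheme that `configure()` walks can be shown in isolation. A minimal, self-contained sketch of just the name parsing — four parts mean an explicit folder, three parts default to `system` (the helper name `parse_conf_env` is hypothetical):

```python
def parse_conf_env(name):
    # CONF__[folder__]file__stanza__key -> (folder, file, stanza, key)
    parts = name.split("__")[1:]
    folder = "system"
    if len(parts) == 4:
        folder = parts[0]
        parts = parts[1:]
    conf_file, stanza, key = parts
    return folder, conf_file, stanza, key

parsed = parse_conf_env("CONF__apps/search__web__settings__startwebserver")
```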
def __get_value(val):
    var_expand_match = var_expandvars_re.match(val)
    if var_expand_match:
        return os.path.expandvars(var_expand_match.groups()[0])
    var_shell_match = var_shell_re.match(val)
    if var_shell_match:
        # use the SHELL match here; the original accidentally reused var_expand_match
        return subprocess.check_output(var_shell_match.groups()[0], shell=True)
    return val

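The `ENV(...)`/`SHELL(...)` value convention handled above, sketched standalone. The resolver name and the `run_shell` stub are hypothetical; the real code shells out via `subprocess`, which this sketch avoids so it stays side-effect free:

```python
import os
import re

env_re = re.compile(r'\AENV\((.*)\)$')
shell_re = re.compile(r'\ASHELL\((.*)\)$')

def resolve_value(val, run_shell=lambda cmd: cmd):
    # ENV(...) expands environment variables in the wrapped text
    m = env_re.match(val)
    if m:
        return os.path.expandvars(m.groups()[0])
    # SHELL(...) would run a command; stubbed to an identity function here
    m = shell_re.match(val)
    if m:
        return run_shell(m.groups()[0])
    return val

os.environ['DEMO_HOME'] = '/opt/demo'
resolved = resolve_value('ENV($DEMO_HOME/etc)')
```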
def __get_conf_folder_full(conf_folder, parent):
if conf_folder == "system":
return os.path.join(os.environ["SPLUNK_HOME"], "etc", conf_folder)
else:
return os.path.join(os.environ["SPLUNK_HOME"], conf_folder)
def wait_splunk(uri, roles):
"""
Wait 5 minutes for dependency
"""
for x in xrange(1, 300):
try:
# This url does not require authentication, ignore certificate
response = requests.get(uri + "/services/server/info?output_mode=json", verify=False)
if response.status_code == 200:
server_roles = response.json()["entry"][0]["content"]["server_roles"]
if not roles or all(any(re.match(role, server_role) for server_role in server_roles) for role in roles):
return
else:
print "Waiting for " + ", ".join(roles) + " in " + uri + " got " + ", ".join(server_roles) + "."
else:
print "Waiting for "+ ", ".join(roles) + " in " + uri + "."
except requests.exceptions.RequestException as exception:
print "Waiting for " + ", ".join(roles) + " in " + uri + ". Exception: " + str(exception)
time.sleep(1)
print "Failed to connect to " + uri + " and check server roles " + ", ".join(roles)
exit(1)
def add_licenses(folder):
    while True:
        if os.path.isdir(folder):
            licenses = glob.glob(os.path.join(folder, "*.lic"))
            if licenses:
                # Adding all licenses one by one and break
                for license in licenses:
                    args = [
                        "add",
                        "licenses",
                        "-auth", "admin:changeme",
                        license
                    ]
                    __splunk_execute(args)
                break
        print "Waiting for license files under " + folder
        time.sleep(1)

def shc_autobootstrap(autobootstrap, mgmt_uri, local_user, local_password, service_discovery_uri, service_discovery_user, service_discovery_password):
"""
Write current uri to the service discovery URL, if current member has index equal
to INIT_SHCLUSTER_AUTOBOOTSTRAP - bootstrap SHC, if more - add itself to existing SHC
"""
__service_discovery_post(service_discovery_uri, service_discovery_user, service_discovery_password, data=json.dumps({"host": mgmt_uri}), headers={"Content-type": "application/json"})
all_members = __service_discovery_get(service_discovery_uri, service_discovery_user, service_discovery_password, params={"sort": "_key"}).json()
for index, member in enumerate(all_members):
if member["host"] == mgmt_uri:
if (index + 1) == autobootstrap:
__splunk_execute([
"bootstrap",
"shcluster-captain",
"-auth", "%s:%s" % (local_user, local_password),
"-servers_list", ",".join(m["host"] for m in all_members[:autobootstrap])
])
elif (index + 1) > autobootstrap:
# We do not check if current list of members already bootstrapped, assuming that autobootstrap is always equal to
# how many instances user creating at beginning
__splunk_execute([
"add",
"shcluster-member",
"-auth", "%s:%s" % (local_user, local_password),
"-current_member_uri", next(m["host"] for m in all_members[:autobootstrap])
])
def __service_discovery_get(service_discovery_uri, service_discovery_user, service_discovery_password, **kwargs):
    for x in xrange(1, 300):
        try:
            response = requests.get(service_discovery_uri,
                                    verify=False,
                                    auth=(service_discovery_user, service_discovery_password),
                                    **kwargs)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as ex:
            print "Failed to make GET request to service discovery url. " + str(ex)
            sys.stdout.flush()
            sys.stderr.flush()
        time.sleep(1)
    print "FAILED. Could not make GET request to service discovery url."
    exit(1)

def __service_discovery_post(service_discovery_uri, service_discovery_user, service_discovery_password, **kwargs):
    for x in xrange(1, 300):
        try:
            response = requests.post(service_discovery_uri,
                                     verify=False,
                                     auth=(service_discovery_user, service_discovery_password),
                                     **kwargs)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as ex:
            print "Failed to make POST request to service discovery url. " + str(ex)
            sys.stdout.flush()
            sys.stderr.flush()
        time.sleep(1)
    print "FAILED. Could not make POST request to service discovery url."
    exit(1)

def __splunk_execute(args):
"""
Execute splunk with arguments
"""
sys.stdout.flush()
sys.stderr.flush()
splunk_args = [os.path.join(os.environ['SPLUNK_HOME'], "bin", "splunk")]
splunk_args.extend(args)
subprocess.check_call(splunk_args)
sys.stdout.flush()
sys.stderr.flush()
if __name__ == "__main__":
main() | 38.818605 | 186 | 0.594896 | 976 | 8,346 | 4.865779 | 0.243852 | 0.101074 | 0.028006 | 0.039798 | 0.365761 | 0.337334 | 0.318804 | 0.273952 | 0.215835 | 0.202569 | 0 | 0.008495 | 0.294752 | 8,346 | 215 | 187 | 38.818605 | 0.798335 | 0.036065 | 0 | 0.288344 | 0 | 0 | 0.113154 | 0.005101 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.055215 | 0.067485 | null | null | 0.055215 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
59f1c15a95f6ece1eb1189e10c53a789c1ff17ed | 1,404 | py | Python | rawquery/rawquery.py | joncombe/django-raw-query | b10c8f5731668bd16fd37cfc86b37dcb0ca65f4f | [
"BSD-3-Clause"
] | 3 | 2020-07-16T20:01:57.000Z | 2022-03-26T06:39:32.000Z | rawquery/rawquery.py | joncombe/django-raw-query | b10c8f5731668bd16fd37cfc86b37dcb0ca65f4f | [
"BSD-3-Clause"
] | null | null | null | rawquery/rawquery.py | joncombe/django-raw-query | b10c8f5731668bd16fd37cfc86b37dcb0ca65f4f | [
"BSD-3-Clause"
] | null | null | null | from django.db import connection
class RawQuery:
    # return a list of dicts
    # e.g. SELECT * FROM my_table
    # [
    #     {'a': 1, 'b': 2, 'c': 3},
    #     {'a': 1, 'b': 2, 'c': 3},
    # ]
    def multiple_rows(self, sql, params=[]):
        cursor = self._do_query(sql, params)
        columns = [col[0] for col in cursor.description]
        return [
            dict(zip(columns, row))
            for row in cursor.fetchall()
        ]

    # return a single dict
    # e.g. SELECT COUNT(*) AS count, AVG(price) AS avg_price FROM my_table
    # { 'count': 12, 'avg_price': 95.2 }
    def single_row(self, sql, params=[]):
        return self.multiple_rows(sql, params)[0]

    # return a single value
    # e.g. SELECT COUNT(*) FROM my_table
    # 134
    def single_value(self, sql, params=[]):
        cursor = self._do_query(sql, params)
        return cursor.fetchone()[0]

    # return a list of single values
    # e.g. SELECT id FROM my_table
    # [1, 2, 3, 4, 5]
    def multiple_values(self, sql, params=[]):
        cursor = self._do_query(sql, params)
        return [row[0] for row in cursor.fetchall()]

    # UPDATE, INSERT, etc.
    def run(self, sql, params=[]):
        cursor = self._do_query(sql, params)
        return cursor.rowcount

    def _do_query(self, sql, params):
        cursor = connection.cursor()
        cursor.execute(sql, params)
        return cursor
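The dict-building trick in `multiple_rows()` — zipping `cursor.description` column names with each fetched row — works with any DB-API 2.0 cursor, not just Django's. A self-contained sketch against an in-memory sqlite3 database (table name and data are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE my_table (a INTEGER, b INTEGER)')
cur.executemany('INSERT INTO my_table VALUES (?, ?)', [(1, 2), (3, 4)])

cur.execute('SELECT a, b FROM my_table ORDER BY a')
# cursor.description is a sequence of 7-tuples; item [0] is the column name
columns = [col[0] for col in cur.description]
rows = [dict(zip(columns, row)) for row in cur.fetchall()]
```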
| 29.87234 | 74 | 0.57265 | 195 | 1,404 | 4.015385 | 0.312821 | 0.137931 | 0.099617 | 0.121328 | 0.309068 | 0.252874 | 0.237548 | 0.237548 | 0.237548 | 0.187739 | 0 | 0.023069 | 0.289886 | 1,404 | 46 | 75 | 30.521739 | 0.762287 | 0.281339 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.041667 | 0.041667 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
59fe80b466cc0482b71f0268fc89c32034df5ce8 | 10,255 | py | Python | laba/user.py | Snake-Whisper/laba | 78db605d594c2a2c2c3245c00269f08303c6b7ea | [
"MIT"
] | null | null | null | laba/user.py | Snake-Whisper/laba | 78db605d594c2a2c2c3245c00269f08303c6b7ea | [
"MIT"
] | null | null | null | laba/user.py | Snake-Whisper/laba | 78db605d594c2a2c2c3245c00269f08303c6b7ea | [
"MIT"
] | null | null | null | from flask import g, session
import pymysql
import redis
import random
import string
from json import loads, dumps
from exceptions.userException import *
from hashlib import sha256
class User():
    __changed = {}
    _values = {}
    __loggedIn = True
    __initialized = False
    __health = False

    def __init__(self):
        raise NotInitializeable("User")

    def _init(self, app):
        """User Object"""
        self.app = app
        if not hasattr(g, 'db'):
            g.db = pymysql.connect(user=app.config["DB_USER"], db=app.config["DB_DB"], password=app.config["DB_PWD"], host=app.config["DB_HOST"], cursorclass=pymysql.cursors.DictCursor)
        self.cursor = g.db.cursor()
        if not hasattr(g, 'redis'):
            g.redis = redis.Redis(host=app.config["REDIS_HOST"], port=app.config["REDIS_PORT"], db=app.config["REDIS_DB"])
        self.__initialized = True

    # |\_/|
    # | @ @   Watch!
    # |   <>              _
    # |  _/\------____ ((| |))
    # |               `--' |
    # ____|_       ___|   |___.'
    # /_/_____/____/_______|
    # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    def query(self, query, param=()):
        self.cursor.execute(query, param)
        return self.cursor.fetchall()

    def queryOne(self, query, param=()):
        self.cursor.execute(query, param)
        return self.cursor.fetchone()

    def recover(self):
        """Call to prevent pymysql Interface error after recovering from session cache"""
        if not hasattr(g, 'db'):
            g.db = pymysql.connect(user=self.app.config["DB_USER"], db=self.app.config["DB_DB"], password=self.app.config["DB_PWD"], host=self.app.config["DB_HOST"], cursorclass=pymysql.cursors.DictCursor)
        self.cursor = g.db.cursor()
        if not hasattr(g, 'redis'):
            g.redis = redis.Redis(host=self.app.config["REDIS_HOST"], port=self.app.config["REDIS_PORT"], db=self.app.config["REDIS_DB"])

    @property
    def wsuuid(self):
        return g.redis.get(self._values["username"])

    @wsuuid.setter
    def wsuuid(self, wsuuid):
        g.redis.set(self._values["username"], wsuuid, self.app.config["AUTO_LOGOUT"])

    @wsuuid.deleter
    def wsuuid(self):
        g.redis.delete(self._values["username"])

    @property
    def id(self):
        return self._values["id"]

    @property
    def uuid(self):
        # self.__uuid was never assigned anywhere; the subclasses store the
        # session identifier in self._uuid, so return that instead
        return self._uuid

    @property
    def health(self):
        return self.__health

    @property
    def username(self):
        return self._values["username"]

    @username.setter
    def username(self, value):
        if self._values["username"] != value:
            self._values["username"] = value
            self.__changed['username'] = value

    @property
    def firstName(self):
        return self._values["firstName"]

    @firstName.setter
    def firstName(self, value):
        if self._values["firstName"] != value:
            self._values["firstName"] = value
            self.__changed['firstName'] = value

    @property
    def lastName(self):
        return self._values["lastName"]

    @lastName.setter
    def lastName(self, value):
        if self._values["lastName"] != value:
            self._values["lastName"] = value
            self.__changed['lastName'] = value

    @property
    def email(self):
        return self._values["email"]

    @email.setter
    def email(self, value):
        if self._values["email"] != value:
            self._values["email"] = value
            self.__changed['email'] = value

    @property
    def ctime(self):
        return self._values["ctime"]

    @ctime.setter
    def ctime(self, value):
        if self._values["ctime"] != value:
            self._values["ctime"] = value
            self.__changed['ctime'] = value

    @property
    def atime(self):
        return self._values["atime"]

    @atime.setter
    def atime(self, value):
        if self._values["atime"] != value:
            self._values["atime"] = value
            self.__changed['atime'] = value

    @property
    def status(self):
        return self._values["status"]

    @status.setter
    def status(self, value):
        if self._values["status"] != value:
            self._values["status"] = value
            self.__changed['status'] = value

    @property
    def icon(self):
        return self._values["icon"]

    @icon.setter
    def icon(self, value):
        if self._values["icon"] != value:
            self._values["icon"] = value
            self.__changed['icon'] = value

    def changePwd(self, old, new):
        # self.__id / self.__username were never defined; use the id/username properties
        r = self.cursor.execute("UPDATE users SET password=SHA2(%s, 256) WHERE id=%s AND password=SHA2(%s, 256);", (new, self.id, old))
        if not r:
            raise BadUserCredentials(self.username)

    def commit2db(self):
        if self.__changed:
            sql = "UPDATE users SET {0} WHERE users.id = {1}".format(", ".join(i + "=%s" for i in self.__changed.keys()), self._values["id"])
            self.query(sql, tuple(self.__changed.values()))

    def __serialize(self):
        self._values['atime'] = str(self._values['atime'])  # Keep private! It's changing self._values!!!
        self._values['ctime'] = str(self._values['ctime'])
        return dumps(self._values)

    def commit2redis(self):
        g.redis.set(self._uuid, self.__serialize(), self.app.config["AUTO_LOGOUT"])

    def logOut(self):
        self.__loggedIn = False
        g.redis.delete(session["uuid"])
        session.pop("uuid")

    def startSession(self):
        self.__health = True

    def __del__(self):
        if self.__initialized and self.__health:
            self.commit2db()
            self.cursor.close()
            g.db.commit()
            if self.__loggedIn:
                self.commit2redis()

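The setters above all repeat the same compare-then-record dance that `commit2db` later turns into a minimal `UPDATE`. A small standalone sketch of that dirty-tracking pattern (`DirtyDict` is a hypothetical illustration, not part of this module):

```python
class DirtyDict:
    def __init__(self, values):
        self._values = dict(values)
        self._changed = {}

    def set(self, key, value):
        # record only values that actually changed, like the property setters do
        if self._values.get(key) != value:
            self._values[key] = value
            self._changed[key] = value

    def update_sql(self, table, key_column='id'):
        # one UPDATE covering just the dirty columns, as commit2db builds it
        cols = ', '.join(c + '=%s' for c in self._changed)
        return "UPDATE {0} SET {1} WHERE {2}=%s".format(table, cols, key_column)

u = DirtyDict({'username': 'ada', 'email': 'a@b'})
u.set('username', 'ada')           # unchanged -> not tracked
u.set('email', 'ada@example.com')  # changed   -> tracked
```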
class LoginUser(User):
    def __init__(self, app, username, password):
        """Checks User cred and logs in + moves to redis if ready"""
        User._init(self, app)
        self._values = self.queryOne("""SELECT
                id, username, firstName, lastName, email, ctime, atime, status, icon, enabled
            FROM users
            WHERE
                (username = %s or email = %s)
            AND
                password = SHA2(%s, 256)""", (username, username, password))
        if not self._values:
            raise BadUserCredentials(username)
        if not self._values["enabled"]:
            raise UserDisabled(username)
        self.startSession()
        self._uuid = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(32)])
        session['uuid'] = self._uuid

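`LoginUser` builds its session id from `random.choice`, which is not a cryptographically secure source. A sketch of the same 32-character alphanumeric token drawn from the stdlib `secrets` module instead, which is the safer pick for session identifiers (the function name is hypothetical):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def make_session_token(length=32):
    # secrets.choice uses the OS CSPRNG, unlike random.choice
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

token = make_session_token()
```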
class RedisUser(User):
    def __init__(self, app):
        if 'uuid' not in session:
            raise UserNotInitialized()
        User._init(self, app)
        self._uuid = session["uuid"]
        vals = g.redis.get(session['uuid'])
        if not vals:
            session.pop("uuid")
            raise UserNotInitialized()
        self.startSession()
        self._values = loads(vals)

class RegisterUser():
    _values = {}

    def __init__(self, app):
        self.app = app
        assert 'uuid' not in session
        if not hasattr(g, 'db'):
            g.db = pymysql.connect(user=app.config["DB_USER"], db=app.config["DB_DB"], password=app.config["DB_PWD"], host=app.config["DB_HOST"], cursorclass=pymysql.cursors.DictCursor)
        self.cursor = g.db.cursor()
        if not hasattr(g, 'redis'):
            g.redis = redis.Redis(host=app.config["REDIS_HOST"], port=app.config["REDIS_PORT"], db=app.config["REDIS_DB"])

    def query(self, query, param=()):
        self.cursor.execute(query, param)
        return self.cursor.fetchall()

    def queryOne(self, query, param=()):
        self.cursor.execute(query, param)
        return self.cursor.fetchone()

    @property
    def username(self):
        return self._values["username"]

    @username.setter
    def username(self, value):
        if self.queryOne("SELECT id FROM users WHERE username=%s", value):
            raise RegistrationErrorDupplicate("username")
        self._values["username"] = value

    @property
    def email(self):
        return self._values["email"]

    @email.setter
    def email(self, value):
        if self.queryOne("SELECT id FROM users WHERE email=%s", value):
            raise RegistrationErrorDupplicate("email")
        self._values["email"] = value

    @property
    def firstName(self):
        return self._values["firstName"]

    @firstName.setter
    def firstName(self, value):
        self._values["firstName"] = value

    @property
    def lastName(self):
        return self._values["lastName"]

    @lastName.setter
    def lastName(self, value):
        self._values["lastName"] = value

    @property
    def password(self):
        return self._values["password"]

    @password.setter
    def password(self, val):
        self._values["password"] = sha256(val.encode()).hexdigest()

    def commit2redis(self):
        if not all(k in self._values for k in ["email", "password", "username", "firstName", "lastName"]):
            for i in ["email", "password", "username", "firstName", "lastName"]:
                if i not in self._values:
                    raise RegistrationErrorInfoMissing(i)
        token = ''.join([random.choice(string.ascii_letters + string.digits) for n in range(32)])
        g.redis.set(token, dumps(self._values), self.app.config["TOKEN_TIMEOUT"])
        return token

    def confirmToken(self, token):
        vals = loads(g.redis.get(token))
        if not vals:
            raise InvalidToken(token)
        g.redis.delete(token)
        # WARNING: No check for dupl entry -> time from registerRequest to confirmation: unprotected ~ Problem?
        # Without Exception Handling in Prod. env.: YES -> apk BBQ
        try:
            self.query("INSERT INTO users (email, password, username, firstname, lastname) VALUES (%s, %s, %s, %s, %s)", (
                vals["email"],
                vals["password"],
                vals["username"],
                vals["firstName"],
                vals["lastName"]))
        except pymysql.IntegrityError:
            raise RegistrationErrorDupplicate("email / username")
| 33.622951 | 205 | 0.583715 | 1,159 | 10,255 | 4.994823 | 0.150129 | 0.088098 | 0.038694 | 0.048368 | 0.467784 | 0.338055 | 0.317326 | 0.317326 | 0.317326 | 0.317326 | 0 | 0.003911 | 0.276938 | 10,255 | 304 | 206 | 33.733553 | 0.776804 | 0.055193 | 0 | 0.414634 | 0 | 0.00813 | 0.13174 | 0 | 0 | 0 | 0 | 0 | 0.004065 | 1 | 0.207317 | false | 0.060976 | 0.03252 | 0.069106 | 0.373984 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
940087754d4f3fba00dce2ece646519c4b46fb0b | 5,417 | py | Python | asp/codegen/scala_ast.py | shoaibkamil/asp | 2bc5fd5595c475d43b9ee4451db1b51eb9165fdb | [
"BSD-3-Clause"
] | 12 | 2015-03-20T17:39:23.000Z | 2021-03-17T17:14:25.000Z | asp/codegen/scala_ast.py | shoaibkamil/asp | 2bc5fd5595c475d43b9ee4451db1b51eb9165fdb | [
"BSD-3-Clause"
] | 1 | 2015-12-28T11:22:48.000Z | 2015-12-28T11:22:48.000Z | asp/codegen/scala_ast.py | shoaibkamil/asp | 2bc5fd5595c475d43b9ee4451db1b51eb9165fdb | [
"BSD-3-Clause"
] | 9 | 2015-01-06T00:36:53.000Z | 2020-09-19T14:31:26.000Z | import ast
"""
I don't use the Generable class inheritance
"""
class Generable():
pass
class func_types(Generable):
def __init__(self, types):
self.types = types
self._fields = []
class Number(Generable):
def __init__(self, num):
self.num = num
self._fields = []
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class String(Generable):
def __init__(self, text):
self.text = text
self._fields = ['text']
self.done = False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Name(Generable):
def __init__(self,name):
self.name= name
self._fields= []
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Function(Generable):
def __init__(self, declaration, body):
self.declaration = declaration
self.body = body
self._fields = []
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Arguments(Generable):
def __init__(self, args):
self.args = args
self._fields = []
class FunctionDeclaration(Generable):
def __init__(self, name, args):
self.name = name
self.args = args
class Expression(Generable):
def __init__(self):
# ???
super(Expression, self)
self._fields = []
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Call(Expression):
def __init__(self, func, args):
self.func = func
self.args = args
self._fields = []
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Attribute(Expression):
def __init__(self, value, attr):
self.attr = attr
self.value = value
class List(Expression):
def __init__(self, elements):
self.elements = elements
self._fields = []
class BinOp(Expression):
def __init__(self, left, op, right):
self. left = left
self.op = op
self.right = right
self._fields = ['left', 'right']
self.done = False
class BoolOp(Expression):
def __init__(self, op, values):
self.op = op
self.values = values
self._fields = ['op', 'values']
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class UnaryOp(Expression):
def __init__(self, op, operand):
self.op = op
self.operand = operand
self._fields = ['operand']
class Subscript(Expression):
def __init__(self, value, index, context):
self.value = value
self.index = index
self.context = context
self._fields = ['value', 'index', 'context']
class Print(Generable):
def __init__(self,text,newline,dest):
self.text = text
self.newline = newline
self.dest= dest
self.done = False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class ReturnStatement(Generable):
def __init__(self, retval):
self.retval = retval
self._fields = ['retval']
self.done = False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class AugAssign(Generable):
def __init__(self, target, op, value):
self.target = target
self.op = op
self.value = value
self.done = False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Assign(Generable): #should this inherit from something else??
def __init__(self, lvalue, rvalue):
##??
self.lvalue = lvalue
self.rvalue= rvalue
self._fields = ['lvalue', 'rvalue']
self.done = False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class Compare(Generable):
def __init__(self, left,op,right):
self.left = left
self.op = op
self.right = right
self.done=False
self._fields = ('left', 'op', 'right')
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done=True
return self
class IfConv(Generable):
def __init__(self, test, body, orelse, inner_if=False):
self.test = test
self.body = body
self.orelse = orelse
self.inner_if = inner_if
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class For(Generable):
def __init__(self, target, iter, body):
self.target = target
self.iter = iter
self.body = body
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
class While(Generable):
def __init__(self, test, body):
self.test = test
self.body = body
self._fields = []
self.done= False
def __iter__(self):
return self
def next(self):
if self.done:
raise StopIteration
else:
self.done = True
return self
if __name__ == '__main__':
pass
| 17.142405 | 67 | 0.670113 | 731 | 5,417 | 4.718194 | 0.109439 | 0.106698 | 0.073355 | 0.086982 | 0.615251 | 0.534068 | 0.517831 | 0.506234 | 0.506234 | 0.506234 | 0 | 0 | 0.218202 | 5,417 | 315 | 68 | 17.196825 | 0.814404 | 0.008492 | 0 | 0.704 | 0 | 0 | 0.015431 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.212 | false | 0.008 | 0.004 | 0.06 | 0.432 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
940152d4021e4a7153b7871ac9b9df4b89170155 | 1,578 | py | Python | cdgo/mathops.py | s-gordon/CDGo | 7bd1b3a6780f70f1237a7f0cac5e112c6b804100 | [
"MIT"
] | 1 | 2019-01-24T20:52:19.000Z | 2019-01-24T20:52:19.000Z | cdgo/mathops.py | s-gordon/CDGo | 7bd1b3a6780f70f1237a7f0cac5e112c6b804100 | [
"MIT"
] | 3 | 2015-06-18T06:09:37.000Z | 2017-09-07T02:48:44.000Z | cdgo/mathops.py | s-gordon/CDGo | 7bd1b3a6780f70f1237a7f0cac5e112c6b804100 | [
"MIT"
] | null | null | null |
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import numpy as np


def residuals(fit, obs):
    """Calculate residuals for fit compared to observed data

    :fit: list of discrete fit data points
    :obs: list of observed data points
    :returns: fit minus observed data points
    """
    return fit - obs


def fit_stats(obs, fit):
    """
    https://stackoverflow.com/questions/19189362/getting-the-r-squared-value-using-curve-fit
    """
    resid = fit - obs
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((obs - np.mean(obs))**2)
    r_squared = 1 - (ss_res / ss_tot)
    return r_squared, ss_tot, ss_res, resid


def sum_squares_total(calc, obs):
    """
    https://stackoverflow.com/questions/19189362/getting-the-r-squared-value-using-curve-fit
    """
    return np.sum((obs - np.mean(obs))**2)


def sum_squares_residuals(calc, obs):
    """
    https://stackoverflow.com/questions/19189362/getting-the-r-squared-value-using-curve-fit
    """
    resids = residuals(calc, obs)
    return np.sum(resids**2)


def rms_error(calc, obs):
    """Calculate root mean squared deviation

    :calc: calculated data from fit
    :obs: experimentally observed data
    :returns: rmsd
    """
    resids = residuals(calc, obs)
    mean_sqrd = np.mean(resids**2)
    return np.sqrt(mean_sqrd)


def r_squared(calc, obs):
    """
    https://stackoverflow.com/questions/19189362/getting-the-r-squared-value-using-curve-fit
    """
    ss_res = sum_squares_residuals(calc, obs)
    ss_tot = sum_squares_total(calc, obs)
    return 1 - (ss_res / ss_tot)
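As a quick sanity check of the formulas above, here is a NumPy-free mirror of `r_squared` (the helper name `r_squared_plain` is mine): a perfect fit gives R² = 1, and predicting the mean of the observations gives R² = 0.

```python
def r_squared_plain(calc, obs):
    # 1 - ss_res / ss_tot, as in r_squared() above, without NumPy
    resid = [c - o for c, o in zip(calc, obs)]
    ss_res = sum(r ** 2 for r in resid)
    mean_obs = sum(obs) / len(obs)
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot


print(r_squared_plain([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # perfect fit -> 1.0
```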
| 23.552239 | 71 | 0.653992 | 225 | 1,578 | 4.475556 | 0.28 | 0.055611 | 0.083416 | 0.119166 | 0.453823 | 0.350546 | 0.350546 | 0.314796 | 0.314796 | 0.314796 | 0 | 0.032051 | 0.209125 | 1,578 | 66 | 72 | 23.909091 | 0.77484 | 0.439164 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.045455 | 0 | 0.590909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
9404aefdf46be89ea0882f84aad9a69dde8da596 | 1,753 | py | Python | gene.py | Chappie733/Neat | 2414842197c9a146293246e984557b36c8b9fb89 | [
"MIT"
] | null | null | null | gene.py | Chappie733/Neat | 2414842197c9a146293246e984557b36c8b9fb89 | [
"MIT"
] | null | null | null | gene.py | Chappie733/Neat | 2414842197c9a146293246e984557b36c8b9fb89 | [
"MIT"
] | null | null | null |
from enum import Enum
class GeneType(Enum):
    CONNECTION_GENE = 0
    NODE_GENE = 1


class NodeType(Enum):
    INPUT = 0
    HIDDEN = 1
    OUTPUT = 2


class Gene:
    def __init__(self, _type, innov=1) -> None:
        self._type = _type
        self.innov = innov

    def __str__(self):
        return f"Gene:\n\tType: {self._type}\n\tInnovation number: {self.innov}"


class ConnectionGene(Gene):
    def __init__(self, start, end, innov, enabled=True, weight=0):
        super().__init__(GeneType.CONNECTION_GENE, innov)
        self.start, self.end = start, end  # indexes of starting and ending node
        self.enabled = enabled
        self.weight = weight  # not usually set when the node is created

    def __str__(self):
        res = "Connection Gene: "
        res += f"\n\tIn: {self.start}\n\tOut: {self.end}"
        res += f"\n\tEnabled: {self.enabled}"
        res += f"\n\tInnovation number: {self.innov}"
        return res

    def equals(self, other) -> bool:
        '''
        Checks if the gene is equal to another; this doesn't take the innovation
        number into account, only the actual connection this gene represents.
        '''
        return self.end == other.end and self.start == other.start

    @staticmethod
    def are_equal(f, s) -> bool:
        return f.end == s.end and f.start == s.start


class NodeGene(Gene):
    def __init__(self, index, _type):
        super().__init__(GeneType.NODE_GENE, index)
        self.index = index
        self._type = _type
        self.bias = 0

    def __str__(self):
        return f"Node Gene:\n\tIndex: {self.index}\n\tInnovation number: {self.innov}\n\tBias: {self.bias}\n\tType: {self._type}"
941112baf421a06451189dbe5c8a03eed694f448 | 968 | py | Python | Quiz/m1_quant_basics/l2_stock_prices/quiz_tests.py | jcrangel/AI-for-Trading | c3b865e992f8eb8deda91e7641428eef1d343636 | [
"Apache-2.0"
] | 98 | 2020-05-22T00:41:23.000Z | 2022-03-24T12:57:15.000Z | Quiz/m1_quant_basics/l2_stock_prices/quiz_tests.py | jcrangel/AI-for-Trading | c3b865e992f8eb8deda91e7641428eef1d343636 | [
"Apache-2.0"
] | 1 | 2020-01-04T05:32:35.000Z | 2020-01-04T18:22:21.000Z | Quiz/m1_quant_basics/l2_stock_prices/quiz_tests.py | jcrangel/AI-for-Trading | c3b865e992f8eb8deda91e7641428eef1d343636 | [
"Apache-2.0"
] | 74 | 2020-05-05T16:44:42.000Z | 2022-03-23T06:59:09.000Z |
from collections import OrderedDict
import pandas as pd

from tests import project_test, assert_output


@project_test
def test_csv_to_close(fn):
    tickers = ['A', 'B', 'C']
    dates = ['2017-09-22', '2017-09-25', '2017-09-26', '2017-09-27', '2017-09-28']

    fn_inputs = {
        'csv_filepath': 'prices_2017_09_22_2017-09-28.csv',
        'field_names': ['ticker', 'date', 'open', 'high', 'low', 'close', 'volume', 'adj_close', 'adj_volume']}
    fn_correct_outputs = OrderedDict([
        (
            'close',
            pd.DataFrame(
                [
                    [152.48000000, 149.19000000, 59.35000000],
                    [151.11000000, 145.06000000, 60.29000000],
                    [152.42000000, 145.21000000, 57.74000000],
                    [154.34000000, 147.02000000, 58.41000000],
                    [153.68000000, 147.19000000, 56.76000000]],
                dates, tickers))])

    assert_output(fn, fn_inputs, fn_correct_outputs)
| 35.851852 | 111 | 0.566116 | 115 | 968 | 4.582609 | 0.608696 | 0.079696 | 0.030361 | 0.045541 | 0.053131 | 0 | 0 | 0 | 0 | 0 | 0 | 0.311239 | 0.283058 | 968 | 26 | 112 | 37.230769 | 0.448127 | 0 | 0 | 0 | 0 | 0 | 0.169421 | 0.033058 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.045455 | false | 0 | 0.136364 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
941ba60592ad05d3734afb4acc2a69ff6dc6d844 | 613 | py | Python | notebooks/__code/array.py | neutronimaging/BraggEdgeFitting | 233407fc000425ee79897e514964ef196ca27a08 | [
"BSD-3-Clause"
] | null | null | null | notebooks/__code/array.py | neutronimaging/BraggEdgeFitting | 233407fc000425ee79897e514964ef196ca27a08 | [
"BSD-3-Clause"
] | 2 | 2020-10-06T13:48:24.000Z | 2020-10-07T16:21:46.000Z | notebooks/__code/array.py | neutronimaging/BraggEdgeFitting | 233407fc000425ee79897e514964ef196ca27a08 | [
"BSD-3-Clause"
] | null | null | null |
import numpy as np
def exclude_y_value_when_error_is_nan(axis, error_axis):
    axis_cleaned = []
    error_axis_cleaned = []
    for _x, _error in zip(axis, error_axis):
        if (_x == "None") or (_error == "None") or (_x is None) or (_error is None):
            axis_cleaned.append(np.nan)
            error_axis_cleaned.append(np.nan)
        else:
            axis_cleaned.append(float(_x))  # np.float was removed in NumPy 1.24
            error_axis_cleaned.append(float(_error))
    return axis_cleaned, error_axis_cleaned


def check_size(x_axis=None, y_axis=None):
    size_x = len(x_axis)
    size_y = len(y_axis)
    min_len = np.min([size_x, size_y])
    return x_axis[:min_len], y_axis[:min_len]
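`check_size` above simply truncates both axes to the shorter length; a NumPy-free mirror (helper name `check_size_plain` is mine) makes the contract easy to test:

```python
def check_size_plain(x_axis, y_axis):
    # same truncation rule as check_size(), using builtin min instead of np.min
    min_len = min(len(x_axis), len(y_axis))
    return x_axis[:min_len], y_axis[:min_len]


print(check_size_plain([1, 2, 3], [4, 5]))  # ([1, 2], [4, 5])
```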
| 25.541667 | 78 | 0.724307 | 109 | 613 | 3.688073 | 0.266055 | 0.218905 | 0.159204 | 0.189055 | 0.457711 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145188 | 613 | 23 | 79 | 26.652174 | 0.767176 | 0 | 0 | 0 | 0 | 0 | 0.013051 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
941d352a3b96792669ab524440dad9a606275e2e | 442 | py | Python | primacy/__init__.py | FofanovLab/Primacy | 5505b839e33659a50fef725bb5c1c3584827e4f1 | [
"MIT"
] | 4 | 2018-10-19T06:39:46.000Z | 2019-04-18T04:46:19.000Z | primacy/__init__.py | FofanovLab/Primacy | 5505b839e33659a50fef725bb5c1c3584827e4f1 | [
"MIT"
] | null | null | null | primacy/__init__.py | FofanovLab/Primacy | 5505b839e33659a50fef725bb5c1c3584827e4f1 | [
"MIT"
] | 1 | 2018-10-19T06:39:56.000Z | 2018-10-19T06:39:56.000Z |
import click
def error(msg, logger=False):
    """Prints an error message to stderr and logs."""
    click.secho(msg, fg='red', err=True)
    if logger:
        logger.error(msg)


def warn(msg, logger=False):
    '''Prints a warning message to stderr.'''
    click.secho(msg, fg='yellow')
    if logger:
        logger.warning(msg)


def info(msg, logger=False):
    click.secho(msg, fg='green')
    if logger:
        logger.info(msg)
94255f71fe2c2edb4e8cc5ae13b412e91881739d | 832 | py | Python | src/Reset Ease Automatically/utils.py | RisingOrange/Reset-Ease-Automatically | 7c2fd16b7cac32ba499d87f681c75cfcfb617405 | [
"MIT"
] | 5 | 2020-09-06T10:51:39.000Z | 2021-11-11T01:46:06.000Z | src/Reset Ease Automatically/utils.py | RisingOrange/Reset-Ease-Automatically | 7c2fd16b7cac32ba499d87f681c75cfcfb617405 | [
"MIT"
] | 6 | 2020-09-06T11:28:47.000Z | 2021-06-13T00:22:03.000Z | src/Reset Ease Automatically/utils.py | RisingOrange/Reset-Ease-Automatically | 7c2fd16b7cac32ba499d87f681c75cfcfb617405 | [
"MIT"
] | 1 | 2020-09-06T10:52:43.000Z | 2020-09-06T10:52:43.000Z |
from aqt import mw
from .config import get, set


def prepare_deck_to_ease_range():
    deck_to_ease_range = d if (d := get('deck_to_ease_range')) else {}

    # for backwards compatibility
    deck_to_ease = d if (d := get('deck_to_ease')) else {}
    deck_to_ease_range.update(**_to_deck_to_ease_range(deck_to_ease))
    set('deck_to_ease', None)

    # remove entries of decks that do not exist in anki
    # and ensure the deck ids are of type int
    cleaned = {
        int(deck_id): ease_range
        for deck_id, ease_range in deck_to_ease_range.items()
        if str(deck_id) in mw.col.decks.allIds()
    }
    set('deck_to_ease_range', cleaned)


def _to_deck_to_ease_range(deck_to_ease):
    converted = {
        deck_id: (ease, ease)
        for deck_id, ease in deck_to_ease.items()
    }
    return converted
| 27.733333 | 70 | 0.671875 | 136 | 832 | 3.757353 | 0.338235 | 0.164384 | 0.273973 | 0.234834 | 0.221135 | 0.221135 | 0.221135 | 0.105675 | 0 | 0 | 0 | 0 | 0.234375 | 832 | 30 | 71 | 27.733333 | 0.802198 | 0.139423 | 0 | 0 | 0 | 0 | 0.084151 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.105263 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
943b21bad087472d917929d954b13b2a355101b8 | 190 | py | Python | by-session/ta-921/j8/atal_matal.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | 2 | 2015-04-29T20:59:35.000Z | 2018-09-26T13:33:43.000Z | by-session/ta-921/j8/atal_matal.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | null | null | null | by-session/ta-921/j8/atal_matal.py | amiraliakbari/sharif-mabani-python | 5d14a08d165267fe71c28389ddbafe29af7078c5 | [
"MIT"
] | null | null | null |
def f(a, start):
    if len(a) == 1:
        return a[0]
    d = (start + 15 - 1) % len(a)
    del a[d]
    return f(a, d % len(a))


n = int(input())
# Python 3: print is a function, range() must be materialised before del,
# and // keeps the Python 2 integer-division behaviour
print(1 + f(list(range(n * 2)), 0) // 2)
| 19 | 34 | 0.426316 | 37 | 190 | 2.189189 | 0.486486 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07438 | 0.363158 | 190 | 9 | 35 | 21.111111 | 0.595041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
945a434c2fd10a32fcc164824e6dff4647c862a3 | 3,251 | py | Python | globalCounter/server/counter_server.py | aratz-lasa/globalCounter | 9ac0841b0e7d1bc71cd6205649c1b07bcf77e01f | [
"MIT"
] | 2 | 2019-03-24T19:09:59.000Z | 2019-03-25T07:15:06.000Z | globalCounter/server/counter_server.py | aratz-lasa/globalCounter | 9ac0841b0e7d1bc71cd6205649c1b07bcf77e01f | [
"MIT"
] | null | null | null | globalCounter/server/counter_server.py | aratz-lasa/globalCounter | 9ac0841b0e7d1bc71cd6205649c1b07bcf77e01f | [
"MIT"
] | null | null | null |
import socket
from multiprocessing import Pool, Queue, Manager, cpu_count

from ..protocol.methods import *
from ..protocol.models import *
from ..various.abc import CounterServer

MAX_WORKERS = cpu_count()


class UDPCounterServer(CounterServer):
    def __init__(self, ip="0.0.0.0", port=0, max_workers=MAX_WORKERS):
        self.ip = ip
        self.port = port
        self.sock = None
        self.is_running = False
        # workers
        self.manager = Manager()
        self.topic_sum_map = self.manager.dict()
        self.pending_requests = Queue()
        self.workers_pool = Pool(
            processes=max_workers, initializer=self.worker_loop)

    def run(self) -> None:
        self.bind_socket()
        self.is_running = True
        try:
            while self.is_running:
                msg, addr = self.sock.recvfrom(MSG_MAXIMUM_LENGTH)
                self.pending_requests.put((msg, addr))
        except Exception as err:
            if self.is_running:
                raise err

    def worker_loop(self) -> None:
        while True:
            msg, addr = self.pending_requests.get()
            re_msg = get_response(msg, self.topic_sum_map)
            self.send_response(re_msg, addr)

    def send_response(self, message: bytes, addr) -> None:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(message, addr)

    def bind_socket(self) -> None:
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((self.ip, self.port))

    def stop(self) -> None:
        self.is_running = False
        self.workers_pool.terminate()
        self.workers_pool.join()
        self.sock.close()


class TCPCounterServer(CounterServer):
    def __init__(self, ip="0.0.0.0", port=0, max_workers=MAX_WORKERS):
        self.ip = ip
        self.port = port
        self.sock = None
        self.is_running = False
        # workers
        self.manager = Manager()
        self.topic_sum_map = self.manager.dict()
        self.pending_requests = Queue()
        self.workers_pool = Pool(
            processes=max_workers, initializer=self.worker_loop)

    def run(self) -> None:
        self.bind_socket()
        self.is_running = True
        try:
            while self.is_running:
                conn, addr = self.sock.accept()
                self.pending_requests.put((conn, addr))
        except Exception as err:
            if self.is_running:
                raise err

    def worker_loop(self) -> None:
        while True:
            conn, addr = self.pending_requests.get()
            msg = conn.recv(MSG_MAXIMUM_LENGTH)
            re_msg = get_response(msg, self.topic_sum_map)
            self.send_response(re_msg, conn)

    def send_response(self, message: bytes, conn) -> None:
        conn.send(message)
        conn.close()

    def bind_socket(self) -> None:
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((self.ip, self.port))
        self.sock.listen(1)

    def stop(self) -> None:
        self.is_running = False
        self.sock.close()
| 31.563107 | 71 | 0.609966 | 410 | 3,251 | 4.64878 | 0.2 | 0.054565 | 0.068206 | 0.035677 | 0.737671 | 0.710388 | 0.677859 | 0.677859 | 0.677859 | 0.613851 | 0 | 0.005587 | 0.28422 | 3,251 | 102 | 72 | 31.872549 | 0.813494 | 0.004614 | 0 | 0.658537 | 0 | 0 | 0.00433 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0 | 0.060976 | 0 | 0.231707 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
945db23a1d2728c5507f15d764b787a34c73c68a | 252 | py | Python | cms/templatetags/watch.py | sandmark/DjangoPerfectSquare | 490686a1780c27d1b592b1771450f2a86ac861cf | [
"MIT"
] | null | null | null | cms/templatetags/watch.py | sandmark/DjangoPerfectSquare | 490686a1780c27d1b592b1771450f2a86ac861cf | [
"MIT"
] | 3 | 2020-02-11T21:47:12.000Z | 2021-06-10T18:24:43.000Z | cms/templatetags/watch.py | sandmark/DjangoPerfectSquare | 490686a1780c27d1b592b1771450f2a86ac861cf | [
"MIT"
] | null | null | null |
from django import template
from django.utils.http import urlquote
import re

register = template.Library()


@register.filter
def quote_filepath(url):
    _, scheme, path = re.split(r'(https?://)', url)
    return '{}{}'.format(scheme, urlquote(path))
| 22.909091 | 51 | 0.710317 | 33 | 252 | 5.363636 | 0.666667 | 0.112994 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 252 | 10 | 52 | 25.2 | 0.815668 | 0 | 0 | 0 | 0 | 0 | 0.059524 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
945e1051e675a474b42fb803eb11282480b657ba | 264 | py | Python | src/lib/idol/dataclass/codegen/schema/primitive_type.py | lyric-com/idol | 285005e9ddaa92b2284b7e9c28cd12f1e34746ec | [
"MIT"
] | null | null | null | src/lib/idol/dataclass/codegen/schema/primitive_type.py | lyric-com/idol | 285005e9ddaa92b2284b7e9c28cd12f1e34746ec | [
"MIT"
] | 2 | 2020-03-24T18:03:10.000Z | 2020-03-31T10:41:56.000Z | src/lib/idol/dataclass/codegen/schema/primitive_type.py | lyric-com/idol | 285005e9ddaa92b2284b7e9c28cd12f1e34746ec | [
"MIT"
] | null | null | null |
# DO NOT EDIT
# This file was generated by idol_data, any changes will be lost when idol_data is rerun again
from enum import Enum


class SchemaPrimitiveTypeEnum(Enum):
    INT = "int"
    DOUBLE = "double"
    STRING = "string"
    BOOL = "bool"
    ANY = "any"
| 22 | 94 | 0.67803 | 38 | 264 | 4.657895 | 0.736842 | 0.090395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.242424 | 264 | 11 | 95 | 24 | 0.885 | 0.393939 | 0 | 0 | 1 | 0 | 0.140127 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
9460da662c249dbfe258e1da02ea4d17876fff61 | 485 | py | Python | tests/test_requirements_files/app.py | Robinson04/serverlesspack | 271fdbc4bcc769a778e369b14e5f491360824512 | [
"MIT"
] | 1 | 2022-03-07T22:32:32.000Z | 2022-03-07T22:32:32.000Z | tests/test_requirements_files/app.py | Robinson04/serverlesspack | 271fdbc4bcc769a778e369b14e5f491360824512 | [
"MIT"
] | 3 | 2021-03-01T19:12:46.000Z | 2021-05-09T11:09:27.000Z | tests/test_requirements_files/app.py | Robinson04/serverlesspack | 271fdbc4bcc769a778e369b14e5f491360824512 | [
"MIT"
] | null | null | null | from pydantic import BaseModel
from typing import Optional


class RequestDataModel(BaseModel):
    loginToken: str


def login_with_google(data: dict):
    request_data = RequestDataModel(**data)

    from google.oauth2 import id_token
    from google.auth.transport.requests import Request as GoogleRequest
    user_infos: Optional[dict] = id_token.verify_oauth2_token(
        id_token=request_data.loginToken, request=GoogleRequest(), audience='token_id'
    )
    print(user_infos)
| 30.3125 | 86 | 0.769072 | 60 | 485 | 6.016667 | 0.5 | 0.058172 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00489 | 0.156701 | 485 | 15 | 87 | 32.333333 | 0.877751 | 0 | 0 | 0 | 0 | 0 | 0.016495 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.333333 | 0 | 0.583333 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
94777a5313465c9c8238bf7bf45c469fe3eb78a4 | 315 | py | Python | wavelink/__init__.py | hamza1311/Wavelink | f593c59d3589dea9a5337731ac3daaac4cabbfee | [
"MIT"
] | null | null | null | wavelink/__init__.py | hamza1311/Wavelink | f593c59d3589dea9a5337731ac3daaac4cabbfee | [
"MIT"
] | null | null | null | wavelink/__init__.py | hamza1311/Wavelink | f593c59d3589dea9a5337731ac3daaac4cabbfee | [
"MIT"
] | null | null | null | __title__ = 'WaveLink'
__author__ = 'EvieePy'
__license__ = 'MIT'
__copyright__ = 'Copyright 2019-2020 (c) PythonistaGuild'
__version__ = '0.6.0'
from .client import Client
from .errors import *
from .eqs import *
from .events import *
from .player import *
from .node import Node
from .websocket import WebSocket
| 22.5 | 57 | 0.752381 | 40 | 315 | 5.425 | 0.575 | 0.184332 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041045 | 0.149206 | 315 | 13 | 58 | 24.230769 | 0.768657 | 0 | 0 | 0 | 0 | 0 | 0.196825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.583333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
9477d09d2f626c9ba5ae9d79fc4fcad2a1d51dc9 | 231 | py | Python | Domain/restaurantvalidator.py | VargaIonut23/restaurant | 3f991f30b03921481142187ef33f81d1dc4fe2ad | [
"MIT"
] | null | null | null | Domain/restaurantvalidator.py | VargaIonut23/restaurant | 3f991f30b03921481142187ef33f81d1dc4fe2ad | [
"MIT"
] | null | null | null | Domain/restaurantvalidator.py | VargaIonut23/restaurant | 3f991f30b03921481142187ef33f81d1dc4fe2ad | [
"MIT"
] | null | null | null |
class restaurantvalidator():
    def valideaza(self, restaurant):
        erori = []
        if len(restaurant.nume) == 0:
            erori.append('the name must not be null')
        if erori:
            raise ValueError(erori)
| 28.875 | 53 | 0.575758 | 25 | 231 | 5.32 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 0.320346 | 231 | 7 | 54 | 33 | 0.840764 | 0 | 0 | 0 | 0 | 0 | 0.108225 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
947e404b2d74b25f2cf7a78cf9b4ba6e70f26910 | 228 | py | Python | python/testData/inspections/PyUnresolvedReferencesInspection3K/objectNewAttributes.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/inspections/PyUnresolvedReferencesInspection3K/objectNewAttributes.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/inspections/PyUnresolvedReferencesInspection3K/objectNewAttributes.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z |
class C(object):
    def __new__(cls):
        self = object.__new__(cls)
        self.foo = 1
        return self


x = C()
print(x.foo)
print(x.<warning descr="Unresolved attribute reference 'bar' for class 'C'">bar</warning>)
| 22.8 | 90 | 0.622807 | 33 | 228 | 4.060606 | 0.575758 | 0.089552 | 0.149254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 0.232456 | 228 | 9 | 91 | 25.333333 | 0.76 | 0 | 0 | 0 | 0 | 0 | 0.219298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9482667f7b3f81e2c39c9eb01e0e0b4cdd7db3f4 | 2,921 | py | Python | wire/messages.py | evuez/stork | bd57b207957a9e2feb6ec253cfd2c125a139e52b | [
"MIT"
] | null | null | null | wire/messages.py | evuez/stork | bd57b207957a9e2feb6ec253cfd2c125a139e52b | [
"MIT"
] | null | null | null | wire/messages.py | evuez/stork | bd57b207957a9e2feb6ec253cfd2c125a139e52b | [
"MIT"
] | null | null | null | """
Messages:
https://wiki.theory.org/BitTorrentSpecification#Messages
<length prefix><message ID><payload>
"""
from collections import namedtuple
from struct import pack
from struct import unpack
FORMAT = '>IB{}'
Message = namedtuple('Message', 'len id payload')
KEEP_ALIVE = -1
CHOKE = 0
UNCHOKE = 1
INTERESTED = 2
NOT_INTERESTED = 3
HAVE = 4
BITFIELD = 5
REQUEST = 6
PIECE = 7
CANCEL = 8
PORT = 9
FORMAT_KEEP_ALIVE = \
FORMAT_CHOKE = \
FORMAT_UNCHOKE = \
FORMAT_INTERESTED = \
FORMAT_NOT_INTERESTED = '>IB'
FORMAT_HAVE = '>IBI'
FORMAT_BITFIELD = '>IB{}B'
FORMAT_REQUEST = '>IBIII'
FORMAT_PIECE = '>IBII{}c'
FORMAT_CANCEL = '>IBIII'
FORMAT_PORT = '>IBH'
def decode(message):
if len(message) == 4:
return Message(0, KEEP_ALIVE, None)
len_, id_ = unpack('>IB', message[:5])
return [
decode_choke,
decode_unchoke,
decode_interested,
decode_not_interested,
decode_have,
decode_bitfield,
decode_request,
decode_piece,
decode_cancel,
decode_port,
][id_](message, len_ - 1)
# Messages
def keep_alive():
return b'\x00\x00\x00\x00'
def choke():
return b'\x00\x00\x00\x01\x00'
def unchoke():
return b'\x00\x00\x00\x01\x01'
def interested():
return b'\x00\x00\x00\x01\x02'
def not_interested():
return b'\x00\x00\x00\x01\x03'
def have(piece_index):
return pack(FORMAT_HAVE, 5, 4, piece_index)
def bitfield(bits):
len_ = 1 + len(bits)
return pack(FORMAT_BITFIELD.format(len_), len_, 5, bits)
def request(index, begin, length):
return pack(FORMAT_REQUEST, 13, 6, index, begin, length)
def piece(index, begin, block):
len_ = 9 + len(block)
return pack(FORMAT_PIECE.format(len_), len_, 7, index, begin, block)
def cancel(index, begin, length):
return pack(FORMAT_CANCEL, 13, 8, index, begin, length)
def port(listen_port):
return pack(FORMAT_PORT, 3, 9, listen_port)
# Decoders
def decode_choke(message, _paylen):
return Message(*unpack(FORMAT_CHOKE, message), None)
def decode_unchoke(message, _paylen):
return Message(*unpack(FORMAT_UNCHOKE, message), None)
def decode_interested(message, _paylen):
return Message(*unpack(FORMAT_INTERESTED, message), None)
def decode_not_interested(message, _paylen):
return Message(*unpack(FORMAT_NOT_INTERESTED, message), None)
def decode_have(message, _paylen):
return Message(*unpack(FORMAT_HAVE, message))
def decode_bitfield(message, paylen):
len_, id_, *payload = unpack(FORMAT_BITFIELD.format(paylen), message)
return Message(len_, id_, payload)
def decode_request(message, _paylen):
    len_, id_, index, begin, length = unpack(FORMAT_REQUEST, message)
    return Message(len_, id_, (index, begin, length))
def decode_piece(message, paylen):
    len_, id_, index, begin, *block = unpack(
        FORMAT_PIECE.format(paylen - 8),
        message
    )
    return Message(len_, id_, (index, begin, block))
def decode_cancel(message, _paylen):
    len_, id_, index, begin, length = unpack(FORMAT_CANCEL, message)
    return Message(len_, id_, (index, begin, length))
def decode_port(message, _paylen):
    len_, id_, listen_port = unpack(FORMAT_PORT, message)
    return Message(len_, id_, listen_port)
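The `>IB…` format strings above all follow BitTorrent's wire framing: a 4-byte big-endian length prefix, a 1-byte message id, then the payload. A self-contained round trip with `struct` alone (independent of the helpers above) illustrates the framing for a `have` message:

```python
from struct import pack, unpack

# Encode a 'have' message: length prefix 5 (1 id byte + 4-byte piece index), id 4.
wire = pack('>IBI', 5, 4, 42)
assert wire == b'\x00\x00\x00\x05\x04\x00\x00\x00*'

# Decoding reverses it: read the fixed prefix, then parse the payload by id.
length, msg_id = unpack('>IB', wire[:5])
piece_index, = unpack('>I', wire[5:])
```

The `decode` dispatcher above does exactly this, indexing a decoder table by `msg_id`.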
# get_longi_lati.py (icepoint666/Pytorch-ThinPlateSpline, MIT)
import numpy as np
import torch
import matplotlib.animation as animation
import matplotlib.pyplot as plt
from PIL import Image
import ThinPlateSpline as TPS
# 2048x2048.jpg size: 2048 x 2048
def on_press(event):
    p = np.array([
        [693.55, 531.26],
        [1069.85, 1243.04],
        [1243.74, 1238.69],
        [472.82, 664.85],
        [552.50, 1460.07],
        [1021.03, 368.02],
        [1260.78, 1571.90],
        [93.16, 911.26],
        [234.85, 914.14],
        [383.34, 1140.97],
        [375.46, 853.36],
        [256.73, 597.61],
        [338.32, 502.28],
        [754.67, 337.95],
        [1120.42, 1797.99],
        [1521.97, 1655.66],
        [1371.15, 1832.87],
        [1522.78, 1315.94],
        [1116.38, 754.82],
        [1165.72, 1162.44],
        [1024.00, 1024.00]])
    v = np.array([
        [121.52, 25.00],
        [142.31, -10.74],
        [150.81, -10.63],
        [109.60, 18.24],
        [113.58, -22.72],
        [139.92, 34.87],
        [153.25, -28.63],
        [45.29, -25.83],
        [95.26, 5.30],
        [105.86, -6.01],
        [104.90, 8.46],
        [96.95, 16.70],
        [96.81, 27.64],
        [122.71, 37.11],
        [147.14, -43.12],
        [172.68, -34.63],
        [167.75, -42.28],
        [166.68, -14.63],
        [144.68, 13.25],
        [146.93, -6.96],
        [141.01, 0.09]])
    p = torch.Tensor(p.reshape([1, p.shape[0], 2]))
    v = torch.Tensor(v.reshape([1, v.shape[0], 2]))
    T = TPS.solve_system(p, v)
    point = np.array([event.xdata, event.ydata])
    point_T = TPS.point_transform(point, T, p)
    print("Longitude:", point_T[0, 0, 0])
    print("Latitude:", point_T[0, 1, 0])
if __name__ == '__main__':
    print("It is suggested that clicking on the image close to the middle position will be more accurate.")
    fig = plt.figure()
    img = Image.open('2048x2048.jpg')
    plt.imshow(img, animated=True)
    fig.canvas.mpl_connect('button_press_event', on_press)
    plt.show()
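The click handler maps a pixel coordinate to longitude/latitude through the control points solved by `ThinPlateSpline`. As a rough, self-contained sketch of the same idea (assuming an affine least-squares fit is an acceptable stand-in for the full thin-plate-spline warp; the `pixels`/`lonlat` control points below are made up):

```python
import numpy as np

# Hypothetical control points: pixel positions and their known lon/lat.
pixels = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
lonlat = np.array([[10.0, 50.0], [11.0, 50.0], [10.0, 49.0], [11.0, 49.0]])

# Least-squares affine fit: [x, y, 1] @ A ~= [lon, lat].
X = np.hstack([pixels, np.ones((len(pixels), 1))])
A, *_ = np.linalg.lstsq(X, lonlat, rcond=None)

def pixel_to_lonlat(x, y):
    return np.array([x, y, 1.0]) @ A
```

A TPS adds radially weighted basis terms on top of this affine part, which is what lets it model local distortion in the map image.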
# webapp/core/migrations/0001_initial.py (PoCDAB/cfns-webapp, MIT)
# Generated by Django 3.2 on 2021-08-25 14:44
import django.contrib.gis.db.models.fields
from django.db import migrations, models
import django.db.models.deletion
import django.utils.timezone
import fontawesome_5.fields
class Migration(migrations.Migration):
    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='aisEncodedModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('received_from', models.CharField(max_length=128)),
                ('received_at', models.DateTimeField(default=django.utils.timezone.now)),
                ('message', models.CharField(max_length=256)),
            ],
            options={
                'verbose_name': 'Encoded AIS message',
                'verbose_name_plural': 'Encoded AIS messages',
            },
        ),
        migrations.CreateModel(
            name='dabModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('message_id', models.IntegerField(null=True)),
                ('message_type', models.IntegerField()),
                ('message', models.CharField(max_length=256)),
                ('ship_id', models.CharField(max_length=256)),
            ],
            options={
                'verbose_name': 'DAB message',
                'verbose_name_plural': 'DAB messages',
            },
        ),
        migrations.CreateModel(
            name='FontAwesomeIcon',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('icon', fontawesome_5.fields.IconField(blank=True, max_length=60)),
            ],
        ),
        migrations.CreateModel(
            name='gatewayModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('rssi', models.IntegerField(blank=True, null=True)),
                ('snr', models.IntegerField(blank=True, null=True)),
                ('gateway_id', models.CharField(blank=True, max_length=256, null=True)),
                ('gateway_eui', models.CharField(blank=True, max_length=256, null=True)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='lorawanModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('ack', models.IntegerField(blank=True, null=True, verbose_name='Acknowledgement')),
                ('msg', models.CharField(blank=True, max_length=256, null=True, verbose_name='Message')),
                ('hdop', models.DecimalField(blank=True, decimal_places=2, max_digits=19, null=True)),
                ('alt', models.DecimalField(blank=True, decimal_places=2, max_digits=19, null=True)),
                ('geom', django.contrib.gis.db.models.fields.PointField(blank=True, null=True, srid=4326, verbose_name='Location')),
            ],
            options={
                'verbose_name': 'LoRaWAN message',
                'verbose_name_plural': 'LoRaWAN messages',
            },
        ),
        migrations.CreateModel(
            name='aisDecodedModel',
            fields=[
                ('aisencodedmodel_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='core.aisencodedmodel')),
                ('mmsi', models.IntegerField(null=True)),
                ('name', models.CharField(blank=True, max_length=128, null=True, verbose_name='Shipname')),
                ('geom', django.contrib.gis.db.models.fields.PointField(blank=True, null=True, srid=4326, verbose_name='Location (x,y)')),
                ('course', models.FloatField(blank=True, null=True, verbose_name='Course')),
                ('ack', models.IntegerField(blank=True, null=True, verbose_name='Acknowledgement')),
                ('msg', models.IntegerField(blank=True, null=True, verbose_name='Message')),
                ('rssi', models.IntegerField(blank=True, null=True, verbose_name='RSSI')),
            ],
            options={
                'verbose_name': 'Decoded AIS message',
                'verbose_name_plural': 'Decoded AIS messages',
            },
            bases=('core.aisencodedmodel',),
        ),
        migrations.CreateModel(
            name='lorawanGatewayConnectionModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('gateway', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.gatewaymodel')),
                ('lorawan', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='core.lorawanmodel')),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='geoPolygonModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('font_awesome_iconcolor', models.CharField(max_length=256)),
                ('polygon', django.contrib.gis.db.models.fields.PolygonField(blank=True, null=True, srid=4326)),
                ('message', models.CharField(max_length=64)),
                ('dab', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.dabmodel')),
                ('lorawan', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.lorawanmodel')),
                ('aisDecoded', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.aisdecodedmodel')),
            ],
            options={
                'verbose_name': 'Geo Polygon Message',
                'verbose_name_plural': 'Geo Polygon Messages',
            },
        ),
        migrations.CreateModel(
            name='geoPointModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('font_awesome_iconcolor', models.CharField(max_length=256)),
                ('location', django.contrib.gis.db.models.fields.PointField(blank=True, null=True, srid=4326, verbose_name='Pivot')),
                ('message', models.CharField(max_length=64)),
                ('dab', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.dabmodel')),
                ('lorawan', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.lorawanmodel')),
                ('aisDecoded', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.aisdecodedmodel')),
            ],
            options={
                'verbose_name': 'Geo Point Message',
                'verbose_name_plural': 'Geo Point Messages',
            },
        ),
        migrations.CreateModel(
            name='geoMessageModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('font_awesome_iconcolor', models.CharField(max_length=256)),
                ('message', models.CharField(max_length=64)),
                ('dab', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.dabmodel')),
                ('lorawan', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.lorawanmodel')),
                ('aisDecoded', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.aisdecodedmodel')),
            ],
            options={
                'verbose_name': 'Geo Message',
                'verbose_name_plural': 'Geo Messages',
            },
        ),
        migrations.CreateModel(
            name='geoCircleModel',
            fields=[
                ('id', models.AutoField(primary_key=True, serialize=False)),
                ('created_at', models.DateTimeField(auto_now_add=True)),
                ('updated_at', models.DateTimeField(auto_now=True)),
                ('font_awesome_iconcolor', models.CharField(max_length=256)),
                ('location', django.contrib.gis.db.models.fields.PointField(blank=True, null=True, srid=4326, verbose_name='Pivot')),
                ('radius', models.DecimalField(blank=True, decimal_places=2, max_digits=20, null=True)),
                ('message', models.CharField(max_length=64)),
                ('dab', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.dabmodel')),
                ('lorawan', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.lorawanmodel')),
                ('aisDecoded', models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='core.aisdecodedmodel')),
            ],
            options={
                'verbose_name': 'Geo Circle Message',
                'verbose_name_plural': 'Geo Circle Messages',
            },
        ),
    ]
# pulsar/async/_subprocess.py (PyCN/pulsar, BSD-3-Clause)
if __name__ == '__main__':
    import sys
    import pickle
    from multiprocessing import current_process
    from multiprocessing.spawn import import_main_path

    data = pickle.load(sys.stdin.buffer)
    current_process().authkey = data['authkey']
    sys.path = data['path']
    import_main_path(data['main'])
    impl = pickle.loads(data['impl'])

    from pulsar.async.concurrency import run_actor
    run_actor(impl)
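The child process reconstructs its bootstrap state from one pickled dict streamed over stdin. The same handshake can be exercised in-process, using `io.BytesIO` as a stand-in for the pipe (the `data` keys mirror the ones read above; the values are illustrative):

```python
import pickle
from io import BytesIO

# Parent side: serialize the bootstrap payload the child expects.
data = {'authkey': b'secret', 'path': ['/srv/app'],
        'main': '/srv/app/run.py', 'impl': pickle.dumps({'aid': 1})}
pipe = BytesIO(pickle.dumps(data))

# Child side: a single pickle.load off the stream recovers the whole dict,
# and the nested actor implementation is unpickled separately.
received = pickle.load(pipe)
impl = pickle.loads(received['impl'])
```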
# visitors/signals.py (hugorodgerbrown/django-visitor, MIT)
from django.dispatch import Signal
# sent when a user creates their own Visitor - can
# be used to send the email with the token
# kwargs: visitor
self_service_visitor_created = Signal()
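Consumers subscribe to this signal with `connect` and the app fires it with `send` when a visitor self-registers. Conceptually it is a plain observer registry; a minimal pure-Python sketch of the dispatch (not Django's actual implementation, and the `visitor` argument is illustrative):

```python
class MiniSignal:
    """Stripped-down stand-in for django.dispatch.Signal."""
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Django returns a list of (receiver, response) pairs.
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]

self_service_visitor_created = MiniSignal()

def send_token_email(sender, visitor, **kwargs):
    return "emailing token to " + visitor

self_service_visitor_created.connect(send_token_email)
responses = self_service_visitor_created.send(sender=None, visitor="alice@example.com")
```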
# tensorflow/mcst_model.py (nkzhlee/RCModel, Apache-2.0)
# -*- coding:utf8 -*-
# ==============================================================================
# Copyright 2017 lizhaohui.com, Inc. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
This module implements the reading comprehension models based on:
Reinforcement Learning and Monte-Carlo Tree Search
Note that we use Pointer Network for the decoding stage of both models.
"""
import os
import time
import logging
import json
from mctree import search_tree
import numpy as np
import tensorflow as tf
from utils import compute_bleu_rouge
from utils import normalize
from layers.basic_rnn import rnn
from layers.match_layer import MatchLSTMLayer
from layers.match_layer import AttentionFlowMatchLayer
from layers.pointer_net import PointerNetDecoder
from pmctree import PSCHTree
class MCSTmodel(object):
"""
Implements the main reading comprehension model.
"""
def __init__(self, vocab, args):
# logging
self.args = args
self.logger = logging.getLogger("brc")
# basic config
self.algo = args.algo
self.hidden_size = args.hidden_size
self.optim_type = args.optim
self.learning_rate = args.learning_rate
self.weight_decay = args.weight_decay
self.use_dropout = args.dropout_keep_prob < 1
# length limit
self.max_p_num = args.max_p_num
self.max_p_len = args.max_p_len
self.max_q_len = args.max_q_len
#self.max_a_len = args.max_a_len
self.max_a_len = 20
#test paras
self.search_time = 3000
self.beta = 100.0
# the vocab
self.vocab = vocab
#self._build_graph()
    def _build_graph(self):
        """
        Builds the computation graph with Tensorflow
        """
        # session info
        sess_config = tf.ConfigProto()
        sess_config.gpu_options.allow_growth = True
        self.sess = tf.Session(config=sess_config)
        start_t = time.time()
        self._setup_placeholders()
        self._embed()
        self._encode()
        self._initstate()
        self._action_frist()
        self._action()
        self._compute_loss()
        #param_num = sum([np.prod(self.sess.run(tf.shape(v))) for v in self.all_params])
        #self.logger.info('There are {} parameters in the model'.format(param_num))
        self.saver = tf.train.Saver()
        self.sess.run(tf.global_variables_initializer())
        self.logger.info('Time to build graph: {} s'.format(time.time() - start_t))
    def _setup_placeholders(self):
        """
        Placeholders
        """
        self.p = tf.placeholder(tf.int32, [None, None])
        self.q = tf.placeholder(tf.int32, [None, None])
        self.p_length = tf.placeholder(tf.int32, [None])
        self.q_length = tf.placeholder(tf.int32, [None])
        self.start_label = tf.placeholder(tf.int32, [None])
        self.end_label = tf.placeholder(tf.int32, [None])
        self.dropout_keep_prob = tf.placeholder(tf.float32)
        #test
        self.p_words_id = tf.placeholder(tf.int32, [None])
        self.candidate_id = tf.placeholder(tf.int32, [None])
        #self.words = tf.placeholder(tf.float32, [None, None])
        self.selected_id_list = tf.placeholder(tf.int32, [None])
        self.policy = tf.placeholder(tf.float32, [1, None])  # policy
        self.v = tf.placeholder(tf.float32, [1, 1])  # value
    def _embed(self):
        """
        The embedding layer, question and passage share embeddings
        """
        #with tf.device('/cpu:0'), tf.variable_scope('word_embedding'):
        with tf.variable_scope('word_embedding'):
            self.word_embeddings = tf.get_variable(
                'word_embeddings',
                shape=(self.vocab.size(), self.vocab.embed_dim),
                initializer=tf.constant_initializer(self.vocab.embeddings),
                trainable=True
            )
            self.p_emb = tf.nn.embedding_lookup(self.word_embeddings, self.p)
            self.q_emb = tf.nn.embedding_lookup(self.word_embeddings, self.q)
    def _encode(self):
        """
        Employs two Bi-LSTMs to encode passage and question separately
        """
        with tf.variable_scope('passage_encoding'):
            self.p_encodes, _ = rnn('bi-lstm', self.p_emb, self.p_length, self.hidden_size)
        with tf.variable_scope('question_encoding'):
            _, self.sep_q_encodes = rnn('bi-lstm', self.q_emb, self.q_length, self.hidden_size)
        if self.use_dropout:
            self.p_encodes = tf.nn.dropout(self.p_encodes, self.dropout_keep_prob)
            self.sep_q_encodes = tf.nn.dropout(self.sep_q_encodes, self.dropout_keep_prob)
    def _initstate(self):
        self.V = tf.Variable(tf.random_uniform([self.hidden_size * 2, self.hidden_size * 2], -1. / self.hidden_size, 1. / self.hidden_size))
        self.W = tf.Variable(tf.random_uniform([self.hidden_size * 2, 1], -1. / self.hidden_size, 1. / self.hidden_size))
        self.W_b = tf.Variable(tf.random_uniform([1, 1], -1. / self.hidden_size, 1. / self.hidden_size))
        self.V_c = tf.Variable(tf.random_uniform([self.hidden_size * 2, self.hidden_size], -1. / self.hidden_size, 1. / self.hidden_size))
        self.V_h = tf.Variable(tf.random_uniform([self.hidden_size * 2, self.hidden_size], -1. / self.hidden_size, 1. / self.hidden_size))
        self.q_state_c = tf.sigmoid(tf.matmul(self.sep_q_encodes, self.V_c))
        self.q_state_h = tf.sigmoid(tf.matmul(self.sep_q_encodes, self.V_h))
        self.q_state = tf.concat([self.q_state_c, self.q_state_h], 1)
        self.words = tf.reshape(self.p_encodes, [-1, self.hidden_size * 2])
        self.words_list = tf.gather(self.words, self.p_words_id)  # all words in a question doc
    def _action_frist(self):
        """
        select first word
        """
        #self.candidate = tf.reshape(self.p_emb, [-1, self.hidden_size*2])
        self.logits_first = tf.reshape(tf.matmul(tf.matmul(self.words_list, self.V), tf.transpose(self.q_state)), [-1])
        self.prob_first = tf.nn.softmax(self.logits_first)
        self.prob_id_first = tf.argmax(self.prob_first)
        self.value_first = tf.sigmoid(tf.reshape(tf.matmul(self.q_state, self.W), [1, 1]) + self.W_b)  # [1,1]
    def _action(self):
        """
        Employs Bi-LSTM again to fuse the context information after match layer
        """
        self.candidate = tf.gather(self.words_list, self.candidate_id)
        self.selected_list = tf.gather(self.words_list, self.selected_id_list)
        self.input = tf.reshape(self.selected_list, [1, -1, self.hidden_size * 2])
        rnn_cell = tf.contrib.rnn.BasicLSTMCell(num_units=self.hidden_size, state_is_tuple=False)
        _, self.states = tf.nn.dynamic_rnn(rnn_cell, self.input, initial_state=self.q_state, dtype=tf.float32)  # [1, dim]
        self.logits = tf.reshape(tf.matmul(tf.matmul(self.candidate, self.V), tf.transpose(self.states)), [-1])
        self.prob = tf.nn.softmax(self.logits)
        self.prob_id = tf.argmax(self.prob)
        self.value = tf.sigmoid(tf.reshape(tf.matmul(self.states, self.W), [1, 1]) + self.W_b)  # [1,1]
    def value_function(self, words_list):
        words_list = map(eval, words_list)
        #print words_list
        if len(words_list) == 0:
            value_p = self.sess.run(self.value_first, feed_dict=self.feed_dict)
        else:
            feed_dict = dict({self.selected_id_list: words_list}.items() + self.feed_dict.items())
            value_p = self.sess.run(self.value, feed_dict=feed_dict)
        return value_p
    def get_policy(self, words_list, l_passages):
        max_id = float('-inf')
        policy_c_id = []
        words_list = map(eval, words_list)
        for can in words_list:
            max_id = max(can, max_id)
        for idx in range(l_passages):
            if idx > max_id:
                policy_c_id.append(idx)
        if len(words_list) == 0:
            c_pred = self.sess.run(self.prob_first, feed_dict=self.feed_dict)
        else:
            feed_dict = dict({self.selected_id_list: words_list, self.candidate_id: policy_c_id}.items() + self.feed_dict.items())
            c_pred = self.sess.run(self.prob, feed_dict=feed_dict)
        return policy_c_id, c_pred
    def _decode(self):
        """
        Employs Pointer Network to get the probs of each position
        to be the start or end of the predicted answer.
        Note that we concat the fuse_p_encodes for the passages in the same document.
        And since the encodes of queries in the same document are the same, we select the first one.
        """
        with tf.variable_scope('same_question_concat'):
            batch_size = tf.shape(self.start_label)[0]
            concat_passage_encodes = tf.reshape(
                self.fuse_p_encodes,
                [batch_size, -1, 2 * self.hidden_size]
            )
            no_dup_question_encodes = tf.reshape(
                self.sep_q_encodes,
                [batch_size, -1, tf.shape(self.sep_q_encodes)[1], 2 * self.hidden_size]
            )[0:, 0, 0:, 0:]
        decoder = PointerNetDecoder(self.hidden_size)
        self.start_probs, self.end_probs = decoder.decode(concat_passage_encodes,
                                                          no_dup_question_encodes)
    def _compute_loss(self):
        """
        The loss function
        """
        self.loss_first = tf.contrib.losses.mean_squared_error(self.v, self.value_first) - tf.matmul(
            self.policy, tf.reshape(tf.log(tf.clip_by_value(self.prob_first, 1e-30, 1.0)), [-1, 1]))
        self.optimizer_first = tf.train.AdagradOptimizer(self.learning_rate).minimize(self.loss_first)
        self.loss = tf.contrib.losses.mean_squared_error(self.v, self.value) - tf.matmul(
            self.policy, tf.reshape(tf.log(tf.clip_by_value(self.prob, 1e-30, 1.0)), [-1, 1]))
        self.optimizer = tf.train.AdagradOptimizer(self.learning_rate).minimize(self.loss)
        self.all_params = tf.trainable_variables()
    def _create_train_op(self):
        """
        Selects the training algorithm and creates a train operation with it
        """
        if self.optim_type == 'adagrad':
            self.optimizer = tf.train.AdagradOptimizer(self.learning_rate)
        elif self.optim_type == 'adam':
            self.optimizer = tf.train.AdamOptimizer(self.learning_rate)
        elif self.optim_type == 'rprop':
            self.optimizer = tf.train.RMSPropOptimizer(self.learning_rate)
        elif self.optim_type == 'sgd':
            self.optimizer = tf.train.GradientDescentOptimizer(self.learning_rate)
        else:
            raise NotImplementedError('Unsupported optimizer: {}'.format(self.optim_type))
        self.train_op = self.optimizer.minimize(self.loss)
    def _train_epoch_new(self, pmct, train_batches, batch_size, dropout_keep_prob):
        """
        Trains the model for a single epoch.
        Args:
            train_batches: iterable batch data for training
            dropout_keep_prob: float value indicating dropout keep probability
        """
        for bitx, batch in enumerate(train_batches, 1):
            print '------ Batch Question: ' + str(bitx)
            '''
            feed_dict = {self.p: batch['passage_token_ids'],
                         self.q: [batch['question_token_ids']],
                         self.p_length: batch['passage_length'],
                         self.q_length: [batch['question_length']],
                         self.dropout_keep_prob: dropout_keep_prob}
            '''
            pred_answers = {}
            #print str(ref_answers)
            listSelectedSet = []
            p_data = []
            tree_batch = {
                'tree_ids': batch['question_ids'],
                'question_type': batch['question_types'],
                'root_tokens': batch['question_token_ids'],
                'q_length': batch['question_length'],
                'candidates': batch['passage_token_ids'],
                'p_length': batch['passage_length'],
                'ref_answers': batch['ref_answers'],
                'mcst_model': self
            }
            feed_dict = {}
            pmct.feed_in_batch(tree_batch, 3, feed_dict)
            loss = pmct.tree_search()
        return loss
    def _train_epoch(self, train_batches, dropout_keep_prob):
        """
        Trains the model for a single epoch.
        Args:
            train_batches: iterable batch data for training
            dropout_keep_prob: float value indicating dropout keep probability
        """
        total_num, total_loss = 0, 0
        log_every_n_batch, n_batch_loss = 3, 0
        for bitx, batch in enumerate(train_batches, 1):
            print '------ Batch Question: ' + str(bitx)
            #print 'each passage len: '
            #print batch['padded_p_len']
            p_words_id = []  # all words id
            p_words_list = []  # all words except padding
            p_words_list_all = []
            l_passages = 1  # include end_pad
            n = 0
            for l in batch['passage_length']:
                l_passages += l
                temp_id = [i + n * (int(batch['padded_p_len'])) for i in range(l)]
                #print temp
                temp_w = batch['passage_token_ids'][n][:l]
                temp_all = batch['passage_token_ids'][n]
                n += 1
                p_words_id += temp_id
                p_words_list += temp_w
                p_words_list_all += temp_all
            p_words_list.append(0)
            self.end_pad = []
            self.end_pad.append(p_words_id[-1] + 1)
            p_words_id.append(p_words_id[-1] + 1)
            #print 'end_pad: '
            #print self.end_pad
            #print p_words_list
            #print p_words_id
            #print len(p_words_list)
            #print p_words_list_all
            #print len(p_words_list_all)
            self.max_a_len = min(self.max_a_len, l_passages)
            self.feed_dict = {self.p: batch['passage_token_ids'],
                              self.q: [batch['question_token_ids'][0]],
                              self.p_length: batch['passage_length'],
                              self.q_length: [batch['question_length'][0]],
                              self.start_label: batch['start_id'],
                              self.end_label: batch['end_id'],
                              self.p_words_id: p_words_id,
                              self.dropout_keep_prob: dropout_keep_prob}
            #print "question_length: " + str(batch['question_length'])
            #print "passage_length: " + str(batch['passage_length'])
            pred_answers, ref_answers = [], []
            for sample in batch['raw_data']:
                if 'answers' in sample:
                    ref_answers.append({'question_id': sample['question_id'],
                                        'question_type': sample['question_type'],
                                        'answers': sample['answers'],
                                        'entity_answers': [[]],
                                        'yesno_answers': []})
            #print 'answers: '
            #print str(sample['answers'])
            #print 'ref_answers: '
            #print ref_answers
            listSelectedSet = []
            p_data = []
            start_node = 'question_' + str(batch['question_ids'][0])
            mcts_tree = search_tree(self, batch['question_ids'][0], self.max_a_len, l_passages, p_words_list, ref_answers, self.vocab)
            #for t in xrange(3):
            for t in xrange(self.max_a_len):
                #print '-------------' + str(t) + '------------'
                mcts_tree.search(start_node)
                tmp_policy = mcts_tree.get_ppolicy(start_node)
                #print 'tmp_policy.values(): '
                #print tmp_policy.values()
                #print 'sum(tmp_policy.values()): '
                #print sum(tmp_policy.values())
                prob, select_doc_id, start_node = mcts_tree.take_action(start_node)
                p_data.append(prob)
                listSelectedSet.append(select_doc_id)
                if select_doc_id in self.end_pad:
                    print 'break!!!!!!!!!!!'
                    break
            listSelectedSet_words = []
            listSelectedSet = map(eval, listSelectedSet)
            for idx in listSelectedSet:
                listSelectedSet_words.append(p_words_list[idx])
            #print 'listSelectedSet:'
            #print listSelectedSet
            #print 'listSelectedSet_words: '
            #print listSelectedSet_words
            for sample in batch['raw_data']:
                #print 'str:'
                strr123 = self.vocab.recover_from_ids(listSelectedSet_words, 0)
                #print strr123
                pred_answers.append({'question_id': sample['question_id'],
                                     'question_type': sample['question_type'],
                                     'answers': [''.join(strr123)],
                                     'entity_answers': [[]],
                                     'yesno_answers': []})
            #print 'pred_answer: '
            #print pred_answers
            if len(ref_answers) > 0:
                pred_dict, ref_dict = {}, {}
                for pred, ref in zip(pred_answers, ref_answers):
                    question_id = ref['question_id']
                    if len(ref['answers']) > 0:
                        pred_dict[question_id] = normalize(pred['answers'])
                        ref_dict[question_id] = normalize(ref['answers'])
                    #print '========compare======='
                    #print pred_dict[question_id]
                    #print '----------------------'
                    #print ref_dict[question_id]
                #print '========compare 2======='
                #print pred_dict
                #print '----------------------'
                #print ref_dict
                bleu_rouge = compute_bleu_rouge(pred_dict, ref_dict)
            else:
                bleu_rouge = None
            value_with_mcts = bleu_rouge
            print 'bleu_rouge(value_with_mcts): '
            print value_with_mcts
            # now use Bleu-4, Rouge-L
            input_v = value_with_mcts['Bleu-4']
            for prob_id, prob_data in enumerate(p_data):
                #print 'p_data: '
                #print prob_id
                #print prob_data
                c = []
                policy = []
                for prob_key, prob_value in prob_data.items():
                    c.append(prob_key)
                    policy.append(prob_value)
                #print 'policy: '
                #print [policy]
                #print 'value: '
                #print [value_with_mcts]
                #print 'candidate: '
                #print c
                if prob_id == 0:
                    feed_dict = dict(self.feed_dict.items() + {self.policy: [policy], self.v: [[input_v]]}.items())
                    _, loss = self.sess.run([self.optimizer_first, self.loss_first], feed_dict=feed_dict)
                else:
                    feed_dict = dict(self.feed_dict.items() + {self.selected_id_list: listSelectedSet[:prob_id], self.candidate_id: c,
                                                               self.policy: [policy], self.v: [[input_v]]}.items())
                    _, loss = self.sess.run([self.optimizer, self.loss], feed_dict=feed_dict)
            total_loss += loss * len(batch['raw_data'])
            total_num += len(batch['raw_data'])
            n_batch_loss += loss
            if log_every_n_batch > 0 and bitx % log_every_n_batch == 0:
                self.logger.info('Average loss from batch {} to {} is {}'.format(
                    bitx - log_every_n_batch + 1, bitx, n_batch_loss / log_every_n_batch))
                n_batch_loss = 0
        return 1.0 * total_loss / total_num
def train(self, data, epochs, batch_size, save_dir, save_prefix,
dropout_keep_prob=1.0, evaluate=True):
"""
Train the model with data
Args:
data: the BRCDataset class implemented in dataset.py
epochs: number of training epochs
batch_size:
save_dir: the directory to save the model
save_prefix: the prefix indicating the model type
dropout_keep_prob: float value indicating dropout keep probability
evaluate: whether to evaluate the model on test set after each epoch
"""
pad_id = self.vocab.get_id(self.vocab.pad_token)
print 'pad_id is '
print pad_id
max_bleu_4 = 0
for epoch in range(1, epochs + 1):
self.logger.info('Training the model for epoch {}'.format(epoch))
epoch_start_time = time.time()
train_batches = data.gen_batches('train', 3, pad_id, shuffle=True)
#mctree = MCtree(train_batches)
#mctree.search()
pmct = PSCHTree(self.args,self.vocab)
result = self._train_epoch_new(pmct, train_batches, batch_size, dropout_keep_prob)
epoch_end_time = time.time()
self.logger.info('Train time for epoch {} is {} min'.format(epoch, str((epoch_end_time - epoch_start_time)/60)))
#train_batches = data.gen_mini_batches('train', batch_size, pad_id, shuffle=True)
#result = self._train_epoch(train_batches, dropout_keep_prob)
#self.logger.info('Average train loss for epoch {} is {}'.format(epoch, result))
#self.save(save_dir, save_prefix + '_' + str(epoch))
'''
if evaluate:
self.logger.info('Evaluating the model after epoch {}'.format(epoch))
if data.dev_set is not None:
eval_batches = data.gen_mini_batches('dev', batch_size, pad_id, shuffle=False)
bleu_rouge = self.evaluate(eval_batches)
#eval_loss, bleu_rouge = self.evaluate(eval_batches)
#self.logger.info('Dev eval loss {}'.format(eval_loss))
self.logger.info('Dev eval result: {}'.format(bleu_rouge))
if bleu_rouge['Bleu-4'] > max_bleu_4:
self.save(save_dir, save_prefix)
max_bleu_4 = bleu_rouge['Bleu-4']
else:
self.logger.warning('No dev set is loaded for evaluation in the dataset!')
else:
self.save(save_dir, save_prefix + '_' + str(epoch))
'''
def evaluate(self, eval_batches, result_dir=None, result_prefix=None, save_full_info=False):
"""
        Evaluates the model performance on eval_batches; results are saved if result_dir and result_prefix are specified
Args:
eval_batches: iterable batch data
result_dir: directory to save predicted answers, answers will not be saved if None
result_prefix: prefix of the file for saving predicted answers,
answers will not be saved if None
save_full_info: if True, the pred_answers will be added to raw sample and saved
"""
pred_answers, ref_answers = [], []
total_loss, total_num = 0, 0
for b_itx, batch in enumerate(eval_batches):
pred_answers, ref_answers = [], []
print '------ evaluate Batch Question: ' + str(b_itx)
for sample in batch['raw_data']:
if 'answers' in sample:
ref_answers.append({'question_id': sample['question_id'],
'question_type': sample['question_type'],
'answers': sample['answers'],
'entity_answers': [[]],
'yesno_answers': []})
print 'ref_answers: '
print ref_answers
print batch['padded_p_len']
p_words_id = [] # all words id
p_words_list = [] # all words
l_passages = 1 # include end_pad
n = 0
for l in batch['passage_length']:
l_passages += l
temp_id = [i + n * (int(batch['padded_p_len'])) for i in range(l)]
# print temp
temp_w = batch['passage_token_ids'][n][:l]
n += 1
p_words_id += temp_id
p_words_list += temp_w
p_words_list.append(0)
# print p_words_id[-1]
# print p_words_id
self.end_pad = []
self.end_pad.append(p_words_id[-1] + 1)
p_words_id.append(p_words_id[-1] + 1)
print 'end_pad: '
print self.end_pad
# print p_words_list
# print p_words_id
listSelectedSet_id = []
self.max_a_len = min(self.max_a_len, l_passages)
feed_dict = {self.p: batch['passage_token_ids'],
self.q: [batch['question_token_ids'][0]],
self.p_length: batch['passage_length'],
self.q_length: [batch['question_length'][0]],
self.p_words_id: p_words_id,
self.dropout_keep_prob: 1.0}
# policy
for tt in xrange(3):
#for tt in xrange(self.max_a_len):
max_id = float('-inf')
policy_c_id = []
print listSelectedSet_id
#listSelectedSet_id = map(eval, listSelectedSet_id)
for can in listSelectedSet_id:
max_id = max(can, max_id)
for idx in range(l_passages):
if idx > max_id:
policy_c_id.append(idx)
if len(listSelectedSet_id) == 0:
pred_id = self.sess.run(self.prob_id_first, feed_dict=feed_dict)
else:
feed_dict = dict({self.selected_id_list: listSelectedSet_id,
self.candidate_id: policy_c_id}.items() + feed_dict.items())
pred_id = self.sess.run(self.prob_id, feed_dict=feed_dict)
listSelectedSet_id.append(pred_id)
#print 'pred_id:'
#print pred_id
if pred_id in self.end_pad:
print 'break!!!!!!!!!!!'
break
'''
# value function
listSelectedSet_id_value = []
listSelectedSet_id_value = map(eval, listSelectedSet_id_value)
max_one_value_pred_test = float("-inf")
one_doc_pred_test = ''
for ttt in xrange(1):
# for tt in xrange(self.max_a_len):
# print words_list
candidate_list = []
for can in listSelectedSet_id:
max_id = max(can, max_id)
for idx in range(l_passages):
if idx > max_id:
candidate_list.append(idx)
for w_id in candidate_list:
c_tmp = listSelectedSet_id_value
c_tmp = c_tmp.append(w_id)
if len(listSelectedSet_id_value) == 0:
value_p = self.sess.run(self.value_first, feed_dict=self.feed_dict)
else:
feed_dict = dict({self.selected_id_list: listSelectedSet_id_value}.items() + self.feed_dict.items())
value_p = self.sess.run(self.value, feed_dict=feed_dict)
one_doc_value_pred_test = value_p
if one_doc_value_pred_test > max_one_value_pred_test:
one_doc_pred_test = w_id
max_one_value_pred_test = one_doc_value_pred_test
listSelectedSet_id_value.append(one_doc_pred_test)
if one_doc_pred_test in self.end_pad:
break
print 'break!!!!!!!!!!!'
#listSelectedSet_id or listSelectedSet_id_value
listSelectedSet_words = []
listSelectedSet = map(eval, listSelectedSet_id)
for idx in listSelectedSet:
listSelectedSet_words.append(p_words_list[idx])
print 'listSelectedSet:'
print listSelectedSet
print 'listSelectedSet_words: '
print listSelectedSet_words
'''
for sample in batch['raw_data']:
#print 'str:'
strr123 = self.vocab.recover_from_ids(listSelectedSet_id, 0)
#print strr123
pred_answers.append({'question_id': sample['question_id'],
'question_type': sample['question_type'],
'answers': [strr123],
'entity_answers': [[]],
'yesno_answers': []})
print 'pred_answer: '
print pred_answers
if len(ref_answers) > 0:
#print ref_answers
#print ref_answers
pred_dict, ref_dict = {}, {}
for pred, ref in zip(pred_answers, ref_answers):
question_id = ref['question_id']
if len(ref['answers']) > 0:
pred_dict[question_id] = normalize(pred['answers'])
ref_dict[question_id] = normalize(ref['answers'])
bleu_rouge = compute_bleu_rouge(pred_dict, ref_dict)
else:
bleu_rouge = None
value_with_mcts = bleu_rouge
print 'bleu_rouge(value_with_mcts): '
print value_with_mcts
return value_with_mcts
def find_best_answer(self, sample, start_prob, end_prob, padded_p_len):
"""
Finds the best answer for a sample given start_prob and end_prob for each position.
This will call find_best_answer_for_passage because there are multiple passages in a sample
"""
best_p_idx, best_span, best_score = None, None, 0
for p_idx, passage in enumerate(sample['passages']):
if p_idx >= self.max_p_num:
continue
passage_len = min(self.max_p_len, len(passage['passage_tokens']))
answer_span, score = self.find_best_answer_for_passage(
start_prob[p_idx * padded_p_len: (p_idx + 1) * padded_p_len],
end_prob[p_idx * padded_p_len: (p_idx + 1) * padded_p_len],
passage_len)
if score > best_score:
best_score = score
best_p_idx = p_idx
best_span = answer_span
        if best_p_idx is None or best_span is None:
            # no span scored above zero; fall back to an empty answer instead of indexing None
            return ''
        best_answer = ''.join(
            sample['passages'][best_p_idx]['passage_tokens'][best_span[0]: best_span[1] + 1])
        return best_answer
def find_best_answer_for_passage(self, start_probs, end_probs, passage_len=None):
"""
Finds the best answer with the maximum start_prob * end_prob from a single passage
"""
if passage_len is None:
passage_len = len(start_probs)
else:
passage_len = min(len(start_probs), passage_len)
best_start, best_end, max_prob = -1, -1, 0
for start_idx in range(passage_len):
for ans_len in range(self.max_a_len):
end_idx = start_idx + ans_len
if end_idx >= passage_len:
continue
prob = start_probs[start_idx] * end_probs[end_idx]
if prob > max_prob:
best_start = start_idx
best_end = end_idx
max_prob = prob
return (best_start, best_end), max_prob
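In isolation, the scan in `find_best_answer_for_passage` is an argmax over spans of bounded length, maximizing `start_prob * end_prob`. A standalone Python 3 sketch (`best_span` is a hypothetical helper, not part of this class):

```python
def best_span(start_probs, end_probs, max_a_len):
    """Return ((start, end), prob) maximizing start*end over spans shorter than max_a_len."""
    best_se, best_p = (-1, -1), 0.0
    for s, sp in enumerate(start_probs):
        # end index ranges over [s, s + max_a_len) but must stay inside the passage
        for e in range(s, min(s + max_a_len, len(end_probs))):
            p = sp * end_probs[e]
            if p > best_p:
                best_se, best_p = (s, e), p
    return best_se, best_p

span, prob = best_span([0.1, 0.6, 0.3], [0.2, 0.1, 0.7], max_a_len=2)
```

With these toy probabilities the best span is (1, 2) with probability 0.6 * 0.7 = 0.42, matching what the quadratic scan above would pick.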
def save(self, model_dir, model_prefix):
"""
Saves the model into model_dir with model_prefix as the model indicator
"""
self.saver.save(self.sess, os.path.join(model_dir, model_prefix))
self.logger.info('Model saved in {}, with prefix {}.'.format(model_dir, model_prefix))
def restore(self, model_dir, model_prefix):
"""
        Restores the model from model_dir with model_prefix as the model indicator
"""
self.saver.restore(self.sess, os.path.join(model_dir, model_prefix))
self.logger.info('Model restored from {}, with prefix {}'.format(model_dir, model_prefix))
#
# PySNMP MIB module ITOUCH-RADIUS-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/ITOUCH-RADIUS-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 19:47:05 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
Integer, OctetString, ObjectIdentifier = mibBuilder.importSymbols("ASN1", "Integer", "OctetString", "ObjectIdentifier")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsUnion, ValueSizeConstraint, SingleValueConstraint, ConstraintsIntersection, ValueRangeConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsUnion", "ValueSizeConstraint", "SingleValueConstraint", "ConstraintsIntersection", "ValueRangeConstraint")
iTouch, = mibBuilder.importSymbols("ITOUCH-MIB", "iTouch")
NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "NotificationGroup", "ModuleCompliance")
Counter64, Integer32, Gauge32, ModuleIdentity, Unsigned32, Counter32, TimeTicks, NotificationType, iso, ObjectIdentity, MibScalar, MibTable, MibTableRow, MibTableColumn, Bits, IpAddress, MibIdentifier = mibBuilder.importSymbols("SNMPv2-SMI", "Counter64", "Integer32", "Gauge32", "ModuleIdentity", "Unsigned32", "Counter32", "TimeTicks", "NotificationType", "iso", "ObjectIdentity", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Bits", "IpAddress", "MibIdentifier")
DisplayString, TextualConvention = mibBuilder.importSymbols("SNMPv2-TC", "DisplayString", "TextualConvention")
xRadius = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35))
xRadiusPort = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 1))
xRadiusCircuit = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 2))
xRadiusConfig = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 3))
xRadiusServers = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 4))
xRadiusCounters = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 5))
xRadiusPortTable = MibTable((1, 3, 6, 1, 4, 1, 33, 35, 1, 1), )
if mibBuilder.loadTexts: xRadiusPortTable.setStatus('mandatory')
xRadiusPortEntry = MibTableRow((1, 3, 6, 1, 4, 1, 33, 35, 1, 1, 1), ).setIndexNames((0, "ITOUCH-RADIUS-MIB", "xRadiusPortIndex"))
if mibBuilder.loadTexts: xRadiusPortEntry.setStatus('mandatory')
xRadiusPortIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 33, 35, 1, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusPortIndex.setStatus('mandatory')
xRadiusPortStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 33, 35, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("disabled", 1), ("enabled", 2))).clone('disabled')).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusPortStatus.setStatus('mandatory')
xRadiusPortSolicitStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 33, 35, 1, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("disabled", 1), ("enabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusPortSolicitStatus.setStatus('mandatory')
xRadiusAcctPortStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 33, 35, 1, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("disabled", 1), ("enabled", 2), ("limited", 3))).clone('disabled')).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusAcctPortStatus.setStatus('mandatory')
xRadiusCircuitTable = MibTable((1, 3, 6, 1, 4, 1, 33, 35, 2, 1), )
if mibBuilder.loadTexts: xRadiusCircuitTable.setStatus('mandatory')
xRadiusCircuitEntry = MibTableRow((1, 3, 6, 1, 4, 1, 33, 35, 2, 1, 1), ).setIndexNames((0, "ITOUCH-RADIUS-MIB", "xRadiusCircuitIndex"))
if mibBuilder.loadTexts: xRadiusCircuitEntry.setStatus('mandatory')
xRadiusCircuitIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 33, 35, 2, 1, 1, 1), Integer32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusCircuitIndex.setStatus('mandatory')
xRadiusCircAcctOnOff = MibTableColumn((1, 3, 6, 1, 4, 1, 33, 35, 2, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("disabled", 1), ("enabled", 2), ("limited", 3))).clone('disabled')).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusCircAcctOnOff.setStatus('mandatory')
xRadiusAuthServerPort = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535)).clone(1645)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusAuthServerPort.setStatus('mandatory')
xRadiusAcctServerPort = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535)).clone(1646)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusAcctServerPort.setStatus('mandatory')
xRadiusTimeout = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 255)).clone(5)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusTimeout.setStatus('mandatory')
xRadiusServerRetries = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 10)).clone(3)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusServerRetries.setStatus('mandatory')
xRadiusAcctLogAttempts = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 50000)).clone(5)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusAcctLogAttempts.setStatus('mandatory')
xRadiusChapChallengeSize = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(4, 128)).clone(16)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusChapChallengeSize.setStatus('mandatory')
xRadiusLogging = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("disabled", 1), ("enabled", 2))).clone('disabled')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusLogging.setStatus('mandatory')
xRadiusMessage = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 3, 8), DisplayString().subtype(subtypeSpec=ValueSizeConstraint(40, 40)).setFixedLength(40)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusMessage.setStatus('mandatory')
xRadServer1SubGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 4, 1))
xRadServer2SubGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 4, 2))
xRadiusServerName1 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 1, 1), OctetString().subtype(subtypeSpec=ValueSizeConstraint(51, 51)).setFixedLength(51)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusServerName1.setStatus('mandatory')
xRadiusSecret1 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 1, 2), OctetString().subtype(subtypeSpec=ValueSizeConstraint(32, 32)).setFixedLength(32).clone('Default_Secret')).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusSecret1.setStatus('obsolete')
xRadiusServerAccess1 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusServerAccess1.setStatus('mandatory')
xRadiusServerAccessFailed1 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusServerAccessFailed1.setStatus('mandatory')
xRadiusServerName2 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 2, 1), OctetString().subtype(subtypeSpec=ValueSizeConstraint(51, 51)).setFixedLength(51)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: xRadiusServerName2.setStatus('mandatory')
xRadiusSecret2 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 2, 2), OctetString().subtype(subtypeSpec=ValueSizeConstraint(32, 32)).setFixedLength(32).clone('Default_Secret')).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusSecret2.setStatus('obsolete')
xRadiusServerAccess2 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 2, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusServerAccess2.setStatus('mandatory')
xRadiusServerAccessFailed2 = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 4, 2, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusServerAccessFailed2.setStatus('mandatory')
xRadAuthCtsSubGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 5, 1))
xRadAcctCtsSubGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 33, 35, 5, 2))
xRadiusLogins = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 1, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusLogins.setStatus('mandatory')
xRadiusLoginsFailed = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 1, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusLoginsFailed.setStatus('mandatory')
xRadiusConfigFailed = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 1, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusConfigFailed.setStatus('mandatory')
xRadiusPolicyFailed = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 1, 4), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusPolicyFailed.setStatus('mandatory')
xRadiusAcctSuccess = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 2, 1), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusAcctSuccess.setStatus('mandatory')
xRadiusAcctFailed = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 2, 2), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusAcctFailed.setStatus('mandatory')
xRadiusAcctReqWait = MibScalar((1, 3, 6, 1, 4, 1, 33, 35, 5, 2, 3), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: xRadiusAcctReqWait.setStatus('mandatory')
mibBuilder.exportSymbols("ITOUCH-RADIUS-MIB", xRadiusConfigFailed=xRadiusConfigFailed, xRadiusLogging=xRadiusLogging, xRadiusCounters=xRadiusCounters, xRadiusAcctPortStatus=xRadiusAcctPortStatus, xRadiusPortIndex=xRadiusPortIndex, xRadiusChapChallengeSize=xRadiusChapChallengeSize, xRadiusCircuitTable=xRadiusCircuitTable, xRadiusCircuitEntry=xRadiusCircuitEntry, xRadiusAcctServerPort=xRadiusAcctServerPort, xRadiusMessage=xRadiusMessage, xRadiusAcctLogAttempts=xRadiusAcctLogAttempts, xRadServer2SubGrp=xRadServer2SubGrp, xRadius=xRadius, xRadiusServerAccess1=xRadiusServerAccess1, xRadiusServerAccessFailed2=xRadiusServerAccessFailed2, xRadiusCircuitIndex=xRadiusCircuitIndex, xRadiusServerAccess2=xRadiusServerAccess2, xRadAcctCtsSubGrp=xRadAcctCtsSubGrp, xRadiusLoginsFailed=xRadiusLoginsFailed, xRadiusAcctSuccess=xRadiusAcctSuccess, xRadiusServerName2=xRadiusServerName2, xRadiusTimeout=xRadiusTimeout, xRadiusAcctReqWait=xRadiusAcctReqWait, xRadServer1SubGrp=xRadServer1SubGrp, xRadiusPort=xRadiusPort, xRadiusPortTable=xRadiusPortTable, xRadiusPortSolicitStatus=xRadiusPortSolicitStatus, xRadiusServerAccessFailed1=xRadiusServerAccessFailed1, xRadiusCircAcctOnOff=xRadiusCircAcctOnOff, xRadiusLogins=xRadiusLogins, xRadiusAcctFailed=xRadiusAcctFailed, xRadiusPolicyFailed=xRadiusPolicyFailed, xRadiusConfig=xRadiusConfig, xRadiusCircuit=xRadiusCircuit, xRadiusServers=xRadiusServers, xRadAuthCtsSubGrp=xRadAuthCtsSubGrp, xRadiusSecret2=xRadiusSecret2, xRadiusServerRetries=xRadiusServerRetries, xRadiusServerName1=xRadiusServerName1, xRadiusPortStatus=xRadiusPortStatus, xRadiusAuthServerPort=xRadiusAuthServerPort, xRadiusPortEntry=xRadiusPortEntry, xRadiusSecret1=xRadiusSecret1)
# Poker hands
import os
import numpy as np
def solve():
data = np.loadtxt(os.path.join("..", "data", "p054.txt"),
delimiter=" ",
dtype=str)
player1 = data[:, :5]
player2 = data[:, 5:]
return sum(
score(player1[game]) > score(player2[game]) for game in range(1_000))
def score(hand):
"""Return a number representing the score of a particular hand"""
ROYAL_FLUSH = ["T", "J", "Q", "K", "A"]
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K", "A"]
# Test for royal flush
failed = False
if len(set(card[1] for card in hand)) == 1:
for card in hand:
if card[0] not in ROYAL_FLUSH:
failed = True
else:
failed = True
if not failed:
return 9.13
# Test for straight flush
failed = False
if len(set(card[1] for card in hand)) == 1:
indexes = []
for card in hand:
indexes.append(RANKS.index(card[0]))
if max(indexes) - min(indexes) != 4:
failed = True
else:
val1 = max(indexes) + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
else:
failed = True
if not failed:
return float(f"8.{val1}")
# Test for four of a kind
failed = False
numbers = []
for card in hand:
numbers.append(card[0])
mode = max(set(numbers), key=numbers.count)
if numbers.count(mode) != 4:
failed = True
else:
val1 = RANKS.index(mode) + 1
val2 = RANKS.index([x for x in numbers if x != mode][0]) + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
if not failed:
return float(f"7.{val1}{val2}")
# Test for full house
failed = False
if len(set(card[0] for card in hand)) == 2:
mode = max(set(numbers), key=numbers.count)
val1 = RANKS.index(mode) + 1
val2 = RANKS.index([x for x in numbers if x != mode][0]) + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
else:
failed = True
if not failed:
return float(f"6.{val1}{val2}")
# Test for flush
failed = False
if len(set(card[1] for card in hand)) == 1:
numbers = [card[0] for card in hand]
numbers = sorted(map(RANKS.index, numbers))
val1 = numbers[4] + 1
val2 = numbers[3] + 1
val3 = numbers[2] + 1
val4 = numbers[1] + 1
val5 = numbers[0] + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
if len(str(val3)) == 1:
val3 = f"0{val3}"
if len(str(val4)) == 1:
val4 = f"0{val4}"
if len(str(val5)) == 1:
val5 = f"0{val5}"
else:
failed = True
if not failed:
return float(f"5.{val1}{val2}{val3}{val4}{val5}")
# Test for straight
failed = False
if len(set(card[0] for card in hand)) != 5:
failed = True
else:
numbers = [card[0] for card in hand]
numbers = sorted(map(RANKS.index, numbers))
if numbers[4] - numbers[0] != 4:
failed = True
else:
val1 = numbers[4] + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if not failed:
return float(f"4.{val1}")
# Test for three of a kind
failed = False
numbers = [card[0] for card in hand]
mode = max(set(numbers), key=numbers.count)
if numbers.count(mode) != 3:
failed = True
else:
val1 = RANKS.index(mode) + 1
numbers = sorted(map(RANKS.index, [x for x in numbers if x != mode]))
val2 = numbers[1] + 1
val3 = numbers[0] + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
if len(str(val3)) == 1:
val3 = f"0{val3}"
if not failed:
return float(f"3.{val1}{val2}{val3}")
# Test for two pairs
failed = False
if len(set(card[0] for card in hand)) == 3:
numbers = [card[0] for card in hand]
numbers = map(RANKS.index, numbers)
numbers = sorted(numbers, reverse=True)
numbers = sorted(numbers, key=numbers.count, reverse=True)
val1 = numbers[0] + 1
val2 = numbers[2] + 1
val3 = numbers[4] + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
if len(str(val3)) == 1:
val3 = f"0{val3}"
else:
failed = True
if not failed:
return float(f"2.{val1}{val2}{val3}")
# Test for one pair
failed = False
if len(set(card[0] for card in hand)) == 4:
numbers = [card[0] for card in hand]
numbers = map(RANKS.index, numbers)
numbers = sorted(numbers, reverse=True)
numbers = sorted(numbers, key=numbers.count, reverse=True)
val1 = numbers[0] + 1
val2 = numbers[2] + 1
val3 = numbers[3] + 1
val4 = numbers[4] + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
if len(str(val3)) == 1:
val3 = f"0{val3}"
if len(str(val4)) == 1:
val4 = f"0{val4}"
else:
failed = True
if not failed:
return float(f"1.{val1}{val2}{val3}{val4}")
# Highest card
numbers = [card[0] for card in hand]
numbers = map(RANKS.index, numbers)
numbers = sorted(numbers, reverse=True)
val1 = numbers[0] + 1
val2 = numbers[1] + 1
val3 = numbers[2] + 1
val4 = numbers[3] + 1
val5 = numbers[4] + 1
if len(str(val1)) == 1:
val1 = f"0{val1}"
if len(str(val2)) == 1:
val2 = f"0{val2}"
if len(str(val3)) == 1:
val3 = f"0{val3}"
if len(str(val4)) == 1:
val4 = f"0{val4}"
if len(str(val5)) == 1:
val5 = f"0{val5}"
return float(f"0.{val1}{val2}{val3}{val4}{val5}")
if __name__ == "__main__":
print(solve())
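The fractional string encoding used by `score` (e.g. `float(f"8.{val1}")`) works but is easy to break once tiebreakers need more digits. A common alternative, sketched below for the same two-character card format, is to return a tuple `(category, tiebreakers)` and let Python compare tuples lexicographically (ace-low straights are ignored, as in the file above):

```python
RANKS = "23456789TJQKA"

def hand_key(hand):
    """Sortable key for a 5-card hand: (category, ranks ordered by multiplicity then value)."""
    ranks = sorted((RANKS.index(c[0]) for c in hand), reverse=True)
    counts = {r: ranks.count(r) for r in set(ranks)}
    # Pairs/trips/quads sort ahead of kickers; within equal counts, higher rank first.
    tiebreak = sorted(ranks, key=lambda r: (counts[r], r), reverse=True)
    flush = len(set(c[1] for c in hand)) == 1
    straight = len(counts) == 5 and ranks[0] - ranks[4] == 4
    shape = sorted(counts.values(), reverse=True)
    if straight and flush:
        cat = 8          # covers royal flush as the highest straight flush
    elif shape[0] == 4:
        cat = 7
    elif shape == [3, 2]:
        cat = 6
    elif flush:
        cat = 5
    elif straight:
        cat = 4
    elif shape[0] == 3:
        cat = 3
    elif shape[:2] == [2, 2]:
        cat = 2
    elif shape[0] == 2:
        cat = 1
    else:
        cat = 0
    return (cat, tiebreak)
```

Two hands then compare with plain `>`, and all the zero-padding blocks in `score` become unnecessary.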
# flake8: noqa
"""The Replit Python module."""
from . import web
from .audio import Audio
from .database import db, Database
# Backwards compatibility.
def clear() -> None:
"""Clear the terminal."""
print("\033[H\033[2J", end="", flush=True)
audio = Audio()
hyper_params = {
'weight_decay': float(1e-6),
'epochs': 30,
'batch_size': 256,
'validate_every': 3,
'early_stop': 3,
'max_seq_len': 10,
}
# Author: Patrick Blanchard
# Date: 12-13-16
# Description: This is a program to help me learn L-Systems
# It should generate strings based on the rules applied.
class system:
axiom = "A"
sentence = axiom
rules = []
rules.append({
"A":"A",
"B":"AB"
})
rules.append({
"A": "B",
"B": "A"
})
def generate(self):
nextSentence = ""
for i in range(0,len(self.sentence)):
current = self.sentence[i]
found = False
for j in range(0, len(self.rules)):
if(current == self.rules[j].get("A")):
found = True
nextSentence += self.rules[j].get("B")
break
if found == False:
nextSentence += current
self.sentence = nextSentence
print self.sentence
def main(self):
print "Press enter to generate.\nType exit to exit."
print self.axiom
while True:
state = raw_input()
if(state == "exit"):
break
self.generate()
a = system()
a.main()
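The same rewrite loop can be expressed compactly in Python 3 with a dict of productions — a sketch of the classic algae L-system the class above encodes, not a drop-in replacement for its interface:

```python
def lsystem(axiom, rules, steps):
    """Apply context-free productions in parallel for a number of steps."""
    s = axiom
    for _ in range(steps):
        # characters with no production are copied through unchanged
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A.
algae = {"A": "AB", "B": "A"}
```

The string lengths follow the Fibonacci sequence (1, 2, 3, 5, 8, 13, ...), which is a quick sanity check on the rewrite loop.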
import datetime as dt
from re import T
from sqlalchemy.schema import Column
from sqlalchemy.types import String, DateTime
from uuid import UUID, uuid4
import bigfastapi.db.database as database
class Role(database.Base):
__tablename__ = "roles"
    # default must be a callable: a bare uuid4().hex is evaluated once at import,
    # giving every row the same id
    id = Column(String(255), primary_key=True, index=True, default=lambda: uuid4().hex)
organization_id = Column(String(255), index=True)
    role_name = Column(String(255), index=True)
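One pitfall with column defaults: SQLAlchemy's `default=` should usually be given a callable (e.g. `default=lambda: uuid4().hex`) so it is re-evaluated per row; a pre-computed `uuid4().hex` is fixed at class-definition time. A plain-Python sketch of the difference (hypothetical helpers, no SQLAlchemy required):

```python
from uuid import uuid4

# Evaluated once, when the default is defined: every caller sees the same value.
SHARED = uuid4().hex

def bad_default(value=SHARED):
    return value

# A callable is evaluated per use, yielding a fresh value each time.
def good_default(factory=lambda: uuid4().hex):
    return factory()

a, b = good_default(), good_default()
```

The same evaluation-time distinction is why SQLAlchemy documents passing a function (or `lambda`) for per-row generated defaults.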
import sys
n1 = int(sys.argv[1])
n2 = int(sys.argv[2])
if n1 + n2 <= 0:
print('You have chosen the path of destitution.')
elif 1 <= (n1 + n2) <= 100:
print('You have chosen the path of plenty.')
else:
print('You have chosen the path of excess.')
| 20.076923 | 53 | 0.62069 | 46 | 261 | 3.521739 | 0.5 | 0.148148 | 0.222222 | 0.333333 | 0.5 | 0.5 | 0.5 | 0 | 0 | 0 | 0 | 0.064356 | 0.226054 | 261 | 12 | 54 | 21.75 | 0.737624 | 0 | 0 | 0 | 0 | 0 | 0.421456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.333333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
84e19ba2c6107db9072d391fee40d05418fcecf9 | 6,329 | py | Python | roles/openshift_openstack/library/os_service_catalog.py | Roscoe198/Ansible-Openshift | b874bef456852ef082a27dfec4f2d7d466702370 | [
"Apache-2.0"
] | 164 | 2015-07-29T17:35:04.000Z | 2021-12-16T16:38:04.000Z | roles/openshift_openstack/library/os_service_catalog.py | Roscoe198/Ansible-Openshift | b874bef456852ef082a27dfec4f2d7d466702370 | [
"Apache-2.0"
] | 3,634 | 2015-06-09T13:49:15.000Z | 2022-03-23T20:55:44.000Z | roles/openshift_openstack/library/os_service_catalog.py | Roscoe198/Ansible-Openshift | b874bef456852ef082a27dfec4f2d7d466702370 | [
"Apache-2.0"
] | 250 | 2015-06-08T19:53:11.000Z | 2022-03-01T04:51:23.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright 2018 Red Hat, Inc. and/or its affiliates
# and other contributors as indicated by the @author tags.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=unused-wildcard-import,wildcard-import,unused-import,redefined-builtin
''' os_service_catalog_facts '''
from ansible.module_utils.basic import AnsibleModule
try:
import shade
HAS_SHADE = True
except ImportError:
HAS_SHADE = False
DOCUMENTATION = '''
---
module: os_service_catalog_facts
short_description: Retrieve OpenStack service catalog facts
description:
- Retrieves all the available OpenStack services
notes:
- This module creates a new top-level C(openstack_service_catalog) fact
which contains a dictionary of OpenStack service endpoints like
network and load-balancers.
author:
- "Antoni Segura Puimedon <antoni@redhat.com>"
'''
RETURN = '''
openstack_service_catalog:
description: OpenStack available services.
type: dict
returned: always
sample:
alarming:
- adminURL: http://172.16.0.9:8042
id: 2c40b50da0bb44178db91c8a9a29a46e
internalURL: http://172.16.0.9:8042
publicURL: https://mycloud.org:13042
region: regionOne
cloudformation:
- adminURL: http://172.16.0.9:8000/v1
id: 46648eded04e463281a9cba7ddcc45cb
internalURL: http://172.16.0.9:8000/v1
publicURL: https://mycloud.org:13005/v1
region: regionOne
compute:
- adminURL: http://172.16.0.9:8774/v2.1
id: bff1bc5dd92842c281b2358a6d15c5bc
internalURL: http://172.16.0.9:8774/v2.1
publicURL: https://mycloud.org:13774/v2.1
region: regionOne
event:
- adminURL: http://172.16.0.9:8779
id: 608ac3666ef24f2e8f240785b8612efb
internalURL: http://172.16.0.9:8779
publicURL: https://mycloud.org:13779
region: regionOne
identity:
- adminURL: https://mycloud.org:35357
id: 4d07689ce46b4d51a01cc873bc772c80
internalURL: http://172.16.0.9:5000
publicURL: https://mycloud.org:13000
region: regionOne
image:
- adminURL: http://172.16.0.9:9292
id: 1850105115ea493eb65f3f704d421291
internalURL: http://172.16.0.9:9292
publicURL: https://mycloud.org:13292
region: regionOne
metering:
- adminURL: http://172.16.0.9:8777
id: 4cae4dcabe0a4914a6ec6dabd62490ba
internalURL: http://172.16.0.9:8777
publicURL: https://mycloud.org:13777
region: regionOne
metric:
- adminURL: http://172.16.0.9:8041
id: 29bcecf9a06f40f782f19dd7492af352
internalURL: http://172.16.0.9:8041
publicURL: https://mycloud.org:13041
region: regionOne
network:
- adminURL: http://172.16.0.9:9696
id: 5d5785c9b8174c21bfb19dc3b16c87fa
internalURL: http://172.16.0.9:9696
publicURL: https://mycloud.org:13696
region: regionOne
object-store:
- adminURL: http://172.17.0.9:8080
id: 031f1e342fdf4f25b6099d1f3b0847e3
internalURL: http://172.17.0.9:8080/v1/AUTH_6d2847d6a6414308a67644eefc7b98c7
publicURL: https://mycloud.org:13808/v1/AUTH_6d2847d6a6414308a67644eefc7b98c7
region: regionOne
orchestration:
- adminURL: http://172.16.0.9:8004/v1/6d2847d6a6414308a67644eefc7b98c7
id: 1e6cecbd15b3413d9411052c52b9d433
internalURL: http://172.16.0.9:8004/v1/6d2847d6a6414308a67644eefc7b98c7
publicURL: https://mycloud.org:13004/v1/6d2847d6a6414308a67644eefc7b98c7
region: regionOne
placement:
- adminURL: http://172.16.0.9:8778/placement
id: 1f2551e5450c4bd6a9f716f92e93a154
internalURL: http://172.16.0.9:8778/placement
publicURL: https://mycloud.org:13778/placement
region: regionOne
volume:
- adminURL: http://172.16.0.9:8776/v1/6d2847d6a6414308a67644eefc7b98c7
id: 38e369a0e17346fe8e37a20146e005ef
internalURL: http://172.16.0.9:8776/v1/6d2847d6a6414308a67644eefc7b98c7
publicURL: https://mycloud.org:13776/v1/6d2847d6a6414308a67644eefc7b98c7
region: regionOne
volumev2:
- adminURL: http://172.16.0.9:8776/v2/6d2847d6a6414308a67644eefc7b98c7
id: 113a0bff9f2347b6b8774407a1c8d572
internalURL: http://172.16.0.9:8776/v2/6d2847d6a6414308a67644eefc7b98c7
publicURL: https://mycloud.org:13776/v2/6d2847d6a6414308a67644eefc7b98c7
region: regionOne
volumev3:
- adminURL: http://172.16.0.9:8776/v3/6d2847d6a6414308a67644eefc7b98c7
id: 9982c0afd28941a19feb1ffb13b91daf
internalURL: http://172.16.0.9:8776/v3/6d2847d6a6414308a67644eefc7b98c7
publicURL: https://mycloud.org:13776/v3/6d2847d6a6414308a67644eefc7b98c7
region: regionOne
'''
def main():
''' Main module function '''
module = AnsibleModule(argument_spec={}, supports_check_mode=True)
if not HAS_SHADE:
module.fail_json(msg='shade is required for this module')
try:
cloud = shade.openstack_cloud()
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to connect to the cloud')
try:
service_catalog = cloud.cloud_config.get_service_catalog()
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to retrieve the service catalog')
try:
endpoints = service_catalog.get_endpoints()
# pylint: disable=broad-except
except Exception:
module.fail_json(msg='Failed to retrieve the service catalog '
'endpoints')
module.exit_json(
changed=False,
ansible_facts={'openstack_service_catalog': endpoints})
if __name__ == '__main__':
main()
| 35.55618 | 88 | 0.686522 | 721 | 6,329 | 5.966713 | 0.321775 | 0.047187 | 0.056485 | 0.062762 | 0.324965 | 0.311948 | 0.188052 | 0.155742 | 0.056253 | 0.056253 | 0 | 0.208258 | 0.211724 | 6,329 | 177 | 89 | 35.757062 | 0.654039 | 0.140465 | 0 | 0.173913 | 0 | 0 | 0.840422 | 0.107803 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007246 | false | 0 | 0.021739 | 0 | 0.028986 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
84e3c7c02f7dc25ab2a0825c94313b4e836de166 | 28,308 | py | Python | spinoffs/inference_gym/inference_gym/internal/datasets/synthetic_plasma_spectroscopy.py | mederrata/probability | bc6c411b0fbd83141f303f91a27343fe3c43a797 | [
"Apache-2.0"
] | 1 | 2022-03-22T11:56:31.000Z | 2022-03-22T11:56:31.000Z | spinoffs/inference_gym/inference_gym/internal/datasets/synthetic_plasma_spectroscopy.py | robot0102/probability | 89d248c420b8ecabfd9d6de4a1aa8d3886920049 | [
"Apache-2.0"
] | null | null | null | spinoffs/inference_gym/inference_gym/internal/datasets/synthetic_plasma_spectroscopy.py | robot0102/probability | 89d248c420b8ecabfd9d6de4a1aa8d3886920049 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# pylint: disable=line-too-long
r"""Synthetic dataset generated from the PlasmaSpectroscopy model.
This was generated using the following snippet:
```python
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
from inference_gym.internal import array_to_source
from inference_gym import using_tensorflow as gym
import numpy as np
num_sensors = 40
num_wavelengths = 40
wavelengths = np.linspace(0.01, 0.2, num_wavelengths)
center_wavelength = wavelengths.mean()
model = gym.targets.PlasmaSpectroscopy(
tf.zeros((num_wavelengths, num_sensors)),
wavelengths=wavelengths,
center_wavelength=center_wavelength)
sample, dataset = model._sample_dataset(seed=(0, 8))
sources = []
for k, v in sample._asdict().items():
sources.append(
array_to_source.array_to_source(
k.upper(), v))
for k, v in dataset.items():
sources.append(
array_to_source.array_to_source(
k.upper(), v))
with open('/tmp/synthetic_plasma_spectroscopy.py', 'w') as f:
f.write("\n\n".join(sources))
```
Note that the final `_sample_dataset` is not reproducible across
software versions, hence the output is checked in.
"""
import numpy as np
AMPLITUDE = np.array([
1.4802036,
1.8915913,
-0.011120212,
1.1328301,
1.2841645,
0.6033605,
-1.887041,
-2.012894,
0.046582267,
1.5555662,
0.4305847,
-1.7179363,
-1.1399889,
-0.4432812,
-1.4721184,
0.35457477,
]).reshape((16,))
TEMPERATURE = np.array([
1.2321296,
-0.020694781,
-1.3441145,
-0.51342154,
-0.6282792,
-0.22180416,
-1.0089059,
1.4475185,
-1.8519154,
0.5540126,
-1.3644233,
1.5542297,
-0.4033564,
-0.029513652,
-0.14812116,
0.93214256,
]).reshape((16,))
VELOCITY = np.array([
0.010279292,
-1.6109133,
0.85784495,
0.8826037,
0.19365458,
-0.36963812,
1.2059057,
-0.93545884,
0.38819882,
1.6983186,
-1.8130875,
0.94406796,
-0.79738003,
-1.0478632,
-0.38848934,
-0.48529625,
]).reshape((16,))
SHIFT = np.array([
-0.5514385,
]).reshape(())
WAVELENGTHS = np.array([
0.01,
0.014871794871794873,
0.019743589743589744,
0.024615384615384615,
0.029487179487179487,
0.03435897435897436,
0.039230769230769236,
0.04410256410256411,
0.04897435897435898,
0.05384615384615385,
0.05871794871794872,
0.06358974358974359,
0.06846153846153846,
0.07333333333333333,
0.0782051282051282,
0.08307692307692308,
0.08794871794871795,
0.09282051282051282,
0.09769230769230769,
0.10256410256410256,
0.10743589743589743,
0.1123076923076923,
0.11717948717948717,
0.12205128205128205,
0.12692307692307694,
0.13179487179487182,
0.1366666666666667,
0.14153846153846156,
0.14641025641025643,
0.1512820512820513,
0.15615384615384617,
0.16102564102564104,
0.1658974358974359,
0.17076923076923078,
0.17564102564102566,
0.18051282051282053,
0.1853846153846154,
0.19025641025641027,
0.19512820512820514,
0.2,
]).reshape((40,))
CENTER_WAVELENGTH = np.array([
0.10500000000000001,
]).reshape(())
MEASUREMENTS = np.array([
-0.66101485,
0.31644753,
-0.5896422,
0.4764485,
2.1545932,
15.793148,
8.2264805,
6.457074,
5.7062893,
6.1811686,
8.777044,
6.9074125,
7.9522552,
7.701313,
8.559349,
8.296498,
6.1969037,
6.4804926,
6.8852997,
8.830744,
14.376627,
0.54612935,
0.124028,
0.44405863,
0.5131382,
0.5987899,
0.008983987,
-0.24756075,
0.7618118,
-0.21146192,
0.4546959,
0.09494688,
-0.26813537,
0.5798886,
-0.10784844,
0.18372172,
0.8161483,
-0.3787802,
0.61460984,
-0.41957632,
0.13647377,
-0.3481221,
0.03326019,
1.7144626,
3.8620698,
14.40822,
9.046495,
7.6838465,
7.2554746,
8.057631,
11.189637,
9.038466,
8.125581,
8.294034,
10.172681,
11.90528,
7.1925435,
6.708079,
7.6085744,
9.414239,
14.608672,
1.5265317,
1.09792,
0.29970562,
0.29824358,
0.36030084,
-0.37960574,
0.47860667,
0.91203105,
-0.6904322,
-0.2722036,
0.23733543,
-0.6658274,
0.62095886,
0.73466265,
-0.8475226,
-0.1700871,
0.9261157,
0.422822,
0.32836267,
0.58122945,
-0.83155084,
-0.20049855,
-0.040298104,
4.014356,
16.160791,
7.2828264,
7.3377733,
6.665611,
8.653453,
11.973017,
9.656379,
10.9801235,
9.05112,
10.565474,
11.942185,
7.2904882,
7.4630857,
6.514908,
9.644132,
14.969957,
0.07107994,
0.11467081,
0.92357284,
0.04355552,
0.6726098,
-0.15279476,
0.713554,
0.5466241,
-0.38109347,
0.5590394,
0.08306945,
0.9525252,
0.6713458,
0.51892877,
-0.1279359,
-0.15663871,
0.020156374,
-0.060285714,
-1.0264076,
-0.53699505,
-0.9786586,
0.015289649,
1.5724823,
4.0689135,
13.646254,
8.417458,
7.3368583,
6.966266,
8.73208,
14.498494,
10.2102165,
11.423929,
11.351579,
12.9430065,
15.01266,
9.051174,
7.077483,
6.785291,
9.483119,
15.76488,
1.1677985,
1.6693239,
-0.21604359,
0.32284033,
-0.22243214,
0.60323435,
-0.11199745,
0.29957047,
0.006062749,
0.7996792,
0.3094816,
-0.7718058,
0.503415,
0.07231447,
-0.2853677,
0.4330218,
0.844616,
-0.19574685,
-0.3879851,
0.5901966,
0.051313907,
-0.29432508,
1.2537544,
3.1426716,
14.615546,
8.347049,
7.4366584,
6.4491363,
9.865336,
15.843064,
12.469691,
11.894229,
12.133173,
14.63979,
16.16245,
9.504371,
8.017702,
7.867693,
9.518961,
14.380217,
0.66953653,
0.60293055,
0.00082825124,
-0.28320992,
0.8367502,
0.12513764,
0.22053392,
-0.10229007,
-0.20082277,
0.63717407,
0.32739908,
-0.093239225,
-0.80318755,
0.9917766,
0.24838758,
-0.07330545,
0.15537623,
0.09008534,
-0.06607497,
1.0962121,
0.55644095,
0.6913326,
0.9021442,
3.8921309,
14.102233,
7.184174,
7.315026,
7.334084,
10.787065,
19.485243,
13.958044,
14.3500805,
13.616628,
15.63192,
17.07027,
9.131023,
6.8167133,
6.970449,
8.922994,
14.361785,
1.7793398,
0.94775784,
0.105669454,
-0.18747061,
0.6676264,
-0.3883816,
-0.6202498,
-0.0833843,
-0.5216094,
1.1268811,
-0.59910476,
0.39042526,
0.47714886,
-0.7111677,
-0.5756576,
0.9333002,
0.1010186,
0.13677923,
-0.75147396,
1.2583244,
-0.23063457,
0.7901664,
0.24705392,
3.6259048,
12.530731,
6.9297647,
7.079164,
7.2256374,
11.940973,
20.025602,
14.700426,
13.519883,
14.241193,
17.55714,
17.386055,
10.167002,
7.536337,
7.0136056,
9.326938,
12.228463,
0.17775005,
0.8319777,
-0.8991761,
-0.01412341,
0.61705685,
-0.14188325,
-0.41435227,
-0.316557,
-0.5893145,
-0.010637931,
0.20675054,
0.44020182,
-0.7080041,
0.16052538,
-0.48142046,
0.9052833,
0.432698,
0.03338314,
0.35594848,
1.1689888,
0.36019892,
0.23971666,
1.4662509,
3.3352752,
11.360069,
8.300535,
7.5611286,
7.2111707,
17.327162,
20.148909,
17.380922,
17.596447,
14.160338,
19.188683,
17.219112,
10.499862,
8.309862,
6.1963353,
7.3864193,
12.878287,
1.4184926,
1.7496321,
-0.082713336,
0.23216072,
0.20258206,
1.0141679,
0.14271286,
-0.29340488,
-0.055605985,
-0.5336929,
-0.54352623,
0.19902669,
0.12139763,
-0.018293247,
-0.20558693,
-0.8606704,
0.22833318,
0.4463366,
0.20494421,
0.7066752,
-0.62247527,
0.117985666,
1.831157,
3.299585,
9.63925,
7.483565,
7.1289496,
6.4751153,
15.985568,
21.507505,
18.539736,
16.699535,
16.726501,
19.698357,
22.443224,
11.952675,
7.005475,
6.2864413,
8.778635,
10.89195,
0.66351974,
1.1440128,
-0.25076824,
0.66586065,
1.0526825,
0.015522989,
0.07891381,
1.104366,
0.7747889,
0.15351877,
-0.12182697,
-0.59052014,
-0.12581429,
0.5053382,
0.17305401,
0.67090386,
1.036633,
0.05909565,
0.28418896,
0.86726683,
0.1763895,
0.33444333,
1.7197226,
2.5705223,
9.934082,
6.614648,
5.9702163,
7.0940704,
18.322672,
24.886862,
18.648033,
19.174364,
17.071978,
18.935146,
20.495438,
13.39125,
7.1744776,
5.476832,
7.2689962,
10.46958,
1.1804211,
1.0994785,
0.64040864,
0.021063149,
0.75519574,
0.40024444,
-0.48553574,
0.87461084,
-0.23675112,
0.1914608,
-0.49892142,
0.2618199,
0.6261685,
-1.4913763,
0.41756257,
0.5763335,
-0.45616063,
0.38227928,
-0.6692691,
1.8232274,
0.7977414,
0.40125495,
2.787939,
3.2074018,
8.831141,
6.6602535,
7.500632,
8.793667,
18.995548,
23.698793,
18.186054,
17.543282,
18.392523,
20.788607,
24.634804,
14.188387,
8.168461,
5.5740485,
6.8008204,
8.531001,
1.4529983,
2.276989,
1.0289037,
0.9468033,
-0.038641334,
-0.39401633,
-1.1387177,
0.49660775,
0.5171432,
-0.6254447,
1.2226907,
-0.13812594,
0.11419458,
-0.36041245,
0.16572447,
-0.2501292,
-0.95744544,
0.6987992,
0.3099944,
1.108943,
0.41807377,
1.350997,
1.2673455,
3.2821457,
8.0927515,
5.9851384,
4.8361425,
8.642136,
20.54146,
23.320255,
20.936903,
19.881096,
18.084406,
20.986282,
22.538109,
15.849695,
7.59143,
5.759286,
7.9955835,
7.542832,
1.5869404,
2.191163,
-0.0054766536,
0.38372415,
1.4580531,
-0.6341528,
-0.20307654,
-0.82046396,
0.30573404,
0.59632486,
-0.12896755,
-0.42806864,
-0.47942856,
-0.7036555,
0.075889945,
0.29308736,
-1.4974035,
-0.036708307,
-0.43896213,
0.54672736,
1.3562044,
1.5058006,
2.0175235,
3.2622445,
7.817541,
6.1968045,
5.7298784,
8.535798,
22.878216,
23.569859,
21.438442,
20.779306,
18.338245,
23.335554,
23.656643,
16.534071,
7.0056953,
5.3699074,
6.2035737,
6.91238,
1.8461741,
2.0328891,
0.6284174,
0.07324934,
0.72266495,
0.43248987,
0.55657876,
-0.36850226,
0.2892055,
0.120979175,
-0.3255677,
0.18210961,
-0.13677588,
-0.79952997,
-0.16948017,
0.27382505,
0.011414817,
-0.002753294,
0.1875501,
1.7294772,
0.86453336,
0.8789885,
2.0237687,
2.686733,
7.0931683,
6.7965593,
5.703301,
9.106176,
19.852842,
22.134148,
24.209602,
20.48003,
19.87589,
22.650255,
24.67572,
17.161873,
7.185769,
5.12218,
5.9893394,
5.907269,
2.1844404,
1.9687537,
1.0286644,
0.052360654,
1.7644687,
0.5339646,
-0.53046066,
-0.2281848,
-1.2462859,
0.6778776,
0.5408989,
-0.14820653,
0.38658077,
-0.65733767,
0.014478714,
0.45866382,
0.47466084,
0.48330665,
0.52647215,
1.6572766,
-0.093874216,
1.0939939,
2.8252633,
3.250628,
7.286972,
5.736179,
5.5879693,
9.545634,
22.925808,
23.213871,
23.39594,
21.748808,
22.024412,
24.974943,
23.57301,
18.065563,
8.397812,
4.8709254,
7.626314,
4.6410003,
1.8595266,
3.0831103,
1.4402436,
1.2672244,
1.312456,
-0.18201214,
0.21097422,
-0.026861114,
0.18476872,
0.7252849,
-0.002409873,
-0.29303908,
1.3546691,
-0.04322617,
-0.053203642,
-0.30067968,
-0.12050266,
-0.5528519,
0.057745364,
1.3053449,
1.8519605,
1.8503615,
2.5469666,
4.2060847,
5.5301046,
7.0553675,
5.9386334,
11.875089,
23.438046,
20.363987,
23.725615,
20.967691,
21.432257,
24.202627,
19.774887,
18.783188,
7.98809,
6.2239876,
7.760503,
5.212336,
2.9735184,
2.7213335,
2.0156252,
1.814288,
2.2770615,
0.01533184,
0.58220863,
-0.49351138,
0.31417957,
-0.36469758,
0.45743746,
0.66627234,
0.3081961,
0.828259,
-0.31382263,
0.26520026,
0.22944771,
-0.6709603,
-0.07570245,
1.5327783,
1.7784487,
2.6468341,
3.198592,
3.7656205,
5.9252257,
6.9020658,
4.9581833,
12.047751,
22.348654,
20.17518,
24.174393,
21.535011,
19.05106,
22.163195,
21.497072,
18.43445,
8.682917,
5.3132563,
7.030179,
3.717919,
2.0626392,
2.4575338,
2.2717822,
0.8625143,
2.4770658,
-0.786061,
1.2881083,
-0.2518999,
0.72405684,
-0.122574806,
-0.34197915,
0.13918422,
0.26873538,
-0.47515658,
-0.54810023,
0.89566797,
-0.54384357,
-0.12311963,
0.567525,
2.7046611,
1.5512958,
1.7786896,
3.8791292,
3.9559023,
4.788476,
8.228316,
5.3946,
12.281274,
21.967098,
20.923243,
23.913458,
20.710938,
19.420635,
25.138704,
18.289383,
19.177135,
8.415327,
4.8929396,
8.965305,
4.3885813,
3.4578655,
3.0384607,
1.5863328,
1.91974,
2.4258208,
0.5892152,
0.048560977,
-0.13528748,
-0.21397328,
0.16264682,
-0.57951355,
-0.40301454,
0.21641892,
-0.22450455,
0.38177252,
-0.967473,
-0.35485935,
0.062246032,
-0.03395147,
2.1338463,
1.9084859,
3.1863737,
1.9375713,
3.4518764,
6.570703,
6.878443,
5.679476,
13.351213,
22.931889,
19.282558,
22.36135,
23.796984,
21.032475,
23.09803,
20.966232,
20.72223,
6.7338567,
6.4885483,
7.190284,
4.9310346,
3.1236634,
3.5150487,
2.9693668,
2.2454295,
1.82249,
-0.09966546,
0.72314006,
-0.79027426,
0.41793302,
-0.14793015,
0.45988762,
0.8456978,
-0.5273398,
0.1830612,
-1.0828326,
-1.0117317,
-0.3019783,
0.17001551,
-0.62556803,
2.961217,
2.6823378,
2.9682546,
5.2445164,
4.9527783,
6.309333,
7.7392774,
6.2129936,
15.35368,
20.683935,
20.589102,
22.10926,
20.185204,
20.562426,
22.645317,
18.869568,
20.659521,
8.880328,
6.4410696,
9.769155,
5.5935693,
5.527752,
4.5683465,
3.4019177,
3.3163903,
2.244741,
0.38402623,
0.2960868,
-0.4828044,
0.13759217,
0.25681636,
0.11657055,
-0.330115,
0.4011577,
-0.7654019,
0.14916949,
-0.6228205,
-0.96823233,
-0.022868,
-0.49047035,
3.20636,
2.6912642,
2.9050756,
4.912674,
5.7441964,
6.489336,
9.632326,
6.2825303,
16.68777,
21.077969,
17.172966,
18.92938,
23.38385,
20.251026,
22.16378,
18.001736,
20.24098,
11.019654,
6.6073513,
8.655663,
6.298364,
6.4654784,
3.6983974,
3.1087956,
2.226927,
2.6668777,
-0.35526595,
1.4488825,
0.20488043,
0.047601122,
-0.6924504,
0.57495445,
0.5399022,
-0.47663862,
0.8161736,
-0.36598107,
-0.59101355,
0.20327158,
0.41677478,
0.27029967,
3.7847342,
3.2484818,
3.747693,
4.7734656,
6.716756,
8.185982,
9.418276,
7.493696,
14.704602,
17.729408,
17.48148,
19.855602,
20.371563,
18.5821,
18.155266,
16.968113,
17.100256,
10.015516,
7.8247633,
8.993816,
6.4911056,
6.2132425,
4.3434267,
3.7000012,
3.7377622,
3.1024928,
-0.30869377,
0.051026687,
-0.34078225,
0.7479868,
0.03696166,
-0.75611556,
1.1542099,
-0.028129257,
0.08181842,
0.09559424,
0.8364861,
0.096545294,
0.5584201,
-0.5194905,
3.589691,
4.05453,
3.794124,
4.707637,
9.231918,
8.564278,
9.2333975,
7.006125,
16.20831,
19.324417,
15.819074,
19.356344,
17.93927,
18.384487,
18.001207,
16.142382,
21.02356,
9.986794,
6.614442,
10.657583,
6.6237283,
8.433239,
4.4907804,
4.2819304,
3.7269611,
3.5132716,
0.4662154,
0.30799574,
0.96793664,
-0.23279454,
-0.65458816,
0.3362532,
-0.25408295,
0.06732974,
0.4873681,
0.51199776,
0.14874719,
-0.29994798,
0.4666868,
0.33490536,
3.3489285,
2.9599032,
3.7671084,
5.274986,
11.143537,
9.2554245,
9.07235,
9.138557,
17.255503,
18.355011,
15.364281,
17.336935,
18.85955,
17.050003,
15.608138,
15.812602,
18.231024,
11.6336155,
6.9478188,
11.149977,
7.419574,
10.250601,
4.7022414,
3.971905,
4.7929826,
3.3438401,
-0.39000547,
-0.28059074,
0.6398243,
0.54544014,
0.6069346,
-0.17257981,
0.22857136,
0.5565434,
0.004583537,
-1.6335539,
-0.8888735,
-0.51765877,
0.25269827,
-0.01876194,
3.6656997,
3.8518455,
5.484056,
6.189166,
12.860901,
9.803692,
10.184517,
8.937886,
17.70772,
18.956602,
15.036017,
18.585073,
18.892986,
18.184309,
15.378883,
13.1691475,
16.713081,
11.373385,
10.050861,
11.757488,
10.44355,
12.29941,
4.694755,
5.29064,
3.8482742,
3.204164,
0.0923521,
0.023937136,
0.1471634,
0.6328977,
0.086753555,
0.4752982,
-0.6725007,
0.39593527,
0.22832835,
-0.27118513,
-0.8305444,
0.61332023,
-0.46385112,
-0.07130288,
3.392937,
5.612763,
5.2056,
5.706025,
15.220109,
11.131699,
11.811647,
9.684384,
18.768026,
16.84839,
13.052551,
16.32535,
17.554602,
17.395172,
14.127713,
12.6871,
17.62177,
11.645812,
8.629343,
11.129438,
11.581531,
14.195255,
4.8469067,
5.1938415,
4.0862703,
3.181031,
-1.0452468,
-0.25019166,
-0.7914238,
0.12144237,
-0.41462633,
0.54280686,
-0.69631076,
0.3511648,
0.004874259,
-0.06835556,
0.8735261,
0.24838078,
-0.31527227,
0.52716863,
3.9399889,
6.0550613,
6.129095,
6.861085,
18.186186,
11.700109,
9.944186,
8.473949,
16.194746,
15.487744,
11.69865,
15.148699,
17.62606,
18.724825,
14.773164,
12.397501,
17.29195,
12.904611,
10.236364,
9.858109,
12.551205,
17.244278,
5.081826,
5.861555,
4.532901,
2.9011462,
-0.6339103,
-0.14527631,
-0.34604034,
0.16419859,
-0.21205892,
1.0102317,
-0.6850754,
-0.35831228,
0.2243401,
-0.12707797,
0.12315286,
0.75053287,
-0.30611196,
0.946708,
3.2013948,
5.563331,
4.7585716,
7.213843,
20.686522,
11.607341,
12.30799,
10.50174,
15.599098,
14.504682,
13.629604,
13.69594,
17.019728,
16.432478,
13.931328,
13.392891,
16.40223,
12.716988,
10.136288,
11.304484,
14.544636,
18.359613,
5.5700507,
5.302722,
5.3971443,
4.0632043,
0.34419727,
-0.43536162,
0.2166448,
-0.95898896,
0.54851377,
0.7104762,
0.73580873,
-0.025371978,
-0.42447037,
-0.055623855,
-0.057257153,
-0.042765763,
-0.32910374,
0.110769786,
4.9113693,
6.042119,
5.789901,
8.213889,
21.399662,
13.620898,
12.268165,
12.022924,
15.812675,
14.541431,
11.235446,
13.432023,
16.380638,
17.424328,
13.075844,
13.108509,
16.125572,
12.70376,
9.833503,
12.167731,
15.966658,
19.35662,
4.726227,
5.754112,
5.277654,
3.513394,
0.27682012,
-0.6424214,
0.63972783,
0.052361738,
0.6900285,
0.8120001,
0.13217215,
-0.06418637,
-0.34938893,
-0.1332957,
-0.14414565,
0.13367409,
0.2113514,
0.013457297,
5.1611977,
5.566288,
5.6893077,
6.982988,
20.4595,
14.453565,
13.59946,
10.934562,
16.137613,
14.927114,
11.994792,
13.434463,
17.021969,
17.274439,
13.322607,
11.919087,
16.481926,
12.076119,
10.847066,
11.398886,
16.077639,
19.727343,
4.5308523,
6.236413,
4.8869467,
3.9474933,
0.5430834,
-0.16916445,
1.1437705,
0.16070405,
0.31188658,
0.8880989,
-0.14495048,
-0.5266939,
0.22656989,
0.3505556,
0.015732061,
-0.005636345,
-0.56870633,
0.40287915,
4.4800043,
4.970619,
4.5086727,
7.2337227,
21.180979,
13.984755,
12.418574,
10.579776,
14.925623,
11.359912,
10.660921,
12.467203,
17.208267,
17.148045,
11.586628,
11.8577,
13.493896,
13.254265,
10.851606,
13.149869,
17.053873,
19.849815,
4.9660897,
5.8460274,
3.998473,
3.6802619,
0.8031087,
-0.013905935,
0.3503995,
0.31186494,
-0.038673762,
-0.07608058,
0.21588215,
-0.23191574,
-0.3952367,
-0.09744672,
0.10716237,
-1.3977432,
-0.2775279,
0.28267142,
3.4341362,
5.5165367,
4.798283,
5.5223513,
23.267078,
15.076336,
13.030845,
10.9562845,
13.846566,
11.140822,
10.528686,
12.319912,
15.81127,
17.356304,
10.330765,
10.917309,
11.82135,
11.22828,
9.395469,
12.859789,
15.528548,
18.173409,
4.9549546,
7.068773,
5.830448,
2.882567,
-0.47524917,
-0.3299339,
0.19532575,
-0.5605442,
-0.05505767,
-0.22165492,
-0.4325593,
0.13398468,
-0.34254703,
0.0140561955,
-0.31874263,
-0.14240773,
-0.91078305,
0.69452536,
4.23155,
5.7011547,
6.0003905,
6.377488,
20.312622,
13.978043,
11.040157,
11.176402,
13.108543,
9.652381,
9.632209,
11.781593,
14.856762,
15.745179,
9.215103,
9.966311,
12.876652,
11.37008,
10.591258,
10.1424675,
14.367625,
19.73172,
3.84762,
7.103483,
3.7233605,
2.376824,
0.5252924,
0.38380843,
0.99321234,
-0.46900645,
0.12149067,
0.42257598,
0.0632253,
-0.6670193,
0.03464376,
0.452787,
0.29236665,
-0.017891373,
-0.075127214,
0.9828477,
2.3365817,
5.2860856,
4.3626456,
5.785785,
20.600492,
12.966171,
11.047343,
9.063554,
10.454045,
10.47048,
9.218836,
11.104739,
15.136548,
14.689532,
10.122101,
9.4212675,
11.134829,
8.617753,
9.327736,
11.278048,
13.085438,
18.43459,
3.9763334,
5.9072723,
3.9930198,
3.4963682,
0.2813723,
1.0457343,
0.31889322,
0.37867522,
1.2037315,
-0.47904515,
0.582204,
0.68306595,
-0.088313825,
-0.107233785,
-0.53984404,
0.39104667,
1.1425363,
0.51777375,
2.9267018,
5.183814,
4.495046,
4.6087675,
18.143732,
12.06679,
8.621597,
7.8071413,
9.6548195,
8.168409,
7.199488,
7.962524,
13.9421425,
12.19501,
8.027851,
8.022394,
8.449041,
8.428407,
7.2122917,
9.045476,
12.2283,
16.851568,
4.1475954,
5.7582254,
3.977257,
1.8516432,
-0.32922924,
-0.12237206,
-0.072756164,
-0.6167613,
0.5225413,
0.37072095,
-0.6287377,
-0.7166235,
-0.37311992,
0.81874573,
0.17337193,
0.17729722,
0.40824133,
-0.3479744,
2.9783738,
4.5450144,
3.9617758,
4.9179983,
15.7159395,
10.0808935,
7.922992,
6.9472337,
9.000638,
7.62391,
6.7539964,
8.514194,
12.004702,
12.731859,
7.173314,
7.301387,
7.240425,
7.4015136,
7.516923,
8.6178665,
9.913477,
14.592376,
4.5969114,
5.9667635,
2.2334886,
2.1020658,
-0.9194653,
0.43381432,
-0.74259335,
-0.8438142,
0.01724637,
-0.6245163,
0.34715256,
-0.24820891,
-0.6074153,
-0.066010244,
-0.05560958,
-0.32758415,
0.3784681,
-0.09629097,
2.7877793,
4.203103,
3.26329,
4.44158,
12.650619,
8.000976,
5.2695656,
5.8276386,
7.0067124,
6.36843,
5.256174,
7.340733,
9.230904,
13.014863,
5.453347,
6.2923303,
6.518343,
6.5802903,
5.615034,
7.000242,
8.82858,
11.683347,
3.8504424,
4.365258,
3.2354295,
2.2202947,
0.5615039,
0.41533247,
0.21722497,
0.3176445,
0.2709266,
-0.2929376,
0.090651914,
-0.32017383,
-0.30647907,
0.15408067,
-0.3604456,
0.6241022,
0.42943946,
0.30790985,
2.0098479,
3.1669462,
3.8518548,
4.0607076,
11.639872,
5.7104745,
7.125849,
5.09103,
5.6111135,
3.951972,
4.0356493,
7.02897,
11.430392,
11.738871,
4.115266,
5.621048,
5.3278913,
5.120655,
5.990115,
5.7664003,
5.7767644,
9.013329,
2.9515538,
5.6055756,
4.1827626,
1.7799046,
-0.21542077,
0.24031225,
-0.6824815,
-0.6190339,
0.6256524,
-0.48574805,
0.09997501,
0.3266095,
0.07135873,
-0.3254111,
-0.047491744,
-0.014772129,
-0.38849118,
0.286563,
2.9551277,
3.957588,
3.0914695,
3.1707056,
8.462824,
4.728864,
5.0381837,
4.0804534,
5.1110387,
4.62399,
4.415538,
6.1308045,
10.654469,
10.723281,
4.4972973,
3.627521,
3.8499038,
4.373936,
4.0010695,
4.3314424,
6.3237967,
7.2798166,
2.3315697,
4.04032,
3.2531312,
2.022844,
-0.5356632,
0.52645034,
0.11135009,
-0.26490784,
0.39241284,
0.13336958,
-0.15545088,
-0.048340384,
0.6705195,
-0.14051451,
-0.7617515,
0.11379189,
0.21909207,
0.63809645,
1.5451268,
4.243852,
3.2245193,
3.3400161,
6.511011,
4.033045,
2.8604522,
3.6116364,
3.5580635,
3.1904101,
2.9593391,
4.813459,
8.871713,
8.875507,
2.922824,
2.6118903,
3.5907378,
2.6278322,
3.5242443,
3.0563798,
4.969574,
5.5496926,
3.3797112,
3.520721,
2.3572729,
1.7771024,
-0.43368375,
-0.6439688,
-0.56648374,
0.25869504,
-0.13318418,
-0.25542453,
-1.2330167,
0.34627095,
1.5127228,
-0.6055812,
0.6232876,
0.23605451,
-0.5616809,
0.500821,
]).reshape((40, 40))
| 15.930219 | 77 | 0.551399 | 3,681 | 28,308 | 4.231731 | 0.503124 | 0.003146 | 0.004173 | 0.002054 | 0.006548 | 0.006548 | 0.006548 | 0.006548 | 0.006548 | 0.006548 | 0 | 0.720682 | 0.309594 | 28,308 | 1,776 | 78 | 15.939189 | 0.076341 | 0.061855 | 0 | 0.002931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.000586 | 0 | 0.000586 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
84e740e86537e562565c37ab944502008646b1ad | 726 | py | Python | raddiwala/views.py | nikhilbelchada/online-raddiwala | 9f53f2dc63a29c08ee9d61fdf2d3124c79ca7980 | [
"MIT"
] | null | null | null | raddiwala/views.py | nikhilbelchada/online-raddiwala | 9f53f2dc63a29c08ee9d61fdf2d3124c79ca7980 | [
"MIT"
] | null | null | null | raddiwala/views.py | nikhilbelchada/online-raddiwala | 9f53f2dc63a29c08ee9d61fdf2d3124c79ca7980 | [
"MIT"
] | null | null | null | from django.views.generic import TemplateView
from django.utils.decorators import method_decorator
from django.contrib.auth.decorators import login_required
from django.shortcuts import render
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated
class IndexTemplateView(TemplateView):
template_name = 'index.html'
@method_decorator(login_required)
def get(self, request, *args, **kwargs):
return render(request, self.template_name)
class UserView(APIView):
permission_classes = (IsAuthenticated, )
def get(self, request):
content = {'message': 'Hello, World!'}
return Response(content)
| 27.923077 | 57 | 0.769972 | 84 | 726 | 6.535714 | 0.5 | 0.07286 | 0.092896 | 0.061931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.151515 | 726 | 25 | 58 | 29.04 | 0.891234 | 0 | 0 | 0 | 0 | 0 | 0.041379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.411765 | 0.058824 | 0.882353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
84f40d2a66ba94f561380b5d1bd073b352b479d5 | 3,446 | py | Python | scripts/rewrite-uris.py | CaptSolo/bib-rdf-pipeline | 9ff7ba2f1d737e6d496a6b6ff207ba8fe971fb17 | [
"CC0-1.0"
] | 31 | 2016-11-30T10:39:55.000Z | 2021-12-28T11:19:32.000Z | scripts/rewrite-uris.py | CaptSolo/bib-rdf-pipeline | 9ff7ba2f1d737e6d496a6b6ff207ba8fe971fb17 | [
"CC0-1.0"
] | 96 | 2016-12-08T08:21:46.000Z | 2019-08-28T12:55:45.000Z | scripts/rewrite-uris.py | CaptSolo/bib-rdf-pipeline | 9ff7ba2f1d737e6d496a6b6ff207ba8fe971fb17 | [
"CC0-1.0"
] | 5 | 2017-02-28T14:55:42.000Z | 2021-12-16T13:21:51.000Z | #!/usr/bin/env python
"""Rewrite all the marc2bibframe2-generated URIs in the input NT file; output the rewritten NT file on stdout"""
import sys
import re
# regex for detecting URIs generated by marc2bibframe2
m2bf_uri = re.compile(r'(\d{9})#(Work|Instance|Agent)((\d\d\d)-(\d+))?')

# regexes for matching N-Triples
IRIREF = r'<[^\x00-\x20<>"{}|^`\\]*>'
BNODE = r'_:\S+'
LITERAL = r'".*"\S*'
TRIPLE = r'(%s|%s)\s+(%s)\s+(%s|%s|%s)\s.' % (IRIREF, BNODE, IRIREF, IRIREF, LITERAL, BNODE)
TRIPLE_RE = re.compile(TRIPLE)


def get_typeid(typename, field):
    """Determine the type ID (a single letter indicating the type) based on the type name and optional field tag parsed from the URI."""
    if typename == 'Agent':
        if field in ('100', '600', '700'):
            return 'P'  # Person
        else:
            return 'O'  # Organization
    if typename == 'Instance':
        return 'I'  # Instance
    if typename == 'Work':
        return 'W'  # Work
    return 'X'  # unknown, should never happen


def collect_uris(ntfile):
    """Collect and parse marc2bibframe2-generated URIs from the subject URIs within an NT file,
    returning a sequence of dicts with the keys "uri", "recid", "typeid" and "seqno"."""
    uris = {}
    for line in ntfile:
        subject = line.split()[0]
        if subject[0] != '<':
            continue  # a blank node, not a URI reference
        uri = subject[1:-1]  # extract the URI itself
        if uri in uris:
            continue  # already seen it
        m = m2bf_uri.search(uri)
        if m is None:
            continue  # not a marc2bibframe2-generated URI
        recid = m.group(1)
        typename = m.group(2)
        field = m.group(4)
        seqno = int(m.group(5) or 0)
        typeid = get_typeid(typename, field)
        uris[uri] = {'uri': uri, 'recid': recid, 'typeid': typeid, 'seqno': seqno}
    return uris.values()


def rewrite(uritag, substitutions):
    if uritag[0] != '<':
        return uritag
    uri = uritag[1:-1]
    return '<%s>' % substitutions.get(uri, uri)


def rewrite_uris(ntfile, substitutions):
    for line in ntfile:
        m = TRIPLE_RE.match(line)
        if m is None:  # no match, just pass it through (a comment perhaps?)
            print(line, end='')
            continue
        s = m.group(1)
        p = m.group(2)
        o = m.group(3)
        s = rewrite(s, substitutions)
        o = rewrite(o, substitutions)
        print("%s %s %s ." % (s, p, o))


with open(sys.argv[1]) as f:
    # 1st pass: collect and parse URIs to determine substitutions
    uris = collect_uris(f)

    # group the URIs by record ID and entity type for renumbering
    groups = {}
    for uri in uris:
        key = (uri['recid'], uri['typeid'])
        groups.setdefault(key, [])
        groups[key].append(uri)

    # determine the new URIs to use instead of the existing ones
    substitutions = {}
    for key, group_uris in groups.items():
        group_uris.sort(key=lambda u: u['seqno'])
        if group_uris[0]['seqno'] == 0:
            offset = 0
        else:
            offset = 1
        for idx, guri in enumerate(group_uris):
            localname = "%s%s%02d" % (guri['typeid'], guri['recid'], idx + offset)
            newuri = m2bf_uri.sub(localname, guri['uri'])
            substitutions[guri['uri']] = newuri

    # rewind back to the start of the file
    f.seek(0)

    # 2nd pass: rewrite all the URIs based on the substitutions
    rewrite_uris(f, substitutions)
| 34.118812 | 121 | 0.583575 | 470 | 3,446 | 4.244681 | 0.353191 | 0.01203 | 0.013534 | 0.014035 | 0.004511 | 0.004511 | 0.004511 | 0.004511 | 0 | 0 | 0 | 0.01975 | 0.280035 | 3,446 | 100 | 122 | 34.46 | 0.784361 | 0.16686 | 0 | 0.133333 | 1 | 0 | 0.092289 | 0.041427 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.026667 | null | null | 0.026667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
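The URI scheme the script above rewrites can be illustrated standalone. This is a minimal sketch that re-creates the `m2bf_uri` pattern and the Person/Organization field mapping on a hypothetical marc2bibframe2-style URI (the example URI itself is invented for illustration):

```python
import re

# same pattern the script compiles: 9-digit record ID, entity type, optional field-seqno suffix
m2bf_uri = re.compile(r'(\d{9})#(Work|Instance|Agent)((\d\d\d)-(\d+))?')

uri = 'http://example.org/123456789#Agent100-3'  # hypothetical URI for illustration
m = m2bf_uri.search(uri)
recid, typename, field, seqno = m.group(1), m.group(2), m.group(4), int(m.group(5) or 0)

# MARC fields 100/600/700 mark a Person agent; other agent fields an Organization
typeid = 'P' if typename == 'Agent' and field in ('100', '600', '700') else 'O'
localname = "%s%s%02d" % (typeid, recid, seqno)
print(localname)  # -> P12345678903
```

The rewritten local name packs the type letter, record ID and a zero-padded sequence number, which is what the second pass substitutes into every matching subject and object URI.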
84fa72a574df3f5f1447d915ccb88abf966a52b7 | 45,951 | py | Python | pysnmp/H3C-IPSEC-MONITOR-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 11 | 2021-02-02T16:27:16.000Z | 2021-08-31T06:22:49.000Z | pysnmp/H3C-IPSEC-MONITOR-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 75 | 2021-02-24T17:30:31.000Z | 2021-12-08T00:01:18.000Z | pysnmp/H3C-IPSEC-MONITOR-MIB.py | agustinhenze/mibs.snmplabs.com | 1fc5c07860542b89212f4c8ab807057d9a9206c7 | [
"Apache-2.0"
] | 10 | 2019-04-30T05:51:36.000Z | 2022-02-16T03:33:41.000Z | #
# PySNMP MIB module H3C-IPSEC-MONITOR-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/H3C-IPSEC-MONITOR-MIB
# Produced by pysmi-0.3.4 at Mon Apr 29 19:09:33 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
SingleValueConstraint, ValueRangeConstraint, ValueSizeConstraint, ConstraintsUnion, ConstraintsIntersection = mibBuilder.importSymbols("ASN1-REFINEMENT", "SingleValueConstraint", "ValueRangeConstraint", "ValueSizeConstraint", "ConstraintsUnion", "ConstraintsIntersection")
h3cCommon, = mibBuilder.importSymbols("HUAWEI-3COM-OID-MIB", "h3cCommon")
ifIndex, = mibBuilder.importSymbols("IF-MIB", "ifIndex")
ObjectGroup, NotificationGroup, ModuleCompliance = mibBuilder.importSymbols("SNMPv2-CONF", "ObjectGroup", "NotificationGroup", "ModuleCompliance")
IpAddress, Counter64, TimeTicks, Unsigned32, ModuleIdentity, ObjectIdentity, iso, NotificationType, MibIdentifier, Counter32, Gauge32, Bits, Integer32, MibScalar, MibTable, MibTableRow, MibTableColumn = mibBuilder.importSymbols("SNMPv2-SMI", "IpAddress", "Counter64", "TimeTicks", "Unsigned32", "ModuleIdentity", "ObjectIdentity", "iso", "NotificationType", "MibIdentifier", "Counter32", "Gauge32", "Bits", "Integer32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn")
TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TextualConvention", "DisplayString")
h3cIPSecMonitor = ModuleIdentity((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7))
if mibBuilder.loadTexts: h3cIPSecMonitor.setLastUpdated('200410260000Z')
if mibBuilder.loadTexts: h3cIPSecMonitor.setOrganization('Huawei-3COM Technologies Co., Ltd.')
class H3cDiffHellmanGrp(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(0, 1, 2, 5, 14, 2147483647))
    namedValues = NamedValues(("none", 0), ("modp768", 1), ("modp1024", 2), ("modp1536", 5), ("modp2048", 14), ("invalidGroup", 2147483647))

class H3cEncapMode(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 2147483647))
    namedValues = NamedValues(("tunnel", 1), ("transport", 2), ("invalidMode", 2147483647))

class H3cEncryptAlgo(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2147483647))
    namedValues = NamedValues(("none", 0), ("desCbc", 1), ("ideaCbc", 2), ("blowfishCbc", 3), ("rc5R16B64Cbc", 4), ("tripledesCbc", 5), ("castCbc", 6), ("aesCbc", 7), ("nsaCbc", 8), ("aesCbc128", 9), ("aesCbc192", 10), ("aesCbc256", 11), ("invalidAlg", 2147483647))

class H3cAuthAlgo(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(0, 1, 2, 2147483647))
    namedValues = NamedValues(("none", 0), ("md5", 1), ("sha", 2), ("invalidAlg", 2147483647))

class H3cSaProtocol(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4))
    namedValues = NamedValues(("reserved", 0), ("isakmp", 1), ("ah", 2), ("esp", 3), ("ipcomp", 4))

class H3cTrapStatus(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2))
    namedValues = NamedValues(("enabled", 1), ("disabled", 2))

class H3cIPSecIDType(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11))
    namedValues = NamedValues(("reserved", 0), ("ipv4Addr", 1), ("fqdn", 2), ("userFqdn", 3), ("ipv4AddrSubnet", 4), ("ipv6Addr", 5), ("ipv6AddrSubnet", 6), ("ipv4AddrRange", 7), ("ipv6AddrRange", 8), ("derAsn1Dn", 9), ("derAsn1Gn", 10), ("keyId", 11))

class H3cTrafficType(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 4, 5, 6, 7, 8))
    namedValues = NamedValues(("ipv4Addr", 1), ("ipv4AddrSubnet", 4), ("ipv6Addr", 5), ("ipv6AddrSubnet", 6), ("ipv4AddrRange", 7), ("ipv6AddrRange", 8))

class H3cIPSecNegoType(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2, 2147483647))
    namedValues = NamedValues(("ike", 1), ("manual", 2), ("invalidType", 2147483647))

class H3cIPSecTunnelState(TextualConvention, Integer32):
    status = 'current'
    subtypeSpec = Integer32.subtypeSpec + ConstraintsUnion(SingleValueConstraint(1, 2))
    namedValues = NamedValues(("active", 1), ("timeout", 2))
h3cIPSecObjects = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1))
h3cIPSecTunnelTable = MibTable((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1), )
if mibBuilder.loadTexts: h3cIPSecTunnelTable.setStatus('current')
h3cIPSecTunnelEntry = MibTableRow((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1), ).setIndexNames((0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIfIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunEntryIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIndex"))
if mibBuilder.loadTexts: h3cIPSecTunnelEntry.setStatus('current')
h3cIPSecTunIfIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647)))
if mibBuilder.loadTexts: h3cIPSecTunIfIndex.setStatus('current')
h3cIPSecTunEntryIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647)))
if mibBuilder.loadTexts: h3cIPSecTunEntryIndex.setStatus('current')
h3cIPSecTunIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647)))
if mibBuilder.loadTexts: h3cIPSecTunIndex.setStatus('current')
h3cIPSecTunIKETunnelIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunIKETunnelIndex.setStatus('current')
h3cIPSecTunLocalAddr = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 5), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunLocalAddr.setStatus('current')
h3cIPSecTunRemoteAddr = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 6), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunRemoteAddr.setStatus('current')
h3cIPSecTunKeyType = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 7), H3cIPSecNegoType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunKeyType.setStatus('current')
h3cIPSecTunEncapMode = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 8), H3cEncapMode()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunEncapMode.setStatus('current')
h3cIPSecTunInitiator = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 9), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 2147483647))).clone(namedValues=NamedValues(("local", 1), ("remote", 2), ("none", 2147483647)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInitiator.setStatus('current')
h3cIPSecTunLifeSize = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 10), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunLifeSize.setStatus('current')
h3cIPSecTunLifeTime = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunLifeTime.setStatus('current')
h3cIPSecTunRemainTime = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 12), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunRemainTime.setStatus('current')
h3cIPSecTunActiveTime = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 13), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunActiveTime.setStatus('current')
h3cIPSecTunRemainSize = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 14), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunRemainSize.setStatus('current')
h3cIPSecTunTotalRefreshes = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 15), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunTotalRefreshes.setStatus('current')
h3cIPSecTunCurrentSaInstances = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 16), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunCurrentSaInstances.setStatus('current')
h3cIPSecTunInSaEncryptAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 17), H3cEncryptAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInSaEncryptAlgo.setStatus('current')
h3cIPSecTunInSaAhAuthAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 18), H3cAuthAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInSaAhAuthAlgo.setStatus('current')
h3cIPSecTunInSaEspAuthAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 19), H3cAuthAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInSaEspAuthAlgo.setStatus('current')
h3cIPSecTunDiffHellmanGrp = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 20), H3cDiffHellmanGrp()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunDiffHellmanGrp.setStatus('current')
h3cIPSecTunOutSaEncryptAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 21), H3cEncryptAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutSaEncryptAlgo.setStatus('current')
h3cIPSecTunOutSaAhAuthAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 22), H3cAuthAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutSaAhAuthAlgo.setStatus('current')
h3cIPSecTunOutSaEspAuthAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 23), H3cAuthAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutSaEspAuthAlgo.setStatus('current')
h3cIPSecTunPolicyName = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 24), DisplayString()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunPolicyName.setStatus('current')
h3cIPSecTunPolicyNum = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 25), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunPolicyNum.setStatus('current')
h3cIPSecTunStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 1, 1, 26), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("initial", 1), ("ready", 2), ("rekeyed", 3), ("closed", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunStatus.setStatus('current')
h3cIPSecTunnelStatTable = MibTable((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2), )
if mibBuilder.loadTexts: h3cIPSecTunnelStatTable.setStatus('current')
h3cIPSecTunnelStatEntry = MibTableRow((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1), ).setIndexNames((0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIfIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunEntryIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIndex"))
if mibBuilder.loadTexts: h3cIPSecTunnelStatEntry.setStatus('current')
h3cIPSecTunInOctets = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 1), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInOctets.setStatus('current')
h3cIPSecTunInDecompOctets = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 2), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInDecompOctets.setStatus('current')
h3cIPSecTunInPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 3), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInPkts.setStatus('current')
h3cIPSecTunInDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 4), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInDropPkts.setStatus('current')
h3cIPSecTunInReplayDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 5), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInReplayDropPkts.setStatus('current')
h3cIPSecTunInAuthFails = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 6), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInAuthFails.setStatus('current')
h3cIPSecTunInDecryptFails = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 7), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInDecryptFails.setStatus('current')
h3cIPSecTunOutOctets = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 8), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutOctets.setStatus('current')
h3cIPSecTunOutUncompOctets = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 9), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutUncompOctets.setStatus('current')
h3cIPSecTunOutPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 10), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutPkts.setStatus('current')
h3cIPSecTunOutDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 11), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutDropPkts.setStatus('current')
h3cIPSecTunOutEncryptFails = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 12), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunOutEncryptFails.setStatus('current')
h3cIPSecTunNoMemoryDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 13), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunNoMemoryDropPkts.setStatus('current')
h3cIPSecTunQueueFullDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunQueueFullDropPkts.setStatus('current')
h3cIPSecTunInvalidLenDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 15), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInvalidLenDropPkts.setStatus('current')
h3cIPSecTunTooLongDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 16), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunTooLongDropPkts.setStatus('current')
h3cIPSecTunInvalidSaDropPkts = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 2, 1, 17), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTunInvalidSaDropPkts.setStatus('current')
h3cIPSecSaTable = MibTable((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3), )
if mibBuilder.loadTexts: h3cIPSecSaTable.setStatus('current')
h3cIPSecSaEntry = MibTableRow((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1), ).setIndexNames((0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIfIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunEntryIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaIndex"))
if mibBuilder.loadTexts: h3cIPSecSaEntry.setStatus('current')
h3cIPSecSaIndex = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647)))
if mibBuilder.loadTexts: h3cIPSecSaIndex.setStatus('current')
h3cIPSecSaDirection = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("in", 1), ("out", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecSaDirection.setStatus('current')
h3cIPSecSaValue = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 3), Unsigned32().subtype(subtypeSpec=ValueRangeConstraint(1, 4294967295))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecSaValue.setStatus('current')
h3cIPSecSaProtocol = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 4), H3cSaProtocol()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecSaProtocol.setStatus('current')
h3cIPSecSaEncryptAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 5), H3cEncryptAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecSaEncryptAlgo.setStatus('current')
h3cIPSecSaAuthAlgo = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 6), H3cAuthAlgo()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecSaAuthAlgo.setStatus('current')
h3cIPSecSaStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 3, 1, 7), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("active", 1), ("expiring", 2)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecSaStatus.setStatus('current')
h3cIPSecTrafficTable = MibTable((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4), )
if mibBuilder.loadTexts: h3cIPSecTrafficTable.setStatus('current')
h3cIPSecTrafficEntry = MibTableRow((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1), ).setIndexNames((0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIfIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunEntryIndex"), (0, "H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIndex"))
if mibBuilder.loadTexts: h3cIPSecTrafficEntry.setStatus('current')
h3cIPSecTrafficLocalType = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 1), H3cTrafficType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficLocalType.setStatus('current')
h3cIPSecTrafficLocalAddr1 = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 2), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficLocalAddr1.setStatus('current')
h3cIPSecTrafficLocalAddr2 = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 3), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficLocalAddr2.setStatus('current')
h3cIPSecTrafficLocalProtocol = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficLocalProtocol.setStatus('current')
h3cIPSecTrafficLocalPort = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficLocalPort.setStatus('current')
h3cIPSecTrafficRemoteType = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 6), H3cTrafficType()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficRemoteType.setStatus('current')
h3cIPSecTrafficRemoteAddr1 = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 7), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficRemoteAddr1.setStatus('current')
h3cIPSecTrafficRemoteAddr2 = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 8), IpAddress()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficRemoteAddr2.setStatus('current')
h3cIPSecTrafficRemoteProtocol = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficRemoteProtocol.setStatus('current')
h3cIPSecTrafficRemotePort = MibTableColumn((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 4, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecTrafficRemotePort.setStatus('current')
h3cIPSecGlobalStats = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5))
h3cIPSecGlobalActiveTunnels = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 1), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalActiveTunnels.setStatus('current')
h3cIPSecGlobalActiveSas = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 2), Gauge32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalActiveSas.setStatus('current')
h3cIPSecGlobalInOctets = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 3), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInOctets.setStatus('current')
h3cIPSecGlobalInDecompOctets = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 4), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInDecompOctets.setStatus('current')
h3cIPSecGlobalInPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 5), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInPkts.setStatus('current')
h3cIPSecGlobalInDrops = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 6), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInDrops.setStatus('current')
h3cIPSecGlobalInReplayDrops = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 7), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInReplayDrops.setStatus('current')
h3cIPSecGlobalInAuthFails = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 8), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInAuthFails.setStatus('current')
h3cIPSecGlobalInDecryptFails = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 9), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInDecryptFails.setStatus('current')
h3cIPSecGlobalOutOctets = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 10), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalOutOctets.setStatus('current')
h3cIPSecGlobalOutUncompOctets = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 11), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalOutUncompOctets.setStatus('current')
h3cIPSecGlobalOutPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 12), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalOutPkts.setStatus('current')
h3cIPSecGlobalOutDrops = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 13), Counter64()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalOutDrops.setStatus('current')
h3cIPSecGlobalOutEncryptFails = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 14), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalOutEncryptFails.setStatus('current')
h3cIPSecGlobalNoMemoryDropPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 15), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalNoMemoryDropPkts.setStatus('current')
h3cIPSecGlobalNoFindSaDropPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 16), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalNoFindSaDropPkts.setStatus('current')
h3cIPSecGlobalQueueFullDropPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 17), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalQueueFullDropPkts.setStatus('current')
h3cIPSecGlobalInvalidLenDropPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 18), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInvalidLenDropPkts.setStatus('current')
h3cIPSecGlobalTooLongDropPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 19), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalTooLongDropPkts.setStatus('current')
h3cIPSecGlobalInvalidSaDropPkts = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 5, 20), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: h3cIPSecGlobalInvalidSaDropPkts.setStatus('current')
h3cIPSecTrapObject = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 6))
h3cIPSecPolicyName = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 6, 1), DisplayString()).setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: h3cIPSecPolicyName.setStatus('current')
h3cIPSecPolicySeqNum = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 6, 2), Integer32()).setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: h3cIPSecPolicySeqNum.setStatus('current')
h3cIPSecPolicySize = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 6, 3), Integer32()).setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: h3cIPSecPolicySize.setStatus('current')
h3cIPSecSpiValue = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 6, 4), Integer32()).setMaxAccess("accessiblefornotify")
if mibBuilder.loadTexts: h3cIPSecSpiValue.setStatus('current')
h3cIPSecTrapCntl = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7))
h3cIPSecTrapGlobalCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 1), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecTrapGlobalCntl.setStatus('current')
h3cIPSecTunnelStartTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 2), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecTunnelStartTrapCntl.setStatus('current')
h3cIPSecTunnelStopTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 3), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecTunnelStopTrapCntl.setStatus('current')
h3cIPSecNoSaTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 4), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecNoSaTrapCntl.setStatus('current')
h3cIPSecAuthFailureTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 5), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecAuthFailureTrapCntl.setStatus('current')
h3cIPSecEncryFailureTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 6), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecEncryFailureTrapCntl.setStatus('current')
h3cIPSecDecryFailureTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 7), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecDecryFailureTrapCntl.setStatus('current')
h3cIPSecInvalidSaTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 8), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecInvalidSaTrapCntl.setStatus('current')
h3cIPSecPolicyAddTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 9), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecPolicyAddTrapCntl.setStatus('current')
h3cIPSecPolicyDelTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 10), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecPolicyDelTrapCntl.setStatus('current')
h3cIPSecPolicyAttachTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 11), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecPolicyAttachTrapCntl.setStatus('current')
h3cIPSecPolicyDetachTrapCntl = MibScalar((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 7, 12), H3cTrapStatus()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: h3cIPSecPolicyDetachTrapCntl.setStatus('current')
h3cIPSecTrap = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8))
h3cIPSecNotifications = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1))
h3cIPSecTunnelStart = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 1)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLifeTime"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLifeSize"))
if mibBuilder.loadTexts: h3cIPSecTunnelStart.setStatus('current')
h3cIPSecTunnelStop = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 2)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunActiveTime"))
if mibBuilder.loadTexts: h3cIPSecTunnelStop.setStatus('current')
h3cIPSecNoSaFailure = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 3)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"))
if mibBuilder.loadTexts: h3cIPSecNoSaFailure.setStatus('current')
h3cIPSecAuthFailFailure = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 4)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"))
if mibBuilder.loadTexts: h3cIPSecAuthFailFailure.setStatus('current')
h3cIPSecEncryFailFailure = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 5)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"))
if mibBuilder.loadTexts: h3cIPSecEncryFailFailure.setStatus('current')
h3cIPSecDecryFailFailure = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 6)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"))
if mibBuilder.loadTexts: h3cIPSecDecryFailFailure.setStatus('current')
h3cIPSecInvalidSaFailure = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 7)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSpiValue"))
if mibBuilder.loadTexts: h3cIPSecInvalidSaFailure.setStatus('current')
h3cIPSecPolicyAdd = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 8)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyName"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySeqNum"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySize"))
if mibBuilder.loadTexts: h3cIPSecPolicyAdd.setStatus('current')
h3cIPSecPolicyDel = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 9)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyName"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySeqNum"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySize"))
if mibBuilder.loadTexts: h3cIPSecPolicyDel.setStatus('current')
h3cIPSecPolicyAttach = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 10)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyName"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySize"), ("IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: h3cIPSecPolicyAttach.setStatus('current')
h3cIPSecPolicyDetach = NotificationType((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 11)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyName"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySize"), ("IF-MIB", "ifIndex"))
if mibBuilder.loadTexts: h3cIPSecPolicyDetach.setStatus('current')
h3cIPSecConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2))
h3cIPSecCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 1))
h3cIPSecGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2))
h3cIPSecCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 1, 1)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunnelTableGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunnelStatGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficTableGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalStatsGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrapObjectGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrapCntlGroup"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrapGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecCompliance = h3cIPSecCompliance.setStatus('current')
h3cIPSecTunnelTableGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 1)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunIKETunnelIndex"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLocalAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemoteAddr"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunKeyType"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunEncapMode"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInitiator"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLifeSize"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunLifeTime"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemainTime"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunActiveTime"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunRemainSize"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunTotalRefreshes"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunCurrentSaInstances"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInSaEncryptAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInSaAhAuthAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInSaEspAuthAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunDiffHellmanGrp"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutSaEncryptAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutSaAhAuthAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutSaEspAuthAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunPolicyName"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunPolicyNum"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecTunnelTableGroup = h3cIPSecTunnelTableGroup.setStatus('current')
h3cIPSecTunnelStatGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 2)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInDecompOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInReplayDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInAuthFails"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInDecryptFails"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutUncompOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunOutEncryptFails"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunNoMemoryDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunQueueFullDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInvalidLenDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunTooLongDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunInvalidSaDropPkts"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecTunnelStatGroup = h3cIPSecTunnelStatGroup.setStatus('current')
h3cIPSecSaGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 3)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaDirection"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaValue"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaProtocol"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaEncryptAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaAuthAlgo"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSaStatus"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecSaGroup = h3cIPSecSaGroup.setStatus('current')
h3cIPSecTrafficTableGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 4)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficLocalType"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficLocalAddr1"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficLocalAddr2"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficLocalProtocol"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficLocalPort"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficRemoteType"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficRemoteAddr1"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficRemoteAddr2"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficRemoteProtocol"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrafficRemotePort"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecTrafficTableGroup = h3cIPSecTrafficTableGroup.setStatus('current')
h3cIPSecGlobalStatsGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 5)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalActiveTunnels"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalActiveSas"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInDecompOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInDrops"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInReplayDrops"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInAuthFails"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInDecryptFails"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalOutOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalOutUncompOctets"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalOutPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalOutDrops"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalOutEncryptFails"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalNoMemoryDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalNoFindSaDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalQueueFullDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInvalidLenDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalTooLongDropPkts"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecGlobalInvalidSaDropPkts"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecGlobalStatsGroup = h3cIPSecGlobalStatsGroup.setStatus('current')
h3cIPSecTrapObjectGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 6)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyName"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySeqNum"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicySize"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecSpiValue"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecTrapObjectGroup = h3cIPSecTrapObjectGroup.setStatus('current')
h3cIPSecTrapCntlGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 7)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTrapGlobalCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunnelStartTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunnelStopTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecNoSaTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecAuthFailureTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecEncryFailureTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecDecryFailureTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecInvalidSaTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyAddTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyDelTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyAttachTrapCntl"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyDetachTrapCntl"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecTrapCntlGroup = h3cIPSecTrapCntlGroup.setStatus('current')
h3cIPSecTrapGroup = NotificationGroup((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 2, 2, 8)).setObjects(("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunnelStart"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecTunnelStop"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecNoSaFailure"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecAuthFailFailure"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecEncryFailFailure"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecDecryFailFailure"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecInvalidSaFailure"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyAdd"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyDel"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyAttach"), ("H3C-IPSEC-MONITOR-MIB", "h3cIPSecPolicyDetach"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
    h3cIPSecTrapGroup = h3cIPSecTrapGroup.setStatus('current')
mibBuilder.exportSymbols("H3C-IPSEC-MONITOR-MIB", h3cIPSecTrafficRemoteAddr2=h3cIPSecTrafficRemoteAddr2, h3cIPSecTrafficLocalAddr1=h3cIPSecTrafficLocalAddr1, h3cIPSecTunInOctets=h3cIPSecTunInOctets, h3cIPSecTunStatus=h3cIPSecTunStatus, h3cIPSecGlobalStats=h3cIPSecGlobalStats, h3cIPSecTrafficRemoteType=h3cIPSecTrafficRemoteType, h3cIPSecGlobalQueueFullDropPkts=h3cIPSecGlobalQueueFullDropPkts, h3cIPSecTunInvalidSaDropPkts=h3cIPSecTunInvalidSaDropPkts, h3cIPSecTunLocalAddr=h3cIPSecTunLocalAddr, h3cIPSecTunKeyType=h3cIPSecTunKeyType, h3cIPSecGlobalTooLongDropPkts=h3cIPSecGlobalTooLongDropPkts, h3cIPSecTunEntryIndex=h3cIPSecTunEntryIndex, PYSNMP_MODULE_ID=h3cIPSecMonitor, h3cIPSecTrapGlobalCntl=h3cIPSecTrapGlobalCntl, h3cIPSecTunOutEncryptFails=h3cIPSecTunOutEncryptFails, h3cIPSecTunNoMemoryDropPkts=h3cIPSecTunNoMemoryDropPkts, h3cIPSecAuthFailFailure=h3cIPSecAuthFailFailure, h3cIPSecSpiValue=h3cIPSecSpiValue, h3cIPSecGlobalOutEncryptFails=h3cIPSecGlobalOutEncryptFails, h3cIPSecSaEncryptAlgo=h3cIPSecSaEncryptAlgo, h3cIPSecSaStatus=h3cIPSecSaStatus, h3cIPSecTunRemainTime=h3cIPSecTunRemainTime, h3cIPSecTunnelStartTrapCntl=h3cIPSecTunnelStartTrapCntl, H3cAuthAlgo=H3cAuthAlgo, h3cIPSecTrafficTableGroup=h3cIPSecTrafficTableGroup, h3cIPSecPolicyAttach=h3cIPSecPolicyAttach, h3cIPSecGlobalInDecryptFails=h3cIPSecGlobalInDecryptFails, h3cIPSecTunRemainSize=h3cIPSecTunRemainSize, h3cIPSecSaDirection=h3cIPSecSaDirection, h3cIPSecDecryFailureTrapCntl=h3cIPSecDecryFailureTrapCntl, h3cIPSecTunIndex=h3cIPSecTunIndex, h3cIPSecPolicyDetachTrapCntl=h3cIPSecPolicyDetachTrapCntl, h3cIPSecNoSaFailure=h3cIPSecNoSaFailure, h3cIPSecPolicyAttachTrapCntl=h3cIPSecPolicyAttachTrapCntl, h3cIPSecTunInSaAhAuthAlgo=h3cIPSecTunInSaAhAuthAlgo, h3cIPSecTunnelStatEntry=h3cIPSecTunnelStatEntry, h3cIPSecTunInDecryptFails=h3cIPSecTunInDecryptFails, h3cIPSecNotifications=h3cIPSecNotifications, h3cIPSecGlobalInPkts=h3cIPSecGlobalInPkts, h3cIPSecTunInvalidLenDropPkts=h3cIPSecTunInvalidLenDropPkts, 
h3cIPSecGlobalActiveSas=h3cIPSecGlobalActiveSas, h3cIPSecGlobalActiveTunnels=h3cIPSecGlobalActiveTunnels, h3cIPSecGlobalOutUncompOctets=h3cIPSecGlobalOutUncompOctets, H3cSaProtocol=H3cSaProtocol, h3cIPSecTunInSaEncryptAlgo=h3cIPSecTunInSaEncryptAlgo, h3cIPSecTunOutOctets=h3cIPSecTunOutOctets, h3cIPSecInvalidSaFailure=h3cIPSecInvalidSaFailure, h3cIPSecTunQueueFullDropPkts=h3cIPSecTunQueueFullDropPkts, h3cIPSecPolicyName=h3cIPSecPolicyName, h3cIPSecSaIndex=h3cIPSecSaIndex, h3cIPSecTunTotalRefreshes=h3cIPSecTunTotalRefreshes, h3cIPSecSaProtocol=h3cIPSecSaProtocol, H3cEncapMode=H3cEncapMode, h3cIPSecTrafficLocalProtocol=h3cIPSecTrafficLocalProtocol, h3cIPSecPolicySeqNum=h3cIPSecPolicySeqNum, h3cIPSecTunActiveTime=h3cIPSecTunActiveTime, h3cIPSecTunOutSaEspAuthAlgo=h3cIPSecTunOutSaEspAuthAlgo, h3cIPSecTunPolicyNum=h3cIPSecTunPolicyNum, h3cIPSecGlobalInAuthFails=h3cIPSecGlobalInAuthFails, h3cIPSecGlobalOutDrops=h3cIPSecGlobalOutDrops, h3cIPSecConformance=h3cIPSecConformance, h3cIPSecTunInAuthFails=h3cIPSecTunInAuthFails, h3cIPSecTunnelStop=h3cIPSecTunnelStop, h3cIPSecTunOutSaAhAuthAlgo=h3cIPSecTunOutSaAhAuthAlgo, h3cIPSecSaValue=h3cIPSecSaValue, h3cIPSecGlobalOutOctets=h3cIPSecGlobalOutOctets, h3cIPSecPolicyDelTrapCntl=h3cIPSecPolicyDelTrapCntl, h3cIPSecTunTooLongDropPkts=h3cIPSecTunTooLongDropPkts, h3cIPSecTunInSaEspAuthAlgo=h3cIPSecTunInSaEspAuthAlgo, h3cIPSecTunDiffHellmanGrp=h3cIPSecTunDiffHellmanGrp, h3cIPSecTunOutUncompOctets=h3cIPSecTunOutUncompOctets, h3cIPSecTunnelStatGroup=h3cIPSecTunnelStatGroup, h3cIPSecTunPolicyName=h3cIPSecTunPolicyName, h3cIPSecObjects=h3cIPSecObjects, h3cIPSecMonitor=h3cIPSecMonitor, h3cIPSecEncryFailFailure=h3cIPSecEncryFailFailure, h3cIPSecTunInReplayDropPkts=h3cIPSecTunInReplayDropPkts, h3cIPSecGlobalNoMemoryDropPkts=h3cIPSecGlobalNoMemoryDropPkts, h3cIPSecPolicyAdd=h3cIPSecPolicyAdd, h3cIPSecGlobalInDrops=h3cIPSecGlobalInDrops, h3cIPSecPolicyDetach=h3cIPSecPolicyDetach, h3cIPSecDecryFailFailure=h3cIPSecDecryFailFailure, 
h3cIPSecTrapCntlGroup=h3cIPSecTrapCntlGroup, h3cIPSecTunOutPkts=h3cIPSecTunOutPkts, h3cIPSecTrafficRemoteAddr1=h3cIPSecTrafficRemoteAddr1, h3cIPSecSaGroup=h3cIPSecSaGroup, H3cIPSecTunnelState=H3cIPSecTunnelState, h3cIPSecTunLifeSize=h3cIPSecTunLifeSize, h3cIPSecTunOutDropPkts=h3cIPSecTunOutDropPkts, H3cTrapStatus=H3cTrapStatus, h3cIPSecGroups=h3cIPSecGroups, h3cIPSecTrafficLocalPort=h3cIPSecTrafficLocalPort, h3cIPSecGlobalInOctets=h3cIPSecGlobalInOctets, h3cIPSecGlobalStatsGroup=h3cIPSecGlobalStatsGroup, h3cIPSecTunInDropPkts=h3cIPSecTunInDropPkts, h3cIPSecGlobalOutPkts=h3cIPSecGlobalOutPkts, h3cIPSecTunOutSaEncryptAlgo=h3cIPSecTunOutSaEncryptAlgo, H3cIPSecNegoType=H3cIPSecNegoType, h3cIPSecTrafficLocalAddr2=h3cIPSecTrafficLocalAddr2, h3cIPSecTrafficRemoteProtocol=h3cIPSecTrafficRemoteProtocol, h3cIPSecTrapObject=h3cIPSecTrapObject, h3cIPSecTunCurrentSaInstances=h3cIPSecTunCurrentSaInstances, h3cIPSecGlobalInvalidLenDropPkts=h3cIPSecGlobalInvalidLenDropPkts, h3cIPSecGlobalInReplayDrops=h3cIPSecGlobalInReplayDrops, h3cIPSecPolicyDel=h3cIPSecPolicyDel, h3cIPSecTunnelTableGroup=h3cIPSecTunnelTableGroup, h3cIPSecAuthFailureTrapCntl=h3cIPSecAuthFailureTrapCntl, H3cTrafficType=H3cTrafficType, h3cIPSecTunIfIndex=h3cIPSecTunIfIndex, h3cIPSecNoSaTrapCntl=h3cIPSecNoSaTrapCntl, h3cIPSecTunInDecompOctets=h3cIPSecTunInDecompOctets, h3cIPSecPolicyAddTrapCntl=h3cIPSecPolicyAddTrapCntl, h3cIPSecCompliance=h3cIPSecCompliance, h3cIPSecTunnelStopTrapCntl=h3cIPSecTunnelStopTrapCntl, h3cIPSecTunInPkts=h3cIPSecTunInPkts, h3cIPSecInvalidSaTrapCntl=h3cIPSecInvalidSaTrapCntl, h3cIPSecSaAuthAlgo=h3cIPSecSaAuthAlgo, h3cIPSecTrafficTable=h3cIPSecTrafficTable, h3cIPSecPolicySize=h3cIPSecPolicySize, h3cIPSecTrap=h3cIPSecTrap, h3cIPSecTunnelEntry=h3cIPSecTunnelEntry, h3cIPSecTunEncapMode=h3cIPSecTunEncapMode, h3cIPSecTrafficLocalType=h3cIPSecTrafficLocalType, h3cIPSecTunnelStatTable=h3cIPSecTunnelStatTable, h3cIPSecSaEntry=h3cIPSecSaEntry, h3cIPSecTrafficRemotePort=h3cIPSecTrafficRemotePort, 
h3cIPSecTrapCntl=h3cIPSecTrapCntl, h3cIPSecEncryFailureTrapCntl=h3cIPSecEncryFailureTrapCntl, h3cIPSecGlobalInDecompOctets=h3cIPSecGlobalInDecompOctets, h3cIPSecCompliances=h3cIPSecCompliances, h3cIPSecTunIKETunnelIndex=h3cIPSecTunIKETunnelIndex, h3cIPSecTunnelTable=h3cIPSecTunnelTable, h3cIPSecTrafficEntry=h3cIPSecTrafficEntry, h3cIPSecTunRemoteAddr=h3cIPSecTunRemoteAddr, H3cIPSecIDType=H3cIPSecIDType, h3cIPSecTrapObjectGroup=h3cIPSecTrapObjectGroup, h3cIPSecTunInitiator=h3cIPSecTunInitiator, h3cIPSecTunLifeTime=h3cIPSecTunLifeTime, h3cIPSecTrapGroup=h3cIPSecTrapGroup, H3cDiffHellmanGrp=H3cDiffHellmanGrp, H3cEncryptAlgo=H3cEncryptAlgo, h3cIPSecGlobalNoFindSaDropPkts=h3cIPSecGlobalNoFindSaDropPkts, h3cIPSecTunnelStart=h3cIPSecTunnelStart, h3cIPSecGlobalInvalidSaDropPkts=h3cIPSecGlobalInvalidSaDropPkts, h3cIPSecSaTable=h3cIPSecSaTable)
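# Every object exported above is registered under a numeric OID tuple. A
# small stdlib-only sketch of how those tuples correspond to the dotted
# notation shown by SNMP tooling (`oid_to_str` is a hypothetical helper for
# illustration, not part of this generated module):

```python
def oid_to_str(oid):
    """Render an OID tuple, e.g. (1, 3, 6, ...), in dotted notation."""
    return ".".join(str(sub_id) for sub_id in oid)

# h3cIPSecTunnelStart is registered at 1.3.6.1.4.1.2011.10.2.7.1.8.1.1
print(oid_to_str((1, 3, 6, 1, 4, 1, 2011, 10, 2, 7, 1, 8, 1, 1)))
```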
# File: scripts/pylint_custom_plugin/tests/test_files/enum_checker_acceptable.py
# Repo: vincenttran-msft/azure-sdk-for-python @ 348b56f9 (MIT license)
# Test file for enum checker
from enum import Enum
from six import with_metaclass
from azure.core import CaseInsensitiveEnumMeta
class EnumPython2(with_metaclass(CaseInsensitiveEnumMeta, str, Enum)):
    ONE = "one"
    TWO = "two"
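# `CaseInsensitiveEnumMeta` from `azure.core` makes string-valued enum
# lookups case-insensitive. A rough stdlib-only approximation of that
# behavior, using `Enum._missing_` instead of a metaclass (an illustration,
# not the actual azure-core implementation):

```python
from enum import Enum


class CaseInsensitiveEnumExample(str, Enum):
    ONE = "one"
    TWO = "two"

    @classmethod
    def _missing_(cls, value):
        # Fall back to a case-insensitive match on the member values.
        if isinstance(value, str):
            for member in cls:
                if member.value.lower() == value.lower():
                    return member
        return None
```

With this in place, `CaseInsensitiveEnumExample("ONE")` resolves to the `ONE` member even though its value is lowercase.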
# -*- coding: utf-8 -*-
# File: pav_propms/pav_property_management_solution/doctype/rent_request/rent_request.py
# Repo: alkuhlani/pav_propms @ 2b1a9f9b (MIT license)
# Copyright (c) 2021, Patrner Team and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from frappe.model.mapper import get_mapped_doc
class RentRequest(Document):
    def validate(self):
        frappe.msgprint("Hi")

    def get_feed(self):
        pass


@frappe.whitelist()
def make_request_for_rent_contract(source_name, target_doc=None):
    # doc = frappe.get_doc(dt, dn)
    # rc = frappe.new_doc("Rent Contract")
    # rc.rent_request = doc
    # #frappe.msgprint("Hi=={0}".format(doc.requster_name))
    # return rc
    mapped = get_mapped_doc("Rent Request", source_name, {
        "Rent Request": {
            "doctype": "Rent Contract",
        }
    }, target_doc)
    # frappe.msgprint(mapped)
    return mapped
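# frappe's `get_mapped_doc` copies a source document into a new document of a
# target doctype according to a mapping spec. A toy, frappe-free sketch of
# the idea, with plain dicts standing in for frappe documents (`map_doc` is
# illustrative only, not the real frappe API):

```python
def map_doc(source_doc, mapping_spec):
    """Copy source_doc's fields into a new doc of the mapped target doctype."""
    spec = mapping_spec[source_doc["doctype"]]
    target_doc = dict(source_doc)          # carry fields across
    target_doc["doctype"] = spec["doctype"]  # retarget the doctype
    return target_doc


rent_request = {"doctype": "Rent Request", "requester_name": "A. Tenant"}
contract = map_doc(rent_request, {"Rent Request": {"doctype": "Rent Contract"}})
```

Here `contract` keeps the requester field but carries the "Rent Contract" doctype, mirroring what the `get_mapped_doc` call above produces.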
# File: test/fisheries_test_io.py
# Repo: phargogh/invest-natcap.invest-3 @ ee96055a (BSD-3-Clause license)
import unittest
import os
import pprint
from numpy import testing
import numpy as np
import invest_natcap.fisheries.fisheries_io as fisheries_io
from invest_natcap.fisheries.fisheries_io import MissingParameter
data_directory = './invest-data/test/data/fisheries'
pp = pprint.PrettyPrinter(indent=4)
All_Parameters = ['Classes', 'Duration', 'Exploitationfraction', 'Fecundity',
                  'Larvaldispersal', 'Maturity', 'Regions', 'Survnaturalfrac',
                  'Weight', 'Vulnfishing']
Necessary_Params = ['Classes', 'Exploitationfraction', 'Maturity', 'Regions',
                    'Survnaturalfrac', 'Vulnfishing']
class TestPopulationParamsIO(unittest.TestCase):
    def test_parse_popu_params_blue_crab(self):
        uri = os.path.join(data_directory, 'CSVs/New_Params.csv')
        sexsp = 1
        pop_dict = fisheries_io._parse_population_csv(uri, sexsp)
        # Check that keys are correct
        Matching_Keys = [i for i in pop_dict.keys() if i in Necessary_Params]
        self.assertEqual(len(Matching_Keys), len(Necessary_Params))
        # Check that sexsp handled correctly
        self.assertEqual(len(pop_dict['Survnaturalfrac'][0]), sexsp)
        # Check that Class attribute lengths match
        self.assertEqual(
            len(pop_dict['Vulnfishing']), len(pop_dict['Maturity']))
        # Print Dictionary if debugging
        # pp.pprint(pop_dict)

    def test_parse_popu_params_sn(self):
        uri = os.path.join(data_directory, 'CSVs/TestCSV_SN_Syntax.csv')
        sexsp = 1
        pop_dict = fisheries_io._parse_population_csv(uri, sexsp)
        # Check that keys are correct
        Matching_Keys = [i for i in pop_dict.keys() if i in All_Parameters]
        self.assertEqual(len(Matching_Keys), len(All_Parameters))
        # Check that sexsp handled correctly
        self.assertEqual(len(pop_dict['Survnaturalfrac'][0]), sexsp)
        # Check that Class attribute lengths match
        self.assertEqual(
            len(pop_dict['Vulnfishing']), len(pop_dict['Maturity']))
        # Print Dictionary if debugging
        # pp.pprint(pop_dict)

    def test_parse_popu_params_ss(self):
        uri = os.path.join(data_directory, 'CSVs/TestCSV_SS_Syntax.csv')
        sexsp = 2
        pop_dict = fisheries_io._parse_population_csv(uri, sexsp)
        # Check that keys are correct
        Matching_Params = [i for i in pop_dict.keys() if i in All_Parameters]
        self.assertEqual(len(Matching_Params), len(All_Parameters))
        # Check that sexsp handled correctly
        self.assertEqual(len(pop_dict['Survnaturalfrac'][0]), sexsp)
        # Check that Class attribute lengths match
        self.assertEqual(
            len(pop_dict['Vulnfishing']), len(pop_dict['Maturity']))
        # Print Dictionary if debugging
        # pp.pprint(pop_dict)

    def test_read_popu_params(self):
        # Check that throws error when necessary information does not exist
        # Test with not all necessary params
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Syntax_fail1.csv')
        args = {'population_csv_uri': population_csv_uri, 'sexsp': 1}
        with self.assertRaises(MissingParameter):
            fisheries_io.read_population_csv(args, population_csv_uri)

        # Test Stage-based without Duration vector
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Syntax_fail2.csv')
        args['population_csv_uri'] = population_csv_uri
        args['recruitment_type'] = 'Beverton-Holt'
        args['population_type'] = 'Stage-Based'
        with self.assertRaises(MissingParameter):
            fisheries_io.read_population_csv(args, population_csv_uri)

        # Test B-H / Weight without Weight vector
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Syntax_fail3.csv')
        args['population_csv_uri'] = population_csv_uri
        args['spawn_units'] = 'Weight'
        with self.assertRaises(MissingParameter):
            fisheries_io.read_population_csv(args, population_csv_uri)

        # Test Fecundity without Fecundity vector
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Syntax_fail3.csv')
        args['population_csv_uri'] = population_csv_uri
        args['recruitment_type'] = 'Fecundity'
        args['harvest_units'] = 'Weight'
        with self.assertRaises(MissingParameter):
            fisheries_io.read_population_csv(args, population_csv_uri)
        '''
        # Check that throws error when incorrect information exists
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Semantics_fail1.csv')
        args = {'population_csv_uri': population_csv_uri, 'sexsp': 1}
        self.assertRaises(
            MissingParameter, fisheries_io.read_population_csv(args))
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Semantics_fail2.csv')
        args = {'population_csv_uri': population_csv_uri, 'sexsp': 1}
        self.assertRaises(
            MissingParameter, fisheries_io.read_population_csv(args))
        population_csv_uri = os.path.join(data_directory, 'CSVs/Fail/TestCSV_SN_Semantics_fail3.csv')
        args = {'population_csv_uri': population_csv_uri, 'sexsp': 1}
        self.assertRaises(
            MissingParameter, fisheries_io.read_population_csv(args))
        '''
class TestMigrationIO(unittest.TestCase):
    def test_parse_migration(self):
        uri = os.path.join(data_directory, 'migration/')
        args = {
            'migr_cont': True,
            'migration_dir': uri
        }
        class_list = ['larva', 'adult']
        mig_dict = fisheries_io._parse_migration_tables(args, class_list)
        # pp.pprint(mig_dict)
        self.assertIsInstance(mig_dict['adult'], np.matrix)
        self.assertEqual(
            mig_dict['adult'].shape[0], mig_dict['adult'].shape[1])

    def test_read_migration(self):
        uri = os.path.join(data_directory, 'migration/')
        args = {
            "migration_dir": uri,
            "migr_cont": True,
        }
        class_list = ['larva', 'other', 'other2', 'adult']
        region_list = ['Region 1', 'Region 2', '...', 'Region N']
        mig_dict = fisheries_io.read_migration_tables(
            args, class_list, region_list)
        test_matrix_dict = fisheries_io._parse_migration_tables(
            args, ['larva'])
        # pp.pprint(test_matrix_dict)
        # pp.pprint(mig_dict)
        testing.assert_array_equal(
            mig_dict['Migration'][0], test_matrix_dict['larva'])
class TestSingleParamsIO(unittest.TestCase):
    def test_verify_single_params(self):
        args = {
            'workspace_dir': '',
            'aoi_uri': None,
            'population_type': None,
            'sexsp': 1,
            'do_batch': False,
            'total_init_recruits': -1.0,
            'total_timesteps': -1,
            'recruitment_type': 'Ricker',
            'spawn_units': 'Individuals',
            'alpha': None,
            'beta': None,
            'total_recur_recruits': None,
            'migr_cont': True,
            'harvest_units': None,
            'frac_post_process': None,
            'unit_price': None,
            'val_cont': True,
        }

        # Check that path exists and user has read/write permissions along path
        with self.assertRaises(OSError):
            fisheries_io._verify_single_params(args)

        # Check timesteps positive number
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['total_timesteps'] = 100

        # Check total_init_recruits for non-negative float
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['total_init_recruits'] = 1.2

        # Check recruitment type's corresponding parameters exist
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['alpha'] = -1.0
        args['beta'] = -1.0
        args['total_recur_recruits'] = -1.0

        # If BH or Ricker: Check alpha positive float
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['alpha'] = 1.0

        # Check beta positive float
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['beta'] = 1.0

        # Check total_recur_recruits is non-negative float
        args['recruitment_type'] = 'Fixed'
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['total_recur_recruits'] = 100.0

        # If Harvest: Check frac_post_process float between [0,1]
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['frac_post_process'] = 0.2

        # If Harvest: Check unit_price non-negative float
        with self.assertRaises(ValueError):
            fisheries_io._verify_single_params(args, create_outputs=False)
        args['unit_price'] = 20.2

        # Check file extension? (maybe try / except would be better)
        # Check shapefile subregions match regions in population parameters file
        args['aoi_uri'] = None
class TestFetchArgs(unittest.TestCase):
    def test_fetch_args(self):
        csv_uri = os.path.join(data_directory, 'CSVs/TestCSV_SN_Syntax.csv')
        mig_uri = os.path.join(data_directory, 'migration/')
        args = {
            'population_csv_uri': csv_uri,
            'migr_cont': True,
            'migration_dir': mig_uri,
            'workspace_dir': '',
            'aoi_uri': None,
            'population_type': "Stage-Based",
            'sexsp': 'No',
            'do_batch': False,
            'total_init_recruits': 1.2,
            'total_timesteps': 100,
            'recruitment_type': 'Ricker',
            'spawn_units': 'Individuals',
            'alpha': 1.0,
            'beta': 1.2,
            'total_recur_recruits': 100.0,
            'harvest_units': "Weight",
            'frac_post_process': 0.2,
            'unit_price': 20.2,
            'val_cont': True,
        }
        vars_dict = fisheries_io.fetch_args(args, create_outputs=False)
        # pp.pprint(vars_dict)
        # with self.assertRaises():
        #     fisheries_io.fetch_args(args)

    def test_fetch_args2(self):
        csv_dir = os.path.join(data_directory, 'CSVs/Multiple_CSV_Test')
        mig_uri = os.path.join(data_directory, 'migration/')
        workspace_dir = ''
        args = {
            'population_csv_dir': csv_dir,
            'migr_cont': True,
            'migration_dir': mig_uri,
            'workspace_dir': workspace_dir,
            'aoi_uri': None,
            'population_type': "Stage-Based",
            'sexsp': 'No',
            'do_batch': True,
            'total_init_recruits': 1.2,
            'total_timesteps': 100,
            'recruitment_type': 'Ricker',
            'spawn_units': 'Individuals',
            'alpha': 1.0,
            'beta': 1.2,
            'total_recur_recruits': 100.0,
            'harvest_units': "Weight",
            'frac_post_process': 0.2,
            'unit_price': 20.2,
            'val_cont': True,
        }
        # model_list = fisheries_io.fetch_args(args)
        # pp.pprint(model_list)
        # with self.assertRaises():
        #     fisheries_io.fetch_args(args)
        # os.removedirs(os.path.join(args['workspace_dir'], 'output'))
class TestCreateCSV(unittest.TestCase):
    def setUp(self):
        self.vars_dict = {
            'workspace_dir': 'path/to/workspace_dir',
            'output_dir': os.getcwd(),
            # 'aoi_uri': 'path/to/aoi_uri',
            'total_timesteps': 15,
            'population_type': 'Age-Based',
            'sexsp': 2,
            'do_batch': False,
            'spawn_units': 'Weight',
            'total_init_recruits': 100.0,
            'recruitment_type': 'Fixed',
            'alpha': 3.0,
            'beta': 4.0,
            'total_recur_recruits': 1.0,
            'migr_cont': True,
            'val_cont': True,
            'harvest_units': 'Individuals',
            'frac_post_process': 0.5,
            'unit_price': 5.0,

            # Pop Params
            # 'population_csv_uri': 'path/to/csv_uri',
            'Survnaturalfrac': np.ones([2, 2, 2]) * 0.5,  # Regions, Sexes, Classes
            'Classes': np.array(['larva', 'adult']),
            'Vulnfishing': np.array([[0.5, 0.5], [0.5, 0.5]]),
            'Maturity': np.array([[0.0, 1.0], [0.0, 1.0]]),
            'Duration': np.array([[2, 3], [2, 3]]),
            'Weight': np.array([[0.1, 1.0], [0.1, 1.0]]),
            'Fecundity': np.array([[0.1, 1.0], [0.1, 2.0]]),
            'Regions': np.array(['r1', 'r2']),
            'Exploitationfraction': np.array([0.25, 0.5]),
            'Larvaldispersal': np.array([0.5, 0.5]),

            # Mig Params
            # 'migration_dir': 'path/to/mig_dir',
            'Migration': [np.eye(2), np.eye(2)],

            # Derived Params
            'equilibrate_cycle': 10,
            'Survtotalfrac': np.array([[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]]]),  # Index Order: class, sex, region
            'G_survtotalfrac': np.ones([2, 2, 2]),  # (same)
            'P_survtotalfrac': np.ones([2, 2, 2]),  # (same)
            'N_tasx': np.ones([15, 2, 2, 2]),  # Index Order: time, class, sex, region
            'H_tx': np.ones([15, 2]),
            'V_tx': np.ones([15, 2]) * 5.0,
        }

    def test_create_csv(self):
        # fisheries_io._create_csv(self.vars_dict)
        pass
class TestCreateHTML(unittest.TestCase):
    def setUp(self):
        self.vars_dict = {
            'workspace_dir': 'path/to/workspace_dir',
            'output_dir': os.getcwd(),
            # 'aoi_uri': 'path/to/aoi_uri',
            'total_timesteps': 15,
            'population_type': 'Age-Based',
            'sexsp': 2,
            'do_batch': False,
            'spawn_units': 'Weight',
            'total_init_recruits': 100.0,
            'recruitment_type': 'Fixed',
            'alpha': 3.0,
            'beta': 4.0,
            'total_recur_recruits': 1.0,
            'migr_cont': True,
            'val_cont': True,
            'harvest_units': 'Individuals',
            'frac_post_process': 0.5,
            'unit_price': 5.0,
            # Pop Params
            # 'population_csv_uri': 'path/to/csv_uri',
            'Survnaturalfrac': np.ones([2, 2, 2]) * 0.5,  # Regions, Sexes, Classes
            'Classes': np.array(['larva', 'adult']),
            'Vulnfishing': np.array([[0.5, 0.5], [0.5, 0.5]]),
            'Maturity': np.array([[0.0, 1.0], [0.0, 1.0]]),
            'Duration': np.array([[2, 3], [2, 3]]),
            'Weight': np.array([[0.1, 1.0], [0.1, 1.0]]),
            'Fecundity': np.array([[0.1, 1.0], [0.1, 2.0]]),
            'Regions': np.array(['r1', 'r2']),
            'Exploitationfraction': np.array([0.25, 0.5]),
            'Larvaldispersal': np.array([0.5, 0.5]),
            # Mig Params
            # 'migration_dir': 'path/to/mig_dir',
            'Migration': [np.eye(2), np.eye(2)],
            # Derived Params
            'equilibrate_cycle': 10,
            'Survtotalfrac': np.array([[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]]]),  # Index Order: class, sex, region
            'G_survtotalfrac': np.ones([2, 2, 2]),  # (same)
            'P_survtotalfrac': np.ones([2, 2, 2]),  # (same)
            'N_tasx': np.ones([15, 2, 2, 2]),  # Index Order: time, class, sex, region
            'H_tx': np.ones([15, 2]),
            'V_tx': np.ones([15, 2]) * 5.0,
        }

    def test_create_html(self):
        # fisheries_io._create_html(self.vars_dict)
        pass
class TestCreateAOI(unittest.TestCase):
    def setUp(self):
        self.vars_dict = {
            'workspace_dir': 'path/to/workspace_dir',
            'output_dir': os.getcwd(),
            'aoi_uri': os.path.join(data_directory, 'Galveston_Subregion.shp'),
            'Classes': np.array(['larva']),
            'Regions': np.array(['1']),
            'N_tasx': np.ones([15, 2, 2, 2]),
            'H_tx': np.ones([15, 1]),
            'V_tx': np.ones([15, 1]) * 5.0,
        }

    def test_create_aoi(self):
        # fisheries_io._create_aoi(self.vars_dict)
        pass
if __name__ == '__main__':
    unittest.main()
# blog/admin.py (DongQinglin/djangoblog, Apache-2.0)
from django.contrib import admin
from .models import Banner, ArticleTag, ArticleKind, Article, Link, Recommend

# Register your models here.


@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
    # Fields to show in the changelist
    list_display = ('id', 'kind', 'title', 'recommend', 'author', 'viewscount', 'created_time')
    list_per_page = 50
    ordering = ('created_time',)
    # Fields that link to the admin edit page
    list_display_links = ('id', 'title')


@admin.register(Banner)
class BannerAdmin(admin.ModelAdmin):
    list_display = ('id', 'text_info', 'img', 'link_url', 'is_active')


@admin.register(ArticleKind)
class ArticleKindAdmin(admin.ModelAdmin):
    list_display = ('id', 'name', 'index')


@admin.register(ArticleTag)
class ArticleTagAdmin(admin.ModelAdmin):
    list_display = ('id', 'name')


@admin.register(Recommend)
class RecommendAdmin(admin.ModelAdmin):
    list_display = ('id', 'name')


@admin.register(Link)
class LinkAdmin(admin.ModelAdmin):
    list_display = ('id', 'name', 'linkurl')
# public_admin/admin.py (jhrdt/django-public-admin, MIT)
from django.contrib.admin import ModelAdmin
from public_admin.sites import PublicAdminSite


class PublicModelAdmin(ModelAdmin):
    """Mimics Django's native ModelAdmin, but filters out URLs that should
    not exist in a public admin and deals with request-based permissions."""

    def has_view_permission(self, request, obj=None):
        """Only allow view requests if the method is GET."""
        return request.method == "GET"

    def has_add_permission(self, request):
        """Deny permission to any request trying to add new objects."""
        return False

    def has_change_permission(self, request, obj=None):
        """Deny permission to any request trying to change objects."""
        return False

    def has_delete_permission(self, request, obj=None):
        """Deny permission to any request trying to delete objects."""
        return False

    def get_urls(self):
        """Filter out the URLs that should not exist in a public admin."""
        return [url for url in super().get_urls() if PublicAdminSite.valid_url(url)]
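# The read-only pattern above (allow GET, deny every write hook) does not depend
# on Django itself; the following is a minimal standalone sketch of the same
# policy, with illustrative names only (FakeRequest/ReadOnlyPolicy are not part
# of django-public-admin):

```python
class FakeRequest:
    """Stand-in for an HTTP request; only the `method` attribute is used."""
    def __init__(self, method):
        self.method = method


class ReadOnlyPolicy:
    """Re-statement of the permission hooks used by PublicModelAdmin."""
    def has_view_permission(self, request, obj=None):
        return request.method == "GET"

    def has_add_permission(self, request):
        return False

    def has_change_permission(self, request, obj=None):
        return False

    def has_delete_permission(self, request, obj=None):
        return False


policy = ReadOnlyPolicy()
print(policy.has_view_permission(FakeRequest("GET")))   # True
print(policy.has_view_permission(FakeRequest("POST")))  # False
print(policy.has_add_permission(FakeRequest("POST")))   # False
```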
#!/usr/bin/env python
# tests/test_adaptor/test_model.py (richlewis42/pandas-learn, MIT)
# -*- coding: utf-8 -*-
#
# This file is part of pandas-learn
# https://github.com/RichLewis42/pandas-learn
#
# Licensed under the MIT license:
# http://www.opensource.org/licenses/MIT
# Copyright (c) 2015, Rich Lewis <rl403@cam.ac.uk>
"""
tests.test_adaptor.model
~~~~~~~~~~~~~~~~~~~~~~~~

Tests for the model adaptor module of pdlearn.
"""
from pdlearn.adaptor.model import model, fit_method


class Parent(object):
    def shout(self):
        return 'parent'

    def fit(self, X, y=None):
        pass


@fit_method
def mock_fit(self, X, y=None):
    pass


@model
def model_mock(cls):
    cls.fit = mock_fit
    return cls


@model_mock
class Child(Parent):
    def __init__(self, feature_names=None, target_names=None):
        if feature_names:
            self.feature_names_ = feature_names
        if target_names:
            self.target_names_ = target_names

    def shout(self):
        return 'child'


class TestModel(object):
    """ Tests for pdlearn.adaptor.model """

    def test_unyouthanize(self):
        child = Child()
        assert child.shout() == 'child'
        with child._unyouthanize():
            assert child.shout() == 'parent'
        assert child.shout() == 'child'

    def test_pandas_mode(self):
        assert not Child().pandas_mode_
        assert Child(feature_names=['a', 'b']).pandas_mode_
        assert Child(target_names=['c', 'd']).pandas_mode_
        assert Child(feature_names=['a', 'b'],
                     target_names=['c', 'd']).pandas_mode_
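# The tests above rely on a class-decorator pattern: `model` turns `model_mock`
# into a decorator that rewrites the class it is applied to. The sketch below
# re-creates that mechanic with hypothetical stand-ins (these are NOT pdlearn's
# real implementations of `model` and `fit_method`):

```python
def fit_method(func):
    # Tag the function, as a real adaptor might, then return it unchanged.
    func._is_fit_method = True
    return func


def model(transform):
    # Turn `transform` (a function taking a class) into a class decorator.
    def class_decorator(cls):
        return transform(cls)
    return class_decorator


@fit_method
def mock_fit(self, X, y=None):
    return 'fitted'


@model
def model_mock(cls):
    cls.fit = mock_fit
    return cls


@model_mock
class Child:
    pass


print(Child().fit(X=None))  # fitted
```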
# tests/conftest.py (podaac/concise, Apache-2.0)
import pytest
def pytest_addoption(parser):
    parser.addoption(
        '--keep-tmp',
        action='store_true',
        help='Keep temporary directory after testing. Useful for debugging.')


@pytest.fixture(scope='class')
def pass_options(request):
    request.cls.KEEP_TMP = request.config.getoption('--keep-tmp')
# coordinate.py (NomikOS/learning, Unlicense)
# -*- coding: utf-8 -*-
"""
Created on Tue May 10 08:18:02 2016
@author: WELG
"""
class Coordinate(object):
def __init__(self, x, y):
self.x = x
self.y = y
def distance(self, other):
x_diff_sq = (self.x-other.x)**2
y_diff_sq = (self.y-other.y)**2
return (x_diff_sq + y_diff_sq)**0.5
def __str__(self):
return "<" + str(self.x) + "," + str(self.y) + ">"
def __sub__(self, other):
return Coordinate(self.x - other.x, self.y - other.y)
c = Coordinate(3, 4)
origin = Coordinate(0, 0)
print(c)
print(origin)
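# The distance method is a plain Euclidean distance. A self-contained check
# (the class is restated here so the snippet runs on its own):

```python
class Coordinate(object):
    """Restatement of the Coordinate class above, for a standalone check."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance(self, other):
        x_diff_sq = (self.x - other.x) ** 2
        y_diff_sq = (self.y - other.y) ** 2
        return (x_diff_sq + y_diff_sq) ** 0.5


c = Coordinate(3, 4)
origin = Coordinate(0, 0)
print(c.distance(origin))  # 5.0 (the 3-4-5 right triangle)
```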
# bookmark/serializers.py (UNIZAR-30226-2021-05/Lector--Backend, MIT)
from rest_framework import serializers
from .models import Bookmark, Usuario


class BookmarkSerializer(serializers.ModelSerializer):
    """
    API endpoint
    """
    class Meta:
        model = Bookmark
        fields = [
            "esAnotacion",
            "cuerpo",
            "offset",
            "titulo",
            "Libro",
            "Usuario",
            "id",
        ]


class UsuarioIDSerializer(serializers.ModelSerializer):
    """
    API endpoint
    """
    class Meta:
        model = Usuario
        fields = [
            "id",
        ]
# models/objects.py (YTXIRE/able_crm_concrete_accounting_python_tests, Apache-2.0)
from peewee import CharField, IntegerField
from core.db.db import BaseModel


class Objects(BaseModel):
    name = CharField(unique=True)
    is_archive = IntegerField()

    class Meta:
        table_name = 'objects'
# thenewboston_node/business_logic/tests/test_blockchain_base/test_get_root_account_file.py
# (MLonTNB/thenewboston-node, MIT)
import pytest
from thenewboston_node.business_logic.tests.mocks.utils import patch_blockchain_states


def test_can_get_account_root_file_count(blockchain_base, blockchain_state_10, blockchain_state_20):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        arf_count = blockchain_base.get_account_root_file_count()
        assert arf_count == 2


def test_can_yield_blockchain_states_reversed(blockchain_base, blockchain_state_10, blockchain_state_20):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        account_root_files = list(blockchain_base.yield_blockchain_states_reversed())
        assert account_root_files == [blockchain_state_20, blockchain_state_10]


def test_can_get_last_blockchain_state(blockchain_base, blockchain_state_10, blockchain_state_20):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        last_arf = blockchain_base.get_last_blockchain_state()
        assert last_arf == blockchain_state_20


def test_last_account_root_file_is_none(blockchain_base, blockchain_state_10, blockchain_state_20):
    with patch_blockchain_states(blockchain_base, []):
        last_arf = blockchain_base.get_last_blockchain_state()
        assert last_arf is None


def test_can_get_first_blockchain_state(blockchain_base, blockchain_state_10, blockchain_state_20):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        first_arf = blockchain_base.get_first_blockchain_state()
        assert first_arf == blockchain_state_10


def test_first_account_root_file_is_none(blockchain_base):
    with patch_blockchain_states(blockchain_base, []):
        first_arf = blockchain_base.get_first_blockchain_state()
        assert first_arf is None


def test_get_closest_blockchain_state_snapshot_validates_excludes_block_number(blockchain_base):
    with pytest.raises(ValueError):
        blockchain_base.get_closest_blockchain_state_snapshot(excludes_block_number=-2)


def test_blockchain_genesis_state_not_found(blockchain_base):
    with patch_blockchain_states(blockchain_base, []):
        initial_arf = blockchain_base.get_closest_blockchain_state_snapshot(excludes_block_number=-1)
        assert initial_arf is None


def test_can_get_blockchain_genesis_state(blockchain_base, blockchain_genesis_state, blockchain_state_10):
    with patch_blockchain_states(blockchain_base, [blockchain_genesis_state, blockchain_state_10]):
        retrieved_arf = blockchain_base.get_closest_blockchain_state_snapshot(excludes_block_number=-1)
        assert retrieved_arf == blockchain_genesis_state


@pytest.mark.parametrize('excludes_block_number', (11, 15, 20))
def test_can_exclude_last_from_closest_account_root_files(
    blockchain_base, excludes_block_number, blockchain_state_10, blockchain_state_20
):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        retrieved_arf = blockchain_base.get_closest_blockchain_state_snapshot(
            excludes_block_number=excludes_block_number
        )
        assert retrieved_arf == blockchain_state_10


def test_exclude_non_existing_account_root_file_from_closest(
    blockchain_base, blockchain_state_10, blockchain_state_20
):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        retrieved_arf = blockchain_base.get_closest_blockchain_state_snapshot(excludes_block_number=21)
        assert retrieved_arf == blockchain_state_20


@pytest.mark.parametrize('excludes_block_number', (0, 5, 10))
def test_closest_account_root_file_not_found(
    blockchain_base, excludes_block_number, blockchain_state_10, blockchain_state_20
):
    with patch_blockchain_states(blockchain_base, [blockchain_state_10, blockchain_state_20]):
        retrieved_arf = blockchain_base.get_closest_blockchain_state_snapshot(
            excludes_block_number=excludes_block_number
        )
        assert retrieved_arf is None
# lgrp/migrations/0002_auto_20180523_1724.py (paleocore/paleocore110, MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.11.1 on 2018-05-23 17:24
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('lgrp', '0001_initial'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='collectioncode',
            options={'ordering': ['name'], 'verbose_name': '06-LGRP Collection Code'},
        ),
        migrations.AlterModelOptions(
            name='person',
            options={'ordering': ['name'], 'verbose_name': '05-LGRP Person', 'verbose_name_plural': '05-LGRP People'},
        ),
        migrations.AlterModelOptions(
            name='taxon',
            options={'ordering': ['rank__ordinal', 'name'], 'verbose_name': '09-LGRP Taxon', 'verbose_name_plural': '09-LGRP Taxa'},
        ),
        migrations.AlterField(
            model_name='biology',
            name='identified_by',
            field=models.CharField(blank=True, choices=[('D. Braun', 'D. Braun'), ('J. Thompson', 'J. Thompson'), ('E. Scott', 'E. Scott'), ('E. Locke', 'E. Locke'), ('A.E. Shapiro', 'A.E. Shapiro'), ('A.W. Gentry', 'A.W. Gentry'), ('B.J. Schoville', 'B.J. Schoville'), ('B.M. Latimer', 'B.M. Latimer'), ('C. Denys', 'C. Denys'), ('C.A. Lockwood', 'C.A. Lockwood'), ('D. Geraads', 'D. Geraads'), ('D.C. Johanson', 'D.C. Johanson'), ('E. Delson', 'E. Delson'), ('B. Villmoare', 'B. Villmoare'), ('E.S. Vrba', 'E.S. Vrba'), ('F.C. Howell', 'F.C. Howell'), ('G. Petter', 'G. Petter'), ('G. Suwa', 'G. Suwa'), ('G.G. Eck', 'G.G. Eck'), ('H.B. Krentza', 'H.B. Krentza'), ('H.B. Wesselman', 'H.B. Wesselman'), ('H.B.S. Cooke', 'H.B.S. Cooke'), ('Institute Staff', 'Institute Staff'), ('J.C. Rage', 'J.C. Rage'), ('K.E. Reed', 'K.E. Reed'), ('L.A. Werdelin', 'L.A. Werdelin'), ('L.J. Flynn', 'L.J. Flynn'), ('M. Sabatier', 'M. Sabatier'), ('M.E. Lewis', 'M.E. Lewis'), ('N. Fessaha', 'N. Fessaha'), ('P. Brodkorb', 'P. Brodkorb'), ('R. Bobe-Quinteros', 'R. Bobe-Quinteros'), ('R. Geze', 'R. Geze'), ('R.L. Bernor', 'R.L. Bernor'), ('S.R. Frost', 'S.R. Frost'), ('T.D. White', 'T.D. White'), ('T.K. Nalley', 'T.K. Nalley'), ('V. Eisenmann', 'V. Eisenmann'), ('W.H. Kimbel', 'W.H. Kimbel'), ('Z. Alemseged', 'Z. Alemseged'), ('S. Oestmo', 'S. Oestmo'), ('J. Rowan', 'J. Rowan'), ('C.J. Campisano', 'C.J. Campisano'), ('J. Robinson', 'J. Robinson'), ('I. Smail', 'I. Smail'), ('I. Lazagabaster', 'I. Lazagabaster'), ('A. Rector', 'A. Rector')], max_length=100, null=True),
        ),
    ]
# src/server_CFD/definitions.py (robertpardillo/Funnel, MIT)
import os
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
SERVER = 'CFD'
IS_MULTITHREADING = 0
# src/base.py (mchange/gitlab-migrator, MIT)
# -*- coding: utf-8 -*-
import json


def storage(name, data):
    with open('tmp/%s.json' % name, 'w', encoding='UTF-8') as f:
        json.dump(data, f, sort_keys=False, indent=2, ensure_ascii=False)
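# The same json.dump options can be checked in memory with json.dumps;
# ensure_ascii=False keeps non-ASCII characters literal instead of
# \u-escaping them (sample data below is made up):

```python
import json

# Same options as storage() above, but serializing to a string instead of a file.
text = json.dumps({'name': 'München', 'id': 1},
                  sort_keys=False, indent=2, ensure_ascii=False)
print(text)
```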
# geosquizzy/fsm/selection.py (LowerSilesians/geo-squizzy, MIT)
# TODO selection percentage pattern works, but run and done methods really slow down
# TODO whole computation when we traverse really big data
# TODO the EconomizeFiniteStateMachine class was provided in order to achieve better performance
import math


class SelectionFiniteStateMachine:
    def __init__(self, *args, **kwargs):
        self.empty_passage = False
        self.obj_temp_size = 0
        self.obj_size = 0
        self.visited = -1
        self.visited_total = -1
        self.to_visit = 0
        self.space = 0
        self.intersections = -1
        self.items = 0
        self.doc_len = kwargs.get('len', 0)
        self.percentage = kwargs.get('percentage', 0)
        self.immersion = {'features': [0, 1],
                          'features_obj': [0, 1, 0]}

    def run(self, anatomy=None, blocked=None):
        """
        Once the data stream nests itself in the features array, we start checking
        whether the stream is inside a features item or outside, before entering the next item.

        :param anatomy: DataAnatomyFiniteStateMachine.stack [int, ...]
        :return: bool()
        """
        if self.immersion['features'] == anatomy and not self.empty_passage:
            self.visited_total += 1
            self.intersections += 1
            self.empty_passage = True
            if blocked:
                self.obj_temp_size = 0
            else:
                self.visited += 1
            self.__adjust_information__()
            if self.intersections >= self.space:
                self.intersections = 0
        elif self.immersion['features_obj'] == anatomy[:3]:
            self.empty_passage = False
            self.obj_temp_size += 1
        return self.intersections >= 1

    def done(self):
        return self.visited >= self.to_visit and self.to_visit != 0

    def __adjust_information__(self):
        if self.obj_temp_size > self.obj_size:
            self.obj_size = self.obj_temp_size
            self.__calculate_percentage__()
            self.__calculate_space__()
        self.obj_temp_size = 0

    def __calculate_percentage__(self):
        self.items = math.floor(self.doc_len / self.obj_size)
        to_visit = math.floor(((self.items * self.percentage) / 100) - self.visited)
        self.to_visit = (1, to_visit)[to_visit >= 0]

    def __calculate_space__(self):
        self.space = math.floor(self.items / (self.to_visit + self.visited))
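# The bookkeeping in __calculate_percentage__ and __calculate_space__ reduces to
# three formulas: items = floor(doc_len / obj_size), to_visit = floor(items *
# percentage / 100 - visited), and visits spaced every floor(items / (to_visit +
# visited)) items. A standalone check with made-up numbers:

```python
import math

# Made-up inputs: a 10,000-character document, the largest feature seen so far
# is 100 characters, we want to sample 10% of the features, none visited yet.
doc_len, obj_size, percentage, visited = 10_000, 100, 10, 0

items = math.floor(doc_len / obj_size)                     # 100 estimated features
to_visit = math.floor(items * percentage / 100 - visited)  # 10 features to sample
space = math.floor(items / (to_visit + visited))           # visit every 10th feature

print(items, to_visit, space)  # 100 10 10
```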
# lib/utils/convert.py (lin-zju/descriptor-space, MIT)
import torch
import numpy as np
import cv2
def tonumpyimg(img):
"""
Convert a normalized tensor image to unnormalized uint8 numpy image
For single channel image, no unnormalization is done.
:param img: torch, normalized, (3, H, W), (H, W)
:return: numpy: (H, W, 3), (H, W). uint8
"""
return touint8(tonumpy(unnormalize_torch(img)))
def tonumpy(img):
"""
Convert torch image map to numpy image map
Note the range is not change
:param img: tensor, shape (C, H, W), (H, W)
:return: numpy, shape (H, W, C), (H, W)
"""
if len(img.size()) == 2:
return img.cpu().detach().numpy()
return img.permute(1, 2, 0).cpu().detach().numpy()
def touint8(img):
"""
Convert float numpy image to uint8 image
:param img: numpy image, float, (0, 1)
:return: uint8 image
"""
img = img * 255
return img.astype(np.uint8)
def normalize_torch(img, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]):
"""
Normalize a torch image.
:param img: (3, H, W), in range (0, 1)
"""
img = img.clone()
img -= torch.tensor(mean).view(3, 1, 1)
img /= torch.tensor(std).view(3, 1, 1)
return img
def unnormalize_torch(img, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]):
"""
Convert a normalized Tensor image to unnormalized form
For single channel image, no normalization is done.
:param img: (C, H, W), (H, W)
"""
if img.size()[0] == 3:
img = img.clone()
img *= torch.Tensor(std).view(3, 1, 1)
img += torch.Tensor(mean).view(3, 1, 1)
return img
def gray2RGB(img_raw):
"""
Convert a gray image to RGB
:param img_raw: (H, W, 3) or (H, W), uint8, numpy
:return: (H, W, 3)
"""
if len(img_raw.shape) == 2:
img_raw = np.repeat(img_raw[:, :, None], 3, axis=2)
if img_raw.shape[2] > 3:
img_raw = img_raw[:, :, :3]
return img_raw
def color_scale(attention):
"""
Visualize a attention map
:param scale_map: (C, H, W), attention map, softmaxed
:return: (3, H, W), colored version
"""
colors = torch.Tensor([
[1, 0, 0], # red
[0, 1, 0], # green
[0, 0, 1], # blue
[0, 0, 0], # black
]).float()
# (H, W)
attention = torch.argmax(attention, dim=0)
# (H, W, C)
color_map = colors[attention]
color_map = color_map.permute(2, 0, 1)
return color_map
def warp_torch(map, H):
"""
Warp a torch image.
:param map: either (C, H, W) or (H, W)
:param H: (3, 3)
:return: warped iamge, (C, H, W) or (H, W)
"""
map = tonumpy(map)
h, w = map.shape[-2:]
map = cv2.warpPerspective(map, H, dsize=(w, h))
return totensor(map)
def torange(array, low, high):
"""
Render an array to value range (low, high)
:param array: any array
:param low, high: the range
:return: new array
"""
    amin, amax = array.min(), array.max()  # avoid shadowing the built-in min/max
    # normalized to [0, 1]
    array = array - amin
    array = array / (amax - amin)
    # to (low, high)
    array = array * (high - low) + low
return array
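The min-max rescaling above reduces to two affine steps. A standalone sketch (note a constant array would divide by zero here, same as in the original):

```python
import numpy as np

def torange(array, low, high):
    # min-max rescale into [low, high]
    mn, mx = array.min(), array.max()
    array = (array - mn) / (mx - mn)
    return array * (high - low) + low

a = np.array([0.0, 5.0, 10.0])
print(torange(a, -1.0, 1.0).tolist())  # -> [-1.0, 0.0, 1.0]
```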
def tofloat(img):
"""
Convert a uint8 image to float image
:param img: numpy image, uint8
:return: float image
"""
    return img.astype(np.float64) / 255  # np.float was removed in NumPy 1.24; use a concrete dtype
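`touint8` and `tofloat` form an (almost) inverse pair; the only loss is the truncation in `astype(np.uint8)`. A standalone round-trip check:

```python
import numpy as np

def touint8(img):
    # float (0, 1) -> uint8; astype truncates, it does not round
    return (img * 255).astype(np.uint8)

def tofloat(img):
    # uint8 -> float (0, 1)
    return img.astype(np.float64) / 255

x = np.array([0.0, 0.5, 1.0])
u = touint8(x)
print(u.tolist())        # -> [0, 127, 255]  (0.5 * 255 = 127.5 truncates to 127)
print(tofloat(u).max())  # -> 1.0
```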
def tonumpy_batch(imgs):
"""
Convert a batch of torch images to numpy image map
:param imgs: (B, C, H, W)
:return: (B, H, W, C)
"""
return imgs.permute(0, 2, 3, 1).cpu().detach().numpy()
def totensor(img, device=torch.device('cpu')):
"""
Do the reverse of tonumpy
"""
if len(img.shape) == 2:
return torch.from_numpy(img).to(device).float()
return torch.from_numpy(img).permute(2, 0, 1).to(device).float()
def totensor_batch(imgs, device=torch.device('cpu')):
"""
Do the reverse of tonumpy_batch
"""
return torch.from_numpy(imgs).permute(0, 3, 1, 2).to(device).float()
def RGB2BGR(*imgs):
return [cv2.cvtColor(x, cv2.COLOR_RGB2BGR) for x in imgs]
def unnormalize(img, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]):
"""
Convert a normalized tensor image to unnormalized form
:param img: (B, C, H, W)
"""
    img = img.detach().cpu().clone()  # clone so the caller's tensor is not modified in place
img *= torch.tensor(std).view(3, 1, 1)
img += torch.tensor(mean).view(3, 1, 1)
return img
def toUint8RGB(img):
return (tonumpy(unnormalize(img)) * 255.).astype(np.uint8)
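A quick round-trip check of the `tonumpy`/`totensor` pair, assuming PyTorch is installed (both helpers are restated so the snippet runs standalone):

```python
import numpy as np
import torch

def tonumpy(img):
    # (C, H, W) tensor -> (H, W, C) numpy; (H, W) passes through
    if img.dim() == 2:
        return img.cpu().detach().numpy()
    return img.permute(1, 2, 0).cpu().detach().numpy()

def totensor(img, device=torch.device('cpu')):
    # inverse of tonumpy
    if img.ndim == 2:
        return torch.from_numpy(img).to(device).float()
    return torch.from_numpy(img).permute(2, 0, 1).to(device).float()

t = torch.rand(3, 4, 5)                # (C, H, W)
n = tonumpy(t)                         # (H, W, C)
print(n.shape)                         # -> (4, 5, 3)
print(torch.allclose(totensor(n), t))  # the round trip is lossless -> True
```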
| 23.946524 | 82 | 0.566994 | 679 | 4,478 | 3.701031 | 0.17673 | 0.020692 | 0.00955 | 0.016713 | 0.288102 | 0.2308 | 0.204536 | 0.18265 | 0.163152 | 0.130521 | 0 | 0.05297 | 0.270657 | 4,478 | 186 | 83 | 24.075269 | 0.716473 | 0.339214 | 0 | 0.071429 | 0 | 0 | 0.002283 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.228571 | false | 0 | 0.042857 | 0.028571 | 0.528571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
04a67ebc35b4f07f80f0f7f2893f69b1c95e51a4 | 721 | py | Python | pvapy/pvaPyProblem.py | mrkraimer/testPvaPy | 7d09095bc76bf0a86d8d664c85757ab8369485c8 | [
"MIT"
] | null | null | null | pvapy/pvaPyProblem.py | mrkraimer/testPvaPy | 7d09095bc76bf0a86d8d664c85757ab8369485c8 | [
"MIT"
] | 1 | 2020-07-18T19:50:51.000Z | 2020-07-19T09:58:16.000Z | pvapy/pvaPyProblem.py | mrkraimer/testPvaPy | 7d09095bc76bf0a86d8d664c85757ab8369485c8 | [
"MIT"
] | 2 | 2020-07-18T18:06:57.000Z | 2020-09-10T06:40:34.000Z | from pvapy import Channel, CA, PvTimeStamp, PvAlarm
print('DBRdouble')
channel = Channel('DBRdouble')
timestamp = PvTimeStamp(10, 100)
alarm = PvAlarm(1,1,"mess")
print(channel.get('value'))
print('here 1')
channel.put(alarm,'record[process=false]field(alarm)')
print('here 2')
print(channel.get('value'))
channel.put(timestamp,'record[process=false]field(timeStamp)')
print(channel.get('value'))
print('here 3')
print('DBRdouble CA')
channel = Channel('DBRdouble',CA)
print(channel.get('value'))
print('here 4')
channel.put(alarm,'record[process=false]field(alarm)')
print('here 5')
print(channel.get('value'))
channel.put(timestamp,'record[process=false]field(timeStamp)')
print(channel.get('value'))
print('here 6')
| 27.730769 | 62 | 0.736477 | 102 | 721 | 5.205882 | 0.27451 | 0.135593 | 0.169492 | 0.225989 | 0.681733 | 0.681733 | 0.572505 | 0.572505 | 0.572505 | 0.572505 | 0 | 0.019288 | 0.065187 | 721 | 25 | 63 | 28.84 | 0.768546 | 0 | 0 | 0.434783 | 0 | 0 | 0.345833 | 0.194444 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.608696 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
04a8f1e9519e4c6ba6d356c5c2d842492bb0fa06 | 1,020 | py | Python | setup.py | SomeHybrid/mineid | b05216541b9c29a15fa6bbbd95531fed7b43f184 | [
"MIT"
] | null | null | null | setup.py | SomeHybrid/mineid | b05216541b9c29a15fa6bbbd95531fed7b43f184 | [
"MIT"
] | null | null | null | setup.py | SomeHybrid/mineid | b05216541b9c29a15fa6bbbd95531fed7b43f184 | [
"MIT"
] | null | null | null | try:
from setuptools import setup
except ImportError:
from distutils.core import setup
import mineid
import pathlib
HERE = pathlib.Path(__file__).parent
README = (HERE / "README.md").read_text()
setup(
name=mineid.__name__,
version=mineid.__version__,
description="A small Python library for getting Minecraft UUIDs.",
long_description=README,
long_description_content_type="text/markdown",
package_dir={"": "mineid"},
url="https://github.com/SomeHybrid/mineid",
classifiers=[
"Framework :: AsyncIO",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Topic :: Internet :: WWW/HTTP"
]
)
| 30.909091 | 70 | 0.640196 | 107 | 1,020 | 5.934579 | 0.607477 | 0.179528 | 0.23622 | 0.204724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015248 | 0.228431 | 1,020 | 32 | 71 | 31.875 | 0.791614 | 0 | 0 | 0 | 0 | 0 | 0.467647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
04bc20e97c7b91fb6c91377e5dcad98a1ee694a4 | 362 | py | Python | client-server-chatroom/server.py | crista/swarch | 336e885e086bde17cded6acc9afd7904f6c749e3 | [
"MIT"
] | null | null | null | client-server-chatroom/server.py | crista/swarch | 336e885e086bde17cded6acc9afd7904f6c749e3 | [
"MIT"
] | null | null | null | client-server-chatroom/server.py | crista/swarch | 336e885e086bde17cded6acc9afd7904f6c749e3 | [
"MIT"
] | null | null | null | from network import Listener, Handler, poll
handlers = {} # map client handler to user name
class MyHandler(Handler):
def on_open(self):
pass
def on_close(self):
pass
def on_msg(self, msg):
        print(msg)
port = 8888
server = Listener(port, MyHandler)
while True:
poll(timeout=0.05) # in seconds
| 15.083333 | 48 | 0.596685 | 47 | 362 | 4.531915 | 0.680851 | 0.070423 | 0.103286 | 0.122066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03252 | 0.320442 | 362 | 23 | 49 | 15.73913 | 0.833333 | 0.116022 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.153846 | 0.076923 | null | null | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
04d8816c058198033200e277b7e97cd45ae91b00 | 7,675 | py | Python | artgens/voice2txt.py | mark-nick-o/AIML_SoundArtsProject | be391f0ce8cde322abbfb49015e632a98c7f00c6 | [
"Apache-2.0"
] | null | null | null | artgens/voice2txt.py | mark-nick-o/AIML_SoundArtsProject | be391f0ce8cde322abbfb49015e632a98c7f00c6 | [
"Apache-2.0"
] | null | null | null | artgens/voice2txt.py | mark-nick-o/AIML_SoundArtsProject | be391f0ce8cde322abbfb49015e632a98c7f00c6 | [
"Apache-2.0"
] | null | null | null |
#
# This python takes either a microphone or sound file as an input
# and tries to decipher the vocal message into text
#
#
import os
import sys
# python -m pip install --upgrade pip
# sudo -H pip3 install SpeechRecognition pydub
import speech_recognition as sr
# sudo apt-get install python-pyaudio python3-pyaudio
# pip3 install pyaudio
#
# if the speech recognition using the google engine fails we revert to use the
# mimi text to speech engine or kaldi (TO BE COMPLETED)
#
# Also needs to cycle every possible (used language) to find out what was said to the robot
# (TO BE COMPLETED) - Maybe best way is to have a version for each language and then call each one
# that way we can try to best guess in an order what language it is based on previous questions
# with last messages being most relevant. (therefore PASS the language and choose the engine accordingly)
#
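The language-cycling idea sketched in the comment above can be made concrete. A minimal, hypothetical helper (`order_languages` is not part of this script) that tries candidate language codes most-recently-heard first:

```python
def order_languages(candidates, history):
    """Return candidate language codes ordered by how recently they were heard."""
    seen = []
    for lang in reversed(history):  # newest utterances first
        if lang in candidates and lang not in seen:
            seen.append(lang)
    # languages never heard yet keep their original priority order
    return seen + [l for l in candidates if l not in seen]

print(order_languages(["en-US", "es-ES", "fr-FR"], ["en-US", "fr-FR"]))
# -> ['fr-FR', 'en-US', 'es-ES']
```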
if __name__ == "__main__":
if (len(sys.argv) - 1) <= 0:
print("please pass the filename for the sound you want to convert to text or --MIC <duration> for a recording from the mircophone")
sys.exit()
    r = sr.Recognizer()  # initialize the recognizer
    text = ''  # holds the last successful transcription (stays empty if every engine fails)
progHomeMimi = '/home/mark/pics/mimi_trans_sound.sh '
recFilePath = '/mnt/c/linuxmirror/'
recFileSave = 'recordedSample.wav'
if (len(sys.argv) - 1) == 2: # passed -MIC <duration>
with sr.Microphone() as source:
r.adjust_for_ambient_noise(source, duration=1)
# audio_data = recognizer.listen(source, timeout=int(sys.argv[2]))
audio_data = r.record(source, duration=int(sys.argv[2])) # read the audio data from the default microphone for the period of the duration
        # try each engine in turn; most require API credentials and will simply
        # raise (and print) an exception here if none are configured
        for engine in ("recognize_google", "recognize_bing", "recognize_google_cloud",
                       "recognize_houndify", "recognize_ibm", "recognize_sphinx",
                       "recognize_wit"):
            try:
                text = getattr(r, engine)(audio_data, language="en-US")  # recognize (convert from speech to text) - english USA
                print("Decoded Text : {}".format(text))
            except Exception as ex:
                print(ex)
recFilePathFile = recFilePath + recFileSave
with open(recFilePathFile, "wb") as f:
            f.write(audio_data.get_wav_data())  # CHECK IF WE NEED TO DOWNSAMPLE here !!!
cmd = progHomeMimi + recFileSave # run the mimi translator on the input file to see if we can get result (use each possible engine)
f = os.popen(cmd)
else:
# ------- read in the requested sound file ---------
fileNam = "/mnt/c/linuxmirror/" + sys.argv[1]
if os.path.isfile(fileNam) == False:
fileNam = fileNam + ".wav"
if os.path.isfile(fileNam) == False:
print("invalid file name or path %s" % fileNam)
sys.exit()
with sr.AudioFile(fileNam) as source: # open the file
# audio_data = recognizer.listen(source)
audio_data = r.record(source) # listen for the data (load audio to memory)
        # try each engine in turn; most require API credentials and will simply
        # raise (and print) an exception here if none are configured
        for engine in ("recognize_google", "recognize_bing", "recognize_google_cloud",
                       "recognize_houndify", "recognize_ibm", "recognize_sphinx",
                       "recognize_wit"):
            try:
                text = getattr(r, engine)(audio_data, language="en-US")  # recognize (convert from speech to text) - english USA
                print("Decoded Text : {}".format(text))
            except Exception as ex:
                print(ex)
cmd = progHomeMimi + fileNam # run the mimi translator on the input file to see if we can get result (use each possible engine)
f = os.popen(cmd)
#text = r.recognize_google(audio_data,language="es-ES") # recognize (convert from speech to text) - spanish
print(text)
| 58.587786 | 194 | 0.506971 | 824 | 7,675 | 4.658981 | 0.231796 | 0.046887 | 0.054702 | 0.101589 | 0.613962 | 0.579578 | 0.557697 | 0.548059 | 0.548059 | 0.548059 | 0 | 0.002431 | 0.410423 | 7,675 | 130 | 195 | 59.038462 | 0.845967 | 0.292638 | 0 | 0.745098 | 0 | 0.009804 | 0.104774 | 0.006502 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.009804 | 0.029412 | null | null | 0.303922 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
04e8d0e9ff239a8ce726752d726dd36f5030b1dc | 1,875 | py | Python | servers/python/coweb/auth/ini.py | opencoweb/coweb | 7b3a87ee9eda735a859447d404ee16edde1c5671 | [
"AFL-2.1"
] | 83 | 2015-01-05T19:02:57.000Z | 2021-11-19T02:48:09.000Z | servers/python/coweb/auth/ini.py | xuelingxiao/coweb | 7b3a87ee9eda735a859447d404ee16edde1c5671 | [
"AFL-2.1"
] | 3 | 2015-12-16T13:49:33.000Z | 2019-06-17T13:38:50.000Z | servers/python/coweb/auth/ini.py | xuelingxiao/coweb | 7b3a87ee9eda735a859447d404ee16edde1c5671 | [
"AFL-2.1"
] | 14 | 2015-04-29T22:36:53.000Z | 2021-11-18T03:24:29.000Z | '''
Copyright (c) The Dojo Foundation 2011. All Rights Reserved.
Copyright (c) IBM Corporation 2008, 2011. All Rights Reserved.
'''
# std lib
import ConfigParser
# tornado
import tornado.web
from base import AuthBase
class IniAuth(AuthBase):
cookieName = 'coweb.auth.ini.username'
def __init__(self, container, iniPath='users.ini'):
super(IniAuth, self).__init__(container)
# compute abs path to ini file
self._iniPath = self._container.get_absolute_path(iniPath)
def requires_login(self):
'''Requires user login.'''
return True
def requires_cookies(self):
'''Uses tornado's secure cookies'.'''
return True
def get_current_user(self, handler):
'''Gets the current username from the secure cookie.'''
return handler.get_secure_cookie(self.cookieName)
def check_credentials(self, handler, username, password):
'''Checks the login credentials against a simple INI file.'''
# @todo: put this on a timer or something; wasteful to do each time
users = ConfigParser.ConfigParser()
users.optionxform = str
users.read(self._iniPath)
try:
pw = users.get('md5', username)
except (ConfigParser.NoOptionError, ConfigParser.NoSectionError):
pass
else:
known = (pw == password)
try:
pw = users.get('plain', username)
except (ConfigParser.NoOptionError, ConfigParser.NoSectionError):
known = False
else:
known = (pw == password)
if known:
handler.set_secure_cookie(self.cookieName, username)
else:
raise tornado.web.HTTPError(403)
def clear_credentials(self, handler):
'''Clears the authentication cookie.'''
handler.clear_cookie(self.cookieName) | 32.327586 | 75 | 0.633067 | 206 | 1,875 | 5.645631 | 0.461165 | 0.028375 | 0.051591 | 0.036114 | 0.11178 | 0.11178 | 0 | 0 | 0 | 0 | 0 | 0.011704 | 0.270933 | 1,875 | 58 | 76 | 32.327586 | 0.839064 | 0.228267 | 0 | 0.305556 | 0 | 0 | 0.028329 | 0.016289 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.166667 | false | 0.111111 | 0.083333 | 0 | 0.388889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
04f6f8530ee92454c592c0aff5b1d9043f871316 | 304 | py | Python | algorithm/codility.com/20160330/longest_0_binary_gap.py | leonard-sxy/algorithm-practice | 4efde003a085b57201f364e7590e454d6b066e6d | [
"MIT"
] | 1 | 2016-03-29T06:16:40.000Z | 2016-03-29T06:16:40.000Z | algorithm/codility.com/20160330/longest_0_binary_gap.py | leonard-sxy/algorithm-practice | 4efde003a085b57201f364e7590e454d6b066e6d | [
"MIT"
] | null | null | null | algorithm/codility.com/20160330/longest_0_binary_gap.py | leonard-sxy/algorithm-practice | 4efde003a085b57201f364e7590e454d6b066e6d | [
"MIT"
] | null | null | null | def solution(N):
str = '{0:b}'.format(N)
counter = prev_counter = 0
for c in str:
if c is '0':
counter += 1
else:
if prev_counter == 0 or counter > prev_counter:
prev_counter = counter
counter = 0
return prev_counter if prev_counter > counter else counter
| 19 | 60 | 0.605263 | 45 | 304 | 3.955556 | 0.4 | 0.370787 | 0.303371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028302 | 0.302632 | 304 | 15 | 61 | 20.266667 | 0.811321 | 0 | 0 | 0 | 0 | 0 | 0.019868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
04f77bc85b8a70ab36278fbebe8b5450e1812373 | 223 | py | Python | m06_classes/misc_types.py | zkamel/learn-python | 78fdd6ceacdfcc8306e2d8306bf8eec8b6702f23 | [
"MIT"
] | 80 | 2020-11-14T19:19:27.000Z | 2022-03-10T17:43:17.000Z | m06_classes/misc_types.py | nerbertb/python-52-weeks | 55add5d75d1aabed4c59d445e1d1b773ede047b0 | [
"MIT"
] | 10 | 2020-11-24T06:19:45.000Z | 2022-02-27T23:53:28.000Z | m06_classes/misc_types.py | nerbertb/python-52-weeks | 55add5d75d1aabed4c59d445e1d1b773ede047b0 | [
"MIT"
] | 58 | 2020-11-13T18:35:22.000Z | 2022-03-28T06:40:08.000Z | class TransportType:
NAPALM = "napalm"
NCCLIENT = "ncclient"
NETMIKO = "netmiko"
class DeviceType:
IOS = "ios"
NXOS = "nxos"
NXOS_SSH = "nxos_ssh"
NEXUS = "nexus"
CISCO_NXOS = "cisco_nxos"
| 17.153846 | 29 | 0.605381 | 24 | 223 | 5.458333 | 0.458333 | 0.122137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.278027 | 223 | 12 | 30 | 18.583333 | 0.813665 | 0 | 0 | 0 | 0 | 0 | 0.2287 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b6b9913db8898e98ae280c9f805bab61c89f01e7 | 242 | py | Python | sols/1207.py | Paul11100/LeetCode | 9896c579dff1812c0c76964db8d60603ee715e35 | [
"MIT"
] | null | null | null | sols/1207.py | Paul11100/LeetCode | 9896c579dff1812c0c76964db8d60603ee715e35 | [
"MIT"
] | null | null | null | sols/1207.py | Paul11100/LeetCode | 9896c579dff1812c0c76964db8d60603ee715e35 | [
"MIT"
] | null | null | null | from collections import Counter
class Solution:
# Counter (Accepted + Top Voted), O(n) time and space
def uniqueOccurrences(self, arr: List[int]) -> bool:
cnt = Counter(arr)
return len(set(cnt.values())) == len(cnt)
| 26.888889 | 57 | 0.644628 | 32 | 242 | 4.875 | 0.8125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.227273 | 242 | 8 | 58 | 30.25 | 0.834225 | 0.210744 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b6bda7f3a6c152d1c774609be82c07ddb95b60bc | 1,425 | py | Python | fastapi-master-api/app/api/models/machine.py | SionAbes/fullstack-porfolio | 6ca74da425a0f6e2d9b65b2aeb8d5452ff1565a9 | [
"MIT"
] | 1 | 2021-12-25T09:19:25.000Z | 2021-12-25T09:19:25.000Z | fastapi-master-api/app/api/models/machine.py | SionAbes/fullstack-porfolio | 6ca74da425a0f6e2d9b65b2aeb8d5452ff1565a9 | [
"MIT"
] | null | null | null | fastapi-master-api/app/api/models/machine.py | SionAbes/fullstack-porfolio | 6ca74da425a0f6e2d9b65b2aeb8d5452ff1565a9 | [
"MIT"
] | null | null | null | # coding: utf-8
from __future__ import annotations
import re # noqa: F401
from datetime import date, datetime # noqa: F401
from typing import Any, Dict, List, Optional # noqa: F401
from pydantic import AnyUrl, BaseModel, EmailStr, validator # noqa: F401
class Machine(BaseModel):
"""NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech).
Do not edit the class manually.
Machine - a model defined in OpenAPI
id: The id of this Machine.
user_id: The user_id of this Machine.
created_at: The created_at of this Machine.
updated_at: The updated_at of this Machine.
unit_installed_at: The unit_installed_at of this Machine [Optional].
oem_name: The oem_name of this Machine.
model: The model of this Machine [Optional].
make: The make of this Machine [Optional].
equipment_id: The equipment_id of this Machine [Optional].
serial_number: The serial_number of this Machine [Optional].
pin: The pin of this Machine [Optional].
"""
id: int
user_id: int
created_at: datetime
updated_at: datetime
unit_installed_at: Optional[datetime] = None
oem_name: str
model: Optional[str] = None
make: Optional[str] = None
equipment_id: Optional[str] = None
serial_number: Optional[str] = None
pin: Optional[str] = None
Machine.update_forward_refs()
| 30.978261 | 96 | 0.691228 | 198 | 1,425 | 4.823232 | 0.323232 | 0.06911 | 0.149738 | 0.131937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011938 | 0.235789 | 1,425 | 45 | 97 | 31.666667 | 0.865014 | 0.544561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.277778 | 0 | 0.944444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
b6d5f7d52c3ae0ba8e0e0bccd6828e7cc95ac77d | 1,330 | py | Python | fluidpythran/for_test_init.py | fluiddyn/fluidpythran | e34e9886680e6b8e365d24a77fcb66b67e554043 | [
"CECILL-B"
] | 1 | 2018-10-31T09:30:15.000Z | 2018-10-31T09:30:15.000Z | fluidpythran/for_test_init.py | fluiddyn/fluidpythran | e34e9886680e6b8e365d24a77fcb66b67e554043 | [
"CECILL-B"
] | null | null | null | fluidpythran/for_test_init.py | fluiddyn/fluidpythran | e34e9886680e6b8e365d24a77fcb66b67e554043 | [
"CECILL-B"
] | null | null | null | import numpy as np
# pythran import numpy as np
from fluidpythran import FluidPythran, boost, include, Array, Union
# pythran def func(int, float)
@boost
def func(a, b):
return a + b
@include
def func_tmp(arg):
return arg ** 2
A = Union[int, Array[int, "1d"]]
@boost
def func2(a: A, b: float):
return a - func_tmp(b)
fp = FluidPythran()
def func1(a, b):
n = 10
if fp.is_transpiled:
result = fp.use_pythranized_block("block0")
else:
# pythran block (
# float a, b;
# int n
# ) -> (result, a)
# blabla
result = 0.0
for _ in range(n):
result += a ** 2 + b ** 3
@boost
class Transmitter:
freq: float
def __init__(self, freq):
self.freq = float(freq)
@boost
def __call__(self, inp: "float[]"):
"""My docstring"""
return inp * np.exp(np.arange(len(inp)) * self.freq * 1j)
@boost
def other_func(self):
return 2 * self.freq
def check_class():
inp = np.ones(2)
freq = 1.0
trans = Transmitter(freq)
def for_check(freq, inp):
return inp * np.exp(np.arange(len(inp)) * freq * 1j)
assert np.allclose(trans(inp), for_check(freq, inp))
assert trans.other_func() == 2 * freq
if __name__ == "__main__":
check_class()
| 16.419753 | 67 | 0.564662 | 186 | 1,330 | 3.887097 | 0.344086 | 0.013831 | 0.035961 | 0.041494 | 0.077455 | 0.077455 | 0.077455 | 0.077455 | 0 | 0 | 0 | 0.019397 | 0.302256 | 1,330 | 80 | 68 | 16.625 | 0.759698 | 0.101504 | 0 | 0.119048 | 0 | 0 | 0.019442 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 1 | 0.214286 | false | 0 | 0.047619 | 0.119048 | 0.452381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b6dc336da62888edd5386714f6a8e59cf1cf1328 | 5,929 | py | Python | receiveSTA/rcv_kset_omit2.py | ilinastoilkovska/syncTA | 9224ef50d5ada4782d0a1d5723531ebe43149584 | [
"Apache-2.0"
] | 2 | 2021-01-05T11:45:48.000Z | 2021-04-06T12:59:38.000Z | receiveSTA/rcv_kset_omit2.py | ilinastoilkovska/syncTA | 9224ef50d5ada4782d0a1d5723531ebe43149584 | [
"Apache-2.0"
] | null | null | null | receiveSTA/rcv_kset_omit2.py | ilinastoilkovska/syncTA | 9224ef50d5ada4782d0a1d5723531ebe43149584 | [
"Apache-2.0"
] | null | null | null | # process local states
local = range(12)
# L states
L = {"x0" : [6], "x1" : [7], "x2" : [8],
"f0" : [9], "f1" : [10], "f2" : [11],
"v0" : [0, 3, 6, 9], "v1" : [1, 4, 7, 10], "v2" : [2, 5, 8, 11],
"corr0" : [0, 6], "corr1" : [1, 7], "corr2" : [2, 8]}
# receive variables
rcv_vars = ["nr0", "nr1", "nr2"]
# initial states
initial = local
# rules
rules = []
rules.append({'idx': 0, 'from': 0, 'to': 0, 'guard': "(and (< nr1 1) (< nr2 1))"})
rules.append({'idx': 1, 'from': 0, 'to': 1, 'guard': "(>= nr1 1)"})
rules.append({'idx': 2, 'from': 0, 'to': 2, 'guard': "(>= nr2 1)"})
rules.append({'idx': 3, 'from': 1, 'to': 1, 'guard': "(and (< nr0 1) (< nr2 1))"})
rules.append({'idx': 4, 'from': 1, 'to': 0, 'guard': "(>= nr0 1)"})
rules.append({'idx': 5, 'from': 1, 'to': 2, 'guard': "(>= nr2 1)"})
rules.append({'idx': 6, 'from': 2, 'to': 2, 'guard': "(and (< nr0 1) (< nr1 1))"})
rules.append({'idx': 7, 'from': 2, 'to': 0, 'guard': "(>= nr0 1)"})
rules.append({'idx': 8, 'from': 2, 'to': 1, 'guard': "(>= nr1 1)"})
rules.append({'idx': 9, 'from': 0, 'to': 6, 'guard': "(and (< nr1 1) (< nr2 1))"})
rules.append({'idx': 10, 'from': 0, 'to': 7, 'guard': "(>= nr1 1)"})
rules.append({'idx': 11, 'from': 0, 'to': 8, 'guard': "(>= nr2 1)"})
rules.append({'idx': 12, 'from': 1, 'to': 7, 'guard': "(and (< nr0 1) (< nr2 1))"})
rules.append({'idx': 13, 'from': 1, 'to': 6, 'guard': "(>= nr0 1)"})
rules.append({'idx': 14, 'from': 1, 'to': 8, 'guard': "(>= nr2 1)"})
rules.append({'idx': 15, 'from': 2, 'to': 8, 'guard': "(and (< nr0 1) (< nr1 1))"})
rules.append({'idx': 16, 'from': 2, 'to': 6, 'guard': "(>= nr0 1)"})
rules.append({'idx': 17, 'from': 2, 'to': 7, 'guard': "(>= nr1 1)"})
rules.append({'idx': 18, 'from': 6, 'to': 0, 'guard': "(and (< nr1 1) (< nr2 1))"})
rules.append({'idx': 19, 'from': 6, 'to': 1, 'guard': "(>= nr1 1)"})
rules.append({'idx': 20, 'from': 6, 'to': 2, 'guard': "(>= nr2 1)"})
rules.append({'idx': 21, 'from': 7, 'to': 1, 'guard': "(and (< nr0 1) (< nr2 1))"})
rules.append({'idx': 22, 'from': 7, 'to': 0, 'guard': "(>= nr0 1)"})
rules.append({'idx': 23, 'from': 7, 'to': 2, 'guard': "(>= nr2 1)"})
rules.append({'idx': 24, 'from': 8, 'to': 2, 'guard': "(and (< nr0 1) (< nr1 1))"})
rules.append({'idx': 25, 'from': 8, 'to': 0, 'guard': "(>= nr0 1)"})
rules.append({'idx': 26, 'from': 8, 'to': 1, 'guard': "(>= nr1 1)"})
# send omission faulty
rules.append({'idx': 27, 'from': 3, 'to': 3, 'guard': "(and (< nr1 1) (< nr2 1))"})
rules.append({'idx': 28, 'from': 3, 'to': 4, 'guard': "(>= nr1 1)"})
rules.append({'idx': 29, 'from': 3, 'to': 5, 'guard': "(>= nr2 1)"})
rules.append({'idx': 30, 'from': 4, 'to': 4, 'guard': "(and (< nr0 1) (< nr2 1))"})
rules.append({'idx': 31, 'from': 4, 'to': 3, 'guard': "(>= nr0 1)"})
rules.append({'idx': 32, 'from': 4, 'to': 5, 'guard': "(>= nr2 1)"})
rules.append({'idx': 33, 'from': 5, 'to': 5, 'guard': "(and (< nr0 1) (< nr1 1))"})
rules.append({'idx': 34, 'from': 5, 'to': 3, 'guard': "(>= nr0 1)"})
rules.append({'idx': 35, 'from': 5, 'to': 4, 'guard': "(>= nr1 1)"})
rules.append({'idx': 36, 'from': 3, 'to': 9, 'guard': "(and (< nr1 1) (< nr2 1))"})
rules.append({'idx': 37, 'from': 3, 'to': 10, 'guard': "(>= nr1 1)"})
rules.append({'idx': 38, 'from': 3, 'to': 11, 'guard': "(>= nr2 1)"})
rules.append({'idx': 39, 'from': 4, 'to': 10, 'guard': "(and (< nr0 1) (< nr2 1))"})
rules.append({'idx': 40, 'from': 4, 'to': 9, 'guard': "(>= nr0 1)"})
rules.append({'idx': 41, 'from': 4, 'to': 11, 'guard': "(>= nr2 1)"})
rules.append({'idx': 42, 'from': 5, 'to': 11, 'guard': "(and (< nr0 1) (< nr1 1))"})
rules.append({'idx': 43, 'from': 5, 'to': 9, 'guard': "(>= nr0 1)"})
rules.append({'idx': 44, 'from': 5, 'to': 10, 'guard': "(>= nr1 1)"})
rules.append({'idx': 45, 'from': 9, 'to': 3, 'guard': "(and (< nr1 1) (< nr2 1))"})
rules.append({'idx': 46, 'from': 9, 'to': 4, 'guard': "(>= nr1 1)"})
rules.append({'idx': 47, 'from': 9, 'to': 5, 'guard': "(>= nr2 1)"})
rules.append({'idx': 48, 'from': 10, 'to': 4, 'guard': "(and (< nr0 1) (< nr2 1))"})
rules.append({'idx': 49, 'from': 10, 'to': 3, 'guard': "(>= nr0 1)"})
rules.append({'idx': 50, 'from': 10, 'to': 5, 'guard': "(>= nr2 1)"})
rules.append({'idx': 51, 'from': 11, 'to': 5, 'guard': "(and (< nr0 1) (< nr1 1))"})
rules.append({'idx': 52, 'from': 11, 'to': 3, 'guard': "(>= nr0 1)"})
rules.append({'idx': 53, 'from': 11, 'to': 4, 'guard': "(>= nr1 1)"})
# parameters, resilience condition
params = ["n", "t", "f"]
active = "n"
broadcast = [6, 7, 8, 9, 10, 11]
rc = ["(> n 0)", "(>= t 0)", "(>= t f)", "(> n t)"]
# faults
faults = "send omission"
faulty = [3, 4, 5, 9, 10, 11]
broadcast_faulty = [9, 10, 11]
max_faulty = "f"
phase = 1
# configuration/transition constraints
constraints = []
constraints.append({'type': 'configuration', 'sum': 'eq', 'object': local, 'result': active})
constraints.append({'type': 'configuration', 'sum': 'eq', 'object': faulty, 'result': max_faulty})
constraints.append({'type': 'configuration', 'sum': 'eq', 'object': broadcast, 'result': 2})
constraints.append({'type': 'transition', 'sum': 'eq', 'object': range(len(rules)), 'result': active})
constraints.append({'type': 'round_config', 'sum': 'le', 'object': broadcast_faulty, 'result': 1})
# receive environment constraints
environment = []
environment.append('(>= nr0 x0)')
environment.append('(<= nr0 (+ x0 f0))')
environment.append('(>= nr1 x1)')
environment.append('(<= nr1 (+ x1 f1))')
environment.append('(>= nr2 x2)')
environment.append('(<= nr2 (+ x2 f2))')
# properties
properties = []
properties.append({'name':'validity0', 'spec':'safety', 'initial':'(= v0 0)', 'qf':'last', 'reachable':'(> corr0 0)'})
properties.append({'name':'validity1', 'spec':'safety', 'initial':'(= v1 0)', 'qf':'last', 'reachable':'(> corr1 0)'})
properties.append({'name':'agreement', 'spec':'safety', 'initial':'true', 'qf':'last', 'reachable':'(and (> corr0 0) (> corr1 0) (> corr2 0))'})
| 47.432 | 144 | 0.512059 | 893 | 5,929 | 3.393057 | 0.141097 | 0.19604 | 0.249505 | 0.257426 | 0.550495 | 0.527723 | 0.527723 | 0.483168 | 0.190099 | 0.130693 | 0 | 0.09243 | 0.153314 | 5,929 | 124 | 145 | 47.814516 | 0.511155 | 0.035419 | 0 | 0 | 0 | 0 | 0.3727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
b6e55ebb8af286313e7dc054c50c7d19cf4d73db | 1,010 | py | Python | Surrogate_MBO/constraints.py | Romit-Maulik/Tutorials-Demos-Practice | a58ddc819f24a16f7059e63d7f201fc2cd23e03a | [
"MIT"
] | 8 | 2020-09-02T14:46:07.000Z | 2021-11-29T15:27:05.000Z | Surrogate_MBO/constraints.py | omersan/Practice | 77eecdc2a202e6b333123cfd92e7db6dc0eea021 | [
"MIT"
] | 18 | 2020-11-13T18:49:33.000Z | 2022-03-12T00:54:43.000Z | Surrogate_MBO/constraints.py | omersan/Practice | 77eecdc2a202e6b333123cfd92e7db6dc0eea021 | [
"MIT"
] | 5 | 2019-09-25T23:57:00.000Z | 2021-04-18T08:15:34.000Z | '''
Define constraints (depends on your problem)
https://stackoverflow.com/questions/42303470/scipy-optimize-inequality-constraint-which-side-of-the-inequality-is-considered
[0.1268 0.467 0.5834 0.2103 -0.1268 -0.5425 -0.5096 0.0581] . The bounds are +/-30% of this.
'''
t_base = [0.1268, 0.467, 0.5834, 0.2103, -0.1268, -0.5425, -0.5096, 0.0581]
t_lower = [0.08876, 0.3269, 0.40838, 0.14721, -0.1648, -0.70525, -0.66248, 0.04067]
t_upper = [0.1648, 0.6071, 0.75842, 0.27339, -0.08876, -0.37975, -0.35672, 0.07553]
def f_factory(i):
def f_lower(t):
return t[i] - t_lower[i]
def f_upper(t):
return -t[i] + t_upper[i]
return f_lower, f_upper
functions = []
for i in range(len(t_base)):
f_lower, f_upper = f_factory(i)
functions.append(f_lower)
functions.append(f_upper)
cons=[]
for ii in range(len(functions)):
# the value of ii is set in each loop
cons.append({'type': 'ineq', 'fun': functions[ii]})
if __name__ == '__main__':
print('Constraints file') | 32.580645 | 124 | 0.656436 | 179 | 1,010 | 3.569832 | 0.424581 | 0.031299 | 0.037559 | 0.028169 | 0.153365 | 0.122066 | 0.122066 | 0.122066 | 0.122066 | 0.122066 | 0 | 0.213523 | 0.165347 | 1,010 | 31 | 125 | 32.580645 | 0.544484 | 0.29604 | 0 | 0 | 0 | 0 | 0.049716 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0 | 0 | 0.105263 | 0.315789 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
b6e79da658647f9a32d8c09cfd8180a52d57e871 | 437 | py | Python | mime/users/api/serializers.py | mrdvince/mime | ff0df1797a821c0fa607a56df2d31fb3408bea31 | [
"MIT"
] | null | null | null | mime/users/api/serializers.py | mrdvince/mime | ff0df1797a821c0fa607a56df2d31fb3408bea31 | [
"MIT"
] | 9 | 2021-12-16T06:39:57.000Z | 2022-02-14T15:58:00.000Z | mime/users/api/serializers.py | mrdvince/mime | ff0df1797a821c0fa607a56df2d31fb3408bea31 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from rest_framework import serializers
from mime.mime.models import Mime
User = get_user_model()
class UserSerializer(serializers.ModelSerializer):
"""
Serializer for the User model (used for the API)
"""
mime = serializers.PrimaryKeyRelatedField(many=True, queryset=Mime.objects.all())
class Meta:
model = User
fields = ["id", "username", "mime"]
| 23 | 85 | 0.709382 | 53 | 437 | 5.754717 | 0.584906 | 0.088525 | 0.078689 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19222 | 437 | 18 | 86 | 24.277778 | 0.864023 | 0.10984 | 0 | 0 | 0 | 0 | 0.037534 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
b6e87a305461c5f4c196baa5e970e25ed6d80a40 | 483 | py | Python | mb_changedetection.py | mouhamedba/mb_changedetection.io | 41d339fb807da3bf27a45dff5d69db6a4f99a9e0 | [
"Apache-2.0"
] | null | null | null | mb_changedetection.py | mouhamedba/mb_changedetection.io | 41d339fb807da3bf27a45dff5d69db6a4f99a9e0 | [
"Apache-2.0"
] | null | null | null | mb_changedetection.py | mouhamedba/mb_changedetection.io | 41d339fb807da3bf27a45dff5d69db6a4f99a9e0 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python3
# Entry point for running from the CLI when not installed via pip; pip will handle the console_scripts entry_points from setup.py.
# It's recommended to use `pip3 install changedetection.io` (or Docker) and start with `changedetection.py` instead,
# since it will be linked to your global path.
# Read more https://github.com/dgtlmoon/changedetection.io/wiki
from mb_changedetectionio import mb_changedetection
if __name__ == '__main__':
mb_changedetection.main()
| 40.25 | 142 | 0.784679 | 73 | 483 | 5.013699 | 0.753425 | 0.092896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004785 | 0.134576 | 483 | 11 | 143 | 43.909091 | 0.870813 | 0.747412 | 0 | 0 | 0 | 0 | 0.068376 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
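The comments in the file above refer to pip wiring up `console_scripts` entry points from setup.py. At its core, each entry-point spec is a `"module:attribute"` string that the generated wrapper script resolves into a callable. A minimal sketch of that resolution step (using `json:dumps` as a stdlib stand-in, since the real spec would name changedetection's own `main`):

```python
# Minimal sketch of resolving a console_scripts-style "module:attr" spec,
# roughly what the wrapper scripts generated by pip do under the hood.
import importlib

def resolve_entry_point(spec):
    """Return the callable named by a 'module.path:attr' spec."""
    module_name, _, attr = spec.partition(":")
    return getattr(importlib.import_module(module_name), attr)

entry = resolve_entry_point("json:dumps")  # stdlib stand-in for a real main()
print(entry({"ok": True}))  # {"ok": true}
```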
b6f654169c3271517b08a11d104c504eb77b6207 | 20,356 | py | Python | parsewkt/parse.py | cleder/parsewkt | 728579d79a37a5ad413abceac5e8349f70380624 | [
"BSD-2-Clause"
] | 12 | 2015-01-26T00:39:42.000Z | 2021-07-01T16:15:17.000Z | parsewkt/parse.py | cleder/parsewkt | 728579d79a37a5ad413abceac5e8349f70380624 | [
"BSD-2-Clause"
] | 1 | 2020-05-22T08:26:09.000Z | 2020-05-24T16:58:53.000Z | parsewkt/parse.py | cleder/parsewkt | 728579d79a37a5ad413abceac5e8349f70380624 | [
"BSD-2-Clause"
] | 3 | 2015-11-22T01:09:34.000Z | 2016-05-26T20:57:54.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# CAVEAT UTILITOR
# This file was automatically generated by Grako.
# https://bitbucket.org/apalala/grako/
# Any changes you make to it will be overwritten the
# next time the file is generated.
#
from __future__ import print_function, division, absolute_import, unicode_literals
from grako.parsing import * # @UnusedWildImport
from grako.exceptions import * # @UnusedWildImport
__version__ = '13.297.14.02.27'
class WktParser(Parser):
def __init__(self, whitespace=None, nameguard=True, **kwargs):
super(WktParser, self).__init__(whitespace=whitespace,
nameguard=nameguard, **kwargs)
@rule_def
def _well_known_text_representation_(self):
with self._choice():
with self._option():
self._point_text_representation_()
with self._option():
self._curve_text_representation_()
with self._option():
self._surface_text_representation_()
with self._option():
self._collection_text_representation_()
self._error('no available options')
@rule_def
def _point_text_representation_(self):
self._token('POINT')
with self._optional():
self._z_m_()
self._point_text_()
@rule_def
def _curve_text_representation_(self):
with self._choice():
with self._option():
self._linestring_text_representation_()
with self._option():
self._circularstring_text_representation_()
with self._option():
self._compoundcurve_text_representation_()
self._error('no available options')
@rule_def
def _linestring_text_representation_(self):
self._token('LINESTRING')
with self._optional():
self._z_m_()
self._linestring_text_body_()
@rule_def
def _circularstring_text_representation_(self):
self._token('CIRCULARSTRING')
with self._optional():
self._z_m_()
self._circularstring_text_()
@rule_def
def _compoundcurve_text_representation_(self):
self._token('COMPOUNDCURVE')
with self._optional():
self._z_m_()
self._compoundcurve_text_()
@rule_def
def _surface_text_representation_(self):
self._curvepolygon_text_representation_()
@rule_def
def _curvepolygon_text_representation_(self):
with self._choice():
with self._option():
self._token('CURVEPOLYGON')
with self._optional():
self._z_m_()
self._curvepolygon_text_body_()
with self._option():
self._polygon_text_representation_()
with self._option():
self._triangle_text_representation_()
self._error('no available options')
@rule_def
def _polygon_text_representation_(self):
self._token('POLYGON')
with self._optional():
self._z_m_()
self._polygon_text_body_()
@rule_def
def _triangle_text_representation_(self):
self._token('TRIANGLE')
with self._optional():
self._z_m_()
self._triangle_text_body_()
@rule_def
def _collection_text_representation_(self):
with self._choice():
with self._option():
self._multipoint_text_representation_()
with self._option():
self._multicurve_text_representation_()
with self._option():
self._multisurface_text_representation_()
with self._option():
self._geometrycollection_text_representation_()
self._error('no available options')
@rule_def
def _multipoint_text_representation_(self):
self._token('MULTIPOINT')
with self._optional():
self._z_m_()
self._multipoint_text_()
@rule_def
def _multicurve_text_representation_(self):
with self._choice():
with self._option():
self._token('MULTICURVE')
with self._optional():
self._z_m_()
self._multicurve_text_()
with self._option():
self._multilinestring_text_representation_()
self._error('no available options')
@rule_def
def _multilinestring_text_representation_(self):
self._token('MULTILINESTRING')
with self._optional():
self._z_m_()
self._multilinestring_text_()
@rule_def
def _multisurface_text_representation_(self):
with self._choice():
with self._option():
self._token('MULTISURFACE')
with self._optional():
self._z_m_()
self._multisurface_text_()
with self._option():
self._multipolygon_text_representation_()
with self._option():
self._polyhedralsurface_text_representation_()
with self._option():
self._tin_text_representation_()
self._error('no available options')
@rule_def
def _multipolygon_text_representation_(self):
self._token('MULTIPOLYGON')
with self._optional():
self._z_m_()
self._multipolygon_text_()
@rule_def
def _polyhedralsurface_text_representation_(self):
self._token('POLYHEDRALSURFACE')
with self._optional():
self._z_m_()
self._polyhedralsurface_text_()
@rule_def
def _tin_text_representation_(self):
self._token('TIN')
with self._optional():
self._z_m_()
self._tin_text_()
@rule_def
def _geometrycollection_text_representation_(self):
self._token('GEOMETRYCOLLECTION')
with self._optional():
self._z_m_()
self._geometrycollection_text_()
@rule_def
def _linestring_text_body_(self):
self._linestring_text_()
@rule_def
def _curvepolygon_text_body_(self):
self._curvepolygon_text_()
@rule_def
def _polygon_text_body_(self):
self._polygon_text_()
@rule_def
def _triangle_text_body_(self):
self._triangle_text_()
@rule_def
def _point_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._point_()
self._right_paren_()
self._error('no available options')
@rule_def
def _point_(self):
self._x_()
self._y_()
with self._optional():
self._z_()
with self._optional():
self._m_()
@rule_def
def _x_(self):
self._number_()
@rule_def
def _y_(self):
self._number_()
@rule_def
def _z_(self):
self._number_()
@rule_def
def _m_(self):
self._number_()
@rule_def
def _linestring_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._point_()
def block0():
self._comma_()
self._point_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _circularstring_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._point_()
def block0():
self._comma_()
self._point_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _compoundcurve_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._single_curve_text_()
def block0():
self._comma_()
self._single_curve_text_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _single_curve_text_(self):
with self._choice():
with self._option():
self._linestring_text_body_()
with self._option():
self._circularstring_text_representation_()
self._error('no available options')
@rule_def
def _curve_text_(self):
with self._choice():
with self._option():
self._linestring_text_body_()
with self._option():
self._circularstring_text_representation_()
with self._option():
self._compoundcurve_text_representation_()
self._error('no available options')
@rule_def
def _ring_text_(self):
with self._choice():
with self._option():
self._linestring_text_body_()
with self._option():
self._circularstring_text_representation_()
with self._option():
self._compoundcurve_text_representation_()
self._error('no available options')
@rule_def
def _surface_text_(self):
with self._choice():
with self._option():
self._token('CURVEPOLYGON')
self._curvepolygon_text_body_()
with self._option():
self._polygon_text_body_()
self._error('no available options')
@rule_def
def _curvepolygon_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._ring_text_()
def block0():
self._comma_()
self._ring_text_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _polygon_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._linestring_text_()
def block0():
self._comma_()
self._linestring_text_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _triangle_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._linestring_text_()
self._right_paren_()
self._error('no available options')
@rule_def
def _multipoint_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._point_text_()
def block0():
self._comma_()
self._point_text_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _multicurve_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._curve_text_()
def block0():
self._comma_()
self._curve_text_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _multilinestring_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._linestring_text_body_()
def block0():
self._comma_()
self._linestring_text_body_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _multisurface_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._surface_text_()
def block0():
self._comma_()
self._surface_text_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _multipolygon_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._polygon_text_body_()
def block0():
self._comma_()
self._polygon_text_body_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _polyhedralsurface_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._polygon_text_body_()
def block0():
self._comma_()
self._polygon_text_body_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _tin_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._triangle_text_body_()
def block0():
self._comma_()
self._triangle_text_body_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _geometrycollection_text_(self):
with self._choice():
with self._option():
self._empty_set_()
with self._option():
self._left_paren_()
self._well_known_text_representation_()
def block0():
self._comma_()
self._well_known_text_representation_()
self._closure(block0)
self._right_paren_()
self._error('no available options')
@rule_def
def _empty_set_(self):
self._token('EMPTY')
@rule_def
def _z_m_(self):
with self._choice():
with self._option():
self._token('ZM')
with self._option():
self._token('Z')
with self._option():
self._token('M')
self._error('expecting one of: Z ZM M')
@rule_def
def _left_paren_(self):
self._token('(')
@rule_def
def _right_paren_(self):
self._token(')')
@rule_def
def _number_(self):
self._pattern(r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?')
@rule_def
def _comma_(self):
self._token(',')
class WktSemanticParser(CheckSemanticsMixin, WktParser):
pass
class WktSemantics(object):
def well_known_text_representation(self, ast):
return ast
def point_text_representation(self, ast):
return ast
def curve_text_representation(self, ast):
return ast
def linestring_text_representation(self, ast):
return ast
def circularstring_text_representation(self, ast):
return ast
def compoundcurve_text_representation(self, ast):
return ast
def surface_text_representation(self, ast):
return ast
def curvepolygon_text_representation(self, ast):
return ast
def polygon_text_representation(self, ast):
return ast
def triangle_text_representation(self, ast):
return ast
def collection_text_representation(self, ast):
return ast
def multipoint_text_representation(self, ast):
return ast
def multicurve_text_representation(self, ast):
return ast
def multilinestring_text_representation(self, ast):
return ast
def multisurface_text_representation(self, ast):
return ast
def multipolygon_text_representation(self, ast):
return ast
def polyhedralsurface_text_representation(self, ast):
return ast
def tin_text_representation(self, ast):
return ast
def geometrycollection_text_representation(self, ast):
return ast
def linestring_text_body(self, ast):
return ast
def curvepolygon_text_body(self, ast):
return ast
def polygon_text_body(self, ast):
return ast
def triangle_text_body(self, ast):
return ast
def point_text(self, ast):
return ast
def point(self, ast):
return ast
def x(self, ast):
return ast
def y(self, ast):
return ast
def z(self, ast):
return ast
def m(self, ast):
return ast
def linestring_text(self, ast):
return ast
def circularstring_text(self, ast):
return ast
def compoundcurve_text(self, ast):
return ast
def single_curve_text(self, ast):
return ast
def curve_text(self, ast):
return ast
def ring_text(self, ast):
return ast
def surface_text(self, ast):
return ast
def curvepolygon_text(self, ast):
return ast
def polygon_text(self, ast):
return ast
def triangle_text(self, ast):
return ast
def multipoint_text(self, ast):
return ast
def multicurve_text(self, ast):
return ast
def multilinestring_text(self, ast):
return ast
def multisurface_text(self, ast):
return ast
def multipolygon_text(self, ast):
return ast
def polyhedralsurface_text(self, ast):
return ast
def tin_text(self, ast):
return ast
def geometrycollection_text(self, ast):
return ast
def empty_set(self, ast):
return ast
def z_m(self, ast):
return ast
def left_paren(self, ast):
return ast
def right_paren(self, ast):
return ast
def number(self, ast):
return ast
def comma(self, ast):
return ast
def main(filename, startrule, trace=False):
import json
with open(filename) as f:
text = f.read()
parser = WktParser(parseinfo=False)
ast = parser.parse(text, startrule, filename=filename, trace=trace)
print('AST:')
print(ast)
print()
print('JSON:')
print(json.dumps(ast, indent=2))
print()
if __name__ == '__main__':
import argparse
import sys
class ListRules(argparse.Action):
def __call__(self, parser, namespace, values, option_string):
print('Rules:')
for r in WktParser.rule_list():
print(r)
print()
sys.exit(0)
parser = argparse.ArgumentParser(description="Simple parser for Wkt.")
parser.add_argument('-l', '--list', action=ListRules, nargs=0,
help="list all rules and exit")
parser.add_argument('-t', '--trace', action='store_true',
help="output trace information")
parser.add_argument('file', metavar="FILE", help="the input file to parse")
parser.add_argument('startrule', metavar="STARTRULE",
help="the start rule for parsing")
args = parser.parse_args()
main(args.file, args.startrule, trace=args.trace)
| 28.311544 | 82 | 0.560523 | 2,041 | 20,356 | 5.101911 | 0.086232 | 0.081437 | 0.084702 | 0.108902 | 0.793815 | 0.675406 | 0.60703 | 0.404975 | 0.401421 | 0.386344 | 0 | 0.003074 | 0.344861 | 20,356 | 718 | 83 | 28.350975 | 0.777744 | 0.013067 | 0 | 0.659359 | 1 | 0 | 0.047911 | 0.001942 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205734 | false | 0.001686 | 0.010118 | 0.089376 | 0.311973 | 0.016863 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
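The generated parser's `number` rule embeds the regex `[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?`. Without Grako installed, a hand-rolled sketch shows how that same pattern tokenizes the coordinates of a simple `POINT`; this is only an illustration, not a replacement for the full WKT grammar the `WktParser` class handles:

```python
# Hand-rolled sketch of the parser's `number` rule applied to a POINT,
# using the same regex the generated WktParser embeds.
import re

NUMBER = re.compile(r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?')

def parse_point(text):
    """Extract the numeric coordinates of a 'POINT ( ... )' string."""
    m = re.match(r'\s*POINT\s*\(\s*(.*?)\s*\)\s*$', text)
    if m is None:
        raise ValueError('not a POINT')
    return [float(t.group(0)) for t in NUMBER.finditer(m.group(1))]

print(parse_point('POINT (30 10)'))       # [30.0, 10.0]
print(parse_point('POINT (-1.5e2 .25)'))  # [-150.0, 0.25]
```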
8e04cb8bc0a08664e80d4f15a028d215c05859b3 | 875 | py | Python | util.py | WRY-learning/k3http | 095a49118d052c43eb0e1dd82b6764eee9fcb158 | [
"MIT"
] | null | null | null | util.py | WRY-learning/k3http | 095a49118d052c43eb0e1dd82b6764eee9fcb158 | [
"MIT"
] | 2 | 2021-11-10T22:16:25.000Z | 2022-03-23T06:59:52.000Z | util.py | WRY-learning/k3http | 095a49118d052c43eb0e1dd82b6764eee9fcb158 | [
"MIT"
] | 1 | 2021-08-18T05:16:59.000Z | 2021-08-18T05:16:59.000Z | #!/usr/bin/env python3
# coding: utf-8
import copy
def headers_add_host(headers, address):
"""
If there is no Host field in the headers, insert the address as a Host into the headers.
    :param headers: a dict (header name -> header value) of HTTP request headers
    :param address: a string representing a domain name
:return: headers after adding
"""
headers.setdefault('Host', address)
return headers
def request_add_host(request, address):
"""
    If the request has no 'headers' field, or request['headers'] has no 'Host' field, add address as the Host.
    :param request: a dict (request key -> request value) representing an HTTP request
    :param address: a string representing a domain name
:return: request after adding
"""
request.setdefault('headers', {})
request['headers'].setdefault('Host', address)
return request
| 27.34375 | 116 | 0.693714 | 123 | 875 | 4.902439 | 0.390244 | 0.069652 | 0.043118 | 0.063018 | 0.26534 | 0.15257 | 0.15257 | 0.15257 | 0.15257 | 0 | 0 | 0.002907 | 0.213714 | 875 | 31 | 117 | 28.225806 | 0.873547 | 0.618286 | 0 | 0 | 0 | 0 | 0.079137 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
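Both helpers above rely on `dict.setdefault`, which never overwrites an existing key, so a caller-supplied `Host` always wins. A quick self-contained demo (minimal copies of the two functions, so the snippet runs on its own):

```python
# Self-contained demo of the setdefault-based helpers: an existing Host
# header is preserved, a missing one is filled in from `address`.
def headers_add_host(headers, address):
    headers.setdefault('Host', address)
    return headers

def request_add_host(request, address):
    request.setdefault('headers', {})
    request['headers'].setdefault('Host', address)
    return request

print(headers_add_host({}, 'example.org'))                   # {'Host': 'example.org'}
print(headers_add_host({'Host': 'a.net'}, 'example.org'))    # {'Host': 'a.net'}
print(request_add_host({'verb': 'GET'}, 'example.org')['headers'])
```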
8e05676982045b8332280fafa7905cf849cb0db4 | 22,892 | py | Python | ansible-devel/test/units/module_utils/facts/hardware/linux_data.py | satishcarya/ansible | ed091e174c26316f621ac16344a95c99f56bdc43 | [
"MIT"
] | null | null | null | ansible-devel/test/units/module_utils/facts/hardware/linux_data.py | satishcarya/ansible | ed091e174c26316f621ac16344a95c99f56bdc43 | [
"MIT"
] | null | null | null | ansible-devel/test/units/module_utils/facts/hardware/linux_data.py | satishcarya/ansible | ed091e174c26316f621ac16344a95c99f56bdc43 | [
"MIT"
] | null | null | null | # This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
LSBLK_OUTPUT = b"""
/dev/sda
/dev/sda1 32caaec3-ef40-4691-a3b6-438c3f9bc1c0
/dev/sda2 66Ojcd-ULtu-1cZa-Tywo-mx0d-RF4O-ysA9jK
/dev/mapper/fedora_dhcp129--186-swap eae6059d-2fbe-4d1c-920d-a80bbeb1ac6d
/dev/mapper/fedora_dhcp129--186-root d34cf5e3-3449-4a6c-8179-a1feb2bca6ce
/dev/mapper/fedora_dhcp129--186-home 2d3e4853-fa69-4ccf-8a6a-77b05ab0a42d
/dev/sr0
/dev/loop0 0f031512-ab15-497d-9abd-3a512b4a9390
/dev/loop1 7c1b0f30-cf34-459f-9a70-2612f82b870a
/dev/loop9 0f031512-ab15-497d-9abd-3a512b4a9390
/dev/loop9 7c1b4444-cf34-459f-9a70-2612f82b870a
/dev/mapper/docker-253:1-1050967-pool
/dev/loop2
/dev/mapper/docker-253:1-1050967-pool
"""
LSBLK_OUTPUT_2 = b"""
/dev/sda
/dev/sda1 32caaec3-ef40-4691-a3b6-438c3f9bc1c0
/dev/sda2 66Ojcd-ULtu-1cZa-Tywo-mx0d-RF4O-ysA9jK
/dev/mapper/fedora_dhcp129--186-swap eae6059d-2fbe-4d1c-920d-a80bbeb1ac6d
/dev/mapper/fedora_dhcp129--186-root d34cf5e3-3449-4a6c-8179-a1feb2bca6ce
/dev/mapper/fedora_dhcp129--186-home 2d3e4853-fa69-4ccf-8a6a-77b05ab0a42d
/dev/mapper/an-example-mapper with a space in the name 84639acb-013f-4d2f-9392-526a572b4373
/dev/sr0
/dev/loop0 0f031512-ab15-497d-9abd-3a512b4a9390
"""
LSBLK_UUIDS = {'/dev/sda1': '66Ojcd-ULtu-1cZa-Tywo-mx0d-RF4O-ysA9jK'}
UDEVADM_UUID = 'N/A'
UDEVADM_OUTPUT = """
UDEV_LOG=3
DEVPATH=/devices/pci0000:00/0000:00:07.0/virtio2/block/vda/vda1
MAJOR=252
MINOR=1
DEVNAME=/dev/vda1
DEVTYPE=partition
SUBSYSTEM=block
MPATH_SBIN_PATH=/sbin
ID_PATH=pci-0000:00:07.0-virtio-pci-virtio2
ID_PART_TABLE_TYPE=dos
ID_FS_UUID=57b1a3e7-9019-4747-9809-7ec52bba9179
ID_FS_UUID_ENC=57b1a3e7-9019-4747-9809-7ec52bba9179
ID_FS_VERSION=1.0
ID_FS_TYPE=ext4
ID_FS_USAGE=filesystem
LVM_SBIN_PATH=/sbin
DEVLINKS=/dev/block/252:1 /dev/disk/by-path/pci-0000:00:07.0-virtio-pci-virtio2-part1 /dev/disk/by-uuid/57b1a3e7-9019-4747-9809-7ec52bba9179
"""
MTAB = """
sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=8044400k,nr_inodes=2011100,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,seclabel,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,seclabel,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/fedora_dhcp129--186-root / ext4 rw,seclabel,relatime,data=ordered 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct 0 0
debugfs /sys/kernel/debug debugfs rw,seclabel,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime 0 0
tmpfs /tmp tmpfs rw,seclabel 0 0
mqueue /dev/mqueue mqueue rw,seclabel,relatime 0 0
/dev/loop0 /var/lib/machines btrfs rw,seclabel,relatime,space_cache,subvolid=5,subvol=/ 0 0
/dev/sda1 /boot ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/mapper/fedora_dhcp129--186-home /home ext4 rw,seclabel,relatime,data=ordered 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=1611044k,mode=700,uid=1000,gid=1000 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
grimlock.g.a: /home/adrian/sshfs-grimlock fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
grimlock.g.a:test_path/path_with'single_quotes /home/adrian/sshfs-grimlock-single-quote fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
grimlock.g.a:path_with'single_quotes /home/adrian/sshfs-grimlock-single-quote-2 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
grimlock.g.a:/mnt/data/foto's /home/adrian/fotos fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
"""
MTAB_ENTRIES = [
[
'sysfs',
'/sys',
'sysfs',
'rw,seclabel,nosuid,nodev,noexec,relatime',
'0',
'0'
],
['proc', '/proc', 'proc', 'rw,nosuid,nodev,noexec,relatime', '0', '0'],
[
'devtmpfs',
'/dev',
'devtmpfs',
'rw,seclabel,nosuid,size=8044400k,nr_inodes=2011100,mode=755',
'0',
'0'
],
[
'securityfs',
'/sys/kernel/security',
'securityfs',
'rw,nosuid,nodev,noexec,relatime',
'0',
'0'
],
['tmpfs', '/dev/shm', 'tmpfs', 'rw,seclabel,nosuid,nodev', '0', '0'],
[
'devpts',
'/dev/pts',
'devpts',
'rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000',
'0',
'0'
],
['tmpfs', '/run', 'tmpfs', 'rw,seclabel,nosuid,nodev,mode=755', '0', '0'],
[
'tmpfs',
'/sys/fs/cgroup',
'tmpfs',
'ro,seclabel,nosuid,nodev,noexec,mode=755',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/systemd',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd',
'0',
'0'
],
[
'pstore',
'/sys/fs/pstore',
'pstore',
'rw,seclabel,nosuid,nodev,noexec,relatime',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/devices',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,devices',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/freezer',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,freezer',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/memory',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,memory',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/pids',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,pids',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/blkio',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,blkio',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/cpuset',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,cpuset',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/cpu,cpuacct',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,cpu,cpuacct',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/hugetlb',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,hugetlb',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/perf_event',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,perf_event',
'0',
'0'
],
[
'cgroup',
'/sys/fs/cgroup/net_cls,net_prio',
'cgroup',
'rw,nosuid,nodev,noexec,relatime,net_cls,net_prio',
'0',
'0'
],
['configfs', '/sys/kernel/config', 'configfs', 'rw,relatime', '0', '0'],
[
'/dev/mapper/fedora_dhcp129--186-root',
'/',
'ext4',
'rw,seclabel,relatime,data=ordered',
'0',
'0'
],
['selinuxfs', '/sys/fs/selinux', 'selinuxfs', 'rw,relatime', '0', '0'],
[
'systemd-1',
'/proc/sys/fs/binfmt_misc',
'autofs',
'rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct',
'0',
'0'
],
['debugfs', '/sys/kernel/debug', 'debugfs', 'rw,seclabel,relatime', '0', '0'],
[
'hugetlbfs',
'/dev/hugepages',
'hugetlbfs',
'rw,seclabel,relatime',
'0',
'0'
],
['tmpfs', '/tmp', 'tmpfs', 'rw,seclabel', '0', '0'],
['mqueue', '/dev/mqueue', 'mqueue', 'rw,seclabel,relatime', '0', '0'],
[
'/dev/loop0',
'/var/lib/machines',
'btrfs',
'rw,seclabel,relatime,space_cache,subvolid=5,subvol=/',
'0',
'0'
],
['/dev/sda1', '/boot', 'ext4', 'rw,seclabel,relatime,data=ordered', '0', '0'],
# A 'none' fstype
['/dev/sdz3', '/not/a/real/device', 'none', 'rw,seclabel,relatime,data=ordered', '0', '0'],
    # let's assume this is a bind mount
['/dev/sdz4', '/not/a/real/bind_mount', 'ext4', 'rw,seclabel,relatime,data=ordered', '0', '0'],
[
'/dev/mapper/fedora_dhcp129--186-home',
'/home',
'ext4',
'rw,seclabel,relatime,data=ordered',
'0',
'0'
],
[
'tmpfs',
'/run/user/1000',
'tmpfs',
'rw,seclabel,nosuid,nodev,relatime,size=1611044k,mode=700,uid=1000,gid=1000',
'0',
'0'
],
[
'gvfsd-fuse',
'/run/user/1000/gvfs',
'fuse.gvfsd-fuse',
'rw,nosuid,nodev,relatime,user_id=1000,group_id=1000',
'0',
'0'
],
['fusectl', '/sys/fs/fuse/connections', 'fusectl', 'rw,relatime', '0', '0']]
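Each row of `MTAB_ENTRIES` is simply a line of the raw `MTAB` text split on whitespace into the six classic mount fields (device, mountpoint, fstype, options, dump, passno). A one-line sketch of that relationship; note that mount paths containing literal spaces (like the sshfs lines above) would not split this cleanly:

```python
# Sketch of how MTAB_ENTRIES rows relate to raw MTAB text: each mount
# line splits on whitespace into six fields
# (device, mountpoint, fstype, options, dump, passno).
sample = "tmpfs /tmp tmpfs rw,seclabel 0 0"
fields = sample.split()
print(fields)  # ['tmpfs', '/tmp', 'tmpfs', 'rw,seclabel', '0', '0']
```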
STATVFS_INFO = {'/': {'block_available': 10192323,
'block_size': 4096,
'block_total': 12868728,
'block_used': 2676405,
'inode_available': 3061699,
'inode_total': 3276800,
'inode_used': 215101,
'size_available': 41747755008,
'size_total': 52710309888},
'/not/a/real/bind_mount': {},
'/home': {'block_available': 1001578731,
'block_size': 4096,
'block_total': 105871006,
'block_used': 5713133,
'inode_available': 26860880,
'inode_total': 26902528,
'inode_used': 41648,
'size_available': 410246647808,
'size_total': 433647640576},
'/var/lib/machines': {'block_available': 10192316,
'block_size': 4096,
'block_total': 12868728,
'block_used': 2676412,
'inode_available': 3061699,
'inode_total': 3276800,
'inode_used': 215101,
'size_available': 41747726336,
'size_total': 52710309888},
'/boot': {'block_available': 187585,
'block_size': 4096,
'block_total': 249830,
'block_used': 62245,
'inode_available': 65096,
'inode_total': 65536,
'inode_used': 440,
'size_available': 768348160,
'size_total': 1023303680}
}
# ['/dev/sdz4', '/not/a/real/bind_mount', 'ext4', 'rw,seclabel,relatime,data=ordered', '0', '0'],
BIND_MOUNTS = ['/not/a/real/bind_mount']
CPU_INFO_TEST_SCENARIOS = [
{
'architecture': 'armv61',
'nproc_out': 1,
'sched_getaffinity': set([0]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/armv6-rev7-1cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': ['0', 'ARMv6-compatible processor rev 7 (v6l)'],
'processor_cores': 1,
'processor_count': 1,
'processor_nproc': 1,
'processor_threads_per_core': 1,
'processor_vcpus': 1},
},
{
'architecture': 'armv71',
'nproc_out': 4,
'sched_getaffinity': set([0, 1, 2, 3]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/armv7-rev4-4cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': [
'0', 'ARMv7 Processor rev 4 (v7l)',
'1', 'ARMv7 Processor rev 4 (v7l)',
'2', 'ARMv7 Processor rev 4 (v7l)',
'3', 'ARMv7 Processor rev 4 (v7l)',
],
'processor_cores': 1,
'processor_count': 4,
'processor_nproc': 4,
'processor_threads_per_core': 1,
'processor_vcpus': 4},
},
{
'architecture': 'aarch64',
'nproc_out': 4,
'sched_getaffinity': set([0, 1, 2, 3]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/aarch64-4cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': [
'0', 'AArch64 Processor rev 4 (aarch64)',
'1', 'AArch64 Processor rev 4 (aarch64)',
'2', 'AArch64 Processor rev 4 (aarch64)',
'3', 'AArch64 Processor rev 4 (aarch64)',
],
'processor_cores': 1,
'processor_count': 4,
'processor_nproc': 4,
'processor_threads_per_core': 1,
'processor_vcpus': 4},
},
{
'architecture': 'x86_64',
'nproc_out': 4,
'sched_getaffinity': set([0, 1, 2, 3]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/x86_64-4cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': [
'0', 'AuthenticAMD', 'Dual-Core AMD Opteron(tm) Processor 2216',
'1', 'AuthenticAMD', 'Dual-Core AMD Opteron(tm) Processor 2216',
'2', 'AuthenticAMD', 'Dual-Core AMD Opteron(tm) Processor 2216',
'3', 'AuthenticAMD', 'Dual-Core AMD Opteron(tm) Processor 2216',
],
'processor_cores': 2,
'processor_count': 2,
'processor_nproc': 4,
'processor_threads_per_core': 1,
'processor_vcpus': 4},
},
{
'architecture': 'x86_64',
'nproc_out': 4,
'sched_getaffinity': set([0, 1, 2, 3]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/x86_64-8cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': [
'0', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'1', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'2', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'3', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'4', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'5', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'6', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
'7', 'GenuineIntel', 'Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz',
],
'processor_cores': 4,
'processor_count': 1,
'processor_nproc': 4,
'processor_threads_per_core': 2,
'processor_vcpus': 8},
},
{
'architecture': 'arm64',
'nproc_out': 4,
'sched_getaffinity': set([0, 1, 2, 3]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/arm64-4cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': ['0', '1', '2', '3'],
'processor_cores': 1,
'processor_count': 4,
'processor_nproc': 4,
'processor_threads_per_core': 1,
'processor_vcpus': 4},
},
{
'architecture': 'armv71',
'nproc_out': 8,
'sched_getaffinity': set([0, 1, 2, 3, 4, 5, 6, 7]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/armv7-rev3-8cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': [
'0', 'ARMv7 Processor rev 3 (v7l)',
'1', 'ARMv7 Processor rev 3 (v7l)',
'2', 'ARMv7 Processor rev 3 (v7l)',
'3', 'ARMv7 Processor rev 3 (v7l)',
'4', 'ARMv7 Processor rev 3 (v7l)',
'5', 'ARMv7 Processor rev 3 (v7l)',
'6', 'ARMv7 Processor rev 3 (v7l)',
'7', 'ARMv7 Processor rev 3 (v7l)',
],
'processor_cores': 1,
'processor_count': 8,
'processor_nproc': 8,
'processor_threads_per_core': 1,
'processor_vcpus': 8},
},
{
'architecture': 'x86_64',
'nproc_out': 2,
'sched_getaffinity': set([0, 1]),
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/x86_64-2cpu-cpuinfo')).readlines(),
'expected_result': {
'processor': [
'0', 'GenuineIntel', 'Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz',
'1', 'GenuineIntel', 'Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz',
],
'processor_cores': 1,
'processor_count': 2,
'processor_nproc': 2,
'processor_threads_per_core': 1,
'processor_vcpus': 2},
},
{
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/ppc64-power7-rhel7-8cpu-cpuinfo')).readlines(),
'architecture': 'ppc64',
'nproc_out': 8,
'sched_getaffinity': set([0, 1, 2, 3, 4, 5, 6, 7]),
'expected_result': {
'processor': [
'0', 'POWER7 (architected), altivec supported',
'1', 'POWER7 (architected), altivec supported',
'2', 'POWER7 (architected), altivec supported',
'3', 'POWER7 (architected), altivec supported',
'4', 'POWER7 (architected), altivec supported',
'5', 'POWER7 (architected), altivec supported',
'6', 'POWER7 (architected), altivec supported',
'7', 'POWER7 (architected), altivec supported'
],
'processor_cores': 1,
'processor_count': 8,
'processor_nproc': 8,
'processor_threads_per_core': 1,
'processor_vcpus': 8
},
},
{
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/ppc64le-power8-24cpu-cpuinfo')).readlines(),
'architecture': 'ppc64le',
'nproc_out': 24,
'sched_getaffinity': set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]),
'expected_result': {
'processor': [
'0', 'POWER8 (architected), altivec supported',
'1', 'POWER8 (architected), altivec supported',
'2', 'POWER8 (architected), altivec supported',
'3', 'POWER8 (architected), altivec supported',
'4', 'POWER8 (architected), altivec supported',
'5', 'POWER8 (architected), altivec supported',
'6', 'POWER8 (architected), altivec supported',
'7', 'POWER8 (architected), altivec supported',
'8', 'POWER8 (architected), altivec supported',
'9', 'POWER8 (architected), altivec supported',
'10', 'POWER8 (architected), altivec supported',
'11', 'POWER8 (architected), altivec supported',
'12', 'POWER8 (architected), altivec supported',
'13', 'POWER8 (architected), altivec supported',
'14', 'POWER8 (architected), altivec supported',
'15', 'POWER8 (architected), altivec supported',
'16', 'POWER8 (architected), altivec supported',
'17', 'POWER8 (architected), altivec supported',
'18', 'POWER8 (architected), altivec supported',
'19', 'POWER8 (architected), altivec supported',
'20', 'POWER8 (architected), altivec supported',
'21', 'POWER8 (architected), altivec supported',
'22', 'POWER8 (architected), altivec supported',
'23', 'POWER8 (architected), altivec supported',
],
'processor_cores': 1,
'processor_count': 24,
'processor_nproc': 24,
'processor_threads_per_core': 1,
'processor_vcpus': 24
},
},
{
'cpuinfo': open(os.path.join(os.path.dirname(__file__), '../fixtures/cpuinfo/sparc-t5-debian-ldom-24vcpu')).readlines(),
'architecture': 'sparc64',
'nproc_out': 24,
'sched_getaffinity': set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]),
'expected_result': {
'processor': [
'UltraSparc T5 (Niagara5)',
],
'processor_cores': 1,
'processor_count': 24,
'processor_nproc': 24,
'processor_threads_per_core': 1,
'processor_vcpus': 24
},
},
]
| 39.064846 | 154 | 0.554211 | 2,643 | 22,892 | 4.690882 | 0.167234 | 0.012099 | 0.043878 | 0.060494 | 0.757622 | 0.70463 | 0.690111 | 0.653573 | 0.628246 | 0.600258 | 0 | 0.101441 | 0.290757 | 22,892 | 585 | 155 | 39.131624 | 0.66217 | 0.033636 | 0 | 0.452252 | 0 | 0.057658 | 0.58451 | 0.258867 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.003604 | 0 | 0.003604 | 0.001802 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
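The `CPU_INFO_TEST_SCENARIOS` table above pairs raw `/proc/cpuinfo` fixtures with the facts a collector is expected to derive. A hedged sketch of how such a scenario table is typically consumed in a test — `collect_facts` is a stand-in callable here, not the project's real fact collector:

```python
def run_scenarios(scenarios, collect_facts):
    """Compare collected facts against each scenario's expected_result.

    Returns a list of (architecture, key, expected, actual) mismatches;
    an empty list means every scenario passed.
    """
    failures = []
    for scenario in scenarios:
        actual = collect_facts(scenario)
        for key, expected in scenario['expected_result'].items():
            if actual.get(key) != expected:
                failures.append(
                    (scenario.get('architecture'), key, expected, actual.get(key)))
    return failures

# Tiny illustrative scenario with a fake collector that echoes the expectations.
demo = [{'architecture': 'x86_64',
         'expected_result': {'processor_count': 2, 'processor_vcpus': 4}}]
print(run_scenarios(demo, lambda s: dict(s['expected_result'])))  # → []
```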
8e0e7025679823e2bffe9b97772e691e6fdbe9e0 | 1,065 | py | Python | data_pipeline/sql/statement/ddl_statement.py | albertteoh/data_pipeline | a99f1c7412375b3e9f4115108fcdde517b2e71a6 | [
"Apache-2.0"
] | null | null | null | data_pipeline/sql/statement/ddl_statement.py | albertteoh/data_pipeline | a99f1c7412375b3e9f4115108fcdde517b2e71a6 | [
"Apache-2.0"
] | null | null | null | data_pipeline/sql/statement/ddl_statement.py | albertteoh/data_pipeline | a99f1c7412375b3e9f4115108fcdde517b2e71a6 | [
"Apache-2.0"
] | null | null | null | ###############################################################################
# Module: ddl_statement
# Purpose: Parent class for DDL (Data Definition Language) statements
#
# Notes:
#
###############################################################################
import data_pipeline.constants.const as const
from abc import ABCMeta, abstractmethod
from .base_statement import BaseStatement
class DdlStatement(BaseStatement):
"""Contains data necessary for producing a valid DDL statement"""
__metaclass__ = ABCMeta
def __init__(self, table_name):
super(DdlStatement, self).__init__(table_name)
self._entries = []
@property
def entries(self):
return self._entries
@abstractmethod
def add_entry(self, **kwargs):
pass
def _build_field_params(self, params):
if params:
return "({})".format(const.COMMASPACE.join(params))
return const.EMPTY_STRING
def _build_field_string(self, value):
return " {}".format(value if value else const.EMPTY_STRING)
| 27.307692 | 79 | 0.585915 | 104 | 1,065 | 5.730769 | 0.519231 | 0.040268 | 0.043624 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189671 | 1,065 | 38 | 80 | 28.026316 | 0.690614 | 0.152113 | 0 | 0 | 0 | 0 | 0.009524 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0.05 | 0.15 | 0.1 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8e1ab0021a7c0f128fe23cec8868353c85222900 | 1,052 | py | Python | tests/__init__.py | Kua-Fu/rally | 7c58ef6f81f618fbc142dfa58b0ed00a5b05fbae | [
"Apache-2.0"
] | 1,577 | 2016-04-19T12:38:58.000Z | 2022-03-31T07:18:25.000Z | tests/__init__.py | Kua-Fu/rally | 7c58ef6f81f618fbc142dfa58b0ed00a5b05fbae | [
"Apache-2.0"
] | 1,079 | 2016-04-19T12:09:16.000Z | 2022-03-31T05:38:50.000Z | tests/__init__.py | Kua-Fu/rally | 7c58ef6f81f618fbc142dfa58b0ed00a5b05fbae | [
"Apache-2.0"
] | 300 | 2016-04-19T18:27:12.000Z | 2022-03-23T07:54:16.000Z | # Licensed to Elasticsearch B.V. under one or more contributor
# license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright
# ownership. Elasticsearch B.V. licenses this file to you under
# the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import asyncio
def run_async(t):
"""
A wrapper that ensures that a test is run in an asyncio context.
:param t: The test case to wrap.
"""
def async_wrapper(*args, **kwargs):
asyncio.run(t(*args, **kwargs), debug=True)
return async_wrapper
| 32.875 | 68 | 0.737643 | 160 | 1,052 | 4.83125 | 0.575 | 0.07762 | 0.03881 | 0.041397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004689 | 0.189164 | 1,052 | 31 | 69 | 33.935484 | 0.901524 | 0.79943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8e25761876fd8e4aaf1e3c9049789e90709486aa | 1,048 | py | Python | Quizzes/Quiz3.py | MatthewDShen/ComputingInCivil | 681bc684fa4bc3413a9e8e26ad784c2b5e7a8d4a | [
"MIT"
] | null | null | null | Quizzes/Quiz3.py | MatthewDShen/ComputingInCivil | 681bc684fa4bc3413a9e8e26ad784c2b5e7a8d4a | [
"MIT"
] | null | null | null | Quizzes/Quiz3.py | MatthewDShen/ComputingInCivil | 681bc684fa4bc3413a9e8e26ad784c2b5e7a8d4a | [
"MIT"
] | null | null | null | import scipy
import matplotlib.pyplot as plt
import numpy as np
x = [
0.001, 0.019, 0.039, 0.058, 0.080, 0.098, 0.119, 0.139,
0.159, 0.180, 0.198, 0.249, 0.298, 0.349, 0.398, 0.419,
0.439, 0.460, 0.479, 0.499, 0.519, 0.540, 0.558, 0.578,
0.598, 0.649, 0.698, 0.749, 0.798, 0.819, 0.839, 0.859,
0.879, 0.900, 0.920, 0.939, 0.958, 0.980, 0.998
]
y = [
0.056, 0.077, 0.076, 0.078, 0.088, 0.078, 0.105, 0.101,
0.107, 0.111, 0.119, 0.120, 0.155, 0.195, 0.223, 0.276,
0.293, 0.304, 0.325, 0.349, 0.370, 0.387, 0.390, 0.386,
0.408, 0.458, 0.449, 0.467, 0.456, 0.447, 0.436, 0.443,
0.444, 0.423, 0.429, 0.428, 0.445, 0.416, 0.400
]
x_axis = np.arange(min(x), max(x) + 0.1, 0.1)
fig, ax = plt.subplots()
ax.scatter(x, y,)
for degree in range(1,5):
poly_coefficient, residual, _, _, _ = np.polyfit(x, y, degree, full=True)
poly_function = np.poly1d(poly_coefficient)
ax.plot(x_axis, poly_function(x_axis), label=f'deg: {degree}, res: {residual}')
print(residual)
ax.grid(ls='-')
plt.show()
| 26.2 | 83 | 0.583969 | 232 | 1,048 | 2.594828 | 0.50431 | 0.024917 | 0.016611 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.377515 | 0.193702 | 1,048 | 39 | 84 | 26.871795 | 0.334911 | 0 | 0 | 0 | 0 | 0 | 0.02958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e29b5bc086a0b91e42ca993c138b28d7a9a26fe | 242 | py | Python | LeetCode/easy - Hash Table/290. Word Pattern/.ipynb_checkpoints/solution-checkpoint.py | vincent507cpu/Comprehensive-Algorithm-Solution | 04e01e49622457f09af2e1133954f043c0c92cb9 | [
"MIT"
] | 4 | 2020-06-26T00:45:53.000Z | 2021-04-19T12:23:32.000Z | LeetCode/easy - Hash Table/290. Word Pattern/solution.py | vincent507cpu/LeetCode-Comprehensive-Solution | 04e01e49622457f09af2e1133954f043c0c92cb9 | [
"MIT"
] | null | null | null | LeetCode/easy - Hash Table/290. Word Pattern/solution.py | vincent507cpu/LeetCode-Comprehensive-Solution | 04e01e49622457f09af2e1133954f043c0c92cb9 | [
"MIT"
] | null | null | null | class Solution:
    def wordPattern(self, pattern: str, s: str) -> bool:
        # Bijection check: pattern letters and words must pair off one-to-one.
        lst = s.split()  # parameter renamed from `str` to avoid shadowing the built-in
        combo = zip(pattern, lst)
return len(set(pattern)) == len(set(lst)) == len(set(combo)) and len(pattern) == len(lst) | 34.571429 | 97 | 0.570248 | 32 | 242 | 4.3125 | 0.5 | 0.130435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.264463 | 242 | 7 | 97 | 34.571429 | 0.775281 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
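The solution above relies on a set-cardinality trick: pattern letters and words form a bijection exactly when the counts of distinct letters, distinct words, and distinct (letter, word) pairs all agree, and the sequences have equal length. A standalone version with sample inputs (`word_pattern` is a renamed sketch, not LeetCode's exact signature):

```python
def word_pattern(pattern, s):
    words = s.split()
    return (len(pattern) == len(words)
            and len(set(pattern)) == len(set(words)) == len(set(zip(pattern, words))))


print(word_pattern("abba", "dog cat cat dog"))   # → True  (a↔dog, b↔cat)
print(word_pattern("abba", "dog cat cat fish"))  # → False (a maps to two words)
print(word_pattern("abba", "dog dog dog dog"))   # → False (two letters, one word)
```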
8e329e4a0e6140f052d060368eaa9d899b41b1a3 | 679 | py | Python | app.py | ialjumah/python-helloworld | 3bf5d450617e5ab79f00fe25e6906b662f48f418 | [
"MIT"
] | null | null | null | app.py | ialjumah/python-helloworld | 3bf5d450617e5ab79f00fe25e6906b662f48f418 | [
"MIT"
] | null | null | null | app.py | ialjumah/python-helloworld | 3bf5d450617e5ab79f00fe25e6906b662f48f418 | [
"MIT"
] | null | null | null | Volume in drive C has no label.
Volume Serial Number is 30AD-FC05
Directory of C:\Users\HP\Documents\nd064_course_1-main\solutions\python-helloworld
13/07/2021 14:33 <DIR> .
13/07/2021 14:33 <DIR> ..
13/07/2021 14:34 0 app.log
13/07/2021 14:34 0 app.py
10/07/2021 23:21 175 Dockerfile
13/07/2021 14:33 159 listing
11/07/2021 13:34 485 requirements.txt
10/07/2021 23:21 94 test_with_pytest.py
11/07/2021 00:22 <DIR> __pycache__
6 File(s) 913 bytes
3 Dir(s) 42,745,831,424 bytes free
| 39.941176 | 84 | 0.536082 | 107 | 679 | 3.327103 | 0.598131 | 0.151685 | 0.11236 | 0.140449 | 0.275281 | 0.174157 | 0.174157 | 0.11236 | 0.11236 | 0.11236 | 0 | 0.343602 | 0.378498 | 679 | 16 | 85 | 42.4375 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e34ebaa15e43dca37d0fd1c2a7556fa0314b835 | 1,185 | py | Python | test/test_stack_with_max_value.py | kisliakovsky/structures | 19969470a7e9b150b077082cc8ca0c2fc9be279e | [
"MIT"
] | null | null | null | test/test_stack_with_max_value.py | kisliakovsky/structures | 19969470a7e9b150b077082cc8ca0c2fc9be279e | [
"MIT"
] | null | null | null | test/test_stack_with_max_value.py | kisliakovsky/structures | 19969470a7e9b150b077082cc8ca0c2fc9be279e | [
"MIT"
] | null | null | null | from unittest import TestCase
from src.stack import StackWithMaxValue
class TestStackWithMaxValue(TestCase):
def test_push(self):
stack = StackWithMaxValue()
stack.push(1)
stack.push(2)
stack.push(3)
self.assertEqual([1, 2, 3], stack.as_list())
def test_pop(self):
stack = StackWithMaxValue()
stack.push(1)
self.assertEqual(1, stack.pop())
with self.assertRaises(IndexError):
stack.pop()
self.assertEqual([], stack.as_list())
def test_peek(self):
stack = StackWithMaxValue()
stack.push(1)
self.assertEqual(1, stack.peek())
self.assertEqual([1], stack.as_list())
stack.pop()
with self.assertRaises(IndexError):
stack.peek()
def test_is_empty(self):
stack = StackWithMaxValue()
self.assertTrue(stack.is_empty())
stack.push(1)
self.assertFalse(stack.is_empty())
def test_maximum(self):
stack = StackWithMaxValue()
stack.push(1)
stack.push(2)
self.assertEqual(2, stack.maximum())
stack.pop()
self.assertEqual(1, stack.maximum())
| 26.333333 | 52 | 0.598312 | 132 | 1,185 | 5.287879 | 0.219697 | 0.103152 | 0.186246 | 0.17765 | 0.462751 | 0.411175 | 0.411175 | 0.295129 | 0.295129 | 0.163324 | 0 | 0.018692 | 0.277637 | 1,185 | 44 | 53 | 26.931818 | 0.796729 | 0 | 0 | 0.472222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.305556 | 1 | 0.138889 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
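The implementation under test (`src.stack.StackWithMaxValue`) is not shown in this chunk. A minimal sketch that would satisfy these tests — an auxiliary stack of running maxima keeps `maximum()` O(1); this is an assumption about the design, not the project's actual code:

```python
class StackWithMaxValue:
    def __init__(self):
        self._items = []
        self._maxes = []  # _maxes[-1] is always the max of _items

    def push(self, value):
        self._items.append(value)
        self._maxes.append(value if not self._maxes else max(value, self._maxes[-1]))

    def pop(self):
        self._maxes.pop()       # raises IndexError when empty, as the tests expect
        return self._items.pop()

    def peek(self):
        return self._items[-1]  # IndexError on empty, matching test_peek

    def is_empty(self):
        return not self._items

    def maximum(self):
        return self._maxes[-1]

    def as_list(self):
        return list(self._items)


s = StackWithMaxValue()
for v in (1, 3, 2):
    s.push(v)
print(s.maximum())  # → 3
s.pop()
s.pop()
print(s.maximum())  # → 1
```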
8e3a75b2271028360ac22ac73e19608d706d63fc | 7,031 | py | Python | notebooks/presentation/styling.py | flatironinstitute/binary_classification_metrics | 5cceacfa815265849d84ea6eb4d6e084e35f3f07 | [
"BSD-2-Clause"
] | 1 | 2021-09-23T01:10:58.000Z | 2021-09-23T01:10:58.000Z | notebooks/presentation/styling.py | flatironinstitute/binary_classification_metrics | 5cceacfa815265849d84ea6eb4d6e084e35f3f07 | [
"BSD-2-Clause"
] | null | null | null | notebooks/presentation/styling.py | flatironinstitute/binary_classification_metrics | 5cceacfa815265849d84ea6eb4d6e084e35f3f07 | [
"BSD-2-Clause"
] | 1 | 2021-09-23T01:11:06.000Z | 2021-09-23T01:11:06.000Z |
style_string = """
<style>
.container { width:100% !important; }
.hit {
border-style: dotted;
border-width: 20px;
border-color: #ddd;
color: black;
background-color: white;
}
.miss {
border-style: solid;
border-width: 20px;
border-color: black;
color: pink;
background-color: black;
}
.outside {
border-style: solid;
border-width: 10px;
border-color: #ddf;
}
</style>
"""
from IPython.display import display, HTML
table_template = """
<div class="outside">
<table class="outside">
<tr>
%s
</tr>
</table>
</div>
"""
hit_template = """
<td><div class="hit"> %s </div></td>
"""
miss_template = """
<td><div class="miss"> %s </div></td>
"""
def classification_table(zero_one_string):
L = []
for c in zero_one_string:
if int(c):
entry = hit_template % c
else:
entry = miss_template % c
L.append(entry)
body = "".join(L)
return table_template % body
def cls(zero_one_string):
tb = classification_table(zero_one_string)
display(HTML(tb))
def go():
display(HTML(style_string))
def compare_metric_table():
display(HTML(comparison_table))
comparison_table = """
<table border>
<tr>
<th colspan="8">Secondary Statistic</th>
</tr>
<tr>
<td>
<em>Primary Statistic Favors</em>
</td>
<th>
ASP = Average Squared Preference
</th>
<th>
AUROC = Area Under the ROC Curve
</th>
<th>
AUPR = Area Under the precision recall curve
</th>
<th>
ALP = Average Log Preference
</th>
<th>
RLP = Reversed Log Preference
</th>
<th>
SFP = Squared False Penalty
</th>
<th>
LFP = Log False Penalty
</th>
</tr>
<tr>
<th>
ASP = Average Squared Preference
</th>
<td>
<!-- ASP = Average Squared Preference -->
(same)
</td>
<td>
<!-- AUROC = Area Under the ROC Curve -->
hits spread out
</td>
<td>
<!-- AUPR = Area Under the precision recall curve -->
???
</td>
<td>
<!-- ALP = Average Log Preference -->
hits spread out
</td>
<td>
<!-- RLP = Reversed Log Preference -->
hits clustered
</td>
<td>
<!-- SFP = Squared False Penalty -->
hits spread out
</td>
<td>
<!-- LFP = Log False Penalty -->
hits clustered
</td>
</tr>
<tr>
<th>
AUROC = Area Under the ROC Curve
</th>
<td>
<!-- ASP = Average Squared Preference -->
hits clustered
</td>
<td>
<!-- AUROC = Area Under the ROC Curve -->
(same)
</td>
<td>
<!-- AUPR = Area Under the precision recall curve -->
tolerates early misses
</td>
<td>
<!-- ALP = Average Log Preference -->
hits spread out
</td>
<td>
<!-- RLP = Reversed Log Preference -->
hits clustered
</td>
<td>
<!-- SFP = Squared False Penalty -->
hits spread out
</td>
<td>
<!-- LFP = Log False Penalty -->
hits clustered
</td>
</tr>
<tr>
<th>
AUPR = Area Under the precision recall curve
</th>
<td>
<!-- ASP = Average Squared Preference -->
favors early hits
</td>
<td>
<!-- AUROC = Area Under the ROC Curve -->
favors early hits
</td>
<td>
<!-- AUPR = Area Under the precision recall curve -->
(same)
</td>
<td>
<!-- ALP = Average Log Preference -->
favors early hits
</td>
<td>
<!-- RLP = Reversed Log Preference -->
favors early hits
</td>
<td>
<!-- SFP = Squared False Penalty -->
favors early hits
</td>
<td>
<!-- LFP = Log False Penalty -->
favors early hits
</td>
</tr>
<tr>
<th>
ALP = Average Log Preference
</th>
<td>
<!-- ASP = Average Squared Preference -->
hits clustered
</td>
<td>
<!-- AUROC = Area Under the ROC Curve -->
hits clustered
</td>
<td>
<!-- AUPR = Area Under the precision recall curve -->
tolerates early misses
</td>
<td>
<!-- ALP = Average Log Preference -->
(same)
</td>
<td>
<!-- RLP = Reversed Log Preference -->
hits clustered
</td>
<td>
<!-- SFP = Squared False Penalty -->
hits clustered
</td>
<td>
<!-- LFP = Log False Penalty -->
hits clustered
</td>
</tr>
<tr>
<th>
RLP = Reversed Log Preference
</th>
<td>
<!-- ASP = Average Squared Preference -->
hits spread out
</td>
<td>
<!-- AUROC = Area Under the ROC Curve -->
hits spread out
</td>
<td>
<!-- AUPR = Area Under the precision recall curve -->
tolerates early misses
</td>
<td>
<!-- ALP = Average Log Preference -->
hits spread out
</td>
<td>
<!-- RLP = Reversed Log Preference -->
(same)
</td>
<td>
<!-- SFP = Squared False Penalty -->
hits spread out
</td>
<td>
<!-- LFP = Log False Penalty -->
CORRELATED!
</td>
</tr>
<tr>
<th>
SFP = Squared False Penalty
</th>
<td>
<!-- ASP = Average Squared Preference -->
hits clustered
</td>
<td>
<!-- AUROC = Area Under the ROC Curve -->
hits clustered
</td>
<td>
<!-- AUPR = Area Under the precision recall curve -->
tolerates early misses
</td>
<td>
<!-- ALP = Average Log Preference -->
hits clustered
</td>
<td>
<!-- RLP = Reversed Log Preference -->
hits clustered
</td>
<td>
<!-- SFP = Squared False Penalty -->
(same)
</td>
<td>
<!-- LFP = Log False Penalty -->
hits clustered
</td>
</tr>
</table>
"""
go()
| 22.827922 | 65 | 0.420993 | 646 | 7,031 | 4.547988 | 0.143963 | 0.049013 | 0.065351 | 0.063649 | 0.762083 | 0.697073 | 0.590197 | 0.544588 | 0.516338 | 0.455412 | 0 | 0.002591 | 0.451145 | 7,031 | 307 | 66 | 22.90228 | 0.758746 | 0 | 0 | 0.8125 | 0 | 0 | 0.908108 | 0.002987 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013889 | false | 0 | 0.006944 | 0 | 0.024306 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e3cc8400178232463ef97c742304e359a41866c | 827 | py | Python | src/cleanMarkdown.py | xRuiAlves/minute-to-pdf | a46bef9cc76318c702f8d94da4ad6186bcdf0e3b | [
"MIT"
] | null | null | null | src/cleanMarkdown.py | xRuiAlves/minute-to-pdf | a46bef9cc76318c702f8d94da4ad6186bcdf0e3b | [
"MIT"
] | null | null | null | src/cleanMarkdown.py | xRuiAlves/minute-to-pdf | a46bef9cc76318c702f8d94da4ad6186bcdf0e3b | [
"MIT"
] | null | null | null | import re as regex
def removeTags(sourceCode):
return regex.sub(r"###### tags: .*", "", sourceCode)
def getTitle(sourceCode):
titleRegex = r"^# ([^\n]*)"
return regex.search(titleRegex, sourceCode).group(1), regex.sub(titleRegex, "", sourceCode)
def downsizeTitles(sourceCode):
titleRegex = r"#(#* [\w\ ]*)"
return regex.sub(titleRegex, lambda match: match.group(1), sourceCode)
def spacePoints(sourceCode):
pointRegex = r"(\*\*[\w\ ]*\*\*: [^\n]*)\n\*"
while regex.search(pointRegex, sourceCode) is not None:
sourceCode = regex.sub(pointRegex, lambda match: f"{match.group(1)}\n\n*", sourceCode)
return sourceCode
def addFrontmatter(sourceCode, title):
frontmatter = f"---\ntitle: \"{title}\"\ngeometry:\n- margin=2cm\n---\n"
return frontmatter + sourceCode.strip()
| 28.517241 | 95 | 0.648126 | 95 | 827 | 5.642105 | 0.378947 | 0.059701 | 0.052239 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005747 | 0.158404 | 827 | 28 | 96 | 29.535714 | 0.764368 | 0 | 0 | 0 | 0 | 0 | 0.162031 | 0.025393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.294118 | false | 0 | 0.058824 | 0.058824 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
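Assuming a document whose first line is an H1 (the regexes above anchor the title at the start of the string), the title extraction and heading demotion compose like this — the helpers are re-declared with snake_case names so the sketch is self-contained:

```python
import re as regex


def get_title(source):
    """Mirrors getTitle above: pull the leading H1 and strip it from the body."""
    title_regex = r"^# ([^\n]*)"
    return regex.search(title_regex, source).group(1), regex.sub(title_regex, "", source)


def downsize_titles(source):
    """Mirrors downsizeTitles above: demote every remaining heading one level."""
    return regex.sub(r"#(#* [\w\ ]*)", lambda match: match.group(1), source)


doc = "# Meeting Minutes\n## Agenda\nSome text"
title, body = get_title(doc)
print(title)                  # → Meeting Minutes
print(downsize_titles(body))  # '## Agenda' has become '# Agenda'
```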
8e407ed85ad9862ceadb000fd47a59627e319a69 | 1,694 | py | Python | tests/test_scale.py | aaronfinke/fast_dp | 23ceb72e79f9c45df1d97bf535477e9749582c93 | [
"Apache-2.0"
] | null | null | null | tests/test_scale.py | aaronfinke/fast_dp | 23ceb72e79f9c45df1d97bf535477e9749582c93 | [
"Apache-2.0"
] | 3 | 2019-06-11T17:18:30.000Z | 2019-07-15T22:37:50.000Z | tests/test_scale.py | aaronfinke/fast_dp | 23ceb72e79f9c45df1d97bf535477e9749582c93 | [
"Apache-2.0"
] | 5 | 2018-11-16T22:02:03.000Z | 2020-08-10T15:23:45.000Z | #!/usr/local/crys-local/ccp4-7.0/bin/cctbx.python
import unittest
from scale import scale
class TestScale(unittest.TestCase):
def setUp(self):
pass
def test_scale(self):
unit_cell = (232.96666666666667, 232.96666666666667, 232.96666666666667, 90.0, 90.0, 90.0)
meta = {'beam': (120.55726144764847, 119.44351135718689),
'detector': 'EIGER_9M',
'detector_class': 'eiger 9M',
'directory': '/mnt/optane/hbernstein/CollinsLaccases/data/CataApo05/5/NSLS2-18_10',
'distance': 200.04000666485112,
'end': 50,
'exposure_time': 0.05000000074505806,
'extra_text': 'LIB=/usr/local/crys-local/ccp4-7.0/bin/../lib/eiger2cbf.so\n',
'matching': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
46, 47, 48, 49, 50],
'oscillation': (0.0, 0.20000000298023224),
'oscillation_axis': 'Omega_I_guess',
'phi_end': 0.20000000298023224,
'phi_start': 0.0,
'phi_width': 0.20000000298023224,
'pixel': (0.07500000356230885, 0.07500000356230885),
'saturation': 92461.0,
'sensor': 0.44999999227002263,
'serial_number': 0,
'size': (3269, 3110),
'start': 1,
'template': 'CataApo05_1444_??????.h5',
'wavelength': 0.9201257824897766}
spg_num = 197
res_high = 6.492
res_low = 30.0
n_jobs = 1
n_processors = 0
self.assertEqual(scale(unit_cell, meta, spg_num, res_high, res_low, n_jobs, n_processors), ((224.6473, 224.6473, 224.6473, 90.0, 90.0, 90.0), 'I 2 3', 4951, (1608.34, 1593.23)))
def tearDown(self):
pass
if __name__ == '__main__':
unittest.main(verbosity=3)
| 32.576923 | 179 | 0.639906 | 253 | 1,694 | 4.146245 | 0.596838 | 0.017159 | 0.019066 | 0.022879 | 0.06673 | 0.06673 | 0.049571 | 0.049571 | 0 | 0 | 0 | 0.325883 | 0.181228 | 1,694 | 51 | 180 | 33.215686 | 0.430425 | 0.028335 | 0 | 0.047619 | 0 | 0.02381 | 0.232827 | 0.091793 | 0 | 0 | 0 | 0 | 0.02381 | 1 | 0.071429 | false | 0.047619 | 0.047619 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8e419662f3f0d022764cde564f042fd6ac971aa6 | 330 | py | Python | netcad_demo_meraki1/profiles/__init__.py | jeremyschulman/netcad-demo-meraki-1 | fb04e2c2e2d6a7ea005602121f35d4ac4ea3b3bb | [
"Apache-2.0"
] | null | null | null | netcad_demo_meraki1/profiles/__init__.py | jeremyschulman/netcad-demo-meraki-1 | fb04e2c2e2d6a7ea005602121f35d4ac4ea3b3bb | [
"Apache-2.0"
] | null | null | null | netcad_demo_meraki1/profiles/__init__.py | jeremyschulman/netcad-demo-meraki-1 | fb04e2c2e2d6a7ea005602121f35d4ac4ea3b3bb | [
"Apache-2.0"
] | null | null | null | from netcad.device.l2_interfaces import InterfaceL2Access, InterfaceL2Trunk
from netcad.device import PeerInterfaceId
from netcad_demo_meraki1.vlans import vlan_native_1
from .physical import port_UTP_1G
class AccessVlan1(InterfaceL2Access):
port_profile = port_UTP_1G
vlan = vlan_native_1
desc = PeerInterfaceId()
| 27.5 | 75 | 0.827273 | 42 | 330 | 6.214286 | 0.547619 | 0.114943 | 0.122605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034843 | 0.130303 | 330 | 11 | 76 | 30 | 0.874564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
8e45edae7334754c6e435545a3fb44c9d485fdf2 | 1,596 | py | Python | modules/SemanticWebImport/src/main/resources/fr/inria/edelweiss/examples/autolayout.py | cmatties/gephi-plugins | 48b9e2eaba33e83ec2bd268b7528d9b66d31b7c2 | [
"Apache-2.0"
] | 7 | 2016-06-22T12:28:42.000Z | 2022-01-03T05:13:06.000Z | modules/SemanticWebImport/src/main/resources/fr/inria/edelweiss/examples/autolayout.py | cmatties/gephi-plugins | 48b9e2eaba33e83ec2bd268b7528d9b66d31b7c2 | [
"Apache-2.0"
] | null | null | null | modules/SemanticWebImport/src/main/resources/fr/inria/edelweiss/examples/autolayout.py | cmatties/gephi-plugins | 48b9e2eaba33e83ec2bd268b7528d9b66d31b7c2 | [
"Apache-2.0"
] | 2 | 2016-01-31T03:59:51.000Z | 2016-02-24T14:55:06.000Z | import org.openide.util.Lookup as Lookup
import org.gephi.ranking.api.RankingController
import org.gephi.ranking.api.Ranking as Ranking
import org.gephi.ranking.api.Transformer as Transformer
import java.awt.Color as Color
rankingController = Lookup.getDefault().lookup(org.gephi.ranking.api.RankingController)
# Set the color in function of the degree.
degreeRanking = rankingController.getModel().getRanking(Ranking.NODE_ELEMENT, Ranking.DEGREE_RANKING);
colorTransformer = rankingController.getModel().getTransformer(Ranking.NODE_ELEMENT, Transformer.RENDERABLE_COLOR)
colorTransformer.setColors([Color.BLUE, Color.YELLOW])
rankingController.transform(degreeRanking, colorTransformer)
# Set the size in function of the degree of the nodes.
sizeTransformer = rankingController.getModel().getTransformer(Ranking.NODE_ELEMENT, Transformer.RENDERABLE_SIZE)
sizeTransformer.setMinSize(3)
sizeTransformer.setMaxSize(40)
rankingController.transform(degreeRanking, sizeTransformer)
### Layout of the graph
# Construction of a layout object
import org.gephi.layout.plugin.forceAtlas2.ForceAtlas2Builder as ForceAtlas2Builder
import org.gephi.layout.plugin.forceAtlas2.ForceAtlas2 as ForceAtlas2
fa2builder = ForceAtlas2Builder()
fa2 = ForceAtlas2(fa2builder)
# Setting the layout object
import org.gephi.graph.api.GraphController as GraphController
graphModel = Lookup.getDefault().lookup(GraphController).getModel()
fa2.setGraphModel(graphModel)
fa2.setAdjustSizes(True) # To prevent overlap
print "executing layout"
# Run the layout.
fa2.initAlgo()
for i in range(5000):
fa2.goAlgo()
| 35.466667 | 115 | 0.828321 | 187 | 1,596 | 7.037433 | 0.363636 | 0.047872 | 0.06383 | 0.054711 | 0.329787 | 0.174772 | 0.118541 | 0.118541 | 0 | 0 | 0 | 0.015048 | 0.08396 | 1,596 | 44 | 116 | 36.272727 | 0.885089 | 0.129073 | 0 | 0 | 0 | 0 | 0.011619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.307692 | null | null | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
# File: catalog_get_product.py (repo: cognitivefashion/cf-sdk-python, Apache-2.0)
#-----------------------------------------------------------------------------
# Get product from a catalog.
# GET /v1/catalog/{catalog_name}/products/{id}
#------------------------------------------------------------------------------
import os
import json
import requests
from urlparse import urljoin
from pprint import pprint
from props import *
# Replace this with the custom url generated for you.
api_gateway_url = props['api_gateway_url']
# Pass the api key into the header
# Replace 'your_api_key' with your API key.
headers = {'X-Api-Key': props['X-Api-Key']}
# catalog name
catalog_name = props['catalog_name']
# product id (renamed from 'id' to avoid shadowing the builtin)
product_id = 'SKLTS16AMCWSH8SH20'
# API endpoint
api_endpoint = '/v1/catalog/%s/products/%s' % (catalog_name, product_id)
url = urljoin(api_gateway_url, api_endpoint)
response = requests.get(url, headers=headers)
print response.status_code
pprint(response.json())
# The local copy of the catalog image can be accessed as
image_url = response.json()['data']['images']['1']['image_url']
image_filename = response.json()['data']['images']['1']['image_filename']
print urljoin(api_gateway_url,
'/v1/catalog/%s/images/%s'%(catalog_name,image_filename))
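The endpoint above splices the catalog name and product id directly into the URL path. Ids containing reserved characters ('/', spaces) would break that path; a small stdlib-only helper (hypothetical, not part of the SDK; shown in a form that runs on Python 3 with a Python 2 fallback, since the script above is Python 2) that percent-encodes both segments first:

```python
# Sketch: build the product endpoint with percent-encoded path segments.
try:
    from urllib.parse import quote, urljoin   # Python 3
except ImportError:                           # Python 2 fallback
    from urllib import quote
    from urlparse import urljoin

def product_endpoint(base_url, catalog_name, product_id):
    # safe='' also encodes '/', so an id cannot escape its path segment.
    path = '/v1/catalog/%s/products/%s' % (quote(catalog_name, safe=''),
                                           quote(product_id, safe=''))
    return urljoin(base_url, path)

print(product_endpoint('https://api.example.com',
                       'sample_catalog', 'SKLTS16AMCWSH8SH20'))
```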
# File: fabfile/testbeds/testbed_setup80.py (repo: GaryGaryWU/contrail_fabric_util, Apache-2.0)
from fabric.api import env
os_username = 'admin'
os_password = 'contrail123'
os_tenant_name = 'demo'
host1 = 'root@10.84.21.1'
host2 = 'root@10.84.21.2'
host3 = 'root@10.84.21.3'
host4 = 'root@10.84.21.4'
host5 = 'root@10.84.21.5'
host6 = 'root@10.84.21.6'
host7 = 'root@10.84.21.7'
host8 = 'root@10.84.21.8'
host9 = 'root@10.84.21.9'
host10 = 'root@10.84.21.10'
host11 = 'root@10.84.21.11'
host12 = 'root@10.84.21.12'
host13 = 'root@10.84.21.13'
host14 = 'root@10.84.21.14'
host15 = 'root@10.84.21.15'
host16 = 'root@10.84.21.16'
host17 = 'root@10.84.21.17'
host18 = 'root@10.84.21.18'
host19 = 'root@10.84.21.19'
host20 = 'root@10.84.21.20'
host21 = 'root@10.84.21.21'
host22 = 'root@10.84.21.22'
host23 = 'root@10.84.21.23'
host24 = 'root@10.84.21.24'
host25 = 'root@10.84.21.28'
host26 = 'root@10.84.21.29'
host27 = 'root@10.84.21.30'
host28 = 'root@10.84.21.31'
host29 = 'root@10.84.21.32'
host30 = 'root@10.84.21.33'
host31 = 'root@10.84.23.1'
host32 = 'root@10.84.23.2'
host33 = 'root@10.84.23.3'
host34 = 'root@10.84.23.4'
host35 = 'root@10.84.23.5'
host36 = 'root@10.84.23.6'
host37 = 'root@10.84.23.7'
host38 = 'root@10.84.23.8'
host39 = 'root@10.84.23.9'
host40 = 'root@10.84.23.10'
host41 = 'root@10.84.21.34'
host42 = 'root@10.84.21.35'
host43 = 'root@10.84.21.36'
host44 = 'root@10.84.21.37'
host45 = 'root@10.84.21.38'
host46 = 'root@10.84.21.39'
host47 = 'root@10.84.21.40'
host48 = 'root@10.84.21.41'
host49 = 'root@10.84.21.42'
host50 = 'root@10.84.21.43'
host51 = 'root@10.84.22.1'
host52 = 'root@10.84.22.2'
host53 = 'root@10.84.22.3'
host54 = 'root@10.84.22.4'
host55 = 'root@10.84.22.5'
host56 = 'root@10.84.22.6'
host57 = 'root@10.84.22.7'
host58 = 'root@10.84.22.8'
host59 = 'root@10.84.22.9'
host60 = 'root@10.84.22.10'
host61 = 'root@10.84.22.11'
host62 = 'root@10.84.22.12'
host63 = 'root@10.84.22.13'
host64 = 'root@10.84.22.14'
host65 = 'root@10.84.22.15'
host66 = 'root@10.84.22.16'
host67 = 'root@10.84.22.17'
host68 = 'root@10.84.22.18'
host69 = 'root@10.84.22.19'
host70 = 'root@10.84.22.20'
host71 = 'root@10.84.23.11'
host72 = 'root@10.84.23.12'
host73 = 'root@10.84.23.13'
host74 = 'root@10.84.23.14'
host75 = 'root@10.84.23.15'
host76 = 'root@10.84.23.16'
host77 = 'root@10.84.23.17'
host78 = 'root@10.84.23.18'
host79 = 'root@10.84.23.19'
host80 = 'root@10.84.23.20'
ext_routers = [('mx1', '10.84.23.253'), ('mx2', '10.84.23.252')]
router_asn = 64512
public_vn_rtgt = 10000
public_vn_subnet = "10.84.46.0/24"
host_build = 'ajayhn@10.84.5.101'
env.roledefs = {
'all': [host1, host2, host3, host4, host5, host6, host7, host8, host9, host10,
host11, host12, host13, host14, host15, host16, host17, host18, host19, host20,
host21, host22, host23, host24, host25, host26, host27, host28, host29, host30,
host31, host32, host33, host34, host35, host36, host37, host38, host39, host40,
host41, host42, host43, host44, host45, host46, host47, host48, host49, host50,
host51, host52, host53, host54, host55, host56, host57, host58, host59, host60,
host61, host62, host63, host64, host65, host66, host67, host68, host69, host70,
host71, host72, host73, host74, host75, host76, host77, host78, host79, host80],
'cfgm': [host40],
'control': [host1, host2],
'compute': [host3, host4, host5, host6, host7, host8, host9, host10,
host11, host12, host13, host14, host15, host16, host17, host18, host19, host20,
host21, host22, host23, host24, host25, host26, host27, host28, host29, host30,
host31, host32, host33, host34, host35, host36, host37, host38, host39,
host41, host42, host43, host44, host45, host46, host47, host48, host49, host50,
host51, host52, host53, host54, host55, host56, host57, host58, host59, host60,
host61, host62, host63, host64, host65, host66, host67, host68, host69, host70,
host71, host72, host73, host74, host75, host76, host77, host78, host79, host80],
'database': [host39, host38],
'collector': [host37, host36],
'webui': [host35],
'build': [host_build],
}
env.hostnames = {
'all': [
'b1s1',
'b1s2',
'b1s3',
'b1s4',
'b1s5',
'b1s6',
'b1s7',
'b1s8',
'b1s9',
'b1s10',
'b1s11',
'b1s12',
'b1s13',
'b1s14',
'b1s15',
'b1s16',
'b1s17',
'b1s18',
'b1s19',
'b1s20',
'b1s21',
'b1s22',
'b1s23',
'b1s24',
'b1s28',
'b1s29',
'b1s30',
'b1s31',
'b1s32',
'b1s33',
'b3s1',
'b3s2',
'b3s3',
'b3s4',
'b3s5',
'b3s6',
'b3s7',
'b3s8',
'b3s9',
'b3s10',
'b1s34',
'b1s35',
'b1s36',
'b1s37',
'b1s38',
'b1s39',
'b1s40',
'b1s41',
'b1s42',
'b1s43',
'b2s1',
'b2s2',
'b2s3',
'b2s4',
'b2s5',
'b2s6',
'b2s7',
'b2s8',
'b2s9',
'b2s10',
'b2s11',
'b2s12',
'b2s13',
'b2s14',
'b2s15',
'b2s16',
'b2s17',
'b2s18',
'b2s19',
'b2s20',
'b3s11',
'b3s12',
'b3s13',
'b3s14',
'b3s15',
'b3s16',
'b3s17',
'b3s18',
'b3s19',
'b3s20',
]
}
env.passwords = {
host1: 'c0ntrail123',
host2: 'c0ntrail123',
host3: 'c0ntrail123',
host4: 'c0ntrail123',
host5: 'c0ntrail123',
host6: 'c0ntrail123',
host7: 'c0ntrail123',
host8: 'c0ntrail123',
host9: 'c0ntrail123',
host10: 'c0ntrail123',
host11: 'c0ntrail123',
host12: 'c0ntrail123',
host13: 'c0ntrail123',
host14: 'c0ntrail123',
host15: 'c0ntrail123',
host16: 'c0ntrail123',
host17: 'c0ntrail123',
host18: 'c0ntrail123',
host19: 'c0ntrail123',
host20: 'c0ntrail123',
host21: 'c0ntrail123',
host22: 'c0ntrail123',
host23: 'c0ntrail123',
host24: 'c0ntrail123',
host25: 'c0ntrail123',
host26: 'c0ntrail123',
host27: 'c0ntrail123',
host28: 'c0ntrail123',
host29: 'c0ntrail123',
host30: 'c0ntrail123',
host31: 'c0ntrail123',
host32: 'c0ntrail123',
host33: 'c0ntrail123',
host34: 'c0ntrail123',
host35: 'c0ntrail123',
host36: 'c0ntrail123',
host37: 'c0ntrail123',
host38: 'c0ntrail123',
host39: 'c0ntrail123',
host40: 'c0ntrail123',
host41: 'c0ntrail123',
host42: 'c0ntrail123',
host43: 'c0ntrail123',
host44: 'c0ntrail123',
host45: 'c0ntrail123',
host46: 'c0ntrail123',
host47: 'c0ntrail123',
host48: 'c0ntrail123',
host49: 'c0ntrail123',
host50: 'c0ntrail123',
host51: 'c0ntrail123',
host52: 'c0ntrail123',
host53: 'c0ntrail123',
host54: 'c0ntrail123',
host55: 'c0ntrail123',
host56: 'c0ntrail123',
host57: 'c0ntrail123',
host58: 'c0ntrail123',
host59: 'c0ntrail123',
host60: 'c0ntrail123',
host61: 'c0ntrail123',
host62: 'c0ntrail123',
host63: 'c0ntrail123',
host64: 'c0ntrail123',
host65: 'c0ntrail123',
host66: 'c0ntrail123',
host67: 'c0ntrail123',
host68: 'c0ntrail123',
host69: 'c0ntrail123',
host70: 'c0ntrail123',
host71: 'c0ntrail123',
host72: 'c0ntrail123',
host73: 'c0ntrail123',
host74: 'c0ntrail123',
host75: 'c0ntrail123',
host76: 'c0ntrail123',
host77: 'c0ntrail123',
host78: 'c0ntrail123',
host79: 'c0ntrail123',
host80: 'c0ntrail123',
host_build: 'c0ntrail123'
}
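The 80 `hostN` assignments and the matching `env.passwords` map above are hand-written and highly repetitive. A hypothetical generator (`make_hosts` is not part of the original fabfile) that reproduces the same ordered host list from the three IP blocks used in the file:

```python
# Sketch: rebuild the 80 testbed hosts from their IP ranges instead of
# 80 individual assignments. Ranges mirror the hand-written ordering.
def make_hosts(user='root'):
    ranges = [
        ('10.84.21.', list(range(1, 25)) + list(range(28, 34))),  # host1..host30
        ('10.84.23.', range(1, 11)),    # host31..host40
        ('10.84.21.', range(34, 44)),   # host41..host50
        ('10.84.22.', range(1, 21)),    # host51..host70
        ('10.84.23.', range(11, 21)),   # host71..host80
    ]
    hosts = []
    for prefix, nums in ranges:
        hosts.extend('%s@%s%d' % (user, prefix, n) for n in nums)
    return hosts

hosts = make_hosts()
passwords = dict((h, 'c0ntrail123') for h in hosts)  # mirrors env.passwords
```

The same list could then feed `env.roledefs['all']` and `env.passwords`, leaving only the role-specific subsets to spell out by hand.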