hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
00d0c1f2466556d7dc4cf14719be378f12a2a51d | 276 | py | Python | northwind/adapter/repository/i_datastore_context.py | andrewwgordon/ddd-northwind-api | 33ace8456233acd1a39dabe3c53e0334fab26d66 | [
"MIT"
] | null | null | null | northwind/adapter/repository/i_datastore_context.py | andrewwgordon/ddd-northwind-api | 33ace8456233acd1a39dabe3c53e0334fab26d66 | [
"MIT"
] | 6 | 2021-09-11T23:10:04.000Z | 2021-09-12T14:35:18.000Z | northwind/adapter/repository/i_datastore_context.py | andrewwgordon/ddd-northwind-api | 33ace8456233acd1a39dabe3c53e0334fab26d66 | [
"MIT"
] | null | null | null | from abc import ABC, abstractclassmethod
class IDatastoreContext(ABC):
@abstractclassmethod
def open(self):
pass
@abstractclassmethod
def get_connection(self):
pass
@abstractclassmethod
def close_connection(self):
pass | 18.4 | 40 | 0.666667 | 25 | 276 | 7.28 | 0.52 | 0.362637 | 0.296703 | 0.32967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275362 | 276 | 15 | 41 | 18.4 | 0.91 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0.272727 | 0.090909 | 0 | 0.454545 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
daac488f60caee813996c27e43d534b5d0ef69b8 | 14 | py | Python | pyth/plugins/latex/__init__.py | eriol/pyth | f2a06fc8dc9b1cfc439ea14252d39b9845a7fa4b | [
"MIT"
] | 47 | 2015-01-26T22:06:53.000Z | 2022-01-04T15:11:14.000Z | pyth/plugins/latex/__init__.py | eriol/pyth | f2a06fc8dc9b1cfc439ea14252d39b9845a7fa4b | [
"MIT"
] | 16 | 2015-02-20T18:12:22.000Z | 2021-12-17T09:49:19.000Z | pyth/plugins/latex/__init__.py | eriol/pyth | f2a06fc8dc9b1cfc439ea14252d39b9845a7fa4b | [
"MIT"
] | 45 | 2015-01-29T02:47:39.000Z | 2022-01-26T12:50:27.000Z | """
Latex
"""
| 3.5 | 5 | 0.357143 | 1 | 14 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 14 | 3 | 6 | 4.666667 | 0.454545 | 0.357143 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
dab90ff3376df2c50bec56a7091cbcd8a77b3a7c | 77 | py | Python | test/tests/68.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | 1 | 2020-02-06T14:28:45.000Z | 2020-02-06T14:28:45.000Z | test/tests/68.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | null | null | null | test/tests/68.py | kevinxucs/pyston | bdb87c1706ac74a0d15d9bc2bae53798678a5f14 | [
"Apache-2.0"
] | 1 | 2020-02-06T14:29:00.000Z | 2020-02-06T14:29:00.000Z | def f():
pass
print f.__name__
print f.__module__
print sum.__module__
| 9.625 | 20 | 0.727273 | 12 | 77 | 3.666667 | 0.583333 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194805 | 77 | 7 | 21 | 11 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.2 | 0 | null | null | 0.6 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 6 |
daba6f8825ff5cb63a6754319183ac900e3c120c | 133 | py | Python | modules/pymol/opengl/__init__.py | markdoerr/pymol-open-source | b891b59ffaea812600648aa131ea2dbecd59a199 | [
"CNRI-Python"
] | 2 | 2019-05-23T22:17:29.000Z | 2020-07-03T14:36:22.000Z | modules/pymol/opengl/__init__.py | markdoerr/pymol-open-source | b891b59ffaea812600648aa131ea2dbecd59a199 | [
"CNRI-Python"
] | null | null | null | modules/pymol/opengl/__init__.py | markdoerr/pymol-open-source | b891b59ffaea812600648aa131ea2dbecd59a199 | [
"CNRI-Python"
] | null | null | null | from __future__ import print_function
print("pymol.opengl is deprecated, us the OpenGL module from http://pyopengl.sf.net instead")
| 33.25 | 93 | 0.804511 | 20 | 133 | 5.1 | 0.85 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112782 | 133 | 3 | 94 | 44.333333 | 0.864407 | 0 | 0 | 0 | 0 | 0 | 0.631579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
dabc9f2b4ed9675bc28b7aeb82f434d393fae497 | 67 | py | Python | mainDirver.py | markgauda/GordonsPriceChecker | 8ae4a6cd964f21398d37ce7ceb64abc6cda503b3 | [
"MIT"
] | null | null | null | mainDirver.py | markgauda/GordonsPriceChecker | 8ae4a6cd964f21398d37ce7ceb64abc6cda503b3 | [
"MIT"
] | null | null | null | mainDirver.py | markgauda/GordonsPriceChecker | 8ae4a6cd964f21398d37ce7ceb64abc6cda503b3 | [
"MIT"
] | null | null | null | """Made by Mark Gauda in the in the winder of 2022
"""
import main
| 16.75 | 50 | 0.701493 | 13 | 67 | 3.615385 | 0.846154 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075472 | 0.208955 | 67 | 3 | 51 | 22.333333 | 0.811321 | 0.701493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
dad8c46363d8fd469e5f8b171e32300db1584128 | 51,134 | py | Python | tgtypes/models/message.py | autogram/tgtypes | 90f8d0d35d3c372767508e56c20777635e128e38 | [
"MIT"
] | null | null | null | tgtypes/models/message.py | autogram/tgtypes | 90f8d0d35d3c372767508e56c20777635e128e38 | [
"MIT"
] | null | null | null | tgtypes/models/message.py | autogram/tgtypes | 90f8d0d35d3c372767508e56c20777635e128e38 | [
"MIT"
] | null | null | null | from __future__ import annotations
import datetime
from typing import TYPE_CHECKING, List, Optional, Union
from pydantic import Field
from ..utils import helper
from ._base import UNSET, TelegramObject
if TYPE_CHECKING: # pragma: no cover
from .animation import Animation
from .audio import Audio
from .chat import Chat
from .contact import Contact
from .dice import Dice
from .document import Document
from .force_reply import ForceReply
from .game import Game
from .inline_keyboard_markup import InlineKeyboardMarkup
from .invoice import Invoice
from .input_file import InputFile
from .input_media_photo import InputMediaPhoto
from .input_media_video import InputMediaVideo
from .labeled_price import LabeledPrice
from .location import Location
from .message_entity import MessageEntity
from .passport_data import PassportData
from .photo_size import PhotoSize
from .poll import Poll
from .reply_keyboard_markup import ReplyKeyboardMarkup
from .reply_keyboard_remove import ReplyKeyboardRemove
from .sticker import Sticker
from .successful_payment import SuccessfulPayment
from .user import User
from .venue import Venue
from .video import Video
from .video_note import VideoNote
from .voice import Voice
from ..methods import (
SendAnimation,
SendAudio,
SendContact,
SendDocument,
SendGame,
SendInvoice,
SendLocation,
SendMediaGroup,
SendMessage,
SendPhoto,
SendPoll,
SendDice,
SendSticker,
SendVenue,
SendVideo,
SendVideoNote,
SendVoice,
)
class Message(TelegramObject):
"""
This object represents a message.
Source: https://core.telegram.org/bots/api#message
"""
message_id: int
"""Unique message identifier inside this chat"""
date: datetime.datetime
"""Date the message was sent in Unix time"""
chat: Chat
"""Conversation the message belongs to"""
from_user: Optional[User] = Field(None, alias="from")
"""Sender, empty for messages sent to channels"""
forward_from: Optional[User] = None
"""For forwarded messages, sender of the original message"""
forward_from_chat: Optional[Chat] = None
"""For messages forwarded from channels, information about the original channel"""
forward_from_message_id: Optional[int] = None
"""For messages forwarded from channels, identifier of the original message in the channel"""
forward_signature: Optional[str] = None
"""For messages forwarded from channels, signature of the post author if present"""
forward_sender_name: Optional[str] = None
"""Sender's name for messages forwarded from users who disallow adding a link to their account
in forwarded messages"""
forward_date: Optional[int] = None
"""For forwarded messages, date the original message was sent in Unix time"""
reply_to_message: Optional[Message] = None
"""For replies, the original message. Note that the Message object in this field will not
contain further reply_to_message fields even if it itself is a reply."""
via_bot: Optional[User] = None
"""Bot through which the message was sent"""
edit_date: Optional[int] = None
"""Date the message was last edited in Unix time"""
media_group_id: Optional[str] = None
"""The unique identifier of a media message group this message belongs to"""
author_signature: Optional[str] = None
"""Signature of the post author for messages in channels"""
text: Optional[str] = None
"""For text messages, the actual UTF-8 text of the message, 0-4096 characters"""
entities: Optional[List[MessageEntity]] = None
"""For text messages, special entities like usernames, URLs, bot commands, etc. that appear in
the text"""
animation: Optional[Animation] = None
"""Message is an animation, information about the animation. For backward compatibility, when
this field is set, the document field will also be set"""
audio: Optional[Audio] = None
"""Message is an audio file, information about the file"""
document: Optional[Document] = None
"""Message is a general file, information about the file"""
photo: Optional[List[PhotoSize]] = None
"""Message is a photo, available sizes of the photo"""
sticker: Optional[Sticker] = None
"""Message is a sticker, information about the sticker"""
video: Optional[Video] = None
"""Message is a video, information about the video"""
video_note: Optional[VideoNote] = None
"""Message is a video note, information about the video message"""
voice: Optional[Voice] = None
"""Message is a voice message, information about the file"""
caption: Optional[str] = None
"""Caption for the animation, audio, document, photo, video or voice, 0-1024 characters"""
caption_entities: Optional[List[MessageEntity]] = None
"""For messages with a caption, special entities like usernames, URLs, bot commands, etc. that
appear in the caption"""
contact: Optional[Contact] = None
"""Message is a shared contact, information about the contact"""
dice: Optional[Dice] = None
"""Message is a dice with random value from 1 to 6"""
game: Optional[Game] = None
"""Message is a game, information about the game."""
poll: Optional[Poll] = None
"""Message is a native poll, information about the poll"""
venue: Optional[Venue] = None
"""Message is a venue, information about the venue. For backward compatibility, when this
field is set, the location field will also be set"""
location: Optional[Location] = None
"""Message is a shared location, information about the location"""
new_chat_members: Optional[List[User]] = None
"""New members that were added to the group or supergroup and information about them (the bot
itself may be one of these members)"""
left_chat_member: Optional[User] = None
"""A member was removed from the group, information about them (this member may be the bot
itself)"""
new_chat_title: Optional[str] = None
"""A chat title was changed to this value"""
new_chat_photo: Optional[List[PhotoSize]] = None
"""A chat photo was change to this value"""
delete_chat_photo: Optional[bool] = None
"""Service message: the chat photo was deleted"""
group_chat_created: Optional[bool] = None
"""Service message: the group has been created"""
supergroup_chat_created: Optional[bool] = None
"""Service message: the supergroup has been created. This field can't be received in a message
coming through updates, because bot can't be a member of a supergroup when it is created.
It can only be found in reply_to_message if someone replies to a very first message in a
directly created supergroup."""
channel_chat_created: Optional[bool] = None
"""Service message: the channel has been created. This field can't be received in a message
coming through updates, because bot can't be a member of a channel when it is created. It
can only be found in reply_to_message if someone replies to a very first message in a
channel."""
migrate_to_chat_id: Optional[int] = None
"""The group has been migrated to a supergroup with the specified identifier. This number may
be greater than 32 bits and some programming languages may have difficulty/silent defects
in interpreting it. But it is smaller than 52 bits, so a signed 64 bit integer or
double-precision float type are safe for storing this identifier."""
migrate_from_chat_id: Optional[int] = None
"""The supergroup has been migrated from a group with the specified identifier. This number
may be greater than 32 bits and some programming languages may have difficulty/silent
defects in interpreting it. But it is smaller than 52 bits, so a signed 64 bit integer or
double-precision float type are safe for storing this identifier."""
pinned_message: Optional[Message] = None
"""Specified message was pinned. Note that the Message object in this field will not contain
further reply_to_message fields even if it is itself a reply."""
invoice: Optional[Invoice] = None
"""Message is an invoice for a payment, information about the invoice."""
successful_payment: Optional[SuccessfulPayment] = None
"""Message is a service message about a successful payment, information about the payment."""
connected_website: Optional[str] = None
"""The domain name of the website on which the user has logged in."""
passport_data: Optional[PassportData] = None
"""Telegram Passport data"""
reply_markup: Optional[InlineKeyboardMarkup] = None
"""Inline keyboard attached to the message. login_url buttons are represented as ordinary url
buttons."""
@property
def content_type(self) -> str:
if self.text:
return ContentType.TEXT
if self.audio:
return ContentType.AUDIO
if self.animation:
return ContentType.ANIMATION
if self.document:
return ContentType.DOCUMENT
if self.game:
return ContentType.GAME
if self.photo:
return ContentType.PHOTO
if self.sticker:
return ContentType.STICKER
if self.video:
return ContentType.VIDEO
if self.video_note:
return ContentType.VIDEO_NOTE
if self.voice:
return ContentType.VOICE
if self.contact:
return ContentType.CONTACT
if self.venue:
return ContentType.VENUE
if self.location:
return ContentType.LOCATION
if self.new_chat_members:
return ContentType.NEW_CHAT_MEMBERS
if self.left_chat_member:
return ContentType.LEFT_CHAT_MEMBER
if self.invoice:
return ContentType.INVOICE
if self.successful_payment:
return ContentType.SUCCESSFUL_PAYMENT
if self.connected_website:
return ContentType.CONNECTED_WEBSITE
if self.migrate_from_chat_id:
return ContentType.MIGRATE_FROM_CHAT_ID
if self.migrate_to_chat_id:
return ContentType.MIGRATE_TO_CHAT_ID
if self.pinned_message:
return ContentType.PINNED_MESSAGE
if self.new_chat_title:
return ContentType.NEW_CHAT_TITLE
if self.new_chat_photo:
return ContentType.NEW_CHAT_PHOTO
if self.delete_chat_photo:
return ContentType.DELETE_CHAT_PHOTO
if self.group_chat_created:
return ContentType.GROUP_CHAT_CREATED
if self.passport_data:
return ContentType.PASSPORT_DATA
if self.poll:
return ContentType.POLL
if self.dice:
return ContentType.DICE
return ContentType.UNKNOWN
def reply_animation(
self,
animation: Union[InputFile, str],
duration: Optional[int] = None,
width: Optional[int] = None,
height: Optional[int] = None,
thumb: Optional[Union[InputFile, str]] = None,
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendAnimation:
"""
Reply with animation
:param animation:
:param duration:
:param width:
:param height:
:param thumb:
:param caption:
:param parse_mode:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendAnimation
return SendAnimation(
chat_id=self.chat.id,
animation=animation,
duration=duration,
width=width,
height=height,
thumb=thumb,
caption=caption,
parse_mode=parse_mode,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_animation(
self,
animation: Union[InputFile, str],
duration: Optional[int] = None,
width: Optional[int] = None,
height: Optional[int] = None,
thumb: Optional[Union[InputFile, str]] = None,
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendAnimation:
"""
Answer with animation
:param animation:
:param duration:
:param width:
:param height:
:param thumb:
:param caption:
:param parse_mode:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendAnimation
return SendAnimation(
chat_id=self.chat.id,
animation=animation,
duration=duration,
width=width,
height=height,
thumb=thumb,
caption=caption,
parse_mode=parse_mode,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_audio(
self,
audio: Union[InputFile, str],
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
duration: Optional[int] = None,
performer: Optional[str] = None,
title: Optional[str] = None,
thumb: Optional[Union[InputFile, str]] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendAudio:
"""
Reply with audio
:param audio:
:param caption:
:param parse_mode:
:param duration:
:param performer:
:param title:
:param thumb:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendAudio
return SendAudio(
chat_id=self.chat.id,
audio=audio,
caption=caption,
parse_mode=parse_mode,
duration=duration,
performer=performer,
title=title,
thumb=thumb,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_audio(
self,
audio: Union[InputFile, str],
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
duration: Optional[int] = None,
performer: Optional[str] = None,
title: Optional[str] = None,
thumb: Optional[Union[InputFile, str]] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendAudio:
"""
Answer with audio
:param audio:
:param caption:
:param parse_mode:
:param duration:
:param performer:
:param title:
:param thumb:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendAudio
return SendAudio(
chat_id=self.chat.id,
audio=audio,
caption=caption,
parse_mode=parse_mode,
duration=duration,
performer=performer,
title=title,
thumb=thumb,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_contact(
self,
phone_number: str,
first_name: str,
last_name: Optional[str] = None,
vcard: Optional[str] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendContact:
"""
Reply with contact
:param phone_number:
:param first_name:
:param last_name:
:param vcard:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendContact
return SendContact(
chat_id=self.chat.id,
phone_number=phone_number,
first_name=first_name,
last_name=last_name,
vcard=vcard,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_contact(
self,
phone_number: str,
first_name: str,
last_name: Optional[str] = None,
vcard: Optional[str] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendContact:
"""
Answer with contact
:param phone_number:
:param first_name:
:param last_name:
:param vcard:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendContact
return SendContact(
chat_id=self.chat.id,
phone_number=phone_number,
first_name=first_name,
last_name=last_name,
vcard=vcard,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_document(
self,
document: Union[InputFile, str],
thumb: Optional[Union[InputFile, str]] = None,
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendDocument:
"""
Reply with document
:param document:
:param thumb:
:param caption:
:param parse_mode:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendDocument
return SendDocument(
chat_id=self.chat.id,
document=document,
thumb=thumb,
caption=caption,
parse_mode=parse_mode,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_document(
self,
document: Union[InputFile, str],
thumb: Optional[Union[InputFile, str]] = None,
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendDocument:
"""
Answer with document
:param document:
:param thumb:
:param caption:
:param parse_mode:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendDocument
return SendDocument(
chat_id=self.chat.id,
document=document,
thumb=thumb,
caption=caption,
parse_mode=parse_mode,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_game(
self,
game_short_name: str,
disable_notification: Optional[bool] = None,
reply_markup: Optional[InlineKeyboardMarkup] = None,
) -> SendGame:
"""
Reply with game
:param game_short_name:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendGame
return SendGame(
chat_id=self.chat.id,
game_short_name=game_short_name,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_game(
self,
game_short_name: str,
disable_notification: Optional[bool] = None,
reply_markup: Optional[InlineKeyboardMarkup] = None,
) -> SendGame:
"""
Answer with game
:param game_short_name:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendGame
return SendGame(
chat_id=self.chat.id,
game_short_name=game_short_name,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_invoice(
self,
title: str,
description: str,
payload: str,
provider_token: str,
start_parameter: str,
currency: str,
prices: List[LabeledPrice],
provider_data: Optional[str] = None,
photo_url: Optional[str] = None,
photo_size: Optional[int] = None,
photo_width: Optional[int] = None,
photo_height: Optional[int] = None,
need_name: Optional[bool] = None,
need_phone_number: Optional[bool] = None,
need_email: Optional[bool] = None,
need_shipping_address: Optional[bool] = None,
send_phone_number_to_provider: Optional[bool] = None,
send_email_to_provider: Optional[bool] = None,
is_flexible: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[InlineKeyboardMarkup] = None,
) -> SendInvoice:
"""
Reply with invoice
:param title:
:param description:
:param payload:
:param provider_token:
:param start_parameter:
:param currency:
:param prices:
:param provider_data:
:param photo_url:
:param photo_size:
:param photo_width:
:param photo_height:
:param need_name:
:param need_phone_number:
:param need_email:
:param need_shipping_address:
:param send_phone_number_to_provider:
:param send_email_to_provider:
:param is_flexible:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendInvoice
return SendInvoice(
chat_id=self.chat.id,
title=title,
description=description,
payload=payload,
provider_token=provider_token,
start_parameter=start_parameter,
currency=currency,
prices=prices,
provider_data=provider_data,
photo_url=photo_url,
photo_size=photo_size,
photo_width=photo_width,
photo_height=photo_height,
need_name=need_name,
need_phone_number=need_phone_number,
need_email=need_email,
need_shipping_address=need_shipping_address,
send_phone_number_to_provider=send_phone_number_to_provider,
send_email_to_provider=send_email_to_provider,
is_flexible=is_flexible,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_invoice(
self,
title: str,
description: str,
payload: str,
provider_token: str,
start_parameter: str,
currency: str,
prices: List[LabeledPrice],
provider_data: Optional[str] = None,
photo_url: Optional[str] = None,
photo_size: Optional[int] = None,
photo_width: Optional[int] = None,
photo_height: Optional[int] = None,
need_name: Optional[bool] = None,
need_phone_number: Optional[bool] = None,
need_email: Optional[bool] = None,
need_shipping_address: Optional[bool] = None,
send_phone_number_to_provider: Optional[bool] = None,
send_email_to_provider: Optional[bool] = None,
is_flexible: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[InlineKeyboardMarkup] = None,
) -> SendInvoice:
"""
Answer with invoice
:param title:
:param description:
:param payload:
:param provider_token:
:param start_parameter:
:param currency:
:param prices:
:param provider_data:
:param photo_url:
:param photo_size:
:param photo_width:
:param photo_height:
:param need_name:
:param need_phone_number:
:param need_email:
:param need_shipping_address:
:param send_phone_number_to_provider:
:param send_email_to_provider:
:param is_flexible:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendInvoice
return SendInvoice(
chat_id=self.chat.id,
title=title,
description=description,
payload=payload,
provider_token=provider_token,
start_parameter=start_parameter,
currency=currency,
prices=prices,
provider_data=provider_data,
photo_url=photo_url,
photo_size=photo_size,
photo_width=photo_width,
photo_height=photo_height,
need_name=need_name,
need_phone_number=need_phone_number,
need_email=need_email,
need_shipping_address=need_shipping_address,
send_phone_number_to_provider=send_phone_number_to_provider,
send_email_to_provider=send_email_to_provider,
is_flexible=is_flexible,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_location(
self,
latitude: float,
longitude: float,
live_period: Optional[int] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendLocation:
"""
Reply with location
:param latitude:
:param longitude:
:param live_period:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendLocation
return SendLocation(
chat_id=self.chat.id,
latitude=latitude,
longitude=longitude,
live_period=live_period,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_location(
self,
latitude: float,
longitude: float,
live_period: Optional[int] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendLocation:
"""
Answer with location
:param latitude:
:param longitude:
:param live_period:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendLocation
return SendLocation(
chat_id=self.chat.id,
latitude=latitude,
longitude=longitude,
live_period=live_period,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_media_group(
self,
media: List[Union[InputMediaPhoto, InputMediaVideo]],
disable_notification: Optional[bool] = None,
) -> SendMediaGroup:
"""
Reply with media group
:param media:
:param disable_notification:
:return:
"""
from ..methods import SendMediaGroup
return SendMediaGroup(
chat_id=self.chat.id,
media=media,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
)
def answer_media_group(
self,
media: List[Union[InputMediaPhoto, InputMediaVideo]],
disable_notification: Optional[bool] = None,
) -> SendMediaGroup:
"""
Answer with media group
:param media:
:param disable_notification:
:return:
"""
from ..methods import SendMediaGroup
return SendMediaGroup(
chat_id=self.chat.id,
media=media,
disable_notification=disable_notification,
reply_to_message_id=None,
)
def reply(
self,
text: str,
parse_mode: Optional[str] = UNSET,
disable_web_page_preview: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendMessage:
"""
Reply with text message
:param text:
:param parse_mode:
:param disable_web_page_preview:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendMessage
return SendMessage(
chat_id=self.chat.id,
text=text,
parse_mode=parse_mode,
disable_web_page_preview=disable_web_page_preview,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer(
self,
text: str,
parse_mode: Optional[str] = UNSET,
disable_web_page_preview: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendMessage:
"""
Answer with text message
:param text:
:param parse_mode:
:param disable_web_page_preview:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendMessage
return SendMessage(
chat_id=self.chat.id,
text=text,
parse_mode=parse_mode,
disable_web_page_preview=disable_web_page_preview,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_photo(
self,
photo: Union[InputFile, str],
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendPhoto:
"""
Reply with photo
:param photo:
:param caption:
:param parse_mode:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendPhoto
return SendPhoto(
chat_id=self.chat.id,
photo=photo,
caption=caption,
parse_mode=parse_mode,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_photo(
self,
photo: Union[InputFile, str],
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendPhoto:
"""
Answer with photo
:param photo:
:param caption:
:param parse_mode:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendPhoto
return SendPhoto(
chat_id=self.chat.id,
photo=photo,
caption=caption,
parse_mode=parse_mode,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_poll(
self,
question: str,
options: List[str],
is_anonymous: Optional[bool] = None,
type: Optional[str] = None,
allows_multiple_answers: Optional[bool] = None,
correct_option_id: Optional[int] = None,
explanation: Optional[str] = None,
explanation_parse_mode: Optional[str] = UNSET,
open_period: Optional[int] = None,
close_date: Optional[Union[datetime.datetime, datetime.timedelta, int]] = None,
is_closed: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendPoll:
"""
Reply with poll
:param question:
:param options:
:param is_anonymous:
:param type:
:param allows_multiple_answers:
:param correct_option_id:
:param explanation:
:param explanation_parse_mode:
:param open_period:
:param close_date:
:param is_closed:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendPoll
return SendPoll(
chat_id=self.chat.id,
question=question,
options=options,
is_anonymous=is_anonymous,
type=type,
allows_multiple_answers=allows_multiple_answers,
correct_option_id=correct_option_id,
explanation=explanation,
explanation_parse_mode=explanation_parse_mode,
open_period=open_period,
close_date=close_date,
is_closed=is_closed,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_poll(
self,
question: str,
options: List[str],
is_anonymous: Optional[bool] = None,
type: Optional[str] = None,
allows_multiple_answers: Optional[bool] = None,
correct_option_id: Optional[int] = None,
explanation: Optional[str] = None,
explanation_parse_mode: Optional[str] = UNSET,
open_period: Optional[int] = None,
close_date: Optional[Union[datetime.datetime, datetime.timedelta, int]] = None,
is_closed: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendPoll:
"""
Answer with poll
:param question:
:param options:
:param is_anonymous:
:param type:
:param allows_multiple_answers:
:param correct_option_id:
:param explanation:
:param explanation_parse_mode:
:param open_period:
:param close_date:
:param is_closed:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendPoll
return SendPoll(
chat_id=self.chat.id,
question=question,
options=options,
is_anonymous=is_anonymous,
type=type,
allows_multiple_answers=allows_multiple_answers,
correct_option_id=correct_option_id,
explanation=explanation,
explanation_parse_mode=explanation_parse_mode,
open_period=open_period,
close_date=close_date,
is_closed=is_closed,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_dice(
self,
emoji: Optional[str] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendDice:
"""
Reply with dice
:param emoji:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendDice
return SendDice(
chat_id=self.chat.id,
emoji=emoji,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_dice(
self,
emoji: Optional[str] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendDice:
"""
Answer with dice
:param emoji:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendDice
return SendDice(
chat_id=self.chat.id,
emoji=emoji,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_sticker(
self,
sticker: Union[InputFile, str],
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendSticker:
"""
Reply with sticker
:param sticker:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendSticker
return SendSticker(
chat_id=self.chat.id,
sticker=sticker,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_sticker(
self,
sticker: Union[InputFile, str],
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendSticker:
"""
Answer with sticker
:param sticker:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendSticker
return SendSticker(
chat_id=self.chat.id,
sticker=sticker,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_venue(
self,
latitude: float,
longitude: float,
title: str,
address: str,
foursquare_id: Optional[str] = None,
foursquare_type: Optional[str] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVenue:
"""
Reply with venue
:param latitude:
:param longitude:
:param title:
:param address:
:param foursquare_id:
:param foursquare_type:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVenue
return SendVenue(
chat_id=self.chat.id,
latitude=latitude,
longitude=longitude,
title=title,
address=address,
foursquare_id=foursquare_id,
foursquare_type=foursquare_type,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_venue(
self,
latitude: float,
longitude: float,
title: str,
address: str,
foursquare_id: Optional[str] = None,
foursquare_type: Optional[str] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVenue:
"""
Answer with venue
:param latitude:
:param longitude:
:param title:
:param address:
:param foursquare_id:
:param foursquare_type:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVenue
return SendVenue(
chat_id=self.chat.id,
latitude=latitude,
longitude=longitude,
title=title,
address=address,
foursquare_id=foursquare_id,
foursquare_type=foursquare_type,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_video(
self,
video: Union[InputFile, str],
duration: Optional[int] = None,
width: Optional[int] = None,
height: Optional[int] = None,
thumb: Optional[Union[InputFile, str]] = None,
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
supports_streaming: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVideo:
"""
Reply with video
:param video:
:param duration:
:param width:
:param height:
:param thumb:
:param caption:
:param parse_mode:
:param supports_streaming:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVideo
return SendVideo(
chat_id=self.chat.id,
video=video,
duration=duration,
width=width,
height=height,
thumb=thumb,
caption=caption,
parse_mode=parse_mode,
supports_streaming=supports_streaming,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_video(
self,
video: Union[InputFile, str],
duration: Optional[int] = None,
width: Optional[int] = None,
height: Optional[int] = None,
thumb: Optional[Union[InputFile, str]] = None,
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
supports_streaming: Optional[bool] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVideo:
"""
Answer with video
:param video:
:param duration:
:param width:
:param height:
:param thumb:
:param caption:
:param parse_mode:
:param supports_streaming:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVideo
return SendVideo(
chat_id=self.chat.id,
video=video,
duration=duration,
width=width,
height=height,
thumb=thumb,
caption=caption,
parse_mode=parse_mode,
supports_streaming=supports_streaming,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_video_note(
self,
video_note: Union[InputFile, str],
duration: Optional[int] = None,
length: Optional[int] = None,
thumb: Optional[Union[InputFile, str]] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVideoNote:
"""
Reply with video note
:param video_note:
:param duration:
:param length:
:param thumb:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVideoNote
return SendVideoNote(
chat_id=self.chat.id,
video_note=video_note,
duration=duration,
length=length,
thumb=thumb,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_video_note(
self,
video_note: Union[InputFile, str],
duration: Optional[int] = None,
length: Optional[int] = None,
thumb: Optional[Union[InputFile, str]] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVideoNote:
"""
Answer with video note
:param video_note:
:param duration:
:param length:
:param thumb:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVideoNote
return SendVideoNote(
chat_id=self.chat.id,
video_note=video_note,
duration=duration,
length=length,
thumb=thumb,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
def reply_voice(
self,
voice: Union[InputFile, str],
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
duration: Optional[int] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVoice:
"""
Reply with voice
:param voice:
:param caption:
:param parse_mode:
:param duration:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVoice
return SendVoice(
chat_id=self.chat.id,
voice=voice,
caption=caption,
parse_mode=parse_mode,
duration=duration,
disable_notification=disable_notification,
reply_to_message_id=self.message_id,
reply_markup=reply_markup,
)
def answer_voice(
self,
voice: Union[InputFile, str],
caption: Optional[str] = None,
parse_mode: Optional[str] = UNSET,
duration: Optional[int] = None,
disable_notification: Optional[bool] = None,
reply_markup: Optional[
Union[InlineKeyboardMarkup, ReplyKeyboardMarkup, ReplyKeyboardRemove, ForceReply]
] = None,
) -> SendVoice:
"""
Answer with voice
:param voice:
:param caption:
:param parse_mode:
:param duration:
:param disable_notification:
:param reply_markup:
:return:
"""
from ..methods import SendVoice
return SendVoice(
chat_id=self.chat.id,
voice=voice,
caption=caption,
parse_mode=parse_mode,
duration=duration,
disable_notification=disable_notification,
reply_to_message_id=None,
reply_markup=reply_markup,
)
class ContentType(helper.Helper):
mode = helper.HelperMode.snake_case
TEXT = helper.Item() # text
AUDIO = helper.Item() # audio
DOCUMENT = helper.Item() # document
ANIMATION = helper.Item() # animation
GAME = helper.Item() # game
PHOTO = helper.Item() # photo
STICKER = helper.Item() # sticker
VIDEO = helper.Item() # video
VIDEO_NOTE = helper.Item() # video_note
VOICE = helper.Item() # voice
CONTACT = helper.Item() # contact
LOCATION = helper.Item() # location
VENUE = helper.Item() # venue
NEW_CHAT_MEMBERS = helper.Item() # new_chat_members
LEFT_CHAT_MEMBER = helper.Item() # left_chat_member
INVOICE = helper.Item() # invoice
SUCCESSFUL_PAYMENT = helper.Item() # successful_payment
CONNECTED_WEBSITE = helper.Item() # connected_website
MIGRATE_TO_CHAT_ID = helper.Item() # migrate_to_chat_id
MIGRATE_FROM_CHAT_ID = helper.Item() # migrate_from_chat_id
PINNED_MESSAGE = helper.Item() # pinned_message
NEW_CHAT_TITLE = helper.Item() # new_chat_title
NEW_CHAT_PHOTO = helper.Item() # new_chat_photo
DELETE_CHAT_PHOTO = helper.Item() # delete_chat_photo
GROUP_CHAT_CREATED = helper.Item() # group_chat_created
PASSPORT_DATA = helper.Item() # passport_data
POLL = helper.Item() # poll
DICE = helper.Item() # dice
UNKNOWN = helper.Item() # unknown
ANY = helper.Item() # any
| 32.52799 | 98 | 0.609379 | 5,152 | 51,134 | 5.846856 | 0.066188 | 0.085782 | 0.032932 | 0.03499 | 0.781894 | 0.767785 | 0.757262 | 0.757262 | 0.75288 | 0.749759 | 0 | 0.000711 | 0.31261 | 51,134 | 1,571 | 99 | 32.548695 | 0.856297 | 0.123968 | 0 | 0.737787 | 0 | 0 | 0.000108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034895 | false | 0.004985 | 0.068794 | 0 | 0.248255 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# -*- coding: utf-8 -*-
"""
The included algorithms work optimally when they have pre-trusted peers to
defer to.
If your test case does something like rendering all peers unlikeable, you may
want to set your good peers up with some pre-trusted peers.
Scenario two is the most useful for seeing the kind of conditions that cause
peers to achieve consensus about the maliciousness of their peers.
This can be produced with the following:
./eigentrust.py -v -n20 -p2 -t10000 -s two > `date "+%d-%m-%Y-%H%M%S%N"`.log
Real-world networks are unlikely to begin with 20 users so it's advised to test
new algorithms with low node counts and high iteration counts.
"""
import utils
import random
def scenario_one(options):
"""
Half of the population are good peers.
Pre-trusted peers are selected from within the set of good peers though
this can be made to overextend by setting |P| > (|nodes| / 2).
Makes for an uncomplicated calculate_trust() computation.
"""
routers = utils.generate_routers(options, minimum=4)
good_routers = routers[:len(routers) // 2]
bad_routers = routers[len(routers) // 2:]
[setattr(_, "probably_malicious", True) for _ in bad_routers]
utils.introduce(good_routers)
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in good_routers]
utils.introduce(bad_routers)
utils.introduce(good_routers, bad_routers)
# Note that this is based on a definite transaction count but that it's
# through a random transaction count that the distributed trust algorithm
# can be used to detect malicious peers via the set of pre-trusted peers.
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in routers:
for peer in router:
c = random.randint(0, 1)
if options.verbose:
utils.log("%s is making %i transactions with %s." % (router, c, peer))
[router.transact_with(peer) for i in range(c)]
# Calculate trust every 5 rounds here. The periodicity in reality is a
# function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
# The return value of a scenario is used to populate "locals" in the event
# that you choose to use the --repl flag to spawn an interactive interpreter.
return {"routers": routers}
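The `_ > 1 and not (_+1) % 5` guard above (repeated in every scenario) fires on every fifth iteration, starting at iteration 4. A standalone check of exactly which rounds trigger a trust recalculation:

```python
def sensing_rounds(transactions):
    # Iterations on which the scenarios above call calculate_trust():
    # "_ > 1" skips the warm-up rounds, and "(_ + 1) % 5 == 0" selects
    # every fifth iteration thereafter.
    return [_ for _ in range(transactions) if _ > 1 and not (_ + 1) % 5]

print(sensing_rounds(20))  # -> [4, 9, 14, 19]
```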
def scenario_two(options):
"""
Half of the population are good peers.
Pre-trusted peers are selected from within the set of good peers, though
this can be made to overextend by setting |P| > (|nodes| / 2).
A mix of new peers are introduced every 1/5th of the iteration count.
Good peers have a 1 in 250 chance of receiving negative feedback from other
good peers.
This scenario has the highest likelihood of exhibiting consensus events.
"""
routers = utils.generate_routers(options, minimum=2)
good_routers = routers[:len(routers) // 2]
bad_routers = routers[len(routers) // 2:]
[setattr(_, "probably_malicious", True) for _ in bad_routers]
utils.introduce(good_routers)
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in good_routers]
utils.introduce(bad_routers)
utils.introduce(good_routers, bad_routers)
# Note that this is based on a definite transaction count but it's through a
# random transaction count with the possibility of some peers not transacting
# with some of their peers at all that the distributed trust algorithm can be
# used to detect malicious peers via the set of pre-trusted peers alone.
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in routers:
for peer in router:
if not random.randint(0, 1): continue
if not router.probably_malicious and not peer.router.probably_malicious:
if peer.trust and random.randint(0, 250) == 1:
utils.log("Good peer %s is having a bad transaction with good peer %s." % \
(router.node, peer))
router.transact_with(peer, transaction_type=False)
continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
# Introduce a mix of new peers every 1/5th of the iteration count
if _ > 5 and not _ % (options.transactions // 5):
new_good_routers = utils.generate_routers(options, maximum=random.randint(1, 3))
new_bad_routers = utils.generate_routers(options,
maximum=random.randint(1, 3),
attrs={'probably_malicious': True})
routers.extend(new_good_routers)
routers.extend(new_bad_routers)
[setattr(r, "routers", routers) for r in routers]
utils.introduce(new_good_routers, random.sample(routers,
random.choice(range(2, len(routers)))))
utils.introduce(new_bad_routers, random.sample(routers,
random.choice(range(2, len(routers)))))
for r in new_good_routers:
utils.log("Introduced %s %s into the system." % (r, r.node))
for r in new_bad_routers:
utils.log("Introduced %s %s into the system." % (r, r.node))
return {"routers": routers}
def scenario_three(options):
"""
Most of the population are good peers.
Pre-trusted peers are maximally deflationary.
A mix of new peers are introduced every 1/5th of the iteration count.
Good peers have a 1 in 250 chance of receiving negative feedback from other
good peers.
"""
class EvilRouter(utils.Router):
def __init__(self):
utils.Router.__init__(self)
self.probably_malicious = False
def render_peers(self):
response = []
for peer in self.peers:
data = peer.jsonify()
low = 0.5 - (data['transactions'] * self.node.epsilon)
data['trust'] = random.choice([low, 0])
response.append(data)
return response
routers = []
good_routers = utils.generate_routers(options, minimum=4)
bad_routers = utils.generate_routers(options, minimum=1,
maximum=options.pre_trusted,
router_class=EvilRouter)
routers.extend(good_routers)
routers.extend(bad_routers)
[setattr(r, "routers", routers) for r in routers]
utils.introduce(routers)
# Good peers pre-trust the (maximally deflationary) evil routers.
[r.tbucket.append(_) for r in good_routers for _ in r.peers
 if _.router.__class__.__name__ == "EvilRouter"]
# Note that this is based on a definite transaction count but it's through a
# random transaction count with the possibility of some peers not transacting
# with some of their peers at all that the distributed trust algorithm can be
# used to detect malicious peers via the set of pre-trusted peers alone.
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in routers:
for peer in router:
if not random.randint(0, 1): continue
if not router.probably_malicious and not peer.router.probably_malicious:
if random.randint(0, 250) == 1:
utils.log("Good peer %s is having a bad transaction with good peer %s." % \
(router.node, peer))
router.transact_with(peer, transaction_type=False)
continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
# Introduce a mix of new peers every 1/5th of the iteration count
if _ > 5 and not _ % (options.transactions // 5):
new_good_routers = utils.generate_routers(options,
maximum=random.randint(1, 3))
new_bad_routers = utils.generate_routers(options,
maximum=random.randint(1, 3),
attrs={'probably_malicious': True})
routers.extend(new_good_routers)
routers.extend(new_bad_routers)
[setattr(r, "routers", routers) for r in routers]
utils.introduce(new_good_routers, random.sample(good_routers,
random.choice(range(2, 6))))
utils.introduce(new_bad_routers, random.sample(good_routers,
random.choice(range(2, 6))))
for r in new_good_routers:
utils.log("Introduced %s %s into the system." % (r, r.node))
for r in new_bad_routers:
utils.log("Introduced %s %s into the system." % (r, r.node))
return {"routers": routers}
def scenario_four(options):
"""
There are no malicious peers.
A mix of new peers are introduced every 1/5th of the iteration count.
Peers have a 1 in 250 chance of receiving negative feedback from each other.
This is to mimic a real-life system with growth from a small number of
initial users.
"""
routers = utils.generate_routers(options, minimum=2)
utils.introduce(routers)
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in routers]
# Note that this is based on a definite transaction count but it's through a
# random transaction count with the possibility of some peers not transacting
# with some of their peers at all that the distributed trust algorithm can be
# used to detect malicious peers via the set of pre-trusted peers alone.
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in routers:
for peer in router:
if not random.randint(0, 1): continue
if not router.probably_malicious and not peer.router.probably_malicious:
if peer.trust and random.randint(0, 250) == 1:
utils.log("Peer %s is having a bad transaction with %s." % \
(router.node, peer))
router.transact_with(peer, transaction_type=False)
continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
# Introduce a mix of new peers every 1/5th of the iteration count
if _ > 5 and not _ % (options.transactions // 5):
new_routers = utils.generate_routers(options, maximum=random.randint(1, 3))
routers.extend(new_routers)
[setattr(r, "routers", routers) for r in routers]
utils.introduce(new_routers, random.sample(routers,
random.choice(range(2, len(routers)))))
for r in new_routers:
utils.log("Introduced %s %s into the system." % (r, r.node))
return {"routers": routers}
def threat_model_a(options):
"""
Independently malicious peers who are not initially aware of each other.
"""
routers = utils.generate_routers(options, minimum=10)
[setattr(r, "probably_malicious", True) for r in routers]
good_peer = utils.Router()
[r.routers.append(good_peer) for r in routers]
good_peer.routers = routers
utils.introduce(good_peer, routers)
routers.insert(0, good_peer)
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in routers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
return {"routers": routers}
def threat_model_b(options):
"""
Chain of Malicious Collectives who know each other upfront and
deterministically give a high trust value to another malicious peer.
Resembles a malicious chain of mutual high local trust values.
"""
class EvilRouter(utils.Router):
def __init__(self):
utils.Router.__init__(self)
self.probably_malicious = True
def render_peers(self):
response = []
for peer in self.peers:
data = peer.jsonify()
if any(r.node == peer for r in self.collective):
data['trust'] = peer.transactions * self.node.epsilon
response.append(data)
return response
routers = utils.generate_routers(options, minimum=7, router_class=EvilRouter)
good_peers = utils.generate_routers(options, minimum=3)
[setattr(r, "collective", routers) for r in routers]
all_routers = []
all_routers.extend(good_peers)
all_routers.extend(routers)
[setattr(r, "routers", all_routers) for r in routers]
[setattr(r, "routers", all_routers) for r in good_peers]
utils.introduce(routers)
utils.introduce(good_peers)
# Set good peers up with some pre-trusted friends
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in good_peers]
divisor = 1 if options.nodes == 1 else 2
utils.introduce(good_peers, random.sample(routers, len(routers) // divisor))
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in all_routers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
return {"routers": all_routers}
def threat_model_c(options):
"""
Malicious Collectives with camouflage.
Malicious peers try to earn high local trust from good peers by providing
authentic services in f% of all cases.
"""
class EvilRouter(utils.Router):
def __init__(self):
utils.Router.__init__(self)
self.probably_malicious = True
self.counter = 0
self.f = 0.2 # out of 1.0.
self.responses = [0, 0] # [negative, positive]
@property
def malicious(self):
self.counter += 1
if self.counter >= 100: self.counter = 0
if self.counter <= max(int(100 * self.f), 1):
self.responses[0] += 1
return True
self.responses[1] += 1
return False
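The camouflage counter above can be sketched standalone (hypothetical name `Camouflage`; the `responses` bookkeeping is omitted). Note that the reset-to-0 on the 100th call also satisfies the threshold, so with `f = 0.2` the peer actually misbehaves 21 times per 100 calls, slightly above f:

```python
# Minimal sketch of EvilRouter's camouflage logic: malicious for the
# first max(int(100*f), 1) calls of each 100-call window, plus the
# reset call itself (counter == 0 also passes the <= threshold).
class Camouflage(object):
    def __init__(self, f=0.2):
        self.f = f
        self.counter = 0

    def malicious(self):
        self.counter += 1
        if self.counter >= 100:
            self.counter = 0
        return self.counter <= max(int(100 * self.f), 1)

c = Camouflage(f=0.2)
print(sum(c.malicious() for _ in range(100)))  # 21
```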
bad_peers = utils.generate_routers(options, minimum=10, router_class=EvilRouter)
good_peers = utils.generate_routers(options, minimum=5)
routers = []
routers.extend(bad_peers)
routers.extend(good_peers)
[setattr(r, "routers", routers) for r in bad_peers]
[setattr(r, "routers", routers) for r in good_peers]
utils.introduce(good_peers)
utils.introduce(bad_peers)
# Configure pre-trusted peers
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in good_peers]
utils.introduce(good_peers, random.sample(bad_peers, options.nodes))
transactions = max(options.transactions, 100)
utils.log("Emulating %s transactions with each peer." % \
"{:,}".format(transactions))
for _ in range(transactions):
for router in routers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
for router in bad_peers:
utils.log("%s %i negative transactions, %i positive." % \
(router, router.responses[0], router.responses[1]))
return {"routers": routers}
def threat_model_d(options):
"""
Malicious peers who are strategically organised into two groups.
One group acts as normal peers, providing only good services to raise
its global reputation, which it then uses to boost the trust values of
the second, openly malicious group.
"""
class AccompliceRouter(utils.Router):
def render_peers(self):
response = []
for peer in self.peers:
data = peer.jsonify()
if any(r.node == peer for r in self.collective):
data['trust'] = 0.5 + (peer.transactions * \
self.node.epsilon)
response.append(data)
return response
class EvilRouter(utils.Router):
def __init__(self):
utils.Router.__init__(self)
self.probably_malicious = True
def render_peers(self):
response = []
for peer in self.peers:
data = peer.jsonify()
if any(r.node == peer for r in self.collective):
data['trust'] = max(0.5 + (peer.transactions * \
self.node.epsilon), 0.5)
response.append(data)
return response
bad_peers = utils.generate_routers(options, minimum=10,
router_class=EvilRouter)
accomplice_peers = utils.generate_routers(options, minimum=10,
router_class=AccompliceRouter)
good_peers = utils.generate_routers(options, minimum=20)
routers = []
routers.extend(bad_peers)
routers.extend(accomplice_peers)
routers.extend(good_peers)
[setattr(r, "collective", bad_peers) for r in bad_peers]
[setattr(r, "collective", bad_peers) for r in accomplice_peers]
[setattr(r, "routers", routers) for r in routers]
utils.introduce(routers)
# Set good peers up with some pre-trusted friends
[_.tbucket.append(random.sample(_.peers, options.pre_trusted)) for _ in good_peers]
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in good_peers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Accomplice routers work by doubling the trust rating of
# peers in the collective, which necessitates some good transactions
for router in routers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
return {"routers": routers}
def threat_model_e(options):
"""
Sybil attack. A hundred malicious peers who only provide bad services,
who're then replaced with a new similarly malicious identity once contacted
by good peers.
"""
bad_peers = utils.generate_routers(options, minimum=100)
good_peers = utils.generate_routers(options, minimum=100)
[setattr(r, "probably_malicious", True) for r in bad_peers]
routers = []
routers.extend(bad_peers)
routers.extend(good_peers)
[setattr(r, "routers", routers) for r in bad_peers]
[setattr(r, "routers", routers) for r in good_peers]
utils.introduce(bad_peers)
utils.introduce(good_peers)
# Set good peers up with some pre-trusted friends
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in good_peers]
divisor = 1 if options.nodes == 1 else 2
utils.introduce(good_peers, random.sample(bad_peers, min(len(routers) / divisor, len(bad_peers))))
utils.log("Emulating %s iterations of transactions with all peers." % \
"{:,}".format(options.transactions))
for _ in range(options.transactions):
for router in good_peers:
for peer in router.peers:
if not random.randint(0, 1): continue
positive_transaction = router.transact_with(peer)
if positive_transaction is False:
router.dereference(peer, and_router=True)
new_router = utils.Router()
new_router.probably_malicious = True
utils.introduce(router, new_router)
# Let all routers, good and malicious alike, transact with their peers
for router in routers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
return {"routers": routers}
def threat_model_f(options):
"""
Virus-disseminating peers who send one inauthentic, virus-infected file
every 100th request.
"""
class EvilRouter(utils.Router):
def __init__(self):
utils.Router.__init__(self)
self.probably_malicious = True
self.counter = 0
@property
def malicious(self):
self.counter += 1
return not self.counter % 100
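The 1-in-100 rule above can be exercised standalone (hypothetical name `VirusCounter`): only every 100th request returns malicious.

```python
# Sketch of EvilRouter's dissemination rule: malicious exactly when the
# request counter is a multiple of 100.
class VirusCounter(object):
    def __init__(self):
        self.counter = 0

    def malicious(self):
        self.counter += 1
        return not self.counter % 100

v = VirusCounter()
hits = [i + 1 for i in range(300) if v.malicious()]
print(hits)  # [100, 200, 300]
```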
bad_peers = utils.generate_routers(options, minimum=10,
router_class=EvilRouter)
good_peers = utils.generate_routers(options, minimum=5)
routers = []
routers.extend(bad_peers)
routers.extend(good_peers)
[setattr(r, "routers", routers) for r in bad_peers]
[setattr(r, "routers", routers) for r in good_peers]
utils.introduce(good_peers)
utils.introduce(bad_peers)
# It's at this point that you want to set up your pre-trusted peers
[_.tbucket.append(_.peers[:options.pre_trusted]) for _ in good_peers]
# and then some not so trustworthy peers
utils.introduce(good_peers, random.sample(bad_peers, options.nodes))
# Since our EvilRouter only does its thing once every hundred transactions
# we're going to define a minimum transaction count of 1,000 in this case.
transactions = max(options.transactions, 1000)
utils.log("Emulating %s transactions with each peer." % \
"{:,}".format(transactions))
for _ in range(transactions):
# All routers transact with their peers; EvilRouter serves an infected
# file once every 100th request
for router in routers:
for peer in router.peers:
if not random.randint(0, 1): continue
router.transact_with(peer)
# Calculate trust every 5 rounds here. Normally the periodicity would be
# a function of network size.
if _ > 1 and not (_+1) % 5:
for i, router in enumerate(routers):
utils.log("%i %s %s is sensing." % (i+1, router, router.node))
router.tbucket.calculate_trust()
return {"routers": routers}
map = {
"one": scenario_one,
"two": scenario_two,
"three": scenario_three,
"four": scenario_four,
"A": threat_model_a,
"B": threat_model_b,
"C": threat_model_c,
"D": threat_model_d,
"E": threat_model_e,
"F": threat_model_f
}
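The table above is a plain dispatch dict from scenario name to handler (note the module-level name `map` shadows the builtin). A minimal standalone sketch of the same pattern, with stand-in handlers:

```python
# Dispatch-table sketch: look the handler up by name and call it.
# `scenario_stub` is a stand-in for the real scenario functions.
def scenario_stub(options):
    return {"routers": []}

handlers = {"A": scenario_stub, "B": scenario_stub}

def dispatch(name, options=None):
    return handlers[name](options)

print(dispatch("A"))  # {'routers': []}
```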

# autonomous_systems_project/callbacks/common.py
class Callback:
def __call__(self):
pass
def close(self):
pass

# bubbly/__init__.py
import logging
logging.getLogger(__name__).setLevel(logging.DEBUG)
logging.getLogger(__name__).addHandler(logging.StreamHandler())

# snuba/migrations/snuba_migrations/querylog/0003_add_profile_fields.py
from typing import Sequence
from snuba.clickhouse.columns import Array, Column, String, UInt
from snuba.clusters.storage_sets import StorageSetKey
from snuba.migrations import migration, operations
from snuba.migrations.columns import MigrationModifiers as Modifiers
class Migration(migration.MultiStepMigration):
"""
Adds fields for query profile.
"""
blocking = True
def __forward_migrations(self, table_name: str) -> Sequence[operations.Operation]:
return [
operations.AddColumn(
storage_set=StorageSetKey.QUERYLOG,
table_name=table_name,
column=Column(
"clickhouse_queries.all_columns",
Array(
Array((String(Modifiers(low_cardinality=True)))),
Modifiers(
default="arrayResize([['']], length(clickhouse_queries.sql))"
),
),
),
after="clickhouse_queries.consistent",
),
operations.AddColumn(
storage_set=StorageSetKey.QUERYLOG,
table_name=table_name,
column=Column(
"clickhouse_queries.or_conditions",
Array(
UInt(8),
Modifiers(
default="arrayResize([0], length(clickhouse_queries.sql))"
),
),
),
after="clickhouse_queries.all_columns",
),
operations.AddColumn(
storage_set=StorageSetKey.QUERYLOG,
table_name=table_name,
column=Column(
"clickhouse_queries.where_columns",
Array(
Array(String(Modifiers(low_cardinality=True))),
Modifiers(
default="arrayResize([['']], length(clickhouse_queries.sql))"
),
),
),
after="clickhouse_queries.or_conditions",
),
operations.AddColumn(
storage_set=StorageSetKey.QUERYLOG,
table_name=table_name,
column=Column(
"clickhouse_queries.where_mapping_columns",
Array(
Array(String(Modifiers(low_cardinality=True))),
Modifiers(
default="arrayResize([['']], length(clickhouse_queries.sql))"
),
),
),
after="clickhouse_queries.where_columns",
),
operations.AddColumn(
storage_set=StorageSetKey.QUERYLOG,
table_name=table_name,
column=Column(
"clickhouse_queries.groupby_columns",
Array(
Array(String(Modifiers(low_cardinality=True))),
Modifiers(
default="arrayResize([['']], length(clickhouse_queries.sql))"
),
),
),
after="clickhouse_queries.where_mapping_columns",
),
operations.AddColumn(
storage_set=StorageSetKey.QUERYLOG,
table_name=table_name,
column=Column(
"clickhouse_queries.array_join_columns",
Array(
Array(String(Modifiers(low_cardinality=True))),
Modifiers(
default="arrayResize([['']], length(clickhouse_queries.sql))"
),
),
),
after="clickhouse_queries.groupby_columns",
),
]
def __backwards_migrations(self, table_name: str) -> Sequence[operations.Operation]:
return [
operations.DropColumn(
StorageSetKey.QUERYLOG, table_name, "clickhouse_queries.all_columns"
),
operations.DropColumn(
StorageSetKey.QUERYLOG, table_name, "clickhouse_queries.or_conditions"
),
operations.DropColumn(
StorageSetKey.QUERYLOG, table_name, "clickhouse_queries.where_columns"
),
operations.DropColumn(
StorageSetKey.QUERYLOG,
table_name,
"clickhouse_queries.where_mapping_columns",
),
operations.DropColumn(
StorageSetKey.QUERYLOG, table_name, "clickhouse_queries.groupby_columns"
),
operations.DropColumn(
StorageSetKey.QUERYLOG,
table_name,
"clickhouse_queries.array_join_columns",
),
]
def forwards_local(self) -> Sequence[operations.Operation]:
return self.__forward_migrations("querylog_local")
def backwards_local(self) -> Sequence[operations.Operation]:
return self.__backwards_migrations("querylog_local")
def forwards_dist(self) -> Sequence[operations.Operation]:
return self.__forward_migrations("querylog_dist")
def backwards_dist(self) -> Sequence[operations.Operation]:
return self.__backwards_migrations("querylog_dist")

# tests/models/torchvision_models/faster_rcnn/test_backbones.py
import pytest
from icevision.all import *
from icevision.models.torchvision import faster_rcnn
@pytest.mark.skip
@pytest.mark.parametrize(
"model_name,param_groups_len",
(
("resnet101_fpn", 8),
("resnet152_fpn", 8),
("resnext101_32x8d_fpn", 8),
("wide_resnet101_2_fpn", 8),
),
)
def test_faster_rcnn_fpn_backbones_large(model_name, param_groups_len):
backbone_fn = getattr(models.torchvision.faster_rcnn.backbones, model_name)
backbone = backbone_fn(pretrained=False)
model = faster_rcnn.model(num_classes=4, backbone=backbone)
assert len(model.param_groups()) == param_groups_len
@pytest.mark.skip
@pytest.mark.parametrize(
"model_name,param_groups_len",
(
("resnet34_fpn", 8),
("resnet50_fpn", 8),
("resnext50_32x4d_fpn", 8),
("wide_resnet50_2_fpn", 8),
),
)
def test_faster_rcnn_fpn_backbones_medium(model_name, param_groups_len):
backbone_fn = getattr(models.torchvision.faster_rcnn.backbones, model_name)
backbone = backbone_fn(pretrained=False)
model = faster_rcnn.model(num_classes=4, backbone=backbone)
assert len(model.param_groups()) == param_groups_len
@pytest.mark.parametrize(
"model_name,param_groups_len",
(("resnet18_fpn", 8),),
)
def test_faster_rcnn_fpn_backbones_small(model_name, param_groups_len):
backbone_fn = getattr(models.torchvision.faster_rcnn.backbones, model_name)
backbone = backbone_fn(pretrained=False)
model = faster_rcnn.model(num_classes=4, backbone=backbone)
assert len(model.param_groups()) == param_groups_len

# aiomatrix/types/events/modules/instant_messaging/notice.py
from .text import TextContent
class NoticeContent(TextContent):
pass

# Testing Phase/Testing Against Classifiers-ML/Pickle Dataset/train.py
import pickle
import bz2
import cPickle
import classifiers
import warnings
import numpy
data_path_train = r'S:\Webpage fingerprinter\myCode\compressed_pickle_train.pbz2'
data_path_train_her = r'S:\Webpage fingerprinter\myCode\compressed_pickle_train_herman.pbz2'
data_path_train_inv = r'S:\Webpage fingerprinter\myCode\compressed_pickle_train_inverted.pbz2'
#data_path_test = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test.pbz2'
#data_path_test_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_herman.pbz2'
#data_path_test_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_inverted.pbz2'
############ this data is not good ####################################################################################
#data_path_test_waggr7flows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr7flows.pbz2'
#data_path_test_waggr7flows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr7flows_inverted.pbz2'
#data_path_test_waggr7flows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr7flows_herman.pbz2'
########################################################################################################################
#data_path_test_wsplitflows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_wsplitflows.pbz2'
#data_path_test_wsplitflows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_wsplitflows_inverted.pbz2'
#data_path_test_wsplitflows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_wsplitflows_herman.pbz2'
#data_path_test_wsplit_oneinterface_flows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_wsplitflows_oneinterface.pbz2'
#data_path_test_wsplit_oneinterface_flows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_wsplitflows_oneinterface_inv.pbz2'
#data_path_test_wsplit_oneinterface_flows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_wsplitflows_oneinterface_her.pbz2'
#data_path_test_waggr2flows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr2flows.pbz2'
#data_path_test_waggr2flows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr2flows_inv.pbz2'
#data_path_test_waggr2flows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr2flows_her.pbz2'
#data_path_test_waggr5flows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr5flows.pbz2'
#data_path_test_waggr5flows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr5flows_inv.pbz2'
#data_path_test_waggr5flows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr5flows_her.pbz2'
#data_path_test_waggr2_RANDflows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr2_RANDflows.pbz2'
#data_path_test_waggr2_RANDflows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr2_RANDflows_inv.pbz2'
#data_path_test_waggr2_RANDflows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr2_RANDflows_her.pbz2'
#data_path_test_waggr5_RANDflows = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr5_RANDflows.pbz2'
#data_path_test_waggr5_RANDflows_inv = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr5_RANDflows_inv.pbz2'
#data_path_test_waggr5_RANDflows_her = 'S:\Webpage fingerprinter\myCode\compressed_pickle_test_waggr5_RANDflows_her.pbz2'
results_file = "results_all"
def read_data_from_pickles(path_):
data_ = bz2.BZ2File(path_, 'rb')
data_ = cPickle.load(data_)
X_flows, labels = [], []
for label, flow in data_:
#print(len(flow))
X_flows.append(flow)
labels.append(label)
return X_flows, labels
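The on-disk format read above is a bz2-compressed pickle of `(label, flow)` pairs. A round-trip sketch, assuming modern Python 3 where `pickle` replaces the py2-only `cPickle` used in this script:

```python
# Write and read back a compressed (label, flow) dataset, mirroring
# read_data_from_pickles above. `write_compressed`/`read_compressed`
# are hypothetical helper names.
import bz2
import os
import pickle
import tempfile

def write_compressed(path, records):
    with bz2.BZ2File(path, 'wb') as f:
        pickle.dump(records, f)

def read_compressed(path):
    with bz2.BZ2File(path, 'rb') as f:
        data = pickle.load(f)
    flows, labels = [], []
    for label, flow in data:
        flows.append(flow)
        labels.append(label)
    return flows, labels

path = os.path.join(tempfile.mkdtemp(), 'sample.pbz2')
write_compressed(path, [('site_a', [1500, -52]), ('site_b', [60, -1500])])
flows, labels = read_compressed(path)
print(labels)  # ['site_a', 'site_b']
```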
def train_it():
results = {}
results['my dataset'] = {}
# train and test normal traffic for everything else
x_train_flows, y_train_labels = read_data_from_pickles(data_path_train)
'''remove comments here to test normal data without flows (for all other classifiers except herrmann and Panchenko16'''
#x_test_flows, y_test_labels = read_data_from_pickles(data_path_test) #data_path_test
# train and test normal traffic for Herrmann
xh_train_flows, yh_train_labels = read_data_from_pickles(data_path_train_her)
'''remove comments here to test normal data without flows collected for the Herrmann classifier'''
#xh_test_flows, yh_test_labels = read_data_from_pickles(data_path_test_her)
# train and test normal traffic for Panchenko
xinv_train_flows, yinv_train_labels = read_data_from_pickles(data_path_train_inv)
'''remove comments here to test normal data without flows collected by inverting direction (for Panchenko 16)'''
#xinv_test_flows, yinv_test_labels = read_data_from_pickles(data_path_test_inv)
'''remove comments here to test with aggregation-7 flows data not good'''
#xaggr7flows_test_flows, yaggr7flows_test_labels = read_data_from_pickles(data_path_test_waggr7flows)
#xaggr7flowsinv_test_flows, yaggr7flowsinv_test_labels = read_data_from_pickles(data_path_test_waggr7flows_inv)
#xaggr7flowsh_test_flows, yaggr7flowsh_test_labels = read_data_from_pickles(data_path_test_waggr7flows_her)
'''remove comments here to test with split flows collected on all interfaces'''
#xsplitflows_test_flows, ysplitflows_test_labels = read_data_from_pickles(data_path_test_wsplitflows)
#xsplitflowsinv_test_flows, ysplitflowsinv_test_labels = read_data_from_pickles(data_path_test_wsplitflows_inv)
#xsplitflowsh_test_flows, ysplitflowsh_test_labels = read_data_from_pickles(data_path_test_wsplitflows_her)
'''remove comments here to test with split flows collected on one interface'''
#xsplit_oneinterface_flows_test_flows, ysplit_oneinterface_flows_test_labels = read_data_from_pickles(data_path_test_wsplit_oneinterface_flows)
#xsplit_oneinterface_flowsinv_test_flows, ysplit_oneinterface_flowsinv_test_labels = read_data_from_pickles(data_path_test_wsplit_oneinterface_flows_inv)
#xsplit_oneinterface_flowsh_test_flows, ysplit_oneinterface_flowsh_test_labels = read_data_from_pickles(data_path_test_wsplit_oneinterface_flows_her)
'''remove comments here to test with aggregation-2 flows'''
#xaggr2flows_test_flows, yaggr2flows_test_labels = read_data_from_pickles(data_path_test_waggr2flows)
#xaggr2flowsinv_test_flows, yaggr2flowsinv_test_labels = read_data_from_pickles(data_path_test_waggr2flows_inv)
#xaggr2flowsh_test_flows, yaggr2flowsh_test_labels = read_data_from_pickles(data_path_test_waggr2flows_her)
'''remove comments here to test with aggregation-5 flows'''
#xaggr5flows_test_flows, yaggr5flows_test_labels = read_data_from_pickles(data_path_test_waggr5flows)
#xaggr5flowsinv_test_flows, yaggr5flowsinv_test_labels = read_data_from_pickles(data_path_test_waggr5flows_inv)
#xaggr5flowsh_test_flows, yaggr5flowsh_test_labels = read_data_from_pickles(data_path_test_waggr5flows_her)
'''remove comments here to test with aggregation-2 flows with random traffic'''
#xaggr2_RANDflows_test_flows, yaggr2_RANDflows_test_labels = read_data_from_pickles(data_path_test_waggr2_RANDflows)
#xaggr2_RANDflowsinv_test_flows, yaggr2_RANDflowsinv_test_labels = read_data_from_pickles(data_path_test_waggr2_RANDflows_inv)
#xaggr2_RANDflowsh_test_flows, yaggr2_RANDflowsh_test_labels = read_data_from_pickles(data_path_test_waggr2_RANDflows_her)
'''remove comments here to test with aggregation-5 flows with random traffic'''
#xaggr5_RANDflows_test_flows, yaggr5_RANDflows_test_labels = read_data_from_pickles(data_path_test_waggr5_RANDflows)
#xaggr5_RANDflowsinv_test_flows, yaggr5_RANDflowsinv_test_labels = read_data_from_pickles(data_path_test_waggr5_RANDflows_inv)
#xaggr5_RANDflowsh_test_flows, yaggr5_RANDflowsh_test_labels = read_data_from_pickles(data_path_test_waggr5_RANDflows_her)
'''
####################################################################
## THE CODE BELOW NEEDS NORMAL DATA TO FUNCTION
## TRAINING AND TESTING ON NON AGGREGATED FLOWS
#####################################################################
'''
'''
#####################################################################
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
x_test_flows,
y_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
x_test_flows,
y_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
x_test_flows,
y_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
x_test_flows,
y_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
for n in norm_:
for tf in TF_:
classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n) ##TESTED
print("running", classifier_name)
y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
yh_train_labels,
xh_test_flows,
yh_test_labels,
cos_=c, TF_=tf, norm=n
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xinv_test_flows,
yinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
'''
print("TRAINING COMPLETE NOW TESTING AGAINST FLOWS WITH AGGREGATION")
print("AGGREGATION-2 TESTING")
####################################################################
## THE CODE BELOW IS TO TEST THE TRAINED MODEL AGAINST TESTING DATA WITH AGGR-2 FLOWS INSTALLED
##
#####################################################################
'''
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
xaggr2flows_test_flows,
yaggr2flows_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
# print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
xaggr2flows_test_flows,
yaggr2flows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
xaggr2flows_test_flows,
yaggr2flows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
xaggr2flows_test_flows,
yaggr2flows_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xaggr2flowsinv_test_flows,
yaggr2flowsinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
    for n in norm_:
        for tf in TF_:
            classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n)  ##TESTED
            print("running", classifier_name)
            y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
                                                                             yh_train_labels,
                                                                             xaggr2flowsh_test_flows,
                                                                             yaggr2flowsh_test_labels,
                                                                             cos_=c, TF_=tf, norm=n)
            results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
'''
print("AGGREGATION-5 TESTING")
'''
####################################################################
## THE CODE BELOW IS TO TEST THE TRAINED MODEL AGAINST TESTING DATA WITH AGGR-5 FLOWS INSTALLED
##
#####################################################################
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
xaggr5flows_test_flows,
yaggr5flows_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
# print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
xaggr5flows_test_flows,
yaggr5flows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
xaggr5flows_test_flows,
yaggr5flows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
xaggr5flows_test_flows,
yaggr5flows_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xaggr5flowsinv_test_flows,
yaggr5flowsinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
    for n in norm_:
        for tf in TF_:
            classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n)  ##TESTED
            print("running", classifier_name)
            y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
                                                                             yh_train_labels,
                                                                             xaggr5flowsh_test_flows,
                                                                             yaggr5flowsh_test_labels,
                                                                             cos_=c, TF_=tf, norm=n)
            results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
'''
print("AGGREGATION-2 WITH RANDOM TRAFFIC")
'''
####################################################################
## THE CODE BELOW IS TO TEST THE TRAINED MODEL AGAINST TESTING DATA WITH AGGREGATION-2 FLOWS INSTALLED, CONTAINING ALSO
## RANDOM WEBSITE TRAFFIC IN ADDITION TO THE ONE WE ARE FINGERPRINTING
##
#####################################################################
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
xaggr2_RANDflows_test_flows,
yaggr2_RANDflows_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
# print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
xaggr2_RANDflows_test_flows,
yaggr2_RANDflows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
xaggr2_RANDflows_test_flows,
yaggr2_RANDflows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
xaggr2_RANDflows_test_flows,
yaggr2_RANDflows_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xaggr2_RANDflowsinv_test_flows,
yaggr2_RANDflowsinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
    for n in norm_:
        for tf in TF_:
            classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n)  ##TESTED
            print("running", classifier_name)
            y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
                                                                             yh_train_labels,
                                                                             xaggr2_RANDflowsh_test_flows,
                                                                             yaggr2_RANDflowsh_test_labels,
                                                                             cos_=c, TF_=tf, norm=n)
            results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
'''
print("AGGREGATION-5 WITH RANDOM TRAFFIC")
'''
####################################################################
## THE CODE BELOW IS TO TEST THE TRAINED MODEL AGAINST TESTING DATA WITH AGGREGATION-5 FLOWS INSTALLED, CONTAINING ALSO
## RANDOM WEBSITE TRAFFIC IN ADDITION TO THE ONE WE ARE FINGERPRINTING
##
#####################################################################
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
xaggr5_RANDflows_test_flows,
yaggr5_RANDflows_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
# print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
xaggr5_RANDflows_test_flows,
yaggr5_RANDflows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
xaggr5_RANDflows_test_flows,
yaggr5_RANDflows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
xaggr5_RANDflows_test_flows,
yaggr5_RANDflows_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xaggr5_RANDflowsinv_test_flows,
yaggr5_RANDflowsinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
    for n in norm_:
        for tf in TF_:
            classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n)  ##TESTED
            print("running", classifier_name)
            y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
                                                                             yh_train_labels,
                                                                             xaggr5_RANDflowsh_test_flows,
                                                                             yaggr5_RANDflowsh_test_labels,
                                                                             cos_=c, TF_=tf, norm=n)
            results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
'''
print("AGGREGATION TESTING COMPLETE NOW TESTING AGAINST FLOWS WITH SPLIT")
'''
####################################################################
## THE CODE BELOW IS TO TEST THE TRAINED MODEL AGAINST TESTING DATA WITH SPLITTING FLOWS INSTALLED
##
#####################################################################
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
xsplitflows_test_flows,
ysplitflows_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
# print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
xsplitflows_test_flows,
ysplitflows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
xsplitflows_test_flows,
ysplitflows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
xsplitflows_test_flows,
ysplitflows_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xsplitflowsinv_test_flows,
ysplitflowsinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
    for n in norm_:
        for tf in TF_:
            classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n)  ##TESTED
            print("running", classifier_name)
            y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
                                                                             yh_train_labels,
                                                                             xsplitflowsh_test_flows,
                                                                             ysplitflowsh_test_labels,
                                                                             cos_=c, TF_=tf, norm=n)
            results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
'''
print("SPLIT TESTING ON ALL INTERFACE COMPLETE NOW TESTING AGAINST FLOWS WITH SPLIT BUT COLLECTED ON ONE INTERFACE ONLY")
'''
####################################################################
## THE CODE BELOW IS TO TEST THE TRAINED MODEL AGAINST TESTING DATA WITH SPLITTING FLOWS COLLECTED ON ONE INTERFACE INSTALLED
##
#####################################################################
classifier_name = "liberatoreNB2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_NB(x_train_flows,
y_train_labels,
xsplit_oneinterface_flows_test_flows,
ysplit_oneinterface_flows_test_labels,
)
results = classifiers.common_utils.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
# print(results)
#####################################################################
classifier_name = "liberatoreJaccard2006"
print("running", classifier_name)
y_test, y_pred = classifiers.liberatore_2006.classifier_liberatore_jaccard(x_train_flows,
y_train_labels,
xsplit_oneinterface_flows_test_flows,
ysplit_oneinterface_flows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "panchenko_2011"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2011.classifier_panchenko2011(x_train_flows,
y_train_labels,
xsplit_oneinterface_flows_test_flows,
ysplit_oneinterface_flows_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#####################################################################
classifier_name = "dyer_2012_notime"
print("running", classifier_name)
y_test, y_pred = classifiers.dyer_2012.classifier_dyer2012(x_train_flows,
y_train_labels,
xsplit_oneinterface_flows_test_flows,
ysplit_oneinterface_flows_test_labels,
time_train=None,
time_test=None
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
classifier_name = "panchenko_2016"
print("running", classifier_name)
y_test, y_pred = classifiers.panchenko_2016.classifier_panchenko2016(xinv_train_flows,
yinv_train_labels,
xsplit_oneinterface_flowsinv_test_flows,
ysplit_oneinterface_flowsinv_test_labels,
)
results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
cos_ = [True, False]
norm_ = [True, False]
TF_ = [True, False]
for c in cos_:
    for n in norm_:
        for tf in TF_:
            classifier_name = "herrman_2009_TF_%s__cos_%s__norm_%s" % (tf, c, n)  ##TESTED
            print("running", classifier_name)
            y_test, y_pred = classifiers.herrman_2009.classifier_herrman2009(xh_train_flows,
                                                                             yh_train_labels,
                                                                             xsplit_oneinterface_flowsh_test_flows,
                                                                             ysplit_oneinterface_flowsh_test_labels,
                                                                             cos_=c, TF_=tf, norm=n)
            results = classifiers.get_results(results, 'my dataset', classifier_name, y_test, y_pred, results_file)
#####################################################################
#print(results)
'''
train_it()
#read_data_from_pickles(data_path_train)
# ---- spritesheet.py (repo: qwmks/GSKurs) ----
import pygame
class SpriteSheet():
    def __init__(self, image):
        self.sheet = image

    def get_image(self, frame, width, height, scale, colour):
        image = pygame.Surface((width, height)).convert_alpha()
        image.blit(self.sheet, (0, 0), ((frame * width), 0, width, height))
        image = pygame.transform.scale(image, (width * scale, height * scale))
        image.set_colorkey(colour)
        return image

    def get_player(self, frame, width, height, scale, colour):
        border = (64 - width) / 2
        vert_offset = 64 - height
        image = pygame.Surface((width, height)).convert_alpha()
        image.blit(self.sheet,
                   (0, 0),
                   ((border + frame * (width + 2 * border)), vert_offset, width, height)
                   )
        image = pygame.transform.scale(image, (width * scale, height * scale))
        image.set_colorkey(colour)
        return image

    def get_icon(self, width, height, scale, colour):
        image = pygame.Surface((width, height)).convert_alpha()
        image.blit(self.sheet, (0, 0), (0, 64 * 2, width, height))
        image = pygame.transform.scale(image, (width * scale, height * scale))
        image.set_colorkey(colour)
        return image

    # def get_image(self, frame, width, height, scale, colour, border):
    #     image = pygame.Surface((width, height)).convert_alpha()
    #     # image.blit(self.sheet, (0, 0), ((frame * width), 0, width, height))
    #     image.blit(self.sheet, (border, border), ((frame * (width + (border * 2)), 0, width, height)))
    #     image = pygame.transform.scale(image, (width * scale, height * scale))
    #     image.set_colorkey(colour)
    #     return image
# ---- pyqumo/fitting/__init__.py (repo: larioandr/thesis-queues) ----
from .acph2 import fit_acph2
from .johnson89 import fit_mern2
from .horvath05 import fit_map_horvath05, optimize_lag1
# ---- thirdweb/storage/__init__.py (repo: princetonwong/python-sdk) ----
from .ipfs_storage import *
# ---- test/test_settings.py (repo: mudox/pytav) ----
# -*- coding: utf-8 -*-

import tav.settings as cfg


def test_tmux_section():
    assert cfg.tmux.serverPID is not None
# ---- apps/classroom/admin.py (repo: alfarhanzahedi/edumate) ----
from django.contrib import admin

from .models import Classroom
from .models import Post
from .models import Comment


@admin.register(Classroom)
class ClassroomAdmin(admin.ModelAdmin):
    pass


@admin.register(Post)
class PostAdmin(admin.ModelAdmin):
    pass


@admin.register(Comment)
class CommentAdmin(admin.ModelAdmin):
    pass
# ---- src/foo.py (repo: MalteIwanicki/dummy) ----
from . import bar


def get_foo():
    return "foo"


def func():
    return bar.get_bar()
# ---- fileSystem/school-projects/development/softwaredesignandcomputerlogiccis122/cis122lab4/python/lab4.py (repo: nomad-mystic/nomadmystic) ----
# File = lab4.py
# Date Created = 2-22-2015
# Last Mod = 2-26-2015
# Hello Mark,
# This is lab4; it's modeled off of a website's cart check-out. It asks for the typical inputs for an
# online shopping cart: first name, last name, email, phone number, credit card type and number.
# Valid Formats:
# first name = string, one word no numbers no spaces
# Last name = string, one word no numbers no spaces
# email = string, need @ symbol can only be used once, can use dot more than once but need one
# Phone number = string, this is the only format that works 555-555-5555
# Credit card type = string, can be Mastercard, Visa, Discover capped or not
# Credit card number =
# for Mastercard number = 'All MasterCard numbers start with the
# numbers 51 through 55. All have 16 digits.'
# # valid Mastercard 5165686598754525
#
# for Visa number = 'All Visa card numbers start with a 4. New cards
# have 16 digits. Old cards have 13.'
# # valid visa = 4366182844327555
#
# for Discover number = 'Discover card numbers begin with 6011 or 65. All have 16 digits.'
# # valid discover 6011656459875421
#
# Credit for Regular expressions and quotes above = 'http://www.regular-expressions.info/'
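The card formats quoted above translate directly into regular expressions; a sketch (these patterns are my own reading of the descriptions, not code from this lab):

```python
import re

# Assumed patterns: Mastercard 51-55 prefix, 16 digits; Visa leading 4,
# 13 or 16 digits; Discover 6011 or 65 prefix, 16 digits.
CARD_PATTERNS = {
    'mastercard': re.compile(r'^5[1-5]\d{14}$'),
    'visa': re.compile(r'^4\d{12}(\d{3})?$'),
    'discover': re.compile(r'^(6011\d{12}|65\d{14})$'),
}


def card_number_matches(card_type, number):
    pattern = CARD_PATTERNS.get(card_type.lower())
    return bool(pattern and pattern.match(number))
```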
#
# Two More Notes: One: I know two of the functions check_credit_card_type and credit_card_validation_loop
# in my program could be combined in a Do...While loop but I built it mostly and
# there was no turning back
#
# Two: There is one bug in my program I have noticed. If you type your name wrong on
# the credit card name input the first time you try, it will throw an error even
# after you go through the loop correctly.
########################################################################################################################
# Inputs: name_input_value, email_input_value, number_input_value, credit_card_number
# credit_card_type
# Outputs: valid_matched_first_name, valid_matched_last_name, valid_matched_email, valid_matched_number,
# credit_card_type, credit_card_number
# Variables:
# Declare str valid
# Declare str first_name_input_value
# Declare str valid_matched_first_name
# Declare str valid_first_name
# Declare boolean tested_first_name_validation
# Declare str last_name_input_value
# Declare str valid_matched_last_name
# Declare str valid_last_name
# Declare Boolean tested_last_name_validation
# Declare str email_input_value
# Declare str valid_matched_email
# Declare Boolean tested_email_validation
# Declare str valid_email
# Declare str number_input_value
# Declare str valid_matched_number
# Declare str valid_number
# Declare Boolean tested_number_validation
# Declare str credit_card_number
# Declare str credit_card_type
# Declare str valid_card_number
# Declare str credit_card_number
# Declare str matched_valid_card_number
# Declare Boolean invalid_card_number
# Declare Boolean tested_card_number_validation
# Import regular expressions external library
import re
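The "Valid Formats" rules listed in the header comment can be sketched as patterns (my own reading of the notes; only the name pattern actually appears in the lab code below):

```python
import re

NAME_RE = re.compile(r'^[a-zA-Z]+$')           # one word, letters only
PHONE_RE = re.compile(r'^\d{3}-\d{3}-\d{4}$')  # 555-555-5555 only
EMAIL_RE = re.compile(r'^[^@\s]+@[^@\s.]+(\.[^@\s.]+)+$')  # one '@', dotted domain
```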
# Module welcome_message()
# Display Welcome Message
# End Module
def welcome_message():
    print('Welcome to your basic shopping cart checkout.')
# Function first_name_input():
# Declare str name_input_value
#
# Display Please Type your first name:
# Input name_input_value
# Return name_input_value
# End Function
def first_name_input():
    name_input_value = str(input('Please Type your first name: '))
    return name_input_value
# Function str first_name_validation_loop( str first_name_input_value)
# Declare str Boolean valid_matched_first_name
# Declare Boolean tested_first_name_validation
# Declare str first_name_input_value
#
# Set tested_first_name_validation, valid_matched_first_name = run_first_name_validation(first_name_input_value)
#
# While Not tested_first_name_validation
# Set first_name_input_value = first_name_input()
# Set tested_first_name_validation, valid_matched_first_name = run_first_name_validation(first_name_input_value)
# End While
#
# Return valid_matched_first_name
# End Function
def first_name_validation_loop(first_name_input_value):
    tested_first_name_validation, valid_matched_first_name = run_first_name_validation(first_name_input_value)
    while not tested_first_name_validation:
        first_name_input_value = first_name_input()
        tested_first_name_validation, valid_matched_first_name = run_first_name_validation(first_name_input_value)
    return valid_matched_first_name
# Function str run_first_name_validation(str first_name_input_value)
# Declare str valid
# Declare str valid_first_name
# Declare boolean tested_first_name_validation
#
# Set valid = re.compile('^[a-zA-Z]+$')
# Set valid_first_name = valid.match(first_name_input_value)
#
# If valid_first_name Then
# Return True, valid_first_name.group()
# Else
# Set tested_first_name_validation = False
# Return tested_first_name_validation, False
# End If
# End Function
def run_first_name_validation(first_name_input_value):
    valid = re.compile('^[a-zA-Z]+$')
    valid_first_name = valid.match(first_name_input_value)
    if valid_first_name:
        return True, valid_first_name.group()
    else:
        tested_first_name_validation = False
        return tested_first_name_validation, False
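# Illustrative sketch (not part of the original program): the name validators
# above follow a two-value contract, returning (True, matched_name) on success
# and (False, False) when the input contains anything besides letters.
import re

def _demo_name_validation(value):
    pattern = re.compile('^[a-zA-Z]+$')  # letters only, anchored at both ends
    match = pattern.match(value)
    if match:
        return True, match.group()
    return False, False

assert _demo_name_validation('Alice') == (True, 'Alice')
assert _demo_name_validation('Alice99') == (False, False)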
###############################################################################
# Function last_name_input
# Declare name_input_value
# Display Please Type your last name:
# Input name_input_value
# Return name_input_value
# End Function
def last_name_input():
    name_input_value = str(input('Please Type your last name: '))
    return name_input_value
# Function str run_last_name_validation(str last_name_input_value)
# Declare str valid
# Declare str valid_last_name
# Declare boolean tested_last_name_validation
#
# Set valid = re.compile('^[a-zA-Z]+$')
# Set valid_last_name = valid.match(last_name_input_value)
#
# If valid_last_name Then
# Return True, valid_last_name.group()
# Else:
# Set tested_last_name_validation = False
# Return tested_last_name_validation, False
# End If
# End Function
def run_last_name_validation(last_name_input_value):
    valid = re.compile('^[a-zA-Z]+$')
    valid_last_name = valid.match(last_name_input_value)
    if valid_last_name:
        return True, valid_last_name.group()
    else:
        tested_last_name_validation = False
        return tested_last_name_validation, False
# Function str last_name_validation_loop(str last_name_input_value)
# Declare Boolean valid_matched_last_name
# Declare Boolean tested_last_name_validation
# Declare str last_name_input_value
#
# Set tested_last_name_validation, valid_matched_last_name = run_last_name_validation(last_name_input_value)
# While Not tested_last_name_validation
# Set last_name_input_value = last_name_input()
# Set tested_last_name_validation, valid_matched_last_name = run_last_name_validation(last_name_input_value)
# End While
#
# Return valid_matched_last_name
# End Function
def last_name_validation_loop(last_name_input_value):
    tested_last_name_validation, valid_matched_last_name = run_last_name_validation(last_name_input_value)
    while not tested_last_name_validation:
        last_name_input_value = last_name_input()
        tested_last_name_validation, valid_matched_last_name = run_last_name_validation(last_name_input_value)
    return valid_matched_last_name
##################################################################################
# Function email_input()
# Declare str email_input_value
#
# Display Please enter your email address (Examples: your_email@domain.*):
# Input email_input_value
#
# Return email_input_value
# End Function
def email_input():
    email_input_value = input('Please enter your email address (Examples: your_email@domain.*): ')
    return email_input_value
# Function str run_email_validation(str email_input_value)
# Declare str valid_email
# Declare str matched_valid_email
# Declare boolean tested_email_validation
#
# Set valid_email = re.compile('[^@]+@[^@]+\.[^@]+')
# Set matched_valid_email = valid_email.match(email_input_value)
#
# If matched_valid_email Then
#
# Return True, matched_valid_email.group()
# Else:
# Set tested_email_validation = False
# End If
# Return tested_email_validation, False
# End Function
def run_email_validation(email_input_value):
    # Raw string avoids the invalid "\." escape warning in newer Pythons
    valid_email = re.compile(r'[^@]+@[^@]+\.[^@]+')
    matched_valid_email = valid_email.match(email_input_value)
    if matched_valid_email:
        return True, matched_valid_email.group()
    else:
        tested_email_validation = False
        return tested_email_validation, False
# Function str email_validation_loop(str email_input_value)
# Declare Boolean tested_email_validation
# Declare str valid_matched_email
#
# Set tested_email_validation, valid_matched_email = run_email_validation(email_input_value)
#
# While Not tested_email_validation
# Set email_input_value = email_input()
# Set tested_email_validation, valid_matched_email = run_email_validation(email_input_value)
#
# Return valid_matched_email
# End Function
def email_validation_loop(email_input_value):
    tested_email_validation, valid_matched_email = run_email_validation(email_input_value)
    while not tested_email_validation:
        email_input_value = email_input()
        tested_email_validation, valid_matched_email = run_email_validation(email_input_value)
    return valid_matched_email
###############################################################################
# Function number_input()
# Declare str number_input_value
#
# Display Please enter your phone number (Examples: 555-555-5555):
# Input number_input_value
#
# Return number_input_value
# End Function
def number_input():
    number_input_value = input('Please enter your phone number (Examples: 555-555-5555): ')
    return number_input_value
# Function str run_number_validation(number_input_value)
# Declare str valid_number
# Declare str matched_valid_number
# Declare Boolean tested_number_validation
#
# Set valid_number = re.compile("^(\d{3})-(\d{3})-(\d{4})$")
# Set matched_valid_number = valid_number.match(number_input_value)
#
# If matched_valid_number Then
#
# Return True, matched_valid_number.group()
# Else
# tested_number_validation = False
# return tested_number_validation, False
# End If
# End Function
def run_number_validation(number_input_value):
    # Raw string avoids invalid "\d" escape warnings in newer Pythons
    valid_number = re.compile(r"^(\d{3})-(\d{3})-(\d{4})$")
    matched_valid_number = valid_number.match(number_input_value)
    if matched_valid_number:
        return True, matched_valid_number.group()
    else:
        tested_number_validation = False
        return tested_number_validation, False
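# Quick sanity check (illustrative, not in the original source): the phone
# pattern accepts only the exact NNN-NNN-NNNN shape, so unformatted or
# over-long numbers are rejected.
import re

_phone_pattern = re.compile(r"^(\d{3})-(\d{3})-(\d{4})$")
assert _phone_pattern.match("555-555-5555") is not None
assert _phone_pattern.match("5555555555") is None      # missing dashes
assert _phone_pattern.match("555-555-55555") is None   # too many digits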
# Function str number_validation_loop(str number_input_value)
# Declare Boolean tested_number_validation
# Declare str valid_matched_number
# Declare str number_input_value
#
# Set tested_number_validation, valid_matched_number = run_number_validation(number_input_value)
#
# While not tested_number_validation:
# Set number_input_value = number_input()
# Set tested_number_validation, valid_matched_number = run_number_validation(number_input_value)
# End While
#
# Return valid_matched_number
# End Function
def number_validation_loop(number_input_value):
    tested_number_validation, valid_matched_number = run_number_validation(number_input_value)
    while not tested_number_validation:
        number_input_value = number_input()
        tested_number_validation, valid_matched_number = run_number_validation(number_input_value)
    return valid_matched_number
#############################################################################################
# Function credit_card_input()
# Declare str credit_card_type
# Declare str credit_card_number
#
# Display Please enter your credit card type (Examples: Mastercard, Visa, Discover):
# Input credit_card_type
# Display Please enter your credit card number(No Spaces):
# Input credit_card_number
#
# Return credit_card_number, credit_card_type
# End Function
def credit_card_input():
    credit_card_type = input('Please enter your credit card type (Examples: Mastercard, Visa, Discover): ')
    credit_card_number = input('Please enter your credit card number (No Spaces): ')
    return credit_card_number, credit_card_type
# Function str str check_credit_card_type(str credit_card_number, str credit_card_type)
# Declare str credit_card_type
# Declare str credit_card_number
# Declare Boolean tested_card_number_validation
#
# If credit_card_type == 'Mastercard' or credit_card_type == 'mastercard' Then
# Set tested_card_number_validation, credit_card_number, credit_card_type = \
# run_master_card_validation(credit_card_number, credit_card_type)
#
# Set credit_card_number, credit_card_type = credit_card_validation_loop(tested_card_number_validation,
# credit_card_number,
# credit_card_type)
# Return credit_card_number, credit_card_type
#
# Else If credit_card_type == 'Visa' or credit_card_type == 'visa' Then
# Set tested_card_number_validation, credit_card_number, credit_card_type = \
# run_visa_card_validation(credit_card_number, credit_card_type)
#
# Set credit_card_number, credit_card_type = credit_card_validation_loop(tested_card_number_validation,
# credit_card_number,
# credit_card_type)
# Return credit_card_number, credit_card_type
#
# Else If credit_card_type == 'Discover' or credit_card_type == 'discover' Then
# Set tested_card_number_validation, credit_card_number, credit_card_type = \
# run_discover_card_validation(credit_card_number, credit_card_type)
#
# Set credit_card_number, credit_card_type = credit_card_validation_loop(tested_card_number_validation,
# credit_card_number,
# credit_card_type)
# Return credit_card_number, credit_card_type
# Else
# Display Sorry I didn't understand what you entered, Please try Again
# Call credit_card_input()
# End If
# End Function
def check_credit_card_type(credit_card_number, credit_card_type):
    if credit_card_type == 'Mastercard' or credit_card_type == 'mastercard':
        tested_card_number_validation, credit_card_number, credit_card_type = \
            run_master_card_validation(credit_card_number, credit_card_type)
        credit_card_number, credit_card_type = credit_card_validation_loop(tested_card_number_validation,
                                                                           credit_card_number,
                                                                           credit_card_type)
        return credit_card_number, credit_card_type
    elif credit_card_type == 'Visa' or credit_card_type == 'visa':
        tested_card_number_validation, credit_card_number, credit_card_type = \
            run_visa_card_validation(credit_card_number, credit_card_type)
        credit_card_number, credit_card_type = credit_card_validation_loop(tested_card_number_validation,
                                                                           credit_card_number,
                                                                           credit_card_type)
        return credit_card_number, credit_card_type
    elif credit_card_type == 'Discover' or credit_card_type == 'discover':
        tested_card_number_validation, credit_card_number, credit_card_type = \
            run_discover_card_validation(credit_card_number, credit_card_type)
        credit_card_number, credit_card_type = credit_card_validation_loop(tested_card_number_validation,
                                                                           credit_card_number,
                                                                           credit_card_type)
        return credit_card_number, credit_card_type
    else:
        print("Sorry, I didn't understand what you entered. Please try again.")
        # Re-prompt and re-validate instead of discarding the new input
        credit_card_number, credit_card_type = credit_card_input()
        return check_credit_card_type(credit_card_number, credit_card_type)
# Function Boolean str str credit_card_validation_loop(Boolean tested_card_number_validation,
# str credit_card_number, str credit_card_type)
# Declare str credit_card_number
# Declare Boolean tested_card_number_validation
# Declare str credit_card_type
# While not tested_card_number_validation
#
# Set credit_card_number, credit_card_type = credit_card_input()
#
# If credit_card_type == 'Mastercard' or credit_card_type == 'mastercard' Then
# Set tested_card_number_validation, credit_card_number, credit_card_type = \
# run_master_card_validation(credit_card_number, credit_card_type)
#
# Else If credit_card_type == 'Visa' or credit_card_type == 'visa' Then
# Set tested_card_number_validation, credit_card_number, credit_card_type = \
# run_visa_card_validation(credit_card_number, credit_card_type)
#
# Else If credit_card_type == 'Discover' or credit_card_type == 'discover' Then
# Set tested_card_number_validation, credit_card_number, credit_card_type = \
# run_discover_card_validation(credit_card_number, credit_card_type)
# End If
# End While
# Set credit_card_number = 'Your card was approved. Thank You for shopping with us here at NomadMystics.com '
# Return credit_card_number, credit_card_type
# End Function
def credit_card_validation_loop(tested_card_number_validation, credit_card_number, credit_card_type):
    while not tested_card_number_validation:
        credit_card_number, credit_card_type = credit_card_input()
        if credit_card_type == 'Mastercard' or credit_card_type == 'mastercard':
            tested_card_number_validation, credit_card_number, credit_card_type = \
                run_master_card_validation(credit_card_number, credit_card_type)
        elif credit_card_type == 'Visa' or credit_card_type == 'visa':
            tested_card_number_validation, credit_card_number, credit_card_type = \
                run_visa_card_validation(credit_card_number, credit_card_type)
        elif credit_card_type == 'Discover' or credit_card_type == 'discover':
            tested_card_number_validation, credit_card_number, credit_card_type = \
                run_discover_card_validation(credit_card_number, credit_card_type)
    credit_card_number = 'Your card was approved. Thank You for shopping with us here at NomadMystics.com '
    return credit_card_number, credit_card_type
# Function str str run_master_card_validation(str credit_card_number, str credit_card_type)
# Declare str valid_card_number
# Declare str credit_card_number
# Declare str matched_valid_card_number
# Declare Boolean invalid_card_number
# Declare Boolean tested_card_number_validation
#
# valid_card_number = re.compile('^5[1-5][0-9]{14}$')
# matched_valid_card_number = valid_card_number.match(credit_card_number)
#
# If matched_valid_card_number Then
#
# Return True, matched_valid_card_number.group(), credit_card_type
# Else
# Set invalid_card_number = False
# Set tested_card_number_validation = False
#
# Return tested_card_number_validation, invalid_card_number, credit_card_type
# End If
# End Function
def run_master_card_validation(credit_card_number, credit_card_type):
    valid_card_number = re.compile('^5[1-5][0-9]{14}$')
    matched_valid_card_number = valid_card_number.match(credit_card_number)
    if matched_valid_card_number:
        return True, matched_valid_card_number.group(), credit_card_type
    else:
        invalid_card_number = False
        tested_card_number_validation = False
        return tested_card_number_validation, invalid_card_number, credit_card_type
# Function str str run_visa_card_validation(str credit_card_number, str credit_card_type)
# Declare str valid_card_number
# Declare str credit_card_number
# Declare str matched_valid_card_number
# Declare Boolean invalid_card_number
# Declare Boolean tested_card_number_validation
#
# valid_card_number = re.compile('^4[0-9]{12}(?:[0-9]{3})?$')
# matched_valid_card_number = valid_card_number.match(credit_card_number)
#
# If matched_valid_card_number Then
#
# Return True, matched_valid_card_number.group(), credit_card_type
# Else
# Set invalid_card_number = False
# Set tested_card_number_validation = False
#
# Return tested_card_number_validation, invalid_card_number, credit_card_type
# End If
# End Function
def run_visa_card_validation(credit_card_number, credit_card_type):
    valid_card_number = re.compile('^4[0-9]{12}(?:[0-9]{3})?$')
    matched_valid_card_number = valid_card_number.match(credit_card_number)
    if matched_valid_card_number:
        return True, matched_valid_card_number.group(), credit_card_type
    else:
        invalid_card_number = False
        tested_card_number_validation = False
        return tested_card_number_validation, invalid_card_number, credit_card_type
# Function str str run_discover_card_validation(str credit_card_number, str credit_card_type)
# Declare str valid_card_number
# Declare str credit_card_number
# Declare str matched_valid_card_number
# Declare Boolean invalid_card_number
# Declare Boolean tested_card_number_validation
#
# valid_card_number = re.compile('^6(?:011|5[0-9]{2})[0-9]{12}$')
# matched_valid_card_number = valid_card_number.match(credit_card_number)
#
# If matched_valid_card_number Then
#
# Return True, matched_valid_card_number.group(), credit_card_type
# Else
# Set invalid_card_number = False
# Set tested_card_number_validation = False
#
# Return tested_card_number_validation, invalid_card_number, credit_card_type
# End If
# End Function
def run_discover_card_validation(credit_card_number, credit_card_type):
    valid_card_number = re.compile('^6(?:011|5[0-9]{2})[0-9]{12}$')
    matched_valid_card_number = valid_card_number.match(credit_card_number)
    if matched_valid_card_number:
        return True, matched_valid_card_number.group(), credit_card_type
    else:
        invalid_card_number = False
        tested_card_number_validation = False
        return tested_card_number_validation, invalid_card_number, credit_card_type
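# Illustrative sanity checks (not in the original source) for the three card
# patterns above, using well-known dummy test card numbers.
import re

_mastercard = re.compile(r'^5[1-5][0-9]{14}$')
_visa = re.compile(r'^4[0-9]{12}(?:[0-9]{3})?$')
_discover = re.compile(r'^6(?:011|5[0-9]{2})[0-9]{12}$')

assert _mastercard.match('5105105105105100') is not None  # 16 digits, 51-55 prefix
assert _visa.match('4111111111111111') is not None        # 16-digit Visa
assert _visa.match('4222222222222') is not None           # 13-digit Visa also allowed
assert _discover.match('6011000990139424') is not None    # 6011 prefix
assert _mastercard.match('4111111111111111') is None      # wrong network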
###########################################################################################
# Module str str str str str str str check_out_message(str valid_matched_first_name, str valid_matched_last_name,
# str valid_matched_email, str valid_matched_number,
# str credit_card_type, str credit_card_number)
# Display Confirmation Letter
# Display This is the information we received for your cart:
# Display Name: valid_matched_first_name, valid_matched_last_name
# Display Email: valid_matched_email
# Display Phone Number: valid_matched_number
# Display Credit Card Type: credit_card_type
# Display credit_card_number
# End Module
def check_out_message(valid_matched_first_name, valid_matched_last_name, valid_matched_email,
                      valid_matched_number, credit_card_type, credit_card_number):
    print('Success! Confirmation Letter!!')
    print('This is the information we received from your cart inputs:')
    print('Name:', valid_matched_first_name, valid_matched_last_name)
    print('Email:', valid_matched_email)
    print('Phone Number:', valid_matched_number)
    print('Credit Card Type:', credit_card_type)
    print(credit_card_number)
# Module main()
#
# Declare str first_name_input_value
# Declare str valid_matched_first_name
# Declare str last_name_input_value
# Declare str valid_matched_last_name
# Declare str email_input_value
# Declare str valid_matched_email
# Declare str number_input_value
# Declare str valid_matched_number
# Declare str credit_card_number
# Declare str credit_card_type
#
# Call welcome_message()
#
# Set first_name_input_value = first_name_input()
# Set valid_matched_first_name = first_name_validation_loop(first_name_input_value)
#
# Set last_name_input_value = last_name_input()
# Set valid_matched_last_name = last_name_validation_loop(last_name_input_value)
#
# Set email_input_value = email_input()
# Set valid_matched_email = email_validation_loop(email_input_value)
#
# Set number_input_value = number_input()
# Set valid_matched_number = number_validation_loop(number_input_value)
#
# Set credit_card_number, credit_card_type = credit_card_input()
# Set credit_card_number, credit_card_type = check_credit_card_type(credit_card_number, credit_card_type)
#
# Call check_out_message(valid_matched_first_name, valid_matched_last_name, valid_matched_email,
# valid_matched_number, credit_card_type, credit_card_number)
# End Module
# Call main()
def main():
    welcome_message()
    first_name_input_value = first_name_input()
    valid_matched_first_name = first_name_validation_loop(first_name_input_value)
    last_name_input_value = last_name_input()
    valid_matched_last_name = last_name_validation_loop(last_name_input_value)
    email_input_value = email_input()
    valid_matched_email = email_validation_loop(email_input_value)
    number_input_value = number_input()
    valid_matched_number = number_validation_loop(number_input_value)
    credit_card_number, credit_card_type = credit_card_input()
    credit_card_number, credit_card_type = check_credit_card_type(credit_card_number, credit_card_type)
    check_out_message(valid_matched_first_name, valid_matched_last_name, valid_matched_email,
                      valid_matched_number, credit_card_type, credit_card_number)
main()
# File: ope-backend/src/infra/repository/__init__.py (repo: mthora/ope-talos, license: CC0-1.0)
] | null | null | null | from .user_repository import UserRepository
from .drink_repository import DrinkRepository
from .dessert_repository import DessertRepository
from .role_repository import RoleRepository
# File: dac/__init__.py (repo: csinva/disentangled-attribution-curves, license: MIT)
from .dac import dac, dac_plot
# File: test_scheduler.py (repo: HaeckelK/connected-cards-api, license: MIT)
from dataclasses import asdict
from scheduler import Scheduler
from models import CardOut, CardIn, ReviewOut
import scheduler as module
def test_scheduler_init():
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
# create_reviews
def test_scheduler_create_reviews_empty():
    # Given a scheduler and empty cards and reviews
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
    cards = []
    reviews = []
    # When creating reviews
    reviews = scheduler.create_reviews(reviews=reviews, cards=cards)
    # Then no reviews created
    assert not reviews
def test_scheduler_create_reviews_properties(monkeypatch):
    monkeypatch.setattr(module, "timestamp", lambda: 0)
    # Given a scheduler, empty reviews and a card
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
    # TODO connect new card creation to app.py functions
    # TODO options on regular
    new_card = CardIn(note_id=1, direction="regular", deck_id=1, question="Hello", answer="Bonjour")
    cards = [
        CardOut(
            id=1,
            **asdict(new_card),
            status="new",
            time_created=0,
            time_latest_review=-1,
            current_review_interval=-1,
            grade="A",
        )
    ]
    reviews = []
    # When creating reviews
    reviews = scheduler.create_reviews(reviews=reviews, cards=cards)
    # Then review created with expected attributes
    assert reviews == [
        ReviewOut(
            id=1,
            card=CardOut(
                id=1,
                deck_id=1,
                note_id=1,
                direction="regular",
                question="Hello",
                answer="Bonjour",
                status="new",
                time_created=0,
                time_latest_review=-1,
                current_review_interval=-1,
                grade="A",
            ),
            time_created=0,
            time_completed=-1,
            review_status="not_reviewed",
            correct=-1,
        )
    ]
def test_scheduler_create_reviews_dispersal_group(monkeypatch):
    monkeypatch.setattr(module, "timestamp", lambda: 0)
    # Given a scheduler, empty reviews and multiple cards, some sharing a dispersal group
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
    # TODO connect new card creation to app.py functions
    # TODO options on regular
    new_card = CardIn(note_id=1, direction="regular", deck_id=1, question="Hello", answer="Bonjour")
    card_template = CardOut(
        id=1,
        **asdict(new_card),
        status="new",
        time_created=0,
        time_latest_review=-1,
        current_review_interval=-1,
        grade="A",
    )
    card1 = card_template.copy()
    card2 = card_template.copy()
    card3 = card_template.copy()
    card1.id = 1
    card2.id = 2
    card3.id = 3
    card2.note_id = 2
    card2.dispersal_groups = [1]  # TODO make this a random number
    card3.note_id = 3
    card3.dispersal_groups = [1]
    cards = [card1, card2, card3]
    reviews = []
    # When creating reviews
    reviews = scheduler.create_reviews(reviews=reviews, cards=cards)
    # Then only one card per dispersal group gets a review
    assert len(reviews) == 2
    assert set([review.card.id for review in reviews]) == set([1, 2])
def test_scheduler_create_reviews_dispersal_groups_all_distinct():
    # Given a scheduler, empty reviews and multiple cards with no common dispersal group ids
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
    # TODO connect new card creation to app.py functions
    # TODO options on regular
    new_card = CardIn(note_id=1, direction="regular", deck_id=1, question="Hello", answer="Bonjour")
    card_template = CardOut(
        id=1,
        **asdict(new_card),
        status="new",
        time_created=0,
        time_latest_review=-1,
        current_review_interval=-1,
        grade="A",
    )
    card1 = card_template.copy()
    card2 = card_template.copy()
    card3 = card_template.copy()
    card1.id = 1
    card2.id = 2
    card3.id = 3
    card1.dispersal_groups = [1]
    card2.note_id = 2
    card2.dispersal_groups = [2]
    card3.note_id = 3
    card3.dispersal_groups = [3]
    cards = [card1, card2, card3]
    reviews = []
    # When creating reviews
    reviews = scheduler.create_reviews(reviews=reviews, cards=cards)
    # Then a review is created for every card
    assert len(reviews) == 3
    assert set([review.card.id for review in reviews]) == set([1, 2, 3])
# TODO need to check what happens when running for multiple days
# card scheduling
def test_schedule_schedule_card_new_correct(monkeypatch):
    monkeypatch.setattr(module, "timestamp", lambda: 0)
    # Given a scheduler and a new card
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
    # TODO connect new card creation to app.py functions
    new_card = CardIn(note_id=1, direction="regular", deck_id=1, question="Hello", answer="Bonjour")
    card = CardOut(
        id=1,
        **asdict(new_card),
        status="new",
        time_created=0,
        time_latest_review=-1,
        current_review_interval=-1,
        grade="A",
    )
    # When scheduling next review for correct answer
    card = scheduler.schedule_card(card, correct=True)
    # Then card time_latest_review updated
    assert card.time_latest_review == scheduler.review_time
    # Then current_review_interval is scheduler minimum
    assert card.current_review_interval == scheduler.minimum_review_interval
def test_schedule_schedule_card_new_incorrect(monkeypatch):
    monkeypatch.setattr(module, "timestamp", lambda: 0)
    # Given a scheduler and a new card
    scheduler = Scheduler(new_cards_limit=100, allow_cards_from_same_note=True, total_cards_limit=100, success_increment_factor=2.0)
    # TODO connect new card creation to app.py functions
    new_card = CardIn(note_id=1, direction="regular", deck_id=1, question="Hello", answer="Bonjour")
    card = CardOut(
        id=1,
        **asdict(new_card),
        status="new",
        time_created=0,
        time_latest_review=-1,
        current_review_interval=-1,
        grade="A",
    )
    # When scheduling next review for incorrect answer
    card = scheduler.schedule_card(card, correct=False)
    # Then card time_latest_review updated
    assert card.time_latest_review == scheduler.review_time
    # Then current_review_interval not updated
    assert card.current_review_interval == -1
# File: algorithms/__init__.py (repo: robinaar/genetic-algorithms, license: MIT)
from algorithms.bruteforce import bruteforce
# File: Test/PythonTCs/LTE_4G_IMSIGUTIAttachDetach.py (repo: clstefi/ignite, license: Apache-2.0)
#
# Copyright (c) 2019, Infosys Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys, requests
sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..', 'Dev', 'Common'))
import igniteCommonUtil as icu
import sshUtils as su
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'ROBOTCs', 'keywords', 'systemkeywords'))
import dictOperations as do

clr_flag = False
ssh_client = None
try:
    sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'MessageTemplates', 'Util'))
    from loadMessage import *
    command = "export LD_LIBRARY_PATH=" + mme_lib_path + " && " + mme_grpc_client_path + "/mme-grpc-client mme-app show procedure-stats"
    ssh_client = su.sshConnect(mmeIP, mme_username, mme_password, "ssh-password", timeout=10, port=None)
    proc_stat = su.executeCommand(command, ssh_client)
    ue_count_before_attach = int(do.splitProcStats(proc_stat, stats_type["subs_attached"]))
    num_of_processed_aia = int(do.splitProcStats(proc_stat, stats_type["processed_aia"]))
    num_of_processed_ula = int(do.splitProcStats(proc_stat, stats_type["processed_ula"]))
    num_of_del_session_resp = int(do.splitProcStats(proc_stat, stats_type["del_session_resp"]))
    num_of_handled_esm_info_resp = int(do.splitProcStats(proc_stat, stats_type["esm_info_resp"]))
    num_of_processed_sec_mode_resp = int(do.splitProcStats(proc_stat, stats_type["processed_sec_mode"]))
    num_of_processed_init_ctxt_resp = int(do.splitProcStats(proc_stat, stats_type["init_ctxt_resp"]))
    num_of_processed_purge_resp = int(do.splitProcStats(proc_stat, stats_type["purge_resp"]))
    # required message templates
    initial_ue_guti = json.loads(open('../MessageTemplates/S1AP/initial_uemessage_guti.json').read())
    nas_attach_request_guti = json.loads(open('../MessageTemplates/NAS/attach_request_guti.json').read())
    print("\n-------------------------------------\nIMSI Attach Detach and GUTI Attach Detach Execution Started\n---------------------------------------")
    igniteLogger.logger.info("\n---------------------------------------\nSend Attach Request to MME\n---------------------------------------")
    s1.sendS1ap('attach_request', initial_ue, enbues1ap_id, nas_attach_request, imsi)
    igniteLogger.logger.info("\n---------------------------------------\nHSS receives AIR from MME\n---------------------------------------")
    ds.receiveS6aMsg()
    igniteLogger.logger.info("\n---------------------------------------\nHSS sends AIA to MME\n---------------------------------------")
    ds.sendS6aMsg('authentication_info_response', msg_data_aia, imsi)
    igniteLogger.logger.info("\n---------------------------------------\nAuth Request received from MME\n---------------------------------------")
    s1.receiveS1ap()
    igniteLogger.logger.info("\n---------------------------------------\nSend Auth Response to MME\n---------------------------------------")
    s1.sendS1ap('authentication_response', uplinknastransport_auth_response, enbues1ap_id, nas_authentication_response)
    igniteLogger.logger.info("\n---------------------------------------\nSecurity Mode Command received from MME\n---------------------------------------")
    s1.receiveS1ap()
    igniteLogger.logger.info("\n---------------------------------------\nSend Security Mode Complete to MME\n---------------------------------------")
    s1.sendS1ap('securitymode_complete', uplinknastransport_securitymode_complete, enbues1ap_id, nas_securitymode_complete)
    igniteLogger.logger.info("\n---------------------------------------\nESM Information Request from MME\n---------------------------------------")
    s1.receiveS1ap()
    igniteLogger.logger.info("\n---------------------------------------\nESM Information Response to MME\n---------------------------------------")
    s1.sendS1ap('esm_information_response', uplinknastransport_esm_information_response, enbues1ap_id, nas_esm_information_response)
igniteLogger.logger.info("\n---------------------------------------\nHSS receives ULR from MME\n---------------------------------------")
ds.receiveS6aMsg()
igniteLogger.logger.info("\n---------------------------------------\nHSS sends ULA to MME\n---------------------------------------")
ds.sendS6aMsg('update_location_response', msg_data_ula, imsi)
clr_flag=True
igniteLogger.logger.info("\n---------------------------------------\nCreate Session Request received from MME\n---------------------------------------")
cs_req=gs.receiveGtp()
icu.validateProtocolIE(cs_req,'apn','apn1')
icu.validateProtocolIE(cs_req,'pdn_type',1)
igniteLogger.logger.info("\n---------------------------------------\nSend Create Session Response to MME\n---------------------------------------")
gs.sendGtp('create_session_response', create_session_response, msg_hierarchy)
igniteLogger.logger.info("\n---------------------------------------\nInitial Context Setup Request received from MME\n---------------------------------------")
init_ctxt_setup_req=s1.receiveS1ap()
igniteLogger.logger.info("\n---------------------------------------\nSend Initial Context Setup Response to MME\n---------------------------------------")
s1.sendS1ap('initial_context_setup_response', initialcontextsetup_response, enbues1ap_id)
time.sleep(1)
igniteLogger.logger.info("\n---------------------------------------\nSend Attach Complete to MME\n---------------------------------------")
s1.sendS1ap('attach_complete', uplinknastransport_attach_complete, enbues1ap_id, nas_attach_complete)
igniteLogger.logger.info("\n---------------------------------------\nModify Bearer Request received from MME\n---------------------------------------")
gs.receiveGtp()
igniteLogger.logger.info("\n---------------------------------------\nSend Modify Bearer Response to MME\n---------------------------------------")
gs.sendGtp('modify_bearer_response', modify_bearer_response, msg_hierarchy)
igniteLogger.logger.info("\n---------------------------------------\nEMM Information received from MME\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info("\n---------------------------------------\nUE is Attached\n---------------------------------------")
time.sleep(2)
mme_ue_S1AP_id, mme_ue_S1AP_id_present = icu.getKeyValueFromDict(init_ctxt_setup_req, "MME-UE-S1AP-ID")
mob_ctxt_command = "export LD_LIBRARY_PATH=" + mme_lib_path + " && " + mme_grpc_client_path + "/mme-grpc-client mme-app show mobile-context "+str(mme_ue_S1AP_id)
mob_ctxt_af_attach = su.executeCommand(mob_ctxt_command,ssh_client)
icu.mobileContextValidation(str(imsi),mob_ctxt_af_attach)
proc_stat_af_attach = su.executeCommand(command,ssh_client)
ue_count_after_attach = int(do.splitProcStats(proc_stat_af_attach, stats_type["subs_attached"]))
icu.grpcValidation(ue_count_before_attach + 1,ue_count_after_attach,"Number of Subs Attached")
num_of_processed_aia_afattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["processed_aia"]))
icu.grpcValidation(num_of_processed_aia + 1,num_of_processed_aia_afattach,"Number of Processed AIA")
num_of_processed_ula_afattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["processed_ula"]))
icu.grpcValidation(num_of_processed_ula + 1,num_of_processed_ula_afattach,"Number of Processed ULA")
num_of_handled_esm_info_resp_afattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["esm_info_resp"]))
icu.grpcValidation(num_of_handled_esm_info_resp + 1,num_of_handled_esm_info_resp_afattach,"Number of Handled ESM info response")
num_of_processed_sec_mode_resp_afattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["processed_sec_mode"]))
icu.grpcValidation(num_of_processed_sec_mode_resp + 1,num_of_processed_sec_mode_resp_afattach,"Number of Processed Sec Mode Response")
num_of_processed_init_ctxt_resp_afattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["init_ctxt_resp"]))
icu.grpcValidation(num_of_processed_init_ctxt_resp + 1,num_of_processed_init_ctxt_resp_afattach,"Number of Processed Initial Context Response")
igniteLogger.logger.info("\n---------------------------------------\nSend Detach Request to MME\n---------------------------------------")
s1.sendS1ap('detach_request', uplinknastransport_detach_request, enbues1ap_id, nas_detach_request)
igniteLogger.logger.info("\n---------------------------------------\nPurge Request received from MME\n---------------------------------------")
ds.receiveS6aMsg()
igniteLogger.logger.info("\n---------------------------------------\nDelete Session Request received from MME\n---------------------------------------")
gs.receiveGtp()
igniteLogger.logger.info("\n---------------------------------------\nSend Purge Response to MME\n---------------------------------------")
ds.sendS6aMsg('purge_response', msg_data_pua, imsi)
igniteLogger.logger.info("\n---------------------------------------\nSend Delete Session Response to MME\n---------------------------------------")
gs.sendGtp('delete_session_response', delete_session_response, msg_hierarchy)
igniteLogger.logger.info("\n---------------------------------------\nMME sends Detach Accept to UE\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info("\n---------------------------------------\neNB receives UE Context Release Request from MME\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info("\n---------------------------------------\neNB sends UE Context Release Complete to MME\n---------------------------------------")
s1.sendS1ap('ue_context_release_complete', uecontextrelease_complete, enbues1ap_id)
time.sleep(2)
proc_stat_af_detach = su.executeCommand(command,ssh_client)
ue_count_after_detach = int(do.splitProcStats(proc_stat_af_detach, stats_type["subs_attached"]))
icu.grpcValidation(ue_count_before_attach,ue_count_after_detach,"Number of Subs Attached After Detach")
num_of_del_session_resp_Afdetach = int(do.splitProcStats(proc_stat_af_detach, stats_type["del_session_resp"]))
icu.grpcValidation(num_of_del_session_resp+1,num_of_del_session_resp_Afdetach,"Number of Delete Session Responses")
uecountbeforereattach = int(do.splitProcStats(proc_stat_af_detach, stats_type["subs_attached"]))
num_of_processed_aia = int(do.splitProcStats(proc_stat_af_detach, stats_type["processed_aia"]))
num_of_processed_ula = int(do.splitProcStats(proc_stat_af_detach, stats_type["processed_ula"]))
num_of_del_session_resp = int(do.splitProcStats(proc_stat_af_detach, stats_type["del_session_resp"]))
num_of_handled_esm_info_resp = int(do.splitProcStats(proc_stat_af_detach, stats_type["esm_info_resp"]))
num_of_processed_sec_mode_resp = int(do.splitProcStats(proc_stat_af_detach, stats_type["processed_sec_mode"]))
num_of_processed_init_ctxt_resp = int(do.splitProcStats(proc_stat_af_detach, stats_type["init_ctxt_resp"]))
num_of_processed_purge_resp = int(do.splitProcStats(proc_stat_af_detach, stats_type["purge_resp"]))
igniteLogger.logger.info ("\n---------------------------------------\nSend Attach Request to MME\n---------------------------------------")
s1.sendS1ap('attach_request_guti',initial_ue_guti, enbues1ap_id, nas_attach_request_guti)
igniteLogger.logger.info ("\n---------------------------------------\nHSS receives AIR from MME\n---------------------------------------")
ds.receiveS6aMsg()
igniteLogger.logger.info ("\n---------------------------------------\nHSS sends AIA to MME\n---------------------------------------")
ds.sendS6aMsg('authentication_info_response', msg_data_aia, imsi)
igniteLogger.logger.info ("\n---------------------------------------\nAuth Request received from MME\n---------------------------------------")
authreq = s1.receiveS1ap()
igniteLogger.logger.info ("\n---------------------------------------\nSend Auth Response to MME\n---------------------------------------")
s1.sendS1ap('authentication_response', uplinknastransport_auth_response, enbues1ap_id, nas_authentication_response)
igniteLogger.logger.info ("\n---------------------------------------\nSecurity Mode Command received from MME\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info ("\n---------------------------------------\nSend Security Mode Complete to MME\n---------------------------------------")
s1.sendS1ap('securitymode_complete', uplinknastransport_securitymode_complete, enbues1ap_id, nas_securitymode_complete)
igniteLogger.logger.info("\n---------------------------------------\nESM Information Request from MME\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info("\n---------------------------------------\nESM Information Response to MME\n---------------------------------------")
s1.sendS1ap('esm_information_response', uplinknastransport_esm_information_response, enbues1ap_id, nas_esm_information_response)
igniteLogger.logger.info ("\n---------------------------------------\nHSS receives ULR from MME\n---------------------------------------")
ds.receiveS6aMsg()
igniteLogger.logger.info ("\n---------------------------------------\nHSS sends ULA to MME\n---------------------------------------")
ds.sendS6aMsg('update_location_response', msg_data_ula, imsi)
igniteLogger.logger.info ("\n---------------------------------------\nCreate Session Request received from MME\n---------------------------------------")
cs_req=gs.receiveGtp()
icu.validateProtocolIE(cs_req,'apn','apn1')
icu.validateProtocolIE(cs_req,'pdn_type',1)
igniteLogger.logger.info ("\n---------------------------------------\nSend Create Session Response to MME\n---------------------------------------")
gs.sendGtp('create_session_response', create_session_response, msg_hierarchy)
igniteLogger.logger.info ("\n---------------------------------------\nInitial Context Setup Request received from MME\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info ("\n---------------------------------------\nSend Initial Context Setup Response to MME\n---------------------------------------")
s1.sendS1ap('initial_context_setup_response',initialcontextsetup_response, enbues1ap_id)
time.sleep(1)
igniteLogger.logger.info ("\n---------------------------------------\nSend Attach Complete to MME\n---------------------------------------")
s1.sendS1ap('attach_complete', uplinknastransport_attach_complete, enbues1ap_id, nas_attach_complete)
igniteLogger.logger.info ("\n---------------------------------------\nModify Bearer Request received from MME\n---------------------------------------")
gs.receiveGtp()
igniteLogger.logger.info ("\n---------------------------------------\nSend Modify Bearer Response to MME\n---------------------------------------")
gs.sendGtp('modify_bearer_response', modify_bearer_response, msg_hierarchy)
igniteLogger.logger.info ("\n---------------------------------------\nUE is Attached\n---------------------------------------")
time.sleep(2)
proc_stat_af_attach = su.executeCommand(command,ssh_client)
uecountafterreattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["subs_attached"]))
icu.grpcValidation(uecountbeforereattach + 1,uecountafterreattach,"Number of Subs Attached After Reattach")
num_of_processed_aia_afreattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["processed_aia"]))
icu.grpcValidation(num_of_processed_aia + 1,num_of_processed_aia_afreattach,"Number of Processed AIA After Reattach")
num_of_processed_ula_afreattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["processed_ula"]))
icu.grpcValidation(num_of_processed_ula + 1,num_of_processed_ula_afreattach,"Number of Processed ULA After Reattach")
num_of_handled_esm_info_resp_afreattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["esm_info_resp"]))
icu.grpcValidation(num_of_handled_esm_info_resp + 1,num_of_handled_esm_info_resp_afreattach,"Number of Handled ESM info Response After Reattach")
num_of_processed_sec_mode_resp_afreattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["processed_sec_mode"]))
icu.grpcValidation(num_of_processed_sec_mode_resp + 1,num_of_processed_sec_mode_resp_afreattach,"Number of Processed Sec Mode Response After Reattach")
num_of_processed_init_ctxt_resp_afreattach = int(do.splitProcStats(proc_stat_af_attach, stats_type["init_ctxt_resp"]))
icu.grpcValidation(num_of_processed_init_ctxt_resp + 1,num_of_processed_init_ctxt_resp_afreattach,"Number of Processed Initial Context Response After Reattach")
igniteLogger.logger.info ("\n---------------------------------------\nSend Detach Request to MME\n---------------------------------------")
s1.sendS1ap('detach_request', uplinknastransport_detach_request, enbues1ap_id, nas_detach_request)
igniteLogger.logger.info("\n---------------------------------------\nPurge Request received from MME\n---------------------------------------")
ds.receiveS6aMsg()
igniteLogger.logger.info("\n---------------------------------------\nDelete Session Request received from MME\n---------------------------------------")
gs.receiveGtp()
igniteLogger.logger.info("\n---------------------------------------\nSend Purge Response to MME\n---------------------------------------")
ds.sendS6aMsg('purge_response', msg_data_pua, imsi)
igniteLogger.logger.info ("\n---------------------------------------\nSend Delete Session Response to MME\n---------------------------------------")
gs.sendGtp('delete_session_response', delete_session_response, msg_hierarchy)
igniteLogger.logger.info ("\n---------------------------------------\nMME sends Detach Accept to UE\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info ("\n---------------------------------------\neNB receives UE Context Release Request from MME\n---------------------------------------")
s1.receiveS1ap()
igniteLogger.logger.info ("\n---------------------------------------\neNB sends UE Context Release Complete to MME\n---------------------------------------")
s1.sendS1ap('ue_context_release_complete', uecontextrelease_complete, enbues1ap_id)
time.sleep(1)
proc_stat_af_detach = su.executeCommand(command,ssh_client)
ue_count_after_detach = int(do.splitProcStats(proc_stat_af_detach, stats_type["subs_attached"]))
icu.grpcValidation(uecountbeforereattach,ue_count_after_detach,"Number of Subs Attached After Detach 2")
num_of_del_session_resp_Afdetach = int(do.splitProcStats(proc_stat_af_detach, stats_type["del_session_resp"]))
icu.grpcValidation(num_of_del_session_resp+1,num_of_del_session_resp_Afdetach,"Number of Delete Session Responses After Detach 2")
num_of_purge_resp_sent_Afdetach = int(do.splitProcStats(proc_stat_af_detach, stats_type["purge_resp"]))
icu.grpcValidation(num_of_processed_purge_resp + 1, num_of_purge_resp_sent_Afdetach, "Number of Purge Response")
print ("\n-------------------------------------\nIMSI Attach Detach and GUTI Attach Detach Execution Successful\n---------------------------------------")
except Exception as e:
print("**********\nEXCEPTION:"+e.__class__.__name__+"\nError Details : "+str(e)+"\n**********")
if e.__class__.__name__ != "ConnectionError":
time.sleep(10)
igniteLogger.logger.info("\n---------------------------------------\nClearing Buffer\n---------------------------------------")
icu.clearBuffer()
finally:
if clr_flag:
igniteLogger.logger.info("\n---------------------------------------\nHSS sends CLR to MME\n---------------------------------------")
ds.sendS6aMsg('cancel_location_request', msg_data_clr, imsi)
igniteLogger.logger.info("\n---------------------------------------\nHSS receives CLA from MME\n---------------------------------------")
ds.receiveS6aMsg()
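The test above repeats one pattern throughout: snapshot the MME procedure-stats counters, drive an attach or detach, snapshot again, and assert the expected delta. A minimal self-contained sketch of that pattern follows; `parse_proc_stats`, `validate_delta`, and the `"name: value"` stats text are hypothetical stand-ins for `do.splitProcStats`, `icu.grpcValidation`, and the real `mme-grpc-client` output.

```python
def parse_proc_stats(proc_stat_text, stat_name):
    """Extract an integer counter from lines of the form 'name: value'."""
    for line in proc_stat_text.splitlines():
        name, sep, value = line.partition(':')
        if sep and name.strip() == stat_name:
            return int(value.strip())
    raise KeyError("stat not found: %s" % stat_name)

def validate_delta(before, after, expected_delta, label):
    """Assert that a counter moved by exactly expected_delta."""
    if after != before + expected_delta:
        raise AssertionError("%s: expected %d, got %d"
                             % (label, before + expected_delta, after))
    return True

# One attach should increment 'Subs Attached' by exactly one:
stats_before = "Subs Attached: 10\nProcessed AIA: 4"
stats_after = "Subs Attached: 11\nProcessed AIA: 5"
validate_delta(parse_proc_stats(stats_before, "Subs Attached"),
               parse_proc_stats(stats_after, "Subs Attached"),
               1, "Number of Subs Attached")
```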
# --- src/text_prob/__init__.py (repo: 0x71d3/text-prob, license: Apache-2.0) ---
from .gpt2_prob import GPT2Prob
# --- code/sample_5-1-2.py (repo: KoyanagiHitoshi/AtCoder-Python-Introduction, license: MIT) ---
x = ["a", "b", "c", "a"]
print(len(x))
# --- generated-libraries/python/netapp/ldap/__init__.py (repo: radekg/netapp-ontap-lib-get, license: MIT) ---
from netapp.connection import NaConnection
from ldap_search_scope import LdapSearchScope # 0 properties
from ldap_config import LdapConfig # 3 properties
from ldap_client import LdapClient # 22 properties
from ldap_dn import LdapDn # 0 properties
from ldap_auth_method import LdapAuthMethod # 0 properties
from ldap_client_schema import LdapClientSchema # 20 properties
from ldap_config_get_iter_key_td import LdapConfigGetIterKeyTd # 1 properties
from ldap_client_get_iter_key_td import LdapClientGetIterKeyTd # 2 properties
from ldap_client_schema_get_iter_key_td import LdapClientSchemaGetIterKeyTd # 2 properties
class LdapConnection(NaConnection):
def ldap_client_schema_copy(self, new_schema_name, schema):
"""
Copy an existing LDAP schema. If the LDAP server that the storage
system needs to query does not support any of the default
read-only schemas, this API can be used to create an editable
copy of an existing read-only schema. After copying the schema,
the copy can be modified using the ldap-client-schema-modify
API.
:param new_schema_name: New Schema Template Name
:param schema: A name for the schema.
"""
return self.request( "ldap-client-schema-copy", {
'new_schema_name': [ new_schema_name, 'new-schema-name', [ basestring, 'None' ], False ],
'schema': [ schema, 'schema', [ basestring, 'None' ], False ],
}, {
} )
def ldap_config_delete(self):
"""
Delete a Vserver's association with a Lightweight Directory
Access Protocol (LDAP) configuration.
"""
return self.request( "ldap-config-delete", {
}, {
} )
def ldap_config_modify(self, client_config=None, client_enabled=None):
"""
Modify the Lightweight Directory Access Protocol (LDAP)
configuration for a Vserver.
:param client_config: The name of an existing Lightweight Directory Access Protocol
(LDAP) client configuration. The LDAP client configuration can be
created using the ldap-client-create API. The
ldap-client-get-iter API can be used to retrieve the list of
available LDAP client configurations for the cluster.
:param client_enabled: If true, the corresponding Lightweight Directory Access Protocol
(LDAP) configuration is enabled for this Vserver.
"""
return self.request( "ldap-config-modify", {
'client_config': [ client_config, 'client-config', [ basestring, 'None' ], False ],
'client_enabled': [ client_enabled, 'client-enabled', [ bool, 'None' ], False ],
}, {
} )
def ldap_config_create(self, client_config, client_enabled, return_record=None):
"""
Create a new association between a Lightweight Directory Access
Protocol (LDAP) client configuration and a Vserver. A Vserver can
have only one client configuration associated with it.
:param client_config: The name of an existing Lightweight Directory Access Protocol
(LDAP) client configuration. The LDAP client configuration can be
created using the ldap-client-create API. The
ldap-client-get-iter API can be used to retrieve the list of
available LDAP client configurations for the cluster.
:param client_enabled: If true, the corresponding Lightweight Directory Access Protocol
(LDAP) configuration is enabled for this Vserver.
:param return_record: If set to true, returns the ldap-config on successful creation.
Default: false
"""
return self.request( "ldap-config-create", {
'return_record': [ return_record, 'return-record', [ bool, 'None' ], False ],
'client_config': [ client_config, 'client-config', [ basestring, 'None' ], False ],
'client_enabled': [ client_enabled, 'client-enabled', [ bool, 'None' ], False ],
}, {
'result': [ LdapConfig, False ],
} )
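Each API in this class passes its parameters to `self.request` as a spec of the form `{python_name: [value, 'wire-name', [type, 'None'], is_list]}`. As an illustration only (not the actual `NaConnection` marshaling, which also does type checking and XML encoding), such a spec can be reduced to wire-format names like this:

```python
def build_wire_params(spec):
    """Reduce {py_name: [value, wire_name, types, is_list]} to
    {wire_name: value}, dropping parameters the caller left as None.
    This only illustrates the shape of the dicts passed to self.request."""
    wire = {}
    for py_name, (value, wire_name, _types, _is_list) in spec.items():
        if value is not None:
            wire[wire_name] = value
    return wire

spec = {
    'client_config': ['ldap1', 'client-config', [str, 'None'], False],
    'client_enabled': [True, 'client-enabled', [bool, 'None'], False],
    'return_record': [None, 'return-record', [bool, 'None'], False],
}
build_wire_params(spec)  # -> {'client-config': 'ldap1', 'client-enabled': True}
```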
def ldap_client_delete(self, ldap_client_config):
"""
Delete an existing Lightweight Directory Access Protocol (LDAP)
client configuration from the cluster.
:param ldap_client_config: The name of the LDAP client configuration.
"""
return self.request( "ldap-client-delete", {
'ldap_client_config': [ ldap_client_config, 'ldap-client-config', [ basestring, 'None' ], False ],
}, {
} )
def ldap_client_schema_modify(self, schema, comment=None, cn_netgroup_attribute=None, posix_group_object_class=None, home_directory_attribute=None, member_uid_attribute=None, gid_number_attribute=None, nis_netgroup_triple_attribute=None, gecos_attribute=None, uid_attribute=None, cn_group_attribute=None, uid_number_attribute=None, login_shell_attribute=None, user_password_attribute=None, posix_account_object_class=None, nis_netgroup_object_class=None, windows_account_attribute=None, member_nis_netgroup_attribute=None):
"""
Modify an existing Lightweight Directory Access Protocol (LDAP)
schema configuration. If the LDAP server that the storage system
needs to query does not support any of the default read-only
schemas, the ldap-client-schema-copy API can be used to create an
editable copy of an existing read-only schema. After copying the
schema, the copy can be modified using this API to support the
target schema.
:param schema: A name for the schema.
:param comment: A comment that can be associated with the schema.
:param cn_netgroup_attribute: Name that represents the RFC 2256 cn attribute used by RFC 2307
when working with netgroups.
:param posix_group_object_class: Name that represents the RFC 2307 posixGroup object class.
:param home_directory_attribute: Name that represents the RFC 2307 homeDirectory attribute.
:param member_uid_attribute: Name that represents the RFC 2307 memberUid attribute.
:param gid_number_attribute: Name that represents the RFC 2307 gidNumber attribute.
:param nis_netgroup_triple_attribute: Name that represents the RFC 2307 nisNetgroupTriple attribute.
:param gecos_attribute: Name that represents the RFC 2307 gecos attribute.
:param uid_attribute: Name that represents the RFC 1274 userid attribute used by RFC
2307 as uid.
:param cn_group_attribute: Name that represents the RFC 2256 cn attribute used by RFC 2307
when working with groups.
:param uid_number_attribute: Name that represents the RFC 2307 uidNumber attribute.
:param login_shell_attribute: Name that represents the RFC 2307 loginShell attribute.
:param user_password_attribute: Name that represents the RFC 2256 userPassword attribute used by
RFC 2307.
:param posix_account_object_class: Name that represents the RFC 2307 posixAccount object class.
:param nis_netgroup_object_class: Name that represents the RFC 2307 nisNetgroup object class.
:param windows_account_attribute: Attribute name to be used to get the windows account information
for a unix user account.
:param member_nis_netgroup_attribute: Name that represents the RFC 2307 memberNisNetgroup attribute.
"""
return self.request( "ldap-client-schema-modify", {
'comment': [ comment, 'comment', [ basestring, 'None' ], False ],
'cn_netgroup_attribute': [ cn_netgroup_attribute, 'cn-netgroup-attribute', [ basestring, 'None' ], False ],
'posix_group_object_class': [ posix_group_object_class, 'posix-group-object-class', [ basestring, 'None' ], False ],
'home_directory_attribute': [ home_directory_attribute, 'home-directory-attribute', [ basestring, 'None' ], False ],
'member_uid_attribute': [ member_uid_attribute, 'member-uid-attribute', [ basestring, 'None' ], False ],
'gid_number_attribute': [ gid_number_attribute, 'gid-number-attribute', [ basestring, 'None' ], False ],
'nis_netgroup_triple_attribute': [ nis_netgroup_triple_attribute, 'nis-netgroup-triple-attribute', [ basestring, 'None' ], False ],
'gecos_attribute': [ gecos_attribute, 'gecos-attribute', [ basestring, 'None' ], False ],
'uid_attribute': [ uid_attribute, 'uid-attribute', [ basestring, 'None' ], False ],
'cn_group_attribute': [ cn_group_attribute, 'cn-group-attribute', [ basestring, 'None' ], False ],
'uid_number_attribute': [ uid_number_attribute, 'uid-number-attribute', [ basestring, 'None' ], False ],
'login_shell_attribute': [ login_shell_attribute, 'login-shell-attribute', [ basestring, 'None' ], False ],
'user_password_attribute': [ user_password_attribute, 'user-password-attribute', [ basestring, 'None' ], False ],
'posix_account_object_class': [ posix_account_object_class, 'posix-account-object-class', [ basestring, 'None' ], False ],
'nis_netgroup_object_class': [ nis_netgroup_object_class, 'nis-netgroup-object-class', [ basestring, 'None' ], False ],
'windows_account_attribute': [ windows_account_attribute, 'windows-account-attribute', [ basestring, 'None' ], False ],
'member_nis_netgroup_attribute': [ member_nis_netgroup_attribute, 'member-nis-netgroup-attribute', [ basestring, 'None' ], False ],
'schema': [ schema, 'schema', [ basestring, 'None' ], False ],
}, {
} )
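As a usage illustration, a custom mapping for a directory that follows the standard RFC 2307 attribute names might look like the following. The schema name `'MY-RFC-2307-COPY'` and the connection object `conn` are hypothetical; a read-only schema must first be made editable with `ldap_client_schema_copy` before it can be modified.

```python
custom_schema_kwargs = {
    'schema': 'MY-RFC-2307-COPY',
    'uid_attribute': 'uid',
    'uid_number_attribute': 'uidNumber',
    'gid_number_attribute': 'gidNumber',
    'home_directory_attribute': 'homeDirectory',
    'login_shell_attribute': 'loginShell',
    'posix_account_object_class': 'posixAccount',
    'posix_group_object_class': 'posixGroup',
    'member_uid_attribute': 'memberUid',
}
# conn.ldap_client_schema_modify(**custom_schema_kwargs)
```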
def ldap_client_create(self, ldap_client_config, schema, user_scope=None, use_start_tls=None, return_record=None, bind_dn=None, group_dn=None, tcp_port=None, preferred_ad_servers=None, bind_as_cifs_server=None, base_scope=None, servers=None, netgroup_scope=None, group_scope=None, netgroup_dn=None, user_dn=None, min_bind_level=None, ad_domain=None, query_timeout=None, bind_password=None, base_dn=None):
"""
Create a new Lightweight Directory Access Protocol (LDAP) client
configuration for the cluster.
:param ldap_client_config: The name of the LDAP client configuration.
:param schema: LDAP schema to use for this configuration. The list of possible
schemas can be obtained using the ldap-client-schema-get-iter
API.
:param user_scope: This indicates the scope for LDAP search when doing user
lookups.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param use_start_tls: This indicates if start_tls will be used over LDAP connections.
:param return_record: If set to true, returns the ldap-client on successful creation.
Default: false
:param bind_dn: The Bind Distinguished Name (DN) is the LDAP identity used during
the authentication process by the clients. This is required if
the LDAP server does not support anonymous binds. This field is
not used if 'bind-as-cfs-server' is set to 'true'. Example :
cn=username,cn=Users,dc=example,dc=com
:param group_dn: The Group Distinguished Name (DN), if specified, is used as the
starting point in the LDAP directory tree for group lookups. If
not specified, group lookups will start at the base-dn.
:param tcp_port: The TCP port on the LDAP server to use for this configuration. If
omitted, this parameter defaults to 389.
:param preferred_ad_servers: Preferred Active Directory (AD) Domain controllers to use for
this configuration. This option is ONLY applicable for
configurations using Active Directory LDAP servers
:param bind_as_cifs_server: If set, the cluster will use the CIFS server's credentials to
bind to the LDAP server. If omitted, this parameter defaults to
'true' if the configuration uses Active Directory LDAP and
defaults to 'false' otherwise.
:param base_scope: This indicates the scope for LDAP search. If omitted, this
parameter defaults to 'subtree'.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param servers: List of LDAP Server IP addresses to use for this configuration.
The option is NOT applicable for configurations using Active
Directory LDAP servers.
:param netgroup_scope: This indicates the scope for LDAP search when doing netgroup
lookups.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param group_scope: This indicates the scope for LDAP search when doing group
lookups.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param netgroup_dn: The Netgroup Distinguished Name (DN), if specified, is used as the
starting point in the LDAP directory tree for netgroup lookups.
If not specified, netgroup lookups will start at the base-dn.
:param user_dn: The User Distinguished Name (DN), if specified, is used as the
starting point in the LDAP directory tree for user lookups. If
this parameter is omitted, user lookups will start at the
base-dn.
:param min_bind_level: The minimum authentication level that can be used to authenticate
with the LDAP server. If omitted, this parameter defaults to
'sasl' if the configuration uses Active Directory LDAP. For
configurations that use LDAP servers from other vendors, this
parameter defaults to 'simple' if a 'bind-dn' is specified and
'anonymous' otherwise.
Possible values:
<ul>
<li> "anonymous" - Anonymous bind,
<li> "simple" - Simple bind,
<li> "sasl" - Simple Authentication and Security Layer
(SASL) bind
</ul>
:param ad_domain: The Active Directory Domain Name for this LDAP configuration. The
option is ONLY applicable for configurations using Active
Directory LDAP servers.
:param query_timeout: Maximum time in seconds to wait for a query response from the
LDAP server. The default for this parameter is 3 seconds.
:param bind_password: The password to be used with the bind-dn.
:param base_dn: Indicates the starting point for searches within the LDAP
directory tree. If omitted, searches will start at the root of
the directory tree.
"""
return self.request( "ldap-client-create", {
'user_scope': [ user_scope, 'user-scope', [ basestring, 'ldap-search-scope' ], False ],
'use_start_tls': [ use_start_tls, 'use-start-tls', [ bool, 'None' ], False ],
'return_record': [ return_record, 'return-record', [ bool, 'None' ], False ],
'ldap_client_config': [ ldap_client_config, 'ldap-client-config', [ basestring, 'None' ], False ],
'bind_dn': [ bind_dn, 'bind-dn', [ basestring, 'ldap-dn' ], False ],
'group_dn': [ group_dn, 'group-dn', [ basestring, 'ldap-dn' ], False ],
'tcp_port': [ tcp_port, 'tcp-port', [ int, 'None' ], False ],
'preferred_ad_servers': [ preferred_ad_servers, 'preferred-ad-servers', [ basestring, 'ip-address' ], True ],
'bind_as_cifs_server': [ bind_as_cifs_server, 'bind-as-cifs-server', [ bool, 'None' ], False ],
'base_scope': [ base_scope, 'base-scope', [ basestring, 'ldap-search-scope' ], False ],
'servers': [ servers, 'servers', [ basestring, 'ip-address' ], True ],
'netgroup_scope': [ netgroup_scope, 'netgroup-scope', [ basestring, 'ldap-search-scope' ], False ],
'group_scope': [ group_scope, 'group-scope', [ basestring, 'ldap-search-scope' ], False ],
'netgroup_dn': [ netgroup_dn, 'netgroup-dn', [ basestring, 'ldap-dn' ], False ],
'user_dn': [ user_dn, 'user-dn', [ basestring, 'ldap-dn' ], False ],
'min_bind_level': [ min_bind_level, 'min-bind-level', [ basestring, 'ldap-auth-method' ], False ],
'ad_domain': [ ad_domain, 'ad-domain', [ basestring, 'None' ], False ],
'query_timeout': [ query_timeout, 'query-timeout', [ int, 'None' ], False ],
'bind_password': [ bind_password, 'bind-password', [ basestring, 'None' ], False ],
'base_dn': [ base_dn, 'base-dn', [ basestring, 'ldap-dn' ], False ],
'schema': [ schema, 'schema', [ basestring, 'None' ], False ],
}, {
'result': [ LdapClient, False ],
} )
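Every parameter above is packed into the same four-element spec -- [value, 'zapi-field-name', [python-type, 'zapi-type'], accepts-list] -- before being handed to self.request. A minimal sketch of that convention (the validate_spec helper is illustrative, not part of this SDK, and uses str where the Python 2 code above uses basestring):

```python
def validate_spec(spec):
    # Unpack the [value, zapi_name, [py_type, zapi_type], is_list] convention
    # used by every self.request() call above.
    value, zapi_name, (py_type, zapi_type), is_list = spec
    if value is None:
        return True  # omitted optional parameter, nothing to check
    if is_list:
        return isinstance(value, list) and all(isinstance(v, py_type) for v in value)
    return isinstance(value, py_type)

# 'tcp-port' takes a single integer.
assert validate_spec([389, 'tcp-port', [int, 'None'], False])
# 'servers' accepts a list of address strings.
assert validate_spec([['10.0.0.1', '10.0.0.2'], 'servers', [str, 'ip-address'], True])
# An omitted optional parameter passes through untouched.
assert validate_spec([None, 'schema', [str, 'None'], False])
```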
def ldap_client_get_iter(self, max_records=None, query=None, tag=None, desired_attributes=None):
"""
Retrieve the list of Lightweight Directory Access Protocol (LDAP)
client configurations for the cluster.
:param max_records: The maximum number of records to return in this call.
Default: 20
:param query: A query that specifies which objects to return.
A query could be specified on any number of attributes in the
ldap-client object.
All ldap-client objects matching this query up to 'max-records'
will be returned.
:param tag: Specify the tag from the last call.
It is usually not specified for the first call. For subsequent
calls, copy values from the 'next-tag' obtained from the previous
call.
:param desired_attributes: Specify the attributes that should be returned.
If not present, all attributes for which information is available
will be returned.
If present, only the desired attributes for which information is
available will be returned.
"""
return self.request( "ldap-client-get-iter", {
'max_records': max_records,
'query': [ query, 'query', [ LdapClient, 'None' ], False ],
'tag': tag,
'desired_attributes': [ desired_attributes, 'desired-attributes', [ LdapClient, 'None' ], False ],
}, {
'attributes-list': [ LdapClient, True ],
} )
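The tag/'next-tag' contract described in the docstring is the usual ZAPI pagination loop: omit the tag on the first call, then feed each response's 'next-tag' back until none is returned. A rough sketch against a stubbed request callable (the stub and its record shape are made up for illustration):

```python
def iterate_all(request, max_records=20):
    # Collect every record from a tag-paginated *-get-iter style call.
    # `request` is any callable taking (max_records, tag) and returning a
    # dict with 'records' and an optional 'next-tag'.
    records, tag = [], None
    while True:
        response = request(max_records=max_records, tag=tag)
        records.extend(response.get('records', []))
        tag = response.get('next-tag')
        if not tag:  # no next-tag means the listing is exhausted
            return records

# Stub that serves 5 records one page (2 records) at a time.
def fake_request(max_records, tag):
    start = int(tag or 0)
    page = list(range(start, min(start + 2, 5)))
    nxt = str(start + 2) if start + 2 < 5 else None
    return {'records': page, 'next-tag': nxt}

assert iterate_all(fake_request) == [0, 1, 2, 3, 4]
```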
def ldap_client_schema_delete(self, schema):
"""
Delete an existing Lightweight Directory Access Protocol (LDAP)
schema configuration. Only the schemas that are defined using the
ldap-client-schema-copy API can be deleted using this API.
:param schema: A name for the schema.
"""
return self.request( "ldap-client-schema-delete", {
'schema': [ schema, 'schema', [ basestring, 'None' ], False ],
}, {
} )
def ldap_client_modify(self, ldap_client_config, user_scope=None, use_start_tls=None, bind_dn=None, group_dn=None, tcp_port=None, preferred_ad_servers=None, bind_as_cifs_server=None, base_scope=None, servers=None, netgroup_scope=None, group_scope=None, netgroup_dn=None, user_dn=None, min_bind_level=None, ad_domain=None, query_timeout=None, bind_password=None, base_dn=None, schema=None):
"""
Modify an existing Lightweight Directory Access Protocol (LDAP)
client configuration.
:param ldap_client_config: The name of the LDAP client configuration.
:param user_scope: This indicates the scope for LDAP search when doing user
lookups.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param use_start_tls: This indicates whether start_tls will be used over LDAP connections.
:param bind_dn: The Bind Distinguished Name (DN) is the LDAP identity used during
the authentication process by the clients. This is required if
the LDAP server does not support anonymous binds. This field is
not used if 'bind-as-cifs-server' is set to 'true'. Example:
cn=username,cn=Users,dc=example,dc=com
:param group_dn: The Group Distinguished Name (DN), if specified, is used as the
starting point in the LDAP directory tree for group lookups. If
not specified, group lookups will start at the base-dn.
:param tcp_port: The TCP port on the LDAP server to use for this configuration. If
omitted, this parameter defaults to 389.
:param preferred_ad_servers: Preferred Active Directory (AD) Domain controllers to use for
this configuration. This option is ONLY applicable for
configurations using Active Directory LDAP servers
:param bind_as_cifs_server: If set, the cluster will use the CIFS server's credentials to
bind to the LDAP server. If omitted, this parameter defaults to
'true' if the configuration uses Active Directory LDAP and
defaults to 'false' otherwise.
:param base_scope: This indicates the scope for LDAP search. If omitted, this
parameter defaults to 'subtree'.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param servers: List of LDAP Server IP addresses to use for this configuration.
The option is NOT applicable for configurations using Active
Directory LDAP servers.
:param netgroup_scope: This indicates the scope for LDAP search when doing netgroup
lookups.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param group_scope: This indicates the scope for LDAP search when doing group
lookups.
Possible values:
<ul>
<li> "base" - Search only the base directory entry,
<li> "onelevel" - Search the base directory entry and the
children of the base entry,
<li> "subtree" - Search the base directory entry and all its
descendants
</ul>
:param netgroup_dn: The Netgroup Distinguished Name (DN), if specified, is used as the
starting point in the LDAP directory tree for netgroup lookups.
If not specified, netgroup lookups will start at the base-dn.
:param user_dn: The User Distinguished Name (DN), if specified, is used as the
starting point in the LDAP directory tree for user lookups. If
this parameter is omitted, user lookups will start at the
base-dn.
:param min_bind_level: The minimum authentication level that can be used to authenticate
with the LDAP server. If omitted, this parameter defaults to
'sasl' if the configuration uses Active Directory LDAP. For
configurations that use LDAP servers from other vendors, this
parameter defaults to 'simple' if a 'bind-dn' is specified and
'anonymous' otherwise.
Possible values:
<ul>
<li> "anonymous" - Anonymous bind,
<li> "simple" - Simple bind,
<li> "sasl" - Simple Authentication and Security Layer
(SASL) bind
</ul>
:param ad_domain: The Active Directory Domain Name for this LDAP configuration. The
option is ONLY applicable for configurations using Active
Directory LDAP servers.
:param query_timeout: Maximum time in seconds to wait for a query response from the
LDAP server. The default for this parameter is 3 seconds.
:param bind_password: The password to be used with the bind-dn.
:param base_dn: Indicates the starting point for searches within the LDAP
directory tree. If omitted, searches will start at the root of
the directory tree.
:param schema: LDAP schema to use for this configuration. The list of possible
schemas can be obtained using the ldap-client-schema-get-iter
API.
"""
return self.request( "ldap-client-modify", {
'user_scope': [ user_scope, 'user-scope', [ basestring, 'ldap-search-scope' ], False ],
'use_start_tls': [ use_start_tls, 'use-start-tls', [ bool, 'None' ], False ],
'ldap_client_config': [ ldap_client_config, 'ldap-client-config', [ basestring, 'None' ], False ],
'bind_dn': [ bind_dn, 'bind-dn', [ basestring, 'ldap-dn' ], False ],
'group_dn': [ group_dn, 'group-dn', [ basestring, 'ldap-dn' ], False ],
'tcp_port': [ tcp_port, 'tcp-port', [ int, 'None' ], False ],
'preferred_ad_servers': [ preferred_ad_servers, 'preferred-ad-servers', [ basestring, 'ip-address' ], True ],
'bind_as_cifs_server': [ bind_as_cifs_server, 'bind-as-cifs-server', [ bool, 'None' ], False ],
'base_scope': [ base_scope, 'base-scope', [ basestring, 'ldap-search-scope' ], False ],
'servers': [ servers, 'servers', [ basestring, 'ip-address' ], True ],
'netgroup_scope': [ netgroup_scope, 'netgroup-scope', [ basestring, 'ldap-search-scope' ], False ],
'group_scope': [ group_scope, 'group-scope', [ basestring, 'ldap-search-scope' ], False ],
'netgroup_dn': [ netgroup_dn, 'netgroup-dn', [ basestring, 'ldap-dn' ], False ],
'user_dn': [ user_dn, 'user-dn', [ basestring, 'ldap-dn' ], False ],
'min_bind_level': [ min_bind_level, 'min-bind-level', [ basestring, 'ldap-auth-method' ], False ],
'ad_domain': [ ad_domain, 'ad-domain', [ basestring, 'None' ], False ],
'query_timeout': [ query_timeout, 'query-timeout', [ int, 'None' ], False ],
'bind_password': [ bind_password, 'bind-password', [ basestring, 'None' ], False ],
'base_dn': [ base_dn, 'base-dn', [ basestring, 'ldap-dn' ], False ],
'schema': [ schema, 'schema', [ basestring, 'None' ], False ],
}, {
} )
def ldap_config_get_iter(self, max_records=None, query=None, tag=None, desired_attributes=None):
"""
Retrieve the list of Lightweight Directory Access Protocol (LDAP)
configurations in the cluster.
:param max_records: The maximum number of records to return in this call.
Default: 20
:param query: A query that specifies which objects to return.
A query could be specified on any number of attributes in the
ldap-config object.
All ldap-config objects matching this query up to 'max-records'
will be returned.
:param tag: Specify the tag from the last call.
It is usually not specified for the first call. For subsequent
calls, copy values from the 'next-tag' obtained from the previous
call.
:param desired_attributes: Specify the attributes that should be returned.
If not present, all attributes for which information is available
will be returned.
If present, only the desired attributes for which information is
available will be returned.
"""
return self.request( "ldap-config-get-iter", {
'max_records': max_records,
'query': [ query, 'query', [ LdapConfig, 'None' ], False ],
'tag': tag,
'desired_attributes': [ desired_attributes, 'desired-attributes', [ LdapConfig, 'None' ], False ],
}, {
'attributes-list': [ LdapConfig, True ],
} )
def ldap_client_schema_get_iter(self, max_records=None, query=None, tag=None, desired_attributes=None):
"""
Retrieve the list of Lightweight Directory Access Protocol (LDAP)
client schema configurations that are defined for the cluster.
:param max_records: The maximum number of records to return in this call.
Default: 20
:param query: A query that specifies which objects to return.
A query could be specified on any number of attributes in the
ldap-client-schema object.
All ldap-client-schema objects matching this query up to
'max-records' will be returned.
:param tag: Specify the tag from the last call.
It is usually not specified for the first call. For subsequent
calls, copy values from the 'next-tag' obtained from the previous
call.
:param desired_attributes: Specify the attributes that should be returned.
If not present, all attributes for which information is available
will be returned.
If present, only the desired attributes for which information is
available will be returned.
"""
return self.request( "ldap-client-schema-get-iter", {
'max_records': max_records,
'query': [ query, 'query', [ LdapClientSchema, 'None' ], False ],
'tag': tag,
'desired_attributes': [ desired_attributes, 'desired-attributes', [ LdapClientSchema, 'None' ], False ],
}, {
'attributes-list': [ LdapClientSchema, True ],
} )
| 56.211073 | 527 | 0.613543 | 3,787 | 32,490 | 5.143913 | 0.072881 | 0.032854 | 0.031212 | 0.025873 | 0.866068 | 0.826745 | 0.788296 | 0.761499 | 0.729107 | 0.700616 | 0 | 0.004485 | 0.306833 | 32,490 | 577 | 528 | 56.308492 | 0.860486 | 0.543552 | 0 | 0.555556 | 0 | 0 | 0.247974 | 0.051774 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.041667 | 0.069444 | 0 | 0.243056 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6b7541611c7cad8466c6d3dc2c43e4e34c910c37 | 35 | py | Python | src/models/__init__.py | Platzi-Master-C8/research-jobplacement | af62f71168e5f6945d6c6efcf65ed2a4c721bab4 | [
"MIT"
] | 1 | 2022-01-24T17:57:51.000Z | 2022-01-24T17:57:51.000Z | src/models/__init__.py | Platzi-Master-C8/research-jobplacement | af62f71168e5f6945d6c6efcf65ed2a4c721bab4 | [
"MIT"
] | null | null | null | src/models/__init__.py | Platzi-Master-C8/research-jobplacement | af62f71168e5f6945d6c6efcf65ed2a4c721bab4 | [
"MIT"
] | null | null | null | from .job_placement_model import *
| 17.5 | 34 | 0.828571 | 5 | 35 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.870968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6b951a7a328a6a40d0137780aa1eb4b2ffe198eb | 44 | py | Python | vnpy/gateway/binance/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 323 | 2015-11-21T14:45:29.000Z | 2022-03-16T08:54:37.000Z | vnpy/gateway/binance/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 9 | 2017-03-21T08:26:21.000Z | 2021-08-23T06:41:17.000Z | vnpy/gateway/binance/__init__.py | funrunskypalace/vnpy | 2d87aede685fa46278d8d3392432cc127b797926 | [
"MIT"
] | 148 | 2016-09-26T03:25:39.000Z | 2022-02-06T14:43:48.000Z | from .binance_gateway import BinanceGateway
| 22 | 43 | 0.886364 | 5 | 44 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 44 | 1 | 44 | 44 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6bb807f1416b6c3bc126ee554dfbaf7e6dd3190d | 43 | py | Python | edflow/metrics/__init__.py | ffeldmann/edflow | a5318e8bf9e791e0b6c9b336f728a59a4330f25d | [
"MIT"
] | 23 | 2019-04-04T07:52:57.000Z | 2022-02-02T03:11:07.000Z | edflow/metrics/__init__.py | ffeldmann/edflow | a5318e8bf9e791e0b6c9b336f728a59a4330f25d | [
"MIT"
] | 149 | 2019-04-04T09:53:01.000Z | 2020-07-21T16:55:32.000Z | edflow/metrics/__init__.py | ArWeHei/edflow | 3383cfbc42a43e906bc7781ad05714fd4fc9616e | [
"MIT"
] | 12 | 2019-04-04T07:52:58.000Z | 2020-08-28T12:30:03.000Z | from edflow.metrics.image_metrics import *
| 21.5 | 42 | 0.837209 | 6 | 43 | 5.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6bd0efcd0a5c7de11c1d126297c8293d4bca7bd5 | 18,623 | py | Python | scripts/step3.py | PhantomInsights/mexican-jobs-2020 | 831e95b3656867e64620881e5308faad74cd6c1f | [
"MIT"
] | 22 | 2020-09-06T20:29:08.000Z | 2021-11-09T11:02:54.000Z | scripts/step3.py | PhantomInsights/mexican-jobs-2020 | 831e95b3656867e64620881e5308faad74cd6c1f | [
"MIT"
] | null | null | null | scripts/step3.py | PhantomInsights/mexican-jobs-2020 | 831e95b3656867e64620881e5308faad74cd6c1f | [
"MIT"
] | 2 | 2020-09-08T18:24:42.000Z | 2020-11-11T17:16:12.000Z | """
Functions used to generate the EDA on the mexican job offers dataset.
"""
import json
import pandas as pd
import plotly.graph_objects as go
def days_stats(df):
"""Gets the daily counts by weekday and plots the daily counts.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
weekdays = df["isodate"].dt.weekday.value_counts()
weekdays.sort_index(inplace=True)
print(weekdays.to_markdown(floatfmt=",.0f"))
# Create a Series with the counts of each day.
days_counts = df["isodate"].value_counts()
days_counts.sort_index(inplace=True)
# Create a new DataFrame with only the data of Mondays.
monday_df = days_counts[days_counts.index.weekday == 0]
# Initialize our Figure.
fig = go.Figure()
fig.add_traces(go.Scatter(x=days_counts.index, y=days_counts.values, line_color="#ffa000",
mode="markers+lines", line_width=3, marker_size=12))
# Highlight Mondays with a bigger marker.
fig.add_traces(go.Scatter(x=monday_df.index, y=monday_df.values,
line_color="#c6ff00", mode="markers", marker_size=18))
fig.update_xaxes(title="Date (2020)", ticks="outside", ticklen=10, gridwidth=0.5,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=12)
fig.update_yaxes(title="Number of Job Offers", ticks="outside", ticklen=10, gridwidth=0.5,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=12)
# Add final customizations.
fig.update_layout(
showlegend=False,
width=1200,
height=800,
font_color="#FFFFFF",
font_size=18,
title_text="Job Offers by Day",
title_x=0.5,
title_y=0.93,
margin_l=120,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("1.png")
def salaries_stats(df):
"""Plots the salaries distribution in an Histograms.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
print(df["salary"].describe().to_markdown(floatfmt=",.0f"))
salaries = df[df["salary"] <= 35000]
fig = go.Figure()
fig.add_traces(go.Histogram(
x=salaries["salary"], nbinsx=35, marker_color="#ffa000"))
fig.update_xaxes(title="Monthly Salary", ticks="outside", ticklen=10, gridwidth=0.5,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=35, title_standoff=20)
fig.update_yaxes(title="Number of Job Offers", ticks="outside", ticklen=10, gridwidth=0.5,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=12, title_standoff=5)
# Add final customizations.
fig.update_layout(
showlegend=False,
width=1200,
height=800,
font_color="#FFFFFF",
font_size=18,
title_text="Salaries Distribution",
title_x=0.5,
title_y=0.93,
margin_l=120,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("2.png")
def plot_states_offers(df):
"""Plots the job offers distribution in av horizontal Bar plot.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
states_series = df["state"].value_counts()
fig = go.Figure()
fig.add_traces(go.Bar(x=states_series.values, y=states_series.index, text=states_series.values,
orientation="h", marker={"color": states_series.values, "colorscale": "tropic"}))
fig.update_xaxes(title="Number of Job Offers", ticks="outside", ticklen=10, tickcolor="#FFFFFF",
linewidth=2, showline=True, mirror=True, nticks=12, title_standoff=30, gridwidth=0.5, range=[0, states_series.values.max() * 1.1])
fig.update_yaxes(title="", ticks="outside", ticklen=10, showgrid=False,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True)
fig.update_traces(texttemplate=" %{text:,.0f}", textposition="outside")
# Add final customizations.
fig.update_layout(
uniformtext_minsize=8,
uniformtext_mode="hide",
showlegend=False,
width=1200,
height=1400,
font_color="#FFFFFF",
font_size=18,
title_text="Job Offers by State",
title_x=0.5,
title_y=0.96,
margin_l=120,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("3.png")
def plot_states_map(df):
"""Plots the job offers distribution in a Choropleth map.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
states_series = df["state"].value_counts()
geojson = json.loads(open("mexico.json", "r", encoding="utf-8").read())
fig = go.Figure()
fig.add_traces(go.Choropleth(geojson=geojson,
locations=states_series.index,
z=states_series.values,
featureidkey="properties.ADMIN_NAME",
marker_line_color="#FFFFFF",
marker_line_width=1,
colorbar_outlinecolor="#FFFFFF",
colorbar_outlinewidth=1.75,
colorbar_ticks="outside",
colorbar_ticklen=10,
colorbar_tickcolor="#FFFFFF"))
fig.update_geos(fitbounds="locations",
showocean=True, oceancolor="#263238",
showcountries=True, countrycolor="#FFFFFF", countrywidth=1.5,
framewidth=2, framecolor="#FFFFFF",
showlakes=False,
landcolor="#1B2327")
# Add final customizations.
fig.update_layout(
title_text="Job Offers by State",
title_x=0.5,
title_y=0.96,
title_font_size=30,
font_color="#FFFFFF",
margin={"r": 50, "t": 50, "l": 50, "b": 50},
width=1200,
height=650,
paper_bgcolor="#37474f",
)
fig.write_image("4.png")
def plot_states_median_salary(df):
"""Plots the median salary in each state using an horizontal Bar chart.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
median_salaries = df.pivot_table(
index="state", values="salary", aggfunc="median").sort_values("salary", ascending=False)
fig = go.Figure()
fig.add_traces(go.Bar(x=median_salaries["salary"], y=median_salaries.index, text=median_salaries["salary"],
orientation="h", marker={"color": median_salaries["salary"], "colorscale": "peach"}))
fig.update_xaxes(title="Monthly Median Salary in MXN", ticks="outside", ticklen=10, tickcolor="#FFFFFF", separatethousands=True,
linewidth=2, showline=True, mirror=True, nticks=12, title_standoff=30, gridwidth=0.5, range=[0, median_salaries["salary"].max() * 1.1])
fig.update_yaxes(title="", ticks="outside", ticklen=10, showgrid=False,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True)
fig.update_traces(texttemplate=" %{text:,.0f}", textposition="outside")
# Add final customizations.
fig.update_layout(
uniformtext_minsize=8,
uniformtext_mode="hide",
showlegend=False,
width=1200,
height=1400,
font_color="#FFFFFF",
font_size=18,
title_text="Median Salary by State",
title_x=0.5,
title_y=0.96,
margin_l=120,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("5.png")
def plot_median_salary_map(df):
"""Plots the median salary by state in a Choropleth map.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
median_salaries = df.pivot_table(
index="state", values="salary", aggfunc="median").sort_values("salary", ascending=False)
geojson = json.loads(open("mexico.json", "r", encoding="utf-8").read())
fig = go.Figure()
fig.add_traces(go.Choropleth(geojson=geojson,
locations=median_salaries.index,
z=median_salaries["salary"],
featureidkey="properties.ADMIN_NAME",
marker_line_color="#FFFFFF",
marker_line_width=1,
colorbar_outlinecolor="#FFFFFF",
colorbar_outlinewidth=1.75,
colorbar_separatethousands=True,
colorbar_ticks="outside",
colorbar_ticklen=10,
colorbar_tickcolor="#FFFFFF"))
fig.update_geos(fitbounds="locations",
showocean=True, oceancolor="#263238",
showcountries=True, countrycolor="#FFFFFF", countrywidth=1.5,
framewidth=2, framecolor="#FFFFFF",
showlakes=False,
landcolor="#1B2327")
# Add final customizations.
fig.update_layout(
title_text="Median Salary by State",
title_x=0.5,
title_y=0.96,
title_font_size=30,
font_color="#FFFFFF",
margin={"r": 50, "t": 50, "l": 50, "b": 50},
width=1200,
height=650,
paper_bgcolor="#37474f",
)
fig.write_image("6.png")
def plot_hours(df):
"""Plots the hours required to work in an Histogram.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
fig = go.Figure()
fig.add_traces(go.Histogram(x=df["hours_worked"], marker_color="#ffa000"))
fig.update_xaxes(title="Hours Required", ticks="outside", ticklen=10, gridwidth=0.5,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=35, title_standoff=20)
fig.update_yaxes(title="Number of Job Offers", ticks="outside", ticklen=10, separatethousands=True,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=18, gridwidth=0.5, title_standoff=5)
# Add final customizations.
fig.update_layout(
showlegend=False,
width=1200,
height=800,
font_color="#FFFFFF",
font_size=18,
title_text="Labour Hours Distribution",
title_x=0.5,
title_y=0.93,
margin_l=120,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("7.png")
def plot_days(df):
"""Plots the days required to work in an Histogram.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
fig = go.Figure()
fig.add_traces(go.Histogram(x=df["days_worked"], marker_color="#ffa000"))
fig.update_xaxes(title="Days Required", ticks="outside", ticklen=10, gridwidth=0.5,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=35, title_standoff=20)
fig.update_yaxes(title="Number of Job Offers", ticks="outside", ticklen=10, separatethousands=True,
tickcolor="#FFFFFF", linewidth=2, showline=True, mirror=True, nticks=18, gridwidth=0.5, title_standoff=5)
# Add final customizations.
fig.update_layout(
showlegend=False,
width=1200,
height=800,
font_color="#FFFFFF",
font_size=18,
title_text="Labour Days Distribution",
title_x=0.5,
title_y=0.93,
margin_l=120,
margin_b=120,
title_font_size=30,
plot_bgcolor="#37474f"
)
fig.write_image("8.png")
def plot_education_level(df):
"""Plots the education level distribution in a Donut plot.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
# Define our custom colors.
colors = ["#0091ea", "#ff5722", "#43a047", "#7e57c2", "#1565c0",
"#2e7d32", "#c62828", "#ef6c00", "#ffc400", "#64dd17"]
education_level = df["education_level"].value_counts()
fig = go.Figure()
fig.add_traces(go.Pie(labels=education_level.index,
values=education_level.values,
marker_colors=colors,
hole=0.5,
insidetextfont_color="#FFFFFF"))
# Add final customizations.
fig.update_layout(
legend_bordercolor="#FFFFFF",
legend_borderwidth=1.5,
legend_x=0.88,
legend_y=0.5,
font_color="#FFFFFF",
font_size=18,
title_text="Education Level Distribution",
title_x=0.5,
title_y=0.93,
margin={"r": 0, "t": 150, "l": 0, "b": 50},
width=1200,
height=800,
title_font_size=30,
paper_bgcolor="#37474f"
)
fig.write_image("9.png")
def plot_experience(df):
"""Plots the experience distribution in a Donut plot.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
# Define our custom colors.
colors = ["#0091ea", "#ff5722", "#43a047", "#7e57c2", "#1565c0",
"#2e7d32", "#c62828", "#ef6c00", "#ffc400", "#64dd17"]
experience = df["experience"].value_counts()
fig = go.Figure()
fig.add_traces(go.Pie(labels=experience.index,
values=experience.values,
marker_colors=colors,
hole=0.5,
insidetextfont_color="#FFFFFF"))
# Add final customizations.
fig.update_layout(
legend_bordercolor="#FFFFFF",
legend_borderwidth=1.5,
legend_x=0.88,
legend_y=0.5,
font_color="#FFFFFF",
font_size=18,
title_text="Required Experience Distribution",
title_x=0.5,
title_y=0.93,
margin={"r": 0, "t": 150, "l": 0, "b": 50},
width=1200,
height=800,
title_font_size=30,
paper_bgcolor="#37474f"
)
fig.write_image("10.png")
def hours_worked_salary(df):
"""Plots the correlation between salary and daily hours worked in a Scatter plot.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
# Remove outliers.
df = df[df["salary"] <= 35000]
# Initialize our Figure.
fig = go.Figure()
fig.add_traces(go.Scatter(x=df["hours_worked"], y=df["salary"], line_color="#ffa000",
mode="markers", line_width=3, marker_size=8))
fig.update_xaxes(title="Number of Daily Hours", ticks="outside", ticklen=10,
tickcolor="#FFFFFF", title_standoff=30, linewidth=2, showline=True, mirror=True, nticks=25, gridwidth=0.5)
fig.update_yaxes(title="Monthly Salary in MXN", ticks="outside", ticklen=10, separatethousands=True,
tickcolor="#FFFFFF", title_standoff=20, linewidth=2, showline=True, mirror=True, nticks=12, gridwidth=0.5)
# Add final customizations.
fig.update_layout(
showlegend=False,
width=1200,
height=800,
font_color="#FFFFFF",
font_size=18,
title_text="Comparison of Hours Worked and Monthly Salary",
title_x=0.5,
title_y=0.93,
margin_l=140,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("11.png")
def education_level_salary(df):
"""Plots the correlation between salary and education level in a Scatter plot.
Parameters
----------
df : pandas.DataFrame
A pandas DataFrame containing job offers data.
"""
# Remove outliers.
df = df[df["salary"] <= 35000].copy()
# We are going to map each education level to a number.
# The greater the number, the higher the education level.
education_map = {
"Primaria": 1,
"Secundaria/sec. técnica": 2,
"Prepa o vocacional": 3,
"Carrera técnica": 4,
"Carrera comercial": 4,
"Profesional técnico": 4,
"T. superior universitario": 4,
"Licenciatura": 5,
"Maestría": 6,
"Doctorado": 7
}
# We convert the categorical data to numerical.
df["education_level"] = df["education_level"].apply(
lambda x: education_map[x])
# Initialize our Figure.
fig = go.Figure()
fig.add_traces(go.Scatter(x=df["education_level"], y=df["salary"], line_color="#ffa000",
mode="markers", line_width=3, marker_size=8))
fig.update_xaxes(title="Education Level", ticks="outside", ticklen=10,
tickcolor="#FFFFFF", title_standoff=30, linewidth=2, showline=True, mirror=True, nticks=7, gridwidth=0.5)
fig.update_yaxes(title="Monthly Salary in MXN", ticks="outside", ticklen=10, separatethousands=True,
tickcolor="#FFFFFF", title_standoff=20, linewidth=2, showline=True, mirror=True, nticks=12, gridwidth=0.5)
# Add final customizations.
fig.update_layout(
showlegend=False,
width=1200,
height=800,
font_color="#FFFFFF",
font_size=18,
title_text="Comparison of Education Level and Monthly Salary",
title_x=0.5,
title_y=0.93,
margin_l=140,
margin_b=120,
title_font_size=30,
paper_bgcolor="#37474f",
plot_bgcolor="#263238"
)
fig.write_image("12.png")
if __name__ == "__main__":
df = pd.read_csv("data.csv", parse_dates=["isodate"])
# days_stats(df)
# salaries_stats(df)
# plot_states_offers(df)
# plot_states_map(df)
# plot_states_median_salary(df)
# plot_median_salary_map(df)
# plot_hours(df)
# plot_days(df)
# plot_education_level(df)
# plot_experience(df)
# hours_worked_salary(df)
# education_level_salary(df)

# === src/graceful_killer/__init__.py (Sciocatti/python_scheduler_and_clean_forced_exit, MIT) ===
from .graceful_killer import GracefulKiller

# === get_fw_version.py (idimitrakopoulos/illuminOS, MIT) ===
import os
print(os.uname()[3])

# === lino_xl/lib/cal/roles.py (khchine5/xl, BSD-2-Clause) ===
from lino.core.roles import UserRole

class CalendarReader(UserRole):
    pass


class GuestOperator(UserRole):
    pass

# === src/test/transformer/__init__.py (HenrikPilz/BMEcatConverter, BSD-3-Clause) ===
from .separatorTest import SeparatorTransformerTest

# === matplotlib_example/matplotlib_example/exprLexer.py (mwisslead/Random, Unlicense) ===
# Generated from expr.g4 by ANTLR 4.5.3
from antlr4 import *
from io import StringIO


def serializedATN():
    with StringIO() as buf:
        buf.write("\3\u0430\ud6d1\u8206\uad2d\u4417\uaef1\u8d80\uaadd\2\16")
        buf.write("K\b\1\4\2\t\2\4\3\t\3\4\4\t\4\4\5\t\5\4\6\t\6\4\7\t\7")
        buf.write("\4\b\t\b\4\t\t\t\4\n\t\n\4\13\t\13\4\f\t\f\4\r\t\r\3\2")
        buf.write("\3\2\3\3\3\3\3\4\3\4\3\5\3\5\3\6\3\6\3\7\3\7\3\b\3\b\3")
        buf.write("\t\3\t\3\n\3\n\3\13\3\13\7\13\60\n\13\f\13\16\13\63\13")
        buf.write("\13\3\f\7\f\66\n\f\f\f\16\f9\13\f\3\f\3\f\6\f=\n\f\r\f")
        buf.write("\16\f>\3\f\6\fB\n\f\r\f\16\fC\5\fF\n\f\3\r\3\r\3\r\3\r")
        buf.write("\2\2\16\3\3\5\4\7\5\t\6\13\7\r\b\17\t\21\n\23\13\25\f")
        buf.write("\27\r\31\16\3\2\6\4\2C\\c|\5\2\62;C\\c|\3\2\62;\4\2\13")
        buf.write("\13\"\"O\2\3\3\2\2\2\2\5\3\2\2\2\2\7\3\2\2\2\2\t\3\2\2")
        buf.write("\2\2\13\3\2\2\2\2\r\3\2\2\2\2\17\3\2\2\2\2\21\3\2\2\2")
        buf.write("\2\23\3\2\2\2\2\25\3\2\2\2\2\27\3\2\2\2\2\31\3\2\2\2\3")
        buf.write("\33\3\2\2\2\5\35\3\2\2\2\7\37\3\2\2\2\t!\3\2\2\2\13#\3")
        buf.write("\2\2\2\r%\3\2\2\2\17\'\3\2\2\2\21)\3\2\2\2\23+\3\2\2\2")
        buf.write("\25-\3\2\2\2\27E\3\2\2\2\31G\3\2\2\2\33\34\7*\2\2\34\4")
        buf.write("\3\2\2\2\35\36\7+\2\2\36\6\3\2\2\2\37 \7`\2\2 \b\3\2\2")
        buf.write("\2!\"\7,\2\2\"\n\3\2\2\2#$\7\61\2\2$\f\3\2\2\2%&\7-\2")
        buf.write("\2&\16\3\2\2\2\'(\7/\2\2(\20\3\2\2\2)*\7\'\2\2*\22\3\2")
        buf.write("\2\2+,\7.\2\2,\24\3\2\2\2-\61\t\2\2\2.\60\t\3\2\2/.\3")
        buf.write("\2\2\2\60\63\3\2\2\2\61/\3\2\2\2\61\62\3\2\2\2\62\26\3")
        buf.write("\2\2\2\63\61\3\2\2\2\64\66\t\4\2\2\65\64\3\2\2\2\669\3")
        buf.write("\2\2\2\67\65\3\2\2\2\678\3\2\2\28:\3\2\2\29\67\3\2\2\2")
        buf.write(":<\7\60\2\2;=\t\4\2\2<;\3\2\2\2=>\3\2\2\2><\3\2\2\2>?")
        buf.write("\3\2\2\2?F\3\2\2\2@B\t\4\2\2A@\3\2\2\2BC\3\2\2\2CA\3\2")
        buf.write("\2\2CD\3\2\2\2DF\3\2\2\2E\67\3\2\2\2EA\3\2\2\2F\30\3\2")
        buf.write("\2\2GH\t\5\2\2HI\3\2\2\2IJ\b\r\2\2J\32\3\2\2\2\b\2\61")
        buf.write("\67>CE\3\b\2\2")
        return buf.getvalue()

class exprLexer(Lexer):

    atn = ATNDeserializer().deserialize(serializedATN())

    decisionsToDFA = [ DFA(ds, i) for i, ds in enumerate(atn.decisionToState) ]

    T__0 = 1
    T__1 = 2
    T__2 = 3
    T__3 = 4
    T__4 = 5
    T__5 = 6
    T__6 = 7
    T__7 = 8
    T__8 = 9
    ID = 10
    NUM = 11
    WS = 12

    modeNames = [ "DEFAULT_MODE" ]

    literalNames = [ "<INVALID>",
            "'('", "')'", "'^'", "'*'", "'/'", "'+'", "'-'", "'%'", "','" ]

    symbolicNames = [ "<INVALID>",
            "ID", "NUM", "WS" ]

    ruleNames = [ "T__0", "T__1", "T__2", "T__3", "T__4", "T__5", "T__6",
                  "T__7", "T__8", "ID", "NUM", "WS" ]

    grammarFileName = "expr.g4"

    def __init__(self, input=None):
        super().__init__(input)
        self.checkVersion("4.5.3")
        self._interp = LexerATNSimulator(self, self.atn, self.decisionsToDFA, PredictionContextCache())
        self._actions = None
        self._predicates = None

# === 14_Exponentiation/main.py (jmmedel/Python-Tutorials-, MIT) ===
x = 5
y = 2
print(x ** y)  # 5 * 5 = 25

# === api/app/views.py (cedarmora/lifestyle-choice-emissions, MIT) ===
from flask import jsonify
from app import app
@app.route('/')
def index():
    return jsonify('{urbanite: "hey", ruralist: "hey"}')

# === diffusion_imaging/__init__.py (lytai1/difusion_imaging, MIT) ===
from . import handlers

# === common/aist_common/grammar/equivalence_class/invalid_special_characters.py (sfahad1414/AGENT, Apache-2.0) ===
class InvalidSpecialCharacters:
    def __init__(self):
        self.equivalence_class = "INVALID_SPECIAL_CHARACTERS"

    def __str__(self):
        return self.equivalence_class

# === cmdb/forms.py (vikifox/CMDB, Apache-2.0) ===
#! /usr/bin/env python
# -*- coding: utf-8 -*-
from django import forms
from django.forms.widgets import *
from .models import Host, Idc, HostGroup, Cabinet

class AssetForm(forms.ModelForm):
    class Meta:
        model = Host
        exclude = ("id",)
        widgets = {
            'hostname': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'必填项'}),
            'ip': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'必填项'}),
            'account': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'other_ip': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'group': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'asset_no': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'asset_type': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'status': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'os': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'vendor': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'up_time': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'cpu_model': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'cpu_num': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'memory': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'disk': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'sn': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'idc': Select(attrs={'class': 'form-control', 'style': 'width:530px;'}),
            'position': TextInput(attrs={'class': 'form-control', 'style': 'width:530px;', 'placeholder': u'物理机写位置,虚机写宿主'}),
            'memo': Textarea(attrs={'rows': 4, 'cols': 15, 'class': 'form-control', 'style': 'width:530px;'}),
        }

class IdcForm(forms.ModelForm):
    # def clean(self):
    #     cleaned_data = super(IdcForm, self).clean()
    #     value = cleaned_data.get('ids')
    #     try:
    #         Idc.objects.get(name=value)
    #         self._errors['ids'] = self.error_class(["%s的信息已经存在" % value])
    #     except Idc.DoesNotExist:
    #         pass
    #     return cleaned_data

    class Meta:
        model = Idc
        exclude = ("id",)
        widgets = {
            'ids': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'address': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'tel': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'contact': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'contact_phone': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'ip_range': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'jigui': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'bandwidth': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
        }

class GroupForm(forms.ModelForm):
    def clean(self):
        cleaned_data = super(GroupForm, self).clean()
        value = cleaned_data.get('name')
        try:
            # Reject duplicate group names.
            HostGroup.objects.get(name=value)
            self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
        except HostGroup.DoesNotExist:
            pass
        return cleaned_data

    class Meta:
        model = HostGroup
        exclude = ("id", )
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'desc': Textarea(attrs={'rows': 4, 'cols': 15, 'class': 'form-control', 'style': 'width:450px;'}),
        }

class CabinetForm(forms.ModelForm):
    # def clean(self):
    #     cleaned_data = super(CabinetForm, self).clean()
    #     value = cleaned_data.get('name')
    #     try:
    #         Cabinet.objects.get(name=value)
    #         self._errors['name'] = self.error_class(["%s的信息已经存在" % value])
    #     except Cabinet.DoesNotExist:
    #         pass
    #     return cleaned_data

    class Meta:
        model = Cabinet
        exclude = ("id", )
        widgets = {
            'name': TextInput(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'idc': Select(attrs={'class': 'form-control', 'style': 'width:450px;'}),
            'desc': Textarea(attrs={'rows': 4, 'cols': 15, 'class': 'form-control', 'style': 'width:450px;'}),
        }

# === python/testData/refactoring/move/staleFromImportRemovedWhenNewImportCombinedWithExistingImport/before/src/importing.py (jnthn/intellij-community, Apache-2.0) ===
from a import A
from b import B
print(A(), B())

# === apps/front/__init__.py (dengjinshan/bbs, MIT) ===
from .views import bp
from . import hooks

# === src/fluxml/pipelines/__init__.py (achillesrasquinha/fluxml, MIT) ===
from fluxml.pipelines.train import train

# === capital/tl/__init__.py (ykat0/capital, BSD-3-Clause) ===
from .tl import preprocessing, trajectory_tree, genes_similarity_score
from .tl import _tree_align_dpt, tree_alignment, dpt, dtw
from .tl import read_capital_data

# === standard_lib/set_union_intersection.py (DahlitzFlorian/python-snippets, MIT) ===
set_a = {1, 2}
set_b = {2, 3}
print("Use | and & for set union and intersection.")
print(f"{set_a} & {set_b} = {set_a & set_b}")
print(f"{set_a} | {set_b} = {set_a | set_b}")

# === python/testData/completion/heavyStarPropagation/lib/_pkg0/_pkg0_0/_pkg0_0_0/__init__.py (jnthn/intellij-community, Apache-2.0) ===
from ._pkg0_0_0_0 import *
from ._pkg0_0_0_1 import *

# === accelbyte_py_sdk/api/achievement/wrappers/__init__.py (AccelByte/accelbyte-python-sdk, MIT) ===
# Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: justice_py_sdk_codegen/__main__.py
"""Auto-generated package that contains models used by the justice-achievement-service."""
__version__ = ""
__author__ = "AccelByte"
__email__ = "dev@accelbyte.net"
# pylint: disable=line-too-long
from ._achievements import admin_create_new_achievement
from ._achievements import admin_create_new_achievement_async
from ._achievements import admin_delete_achievement
from ._achievements import admin_delete_achievement_async
from ._achievements import admin_get_achievement
from ._achievements import admin_get_achievement_async
from ._achievements import admin_list_achievements
from ._achievements import admin_list_achievements_async
from ._achievements import admin_list_user_achievements
from ._achievements import admin_list_user_achievements_async
from ._achievements import admin_unlock_achievement
from ._achievements import admin_unlock_achievement_async
from ._achievements import admin_update_achievement
from ._achievements import admin_update_achievement_async
from ._achievements import admin_update_achievement_list_order
from ._achievements import admin_update_achievement_list_order_async
from ._achievements import export_achievements
from ._achievements import export_achievements_async
from ._achievements import import_achievements
from ._achievements import import_achievements_async
from ._achievements import public_get_achievement
from ._achievements import public_get_achievement_async
from ._achievements import public_list_achievements
from ._achievements import public_list_achievements_async
from ._achievements import public_list_user_achievements
from ._achievements import public_list_user_achievements_async
from ._achievements import public_unlock_achievement
from ._achievements import public_unlock_achievement_async

# === novahyper/virt/hyper/unixconn/__init__.py (hyperhq/hypernova, Apache-2.0) ===
from .unixconn import UnixAdapter

# === test/test_compiler.py (karimb9/punica-python, MIT) ===
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import unittest
from punica.compile.contract_compile import PunicaCompiler
class TestCompiler(unittest.TestCase):
    def test_compile_contract_remote(self):
        contract_path = os.path.join(os.getcwd(), 'test_file', 'test_compile_remote', 'oep4.py')
        PunicaCompiler.compile_contract(contract_path)
        avm_save_path = os.path.join(os.getcwd(), 'test_file', 'test_compile_remote', 'build', 'oep4.avm')
        abi_save_path = os.path.join(os.getcwd(), 'test_file', 'test_compile_remote', 'build', 'oep4_abi.json')
        with open(avm_save_path, 'r') as f:
            avm_save = f.read()
        self.assertIsNotNone(avm_save)
        with open(abi_save_path, 'r') as f3:
            abi_save = f3.read()
        self.assertIsNotNone(abi_save)
        os.remove(avm_save_path)
        os.remove(abi_save_path)

    def test_compile_contract(self):
        contract_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'contracts', 'oep4.py')
        PunicaCompiler.compile_contract(contract_path)
        split_path = os.path.split(contract_path)
        save_path = os.path.join(os.getcwd(), 'build', split_path[1])
        avm_save_path = save_path.replace('.py', '.avm')
        abi_save_path = save_path.replace('.py', '.json')
        with open(os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.avm'), 'r') as f:
            target_avm = f.read()
        with open(os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.json'), 'r') as f:
            target_abi = f.read()
        with open(avm_save_path, 'r') as f:
            hex_avm_code = f.read()
        self.assertEqual(target_avm, hex_avm_code)
        with open(abi_save_path, 'r') as f:
            abi = f.read()
        self.assertEqual(target_abi, abi)
        os.remove(avm_save_path)
        os.remove(abi_save_path)
        os.removedirs('build')
    def test_generate_avm_file(self):
        contract_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.py')
        PunicaCompiler.generate_avm_file(contract_path)
        split_path = os.path.split(contract_path)
        save_path = os.path.join(os.getcwd(), 'build', split_path[1])
        avm_save_path = save_path.replace('.py', '.avm')
        with open(os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.avm'), 'r') as f:
            target_avm = f.read()
        with open(avm_save_path, 'r') as f:
            hex_avm_code = f.read()
        self.assertEqual(target_avm, hex_avm_code)
        os.remove(avm_save_path)
        os.removedirs('build')

    def test_generate_avm_code(self):
        path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.py')
        hex_avm = PunicaCompiler.generate_avm_code(path)
        with open(os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.avm'), 'r') as f:
            self.assertEqual(f.read(), hex_avm)

    def test_generate_abi_file(self):
        contract_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.py')
        PunicaCompiler.generate_abi_file(contract_path)
        split_path = os.path.split(contract_path)
        save_path = os.path.join(os.getcwd(), 'build', split_path[1])
        abi_save_path = save_path.replace('.py', '.json')
        with open(os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4.json'), 'r') as f:
            target_abi = f.read()
        with open(abi_save_path, 'r') as f:
            abi = f.read()
        self.assertEqual(target_abi, abi)
        os.remove(abi_save_path)
        os.removedirs('build')
def test_generate_invoke_config(self):
abi_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4_token_abi.json')
invoke_config_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'invoke_config.json')
PunicaCompiler.generate_invoke_config(abi_path, invoke_config_path)
os.remove(invoke_config_path)
def test_update_invoke_config(self):
abi_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'oep4_token_abi.json')
invoke_config_path = os.path.join(os.getcwd(), 'test_file', 'test_compile', 'invoke_config.json')
PunicaCompiler.update_invoke_config(abi_path, invoke_config_path)
os.remove(invoke_config_path)
if __name__ == '__main__':
unittest.main()
| 45.802083 | 111 | 0.643393 | 615 | 4,397 | 4.305691 | 0.095935 | 0.075529 | 0.071752 | 0.086103 | 0.83648 | 0.83648 | 0.832326 | 0.793051 | 0.784366 | 0.766994 | 0 | 0.006054 | 0.211053 | 4,397 | 95 | 112 | 46.284211 | 0.757279 | 0.009779 | 0 | 0.6375 | 0 | 0 | 0.144072 | 0 | 0 | 0 | 0 | 0 | 0.0875 | 1 | 0.0875 | false | 0 | 0.0375 | 0 | 0.1375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5c706ce9aa20801998782b2e930934a72e111a3a | 4,075 | py | Python | models_config.py | EnderPicture/SwinIR-GUI | 16aaedba5227af2b52c5fd7dfa9c04ef12e3b98c | [
"Apache-2.0"
] | 1 | 2022-01-08T04:32:46.000Z | 2022-01-08T04:32:46.000Z | models_config.py | EnderPicture/SwinIR-GUI | 16aaedba5227af2b52c5fd7dfa9c04ef12e3b98c | [
"Apache-2.0"
] | null | null | null | models_config.py | EnderPicture/SwinIR-GUI | 16aaedba5227af2b52c5fd7dfa9c04ef12e3b98c | [
"Apache-2.0"
] | null | null | null | MODLES = {
    'classicalSR s48 x2': {
        'task': 'classical_sr',
        'scale': 2,
        'training_patch_size': 48,
        'path': 'model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x2.pth',
    },
    'classicalSR s48 x3': {
        'task': 'classical_sr',
        'scale': 3,
        'training_patch_size': 48,
        'path': 'model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x3.pth',
    },
    'classicalSR s48 x4': {
        'task': 'classical_sr',
        'scale': 4,
        'training_patch_size': 48,
        'path': 'model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth',
    },
    'classicalSR s48 x8': {
        'task': 'classical_sr',
        'scale': 8,
        'training_patch_size': 48,
        'path': 'model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x8.pth',
    },
    'classicalSR s64 x2': {
        'task': 'classical_sr',
        'scale': 2,
        'training_patch_size': 64,
        'path': 'model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x2.pth',
    },
    'classicalSR s64 x3': {
        'task': 'classical_sr',
        'scale': 3,
        'training_patch_size': 64,
        'path': 'model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x3.pth',
    },
    'classicalSR s64 x4': {
        'task': 'classical_sr',
        'scale': 4,
        'training_patch_size': 64,
        'path': 'model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x4.pth',
    },
    'classicalSR s64 x8': {
        'task': 'classical_sr',
        'scale': 8,
        'training_patch_size': 64,
        'path': 'model_zoo/swinir/001_classicalSR_DF2K_s64w8_SwinIR-M_x8.pth',
    },
    'lightweightSR x2': {
        'task': 'lightweight_sr',
        'scale': 2,
        'path': 'model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x2.pth',
    },
    'lightweightSR x3': {
        'task': 'lightweight_sr',
        'scale': 3,
        'path': 'model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x3.pth',
    },
    'lightweightSR x4': {
        'task': 'lightweight_sr',
        'scale': 4,
        'path': 'model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x4.pth',
    },
    'realSR M x4': {
        'task': 'real_sr',
        'model_size': 'm',
        'scale': 4,
        'path': 'model_zoo/swinir/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth',
    },
    'realSR L x4': {
        'task': 'real_sr',
        'model_size': 'l',
        'scale': 4,
        'path': 'model_zoo/swinir/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth',
    },
    'gray denoise 15': {
        'task': 'gray_dn',
        'scale': 1,
        'path': 'model_zoo/swinir/004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth',
    },
    'gray denoise 25': {
        'task': 'gray_dn',
        'scale': 1,
        'path': 'model_zoo/swinir/004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth',
    },
    'gray denoise 50': {
        'task': 'gray_dn',
        'scale': 1,
        'path': 'model_zoo/swinir/004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth',
    },
    'color denoise 15': {
        'task': 'color_dn',
        'scale': 1,
        'path': 'model_zoo/swinir/005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth',
    },
    'color denoise 25': {
        'task': 'color_dn',
        'scale': 1,
        'path': 'model_zoo/swinir/005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth',
    },
    'color denoise 50': {
        'task': 'color_dn',
        'scale': 1,
        'path': 'model_zoo/swinir/005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth',
    },
    'de-jpeg 10': {
        'task': 'jpeg_car',
        'scale': 1,
        'path': 'model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth',
    },
    'de-jpeg 20': {
        'task': 'jpeg_car',
        'scale': 1,
        'path': 'model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth',
    },
    'de-jpeg 30': {
        'task': 'jpeg_car',
        'scale': 1,
        'path': 'model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth',
    },
    'de-jpeg 40': {
        'task': 'jpeg_car',
        'scale': 1,
        'path': 'model_zoo/swinir/006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth',
    },
}
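A registry of this shape is typically consumed by filtering on `task` and `scale` to find a matching checkpoint path. A minimal sketch of such a lookup — the `find_models` helper and the two-entry stand-in registry below are illustrative, not part of the SwinIR-GUI repository:

```python
# Stand-in registry with the same shape as the MODLES dict above (two entries
# copied from it for demonstration).
MODELS = {
    'classicalSR s48 x2': {'task': 'classical_sr', 'scale': 2,
                           'path': 'model_zoo/swinir/001_classicalSR_DIV2K_s48w8_SwinIR-M_x2.pth'},
    'lightweightSR x2': {'task': 'lightweight_sr', 'scale': 2,
                         'path': 'model_zoo/swinir/002_lightweightSR_DIV2K_s64w8_SwinIR-S_x2.pth'},
}


def find_models(task, scale):
    """Return the names of all registered checkpoints for a task/scale pair."""
    return [name for name, cfg in MODELS.items()
            if cfg['task'] == task and cfg['scale'] == scale]


print(find_models('classical_sr', 2))  # ['classicalSR s48 x2']
```

Keeping all model variants in one flat dict like this makes adding a new checkpoint a one-entry change, at the cost of repeating the shared fields per entry.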
| 31.835938 | 87 | 0.574233 | 500 | 4,075 | 4.314 | 0.148 | 0.095967 | 0.127955 | 0.191933 | 0.814557 | 0.770051 | 0.719981 | 0.719981 | 0.719981 | 0.598516 | 0 | 0.093677 | 0.266503 | 4,075 | 127 | 88 | 32.086614 | 0.627969 | 0 | 0 | 0.425197 | 0 | 0 | 0.596319 | 0.338896 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5caef81c63c5af15db1f7cd87c7138242be1b3df | 88 | py | Python | Main.py | xuhaoteoh/car-sound-classification-with-keras | 7c71c6e8b200aac24da78462b2820baceec9e087 | [
"MIT"
] | null | null | null | Main.py | xuhaoteoh/car-sound-classification-with-keras | 7c71c6e8b200aac24da78462b2820baceec9e087 | [
"MIT"
] | null | null | null | Main.py | xuhaoteoh/car-sound-classification-with-keras | 7c71c6e8b200aac24da78462b2820baceec9e087 | [
"MIT"
] | null | null | null | from Models import DataModel
dataModel = DataModel.DataModel()
dataModel.prepare_data() | 22 | 33 | 0.829545 | 10 | 88 | 7.2 | 0.6 | 1 | 1.125 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 88 | 4 | 34 | 22 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5cd93fadcd24bfc5f9fe36be28062f14502d13b6 | 27 | py | Python | python/testData/psi/FStringTerminatedByQuoteInsideNestedFStringLiteralInFormatPart.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/psi/FStringTerminatedByQuoteInsideNestedFStringLiteralInFormatPart.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/psi/FStringTerminatedByQuoteInsideNestedFStringLiteralInFormatPart.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | s = f'{f"""{42:{f"'"}}"""}' | 27 | 27 | 0.222222 | 5 | 27 | 1.2 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 0.074074 | 27 | 1 | 27 | 27 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5cda26800728d8879a49a1039a5efff6cd7681b7 | 18,718 | py | Python | restraintlib/lib/ribose_pyrimidine.py | mkowiel/restraintlib | 32de01d67ae290a45f3199e90c729acc258a6249 | [
"BSD-3-Clause"
] | null | null | null | restraintlib/lib/ribose_pyrimidine.py | mkowiel/restraintlib | 32de01d67ae290a45f3199e90c729acc258a6249 | [
"BSD-3-Clause"
] | 1 | 2021-11-11T18:45:10.000Z | 2021-11-11T18:45:10.000Z | restraintlib/lib/ribose_pyrimidine.py | mkowiel/restraintlib | 32de01d67ae290a45f3199e90c729acc258a6249 | [
"BSD-3-Clause"
] | null | null | null | RIBOSE_PYRIMIDINE_PDB_CODES = ['C', 'T', 'U', 'IC']
RIBOSE_PYRIMIDINE_ALL_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_CHI_GAMMA_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_CHI_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_CONFORMATION_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_SUGAR_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_GAMMA_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_PDB_CODES = RIBOSE_PYRIMIDINE_PDB_CODES
RIBOSE_PYRIMIDINE_ATOM_NAMES = {
"C1'": "C1'",
"C1*": "C1'",
"C2": "C2",
"C2'": "C2'",
"C2*": "C2'",
"C3'": "C3'",
"C3*": "C3'",
"C4'": "C4'",
"C4*": "C4'",
"C5'": "C5'",
"C5*": "C5'",
"C6": "C6",
"N1": "N1",
"O2'": "O2'",
"O2*": "O2'",
"O3'": "O3'",
"O3*": "O3'",
"O4'": "O4'",
"O4*": "O4'",
"O5'": "O5'",
"O5*": "O5'",
"P": "P"
}
RIBOSE_PYRIMIDINE_ALL_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_CHI_GAMMA_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_CHI_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_CONFORMATION_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_SUGAR_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_GAMMA_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_ATOM_NAMES = RIBOSE_PYRIMIDINE_ATOM_NAMES
RIBOSE_PYRIMIDINE_ATOM_RES = {
"C1'": 0,
"C2": 0,
"C2'": 0,
"C3'": 0,
"C4'": 0,
"C5'": 0,
"C6": 0,
"N1": 0,
"O2'": 0,
"O3'": 0,
"O4'": 0,
"O5'": 0
}
RIBOSE_PYRIMIDINE_ALL_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_CHI_GAMMA_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_CHI_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_CONFORMATION_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_SUGAR_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_GAMMA_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_ATOM_RES = RIBOSE_PYRIMIDINE_ATOM_RES
RIBOSE_PYRIMIDINE_REQUIRED_CONDITION = [
("C1'", "C2'", 2.0, 0, 0),
("C2'", "C3'", 2.0, 0, 0),
("C3'", "C4'", 2.0, 0, 0),
("C4'", "O4'", 2.0, 0, 0),
("C1'", "O4'", 2.0, 0, 0),
("C3'", "O3'", 2.0, 0, 0),
("C4'", "C5'", 2.0, 0, 0),
("C5'", "O5'", 2.0, 0, 0),
("C2'", "O2'", 2.0, 0, 0),
("C1'", 'N1', 2.0, 0, 0),
("O5'", 'P', 2.5, 0, 0),
("O3'", 'P', 2.5, 0, 1)
]
RIBOSE_PYRIMIDINE_ALL_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_CHI_GAMMA_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_CHI_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_CONFORMATION_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_SUGAR_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_GAMMA_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_REQUIRED_CONDITION = RIBOSE_PYRIMIDINE_REQUIRED_CONDITION
RIBOSE_PYRIMIDINE_DISTANCE_MEASURE = {
'measure': 'euclidean_angles',
'restraint_names': ["aC4'C5'O5'", "aC4'C3'O3'", "aN1C1'C2'", "aC1'N1C2", "aC1'N1C6", "aN1C1'O4'", "aC2'C1'O4'", "aC1'C2'O2'", "aC3'C2'O2'", "aC2'C3'O3'", "aC1'C2'C3'", "aC2'C3'C4'", "aC3'C4'O4'", "aC1'O4'C4'", "aC3'C4'C5'", "aC5'C4'O4'"]
}
RIBOSE_PYRIMIDINE_ALL_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CHI_GAMMA_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CHI_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CONFORMATION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_SUGAR_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_GAMMA_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE = {
'measure': 'euclidean_angles',
'restraint_names': ["tO4'C1'N1C2", "tC3'C4'C5'O5'", "pC1'C2'C3'C4'O4'"]
}
RIBOSE_PYRIMIDINE_ALL_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CHI_GAMMA_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CHI_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CONFORMATION_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_SUGAR_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_GAMMA_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_CONDITION_DISTANCE_MEASURE = RIBOSE_PYRIMIDINE_CONDITION_DISTANCE_MEASURE
RIBOSE_PYRIMIDINE_ALL_RESTRAINTS = [{
'conditions': [], 'name': 'ribose_pyrimidine==All=All', 'restraints': [['dist', "dC1'C2'", ["C1'", "C2'"], 1.525, 0.012], ['dist', "dC2'C3'", ["C2'", "C3'"], 1.523, 0.011]]
}
]
RIBOSE_PYRIMIDINE_CHI_GAMMA_RESTRAINTS = [
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 60, 8.75]],
'name': 'ribose_pyrimidine==Chi=anti__Gamma=gauche+',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 110.6, 1.9]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], -60, 8.75]],
'name': 'ribose_pyrimidine==Chi=anti__Gamma=gauche-',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 109.6, 1.8]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 180, 21.25]],
'name': 'ribose_pyrimidine==Chi=anti__Gamma=trans',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 110.2, 1.9]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 60, 8.75]],
'name': 'ribose_pyrimidine==Chi=syn__Gamma=gauche+',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 112.5, 1.9]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], -60, 8.75]],
'name': 'ribose_pyrimidine==Chi=syn__Gamma=gauche-',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 111.0, 0.9]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5], ['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 180, 21.25]],
'name': 'ribose_pyrimidine==Chi=syn__Gamma=trans',
'restraints': [['angle', "aC4'C5'O5'", ["C4'", "C5'", "O5'"], 110.5, 2.3]]
}
]
RIBOSE_PYRIMIDINE_CHI_RESTRAINTS = [
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5]],
'name': 'ribose_pyrimidine==Chi=anti',
'restraints': [['angle', "aC4'C3'O3'", ["C4'", "C3'", "O3'"], 110.7, 2.3]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5]],
'name': 'ribose_pyrimidine==Chi=syn',
'restraints': [['angle', "aC4'C3'O3'", ["C4'", "C3'", "O3'"], 109.8, 2.1]]
}
]
RIBOSE_PYRIMIDINE_BASE_FUNC_OF_TORSION_CHI_RESTRAINTS = [
{
'conditions': [],
'name': 'ribose_pyrimidine==Base=pyrimidine',
'restraints': [ ['angle', "aN1C1'C2'", ['N1', "C1'", "C2'"], None, None, None, None, "pyrimidine-N1-C1'-C2' or N9-C1'-C2'.pickle", ['torsion_chi', ["O4'", "C1'", 'N1', 'C2']]],
['angle', "aC1'N1C2", ["C1'", 'N1', 'C2'], None, None, None, None, "pyrimidine-C1'-N1-C2 or C1'-N9-C4.pickle", ['torsion_chi', ["O4'", "C1'", 'N1', 'C2']]],
['angle', "aC1'N1C6", ["C1'", 'N1', 'C6'], None, None, None, None, "pyrimidine-C1'-N1-C6 or C1'-N9-C8.pickle", ['torsion_chi', ["O4'", "C1'", 'N1', 'C2']]],
['angle', "aN1C1'O4'", ['N1', "C1'", "O4'"], None, None, None, None, "pyrimidine-N1-C1'-O4' or N9-C1'-O4'.pickle", ['torsion_chi', ["O4'", "C1'", 'N1', 'C2']]]]
}
]
RIBOSE_PYRIMIDINE_CONFORMATION_RESTRAINTS = [
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_pyrimidine==Conformation=C2'-endo",
'restraints': [['dist', "dC3'C4'", ["C3'", "C4'"], 1.527, 0.01], ['dist', "dC2'O2'", ["C2'", "O2'"], 1.41, 0.009], ['angle', "aC2'C1'O4'", ["C2'", "C1'", "O4'"], 106.0, 0.8]]
},
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_pyrimidine==Conformation=C3'-endo",
'restraints': [['dist', "dC3'C4'", ["C3'", "C4'"], 1.52, 0.009], ['dist', "dC2'O2'", ["C2'", "O2'"], 1.416, 0.008], ['angle', "aC2'C1'O4'", ["C2'", "C1'", "O4'"], 107.3, 0.6]]
},
{
'conditions': [],
'name': 'ribose_pyrimidine==Conformation=Other',
'restraints': [['dist', "dC3'C4'", ["C3'", "C4'"], 1.531, 0.009], ['dist', "dC2'O2'", ["C2'", "O2'"], 1.413, 0.008], ['angle', "aC2'C1'O4'", ["C2'", "C1'", "O4'"], 106.2, 1.3]]
}
]
RIBOSE_PYRIMIDINE_SUGAR_RESTRAINTS = [{
'conditions': [], 'name': 'ribose_pyrimidine==Sugar=ribose', 'restraints': [['dist', "dC4'O4'", ["C4'", "O4'"], 1.45, 0.009]]
}
]
RIBOSE_PYRIMIDINE_CHI_CONFORMATION_RESTRAINTS = [
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_pyrimidine==Chi=anti__Conformation=C2'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 112.0, 2.1],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 113.6, 2.5],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 109.4, 2.4]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_pyrimidine==Chi=anti__Conformation=C3'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 108.7, 2.3],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 110.4, 2.1],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 113.4, 2.1]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 180, 22.5]],
'name': 'ribose_pyrimidine==Chi=anti__Conformation=Other',
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 112.9, 1.4],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 113.3, 0.9],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 111.9, 2.5]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_pyrimidine==Chi=syn__Conformation=C2'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 112.5, 2.1],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 114.1, 1.9],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 110.1, 2.2]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5], ['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_pyrimidine==Chi=syn__Conformation=C3'-endo",
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 109.9, 2.7],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 110.0, 2.0],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 114.2, 0.9]]
},
{
'conditions': [['torsion', "tO4'C1'N1C2", ["O4'", "C1'", 'N1', 'C2'], 0, 22.5]],
'name': 'ribose_pyrimidine==Chi=syn__Conformation=Other',
'restraints': [ ['angle', "aC1'C2'O2'", ["C1'", "C2'", "O2'"], 107.7, 1.6],
['angle', "aC3'C2'O2'", ["C3'", "C2'", "O2'"], 111.9, 1.1],
['angle', "aC2'C3'O3'", ["C2'", "C3'", "O3'"], 113.0, 1.7]]
}
]
RIBOSE_PYRIMIDINE_SUGAR_CONFORMATION_FUNC_OF_TAU_MAX_RESTRAINTS = [
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 162, 4.5]],
'name': "ribose_pyrimidine==Sugar=ribose__Conformation=C2'-endo",
'restraints': [ ['angle', "aC1'C2'C3'", ["C1'", "C2'", "C3'"], None, None, None, None, "ribose-C2'-endo-C1'-C2'-C3'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC2'C3'C4'", ["C2'", "C3'", "C4'"], None, None, None, None, "ribose-C2'-endo-C2'-C3'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC3'C4'O4'", ["C3'", "C4'", "O4'"], None, None, None, None, "ribose-C2'-endo-C3'-C4'-O4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC1'O4'C4'", ["C1'", "O4'", "C4'"], None, None, None, None, "ribose-C2'-endo-C1'-O4'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]]]
},
{
'conditions': [['pseudorotation', "pC1'C2'C3'C4'O4'", ["C1'", "C2'", "C3'", "C4'", "O4'"], 18, 4.5]],
'name': "ribose_pyrimidine==Sugar=ribose__Conformation=C3'-endo",
'restraints': [ ['angle', "aC1'C2'C3'", ["C1'", "C2'", "C3'"], None, None, None, None, "ribose-C3'-endo-C1'-C2'-C3'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC2'C3'C4'", ["C2'", "C3'", "C4'"], None, None, None, None, "ribose-C3'-endo-C2'-C3'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC3'C4'O4'", ["C3'", "C4'", "O4'"], None, None, None, None, "ribose-C3'-endo-C3'-C4'-O4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC1'O4'C4'", ["C1'", "O4'", "C4'"], None, None, None, None, "ribose-C3'-endo-C1'-O4'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]]]
},
{
'conditions': [],
'name': 'ribose_pyrimidine==Sugar=ribose__Conformation=Other',
'restraints': [ ['angle', "aC1'C2'C3'", ["C1'", "C2'", "C3'"], None, None, None, None, "ribose-Other-C1'-C2'-C3'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC2'C3'C4'", ["C2'", "C3'", "C4'"], None, None, None, None, "ribose-Other-C2'-C3'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC3'C4'O4'", ["C3'", "C4'", "O4'"], None, None, None, None, "ribose-Other-C3'-C4'-O4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]],
['angle', "aC1'O4'C4'", ["C1'", "O4'", "C4'"], None, None, None, None, "ribose-Other-C1'-O4'-C4'.pickle", ['tau_max', ["C1'", "C2'", "C3'", "C4'", "O4'"]]]]
}
]
RIBOSE_PYRIMIDINE_GAMMA_RESTRAINTS = [
{
'conditions': [['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 60, 8.75]],
'name': 'ribose_pyrimidine==Gamma=gauche+',
'restraints': [['dist', "dC4'C5'", ["C4'", "C5'"], 1.508, 0.009], ['angle', "aC3'C4'C5'", ["C3'", "C4'", "C5'"], 115.7, 1.2], ['angle', "aC5'C4'O4'", ["C5'", "C4'", "O4'"], 109.4, 1.0]]
},
{
'conditions': [['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], -60, 8.75]],
'name': 'ribose_pyrimidine==Gamma=gauche-',
'restraints': [['dist', "dC4'C5'", ["C4'", "C5'"], 1.518, 0.009], ['angle', "aC3'C4'C5'", ["C3'", "C4'", "C5'"], 114.5, 1.2], ['angle', "aC5'C4'O4'", ["C5'", "C4'", "O4'"], 107.8, 0.9]]
},
{
'conditions': [['torsion', "tC3'C4'C5'O5'", ["C3'", "C4'", "C5'", "O5'"], 180, 21.25]],
'name': 'ribose_pyrimidine==Gamma=trans',
'restraints': [['dist', "dC4'C5'", ["C4'", "C5'"], 1.509, 0.01], ['angle', "aC3'C4'C5'", ["C3'", "C4'", "C5'"], 113.8, 1.3], ['angle', "aC5'C4'O4'", ["C5'", "C4'", "O4'"], 109.9, 1.2]]
}
]
RIBOSE_PYRIMIDINE_ALL_FUNC_OF_TORSION_CHI_RESTRAINTS = [
{
'conditions': [],
'name': 'ribose_pyrimidine==All=All',
'restraints': [ ['dist', "dC1'N1", ["C1'", 'N1'], None, None, None, None, "All-C1'-N1 or C1'-N9.pickle", ['torsion_chi', ["O4'", "C1'", 'N1', 'C2']]],
['dist', "dC1'O4'", ["C1'", "O4'"], None, None, None, None, "All-C1'-O4'.pickle", ['torsion_chi', ["O4'", "C1'", 'N1', 'C2']]]]
}
] | 60.380645 | 241 | 0.580137 | 2,471 | 18,718 | 4.122218 | 0.047754 | 0.256038 | 0.082466 | 0.121736 | 0.904084 | 0.893579 | 0.870901 | 0.826625 | 0.750834 | 0.561162 | 0 | 0.092941 | 0.168234 | 18,718 | 310 | 242 | 60.380645 | 0.561308 | 0 | 0 | 0.047458 | 0 | 0 | 0.304717 | 0.08088 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
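Each restraint group above pairs a list of `conditions` (torsion or pseudorotation windows, given as a target value plus a tolerance) with the `restraints` to apply when those conditions hold; groups with an empty condition list act as the `Other` fallback. The selection logic sketched below is a hypothetical illustration of that first-match pattern, not RestraintLib's actual code, and the toy predicate ignores angle periodicity:

```python
# Toy restraint groups mirroring the structure above; conditions are
# (kind, target, tolerance) tuples instead of the full named form.
GROUPS = [
    {'conditions': [('torsion', 180, 22.5)], 'name': 'Chi=anti'},
    {'conditions': [('torsion', 0, 22.5)], 'name': 'Chi=syn'},
    {'conditions': [], 'name': 'Other'},  # empty conditions: always matches
]


def condition_holds(measured, target, tolerance):
    # Simplified: real torsion comparison would wrap around at +/-180 degrees.
    return abs(measured - target) <= tolerance


def select_group(groups, measured_torsion):
    """Return the name of the first group whose conditions all hold."""
    for group in groups:
        if all(condition_holds(measured_torsion, target, tol)
               for _kind, target, tol in group['conditions']):
            return group['name']


print(select_group(GROUPS, 175))  # 'Chi=anti'
print(select_group(GROUPS, 90))   # 'Other'
```

Ordering the fallback group last is what makes first-match selection safe: a measurement outside every named window still receives the `Other` restraints.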
7a2f7370876832e3df6ca4b304af6ca6dc71371c | 146 | py | Python | src/agent/__init__.py | hsjharvey/DeepRL | ff4d5e33406f196abe5633ae6f456ea4b28b8b4d | [
"Apache-2.0"
] | 3 | 2020-01-07T23:24:14.000Z | 2021-01-21T04:22:26.000Z | src/agent/__init__.py | hsjharvey/DeepRL | ff4d5e33406f196abe5633ae6f456ea4b28b8b4d | [
"Apache-2.0"
] | null | null | null | src/agent/__init__.py | hsjharvey/DeepRL | ff4d5e33406f196abe5633ae6f456ea4b28b8b4d | [
"Apache-2.0"
] | 1 | 2022-01-25T19:45:00.000Z | 2022-01-25T19:45:00.000Z | # -*- coding:utf-8 -*-
from .CategoricalDQN import *
from .DQN import *
from .QuantileDQN import *
from .ExpectileDQN import *
from .A2C import *
| 20.857143 | 29 | 0.705479 | 18 | 146 | 5.722222 | 0.555556 | 0.38835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 0.164384 | 146 | 6 | 30 | 24.333333 | 0.827869 | 0.136986 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7a52776c2ea5d250e9223e14593412b4cb2d5e74 | 253 | py | Python | angr/procedures/glibc/__errno_location.py | Kyle-Kyle/angr | 345b2131a7a67e3a6ffc7d9fd475146a3e12f837 | [
"BSD-2-Clause"
] | 6,132 | 2015-08-06T23:24:47.000Z | 2022-03-31T21:49:34.000Z | angr/procedures/glibc/__errno_location.py | Kyle-Kyle/angr | 345b2131a7a67e3a6ffc7d9fd475146a3e12f837 | [
"BSD-2-Clause"
] | 2,272 | 2015-08-10T08:40:07.000Z | 2022-03-31T23:46:44.000Z | angr/procedures/glibc/__errno_location.py | Kyle-Kyle/angr | 345b2131a7a67e3a6ffc7d9fd475146a3e12f837 | [
"BSD-2-Clause"
] | 1,155 | 2015-08-06T23:37:39.000Z | 2022-03-31T05:54:11.000Z | import angr
######################################
# __errno_location
######################################
class __errno_location(angr.SimProcedure):
    def run(self):  # pylint:disable=arguments-differ
        return self.state.libc.errno_location
| 25.3 | 52 | 0.529644 | 22 | 253 | 5.772727 | 0.727273 | 0.307087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110672 | 253 | 9 | 53 | 28.111111 | 0.564444 | 0.185771 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7a831d38dba0b1f3b7f8c01c92aefff8b3622177 | 153 | py | Python | tests/test_bfs.py | gzxultra/GraphAlg | 9ca203a5451636f2c32ff47494c60c2f57f94ddd | [
"MIT"
] | null | null | null | tests/test_bfs.py | gzxultra/GraphAlg | 9ca203a5451636f2c32ff47494c60c2f57f94ddd | [
"MIT"
] | null | null | null | tests/test_bfs.py | gzxultra/GraphAlg | 9ca203a5451636f2c32ff47494c60c2f57f94ddd | [
"MIT"
] | null | null | null | from src.bfs import bfs_iterative
def test_bfs_iterative(graph2):
src, dst = 1, 6
assert bfs_iterative(graph2, src, dst) == [1, 2, 3, 4, 5, 6]
| 21.857143 | 64 | 0.660131 | 27 | 153 | 3.592593 | 0.592593 | 0.371134 | 0.371134 | 0.43299 | 0.515464 | 0.515464 | 0 | 0 | 0 | 0 | 0 | 0.082645 | 0.20915 | 153 | 6 | 65 | 25.5 | 0.719008 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8fb40f111401602d135c471e1e76d9bfe6747fd9 | 101 | py | Python | raidex/utils/random.py | luehrsFred/raidex | ade805bf3aa60b8ff37f73ea4caa365de513085f | [
"MIT"
] | 6 | 2019-06-28T12:34:08.000Z | 2021-06-22T05:01:56.000Z | raidex/utils/random.py | luehrsFred/raidex | ade805bf3aa60b8ff37f73ea4caa365de513085f | [
"MIT"
] | 3 | 2019-07-01T08:37:12.000Z | 2020-04-30T07:49:53.000Z | raidex/utils/random.py | luehrsFred/raidex | ade805bf3aa60b8ff37f73ea4caa365de513085f | [
"MIT"
] | 3 | 2019-07-01T08:34:26.000Z | 2020-05-06T10:17:38.000Z | from uuid import uuid4
def create_random_32_bytes_id():
    return int(uuid4().int % (2 ** 32 - 1)) | 20.2 | 43 | 0.673267 | 17 | 101 | 3.764706 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 0.188119 | 101 | 5 | 43 | 20.2 | 0.682927 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
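Worth noting: despite the name, `create_random_32_bytes_id` reduces the UUID modulo `2 ** 32 - 1`, so the result fits in 32 *bits* (4 bytes), not 32 bytes. A sketch contrasting the two interpretations — both function names below are illustrative, not part of the raidex codebase:

```python
from uuid import uuid4


def random_32_bit_id():
    # Same computation as the file above: a uuid4 reduced modulo 2**32 - 1,
    # so the result always fits in 4 bytes.
    return int(uuid4().int % (2 ** 32 - 1))


def random_32_byte_id():
    # What "32 bytes" would literally require: 256 bits. A single uuid4
    # carries only 122 random bits, so two are concatenated here.
    return (uuid4().int << 128) | uuid4().int


assert random_32_bit_id() < 2 ** 32
assert random_32_byte_id() < 2 ** 256
```

With only 32 bits of space, birthday-bound collisions become likely after roughly 2**16 (about 65k) generated ids, which may or may not matter depending on how the ids are used.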
8fdc7f66ad433e67c62beeac9b0b02753e2e32a2 | 94 | py | Python | jumpscale/tools/errorhandler/__init__.py | zaibon/js-ng | 8b63c04757d1432ed4aa588500a113610701de14 | [
"Apache-2.0"
] | 2 | 2021-04-28T10:46:08.000Z | 2021-12-22T12:33:34.000Z | jumpscale/tools/errorhandler/__init__.py | zaibon/js-ng | 8b63c04757d1432ed4aa588500a113610701de14 | [
"Apache-2.0"
] | 321 | 2020-06-15T11:48:21.000Z | 2022-03-29T22:13:33.000Z | jumpscale/tools/errorhandler/__init__.py | zaibon/js-ng | 8b63c04757d1432ed4aa588500a113610701de14 | [
"Apache-2.0"
] | 4 | 2020-06-18T06:19:29.000Z | 2021-07-14T12:54:47.000Z | def export_module_as():
from .errorhandler import ErrorHandler
return ErrorHandler()
| 18.8 | 42 | 0.755319 | 10 | 94 | 6.9 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.180851 | 94 | 4 | 43 | 23.5 | 0.896104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
64e1c378b27c2011e3ad1aa821f29a227dd56bfe | 48 | py | Python | pandagg/interactive/__init__.py | alk-lbinet/pandagg | 542350f84ca4497ab4a5f01b054aff2385f6827e | [
"Apache-2.0"
] | null | null | null | pandagg/interactive/__init__.py | alk-lbinet/pandagg | 542350f84ca4497ab4a5f01b054aff2385f6827e | [
"Apache-2.0"
] | null | null | null | pandagg/interactive/__init__.py | alk-lbinet/pandagg | 542350f84ca4497ab4a5f01b054aff2385f6827e | [
"Apache-2.0"
] | null | null | null | from .mappings import *
from .response import *
| 16 | 23 | 0.75 | 6 | 48 | 6 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 48 | 2 | 24 | 24 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8f05a0370c8933260912c421a541cc0b59c4394e | 679 | py | Python | simscale_sdk/api/__init__.py | SimScaleGmbH/simscale-python-sdk | 34c881ca0be87e2b0bb315a9fee1d73f0da61e78 | [
"MIT"
] | 8 | 2021-01-22T13:41:03.000Z | 2022-01-03T09:00:10.000Z | simscale_sdk/api/__init__.py | eljoelopez/simscale-python-sdk | 189f1337b2ab40feed123111ddead0cdecf83c93 | [
"MIT"
] | null | null | null | simscale_sdk/api/__init__.py | eljoelopez/simscale-python-sdk | 189f1337b2ab40feed123111ddead0cdecf83c93 | [
"MIT"
] | 3 | 2021-03-18T15:52:52.000Z | 2022-01-03T08:59:30.000Z | from __future__ import absolute_import
# flake8: noqa
# import apis into api package
from simscale_sdk.api.geometries_api import GeometriesApi
from simscale_sdk.api.geometry_imports_api import GeometryImportsApi
from simscale_sdk.api.mesh_operations_api import MeshOperationsApi
from simscale_sdk.api.meshes_api import MeshesApi
from simscale_sdk.api.projects_api import ProjectsApi
from simscale_sdk.api.reports_api import ReportsApi
from simscale_sdk.api.simulation_runs_api import SimulationRunsApi
from simscale_sdk.api.simulations_api import SimulationsApi
from simscale_sdk.api.storage_api import StorageApi
from simscale_sdk.api.table_imports_api import TableImportsApi
| 42.4375 | 68 | 0.885125 | 96 | 679 | 5.958333 | 0.364583 | 0.20979 | 0.262238 | 0.314685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001603 | 0.081001 | 679 | 15 | 69 | 45.266667 | 0.915064 | 0.060383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8f6c67363a7eefb3904a97ef59b4f7747d0359f2 | 3,450 | py | Python | tests/openapi/test_responses_records.py | peterdemin/kinto | ffdc764cdd0e69b277a7bdcd9151f9809eaa78d4 | [
"Apache-2.0"
] | null | null | null | tests/openapi/test_responses_records.py | peterdemin/kinto | ffdc764cdd0e69b277a7bdcd9151f9809eaa78d4 | [
"Apache-2.0"
] | null | null | null | tests/openapi/test_responses_records.py | peterdemin/kinto | ffdc764cdd0e69b277a7bdcd9151f9809eaa78d4 | [
"Apache-2.0"
] | null | null | null | from bravado_core.response import validate_response
from .support import OpenAPITest, MINIMALIST_RECORD
class OpenAPIRecordResponsesTest(OpenAPITest):
def test_get_record_200(self):
response = self.app.get('/buckets/b1/collections/c1/records/r1',
headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources['Records'].get_record
schema = self.spec.deref(op.op_spec['responses']['200'])
validate_response(schema, op, response)
def test_post_record_200(self):
response = self.app.post_json('/buckets/b1/collections/c1/records',
self.record, headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources['Records'].create_record
schema = self.spec.deref(op.op_spec['responses']['200'])
validate_response(schema, op, response)
def test_post_record_201(self):
response = self.app.post_json('/buckets/b1/collections/c1/records',
MINIMALIST_RECORD, headers=self.headers, status=201)
response = self.cast_bravado_response(response)
op = self.resources['Records'].create_record
schema = self.spec.deref(op.op_spec['responses']['201'])
validate_response(schema, op, response)
def test_put_record_200(self):
response = self.app.put_json('/buckets/b1/collections/c1/records/r1',
MINIMALIST_RECORD, headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources['Records'].update_record
schema = self.spec.deref(op.op_spec['responses']['200'])
validate_response(schema, op, response)
def test_put_record_201(self):
response = self.app.put_json('/buckets/b1/collections/c1/records/r2',
MINIMALIST_RECORD, headers=self.headers, status=201)
response = self.cast_bravado_response(response)
op = self.resources['Records'].update_record
schema = self.spec.deref(op.op_spec['responses']['201'])
validate_response(schema, op, response)
def test_delete_record_200(self):
response = self.app.delete('/buckets/b1/collections/c1/records/r1',
headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources['Records'].delete_record
schema = self.spec.deref(op.op_spec['responses']['200'])
validate_response(schema, op, response)
def test_get_records_200(self):
response = self.app.get('/buckets/b1/collections/c1/records',
headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources['Records'].get_records
schema = self.spec.deref(op.op_spec['responses']['200'])
validate_response(schema, op, response)
def test_delete_records_200(self):
response = self.app.delete('/buckets/b1/collections/c1/records',
headers=self.headers, status=200)
response = self.cast_bravado_response(response)
op = self.resources['Records'].delete_records
schema = self.spec.deref(op.op_spec['responses']['200'])
validate_response(schema, op, response)
| 48.591549 | 90 | 0.646087 | 400 | 3,450 | 5.3875 | 0.1025 | 0.089095 | 0.059397 | 0.070534 | 0.930394 | 0.930394 | 0.907193 | 0.907193 | 0.901624 | 0.893271 | 0 | 0.034901 | 0.235942 | 3,450 | 70 | 91 | 49.285714 | 0.782625 | 0 | 0 | 0.610169 | 0 | 0 | 0.126377 | 0.082319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135593 | false | 0 | 0.033898 | 0 | 0.186441 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
71098cafac12ff6977f83ed5e691e02fa40a8c45 | 96 | py | Python | SimPEG/utils/drivers/__init__.py | Prithwijit-Chak/simpeg | d93145d768b5512621cdd75566b4a8175fee9ed3 | [
"MIT"
] | 358 | 2015-03-11T05:48:41.000Z | 2022-03-26T02:04:12.000Z | SimPEG/utils/drivers/__init__.py | thast/simpeg | 8021082b8b53f3c08fa87fc085547bdd56437c6b | [
"MIT"
] | 885 | 2015-01-19T09:23:48.000Z | 2022-03-29T12:08:34.000Z | SimPEG/utils/drivers/__init__.py | thast/simpeg | 8021082b8b53f3c08fa87fc085547bdd56437c6b | [
"MIT"
] | 214 | 2015-03-11T05:48:43.000Z | 2022-03-02T01:05:11.000Z | from .gravity_driver import GravityDriver_Inv
from .magnetics_driver import MagneticsDriver_Inv
| 32 | 49 | 0.895833 | 12 | 96 | 6.833333 | 0.666667 | 0.292683 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 96 | 2 | 50 | 48 | 0.931818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8559685378d6d3ad066044ac9aab0035e490eabd | 17 | py | Python | tests/basics/bytes_compare2.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 13,648 | 2015-01-01T01:34:51.000Z | 2022-03-31T16:19:53.000Z | tests/basics/bytes_compare2.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 7,092 | 2015-01-01T07:59:11.000Z | 2022-03-31T23:52:18.000Z | tests/basics/bytes_compare2.py | sebastien-riou/micropython | 116c15842fd48ddb77b0bc016341d936a0756573 | [
"MIT"
] | 4,942 | 2015-01-02T11:48:50.000Z | 2022-03-31T19:57:10.000Z | print(b"1" == 1)
| 8.5 | 16 | 0.470588 | 4 | 17 | 2 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0.176471 | 17 | 1 | 17 | 17 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
855fab057f7ea1aabafcf455ae814b0814048bdb | 23 | py | Python | Python/Tests/TestData/Coverage/MultiModule/foo/__init__.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 404 | 2019-05-07T02:21:57.000Z | 2022-03-31T17:03:04.000Z | Python/Tests/TestData/Coverage/MultiModule/foo/__init__.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 1,672 | 2019-05-06T21:09:38.000Z | 2022-03-31T23:16:04.000Z | Python/Tests/TestData/Coverage/MultiModule/foo/__init__.py | techkey/PTVS | 8355e67eedd8e915ca49bd38a2f36172696fd903 | [
"Apache-2.0"
] | 186 | 2019-05-13T03:17:37.000Z | 2022-03-31T16:24:05.000Z | import blah
print('hi') | 11.5 | 11 | 0.73913 | 4 | 23 | 4.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 23 | 2 | 12 | 11.5 | 0.809524 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
8571f0a787d972eada06db4285598955c35c74a6 | 43 | py | Python | tushare/01 Trade/0901 tushare_version.py | CoderDream/python-best-practice | 40e6b5315daefb37c59daa1a1990ac1ae10f8cca | [
"MIT"
] | null | null | null | tushare/01 Trade/0901 tushare_version.py | CoderDream/python-best-practice | 40e6b5315daefb37c59daa1a1990ac1ae10f8cca | [
"MIT"
] | null | null | null | tushare/01 Trade/0901 tushare_version.py | CoderDream/python-best-practice | 40e6b5315daefb37c59daa1a1990ac1ae10f8cca | [
"MIT"
] | null | null | null | import tushare
print(tushare.__version__)
| 10.75 | 26 | 0.837209 | 5 | 43 | 6.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 3 | 27 | 14.333333 | 0.820513 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
a4308d413750e0aa8205eff244b4d5b1bb2d9173 | 1,357 | py | Python | constants.py | FilippoGuarda/shallow-backup | 4d46d2262d0ffb85d62d21c212bac17772b8475b | [
"MIT"
] | null | null | null | constants.py | FilippoGuarda/shallow-backup | 4d46d2262d0ffb85d62d21c212bac17772b8475b | [
"MIT"
] | null | null | null | constants.py | FilippoGuarda/shallow-backup | 4d46d2262d0ffb85d62d21c212bac17772b8475b | [
"MIT"
] | null | null | null | class Constants:
PROJECT_NAME='shallow-backup'
VERSION='1.3'
AUTHOR_GITHUB='alichtman'
AUTHOR_FULL_NAME='Aaron Lichtman'
DESCRIPTION="Easily create lightweight documentation of installed packages, dotfiles, and more."
URL='https://github.com/alichtman/shallow-backup'
AUTHOR_EMAIL='aaronlichtman@gmail.com'
CONFIG_PATH='.shallow-backup'
INVALID_DIRS = [".Trash", ".npm", ".cache", ".rvm"]
PACKAGE_MANAGERS = ["gem", "brew-cask", "cargo", "npm", "pip", "brew", "apm"]
LOGO = """
dP dP dP dP dP
88 88 88 88 88
,d8888' 88d888b. .d8888b. 88 88 .d8888b. dP dP dP 88d888b. .d8888b. .d8888b. 88 .dP dP dP 88d888b.
Y8ooooo, 88' `88 88' `88 88 88 88' `88 88 88 88 88' `88 88' `88 88' `\"\" 88888\" 88 88 88' `88
88 88 88 88. .88 88 88 88. .88 88.88b.88' 88. .88 88. .88 88. ... 88 `8b. 88. .88 88. .88
`88888P' dP dP `88888P8 dP dP `88888P' 8888P Y8P 88Y8888' `88888P8 `88888P' dP `YP `88888P' 88Y888P'
88
dP """
| 61.681818 | 117 | 0.453943 | 151 | 1,357 | 4.02649 | 0.430464 | 0.276316 | 0.355263 | 0.407895 | 0.138158 | 0.121711 | 0.121711 | 0.098684 | 0.098684 | 0.098684 | 0 | 0.250323 | 0.428887 | 1,357 | 21 | 118 | 64.619048 | 0.534194 | 0 | 0 | 0 | 0 | 0.2 | 0.823746 | 0.016962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
a430c6d2e7f23666b36bfeafc29db5a772bc9103 | 34 | py | Python | dictsheet/__init__.py | partizaans/dictsheet | fa097a57a93aed6c2ff0df8cf2d9c9419eb45f30 | [
"MIT"
] | 1 | 2016-08-24T14:15:25.000Z | 2016-08-24T14:15:25.000Z | dictsheet/__init__.py | partizaans/dictsheet | fa097a57a93aed6c2ff0df8cf2d9c9419eb45f30 | [
"MIT"
] | 1 | 2021-04-30T20:35:46.000Z | 2021-04-30T20:35:46.000Z | dictsheet/__init__.py | partizaans/dictsheet | fa097a57a93aed6c2ff0df8cf2d9c9419eb45f30 | [
"MIT"
] | 1 | 2016-08-03T05:46:25.000Z | 2016-08-03T05:46:25.000Z | from .dictsheet import DictSheet
| 11.333333 | 32 | 0.823529 | 4 | 34 | 7 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 34 | 2 | 33 | 17 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a438ae02bc325d806b25debc3b2f4b50d43a1a1f | 3,946 | py | Python | tests/objects/server/test_runmode.py | OwenMcDonnell/pyiron_base | 16230ee6d456c5e01bc832673461b19bf3024e87 | [
"BSD-3-Clause"
] | null | null | null | tests/objects/server/test_runmode.py | OwenMcDonnell/pyiron_base | 16230ee6d456c5e01bc832673461b19bf3024e87 | [
"BSD-3-Clause"
] | null | null | null | tests/objects/server/test_runmode.py | OwenMcDonnell/pyiron_base | 16230ee6d456c5e01bc832673461b19bf3024e87 | [
"BSD-3-Clause"
] | null | null | null | import unittest
from pyiron_base.objects.server.runmode import Runmode
class TestRunmode(unittest.TestCase):
def setUp(self):
self.run_mode_default = Runmode()
self.run_mode_modal = Runmode()
self.run_mode_modal.mode = 'modal'
self.run_mode_non_modal = Runmode()
self.run_mode_non_modal.mode = 'non_modal'
self.run_mode_queue = Runmode()
self.run_mode_queue.mode = 'queue'
self.run_mode_manual = Runmode()
self.run_mode_manual.mode = 'manual'
def test_modal(self):
self.assertTrue(self.run_mode_default.modal)
self.assertTrue(self.run_mode_modal.modal)
self.assertFalse(self.run_mode_non_modal.modal)
self.assertFalse(self.run_mode_queue.modal)
self.assertFalse(self.run_mode_manual.modal)
def test_non_modal(self):
self.assertFalse(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_modal.non_modal)
self.assertTrue(self.run_mode_non_modal.non_modal)
self.assertFalse(self.run_mode_queue.non_modal)
self.assertFalse(self.run_mode_manual.non_modal)
def test_queue(self):
self.assertFalse(self.run_mode_default.queue)
self.assertFalse(self.run_mode_modal.queue)
self.assertFalse(self.run_mode_non_modal.queue)
self.assertTrue(self.run_mode_queue.queue)
self.assertFalse(self.run_mode_manual.queue)
def test_manual(self):
self.assertFalse(self.run_mode_default.manual)
self.assertFalse(self.run_mode_modal.manual)
self.assertFalse(self.run_mode_non_modal.manual)
self.assertFalse(self.run_mode_queue.manual)
self.assertTrue(self.run_mode_manual.manual)
def test_mode(self):
self.run_mode_default.mode = 'non_modal'
self.assertFalse(self.run_mode_default.modal)
self.assertTrue(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_default.queue)
self.assertFalse(self.run_mode_default.manual)
self.run_mode_default.mode = 'queue'
self.assertFalse(self.run_mode_default.modal)
self.assertFalse(self.run_mode_default.non_modal)
self.assertTrue(self.run_mode_default.queue)
self.assertFalse(self.run_mode_default.manual)
self.run_mode_default.mode = 'manual'
self.assertFalse(self.run_mode_default.modal)
self.assertFalse(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_default.queue)
self.assertTrue(self.run_mode_default.manual)
self.run_mode_default.mode = 'modal'
self.assertTrue(self.run_mode_default.modal)
self.assertFalse(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_default.queue)
self.assertFalse(self.run_mode_default.manual)
def test_setter(self):
self.run_mode_default.non_modal = True
self.assertFalse(self.run_mode_default.modal)
self.assertTrue(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_default.queue)
self.assertFalse(self.run_mode_default.manual)
self.run_mode_default.queue = True
self.assertFalse(self.run_mode_default.modal)
self.assertFalse(self.run_mode_default.non_modal)
self.assertTrue(self.run_mode_default.queue)
self.assertFalse(self.run_mode_default.manual)
self.run_mode_default.manual = True
self.assertFalse(self.run_mode_default.modal)
self.assertFalse(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_default.queue)
self.assertTrue(self.run_mode_default.manual)
self.run_mode_default.modal = True
self.assertTrue(self.run_mode_default.modal)
self.assertFalse(self.run_mode_default.non_modal)
self.assertFalse(self.run_mode_default.queue)
self.assertFalse(self.run_mode_default.manual)
if __name__ == '__main__':
unittest.main() | 43.362637 | 58 | 0.723517 | 530 | 3,946 | 5.054717 | 0.054717 | 0.180291 | 0.283315 | 0.302352 | 0.862635 | 0.790967 | 0.738335 | 0.60321 | 0.578947 | 0.577454 | 0 | 0 | 0.183224 | 3,946 | 91 | 59 | 43.362637 | 0.831213 | 0 | 0 | 0.444444 | 0 | 0 | 0.014695 | 0 | 0 | 0 | 0 | 0 | 0.641975 | 1 | 0.08642 | false | 0 | 0.024691 | 0 | 0.123457 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a4751d122669933b9c4fb3c07a2ef2d69ab7979d | 29 | py | Python | deep_rl/models/__init__.py | mscott99/DeepRL_FDR | 284a84809992ce339ad84c175bb77e8df21ae972 | [
"MIT"
] | 1 | 2020-12-09T19:07:33.000Z | 2020-12-09T19:07:33.000Z | deep_rl/models/__init__.py | mscott99/DeepRL_FDR | 284a84809992ce339ad84c175bb77e8df21ae972 | [
"MIT"
] | null | null | null | deep_rl/models/__init__.py | mscott99/DeepRL_FDR | 284a84809992ce339ad84c175bb77e8df21ae972 | [
"MIT"
] | null | null | null | from .a2c_models import *
| 5.8 | 25 | 0.689655 | 4 | 29 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 0.241379 | 29 | 4 | 26 | 7.25 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f10d368a8848e4b007eef21f277aa7384214e793 | 161 | py | Python | ansible_filter/omit.py | nl2go/ansible-filter-hetzner | 9984bc83d96acb06cdb6443b3ae1d13671510b36 | [
"MIT"
] | 2 | 2020-01-28T12:13:09.000Z | 2020-02-12T01:38:44.000Z | ansible_filter/omit.py | nl2go/ansible-filter-hetzner | 9984bc83d96acb06cdb6443b3ae1d13671510b36 | [
"MIT"
] | 1 | 2020-01-31T14:25:21.000Z | 2020-01-31T14:25:21.000Z | ansible_filter/omit.py | nl2go/ansible-filter | 9984bc83d96acb06cdb6443b3ae1d13671510b36 | [
"MIT"
] | null | null | null | #!/usr/bin/python
from ansible_filter.helpers import filter_object, TYPE_OMIT
def omit(obj, attributes):
return filter_object(obj, attributes, TYPE_OMIT)
| 20.125 | 59 | 0.782609 | 23 | 161 | 5.26087 | 0.652174 | 0.198347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.124224 | 161 | 7 | 60 | 23 | 0.858156 | 0.099379 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f133f10a9b95376bb11f56ebbb965cb4e5b8fa42 | 70 | py | Python | minipgm/__init__.py | zermelozf/minipgm | 4be0a1a94b9744870213dbfe8e0e7c05a622a1f5 | [
"BSD-3-Clause"
] | null | null | null | minipgm/__init__.py | zermelozf/minipgm | 4be0a1a94b9744870213dbfe8e0e7c05a622a1f5 | [
"BSD-3-Clause"
] | null | null | null | minipgm/__init__.py | zermelozf/minipgm | 4be0a1a94b9744870213dbfe8e0e7c05a622a1f5 | [
"BSD-3-Clause"
] | null | null | null | from .variables import *
from .models import *
from .samplers import * | 23.333333 | 24 | 0.757143 | 9 | 70 | 5.888889 | 0.555556 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.157143 | 70 | 3 | 25 | 23.333333 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
74cc1bdc00c53cac3dea1a2e9261a8e3b402bb1e | 43 | py | Python | src/data/__init__.py | dbdmg/robust-speech-challenge | 8c12ba595f7b5497aa696cc148b5d6bfa3170328 | [
"MIT"
] | null | null | null | src/data/__init__.py | dbdmg/robust-speech-challenge | 8c12ba595f7b5497aa696cc148b5d6bfa3170328 | [
"MIT"
] | null | null | null | src/data/__init__.py | dbdmg/robust-speech-challenge | 8c12ba595f7b5497aa696cc148b5d6bfa3170328 | [
"MIT"
] | 1 | 2022-02-04T11:27:41.000Z | 2022-02-04T11:27:41.000Z | from .normalization import normalize_string | 43 | 43 | 0.906977 | 5 | 43 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 43 | 1 | 43 | 43 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2d3a61247fb202eb773fc1b04fc1bdfff963a91b | 45 | py | Python | pyfacebook/api/__init__.py | sns-sdks/python-facebook | 9536d30393dee8b2a887b81f103d76262a677eee | [
"Apache-2.0"
] | 181 | 2019-08-28T10:03:49.000Z | 2022-03-26T19:36:05.000Z | pyfacebook/api/__init__.py | sns-sdks/python-facebook | 9536d30393dee8b2a887b81f103d76262a677eee | [
"Apache-2.0"
] | 159 | 2019-08-28T10:07:43.000Z | 2022-03-30T16:42:23.000Z | pyfacebook/api/__init__.py | sns-sdks/python-facebook | 9536d30393dee8b2a887b81f103d76262a677eee | [
"Apache-2.0"
] | 40 | 2019-09-10T20:12:47.000Z | 2022-03-12T16:16:46.000Z | from .graph import GraphAPI, BasicDisplayAPI
| 22.5 | 44 | 0.844444 | 5 | 45 | 7.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 1 | 45 | 45 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
744fe03004540238b376543881c70148344bbca6 | 10,957 | py | Python | backend/tests/task_test.py | ElPapi42/flask-todo | 526c5c397068754bfb67aac417558cbb7e640dfb | [
"MIT"
] | 1 | 2021-07-30T11:24:59.000Z | 2021-07-30T11:24:59.000Z | backend/tests/task_test.py | ElPapi42/flask-todo | 526c5c397068754bfb67aac417558cbb7e640dfb | [
"MIT"
] | 5 | 2019-12-29T15:31:31.000Z | 2021-05-10T22:25:27.000Z | backend/tests/task_test.py | ElPapi42/flask-todo | 526c5c397068754bfb67aac417558cbb7e640dfb | [
"MIT"
] | 8 | 2019-12-22T18:33:50.000Z | 2019-12-28T20:13:21.000Z | import uuid
from flask import Response
from .test_fixtures import instance, user, admin
"""
def test_create_task(instance, user):
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
assert response.status_code == 201
assert response.json.get("description") == "this is a test task"
"""
def test_create_task_without_description(instance, user):
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 02",
},
headers={"Authorization": user.get("token")}
)
assert response.status_code == 201
assert response.json.get("description") == ""
def test_create_task_without_authorization(instance, user, admin):
response = instance.post(
"/api/users/{}/tasks/".format(admin.get("id")),
data={
"title": "test task 03",
},
headers={"Authorization": user.get("token")}
)
assert response.status_code == 403
assert response.is_json == True
def test_create_task_without_token(instance, user):
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task",
}
)
assert response.status_code == 401
assert response.is_json == True
def test_get_tasks(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
response = instance.get(
"/api/users/current/tasks/",
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
assert response.json[0].get("description") == "this is a test task"
def test_get_complete_tasks(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
task_id = response.json.get("id")
response = instance.put(
"/api/users/current/tasks/{}/".format(task_id),
data={
"done": "True"
},
headers={"Authorization": user.get("token")}
)
response = instance.get(
"/api/users/current/tasks/?done={}".format("1"),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
assert response.json[0].get("description") == "this is a test task"
def test_get_uncomplete_tasks(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
response = instance.get(
"/api/users/current/tasks/?done={}".format("0"),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
assert response.json[0].get("description") == "this is a test task"
def test_get_zero_complete_tasks(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
response = instance.get(
"/api/users/current/tasks/?done={}".format("1"),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
assert len(response.json) == 0
def test_get_zero_uncomplete_tasks(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
task_id = response.json.get("id")
response = instance.put(
"/api/users/current/tasks/{}/".format(task_id),
data={
"done": "True"
},
headers={"Authorization": user.get("token")}
)
response = instance.get(
"/api/users/current/tasks/?done={}".format("0"),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
assert len(response.json) == 0
def test_get_tasks_without_authorization(instance, user, admin):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": admin.get("token")}
)
response = instance.get(
"/api/users/{}/tasks/".format(admin.get("id")),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 403
assert response.is_json == True
def test_get_tasks_of_other_user_as_admin(instance, admin, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
response = instance.get(
"/api/users/{}/tasks/".format(user.get("id")),
headers={"Authorization": admin.get("token")}
)
assert response.status_code == 200
assert response.json[0].get("description") == "this is a test task"
def test_get_task_by_id(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
task_id = response.json.get("id")
response = instance.get(
"/api/users/current/tasks/{}/".format(task_id),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
assert response.json.get("description") == "this is a test task"
def test_get_task_by_unregistered_id(instance, user):
# Generate Dummy id
task_id = uuid.uuid4()
response = instance.get(
"/api/users/current/tasks/{}/".format(task_id),
headers={"Authorization": user.get("token")}
)
assert response.status_code == 404
assert response.is_json == True
def test_get_task_by_invalid_id(instance, user):
response = instance.get(
"/api/users/current/tasks/nonuuid/",
headers={"Authorization": user.get("token")}
)
assert response.status_code == 422
assert response.is_json == True
def test_update_task_by_id(instance, user):
# Create Task
response = instance.post(
"/api/users/current/tasks/",
data={
"title": "test task 01",
"description": "this is a test task",
},
headers={"Authorization": user.get("token")}
)
task_id = response.json.get("id")
response = instance.put(
"/api/users/current/tasks/{}/".format(task_id),
data={
"title": "test task 99",
"description": "this is not a test task",
},
headers={"Authorization": user.get("token")}
)
assert response.status_code == 200
    assert response.json.get("description") == "this is not a test task"
    assert response.json.get("title") == "test task 99"


def test_update_task_by_id_as_complete(instance, user):
    # Create Task
    response = instance.post(
        "/api/users/current/tasks/",
        data={
            "title": "test task 01",
            "description": "this is a test task",
        },
        headers={"Authorization": user.get("token")},
    )
    task_id = response.json.get("id")
    # Mark it as complete
    response = instance.put(
        "/api/users/current/tasks/{}/".format(task_id),
        data={"done": True},
        headers={"Authorization": user.get("token")},
    )
    assert response.status_code == 200
    assert response.json.get("done") is True


def test_update_task_by_id_as_incomplete(instance, user):
    # Create Task
    response = instance.post(
        "/api/users/current/tasks/",
        data={
            "title": "test task 01",
            "description": "this is a test task",
        },
        headers={"Authorization": user.get("token")},
    )
    task_id = response.json.get("id")
    # Mark it as complete, then flip it back to incomplete
    response = instance.put(
        "/api/users/current/tasks/{}/".format(task_id),
        data={"done": True},
        headers={"Authorization": user.get("token")},
    )
    response = instance.put(
        "/api/users/current/tasks/{}/".format(task_id),
        data={"done": False},
        headers={"Authorization": user.get("token")},
    )
    assert response.status_code == 200
    assert response.json.get("done") is False


def test_update_task_by_id_with_bad_authorization(instance, user):
    # Create Task
    response = instance.post(
        "/api/users/current/tasks/",
        data={
            "title": "test task 01",
            "description": "this is a test task",
        },
        headers={"Authorization": user.get("token")},
    )
    task_id = response.json.get("id")
    # A truncated token must be rejected with 401
    response = instance.put(
        "/api/users/current/tasks/{}/".format(task_id),
        data={
            "title": "test task 99",
            "description": "this is not a test task",
        },
        headers={"Authorization": user.get("token")[:-1]},
    )
    assert response.status_code == 401
    assert response.is_json is True


def test_delete_task(instance, user):
    # Create Task
    response = instance.post(
        "/api/users/current/tasks/",
        data={
            "title": "test task 01",
            "description": "this is a test task",
        },
        headers={"Authorization": user.get("token")},
    )
    task_id = response.json.get("id")
    response = instance.delete(
        "/api/users/current/tasks/{}/".format(task_id),
        headers={"Authorization": user.get("token")},
    )
    assert response.status_code == 204


def test_delete_other_user_task(instance, user, admin):
    # Create a task as admin, then try to delete it as a regular user
    response = instance.post(
        "/api/users/current/tasks/",
        data={
            "title": "test task 01",
            "description": "this is a test task",
        },
        headers={"Authorization": admin.get("token")},
    )
    task_id = response.json.get("id")
    response = instance.delete(
        "/api/users/{}/tasks/{}/".format(admin.get("id"), task_id),
        headers={"Authorization": user.get("token")},
    )
    assert response.status_code == 403
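These tests depend on `instance`, `user`, and `admin` fixtures defined elsewhere in the suite (presumably a Flask-style test client plus seeded accounts with tokens). As a rough illustration of the request/response contract the tests assume, here is a minimal in-memory stand-in; `FakeResponse`, `FakeClient`, and the 201 status for creation are assumptions, not part of the project:

```python
import itertools


class FakeResponse:
    """Mimics the attributes these tests read from a response."""
    def __init__(self, status_code, json_body):
        self.status_code = status_code
        self.json = json_body      # the tests call response.json.get(...)
        self.is_json = True


class FakeClient:
    """In-memory stand-in for the `instance` fixture (hypothetical)."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._tasks = {}

    def post(self, path, data=None, headers=None):
        task = dict(data or {}, id=next(self._ids), done=False)
        self._tasks[task["id"]] = task
        return FakeResponse(201, task)

    def put(self, path, data=None, headers=None):
        task_id = int(path.rstrip("/").rsplit("/", 1)[-1])
        self._tasks[task_id].update(data or {})
        return FakeResponse(200, self._tasks[task_id])

    def delete(self, path, headers=None):
        task_id = int(path.rstrip("/").rsplit("/", 1)[-1])
        del self._tasks[task_id]
        return FakeResponse(204, {})
```

The real fixtures (test app setup, token issuance, the 401/403 authorization paths) are not shown in this excerpt.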
# src/indexer/__init__.py (2kodevs/Search-Engine, MIT license)
from .Indexer import Indexer
#!/usr/bin/python3
# vpbuf/tests/vectors/python/vectors.py (markraley/versioned-polymorphic-buffers)
# Software released under the MIT license (see project root for license file)

from pyamf import amf3

from vp_vectors.persist import *
from vp_vectors.vp_vectors import *

# ------------------------------------------------------------------------------


def buffer_to_file(file_name, encoder):
    output_buffer = amf3.DataOutput(encoder)
    data = output_buffer.stream.getvalue()
    with open(file_name, "wb") as f:
        f.write(data)
    return len(data)


def file_to_buffer(file_name):
    with open(file_name, "rb") as f_in:
        return amf3.util.BufferedByteStream(f_in)


out_dir = "./out/"
file_ext = ".dat"

# ------------------------------------------------------------------------------


class write_context:
    def __init__(self, ver=1):
        self.stream = amf3.util.BufferedByteStream()
        self.encoder = amf3.Encoder(self.stream)
        self.encoder.string_references = False  # disables string caching
        self.reorder_map = {}
        self.set_version(ver)

    def set_version(self, ver):
        self.ver = ver
        init_reorder_map(self.reorder_map, ver, 1)


class read_context:
    def __init__(self, test_name, ver=1):
        self.istream = file_to_buffer(out_dir + test_name + file_ext)
        self.decoder = amf3.Decoder(self.istream)
        self.bytes_read = len(self.istream)
        print(test_name, len(self.istream), 'bytes read')
        self.reorder_map = {}
        self.set_version(ver)

    def set_version(self, ver):
        self.ver = ver
        init_reorder_map(self.reorder_map, ver, 1)
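The `buffer_to_file`/`file_to_buffer` pair simply dumps an encoder's byte stream to disk and reads it back for decoding. The same write-then-reload round trip can be sketched with only the standard library (no pyamf), which is useful for checking that the file plumbing itself is lossless; the function names below are illustrative, not part of vpbuf:

```python
import io
import os
import tempfile


def bytes_to_file(file_name, stream):
    """Write an in-memory byte stream to disk; return the byte count."""
    data = stream.getvalue()
    with open(file_name, "wb") as f:
        f.write(data)
    return len(data)


def file_to_bytes(file_name):
    """Read a file back into an in-memory stream for decoding."""
    with open(file_name, "rb") as f:
        return io.BytesIO(f.read())


# Round trip: bytes written must equal bytes read back.
path = os.path.join(tempfile.gettempdir(), "roundtrip.dat")
written = bytes_to_file(path, io.BytesIO(b"\x01\x02\x03"))
restored = file_to_bytes(path).getvalue()
assert written == len(restored)
```

Every test class below relies on exactly this invariant: `bytes_read == bytes_written`.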
# ------------------------------------------------------------------------------
# Round-trip harness shared by all the vector tests below: fill a container,
# serialize it after a Header, deserialize into a fresh container, and compare
# the two field by field.
class VectorRoundTrip:
    def __init__(self, test_name, outer_factory, writer, reader,
                 count, base, version, make_items, compare_attrs):
        self.test_name = test_name
        self.writer = writer
        self.reader = reader
        self.count = count
        self.base = base
        self.version = version
        self.make_items = make_items      # j -> list of elements to append
        self.compare_attrs = compare_attrs  # compare .i1/.s1 instead of ==
        self.h1 = Header()  # initialized and then serialized
        self.o1 = outer_factory()
        self.h2 = Header()  # deserialized and validated
        self.o2 = outer_factory()
        self.bytes_read = 0
        self.bytes_written = 0

    # initialize the data structures under test and serialize
    def serialize(self):
        wc = write_context()
        self.h1.version = self.version
        self.h1.test_name = self.test_name
        for i in range(0, self.count):
            self.o1.v.extend(self.make_items(self.base + i))
        write_Header(wc, self.h1)
        self.writer(wc, self.o1)
        out_file = out_dir + self.test_name + file_ext
        self.bytes_written = buffer_to_file(out_file, wc.encoder)
        print(self.test_name, self.bytes_written, 'bytes written')

    # deserialize the data structures under test
    def load(self):
        rc = read_context(self.test_name)
        self.h2 = read_Header(rc)
        self.o2 = self.reader(rc)
        self.bytes_read = rc.bytes_read

    # compare the serialized and deserialized data structures against each other
    def validate(self):
        assert(self.h1.version == self.h2.version)
        assert(self.h1.test_name == self.h2.test_name)
        assert(self.bytes_read == self.bytes_written)
        assert(len(self.o1.v) == len(self.o2.v))
        for v1, v2 in zip(self.o1.v, self.o2.v):
            if self.compare_attrs:
                assert(v1.i1 == v2.i1)
                assert(v1.s1 == v2.s1)
            else:
                assert(v1 == v2)


tests = [
    # vectors_A - test vector of pointer to struct
    VectorRoundTrip('vectors_A', OuterA, write_OuterA, read_OuterA, 10, 50000, -99,
                    lambda j: [A(j, 'A-' + str(j))], compare_attrs=True),
    # vectors_B - test vector of struct
    VectorRoundTrip('vectors_B', OuterB, write_OuterB, read_OuterB, 10, 50000, -99,
                    lambda j: [A(j, 'A-' + str(j))], compare_attrs=True),
    # vectors_C - test vector of string
    VectorRoundTrip('vectors_C', OuterC, write_OuterC, read_OuterC, 13, 113, 101,
                    lambda j: ['C-' + str(j)], compare_attrs=False),
    # vectors_D - test vector of pointer to string
    VectorRoundTrip('vectors_D', OuterD, write_OuterD, read_OuterD, 13, 113, 101,
                    lambda j: ['D-' + str(j)], compare_attrs=False),
    # vectors_E - test vector of integer
    VectorRoundTrip('vectors_E', OuterE, write_OuterE, read_OuterE, 76, 112, 1121,
                    lambda j: [j], compare_attrs=False),
    # vectors_F - test vector of pointer to integer
    # (originally built an OuterE; OuterF matches write_OuterF/read_OuterF)
    VectorRoundTrip('vectors_F', OuterF, write_OuterF, read_OuterF, 76, 112, 1121,
                    lambda j: [j], compare_attrs=False),
    # vectors_G - test vector of pointer to polymorphic class
    VectorRoundTrip('vectors_G', OuterG, write_OuterG, read_OuterG, 17, 500, 1,
                    lambda j: [Derived1(j, 'D1-' + str(j)), Derived2(j, 'D2-' + str(j))],
                    compare_attrs=True),
]

for test in tests:
    test.serialize()
    test.load()
    test.validate()

# ------------------------------------------------------------------------------
# tfcaps/losses/__init__.py (ericup/tensorflow-capsules, Apache-2.0 license)
from .losses import capsule_net_loss, margin, margin_loss, reconstruction_loss, logarithmic_margin_loss
# passport_check/models.py (fd239/gelios_services, Apache-2.0 license)
from django.core.validators import RegexValidator
from django.db import models


class Passport(models.Model):
    series = models.CharField(max_length=4)
    number = models.CharField(max_length=6)
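`RegexValidator` is imported but not yet attached to either field. If the intent is a digit-only passport format (4-digit series, 6-digit number), the patterns would look like this; the exact format requirement is an assumption, so treat the regexes as illustrative:

```python
import re

# Candidate patterns for the validators (assumed digit-only format).
SERIES_RE = re.compile(r"^\d{4}$")
NUMBER_RE = re.compile(r"^\d{6}$")


def is_valid_passport(series, number):
    """True when both parts match the assumed digit-only format."""
    return bool(SERIES_RE.match(series)) and bool(NUMBER_RE.match(number))
```

On the model these would become e.g. `validators=[RegexValidator(r'^\d{4}$')]` on `series` and the 6-digit pattern on `number`.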
# cogs/logging.py (ThisIsanAlt/AlternativeBot, CC-BY-4.0 license)
import discord
import asyncio
import datetime

import aiosqlite
from discord.ext import commands, tasks, menus
from discord.ext.commands.cooldowns import BucketType
'''
Table definitions:

TABLE MUTEDROLES, columns (ServerID, RoleID)

TABLE LOGGING, columns (ServerID, LoggingToggle, LoggingChannelID,
    OnMsgDeleteToggle, OnBulkMsgDeleteToggle, OnMsgEditToggle,
    OnReactionClearToggle, OnChannelCreateDeleteToggle, OnChannelEditToggle,
    OnMemberJoinToggle, OnMemberLeaveToggle, OnMemberEditToggle,
    OnGuildEditToggle, OnGuildRoleCreateDeleteToggle, OnGuildRoleUpdateToggle,
    OnMemberBanUnbanToggle, OnMemberKickToggle, OnGuildInviteCreateDeleteToggle)
'''
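The LOGGING table described above has to exist before the SELECT/UPDATE queries in this cog run. A minimal creation sketch with the standard-library `sqlite3` (the cog itself uses `aiosqlite`, whose SQL is identical); the INTEGER column types are an assumption consistent with the boolean toggles stored below:

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS LOGGING (
    ServerID INTEGER PRIMARY KEY,
    LoggingToggle INTEGER, LoggingChannelID INTEGER,
    OnMsgDeleteToggle INTEGER, OnBulkMsgDeleteToggle INTEGER, OnMsgEditToggle INTEGER,
    OnReactionClearToggle INTEGER, OnChannelCreateDeleteToggle INTEGER, OnChannelEditToggle INTEGER,
    OnMemberJoinToggle INTEGER, OnMemberLeaveToggle INTEGER, OnMemberEditToggle INTEGER,
    OnGuildEditToggle INTEGER, OnGuildRoleCreateDeleteToggle INTEGER, OnGuildRoleUpdateToggle INTEGER,
    OnMemberBanUnbanToggle INTEGER, OnMemberKickToggle INTEGER, OnGuildInviteCreateDeleteToggle INTEGER
)
"""

connection = sqlite3.connect(":memory:")   # the bot uses 'AltBotDataBase.db'
connection.execute(SCHEMA)
connection.execute("INSERT INTO LOGGING (ServerID, LoggingToggle) VALUES (?, ?)", (1, 1))
row = connection.execute(
    "SELECT LoggingToggle FROM LOGGING WHERE ServerID = ?", (1,)).fetchone()
```

Running a guard like this once at startup keeps every per-guild query in the menu handlers from failing on a missing table.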
class MyMenu(menus.Menu):
async def return_values(self):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT * FROM LOGGING WHERE ServerID = ?', (self.guild_id,))
info = await cursor.fetchone()
await connection.close()
listtoreturn = []
if info[1] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :play_pause: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :play_pause: reaction.')
if info[3] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :one: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :one: reaction.')
if info[4] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :two: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :two: reaction.')
if info[5] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :three: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :three: reaction.')
if info[6] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :one: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :one: reaction.')
if info[7] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :two: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :two: reaction.')
if info[8] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :three: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :three: reaction.')
if info[9] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :one: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :one: reaction.')
if info[10] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :two: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :two: reaction.')
if info[11] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :three: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :three: reaction.')
if info[12] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :one: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :one: reaction.')
if info[13] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :two: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :two: reaction.')
if info[14] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :three: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :three: reaction.')
if info[15] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :one: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :one: reaction.')
if info[16] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :two: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :two: reaction.')
if info[17] == True:
listtoreturn.append(':white_check_mark: Enabled. Toggle with the :three: reaction.')
else:
listtoreturn.append(':x: Disabled. Toggle with the :three: reaction.')
return tuple(listtoreturn)
async def send_initial_message(self, ctx, channel):
self.menupage = 1
self.guild_name = ctx.guild.name
self.guild_id = ctx.guild.id
self.embed = discord.Embed (title=f'Logging Toggles for {ctx.guild.name}', description='Use the reactions to navigate through the available options!')
info = await self.return_values()
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On message delete toggle:', value = info[1])
self.embed.add_field(name='On bulk message delete toggle:', value = info[2])
self.embed.add_field(name='On message edit toggle:', value = info[3])
self.embed.set_footer(text='Page 1/5: Support: https://discord.gg/33utPs9')
return await channel.send(embed=self.embed)
@menus.button('\U000023ef\U0000fe0f')
async def maintoggle(self, payload):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT LoggingToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :play_pause: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :play_pause: reaction.'
await cursor.execute('UPDATE LOGGING SET LoggingToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(0, name='Main logging toggle:', value=enabled)
self.embed.remove_field(1)
await self.message.edit(embed=self.embed)
await connection.commit()
@menus.button('\U00000031\U0000fe0f\U000020e3')
async def on_one(self, payload):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
if self.menupage == 1:
await cursor.execute('SELECT OnMsgDeleteToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :one: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :one: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMsgDeleteToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(1, name='On message delete toggle:', value=enabled)
elif self.menupage == 2:
await cursor.execute('SELECT OnReactionClearToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :one: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :one: reaction.'
await cursor.execute('UPDATE LOGGING SET OnReactionClearToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(1, name='On reaction clear toggle:', value=enabled)
elif self.menupage == 3:
await cursor.execute('SELECT OnMemberJoinToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :one: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :one: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMemberJoinToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(1, name='On member join toggle:', value=enabled)
elif self.menupage == 4:
await cursor.execute('SELECT OnGuildEditToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :one: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :one: reaction.'
await cursor.execute('UPDATE LOGGING SET OnGuildEditToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(1, name='On guild edit toggle:', value=enabled)
else:
await cursor.execute('SELECT OnMemberBanUnbanToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :one: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :one: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMemberBanUnbanToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(1, name='On member ban/unban toggle:', value=enabled)
self.embed.remove_field(2)
await self.message.edit(embed=self.embed)
await connection.commit()
@menus.button('\U00000032\U0000fe0f\U000020e3')
async def on_two(self, payload):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
if self.menupage == 1:
await cursor.execute('SELECT OnBulkMsgDeleteToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :two: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnBulkMsgDeleteToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(2, name='On bulk message delete toggle:', value=enabled)
self.embed.remove_field(3)
await self.message.edit(embed=self.embed)
await connection.commit()
elif self.menupage == 2:
await cursor.execute('SELECT OnChannelCreateDeleteToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :two: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnChannelCreateDeleteToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(2, name='On channel create/delete toggle:', value=enabled)
self.embed.remove_field(3)
await self.message.edit(embed=self.embed)
await connection.commit()
elif self.menupage == 3:
await cursor.execute('SELECT OnMemberLeaveToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :two: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMemberLeaveToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(2, name='On member leave toggle:', value=enabled)
self.embed.remove_field(3)
await self.message.edit(embed=self.embed)
elif self.menupage == 4:
await cursor.execute('SELECT OnGuildRoleCreateDeleteToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :two: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnGuildRoleCreateDeleteToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(2, name='On role create/delete toggle:', value=enabled)
self.embed.remove_field(3)
await self.message.edit(embed=self.embed)
else:
await cursor.execute('SELECT OnMemberKickToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :two: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMemberKickToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(2, name='On member kick toggle:', value=enabled)
self.embed.remove_field(3)
await self.message.edit(embed=self.embed)
@menus.button('\U00000033\U0000fe0f\U000020e3')
async def on_three(self, payload):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
if self.menupage == 1:
await cursor.execute('SELECT OnMsgEditToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :three: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMsgEditToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(3, name='On message edit toggle:', value=enabled)
elif self.menupage == 2:
await cursor.execute('SELECT OnChannelEditToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :three: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnChannelEditToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(3, name='On channel edit toggle:', value=enabled)
elif self.menupage == 3:
await cursor.execute('SELECT OnMemberEditToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if info is None or info == False:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :two: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :two: reaction.'
await cursor.execute('UPDATE LOGGING SET OnMemberEditToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(3, name='On member edit toggle:', value=enabled)
elif self.menupage == 4:
await cursor.execute('SELECT OnGuildRoleUpdateToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if not info:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :three: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :three: reaction.'
await cursor.execute('UPDATE LOGGING SET OnGuildRoleUpdateToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(3, name='On role edit toggle:', value=enabled)
else:
await cursor.execute('SELECT OnGuildInviteCreateDeleteToggle FROM LOGGING WHERE ServerID = ?', (payload.guild_id,))
info = await cursor.fetchone()
info = info[0]
if not info:
info = True
enabled = ':white_check_mark: Enabled. Toggle with the :three: reaction.'
else:
info = False
enabled = ':x: Disabled. Toggle with the :three: reaction.'
await cursor.execute('UPDATE LOGGING SET OnGuildInviteCreateDeleteToggle = ? WHERE ServerID = ?', (info, payload.guild_id))
self.embed.insert_field_at(3, name='On invite create/delete toggle:', value=enabled)
self.embed.remove_field(4)
await self.message.edit(embed=self.embed)
await connection.commit()
await connection.close()
@menus.button('\U000025c0\U0000fe0f')
async def on_left(self, payload):
self.embed.clear_fields()
info = await self.return_values()
if self.menupage == 1:
self.menupage = 5
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On member ban toggle:', value = info[13])
self.embed.add_field(name='On member kick toggle:', value = info[14])
self.embed.add_field(name='On guild invite create toggle:', value = info[15])
self.embed.set_footer(text='Page 5/5: Support: https://discord.gg/33utPs9')
elif self.menupage == 2:
self.menupage = 1
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On message delete toggle:', value = info[1])
self.embed.add_field(name='On bulk message delete toggle:', value = info[2])
self.embed.add_field(name='On message edit toggle:', value = info[3])
self.embed.set_footer(text='Page 1/5: Support: https://discord.gg/33utPs9')
elif self.menupage == 3:
self.menupage = 2
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On reaction clear toggle:', value = info[4])
self.embed.add_field(name='On channel create/delete toggle:', value = info[5])
self.embed.add_field(name='On channel edit toggle:', value = info[6])
self.embed.set_footer(text='Page 2/5: Support: https://discord.gg/33utPs9')
elif self.menupage == 4:
self.menupage = 3
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On member join toggle:', value = info[7])
self.embed.add_field(name='On member leave toggle:', value = info[8])
self.embed.add_field(name='On member edit toggle:', value = info[9])
self.embed.set_footer(text='Page 3/5: Support: https://discord.gg/33utPs9')
else:
self.menupage = 4
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On guild edit toggle:', value = info[10])
self.embed.add_field(name='On role create/delete toggle:', value = info[11])
self.embed.add_field(name='On role edit toggle:', value = info[12])
self.embed.set_footer(text='Page 4/5: Support: https://discord.gg/33utPs9')
await self.message.edit(embed = self.embed)
@menus.button('\U000025b6\U0000fe0f')
async def on_right(self, payload):
self.embed.clear_fields()
info = await self.return_values()
if self.menupage == 4:
self.menupage = 5
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On member ban toggle:', value = info[13])
self.embed.add_field(name='On member kick toggle:', value = info[14])
self.embed.add_field(name='On guild invite create toggle:', value = info[15])
self.embed.set_footer(text='Page 5/5: Support: https://discord.gg/33utPs9')
elif self.menupage == 5:
self.menupage = 1
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On message delete toggle:', value = info[1])
self.embed.add_field(name='On bulk message delete toggle:', value = info[2])
self.embed.add_field(name='On message edit toggle:', value = info[3])
self.embed.set_footer(text='Page 1/5: Support: https://discord.gg/33utPs9')
elif self.menupage == 1:
self.menupage = 2
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On reaction clear toggle:', value = info[4])
self.embed.add_field(name='On channel create/delete toggle:', value = info[5])
self.embed.add_field(name='On channel edit toggle:', value = info[6])
self.embed.set_footer(text='Page 2/5: Support: https://discord.gg/33utPs9')
elif self.menupage == 2:
self.menupage = 3
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On member join toggle:', value = info[7])
self.embed.add_field(name='On member leave toggle:', value = info[8])
self.embed.add_field(name='On member edit toggle:', value = info[9])
self.embed.set_footer(text='Page 3/5: Support: https://discord.gg/33utPs9')
else:
self.menupage = 4
self.embed.add_field(name='Main logging toggle:', value = info[0])
self.embed.add_field(name='On guild edit toggle:', value = info[10])
self.embed.add_field(name='On role create/delete toggle:', value = info[11])
self.embed.add_field(name='On role edit toggle:', value = info[12])
self.embed.set_footer(text='Page 4/5: Support: https://discord.gg/33utPs9')
await self.message.edit(embed = self.embed)
@menus.button('\U000023f9\U0000fe0f')
async def on_stop(self, payload):
await self.message.delete()
self.stop()
class Logging(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.guild_only()
@commands.has_permissions(manage_guild=True)
@commands.command()
async def togglelogging(self, ctx):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT * FROM LOGGING WHERE ServerID = ?', (ctx.guild.id,))
info = await cursor.fetchone()
if info is None or info[2] is None:
await ctx.send('You don\'t have a logging channel set up yet! Please run `/bindloggingchannel` to set up a logging channel!')
else:
m = MyMenu()
await m.start(ctx)
await connection.close()
@commands.guild_only()
@commands.has_permissions(manage_channels=True)
@commands.command()
async def bindloggingchannel(self, ctx, channel : discord.TextChannel):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT LoggingChannelID FROM LOGGING WHERE ServerID = ?', (ctx.guild.id,))
info = await cursor.fetchone()
if info is None:
await cursor.execute('INSERT INTO LOGGING(ServerID, LoggingChannelID) VALUES (?, ?)', (ctx.guild.id, channel.id))
await ctx.send(f':thumbsup: Bound {channel.mention} to {ctx.guild.name} as logging channel.')
elif discord.utils.get(ctx.guild.channels, id=info[0]) is not None:
await ctx.send('A logging channel has already been bound to this server! Are you sure you want to continue? `yes/no`')
def check(message : discord.Message) -> bool:
return message.author == ctx.author and message.channel == ctx.channel
try:
message = await self.bot.wait_for('message', timeout = 60, check = check)
except asyncio.TimeoutError:
await ctx.send('You took too long to respond! Aborting process.')
else:
if message.content.lower() == 'yes':
await cursor.execute('UPDATE LOGGING SET LoggingChannelID = ? WHERE ServerID = ?', (channel.id, ctx.guild.id))
await ctx.send(f':thumbsup: Bound {channel.mention} to {ctx.guild.name} as logging channel.')
else:
await ctx.send('Process aborted.')
else:
await ctx.send('A logging channel has already been bound to this server, but it was deleted. Binding the new logging channel.')
await cursor.execute('UPDATE LOGGING SET LoggingChannelID = ? WHERE ServerID = ?', (channel.id, ctx.guild.id))
await ctx.send(f':thumbsup: Bound {channel.mention} to {ctx.guild.name} as logging channel.')
await connection.commit()
await connection.close()
@commands.Cog.listener()
async def on_message_delete(self, message):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT * FROM LOGGING WHERE ServerID = ?', (message.guild.id,))
info = await cursor.fetchone()
await connection.close()
if (
info is not None
and message.author.id != 527682196744699924
and info[3]
and info[1]
):
channel = discord.utils.get(message.guild.channels, id=info[2])
async for entry in message.guild.audit_logs(limit=1, action=discord.AuditLogAction.message_delete, before=datetime.datetime.utcnow()):
author = entry.user if entry.user is not None else message.author
try:
embed = discord.Embed(title = f'Message deleted in #{message.channel}', description = f'**Message content:** {message.content}\n\n**Deleted by:** {author.mention}', timestamp = datetime.datetime.utcnow(), colour = 0xFF5353)
embed.set_footer(text=f'Author: {message.author} | Support: https://discord.gg/33utPs9', icon_url=message.author.avatar_url)
await channel.send(embed=embed)
except (AttributeError, discord.HTTPException):
pass  # logging channel was deleted or the bot cannot send to it
@commands.Cog.listener()
async def on_message_edit(self, before, after):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT * FROM LOGGING WHERE ServerID = ?', (before.guild.id,))
info = await cursor.fetchone()
await connection.close()
if (
info is not None
and info[5]
and info[1]
and before.content != after.content
):
loggingchannel = self.bot.get_channel(info[2])
embed = discord.Embed(title = f'Message edited in {before.channel.name}', description = f'**Before:**\n{before.content}\n\n**After:**\n{after.content}', timestamp = datetime.datetime.utcnow(), colour = 0xFF5353)
embed.set_footer(text=f'Author: {before.author} | Support: https://discord.gg/33utPs9', icon_url=before.author.avatar_url)
await loggingchannel.send(embed=embed)
@commands.Cog.listener()
async def on_reaction_clear(self, message, reactions):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT * FROM LOGGING WHERE ServerID = ?', (message.guild.id,))
info = await cursor.fetchone()
await connection.close()
if info is not None and info[6] and info[5] and info[1]:
loggingchannel = self.bot.get_channel(info[2])
embed = discord.Embed(title = f'Reactions cleared in {message.channel.name}', description = f'Reactions cleared: {", ".join(str(reaction) for reaction in reactions)}', timestamp = datetime.datetime.utcnow(), colour = 0xFF5353)
embed.set_footer(text=f'Support: https://discord.gg/33utPs9', icon_url=self.bot.user.avatar_url)
await loggingchannel.send(embed=embed)
@commands.Cog.listener()
async def on_guild_channel_create(self, channel):
connection = await aiosqlite.connect('AltBotDataBase.db')
cursor = await connection.cursor()
await cursor.execute('SELECT * FROM LOGGING WHERE ServerID = ?', (channel.guild.id,))
info = await cursor.fetchone()
await connection.close()
if (
info is not None
and info[7]
and info[5]
and info[1]
):
async for entry in channel.guild.audit_logs(limit=1, action=discord.AuditLogAction.channel_create, before=datetime.datetime.utcnow()):
creator = entry.user
reason = entry.reason
loggingchannel = self.bot.get_channel(info[2])
embed = discord.Embed(title = f'New channel created: {channel.name}', description = f'**Creator of the channel:**\n{creator.mention}\n\n**Reason:**\n{reason}', timestamp = datetime.datetime.utcnow(), colour = 0xFF5353)
embed.set_footer(text=f'Support: https://discord.gg/33utPs9', icon_url=self.bot.user.avatar_url)
await loggingchannel.send(embed=embed)
@commands.Cog.listener()
async def on_guild_channel_delete(self, channel):
pass
@commands.Cog.listener()
async def on_guild_channel_edit(self, before, after):
pass
@commands.Cog.listener()
async def on_member_join(self, member):
pass
@commands.Cog.listener()
async def on_member_remove(self, member):
pass
@commands.Cog.listener()
async def on_user_update(self, before, after):
pass
@commands.Cog.listener()
async def on_guild_update(self, before, after):
pass
@commands.Cog.listener()
async def on_guild_role_create(self, role):
pass
@commands.Cog.listener()
async def on_guild_role_delete(self, role):
pass
@commands.Cog.listener()
async def on_guild_role_update(self, before, after):
pass
@commands.Cog.listener()
async def on_member_ban(self, guild, user):
pass
@commands.Cog.listener()
async def on_member_unban(self, guild, user):
pass
@commands.Cog.listener()
async def on_invite_create(self, invite):
pass
@commands.Cog.listener()
async def on_invite_delete(self, invite):
pass
'''
TABLE LOGGING, columns (ServerID, LoggingToggle, LoggingChannelID, \
OnMsgDeleteToggle, OnBulkMsgDeleteToggle, OnMsgEditToggle, \
OnReactionClearToggle, OnChannelCreateDeleteToggle, OnChannelEditToggle, \
OnMemberJoinToggle, OnMemberLeaveToggle, OnMemberEditToggle, \
OnGuildEditToggle, OnGuildRoleCreateDeleteToggle, OnGuildRoleUpdateToggle, \
OnGuildMemberBanUnbanToggle, OnGuildMemberKickToggle, OnGuildInviteCreateDeleteToggle)
'''
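# The schema documented above can be materialized once at startup. This is a
# minimal sketch, assuming INTEGER storage for the IDs and every toggle flag;
# the real DDL is not shown in this file, so the column types are an assumption:
#
# async def create_logging_table():
#     connection = await aiosqlite.connect('AltBotDataBase.db')
#     await connection.execute(
#         'CREATE TABLE IF NOT EXISTS LOGGING ('
#         'ServerID INTEGER PRIMARY KEY, LoggingToggle INTEGER, LoggingChannelID INTEGER, '
#         'OnMsgDeleteToggle INTEGER, OnBulkMsgDeleteToggle INTEGER, OnMsgEditToggle INTEGER, '
#         'OnReactionClearToggle INTEGER, OnChannelCreateDeleteToggle INTEGER, OnChannelEditToggle INTEGER, '
#         'OnMemberJoinToggle INTEGER, OnMemberLeaveToggle INTEGER, OnMemberEditToggle INTEGER, '
#         'OnGuildEditToggle INTEGER, OnGuildRoleCreateDeleteToggle INTEGER, OnGuildRoleUpdateToggle INTEGER, '
#         'OnGuildMemberBanUnbanToggle INTEGER, OnGuildMemberKickToggle INTEGER, '
#         'OnGuildInviteCreateDeleteToggle INTEGER)')
#     await connection.commit()
#     await connection.close()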
def setup(bot):
bot.add_cog(Logging(bot)) | 52.244479 | 238 | 0.614981 | 3,869 | 33,123 | 5.191781 | 0.069527 | 0.041669 | 0.04142 | 0.037238 | 0.833076 | 0.819087 | 0.81152 | 0.796087 | 0.776024 | 0.759048 | 0 | 0.016024 | 0.270869 | 33,123 | 634 | 239 | 52.24448 | 0.815701 | 0 | 0 | 0.692845 | 0 | 0.006981 | 0.288299 | 0.020086 | 0 | 0 | 0.000995 | 0 | 0 | 1 | 0.005236 | false | 0.027923 | 0.012216 | 0.001745 | 0.026178 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
77af8024517f589a068495885f3a71d2a0e8cc47 | 79 | py | Python | nsvision/xml/__init__.py | challengerinteractive/nsvision | e7870f21c23987bf3ca833857e60463efe511703 | [
"MIT"
] | null | null | null | nsvision/xml/__init__.py | challengerinteractive/nsvision | e7870f21c23987bf3ca833857e60463efe511703 | [
"MIT"
] | null | null | null | nsvision/xml/__init__.py | challengerinteractive/nsvision | e7870f21c23987bf3ca833857e60463efe511703 | [
"MIT"
] | null | null | null | from .xml_pipeline import XMLPipeline
from .xml_conversion import XMLConversion | 39.5 | 41 | 0.886076 | 10 | 79 | 6.8 | 0.7 | 0.205882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088608 | 79 | 2 | 41 | 39.5 | 0.944444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77bbbd4fce18c865b35451c85399e9c5a017f260 | 220 | py | Python | marvel/exceptions.py | wrap-away/Marvellous | d4312fc91c45df6910d0f5f8b52be2b46cc73a3f | [
"MIT"
] | 28 | 2018-10-27T08:36:29.000Z | 2021-11-08T12:55:58.000Z | marvel/exceptions.py | wrap-away/Marvellous | d4312fc91c45df6910d0f5f8b52be2b46cc73a3f | [
"MIT"
] | 2 | 2020-08-31T17:01:35.000Z | 2021-07-29T13:46:39.000Z | marvel/exceptions.py | wrap-away/Marvellous | d4312fc91c45df6910d0f5f8b52be2b46cc73a3f | [
"MIT"
] | 4 | 2019-04-08T00:59:13.000Z | 2021-12-17T21:55:10.000Z | class MarvelException(Exception):
"""
Raised when the Marvel API returns an error.
"""
pass
class BadInputException(Exception):
"""
Raised when invalid input is passed to the library.
"""
pass
| 16.923077 | 50 | 0.645455 | 22 | 220 | 6.454545 | 0.545455 | 0.211268 | 0.239437 | 0.366197 | 0.492958 | 0.492958 | 0 | 0 | 0 | 0 | 0 | 0 | 0.263636 | 220 | 12 | 51 | 18.333333 | 0.876543 | 0.404545 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
77e1e30db89c014ea4b8492a3d161356d9503ae3 | 35 | py | Python | python/8Kyu/You cant code under pressure.py | athasv/Codewars-data | 5e106466e709fd776f23585ad9f652d0d65b48d3 | [
"MIT"
] | null | null | null | python/8Kyu/You cant code under pressure.py | athasv/Codewars-data | 5e106466e709fd776f23585ad9f652d0d65b48d3 | [
"MIT"
] | null | null | null | python/8Kyu/You cant code under pressure.py | athasv/Codewars-data | 5e106466e709fd776f23585ad9f652d0d65b48d3 | [
"MIT"
] | null | null | null | def double_integer(i): return i * 2 | 35 | 35 | 0.742857 | 7 | 35 | 3.571429 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.142857 | 35 | 1 | 35 | 35 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | false | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7af63e3f3778189101ea66e9246036ef349f924f | 540 | py | Python | src/networks/__init__.py | sodabeans/Deep-SVDD-PyTorch | ae42c39bf91c82fe367c4d334eb1f3da1aaee815 | [
"MIT"
] | null | null | null | src/networks/__init__.py | sodabeans/Deep-SVDD-PyTorch | ae42c39bf91c82fe367c4d334eb1f3da1aaee815 | [
"MIT"
] | null | null | null | src/networks/__init__.py | sodabeans/Deep-SVDD-PyTorch | ae42c39bf91c82fe367c4d334eb1f3da1aaee815 | [
"MIT"
] | null | null | null | from .main import build_network, build_autoencoder
from .mnist_LeNet import MNIST_LeNet, MNIST_LeNet_Autoencoder
from .cifar10_LeNet import CIFAR10_LeNet, CIFAR10_LeNet_Autoencoder
from .cifar10_LeNet_elu import CIFAR10_LeNet_ELU, CIFAR10_LeNet_ELU_Autoencoder
from .ct_LeNet import CT_LeNet, CT_LeNet_Autoencoder
from .ep_LeNet import EP_LeNet, EP_LeNet_Autoencoder
from .rs_LeNet import RS_LeNet, RS_LeNet_Autoencoder
from .sad_LeNet import SAD_LeNet, SAD_LeNet_Autoencoder
from .natops_LeNet import NATOPS_LeNet, NATOPS_LeNet_Autoencoder | 60 | 79 | 0.885185 | 82 | 540 | 5.378049 | 0.182927 | 0.272109 | 0.272109 | 0.122449 | 0.145125 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024194 | 0.081481 | 540 | 9 | 80 | 60 | 0.864919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bb2f140184b76a515d0df65ccb5eb20d4b1bd124 | 46 | py | Python | tests/tasks/dropbox/__init__.py | concreted/prefect | dd732f5990ee2b0f3d816adb285168fd63b239e4 | [
"Apache-2.0"
] | 8,633 | 2019-03-23T17:51:03.000Z | 2022-03-31T22:17:42.000Z | tests/tasks/dropbox/__init__.py | concreted/prefect | dd732f5990ee2b0f3d816adb285168fd63b239e4 | [
"Apache-2.0"
] | 3,903 | 2019-03-23T19:11:21.000Z | 2022-03-31T23:21:23.000Z | tests/tasks/dropbox/__init__.py | concreted/prefect | dd732f5990ee2b0f3d816adb285168fd63b239e4 | [
"Apache-2.0"
] | 937 | 2019-03-23T18:49:44.000Z | 2022-03-31T21:45:13.000Z | import pytest
pytest.importorskip("dropbox")
| 11.5 | 30 | 0.804348 | 5 | 46 | 7.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 46 | 3 | 31 | 15.333333 | 0.880952 | 0 | 0 | 0 | 0 | 0 | 0.152174 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bb30844168e397b5d3006bac0cdbc714fb3e08d3 | 259 | py | Python | Ranmath/MatrixGenerators/__init__.py | pawel-ta/ranmath | f52a15b10bdb5830a50c43da11fed5f182026587 | [
"MIT"
] | null | null | null | Ranmath/MatrixGenerators/__init__.py | pawel-ta/ranmath | f52a15b10bdb5830a50c43da11fed5f182026587 | [
"MIT"
] | null | null | null | Ranmath/MatrixGenerators/__init__.py | pawel-ta/ranmath | f52a15b10bdb5830a50c43da11fed5f182026587 | [
"MIT"
] | null | null | null |
from .MultivariateGaussianGenerator import MultivariateGaussianGenerator
from .InverseWishartGenerator import InverseWishartGenerator
from .ExponentialDecayGenerator import ExponentialDecayGenerator
from .MatrixGeneratorAdapter import MatrixGeneratorAdapter
| 43.166667 | 72 | 0.918919 | 16 | 259 | 14.875 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065637 | 259 | 5 | 73 | 51.8 | 0.983471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bb42bb09c5b704327d2d3618ac63dae5c15a3ef8 | 8,179 | py | Python | model/densenet.py | FAHAI-1/tensorflow-cifar100 | 4b5b76d0a310e007620e9a4f7d14da4e5db939e3 | [
"Apache-2.0"
] | 109 | 2019-05-12T13:26:10.000Z | 2022-03-09T01:45:20.000Z | model/densenet.py | FAHAI-1/tensorflow-cifar100 | 4b5b76d0a310e007620e9a4f7d14da4e5db939e3 | [
"Apache-2.0"
] | 10 | 2019-05-16T08:26:01.000Z | 2020-07-27T05:56:35.000Z | model/densenet.py | FAHAI-1/tensorflow-cifar100 | 4b5b76d0a310e007620e9a4f7d14da4e5db939e3 | [
"Apache-2.0"
] | 40 | 2019-05-19T14:38:00.000Z | 2022-02-25T16:18:39.000Z | import tensorflow as tf
import numpy as np
import math
class bottleneck:
def __init__(self, input_tensor, growth_rate, is_training=True, reuse=False, name='Default', kernel_initializer=None):
self.growth_rate = growth_rate
self.inner_channel = 4 * growth_rate
self.is_training = is_training
self.reuse = reuse
self.name = name
self.kernel_initializer = kernel_initializer
self.input_tensor = input_tensor
def bottle_neck(self):
x = tf.layers.batch_normalization(self.input_tensor, training=self.is_training, reuse=self.reuse, name=self.name + 'bn0')
x = tf.nn.relu(x)
x = tf.layers.conv2d(x, self.inner_channel, (1,1), use_bias=False, name=self.name+'conv1', reuse=self.reuse, kernel_initializer=self.kernel_initializer)
x = tf.layers.batch_normalization(x, training=self.is_training, reuse=self.reuse, name=self.name + 'bn1')
x = tf.nn.relu(x)
x = tf.layers.conv2d(x, self.growth_rate, (3,3), use_bias=False, padding='SAME', name=self.name+'conv2', reuse=self.reuse, kernel_initializer=self.kernel_initializer)
# return tf.concat([x, self.input_tensor], axis=1)
# print('Bottle_neck_name: ', self.name)
return tf.concat([x, self.input_tensor], 3)
class transition:
def __init__(self, input_tensor, out_channels, is_training=True, reuse=False, name='Default', kernel_initializer=None):
self.input_tensor = input_tensor
self.out_channels = out_channels
self.name = name
self.is_training = is_training
self.reuse = reuse
self.kernel_initializer = kernel_initializer
def down_sample(self):
x = tf.layers.batch_normalization(self.input_tensor, training=self.is_training, reuse=self.reuse, name=self.name + 'bn0')
x = tf.layers.conv2d(x, self.out_channels, (1,1), use_bias=False, name=self.name+'conv1', reuse=self.reuse, kernel_initializer=self.kernel_initializer)
x = tf.layers.average_pooling2d(x, pool_size=[2,2], strides=[2,2], name=self.name+'avg_pool1')
# print('Transition: ', self.name)
return x
class Densenet:
def __init__(self, input_tensor, block, nblocks, growth_rate, reduction=0.5, n_class=100, is_training=True, reuse=False, kernel_initializer=None):
self.inner_channel = 2 * growth_rate
self.input_tensor = input_tensor
self.block = block
self.nblocks = nblocks
self.growth_rate = growth_rate
self.reduction = reduction
self.n_class = n_class
self.is_training = is_training
self.reuse = reuse
x = tf.layers.conv2d(self.input_tensor, self.inner_channel, (3,3), padding='SAME', reuse=reuse, use_bias=False, name='conv_first', kernel_initializer=kernel_initializer)
for index in range(len(nblocks) - 1):
# print('make_layer_%d:'%(index), x)
x = self.make_dense_layer(x, block, nblocks[index], name='block_'+str(index), kernel_initializer=kernel_initializer)
self.inner_channel += growth_rate * nblocks[index]
out_channels = int(self.reduction * self.inner_channel)
x = transition(x, out_channels, is_training=self.is_training, reuse=self.reuse, name='transition_'+str(index), kernel_initializer=kernel_initializer).down_sample()
x = self.make_dense_layer(x, block, nblocks[len(nblocks) - 1], name='last_block',kernel_initializer=kernel_initializer)
x = tf.layers.batch_normalization(x, training=self.is_training, reuse=self.reuse, name='bn-1')
x = tf.nn.relu(x)
# print('before gap:', x)
x = tf.reduce_mean(x, [1, 2], name='gap')
# print('after gap:', x)
x = tf.layers.dense(x, n_class, name='dense', reuse=reuse, kernel_initializer=tf.contrib.layers.xavier_initializer(uniform=False))
self.output = x
def make_dense_layer(self, x, block, nblocks, name='Default', kernel_initializer=None):
for index in range(nblocks):
obj = self.block(x, self.growth_rate, is_training=self.is_training, reuse=self.reuse, name=name+'blocks_'+str(index), kernel_initializer=kernel_initializer)
x = obj.bottle_neck()
return x
class Densenet_BC:
def __init__(self, input_tensor, block, depth, growth_rate=12, reduction=0.5, n_class=100, is_training=True, reuse=False, kernel_initializer=None):
self.inner_channel = 2 * growth_rate
self.input_tensor = input_tensor
self.block = block
self.nblocks = (depth - 4) // 6
self.growth_rate = growth_rate
self.reduction = reduction
self.n_class = n_class
self.is_training = is_training
self.reuse = reuse
x = tf.layers.conv2d(self.input_tensor, self.inner_channel, (3,3), padding='SAME', reuse=reuse, use_bias=False, name='conv_first', kernel_initializer=kernel_initializer)
x = self.make_dense_layer(x, self.block, self.nblocks, name='dense1', kernel_initializer=kernel_initializer)
# print(x.shape, x.shape[-1])
x = transition(x, int(math.floor(int(x.shape[-1]) * self.reduction)), is_training=self.is_training, reuse=self.reuse, name='transition_1', kernel_initializer=kernel_initializer).down_sample()
x = self.make_dense_layer(x, self.block, self.nblocks, name='dense2', kernel_initializer=kernel_initializer)
x = transition(x, int(math.floor(int(x.shape[-1]) * self.reduction)), is_training=self.is_training, reuse=self.reuse, name='transition_2', kernel_initializer=kernel_initializer).down_sample()
x = self.make_dense_layer(x, self.block, self.nblocks, name='dense3', kernel_initializer=kernel_initializer)
x = tf.layers.batch_normalization(x, training=self.is_training, reuse=self.reuse, name='bn-1')
x = tf.nn.relu(x)
# print('before gap:', x)
x = tf.reduce_mean(x, [1, 2], name='gap')
# print('after gap:', x)
x = tf.layers.dense(x, n_class, name='dense', reuse=reuse, kernel_initializer=tf.contrib.layers.xavier_initializer(uniform=False))
self.output = x
def make_dense_layer(self, x, block, nblocks, name='Default', kernel_initializer=None):
for index in range(nblocks):
obj = self.block(x, self.growth_rate, is_training=self.is_training, reuse=self.reuse, name=name+'blocks_'+str(index), kernel_initializer=kernel_initializer)
x = obj.bottle_neck()
return x
def densenet121(input_tensor, is_training, reuse, kernel_initializer=None):
return Densenet(input_tensor, bottleneck, [6, 12, 24, 16], 32, is_training=is_training, reuse=reuse, kernel_initializer=kernel_initializer).output
def densenet169(input_tensor, is_training, reuse, kernel_initializer=None):
return Densenet(input_tensor, bottleneck, [6, 12, 32, 32], 32, is_training=is_training, reuse=reuse, kernel_initializer=kernel_initializer).output
def densenet201(input_tensor, is_training, reuse, kernel_initializer=None):
return Densenet(input_tensor, bottleneck, [6, 12, 48, 32], 32, is_training=is_training, reuse=reuse, kernel_initializer=kernel_initializer).output
def densenet161(input_tensor, is_training, reuse, kernel_initializer=None):
return Densenet(input_tensor, bottleneck, [6, 12, 36, 24], 48, is_training=is_training, reuse=reuse, kernel_initializer=kernel_initializer).output
def densenet100bc(input_tensor, is_training, reuse, kernel_initializer=None):
return Densenet_BC(input_tensor, bottleneck, 100, 12, is_training=is_training, reuse=reuse, kernel_initializer=kernel_initializer).output
def densenet190bc(input_tensor, is_training, reuse, kernel_initializer=None):
return Densenet_BC(input_tensor, bottleneck, 190, 40, is_training=is_training, reuse=reuse, kernel_initializer=kernel_initializer).output
if __name__ == '__main__':
a = np.random.rand(3, 32, 32, 3)
inp = tf.placeholder(tf.float32, [None, 32, 32, 3])
out = densenet190bc(inp, is_training=True, reuse=False, kernel_initializer=None)
conv_vars = [var for var in tf.trainable_variables() if 'conv' in var.name]
print(conv_vars)
print(out)
| 52.095541 | 200 | 0.70192 | 1,133 | 8,179 | 4.84113 | 0.11827 | 0.189061 | 0.060164 | 0.123974 | 0.822972 | 0.790155 | 0.758979 | 0.74804 | 0.722334 | 0.707019 | 0 | 0.021078 | 0.176305 | 8,179 | 156 | 201 | 52.429487 | 0.793083 | 0.033867 | 0 | 0.504673 | 0 | 0 | 0.02725 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130841 | false | 0 | 0.028037 | 0.056075 | 0.28972 | 0.018692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
24af67eca9db1d99af5312b505e28bfeb03b5d42 | 2,125 | py | Python | DigitalSecurityHub/products/forms.py | vineethsai/DigitalSecurityHub | fb3380e983d71bbd67dde19346fad274f6ed2ba8 | [
"MIT"
] | null | null | null | DigitalSecurityHub/products/forms.py | vineethsai/DigitalSecurityHub | fb3380e983d71bbd67dde19346fad274f6ed2ba8 | [
"MIT"
] | null | null | null | DigitalSecurityHub/products/forms.py | vineethsai/DigitalSecurityHub | fb3380e983d71bbd67dde19346fad274f6ed2ba8 | [
"MIT"
] | null | null | null | from django import forms
from django.forms import CharField, Form
from django.utils.translation import ugettext_lazy as _
RELEVANCE_CHOICES = (
(True, _("True")),
(False, _("False"))
)
class ProductCreationForm(forms.Form):
"""
Product creation form.
"""
title = forms.CharField(label="Title", max_length=120, required=True, widget=forms.TextInput(attrs={"class": "form-control", "placeholder": "Title"}))
description = forms.CharField(label="Description", max_length=1500, required=True, widget=forms.TextInput(attrs={"class": "form-control", "placeholder": "Description"}))
active = forms.ChoiceField(choices=RELEVANCE_CHOICES, widget=forms.RadioSelect, required=True)
category = forms.CharField(label="Category", required=True, widget=forms.TextInput(attrs={"class": "form-control", "placeholder": "Enter a category here!"}))
price = forms.IntegerField(label="Price", required=True, widget=forms.NumberInput(attrs={"class": "form-control", "placeholder": "100"}))
stock = forms.IntegerField(label="Stock", required=True, widget=forms.NumberInput(attrs={"class": "form-control", "placeholder": "5"}))
class ProductEditForm(forms.Form):
"""
Product edit form.
"""
title = forms.CharField(label="Title", max_length=120, required=True, widget=forms.TextInput(attrs={"id": "title", "placeholder": "New title"}))
description = forms.CharField(label="Description", max_length=1500, required=True, widget=forms.TextInput(attrs={"id": "description", "placeholder": "New description"}))
active = forms.ChoiceField(choices=RELEVANCE_CHOICES, widget=forms.RadioSelect(attrs={"id": "active"}), required=True)
category = forms.CharField(label="Category", required=True, widget=forms.TextInput(attrs={"id": "category", "placeholder": "Edit category here!"}))
price = forms.IntegerField(label="Price", required=True, widget=forms.NumberInput(attrs={"id": "price", "placeholder": "New price value"}))
stock = forms.IntegerField(label="Stock", required=True, widget=forms.NumberInput(attrs={"id": "stock", "placeholder": "New stock value"}))
| 60.714286 | 173 | 0.712 | 239 | 2,125 | 6.284519 | 0.205021 | 0.095872 | 0.11984 | 0.153129 | 0.78229 | 0.78229 | 0.78229 | 0.775632 | 0.775632 | 0.775632 | 0 | 0.009574 | 0.115294 | 2,125 | 34 | 174 | 62.5 | 0.789362 | 0.016471 | 0 | 0 | 0 | 0 | 0.213314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.809524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
7038cde7c8e55e367d3558d6d39a81530e546920 | 418 | py | Python | osspeak/recognition/actions/library/pywindow/__init__.py | OSSpeak/OSSpeak | 327c38a37684165f87bf8d76ab2ca135b43b8ab7 | [
"MIT"
] | 1 | 2020-03-17T10:24:41.000Z | 2020-03-17T10:24:41.000Z | osspeak/recognition/actions/library/pywindow/__init__.py | OSSpeak/OSSpeak | 327c38a37684165f87bf8d76ab2ca135b43b8ab7 | [
"MIT"
] | 12 | 2016-09-28T05:16:00.000Z | 2020-11-27T22:32:40.000Z | osspeak/recognition/actions/library/pywindow/__init__.py | OSSpeak/OSSpeak | 327c38a37684165f87bf8d76ab2ca135b43b8ab7 | [
"MIT"
] | null | null | null | import os
import sys
if sys.platform == 'win32':
import recognition.actions.library.pywindow._windows as os_specific_implementation
else:
raise RuntimeError('Only Windows is currently supported.')
def all_windows():
return os_specific_implementation.all_windows()
def foreground_window():
return os_specific_implementation.foreground_window()
class WindowDoesNotExistError(Exception):  # Exception, not BaseException, so callers' `except Exception` handlers catch it
pass | 26.125 | 87 | 0.796651 | 48 | 418 | 6.708333 | 0.625 | 0.093168 | 0.223602 | 0.186335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005495 | 0.129187 | 418 | 16 | 88 | 26.125 | 0.879121 | 0 | 0 | 0 | 0 | 0 | 0.097852 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0.083333 | 0.25 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
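The pywindow `__init__.py` above selects an OS backend at import time and re-exports its functions. A self-contained sketch of that dispatch pattern (module names here are placeholders, not the real OSSpeak backends):

```python
import sys

# Sketch of platform dispatch: map sys.platform to a backend module name and
# fail loudly on unsupported platforms, as the module above does.
def select_backend(platform=None):
    platform = platform or sys.platform
    supported = {"win32": "_windows"}  # platform -> backend module name (illustrative)
    if platform not in supported:
        raise RuntimeError("Only Windows is currently supported.")
    return supported[platform]

print(select_backend("win32"))  # -> _windows
```

Raising at import time (rather than deferring to first call) makes the unsupported-platform failure immediate and easy to diagnose.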
70743f530a911e5bcb119481dd84b228ff831717 | 272 | py | Python | sky/admin.py | eethan1/IMnight2018_Backend | 39780f737e57763fdfb171c4687a375d3c5a4bb0 | [
"Apache-2.0"
] | null | null | null | sky/admin.py | eethan1/IMnight2018_Backend | 39780f737e57763fdfb171c4687a375d3c5a4bb0 | [
"Apache-2.0"
] | null | null | null | sky/admin.py | eethan1/IMnight2018_Backend | 39780f737e57763fdfb171c4687a375d3c5a4bb0 | [
"Apache-2.0"
] | 4 | 2018-01-27T06:01:41.000Z | 2018-02-21T12:18:35.000Z | from django.contrib import admin
from sky.models import Article, News, Course
from IMnight.utils import ModelWithLabelAdmin
admin.site.register(Article, ModelWithLabelAdmin)
admin.site.register(News, ModelWithLabelAdmin)
admin.site.register(Course, ModelWithLabelAdmin)
| 30.222222 | 49 | 0.845588 | 32 | 272 | 7.1875 | 0.46875 | 0.313043 | 0.365217 | 0.469565 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080882 | 272 | 8 | 50 | 34 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
708f549a23778f67eb2d29e762ad4142c5e64a9e | 3,379 | py | Python | 075_ERFNet/05_full_integer_quantization.py | IgiArdiyanto/PINTO_model_zoo | 9247b56a7dff37f28a8a7822a7ef4dd9adf7234d | [
"MIT"
] | 1,529 | 2019-12-11T13:36:23.000Z | 2022-03-31T18:38:27.000Z | 075_ERFNet/05_full_integer_quantization.py | IgiArdiyanto/PINTO_model_zoo | 9247b56a7dff37f28a8a7822a7ef4dd9adf7234d | [
"MIT"
] | 200 | 2020-01-06T09:24:42.000Z | 2022-03-31T17:29:08.000Z | 075_ERFNet/05_full_integer_quantization.py | IgiArdiyanto/PINTO_model_zoo | 9247b56a7dff37f28a8a7822a7ef4dd9adf7234d | [
"MIT"
] | 288 | 2020-02-21T14:56:02.000Z | 2022-03-30T03:00:35.000Z | ### tf_nightly==2.5.0-dev20201204
import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
mean = np.asarray([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 1, 3)
std = np.asarray([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 1, 3)
def representative_dataset_gen_256():
for data in raw_test_data.take(10):
image = data['image'].numpy()
image = tf.image.resize(image, (256, 512))
image = (image / 255 - mean) / std
image = image[np.newaxis,:,:,:]
yield [image]
def representative_dataset_gen_384():
for data in raw_test_data.take(10):
image = data['image'].numpy()
image = tf.image.resize(image, (384, 768))
image = (image / 255 - mean) / std
image = image[np.newaxis,:,:,:]
yield [image]
def representative_dataset_gen_512():
for data in raw_test_data.take(10):
image = data['image'].numpy()
image = tf.image.resize(image, (512, 1024))
image = (image / 255 - mean) / std
image = image[np.newaxis,:,:,:]
yield [image]
raw_test_data, info = tfds.load(name="coco/2017", with_info=True, split="test", data_dir="~/TFDS", download=False)
# Full Integer Quantization - Input/Output=float32
height = 256
width = 512
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_{}x{}'.format(height, width))
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_dataset_gen_256
tflite_model = converter.convert()
with open('erfnet_{}x{}_cityscapes_full_integer_quant.tflite'.format(height, width), 'wb') as w:
w.write(tflite_model)
print('Full Integer Quantization complete! - erfnet_{}x{}_cityscapes_full_integer_quant.tflite'.format(height, width))
height = 384
width = 768
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_{}x{}'.format(height, width))
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_dataset_gen_384
tflite_model = converter.convert()
with open('erfnet_{}x{}_cityscapes_full_integer_quant.tflite'.format(height, width), 'wb') as w:
w.write(tflite_model)
print('Full Integer Quantization complete! - erfnet_{}x{}_cityscapes_full_integer_quant.tflite'.format(height, width))
height = 512
width = 1024
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_{}x{}'.format(height, width))
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_dataset_gen_512
tflite_model = converter.convert()
with open('erfnet_{}x{}_cityscapes_full_integer_quant.tflite'.format(height, width), 'wb') as w:
w.write(tflite_model)
print('Full Integer Quantization complete! - erfnet_{}x{}_cityscapes_full_integer_quant.tflite'.format(height, width)) | 44.460526 | 118 | 0.743711 | 472 | 3,379 | 5.088983 | 0.207627 | 0.029975 | 0.063697 | 0.04746 | 0.849292 | 0.849292 | 0.849292 | 0.829309 | 0.829309 | 0.829309 | 0 | 0.044197 | 0.122817 | 3,379 | 76 | 119 | 44.460526 | 0.766194 | 0.023084 | 0 | 0.666667 | 0 | 0 | 0.15135 | 0.089172 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.047619 | 0 | 0.095238 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
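The three conversion blocks in the script above differ only in resolution and which representative-dataset generator they use, so they could be folded into one loop. A sketch of that refactor, with the TensorFlow converter calls elided since they require TF at runtime — only the per-resolution bookkeeping is shown:

```python
# Sketch: the repeated conversion blocks collapse into a loop over resolutions.
# The actual tf.lite.TFLiteConverter calls are omitted (TensorFlow-dependent).
resolutions = [(256, 512), (384, 768), (512, 1024)]

def output_name(height, width):
    return f"erfnet_{height}x{width}_cityscapes_full_integer_quant.tflite"

for height, width in resolutions:
    saved_model_dir = f"saved_model_{height}x{width}"
    # ... build converter from saved_model_dir, set int8 options,
    # pick the matching representative_dataset_gen, convert, write ...
    print(saved_model_dir, "->", output_name(height, width))
```

With the generators parameterized by `(height, width)` as well, the whole script shrinks to one function and one loop.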
56255fa9e6e70a02672bc8047b6aba2f25b2b460 | 26 | py | Python | python/testData/quickFixes/PyRenameElementQuickFixTest/renameAwaitFunctionInPy36_after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/quickFixes/PyRenameElementQuickFixTest/renameAwaitFunctionInPy36_after.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/quickFixes/PyRenameElementQuickFixTest/renameAwaitFunctionInPy36_after.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | def A_NEW_NAME():
pass | 13 | 17 | 0.653846 | 5 | 26 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.230769 | 26 | 2 | 18 | 13 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
567be2010b0e6bd7fc23889da68c1f700f284dea | 39 | py | Python | flask_tests/home/__init__.py | fp12/flask-tests | b14913ea4d56ff6429df34a08f23ca802f52d01f | [
"MIT"
] | null | null | null | flask_tests/home/__init__.py | fp12/flask-tests | b14913ea4d56ff6429df34a08f23ca802f52d01f | [
"MIT"
] | null | null | null | flask_tests/home/__init__.py | fp12/flask-tests | b14913ea4d56ff6429df34a08f23ca802f52d01f | [
"MIT"
] | null | null | null | from .home_bp import blueprint # noqa
| 19.5 | 38 | 0.769231 | 6 | 39 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 39 | 1 | 39 | 39 | 0.90625 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
5693a3220571fb47206c329fea1834447aea7057 | 17,095 | py | Python | test/test_services/test_admin.py | idfy-io/idfy-sdk-python | 0f7ced0cf0df080b1c73e2451bf02a23710b5bf1 | [
"Apache-2.0"
] | null | null | null | test/test_services/test_admin.py | idfy-io/idfy-sdk-python | 0f7ced0cf0df080b1c73e2451bf02a23710b5bf1 | [
"Apache-2.0"
] | null | null | null | test/test_services/test_admin.py | idfy-io/idfy-sdk-python | 0f7ced0cf0df080b1c73e2451bf02a23710b5bf1 | [
"Apache-2.0"
] | null | null | null | import asyncio
import functools
import unittest
import unittest.mock
from test.base_test import BaseTest
from idfy_sdk.version import version
import idfy_sdk
class TestAdmin(BaseTest):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.admin_service = idfy_sdk.services.AdminService()
def test_get_account(self):
data = self.admin_service.get_account()
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/account'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_update_account(self):
data = self.admin_service.update_account(account_update_options=self.params)
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.put.assert_called_once_with('{}/admin/account'.format(self.base_url), data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
@unittest.skip("Mock server error")
def test_create_account(self):
data = self.admin_service.create_account(account_create_options=self.params)
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with()
@unittest.skip("Mock server error")
def test_disable_account(self):
data = self.admin_service.disable_account()
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with()
def test_list_accounts(self):
data = self.admin_service.list_accounts()
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/account/list'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params={'name': None, 'orgNo': None, 'uniCustomerNo': None, 'createdBefore': None, 'createdAfter': None, 'lastModifiedBefore': None,
'lastModifiedAfter': None, 'dealerName': None, 'dealerReference': None, 'tags': None, 'enable': None})
def test_list_account_names(self):
data = self.admin_service.list_account_names()
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/account/list/names'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_get_dealer(self):
data = self.admin_service.get_dealer(dealer_id="1")
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/dealer/1'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_update_dealer(self):
data = self.admin_service.update_dealer(dealer_id="1", dealer=self.params)
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.post.assert_called_once_with('{}/admin/dealer/1'.format(self.base_url), auth=None, data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_list_accounts_for_dealer(self):
data = self.admin_service.list_accounts_for_dealer(dealer_id="1")
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/dealer/1/accounts'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_list_transactions(self):
data = self.admin_service.list_transactions()
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/invoice'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params={'year': None, 'month': None, 'get_as_csv': None})
def test_list_templates(self):
data = self.admin_service.list_templates()
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/template'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_create_template(self):
data = self.admin_service.create_template(pdf_template_options=self.params)
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.post.assert_called_once_with('{}/admin/template'.format(self.base_url), auth=None, data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_get_template(self):
data = self.admin_service.get_template(id="1")
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/template/1'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_update_template(self):
data = self.admin_service.update_template(id="1", pdf_template_options=self.params)
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.put.assert_called_once_with('{}/admin/template/1'.format(self.base_url), data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_delete_template(self):
data = self.admin_service.delete_template(id="1")
self.assertIsNone(data)
#self.AssertEqual()
self.mock_http.delete.assert_called_once_with('{}/admin/template/1'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'})
class TestAdminAsync(BaseTest):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.admin_service = idfy_sdk.services.AdminService()
def setUp(self):
super().setUp()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(None)
def tearDown(self):
self.loop.close()
def test_get_account_async(self):
async def func():
return await self.admin_service.get_account(threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/account'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_update_account_async(self):
async def func():
return await self.admin_service.update_account(account_update_options=self.params, threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.put.assert_called_once_with('{}/admin/account'.format(self.base_url), data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
@unittest.skip("Mock server error")
def test_create_account_async(self):
async def func():
return await self.admin_service.create_account(account_create_options=self.params, threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#mock_service.create_account.assert_called_once_with(account_create_options=self.params, threaded=True)
@unittest.skip("Mock server error")
def test_disable_account_async(self):
async def func():
return await self.admin_service.disable_account(threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#mock_service.disable_account.assert_called_once_with(threaded=True)
def test_list_accounts_async(self):
async def func():
return await self.admin_service.list_accounts(threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/account/list'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params={'name': None, 'orgNo': None, 'uniCustomerNo': None, 'createdBefore': None, 'createdAfter': None, 'lastModifiedBefore': None,
'lastModifiedAfter': None, 'dealerName': None, 'dealerReference': None, 'tags': None, 'enable': None})
def test_list_account_names_async(self):
async def func():
return await self.admin_service.list_account_names(threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/account/list/names'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_get_dealer_async(self):
async def func():
return await self.admin_service.get_dealer(dealer_id="1", threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/dealer/1'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_update_dealer_async(self):
async def func():
return await self.admin_service.update_dealer(dealer_id="1", dealer=self.params, threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.post.assert_called_once_with('{}/admin/dealer/1'.format(self.base_url), auth=None, data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_list_accounts_for_dealer_async(self):
async def func():
return await self.admin_service.list_accounts_for_dealer(dealer_id="1", threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/dealer/1/accounts'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_list_transactions_async(self):
async def func():
return await self.admin_service.list_transactions(threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/invoice'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params={'year': None, 'month': None, 'get_as_csv': None})
def test_list_templates_async(self):
async def func():
return await self.admin_service.list_templates(threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/template'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_create_template_async(self):
async def func():
return await self.admin_service.create_template(pdf_template_options=self.params, threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.post.assert_called_once_with('{}/admin/template'.format(self.base_url), auth=None, data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_get_template_async(self):
async def func():
return await self.admin_service.get_template(id="1", threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.get.assert_called_once_with('{}/admin/template/1'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_update_template_async(self):
async def func():
return await self.admin_service.update_template(id="1", pdf_template_options=self.params, threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNotNone(data)
#self.AssertEqual()
self.mock_http.put.assert_called_once_with('{}/admin/template/1'.format(self.base_url), data='{"unit": "test"}', headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'}, params=None)
def test_delete_template_async(self):
async def func():
return await self.admin_service.delete_template(id="1", threaded=True)
data = self.loop.run_until_complete(func())
self.assertIsNone(data)
#self.AssertEqual()
self.mock_http.delete.assert_called_once_with('{}/admin/template/1'.format(self.base_url), headers={'X-Idfy-SDK': 'Python {}'.format(version), 'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmb28iOiJiYXIifQ.UIZchxQD36xuhacrJF9HQ5SIUxH5HBiv9noESAacsxU', 'Content-Type': 'application/json'})
#Provide CLI to the test script.
if __name__ == '__main__':
unittest.main()
| 58.948276 | 446 | 0.727055 | 1,860 | 17,095 | 6.482258 | 0.06129 | 0.038484 | 0.039811 | 0.049764 | 0.967571 | 0.958447 | 0.939371 | 0.926765 | 0.925189 | 0.917807 | 0 | 0.022837 | 0.139339 | 17,095 | 289 | 447 | 59.152249 | 0.796642 | 0.041182 | 0 | 0.569832 | 0 | 0 | 0.323839 | 0.166381 | 0 | 0 | 0 | 0 | 0.324022 | 1 | 0.189944 | false | 0 | 0.039106 | 0 | 0.324022 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
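The `TestAdminAsync` class above drives each coroutine with a fresh event loop created in `setUp` and closed in `tearDown`. The core of that pattern, extracted into a standalone helper (names here are illustrative):

```python
import asyncio

# Sketch of the per-test event-loop pattern used by TestAdminAsync: create a
# fresh loop, run one coroutine to completion, then always close the loop.
def run_coroutine(coro):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

async def sample():
    return 42

print(run_coroutine(sample()))  # -> 42
```

On Python 3.7+, `asyncio.run()` provides this create-run-close lifecycle directly; the manual version above matches what the test class does explicitly.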
3b8d0bb4d8e615fb187d761188a54570e24ba9de | 105 | py | Python | mass_conversion/__init__.py | matthewkirby/mass_conversion | 11885016d42addc058b0143a65e3da0911e72881 | [
"MIT"
] | 1 | 2019-10-17T16:14:53.000Z | 2019-10-17T16:14:53.000Z | mass_conversion/__init__.py | matthewkirby/mass_conversion | 11885016d42addc058b0143a65e3da0911e72881 | [
"MIT"
] | 3 | 2019-10-17T16:32:33.000Z | 2019-10-17T16:50:29.000Z | mass_conversion/__init__.py | matthewkirby/mass_conversion | 11885016d42addc058b0143a65e3da0911e72881 | [
"MIT"
] | null | null | null | from .conversions import *
from .utils import *
from .hu_kravtsov_2002 import *
from .cM_models import *
| 21 | 31 | 0.771429 | 15 | 105 | 5.2 | 0.6 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044944 | 0.152381 | 105 | 4 | 32 | 26.25 | 0.831461 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3b9d638546795abcf7d3aae75f15e866456ae8eb | 160 | py | Python | maguey/tests/test_files_api.py | andrewmagill/maguey | 54efb60a5cab432cf5a3f1cbdaae0d1ffd1f3763 | [
"MIT"
] | null | null | null | maguey/tests/test_files_api.py | andrewmagill/maguey | 54efb60a5cab432cf5a3f1cbdaae0d1ffd1f3763 | [
"MIT"
] | null | null | null | maguey/tests/test_files_api.py | andrewmagill/maguey | 54efb60a5cab432cf5a3f1cbdaae0d1ffd1f3763 | [
"MIT"
] | null | null | null | from unittest import TestCase
import maguey
class TestFiles(TestCase):
def test_add_file(self):
pass
def test_delete_file(self):
pass
| 16 | 31 | 0.69375 | 21 | 160 | 5.095238 | 0.666667 | 0.130841 | 0.224299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 160 | 9 | 32 | 17.777778 | 0.891667 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0.285714 | 0.285714 | 0 | 0.714286 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
8e72070341c39b5d1dd4bcb44ed799c2291835f9 | 4,193 | py | Python | tests/libs/datasets/region_aggregation_test.py | phc-health/covid-data-model | 13c5084d631cf2dd33a7fe558c212dbd32b686e6 | [
"MIT"
] | null | null | null | tests/libs/datasets/region_aggregation_test.py | phc-health/covid-data-model | 13c5084d631cf2dd33a7fe558c212dbd32b686e6 | [
"MIT"
] | null | null | null | tests/libs/datasets/region_aggregation_test.py | phc-health/covid-data-model | 13c5084d631cf2dd33a7fe558c212dbd32b686e6 | [
"MIT"
] | null | null | null | import io
from datapublic.common_fields import CommonFields
from datapublic.common_fields import FieldName
from libs.datasets import region_aggregation
from libs.datasets import timeseries
from libs.pipeline import Region
from tests import test_helpers
def test_aggregate_states_to_country():
ts = timeseries.MultiRegionDataset.from_csv(
io.StringIO(
"location_id,county,aggregate_level,date,m1,m2,population\n"
"iso1:us#fips:97111,Bar County,county,2020-04-03,3,,\n"
"iso1:us#fips:97222,Foo County,county,2020-04-01,,10,\n"
"iso1:us#iso2:us-tx,Texas,state,2020-04-01,1,2,\n"
"iso1:us#iso2:us-tx,Texas,state,2020-04-02,3,4,\n"
"iso1:us#iso2:us-tx,Texas,state,,,,1000\n"
"iso1:us#iso2:us-az,Arizona,state,2020-04-01,1,2,\n"
"iso1:us#iso2:us-az,Arizona,state,,,,2000\n"
)
)
region_us = Region.from_iso1("us")
country = region_aggregation.aggregate_regions(
ts,
{Region.from_state("AZ"): region_us, Region.from_state("TX"): region_us},
[],
reporting_ratio_required_to_aggregate=1.0,
)
expected = timeseries.MultiRegionDataset.from_csv(
io.StringIO(
"location_id,aggregate_level,date,m1,m2,population\n"
"iso1:us,country,2020-04-01,2,4,\n"
"iso1:us,country,,,,3000\n"
)
)
test_helpers.assert_dataset_like(country, expected)
def test_aggregate_states_to_country_scale():
ts = timeseries.MultiRegionDataset.from_csv(
io.StringIO(
"location_id,county,aggregate_level,date,m1,m2,population\n"
"iso1:us#iso2:us-tx,Texas,state,2020-04-01,4,2,\n"
"iso1:us#iso2:us-tx,Texas,state,2020-04-02,4,4,\n"
"iso1:us#iso2:us-tx,Texas,state,,,,2500\n"
"iso1:us#iso2:us-az,Arizona,state,2020-04-01,8,20,\n"
"iso1:us#iso2:us-az,Arizona,state,2020-04-02,12,40,\n"
"iso1:us#iso2:us-az,Arizona,state,,,,7500\n"
)
)
region_us = Region.from_iso1("us")
country = region_aggregation.aggregate_regions(
ts,
{Region.from_state("AZ"): region_us, Region.from_state("TX"): region_us},
[
region_aggregation.StaticWeightedAverageAggregation(
FieldName("m1"), CommonFields.POPULATION
),
],
)
# The column m1 is scaled by population.
# On 2020-04-01: 4 * 0.25 + 8 * 0.75 = 7
# On 2020-04-02: 4 * 0.25 + 12 * 0.75 = 10
expected = timeseries.MultiRegionDataset.from_csv(
io.StringIO(
"location_id,aggregate_level,date,m1,m2,population\n"
"iso1:us,country,2020-04-01,7,22,\n"
"iso1:us,country,2020-04-02,10,44,\n"
"iso1:us,country,,,,10000\n"
)
)
test_helpers.assert_dataset_like(country, expected)
def test_aggregate_states_to_country_scale_static():
ts = timeseries.MultiRegionDataset.from_csv(
io.StringIO(
"location_id,county,aggregate_level,date,m1,s1,population\n"
"iso1:us#iso2:us-tx,Texas,state,2020-04-01,4,,\n"
"iso1:us#iso2:us-tx,Texas,state,,,4,2500\n"
"iso1:us#iso2:us-az,Arizona,state,2020-04-01,8,,\n"
"iso1:us#iso2:us-az,Arizona,state,,,12,7500\n"
)
)
region_us = Region.from_iso1("us")
country = region_aggregation.aggregate_regions(
ts,
{Region.from_state("AZ"): region_us, Region.from_state("TX"): region_us},
[
region_aggregation.StaticWeightedAverageAggregation(
FieldName("m1"), CommonFields.POPULATION
),
region_aggregation.StaticWeightedAverageAggregation(
FieldName("s1"), CommonFields.POPULATION
),
],
)
# The column m1 is scaled by population.
# 4 * 0.25 + 12 * 0.75 = 10
expected = timeseries.MultiRegionDataset.from_csv(
io.StringIO(
"location_id,aggregate_level,date,m1,s1,population\n"
"iso1:us,country,2020-04-01,7,,\n"
"iso1:us,country,,,10,10000\n"
)
)
test_helpers.assert_dataset_like(country, expected)
| 37.4375 | 81 | 0.618889 | 565 | 4,193 | 4.456637 | 0.162832 | 0.064337 | 0.06672 | 0.065528 | 0.82645 | 0.801033 | 0.780778 | 0.780778 | 0.753773 | 0.653296 | 0 | 0.098242 | 0.240162 | 4,193 | 111 | 82 | 37.774775 | 0.69209 | 0.043644 | 0 | 0.42268 | 0 | 0.113402 | 0.33991 | 0.333417 | 0 | 0 | 0 | 0 | 0.030928 | 1 | 0.030928 | false | 0 | 0.072165 | 0 | 0.103093 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
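The inline comments in the tests above spell out the population-weighted average that `StaticWeightedAverageAggregation` is expected to compute (e.g. `4 * 0.25 + 8 * 0.75 = 7`). That arithmetic, as a standalone sketch (not the library's implementation):

```python
# Sketch of the population-weighted average described in the test comments:
# each region's value is weighted by its share of the total population.
def weighted_average(values, populations):
    total = sum(populations)
    return sum(v * p / total for v, p in zip(values, populations))

# TX m1=4 (population 2500), AZ m1=8 (population 7500):
print(weighted_average([4, 8], [2500, 7500]))  # -> 7.0
```

This reproduces both expected rows in `test_aggregate_states_to_country_scale`: 7 on 2020-04-01 and 10 on 2020-04-02 (`4 * 0.25 + 12 * 0.75`).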
8ea6021ad3c5c3d080e03089095aec34106e5541 | 41 | py | Python | dataset/communal/__init__.py | zhigangjiang/LGT-Net | d9a619158b2dc66a50c100e7fa7e491f1df16fd7 | [
"MIT"
] | 11 | 2022-03-03T17:49:33.000Z | 2022-03-25T11:23:11.000Z | dataset/communal/__init__.py | zhigangjiang/LGT-Net | d9a619158b2dc66a50c100e7fa7e491f1df16fd7 | [
"MIT"
] | null | null | null | dataset/communal/__init__.py | zhigangjiang/LGT-Net | d9a619158b2dc66a50c100e7fa7e491f1df16fd7 | [
"MIT"
] | 1 | 2022-03-04T06:39:50.000Z | 2022-03-04T06:39:50.000Z | """
@Date: 2021/09/22
@description:
"""
| 8.2 | 17 | 0.560976 | 5 | 41 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.228571 | 0.146341 | 41 | 4 | 18 | 10.25 | 0.428571 | 0.756098 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |