hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2d33a5df6de81ffccc1cc28e6ed456c15bb617a5 | 771 | py | Python | image_repo/repo/models.py | HollisHolmes/Image_Repository | 4ec3060f79d0e09ea78f515e2bf6665572dd5f60 | [
"MIT"
] | 1 | 2021-05-11T00:03:28.000Z | 2021-05-11T00:03:28.000Z | image_repo/repo/models.py | HollisHolmes/image_repository | 4ec3060f79d0e09ea78f515e2bf6665572dd5f60 | [
"MIT"
] | null | null | null | image_repo/repo/models.py | HollisHolmes/image_repository | 4ec3060f79d0e09ea78f515e2bf6665572dd5f60 | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class Tag(models.Model):
tag = models.CharField(max_length=20)
    def __str__(self):
return f'{self.id}: {self.tag}'
class Item(models.Model):
name = models.CharField(max_length=50)
image_url = models.URLField(max_length=400)
num_reviews = models.IntegerField(default=0)
price = models.FloatField(default=10.99)
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='inventory', blank=True, null=True)
tags = models.ManyToManyField(Tag, blank=True, related_name='items')
def __str__(self):
return f'{self.id}: {self.name} | ${self.price} | {self.image_url}'
def is_valid_item(self):
pass
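
# Hedged usage sketch (not part of the original file): exercising these models
# from a Django shell, assuming migrations have been applied.
#
#   from repo.models import Tag, Item
#   tag = Tag.objects.create(tag='sunsets')
#   item = Item.objects.create(name='Beach', image_url='https://example.com/beach.jpg')
#   item.tags.add(tag)
#   print(item)  # -> "1: Beach | $10.99 | https://example.com/beach.jpg"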
| 28.555556 | 109 | 0.696498 | 108 | 771 | 4.814815 | 0.509259 | 0.051923 | 0.069231 | 0.092308 | 0.103846 | 0.103846 | 0.103846 | 0.103846 | 0 | 0 | 0 | 0.018809 | 0.172503 | 771 | 26 | 110 | 29.653846 | 0.796238 | 0.031128 | 0 | 0 | 0 | 0.058824 | 0.123822 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0.058824 | 0.117647 | 0.117647 | 0.941176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 3 |
2d44e27056004631c8787e68bd9ce63961504ea5 | 1,174 | py | Python | tests/test_cisco_simple.py | mostau1/netmiko | 5b5463fb01e39e771be553281748477a48c7391c | [
"MIT"
] | 1 | 2021-11-16T11:37:00.000Z | 2021-11-16T11:37:00.000Z | tests/test_cisco_simple.py | mostau1/netmiko | 5b5463fb01e39e771be553281748477a48c7391c | [
"MIT"
] | null | null | null | tests/test_cisco_simple.py | mostau1/netmiko | 5b5463fb01e39e771be553281748477a48c7391c | [
"MIT"
] | 1 | 2016-10-03T08:57:35.000Z | 2016-10-03T08:57:35.000Z | #!/usr/bin/env python
from netmiko import ConnectHandler
from getpass import getpass
#ip_addr = input("Enter IP Address: ")
pwd = getpass()
ip_addr = '184.105.247.70'
telnet_device = {
'device_type': 'cisco_ios_telnet',
'ip': ip_addr,
'username': 'pyclass',
'password': pwd,
'port': 23,
}
ssh_device = {
'device_type': 'cisco_ios_ssh',
'ip': ip_addr,
'username': 'pyclass',
'password': pwd,
'port': 22,
}
print "telnet"
net_connect1 = ConnectHandler(**telnet_device)
print "telnet prompt: {}".format(net_connect1.find_prompt())
print "send_command: "
print '-' * 50
print net_connect1.send_command_timing("show arp")
print '-' * 50
print '-' * 50
print net_connect1.send_command("show run")
print '-' * 50
print
print "SSH"
net_connect2 = ConnectHandler(**ssh_device)
print "SSH prompt: {}".format(net_connect2.find_prompt())
print "send_command: "
print '-' * 50
print net_connect2.send_command("show arp")
print '-' * 50
print '-' * 50
print net_connect1.send_command("show run")
print '-' * 50
print
#output = net_connect1.send_command_expect("show version")
#print()
#print('#' * 50)
#print(output)
#print('#' * 50)
#print()
| 20.964286 | 60 | 0.681431 | 158 | 1,174 | 4.841772 | 0.316456 | 0.091503 | 0.156863 | 0.078431 | 0.469281 | 0.406536 | 0.406536 | 0.381699 | 0.282353 | 0.175163 | 0 | 0.043478 | 0.157581 | 1,174 | 55 | 61 | 21.345455 | 0.73003 | 0.140545 | 0 | 0.5 | 0 | 0 | 0.231231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.1 | 0.05 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 3 |
7423eec73eba4ebbb15b1bb00e9a238b23da68de | 1,265 | py | Python | setup.py | qyliu-hkust/fasthaversine | 5ec8f96d827369705cf313ec363a035b14952673 | [
"MIT"
] | null | null | null | setup.py | qyliu-hkust/fasthaversine | 5ec8f96d827369705cf313ec363a035b14952673 | [
"MIT"
] | null | null | null | setup.py | qyliu-hkust/fasthaversine | 5ec8f96d827369705cf313ec363a035b14952673 | [
"MIT"
] | 1 | 2022-01-26T16:10:59.000Z | 2022-01-26T16:10:59.000Z | from setuptools import setup
setup(
name='fasthaversine',
version='0.1.1',
description='A fast vectorized version of haversine distance calculation.',
include_package_data=True,
install_requires=['numpy'],
long_description=open('README.md').read(),
long_description_content_type="text/markdown",
author='Qiyu Liu',
author_email='keiyuk.liu@gmail.com',
maintainer='Qiyu Liu',
maintainer_email='keiyuk.liu@gmail.com',
url='https://github.com/qyliu-hkust/fasthaversine',
download_url='https://github.com/qyliu-hkust/fasthaversine/archive/0.1.1.tar.gz',
packages=['fasthaversine'],
    license='MIT',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.1',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Topic :: Scientific/Engineering :: Mathematics'
],
)
| 37.205882 | 85 | 0.635573 | 138 | 1,265 | 5.753623 | 0.543478 | 0.191436 | 0.251889 | 0.261965 | 0.224181 | 0.100756 | 0.100756 | 0 | 0 | 0 | 0 | 0.022044 | 0.211067 | 1,265 | 33 | 86 | 38.333333 | 0.773547 | 0 | 0 | 0 | 0 | 0.03125 | 0.573123 | 0.017391 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.03125 | 0 | 0.03125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7443b2e010d0c7d8af95a0c0a62c63ce9d36dec4 | 330 | py | Python | MonitorFolders_test.py | judysu1983/PythonMBSi | 9481bf1409a888c3f8511bcd05718ea81a063fa1 | [
"bzip2-1.0.6"
] | null | null | null | MonitorFolders_test.py | judysu1983/PythonMBSi | 9481bf1409a888c3f8511bcd05718ea81a063fa1 | [
"bzip2-1.0.6"
] | null | null | null | MonitorFolders_test.py | judysu1983/PythonMBSi | 9481bf1409a888c3f8511bcd05718ea81a063fa1 | [
"bzip2-1.0.6"
] | null | null | null | import sys
import time, os
import logging
from watchdog.observers import Observer
from watchdog.events import LoggingEventHandler
from watchdog.events import RegexMatchingEventHandler
def D365Shell():
    # sync source depot master files
    os.chdir('C:\\Depots\\MBSI\\Projects\\D365Shell\\UI\\Master\\Source\\Portal\\lcl')
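
# Hedged sketch (not in the original file): wiring the imported watchdog
# classes together to actually monitor a folder; the path, log format, and
# polling loop are assumptions, since the original script stops after chdir.
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
    event_handler = LoggingEventHandler()
    observer = Observer()
    observer.schedule(event_handler, path='.', recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()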
| 30 | 85 | 0.775758 | 41 | 330 | 6.243902 | 0.682927 | 0.140625 | 0.140625 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021127 | 0.139394 | 330 | 10 | 86 | 33 | 0.880282 | 0.087879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.75 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
7450b1e06fd88c0dee4fbbcb7b5f06feb4416c38 | 624 | bzl | Python | source/bazel/deps/big_integer_cpp/get.bzl | luxe/unilang | 6c8a431bf61755f4f0534c6299bd13aaeba4b69e | [
"MIT"
] | 33 | 2019-05-30T07:43:32.000Z | 2021-12-30T13:12:32.000Z | source/bazel/deps/big_integer_cpp/get.bzl | luxe/unilang | 6c8a431bf61755f4f0534c6299bd13aaeba4b69e | [
"MIT"
] | 371 | 2019-05-16T15:23:50.000Z | 2021-09-04T15:45:27.000Z | source/bazel/deps/big_integer_cpp/get.bzl | luxe/unilang | 6c8a431bf61755f4f0534c6299bd13aaeba4b69e | [
"MIT"
] | 6 | 2019-08-22T17:37:36.000Z | 2020-11-07T07:15:32.000Z | # Do not edit this file directly.
# It was auto-generated by: code/programs/reflexivity/reflexive_refresh
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
def bigIntegerCpp():
http_archive(
name="big_integer_cpp" ,
build_file="//bazel/deps/big_integer_cpp:build.BUILD" ,
sha256="1c9505406accb1216947ca60299ed70726eade7c9458c7c7f94ca2aea68d288e" ,
strip_prefix="BigIntegerCPP-79e7b023bf5157c0f8d308d3791cf3b081d1e156" ,
urls = [
"https://github.com/Unilang/BigIntegerCPP/archive/79e7b023bf5157c0f8d308d3791cf3b081d1e156.tar.gz",
],
)
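
# Hedged usage note (an assumption, not part of the generated file): this repo
# rule is typically invoked from a WORKSPACE file or a deps macro, e.g.
#   load("//bazel/deps/big_integer_cpp:get.bzl", "bigIntegerCpp")
#   bigIntegerCpp()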
| 36.705882 | 111 | 0.727564 | 60 | 624 | 7.383333 | 0.733333 | 0.049661 | 0.058691 | 0.081264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18618 | 0.165064 | 624 | 16 | 112 | 39 | 0.664107 | 0.161859 | 0 | 0 | 1 | 0 | 0.625 | 0.388462 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | true | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7451d7877239c61cee4b8ef27730e2958d28e2a9 | 1,406 | py | Python | xrpl/models/transactions/offer_create.py | mDuo13/xrpl-py | 70f927dcd2dbb8644b3e210b0a8de2a214e71e3d | [
"0BSD"
] | null | null | null | xrpl/models/transactions/offer_create.py | mDuo13/xrpl-py | 70f927dcd2dbb8644b3e210b0a8de2a214e71e3d | [
"0BSD"
] | null | null | null | xrpl/models/transactions/offer_create.py | mDuo13/xrpl-py | 70f927dcd2dbb8644b3e210b0a8de2a214e71e3d | [
"0BSD"
] | null | null | null | """
Represents an OfferCreate transaction on the XRP Ledger. An OfferCreate transaction is
effectively a limit order. It defines an intent to exchange currencies, and creates an
Offer object if not completely fulfilled when placed. Offers can be partially fulfilled.
`See OfferCreate <https://xrpl.org/offercreate.html>`_
"""
from dataclasses import dataclass, field
from typing import Optional
from xrpl.models.amounts import Amount
from xrpl.models.required import REQUIRED
from xrpl.models.transactions.transaction import Transaction, TransactionType
from xrpl.models.utils import require_kwargs_on_init
@require_kwargs_on_init
@dataclass(frozen=True)
class OfferCreate(Transaction):
"""
Represents an OfferCreate transaction on the XRP Ledger. An OfferCreate
transaction is effectively a limit order. It defines an intent to exchange
currencies, and creates an Offer object if not completely fulfilled when
placed. Offers can be partially fulfilled.
`See OfferCreate <https://xrpl.org/offercreate.html>`_
"""
#: This field is required.
taker_gets: Amount = REQUIRED # type: ignore
#: This field is required.
taker_pays: Amount = REQUIRED # type: ignore
expiration: Optional[int] = None
offer_sequence: Optional[int] = None
transaction_type: TransactionType = field(
default=TransactionType.OFFER_CREATE,
init=False,
)
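
# Hedged usage sketch (not part of the original module; the address and
# amounts are placeholders):
#
#   from xrpl.models.amounts import IssuedCurrencyAmount
#   tx = OfferCreate(
#       account="rEXAMPLE...",   # address of the account placing the offer
#       taker_gets="1000000",    # plain strings are XRP amounts, in drops
#       taker_pays=IssuedCurrencyAmount(
#           currency="USD", issuer="rISSUER...", value="10",
#       ),
#   )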
| 36.051282 | 88 | 0.763158 | 179 | 1,406 | 5.921788 | 0.396648 | 0.103774 | 0.090566 | 0.064151 | 0.535849 | 0.490566 | 0.490566 | 0.490566 | 0.490566 | 0.490566 | 0 | 0 | 0.171408 | 1,406 | 38 | 89 | 37 | 0.909871 | 0.507824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.352941 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
746431f999141fab0a1700aa2226e12544bdeba5 | 7,294 | py | Python | trait/test.py | zhao-embassy/learn_language | 2719f27dea1877ba1da32f1c1d53ebedd5a81fdf | [
"FSFAP"
] | 48 | 2015-02-05T15:25:47.000Z | 2022-01-07T05:52:03.000Z | trait/test.py | zhao-embassy/learn_language | 2719f27dea1877ba1da32f1c1d53ebedd5a81fdf | [
"FSFAP"
] | null | null | null | trait/test.py | zhao-embassy/learn_language | 2719f27dea1877ba1da32f1c1d53ebedd5a81fdf | [
"FSFAP"
] | 21 | 2015-02-12T23:50:10.000Z | 2019-12-12T08:25:35.000Z | from coderunner import *
header("Instantiation")
test(Scala,
"""
trait Foo{
def foo() = println("foo!")
}
new Foo // error
""", """
...tmp.scala:5: error: trait Foo is abstract; cannot be instantiated
new Foo // error
^
one error found
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
print value:(Foo new).
""", """
a Foo
""")
comment('Oops, trait in Squeak can be instantiated...')
test(Ruby,
"""
module Foo end
Foo new
""", """
tmp.rb:3:in `<main>': undefined local variable or method `new' for main:Object (NameError)
""")
header('Single inheritance')
test(Scala,
"""
trait Foo{
def foo() = println("foo!")
}
class C extends Foo{}
new C().foo
""", """
foo!
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
Foo compile: '
foo
^''foo''
'.
Object subclass: #C
uses: Foo
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
print value: (C new foo).
""", """
foo
""")
test(Ruby,
"""
module Foo
def foo
puts "foo"
end
end
class C
include Foo
end
C.new.foo
""", """
foo
""")
header('Multiple inheritance')
test(Scala,
"""
trait Foo{
def foo() = println("foo!")
}
trait Bar{
def bar() = println("bar!")
}
class C extends Foo with Bar{}
new C().foo
new C().bar
""", """
foo!
bar!
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
Foo compile: '
foo
^''foo''
'.
Trait named: #Bar
uses: {}
category: #MyCategory.
Bar compile: '
bar
^''bar''
'.
Object subclass: #C
uses: Foo + Bar
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
print value: (C new foo).
print value: (C new bar).
""", """
foo
bar
""")
test(Ruby,
"""
module Foo
def foo
puts "foo"
end
end
module Bar
def bar
puts "bar"
end
end
class C
include Foo
include Bar
end
C.new.foo
C.new.bar
""", """
foo
bar
""")
header('Conflicting name')
test(Scala,
"""
trait Foo{
def hello() = println("foo!")
}
trait Bar{
def hello() = println("bar!")
}
class C extends Foo with Bar{}
""", """
...tmp.scala:9: error: class C inherits conflicting members:
method hello in trait Foo of type ()Unit and
method hello in trait Bar of type ()Unit
(Note: this can be resolved by declaring an override in class C.)
class C extends Foo with Bar{}
^
one error found
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
Foo compile: '
hello
^''foo''
'.
Trait named: #Bar
uses: {}
category: #MyCategory.
Bar compile: '
hello
^''bar''
'.
Object subclass: #C
uses: Foo + Bar
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
[
print value: (C new hello).
] on: Exception
do: printException.
""", """
Error: A class or trait does not properly resolve a conflict between multiple traits it uses.
""")
comment('error occurs when you send a message, not when you define a class')
test(Ruby,
"""
module Foo
def hello
puts "foo"
end
end
module Bar
def hello
puts "bar"
end
end
class C
include Foo
include Bar
end
C.new.hello
""", """
bar
""")
comment("Ruby silently overrides conflicting methods")
header('Choose one of the methods')
test(Scala,
"""
trait Foo{
def hello() = println("foo!")
}
trait Bar{
def hello() = println("bar!")
}
class C extends Foo with Bar{
override def hello() = super[Bar].hello
}
new C().hello
""", """
bar!
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
Foo compile: '
hello
^''foo''
'.
Trait named: #Bar
uses: {}
category: #MyCategory.
Bar compile: '
hello
^''bar''
'.
Object subclass: #C
uses: Foo - {#hello} + Bar
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
print value: (C new hello).
""", """
bar
""")
test(Ruby,
"""
module Foo
def hello
puts "foo"
end
end
module Bar
def hello
puts "bar"
end
end
class C
include Bar
include Foo
def hello
Bar.instance_method(:hello).bind(self).call
end
end
C.new.hello
""", """
bar
""")
header('Use both of the methods')
test(Scala,
"""
trait Foo{
def hello() = println("foo!")
}
trait Bar{
def hello() = println("bar!")
}
class C extends Foo with Bar{
override def hello() = { // use both
super[Foo].hello
super[Bar].hello
}
}
new C().hello
""", """
foo!
bar!
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
Foo compile: '
hello
^''foo''
'.
Trait named: #Bar
uses: {}
category: #MyCategory.
Bar compile: '
hello
^''bar''
'.
Object subclass: #C
uses: (Foo @ {#foo -> #hello} - {#hello} +
Bar @ {#bar -> #hello} - {#hello})
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
C compile: '
hello
^(self foo , self bar)
'.
print value: (C new hello).
""", """
foobar
""")
test(Ruby,
"""
module Foo
def hello
puts "foo"
end
end
module Bar
def hello
puts "bar"
end
end
class C
include Foo
include Bar
def hello
Foo.instance_method(:hello).bind(self).call
Bar.instance_method(:hello).bind(self).call
end
end
C.new.hello
""", """
foo
bar
""")
header('required trait(self type annotation of Scala)')
test(Scala,
"""
trait HaveFoo{
def foo() : String = "foo"
}
trait NeedFoo{
self : HaveFoo =>
def hello() = println(foo())
}
// error: NeedFoo should be with HaveFoo
class C extends NeedFoo{}
""", """
...tmp.scala:11: error: illegal inheritance;
self-type this.C does not conform to this.NeedFoo's selftype this.NeedFoo with this.HaveFoo
class C extends NeedFoo{}
^
one error found
""")
test(Scala,
"""
trait HaveFoo{
def foo() : String = "foo"
}
trait NeedFoo{
self : HaveFoo =>
def hello() = println(foo())
}
class C extends NeedFoo with HaveFoo{}
new C().hello
""", """
foo
""")
header('conflict between parent class and trait')
test(Scala,
"""
trait Foo{
def hello() = println("foo!")
}
class ParentClass{
def hello() = println("parent class!")
}
class C extends ParentClass with Foo{}
""", """
...tmp.scala:9: error: class C inherits conflicting members:
method hello in class ParentClass of type ()Unit and
method hello in trait Foo of type ()Unit
(Note: this can be resolved by declaring an override in class C.)
class C extends ParentClass with Foo{}
^
one error found
""")
test(Squeak,
"""
Trait named: #Foo
uses: {}
category: #MyCategory.
Foo compile: '
hello
^''foo''
'.
Object subclass: #ParentClass
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
ParentClass compile: '
hello
^''parent class''
'.
ParentClass subclass: #C
uses: Foo
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''
category: #MyCategory.
print value: (C new hello).
""", """
foo
""")
test(Ruby,
"""
module Foo
def hello
puts "foo"
end
end
class ParentClass
def hello
puts "parent class"
end
end
class C < ParentClass
include Foo
end
C.new.hello
""", """
foo
""")
main()
| 12.864198 | 93 | 0.612421 | 921 | 7,294 | 4.846906 | 0.136808 | 0.039427 | 0.054211 | 0.026658 | 0.739919 | 0.66353 | 0.637097 | 0.623432 | 0.584453 | 0.55466 | 0 | 0.001067 | 0.229092 | 7,294 | 566 | 94 | 12.886926 | 0.792815 | 0 | 0 | 0.751938 | 0 | 0.015504 | 0.675388 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.007752 | 0 | 0.007752 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
746f6002d2b7353a1d48482a81945178f62ef005 | 1,354 | py | Python | project/apps/core/templatetags/coretags.py | havencruise/emberjam | 38153f84dc09f84100f4c0f2c9523be905c16762 | [
"MIT"
] | 1 | 2016-04-04T05:43:05.000Z | 2016-04-04T05:43:05.000Z | project/apps/core/templatetags/coretags.py | havencruise/emberjam | 38153f84dc09f84100f4c0f2c9523be905c16762 | [
"MIT"
] | null | null | null | project/apps/core/templatetags/coretags.py | havencruise/emberjam | 38153f84dc09f84100f4c0f2c9523be905c16762 | [
"MIT"
] | null | null | null | from django import template
register = template.Library()
@register.filter('field_type')
def field_type(field):
"""
Get the name of the field class.
"""
if hasattr(field, 'field'):
field = field.field
s = (type(field.widget).__name__).replace('Input', '').lower()
return s
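
# Illustrative note (not in the original): a CharField's default TextInput
# widget yields 'text' and an EmailField's EmailInput yields 'email', so a
# template can write e.g. <div class="field-{{ field|field_type }}">.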
@register.filter('get_form_field')
def get_form_field(form, field):
return form[field]
@register.filter('all_fields_hidden')
def all_fields_hidden(form):
return all([field.is_hidden for field in form])
@register.inclusion_tag('core/form_fieldset_fields.html')
def form_as_fieldset_fields(form, fieldsets):
"""
Render the form as a fieldset form.
    Example usage in template with 'myform' and 'myfieldsets' as context attributes:
{% form_as_fieldset_fields myform myfieldsets %}
Sample fieldset:
MY_FIELDSETS = (
(
'info',
('first_name', 'middle_name', 'last_name', 'is_published')
),
(
'image',
('profile_image', 'avatar_image', 'profile_image_crop')
),
(
'profile',
('title', 'location', 'profile_full', 'profile_brief',
'website_url', 'average_artwork_cost', 'born_year',
'deceased_year')
),
)
"""
    return {'form': form, 'fieldsets': fieldsets}
| 27.632653 | 84 | 0.611521 | 153 | 1,354 | 5.150327 | 0.457516 | 0.050761 | 0.057107 | 0.050761 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257016 | 1,354 | 48 | 85 | 28.208333 | 0.7833 | 0.453471 | 0 | 0 | 0 | 0 | 0.148499 | 0.047393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.058824 | 0.117647 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7483b320b336f62d749d65e13237175c0e96a262 | 506 | py | Python | apps/goods/migrations/0009_auto_20180727_1713.py | lianxiaopang/camel-store-api | b8021250bf3d8cf7adc566deebdba55225148316 | [
"Apache-2.0"
] | 12 | 2020-02-01T01:52:01.000Z | 2021-04-28T15:06:43.000Z | apps/goods/migrations/0009_auto_20180727_1713.py | lianxiaopang/camel-store-api | b8021250bf3d8cf7adc566deebdba55225148316 | [
"Apache-2.0"
] | 5 | 2020-02-06T08:07:58.000Z | 2020-06-02T13:03:45.000Z | apps/goods/migrations/0009_auto_20180727_1713.py | lianxiaopang/camel-store-api | b8021250bf3d8cf7adc566deebdba55225148316 | [
"Apache-2.0"
] | 11 | 2020-02-03T13:07:46.000Z | 2020-11-29T01:44:06.000Z | # Generated by Django 2.0.7 on 2018-07-27 09:13
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('goods', '0008_auto_20180727_1203'),
]
operations = [
migrations.RenameField(
model_name='goods',
old_name='num',
new_name='asset_ratio_1',
),
migrations.RenameField(
model_name='goods',
old_name='num2',
new_name='asset_ratio_2',
),
]
| 21.083333 | 47 | 0.557312 | 54 | 506 | 4.981481 | 0.648148 | 0.156134 | 0.193309 | 0.223048 | 0.312268 | 0.312268 | 0.312268 | 0 | 0 | 0 | 0 | 0.100295 | 0.33004 | 506 | 23 | 48 | 22 | 0.693215 | 0.088933 | 0 | 0.352941 | 1 | 0 | 0.154684 | 0.050109 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
776c877396570b98ec8d30903006cdcceed3ffb5 | 2,661 | py | Python | hikcamerabot/registry.py | tropicoo/hik-camera-bot | a7108c08a8e009e7361bbb9904c3a71f3226afd5 | [
"MIT"
] | 1 | 2019-02-09T20:08:50.000Z | 2019-02-09T20:08:50.000Z | hikcamerabot/registry.py | SirNoish/hikvision-camera-bot | a7108c08a8e009e7361bbb9904c3a71f3226afd5 | [
"MIT"
] | 3 | 2019-02-10T12:42:10.000Z | 2019-02-16T00:33:29.000Z | hikcamerabot/registry.py | SirNoish/hikvision-camera-bot | a7108c08a8e009e7361bbb9904c3a71f3226afd5 | [
"MIT"
] | null | null | null | """Registry module."""
import logging
from collections import defaultdict
from typing import Iterator
from hikcamerabot.camera import HikvisionCam
RegistryValue = dict[str, HikvisionCam | dict | str]
CamRegistryType = dict[str, RegistryValue]
class CameraRegistry:
"""Registry class with camera meta information."""
def __init__(self) -> None:
self._log = logging.getLogger(self.__class__.__name__)
self._cam_registry: CamRegistryType = {}
self._group_registry = defaultdict(dict)
self._group_command_alias: dict[str, str] = {}
def __repr__(self) -> str:
return str(self._cam_registry)
def add(
self, cam: HikvisionCam, commands: dict, commands_presentation: str
) -> None:
"""Add metadata to teh registry."""
self._cam_registry[cam.id] = {
'cam': cam,
'cmds': commands,
'cmds_presentation': commands_presentation,
}
self._add_to_group_registry(cam)
def _add_to_group_registry(self, cam: HikvisionCam) -> None:
try:
key = self._group_command_alias[cam.group]
except KeyError:
key = f'group_{len(self._group_registry) + 1}'
try:
self._group_registry[key]['cams'].append(cam)
except KeyError:
self._group_command_alias[cam.group] = key
self._group_registry[key] = {
'name': cam.group,
'cams': [cam],
}
def get_commands(self, cam_id: str) -> dict:
"""Get camera commands."""
return self._cam_registry[cam_id]['cmds']
    def get_commands_presentation(self, cam_id: str) -> str:
"""Get camera commands presentation string."""
return self._cam_registry[cam_id]['cmds_presentation']
def get_instance(self, cam_id: str) -> HikvisionCam:
return self._cam_registry[cam_id]['cam']
def get_meta(self, cam_id: str) -> RegistryValue:
return self._cam_registry[cam_id]
def get_instances(self) -> Iterator[HikvisionCam]:
return (v['cam'] for v in self._cam_registry.values())
def get_all(self) -> CamRegistryType:
"""Return raw registry metadata dict."""
return self._cam_registry
def count(self) -> int:
"""Get cameras count."""
return len(self._cam_registry)
    def get_instances_by_group(self, group_name: str) -> list[HikvisionCam]:
        # Registry values are {'name': ..., 'cams': [...]}; return just the cams.
        return self._group_registry.get(group_name, {}).get('cams', [])
def get_groups_registry(self) -> dict:
return self._group_registry
def get_group(self, group_id: str) -> dict:
return self._group_registry[group_id]
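
# Hedged usage sketch (not part of the original module); assumes `cam` is an
# initialized HikvisionCam with its `id` and `group` attributes set:
#
#   registry = CameraRegistry()
#   registry.add(cam, commands={}, commands_presentation='/cmds_cam_1')
#   registry.get_instance(cam.id)  # -> the HikvisionCam instance
#   registry.count()               # -> 1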
| 31.678571 | 76 | 0.637354 | 312 | 2,661 | 5.121795 | 0.214744 | 0.070088 | 0.093867 | 0.05632 | 0.197747 | 0.163955 | 0.078849 | 0.041302 | 0 | 0 | 0 | 0.0005 | 0.248779 | 2,661 | 83 | 77 | 32.060241 | 0.798899 | 0.07779 | 0 | 0.071429 | 0 | 0 | 0.041356 | 0.013234 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.071429 | 0.125 | 0.535714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
777dad77b3e1c97e085e619432a2765455de4b49 | 223 | py | Python | invoices/context_processors.py | GDGSNF/My-Business | 792bb13a5b296260e5de7e03fba6445a13922851 | [
"MIT"
] | 21 | 2020-08-29T14:32:13.000Z | 2021-08-28T21:40:32.000Z | invoices/context_processors.py | GDGSNF/My-Business | 792bb13a5b296260e5de7e03fba6445a13922851 | [
"MIT"
] | 1 | 2020-10-11T21:56:15.000Z | 2020-10-11T21:56:15.000Z | invoices/context_processors.py | yezz123/My-Business | 792bb13a5b296260e5de7e03fba6445a13922851 | [
"MIT"
] | 5 | 2021-09-11T23:31:10.000Z | 2022-03-06T20:29:59.000Z | from django.db.models import Q
from invoices.models import Invoice
def review_invoices_processor(request):
invoices = Invoice.objects.filter(Q(status=2) | Q(status=0) | Q(status=1))
return {"invoices": invoices}
| 24.777778 | 78 | 0.73991 | 32 | 223 | 5.09375 | 0.59375 | 0.128834 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 0.139013 | 223 | 8 | 79 | 27.875 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.035874 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
7793cca5bbff4cb83222a737628d6f02856214fe | 66,156 | py | Python | pyboto3/elasticloadbalancingv2.py | thecraftman/pyboto3 | 653a0db2b00b06708334431da8f169d1f7c7734f | [
"MIT"
] | null | null | null | pyboto3/elasticloadbalancingv2.py | thecraftman/pyboto3 | 653a0db2b00b06708334431da8f169d1f7c7734f | [
"MIT"
] | null | null | null | pyboto3/elasticloadbalancingv2.py | thecraftman/pyboto3 | 653a0db2b00b06708334431da8f169d1f7c7734f | [
"MIT"
] | null | null | null | '''
The MIT License (MIT)
Copyright (c) 2016 WavyCloud
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
def add_tags(ResourceArns=None, Tags=None):
"""
Adds the specified tags to the specified resource. You can tag your Application Load Balancers and your target groups.
Each tag consists of a key and an optional value. If a resource already has a tag with the same key, AddTags updates its value.
To list the current tags for your resources, use DescribeTags . To remove tags from your resources, use RemoveTags .
See also: AWS API Documentation
Examples
This example adds the specified tags to the specified load balancer.
Expected Output:
:example: response = client.add_tags(
ResourceArns=[
'string',
],
Tags=[
{
'Key': 'string',
'Value': 'string'
},
]
)
:type ResourceArns: list
:param ResourceArns: [REQUIRED]
The Amazon Resource Name (ARN) of the resource.
(string) --
:type Tags: list
:param Tags: [REQUIRED]
The tags. Each resource can have a maximum of 10 tags.
(dict) --Information about a tag.
Key (string) -- [REQUIRED]The key of the tag.
Value (string) --The value of the tag.
:rtype: dict
:return: {}
:returns:
(dict) --
"""
pass
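
# Hedged usage note (not part of the generated stub): at runtime the call goes
# through a real boto3 client, along the lines of
#
#   import boto3
#   client = boto3.client('elbv2')
#   client.add_tags(
#       ResourceArns=['arn:aws:elasticloadbalancing:...'],
#       Tags=[{'Key': 'project', 'Value': 'demo'}],
#   )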
def can_paginate(operation_name=None):
"""
Check if an operation can be paginated.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is create_foo, and you'd normally invoke the
operation as client.create_foo(**kwargs), if the
create_foo operation can be paginated, you can use the
call client.get_paginator('create_foo').
"""
pass
def create_listener(LoadBalancerArn=None, Protocol=None, Port=None, SslPolicy=None, Certificates=None, DefaultActions=None):
"""
Creates a listener for the specified Application Load Balancer.
You can create up to 10 listeners per load balancer.
To update a listener, use ModifyListener . When you are finished with a listener, you can delete it using DeleteListener . If you are finished with both the listener and the load balancer, you can delete them both using DeleteLoadBalancer .
For more information, see Listeners for Your Application Load Balancers in the Application Load Balancers Guide .
See also: AWS API Documentation
Examples
This example creates an HTTP listener for the specified load balancer that forwards requests to the specified target group.
Expected Output:
This example creates an HTTPS listener for the specified load balancer that forwards requests to the specified target group. Note that you must specify an SSL certificate for an HTTPS listener. You can create and manage certificates using AWS Certificate Manager (ACM). Alternatively, you can create a certificate using SSL/TLS tools, get the certificate signed by a certificate authority (CA), and upload the certificate to AWS Identity and Access Management (IAM).
Expected Output:
:example: response = client.create_listener(
LoadBalancerArn='string',
Protocol='HTTP'|'HTTPS',
Port=123,
SslPolicy='string',
Certificates=[
{
'CertificateArn': 'string'
},
],
DefaultActions=[
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:type Protocol: string
:param Protocol: [REQUIRED]
The protocol for connections from clients to the load balancer.
:type Port: integer
:param Port: [REQUIRED]
The port on which the load balancer is listening.
:type SslPolicy: string
:param SslPolicy: The security policy that defines which ciphers and protocols are supported. The default is the current predefined security policy.
:type Certificates: list
:param Certificates: The SSL server certificate. You must provide exactly one certificate if the protocol is HTTPS.
(dict) --Information about an SSL server certificate deployed on a load balancer.
CertificateArn (string) --The Amazon Resource Name (ARN) of the certificate.
:type DefaultActions: list
:param DefaultActions: [REQUIRED]
The default action for the listener.
(dict) --Information about an action.
Type (string) -- [REQUIRED]The type of action.
TargetGroupArn (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the target group.
:rtype: dict
:return: {
'Listeners': [
{
'ListenerArn': 'string',
'LoadBalancerArn': 'string',
'Port': 123,
'Protocol': 'HTTP'|'HTTPS',
'Certificates': [
{
'CertificateArn': 'string'
},
],
'SslPolicy': 'string',
'DefaultActions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
},
]
}
"""
pass
def create_load_balancer(Name=None, Subnets=None, SecurityGroups=None, Scheme=None, Tags=None, IpAddressType=None):
"""
Creates an Application Load Balancer.
When you create a load balancer, you can specify security groups, subnets, IP address type, and tags. Otherwise, you could do so later using SetSecurityGroups , SetSubnets , SetIpAddressType , and AddTags .
To create listeners for your load balancer, use CreateListener . To describe your current load balancers, see DescribeLoadBalancers . When you are finished with a load balancer, you can delete it using DeleteLoadBalancer .
You can create up to 20 load balancers per region per account. You can request an increase for the number of load balancers for your account. For more information, see Limits for Your Application Load Balancer in the Application Load Balancers Guide .
For more information, see Application Load Balancers in the Application Load Balancers Guide .
See also: AWS API Documentation
Examples
This example creates an Internet-facing load balancer and enables the Availability Zones for the specified subnets.
Expected Output:
This example creates an internal load balancer and enables the Availability Zones for the specified subnets.
Expected Output:
:example: response = client.create_load_balancer(
Name='string',
Subnets=[
'string',
],
SecurityGroups=[
'string',
],
Scheme='internet-facing'|'internal',
Tags=[
{
'Key': 'string',
'Value': 'string'
},
],
IpAddressType='ipv4'|'dualstack'
)
:type Name: string
:param Name: [REQUIRED]
The name of the load balancer.
This name must be unique per region per account, can have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and must not begin or end with a hyphen.
:type Subnets: list
:param Subnets: [REQUIRED]
The IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify subnets from at least two Availability Zones.
(string) --
:type SecurityGroups: list
:param SecurityGroups: The IDs of the security groups to assign to the load balancer.
(string) --
:type Scheme: string
:param Scheme: The nodes of an Internet-facing load balancer have public IP addresses. The DNS name of an Internet-facing load balancer is publicly resolvable to the public IP addresses of the nodes. Therefore, Internet-facing load balancers can route requests from clients over the Internet.
The nodes of an internal load balancer have only private IP addresses. The DNS name of an internal load balancer is publicly resolvable to the private IP addresses of the nodes. Therefore, internal load balancers can only route requests from clients with access to the VPC for the load balancer.
The default is an Internet-facing load balancer.
:type Tags: list
:param Tags: One or more tags to assign to the load balancer.
(dict) --Information about a tag.
Key (string) -- [REQUIRED]The key of the tag.
Value (string) --The value of the tag.
:type IpAddressType: string
:param IpAddressType: The type of IP addresses used by the subnets for your load balancer. The possible values are ipv4 (for IPv4 addresses) and dualstack (for IPv4 and IPv6 addresses). Internal load balancers must use ipv4 .
:rtype: dict
:return: {
'LoadBalancers': [
{
'LoadBalancerArn': 'string',
'DNSName': 'string',
'CanonicalHostedZoneId': 'string',
'CreatedTime': datetime(2015, 1, 1),
'LoadBalancerName': 'string',
'Scheme': 'internet-facing'|'internal',
'VpcId': 'string',
'State': {
'Code': 'active'|'provisioning'|'failed',
'Reason': 'string'
},
'Type': 'application',
'AvailabilityZones': [
{
'ZoneName': 'string',
'SubnetId': 'string'
},
],
'SecurityGroups': [
'string',
],
'IpAddressType': 'ipv4'|'dualstack'
},
]
}
:returns:
(string) --
"""
pass
def create_rule(ListenerArn=None, Conditions=None, Priority=None, Actions=None):
"""
Creates a rule for the specified listener.
Each rule can have one action and one condition. Rules are evaluated in priority order, from the lowest value to the highest value. When the condition for a rule is met, the specified action is taken. If no conditions are met, the default action for the default rule is taken. For more information, see Listener Rules in the Application Load Balancers Guide .
To view your current rules, use DescribeRules . To update a rule, use ModifyRule . To set the priorities of your rules, use SetRulePriorities . To delete a rule, use DeleteRule .
See also: AWS API Documentation
Examples
This example creates a rule that forwards requests to the specified target group if the URL contains the specified pattern (for example, /img/*).
Expected Output:
:example: response = client.create_rule(
ListenerArn='string',
Conditions=[
{
'Field': 'string',
'Values': [
'string',
]
},
],
Priority=123,
Actions=[
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
)
:type ListenerArn: string
:param ListenerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the listener.
:type Conditions: list
:param Conditions: [REQUIRED]
A condition. Each condition specifies a field name and a single value.
If the field name is host-header , you can specify a single host name (for example, my.example.com). A host name is case insensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
A-Z, a-z, 0-9
- .
* (matches 0 or more characters)
? (matches exactly 1 character)
If the field name is path-pattern , you can specify a single path pattern. A path pattern is case sensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
A-Z, a-z, 0-9
_ - . $ / ~ " ' @ : +
& (using &amp;)
* (matches 0 or more characters)
? (matches exactly 1 character)
(dict) --Information about a condition for a rule.
Field (string) --The name of the field. The possible values are host-header and path-pattern .
Values (list) --The condition value.
If the field name is host-header , you can specify a single host name (for example, my.example.com). A host name is case insensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
A-Z, a-z, 0-9
- .
* (matches 0 or more characters)
? (matches exactly 1 character)
If the field name is path-pattern , you can specify a single path pattern (for example, /img/*). A path pattern is case sensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
A-Z, a-z, 0-9
_ - . $ / ~ " ' @ : +
& (using &amp;)
* (matches 0 or more characters)
? (matches exactly 1 character)
(string) --
:type Priority: integer
:param Priority: [REQUIRED]
The priority for the rule. A listener can't have multiple rules with the same priority.
:type Actions: list
:param Actions: [REQUIRED]
An action. Each action has the type forward and specifies a target group.
(dict) --Information about an action.
Type (string) -- [REQUIRED]The type of action.
TargetGroupArn (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the target group.
:rtype: dict
:return: {
'Rules': [
{
'RuleArn': 'string',
'Priority': 'string',
'Conditions': [
{
'Field': 'string',
'Values': [
'string',
]
},
],
'Actions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
],
'IsDefault': True|False
},
]
}
:returns:
A-Z, a-z, 0-9
- .
* (matches 0 or more characters)
? (matches exactly 1 character)
"""
pass
def create_target_group(Name=None, Protocol=None, Port=None, VpcId=None, HealthCheckProtocol=None, HealthCheckPort=None, HealthCheckPath=None, HealthCheckIntervalSeconds=None, HealthCheckTimeoutSeconds=None, HealthyThresholdCount=None, UnhealthyThresholdCount=None, Matcher=None):
"""
Creates a target group.
To register targets with the target group, use RegisterTargets . To update the health check settings for the target group, use ModifyTargetGroup . To monitor the health of targets in the target group, use DescribeTargetHealth .
To route traffic to the targets in a target group, specify the target group in an action using CreateListener or CreateRule .
To delete a target group, use DeleteTargetGroup .
For more information, see Target Groups for Your Application Load Balancers in the Application Load Balancers Guide .
See also: AWS API Documentation
Examples
This example creates a target group that you can use to route traffic to targets using HTTP on port 80. This target group uses the default health check configuration.
Expected Output:
:example: response = client.create_target_group(
Name='string',
Protocol='HTTP'|'HTTPS',
Port=123,
VpcId='string',
HealthCheckProtocol='HTTP'|'HTTPS',
HealthCheckPort='string',
HealthCheckPath='string',
HealthCheckIntervalSeconds=123,
HealthCheckTimeoutSeconds=123,
HealthyThresholdCount=123,
UnhealthyThresholdCount=123,
Matcher={
'HttpCode': 'string'
}
)
:type Name: string
:param Name: [REQUIRED]
The name of the target group.
This name must be unique per region per account, can have a maximum of 32 characters, must contain only alphanumeric characters or hyphens, and must not begin or end with a hyphen.
:type Protocol: string
:param Protocol: [REQUIRED]
The protocol to use for routing traffic to the targets.
:type Port: integer
:param Port: [REQUIRED]
The port on which the targets receive traffic. This port is used unless you specify a port override when registering the target.
:type VpcId: string
:param VpcId: [REQUIRED]
The identifier of the virtual private cloud (VPC).
:type HealthCheckProtocol: string
:param HealthCheckProtocol: The protocol the load balancer uses when performing health checks on targets. The default is the HTTP protocol.
:type HealthCheckPort: string
:param HealthCheckPort: The port the load balancer uses when performing health checks on targets. The default is traffic-port , which indicates the port on which each target receives traffic from the load balancer.
:type HealthCheckPath: string
:param HealthCheckPath: The ping path that is the destination on the targets for health checks. The default is /.
:type HealthCheckIntervalSeconds: integer
:param HealthCheckIntervalSeconds: The approximate amount of time, in seconds, between health checks of an individual target. The default is 30 seconds.
:type HealthCheckTimeoutSeconds: integer
:param HealthCheckTimeoutSeconds: The amount of time, in seconds, during which no response from a target means a failed health check. The default is 5 seconds.
:type HealthyThresholdCount: integer
:param HealthyThresholdCount: The number of consecutive health checks successes required before considering an unhealthy target healthy. The default is 5.
:type UnhealthyThresholdCount: integer
:param UnhealthyThresholdCount: The number of consecutive health check failures required before considering a target unhealthy. The default is 2.
:type Matcher: dict
:param Matcher: The HTTP codes to use when checking for a successful response from a target. The default is 200.
HttpCode (string) -- [REQUIRED]The HTTP codes. You can specify values between 200 and 499. The default value is 200. You can specify multiple values (for example, '200,202') or a range of values (for example, '200-299').
:rtype: dict
:return: {
'TargetGroups': [
{
'TargetGroupArn': 'string',
'TargetGroupName': 'string',
'Protocol': 'HTTP'|'HTTPS',
'Port': 123,
'VpcId': 'string',
'HealthCheckProtocol': 'HTTP'|'HTTPS',
'HealthCheckPort': 'string',
'HealthCheckIntervalSeconds': 123,
'HealthCheckTimeoutSeconds': 123,
'HealthyThresholdCount': 123,
'UnhealthyThresholdCount': 123,
'HealthCheckPath': 'string',
'Matcher': {
'HttpCode': 'string'
},
'LoadBalancerArns': [
'string',
]
},
]
}
:returns:
(string) --
"""
pass
def delete_listener(ListenerArn=None):
"""
Deletes the specified listener.
Alternatively, your listener is deleted when you delete the load balancer it is attached to using DeleteLoadBalancer .
See also: AWS API Documentation
Examples
This example deletes the specified listener.
Expected Output:
:example: response = client.delete_listener(
ListenerArn='string'
)
:type ListenerArn: string
:param ListenerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the listener.
:rtype: dict
:return: {}
"""
pass
def delete_load_balancer(LoadBalancerArn=None):
"""
Deletes the specified Application Load Balancer and its attached listeners.
You can't delete a load balancer if deletion protection is enabled. If the load balancer does not exist or has already been deleted, the call succeeds.
Deleting a load balancer does not affect its registered targets. For example, your EC2 instances continue to run and are still registered to their target groups. If you no longer need these EC2 instances, you can stop or terminate them.
See also: AWS API Documentation
Examples
This example deletes the specified load balancer.
Expected Output:
:example: response = client.delete_load_balancer(
LoadBalancerArn='string'
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:rtype: dict
:return: {}
"""
pass
def delete_rule(RuleArn=None):
"""
Deletes the specified rule.
See also: AWS API Documentation
Examples
This example deletes the specified rule.
Expected Output:
:example: response = client.delete_rule(
RuleArn='string'
)
:type RuleArn: string
:param RuleArn: [REQUIRED]
The Amazon Resource Name (ARN) of the rule.
:rtype: dict
:return: {}
"""
pass
def delete_target_group(TargetGroupArn=None):
"""
Deletes the specified target group.
You can delete a target group if it is not referenced by any actions. Deleting a target group also deletes any associated health checks.
See also: AWS API Documentation
Examples
This example deletes the specified target group.
Expected Output:
:example: response = client.delete_target_group(
TargetGroupArn='string'
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:rtype: dict
:return: {}
"""
pass
def deregister_targets(TargetGroupArn=None, Targets=None):
"""
Deregisters the specified targets from the specified target group. After the targets are deregistered, they no longer receive traffic from the load balancer.
See also: AWS API Documentation
Examples
This example deregisters the specified instance from the specified target group.
Expected Output:
:example: response = client.deregister_targets(
TargetGroupArn='string',
Targets=[
{
'Id': 'string',
'Port': 123
},
]
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:type Targets: list
:param Targets: [REQUIRED]
The targets. If you specified a port override when you registered a target, you must specify both the target ID and the port when you deregister it.
(dict) --Information about a target.
Id (string) -- [REQUIRED]The ID of the target.
Port (integer) --The port on which the target is listening.
:rtype: dict
:return: {}
:returns:
(dict) --
"""
pass
def describe_account_limits(Marker=None, PageSize=None):
"""
Describes the current Elastic Load Balancing resource limits for your AWS account.
For more information, see Limits for Your Application Load Balancer in the Application Load Balancer Guide .
See also: AWS API Documentation
:example: response = client.describe_account_limits(
Marker='string',
PageSize=123
)
:type Marker: string
:param Marker: The marker for the next set of results. (You received this marker from a previous call.)
:type PageSize: integer
:param PageSize: The maximum number of results to return with this call.
:rtype: dict
:return: {
'Limits': [
{
'Name': 'string',
'Max': 'string'
},
],
'NextMarker': 'string'
}
:returns:
application-load-balancers
listeners-per-application-load-balancer
rules-per-application-load-balancer
target-groups
targets-per-application-load-balancer
"""
pass
def describe_listeners(LoadBalancerArn=None, ListenerArns=None, Marker=None, PageSize=None):
"""
Describes the specified listeners or the listeners for the specified Application Load Balancer. You must specify either a load balancer or one or more listeners.
See also: AWS API Documentation
Examples
This example describes the specified listener.
Expected Output:
:example: response = client.describe_listeners(
LoadBalancerArn='string',
ListenerArns=[
'string',
],
Marker='string',
PageSize=123
)
:type LoadBalancerArn: string
:param LoadBalancerArn: The Amazon Resource Name (ARN) of the load balancer.
:type ListenerArns: list
:param ListenerArns: The Amazon Resource Names (ARN) of the listeners.
(string) --
:type Marker: string
:param Marker: The marker for the next set of results. (You received this marker from a previous call.)
:type PageSize: integer
:param PageSize: The maximum number of results to return with this call.
:rtype: dict
:return: {
'Listeners': [
{
'ListenerArn': 'string',
'LoadBalancerArn': 'string',
'Port': 123,
'Protocol': 'HTTP'|'HTTPS',
'Certificates': [
{
'CertificateArn': 'string'
},
],
'SslPolicy': 'string',
'DefaultActions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
},
],
'NextMarker': 'string'
}
"""
pass
def describe_load_balancer_attributes(LoadBalancerArn=None):
"""
Describes the attributes for the specified Application Load Balancer.
See also: AWS API Documentation
Examples
This example describes the attributes of the specified load balancer.
Expected Output:
:example: response = client.describe_load_balancer_attributes(
LoadBalancerArn='string'
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:rtype: dict
:return: {
'Attributes': [
{
'Key': 'string',
'Value': 'string'
},
]
}
"""
pass
def describe_load_balancers(LoadBalancerArns=None, Names=None, Marker=None, PageSize=None):
"""
Describes the specified Application Load Balancers or all of your Application Load Balancers.
To describe the listeners for a load balancer, use DescribeListeners . To describe the attributes for a load balancer, use DescribeLoadBalancerAttributes .
See also: AWS API Documentation
Examples
This example describes the specified load balancer.
Expected Output:
:example: response = client.describe_load_balancers(
LoadBalancerArns=[
'string',
],
Names=[
'string',
],
Marker='string',
PageSize=123
)
:type LoadBalancerArns: list
:param LoadBalancerArns: The Amazon Resource Names (ARN) of the load balancers. You can specify up to 20 load balancers in a single call.
(string) --
:type Names: list
:param Names: The names of the load balancers.
(string) --
:type Marker: string
:param Marker: The marker for the next set of results. (You received this marker from a previous call.)
:type PageSize: integer
:param PageSize: The maximum number of results to return with this call.
:rtype: dict
:return: {
'LoadBalancers': [
{
'LoadBalancerArn': 'string',
'DNSName': 'string',
'CanonicalHostedZoneId': 'string',
'CreatedTime': datetime(2015, 1, 1),
'LoadBalancerName': 'string',
'Scheme': 'internet-facing'|'internal',
'VpcId': 'string',
'State': {
'Code': 'active'|'provisioning'|'failed',
'Reason': 'string'
},
'Type': 'application',
'AvailabilityZones': [
{
'ZoneName': 'string',
'SubnetId': 'string'
},
],
'SecurityGroups': [
'string',
],
'IpAddressType': 'ipv4'|'dualstack'
},
],
'NextMarker': 'string'
}
:returns:
(string) --
"""
pass
def describe_rules(ListenerArn=None, RuleArns=None, Marker=None, PageSize=None):
"""
Describes the specified rules or the rules for the specified listener. You must specify either a listener or one or more rules.
See also: AWS API Documentation
Examples
This example describes the specified rule.
Expected Output:
:example: response = client.describe_rules(
ListenerArn='string',
RuleArns=[
'string',
],
Marker='string',
PageSize=123
)
:type ListenerArn: string
:param ListenerArn: The Amazon Resource Name (ARN) of the listener.
:type RuleArns: list
:param RuleArns: The Amazon Resource Names (ARN) of the rules.
(string) --
:type Marker: string
:param Marker: The marker for the next set of results. (You received this marker from a previous call.)
:type PageSize: integer
:param PageSize: The maximum number of results to return with this call.
:rtype: dict
:return: {
'Rules': [
{
'RuleArn': 'string',
'Priority': 'string',
'Conditions': [
{
'Field': 'string',
'Values': [
'string',
]
},
],
'Actions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
],
'IsDefault': True|False
},
],
'NextMarker': 'string'
}
:returns:
A-Z, a-z, 0-9
- .
* (matches 0 or more characters)
? (matches exactly 1 character)
"""
pass
def describe_ssl_policies(Names=None, Marker=None, PageSize=None):
"""
Describes the specified policies or all policies used for SSL negotiation.
For more information, see Security Policies in the Application Load Balancers Guide .
See also: AWS API Documentation
Examples
This example describes the specified policy used for SSL negotiation.
Expected Output:
:example: response = client.describe_ssl_policies(
Names=[
'string',
],
Marker='string',
PageSize=123
)
:type Names: list
:param Names: The names of the policies.
(string) --
:type Marker: string
:param Marker: The marker for the next set of results. (You received this marker from a previous call.)
:type PageSize: integer
:param PageSize: The maximum number of results to return with this call.
:rtype: dict
:return: {
'SslPolicies': [
{
'SslProtocols': [
'string',
],
'Ciphers': [
{
'Name': 'string',
'Priority': 123
},
],
'Name': 'string'
},
],
'NextMarker': 'string'
}
:returns:
(string) --
"""
pass
def describe_tags(ResourceArns=None):
"""
Describes the tags for the specified resources. You can describe the tags for one or more Application Load Balancers and target groups.
See also: AWS API Documentation
Examples
This example describes the tags assigned to the specified load balancer.
Expected Output:
:example: response = client.describe_tags(
ResourceArns=[
'string',
]
)
:type ResourceArns: list
:param ResourceArns: [REQUIRED]
The Amazon Resource Names (ARN) of the resources.
(string) --
:rtype: dict
:return: {
'TagDescriptions': [
{
'ResourceArn': 'string',
'Tags': [
{
'Key': 'string',
'Value': 'string'
},
]
},
]
}
"""
pass
def describe_target_group_attributes(TargetGroupArn=None):
"""
Describes the attributes for the specified target group.
See also: AWS API Documentation
Examples
This example describes the attributes of the specified target group.
Expected Output:
:example: response = client.describe_target_group_attributes(
TargetGroupArn='string'
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:rtype: dict
:return: {
'Attributes': [
{
'Key': 'string',
'Value': 'string'
},
]
}
"""
pass
def describe_target_groups(LoadBalancerArn=None, TargetGroupArns=None, Names=None, Marker=None, PageSize=None):
"""
Describes the specified target groups or all of your target groups. By default, all target groups are described. Alternatively, you can specify one of the following to filter the results: the ARN of the load balancer, the names of one or more target groups, or the ARNs of one or more target groups.
To describe the targets for a target group, use DescribeTargetHealth . To describe the attributes of a target group, use DescribeTargetGroupAttributes .
See also: AWS API Documentation
Examples
This example describes the specified target group.
Expected Output:
:example: response = client.describe_target_groups(
LoadBalancerArn='string',
TargetGroupArns=[
'string',
],
Names=[
'string',
],
Marker='string',
PageSize=123
)
:type LoadBalancerArn: string
:param LoadBalancerArn: The Amazon Resource Name (ARN) of the load balancer.
:type TargetGroupArns: list
:param TargetGroupArns: The Amazon Resource Names (ARN) of the target groups.
(string) --
:type Names: list
:param Names: The names of the target groups.
(string) --
:type Marker: string
:param Marker: The marker for the next set of results. (You received this marker from a previous call.)
:type PageSize: integer
:param PageSize: The maximum number of results to return with this call.
:rtype: dict
:return: {
'TargetGroups': [
{
'TargetGroupArn': 'string',
'TargetGroupName': 'string',
'Protocol': 'HTTP'|'HTTPS',
'Port': 123,
'VpcId': 'string',
'HealthCheckProtocol': 'HTTP'|'HTTPS',
'HealthCheckPort': 'string',
'HealthCheckIntervalSeconds': 123,
'HealthCheckTimeoutSeconds': 123,
'HealthyThresholdCount': 123,
'UnhealthyThresholdCount': 123,
'HealthCheckPath': 'string',
'Matcher': {
'HttpCode': 'string'
},
'LoadBalancerArns': [
'string',
]
},
],
'NextMarker': 'string'
}
:returns:
(string) --
"""
pass
def describe_target_health(TargetGroupArn=None, Targets=None):
"""
Describes the health of the specified targets or all of your targets.
See also: AWS API Documentation
Examples
This example describes the health of the targets for the specified target group. One target is healthy but the other is not specified in an action, so it can't receive traffic from the load balancer.
Expected Output:
This example describes the health of the specified target. This target is healthy.
Expected Output:
:example: response = client.describe_target_health(
TargetGroupArn='string',
Targets=[
{
'Id': 'string',
'Port': 123
},
]
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:type Targets: list
:param Targets: The targets.
(dict) --Information about a target.
Id (string) -- [REQUIRED]The ID of the target.
Port (integer) --The port on which the target is listening.
:rtype: dict
:return: {
'TargetHealthDescriptions': [
{
'Target': {
'Id': 'string',
'Port': 123
},
'HealthCheckPort': 'string',
'TargetHealth': {
'State': 'initial'|'healthy'|'unhealthy'|'unused'|'draining',
'Reason': 'Elb.RegistrationInProgress'|'Elb.InitialHealthChecking'|'Target.ResponseCodeMismatch'|'Target.Timeout'|'Target.FailedHealthChecks'|'Target.NotRegistered'|'Target.NotInUse'|'Target.DeregistrationInProgress'|'Target.InvalidState'|'Elb.InternalError',
'Description': 'string'
}
},
]
}
:returns:
Elb.RegistrationInProgress - The target is in the process of being registered with the load balancer.
Elb.InitialHealthChecking - The load balancer is still sending the target the minimum number of health checks required to determine its health status.
"""
pass
def generate_presigned_url(ClientMethod=None, Params=None, ExpiresIn=None, HttpMethod=None):
"""
Generate a presigned url given a client, its method, and arguments
:type ClientMethod: string
:param ClientMethod: The client method to presign for
:type Params: dict
:param Params: The parameters normally passed to
ClientMethod.
:type ExpiresIn: int
:param ExpiresIn: The number of seconds the presigned url is valid
for. By default it expires in an hour (3600 seconds)
:type HttpMethod: string
:param HttpMethod: The http method to use on the generated url. By
default, the http method is whatever is used in the method's model.
"""
pass
def get_paginator(operation_name=None):
"""
Create a paginator for an operation.
:type operation_name: string
:param operation_name: The operation name. This is the same name
as the method name on the client. For example, if the
method name is create_foo, and you'd normally invoke the
operation as client.create_foo(**kwargs), if the
create_foo operation can be paginated, you can use the
call client.get_paginator('create_foo').
:rtype: L{botocore.paginate.Paginator}
"""
pass
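# Illustrative usage only (not part of the generated stub): assuming the
# 'describe_target_groups' operation is pageable for this client, a paginator
# call would look like:
#   paginator = client.get_paginator('describe_target_groups')
#   for page in paginator.paginate():
#       print(page['TargetGroups'])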
def get_waiter():
"""
"""
pass
def modify_listener(ListenerArn=None, Port=None, Protocol=None, SslPolicy=None, Certificates=None, DefaultActions=None):
"""
Modifies the specified properties of the specified listener.
Any properties that you do not specify retain their current values. However, changing the protocol from HTTPS to HTTP removes the security policy and SSL certificate properties. If you change the protocol from HTTP to HTTPS, you must add the security policy and server certificate.
See also: AWS API Documentation
Examples
This example changes the default action for the specified listener.
Expected Output:
This example changes the server certificate for the specified HTTPS listener.
Expected Output:
:example: response = client.modify_listener(
ListenerArn='string',
Port=123,
Protocol='HTTP'|'HTTPS',
SslPolicy='string',
Certificates=[
{
'CertificateArn': 'string'
},
],
DefaultActions=[
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
)
:type ListenerArn: string
:param ListenerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the listener.
:type Port: integer
:param Port: The port for connections from clients to the load balancer.
:type Protocol: string
:param Protocol: The protocol for connections from clients to the load balancer.
:type SslPolicy: string
:param SslPolicy: The security policy that defines which protocols and ciphers are supported. For more information, see Security Policies in the Application Load Balancers Guide .
:type Certificates: list
:param Certificates: The SSL server certificate.
(dict) --Information about an SSL server certificate deployed on a load balancer.
CertificateArn (string) --The Amazon Resource Name (ARN) of the certificate.
:type DefaultActions: list
:param DefaultActions: The default actions.
(dict) --Information about an action.
Type (string) -- [REQUIRED]The type of action.
TargetGroupArn (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the target group.
:rtype: dict
:return: {
'Listeners': [
{
'ListenerArn': 'string',
'LoadBalancerArn': 'string',
'Port': 123,
'Protocol': 'HTTP'|'HTTPS',
'Certificates': [
{
'CertificateArn': 'string'
},
],
'SslPolicy': 'string',
'DefaultActions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
},
]
}
"""
pass
def modify_load_balancer_attributes(LoadBalancerArn=None, Attributes=None):
"""
Modifies the specified attributes of the specified Application Load Balancer.
If any of the specified attributes can't be modified as requested, the call fails. Any existing attributes that you do not modify retain their current values.
See also: AWS API Documentation
Examples
This example enables deletion protection for the specified load balancer.
Expected Output:
This example changes the idle timeout value for the specified load balancer.
Expected Output:
This example enables access logs for the specified load balancer. Note that the S3 bucket must exist in the same region as the load balancer and must have a policy attached that grants access to the Elastic Load Balancing service.
Expected Output:
:example: response = client.modify_load_balancer_attributes(
LoadBalancerArn='string',
Attributes=[
{
'Key': 'string',
'Value': 'string'
},
]
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:type Attributes: list
:param Attributes: [REQUIRED]
The load balancer attributes.
(dict) --Information about a load balancer attribute.
Key (string) --The name of the attribute.
access_logs.s3.enabled - Indicates whether access logs stored in Amazon S3 are enabled. The value is true or false .
access_logs.s3.bucket - The name of the S3 bucket for the access logs. This attribute is required if access logs in Amazon S3 are enabled. The bucket must exist in the same region as the load balancer and have a bucket policy that grants Elastic Load Balancing permission to write to the bucket.
access_logs.s3.prefix - The prefix for the location in the S3 bucket. If you don't specify a prefix, the access logs are stored in the root of the bucket.
deletion_protection.enabled - Indicates whether deletion protection is enabled. The value is true or false .
idle_timeout.timeout_seconds - The idle timeout value, in seconds. The valid range is 1-3600. The default is 60 seconds.
Value (string) --The value of the attribute.
:rtype: dict
:return: {
'Attributes': [
{
'Key': 'string',
'Value': 'string'
},
]
}
:returns:
access_logs.s3.enabled - Indicates whether access logs stored in Amazon S3 are enabled. The value is true or false .
access_logs.s3.bucket - The name of the S3 bucket for the access logs. This attribute is required if access logs in Amazon S3 are enabled. The bucket must exist in the same region as the load balancer and have a bucket policy that grants Elastic Load Balancing permission to write to the bucket.
access_logs.s3.prefix - The prefix for the location in the S3 bucket. If you don't specify a prefix, the access logs are stored in the root of the bucket.
deletion_protection.enabled - Indicates whether deletion protection is enabled. The value is true or false .
idle_timeout.timeout_seconds - The idle timeout value, in seconds. The valid range is 1-3600. The default is 60 seconds.
"""
pass
def modify_rule(RuleArn=None, Conditions=None, Actions=None):
"""
Modifies the specified rule.
Any existing properties that you do not modify retain their current values.
To modify the default action, use ModifyListener .
See also: AWS API Documentation
Examples
This example modifies the condition for the specified rule.
Expected Output:
:example: response = client.modify_rule(
RuleArn='string',
Conditions=[
{
'Field': 'string',
'Values': [
'string',
]
},
],
Actions=[
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
]
)
:type RuleArn: string
:param RuleArn: [REQUIRED]
The Amazon Resource Name (ARN) of the rule.
:type Conditions: list
:param Conditions: The conditions.
(dict) --Information about a condition for a rule.
Field (string) --The name of the field. The possible values are host-header and path-pattern .
Values (list) --The condition value.
If the field name is host-header , you can specify a single host name (for example, my.example.com). A host name is case insensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
A-Z, a-z, 0-9
- .
* (matches 0 or more characters)
? (matches exactly 1 character)
If the field name is path-pattern , you can specify a single path pattern (for example, /img/*). A path pattern is case sensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
A-Z, a-z, 0-9
_ - . $ / ~ " ' @ : +
&amp; (using &amp;amp;)
* (matches 0 or more characters)
? (matches exactly 1 character)
(string) --
:type Actions: list
:param Actions: The actions.
(dict) --Information about an action.
Type (string) -- [REQUIRED]The type of action.
TargetGroupArn (string) -- [REQUIRED]The Amazon Resource Name (ARN) of the target group.
:rtype: dict
:return: {
'Rules': [
{
'RuleArn': 'string',
'Priority': 'string',
'Conditions': [
{
'Field': 'string',
'Values': [
'string',
]
},
],
'Actions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
],
'IsDefault': True|False
},
]
}
:returns:
A-Z, a-z, 0-9
- .
* (matches 0 or more characters)
? (matches exactly 1 character)
"""
pass
def modify_target_group(TargetGroupArn=None, HealthCheckProtocol=None, HealthCheckPort=None, HealthCheckPath=None, HealthCheckIntervalSeconds=None, HealthCheckTimeoutSeconds=None, HealthyThresholdCount=None, UnhealthyThresholdCount=None, Matcher=None):
"""
Modifies the health checks used when evaluating the health state of the targets in the specified target group.
To monitor the health of the targets, use DescribeTargetHealth .
See also: AWS API Documentation
Examples
This example changes the configuration of the health checks used to evaluate the health of the targets for the specified target group.
Expected Output:
:example: response = client.modify_target_group(
TargetGroupArn='string',
HealthCheckProtocol='HTTP'|'HTTPS',
HealthCheckPort='string',
HealthCheckPath='string',
HealthCheckIntervalSeconds=123,
HealthCheckTimeoutSeconds=123,
HealthyThresholdCount=123,
UnhealthyThresholdCount=123,
Matcher={
'HttpCode': 'string'
}
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:type HealthCheckProtocol: string
:param HealthCheckProtocol: The protocol to use to connect with the target.
:type HealthCheckPort: string
:param HealthCheckPort: The port to use to connect with the target.
:type HealthCheckPath: string
:param HealthCheckPath: The ping path that is the destination for the health check request.
:type HealthCheckIntervalSeconds: integer
:param HealthCheckIntervalSeconds: The approximate amount of time, in seconds, between health checks of an individual target.
:type HealthCheckTimeoutSeconds: integer
:param HealthCheckTimeoutSeconds: The amount of time, in seconds, during which no response means a failed health check.
:type HealthyThresholdCount: integer
:param HealthyThresholdCount: The number of consecutive health checks successes required before considering an unhealthy target healthy.
:type UnhealthyThresholdCount: integer
:param UnhealthyThresholdCount: The number of consecutive health check failures required before considering the target unhealthy.
:type Matcher: dict
:param Matcher: The HTTP codes to use when checking for a successful response from a target.
HttpCode (string) -- [REQUIRED]The HTTP codes. You can specify values between 200 and 499. The default value is 200. You can specify multiple values (for example, '200,202') or a range of values (for example, '200-299').
:rtype: dict
:return: {
'TargetGroups': [
{
'TargetGroupArn': 'string',
'TargetGroupName': 'string',
'Protocol': 'HTTP'|'HTTPS',
'Port': 123,
'VpcId': 'string',
'HealthCheckProtocol': 'HTTP'|'HTTPS',
'HealthCheckPort': 'string',
'HealthCheckIntervalSeconds': 123,
'HealthCheckTimeoutSeconds': 123,
'HealthyThresholdCount': 123,
'UnhealthyThresholdCount': 123,
'HealthCheckPath': 'string',
'Matcher': {
'HttpCode': 'string'
},
'LoadBalancerArns': [
'string',
]
},
]
}
:returns:
(string) --
"""
pass
def modify_target_group_attributes(TargetGroupArn=None, Attributes=None):
"""
Modifies the specified attributes of the specified target group.
See also: AWS API Documentation
Examples
This example sets the deregistration delay timeout to the specified value for the specified target group.
Expected Output:
:example: response = client.modify_target_group_attributes(
TargetGroupArn='string',
Attributes=[
{
'Key': 'string',
'Value': 'string'
},
]
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:type Attributes: list
:param Attributes: [REQUIRED]
The attributes.
(dict) --Information about a target group attribute.
Key (string) --The name of the attribute.
deregistration_delay.timeout_seconds - The amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused . The range is 0-3600 seconds. The default value is 300 seconds.
stickiness.enabled - Indicates whether sticky sessions are enabled. The value is true or false .
stickiness.type - The type of sticky sessions. The possible value is lb_cookie .
stickiness.lb_cookie.duration_seconds - The time period, in seconds, during which requests from a client should be routed to the same target. After this time period expires, the load balancer-generated cookie is considered stale. The range is 1 second to 1 week (604800 seconds). The default value is 1 day (86400 seconds).
Value (string) --The value of the attribute.
:rtype: dict
:return: {
'Attributes': [
{
'Key': 'string',
'Value': 'string'
},
]
}
:returns:
deregistration_delay.timeout_seconds - The amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused . The range is 0-3600 seconds. The default value is 300 seconds.
stickiness.enabled - Indicates whether sticky sessions are enabled. The value is true or false .
stickiness.type - The type of sticky sessions. The possible value is lb_cookie .
stickiness.lb_cookie.duration_seconds - The time period, in seconds, during which requests from a client should be routed to the same target. After this time period expires, the load balancer-generated cookie is considered stale. The range is 1 second to 1 week (604800 seconds). The default value is 1 day (86400 seconds).
"""
pass
def register_targets(TargetGroupArn=None, Targets=None):
"""
Registers the specified targets with the specified target group.
By default, the load balancer routes requests to registered targets using the protocol and port number for the target group. Alternatively, you can override the port for a target when you register it.
The target must be in the virtual private cloud (VPC) that you specified for the target group. If the target is an EC2 instance, it must be in the running state when you register it.
To remove a target from a target group, use DeregisterTargets .
See also: AWS API Documentation
Examples
This example registers the specified instances with the specified target group.
Expected Output:
This example registers the specified instance with the specified target group using multiple ports. This enables you to register ECS containers on the same instance as targets in the target group.
Expected Output:
:example: response = client.register_targets(
TargetGroupArn='string',
Targets=[
{
'Id': 'string',
'Port': 123
},
]
)
:type TargetGroupArn: string
:param TargetGroupArn: [REQUIRED]
The Amazon Resource Name (ARN) of the target group.
:type Targets: list
:param Targets: [REQUIRED]
The targets. The default port for a target is the port for the target group. You can specify a port override. If a target is already registered, you can register it again using a different port.
(dict) --Information about a target.
Id (string) -- [REQUIRED]The ID of the target.
Port (integer) --The port on which the target is listening.
:rtype: dict
:return: {}
:returns:
(dict) --
"""
pass
def remove_tags(ResourceArns=None, TagKeys=None):
"""
Removes the specified tags from the specified resource.
To list the current tags for your resources, use DescribeTags .
See also: AWS API Documentation
Examples
This example removes the specified tags from the specified load balancer.
Expected Output:
:example: response = client.remove_tags(
ResourceArns=[
'string',
],
TagKeys=[
'string',
]
)
:type ResourceArns: list
:param ResourceArns: [REQUIRED]
The Amazon Resource Names (ARN) of the resources.
(string) --
:type TagKeys: list
:param TagKeys: [REQUIRED]
The tag keys for the tags to remove.
(string) --
:rtype: dict
:return: {}
:returns:
(dict) --
"""
pass
def set_ip_address_type(LoadBalancerArn=None, IpAddressType=None):
"""
Sets the type of IP addresses used by the subnets of the specified Application Load Balancer.
See also: AWS API Documentation
:example: response = client.set_ip_address_type(
LoadBalancerArn='string',
IpAddressType='ipv4'|'dualstack'
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:type IpAddressType: string
:param IpAddressType: [REQUIRED]
The IP address type. The possible values are ipv4 (for IPv4 addresses) and dualstack (for IPv4 and IPv6 addresses). Internal load balancers must use ipv4 .
:rtype: dict
:return: {
'IpAddressType': 'ipv4'|'dualstack'
}
"""
pass
def set_rule_priorities(RulePriorities=None):
"""
Sets the priorities of the specified rules.
You can reorder the rules as long as there are no priority conflicts in the new order. Any existing rules that you do not specify retain their current priority.
See also: AWS API Documentation
Examples
This example sets the priority of the specified rule.
Expected Output:
:example: response = client.set_rule_priorities(
RulePriorities=[
{
'RuleArn': 'string',
'Priority': 123
},
]
)
:type RulePriorities: list
:param RulePriorities: [REQUIRED]
The rule priorities.
(dict) --Information about the priorities for the rules for a listener.
RuleArn (string) --The Amazon Resource Name (ARN) of the rule.
Priority (integer) --The rule priority.
:rtype: dict
:return: {
'Rules': [
{
'RuleArn': 'string',
'Priority': 'string',
'Conditions': [
{
'Field': 'string',
'Values': [
'string',
]
},
],
'Actions': [
{
'Type': 'forward',
'TargetGroupArn': 'string'
},
],
'IsDefault': True|False
},
]
}
:returns:
A-Z, a-z, 0-9
_ - . $ / ~ " ' @ : +
&amp; (using &amp;amp;)
* (matches 0 or more characters)
? (matches exactly 1 character)
"""
pass
def set_security_groups(LoadBalancerArn=None, SecurityGroups=None):
"""
Associates the specified security groups with the specified load balancer. The specified security groups override the previously associated security groups.
See also: AWS API Documentation
Examples
This example associates the specified security group with the specified load balancer.
Expected Output:
:example: response = client.set_security_groups(
LoadBalancerArn='string',
SecurityGroups=[
'string',
]
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:type SecurityGroups: list
:param SecurityGroups: [REQUIRED]
The IDs of the security groups.
(string) --
:rtype: dict
:return: {
'SecurityGroupIds': [
'string',
]
}
:returns:
(string) --
"""
pass
def set_subnets(LoadBalancerArn=None, Subnets=None):
"""
Enables the Availability Zones for the specified subnets for the specified load balancer. The specified subnets replace the previously enabled subnets.
See also: AWS API Documentation
Examples
This example enables the Availability Zones for the specified subnets for the specified load balancer.
Expected Output:
:example: response = client.set_subnets(
LoadBalancerArn='string',
Subnets=[
'string',
]
)
:type LoadBalancerArn: string
:param LoadBalancerArn: [REQUIRED]
The Amazon Resource Name (ARN) of the load balancer.
:type Subnets: list
:param Subnets: [REQUIRED]
The IDs of the subnets. You must specify at least two subnets. You can add only one subnet per Availability Zone.
(string) --
:rtype: dict
:return: {
'AvailabilityZones': [
{
'ZoneName': 'string',
'SubnetId': 'string'
},
]
}
"""
pass
| 33.87404 | 470 | 0.594323 | 7,045 | 66,156 | 5.559262 | 0.088573 | 0.028188 | 0.007558 | 0.01029 | 0.710507 | 0.656811 | 0.623592 | 0.591523 | 0.558356 | 0.532823 | 0 | 0.00814 | 0.333333 | 66,156 | 1,952 | 471 | 33.891393 | 0.879875 | 0.844852 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0.5 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
77a99dda069f9f7f48cb48d772b1f2807a59d8bd | 381 | py | Python | apple_problems/problem_6.py | loftwah/Daily-Coding-Problem | 0327f0b4f69ef419436846c831110795c7a3c1fe | [
"MIT"
] | 129 | 2018-10-14T17:52:29.000Z | 2022-01-29T15:45:57.000Z | apple_problems/problem_6.py | loftwah/Daily-Coding-Problem | 0327f0b4f69ef419436846c831110795c7a3c1fe | [
"MIT"
] | 2 | 2019-11-30T23:28:23.000Z | 2020-01-03T16:30:32.000Z | apple_problems/problem_6.py | loftwah/Daily-Coding-Problem | 0327f0b4f69ef419436846c831110795c7a3c1fe | [
"MIT"
] | 60 | 2019-02-21T09:18:31.000Z | 2022-03-25T21:01:04.000Z | """This problem was asked by Apple.
Gray code is a binary code where each successive value differ in only one bit,
as well as when wrapping around.
Gray code is common in hardware so that we don't see temporary spurious values during transitions.
Given a number of bits n, generate a possible gray code for it.
For example, for n = 2, one gray code would be [00, 01, 11, 10].
""" | 54.428571 | 98 | 0.750656 | 71 | 381 | 4.028169 | 0.774648 | 0.111888 | 0.06993 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029221 | 0.191601 | 381 | 7 | 99 | 54.428571 | 0.899351 | 0.981627 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
77c24118da8072428972a87376c807e485caae0f | 541 | py | Python | modules/apps/PrefabTfChain.py | Jumpscale/rsal9 | e7ff7638ca53dafe872ce3030a379e8b65cb4831 | [
"Apache-2.0"
] | 1 | 2017-06-07T08:11:57.000Z | 2017-06-07T08:11:57.000Z | modules/apps/PrefabTfChain.py | Jumpscale/rsal9 | e7ff7638ca53dafe872ce3030a379e8b65cb4831 | [
"Apache-2.0"
] | 106 | 2017-05-10T18:16:31.000Z | 2019-09-18T15:09:07.000Z | modules/apps/PrefabTfChain.py | Jumpscale/rsal9 | e7ff7638ca53dafe872ce3030a379e8b65cb4831 | [
"Apache-2.0"
] | 5 | 2018-01-26T16:11:52.000Z | 2018-08-22T15:12:52.000Z | from js9 import j
app = j.tools.prefab._getBaseAppClass()
class PrefabTfChain(app):
NAME = "tfchain"
def build(self, reset=False):
"""Get/Build the binaries of tfchain (tfchaid and tfchainc)
Keyword Arguments:
reset {bool} -- reset the build process (default: {False})
"""
self.prefab.blockchain.tfchain.build(reset=reset)
def install(self, reset=False):
"""
Install the tfchain binaries
"""
self.prefab.blockchain.tfchain.install(reset=reset)
| 23.521739 | 70 | 0.624769 | 61 | 541 | 5.52459 | 0.52459 | 0.053412 | 0.083086 | 0.160237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002506 | 0.262477 | 541 | 22 | 71 | 24.590909 | 0.842105 | 0.312384 | 0 | 0 | 0 | 0 | 0.022013 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.125 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
77cf32aec4f7cf325652a9d249291b4c344fb703 | 2,419 | py | Python | Test/test.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | 2 | 2020-12-05T07:42:55.000Z | 2021-01-06T23:23:18.000Z | Test/test.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | Test/test.py | Tim232/Python-Things | 05f0f373a4cf298e70d9668c88a6e3a9d1cd8146 | [
"MIT"
] | null | null | null | import numpy as np
a = np.array([[['a11', 'a12', 'a13', 'a21', 'a22', 'a23', 'a31', 'a32', 'a33', 'a41', 'a42', 'a43']]])
b = np.array([[['b11', 'b12', 'b13', 'b21', 'b22', 'b23', 'b31', 'b32', 'b33', 'b41', 'b42', 'b43']]])
c = np.concatenate([a, b], axis=1)
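# Note: c is reshaped/transposed cumulatively inside the nested loops below,
# so each iteration starts from the layout left by the previous iteration
# rather than from the freshly concatenated array.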
with open(file='d:/result.txt', mode='w') as f:
for (h1, w1, c1) in [[1, 1, 24], [1, 2, 12], [1, 3, 8], [1, 4, 6], [1, 6, 4], [1, 8, 3], [1, 12, 2], [1, 24, 1],
[2, 1, 12], [2, 2, 6], [2, 3, 4], [2, 4, 3], [2, 6, 2], [2, 12, 1],
[3, 1, 8], [3, 2, 4], [3, 4, 2], [3, 8, 1],
[4, 1, 6], [4, 2, 3], [4, 3, 2], [4, 6, 1],
[6, 1, 4], [6, 2, 2], [6, 4, 1],
[8, 1, 3], [8, 3, 1],
[12, 1, 2], [12, 2, 1],
[24, 1, 1]]:
for (h2, w2, c2) in [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]]:
for (h3, w3, c3) in [[1, 1, 24], [1, 2, 12], [1, 3, 8], [1, 4, 6], [1, 6, 4], [1, 8, 3], [1, 12, 2], [1, 24, 1],
[2, 1, 12], [2, 2, 6], [2, 3, 4], [2, 4, 3], [2, 6, 2], [2, 12, 1],
[3, 1, 8], [3, 2, 4], [3, 4, 2], [3, 8, 1],
[4, 1, 6], [4, 2, 3], [4, 3, 2], [4, 6, 1],
[6, 1, 4], [6, 2, 2], [6, 4, 1],
[8, 1, 3], [8, 3, 1],
[12, 1, 2], [12, 2, 1],
[24, 1, 1]]:
for (h4, w4, c4) in [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]]:
c = c.reshape([h1, w1, c1])
c = c.transpose([h2, w2, c2])
c = c.reshape([h3, w3, c3])
c = c.transpose([h4, w4, c4])
shape = c.shape
f.write(f'({h1},{w1},{c1}), ({h2},{w2},{c2}), ({h3},{w3},{c3}), ({h4},{w4},{c4})\n')
for line in c:
f.write(str(line) + '\n')
f.write(f'-- ({shape[0]},{shape[1]},{shape[2]}) --\n')
f.write('==============================================\n\n') | 62.025641 | 250 | 0.264572 | 366 | 2,419 | 1.748634 | 0.199454 | 0.0375 | 0.0375 | 0.03125 | 0.3875 | 0.3875 | 0.3875 | 0.3875 | 0.3875 | 0.3875 | 0 | 0.233333 | 0.429516 | 2,419 | 39 | 251 | 62.025641 | 0.230435 | 0 | 0 | 0.424242 | 0 | 0 | 0.071901 | 0.020661 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.030303 | 0 | 0.030303 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
77d3a8f1fd26a35e1791558751a04930ebe0761e | 157 | py | Python | aiocloudflare/api/accounts/tunnels/connections/connections.py | Stewart86/aioCloudflare | 341c0941f8f888a8b7e696e64550bce5da4949e6 | [
"MIT"
] | 2 | 2021-09-14T13:20:55.000Z | 2022-02-24T14:18:24.000Z | aiocloudflare/api/accounts/tunnels/connections/connections.py | Stewart86/aioCloudflare | 341c0941f8f888a8b7e696e64550bce5da4949e6 | [
"MIT"
] | 46 | 2021-09-08T08:39:45.000Z | 2022-03-29T12:31:05.000Z | aiocloudflare/api/accounts/tunnels/connections/connections.py | Stewart86/aioCloudflare | 341c0941f8f888a8b7e696e64550bce5da4949e6 | [
"MIT"
] | 1 | 2021-12-30T23:02:23.000Z | 2021-12-30T23:02:23.000Z | from aiocloudflare.commons.auth import Auth
class Connections(Auth):
_endpoint1 = "accounts"
_endpoint2 = "tunnels"
_endpoint3 = "connections"
| 19.625 | 43 | 0.726115 | 15 | 157 | 7.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023438 | 0.184713 | 157 | 7 | 44 | 22.428571 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0.165605 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
ae009ca6027718bbda68ea736eb2d58c56c75e2a | 231 | py | Python | examples/python/gpu/tensors/tensor_copy_02.py | kant/ocean-tensor-package | fb3fcff8bba7f4ef6cd8b8d02f0e1be1258da02d | [
"Apache-2.0"
] | 27 | 2018-08-16T21:32:49.000Z | 2021-11-30T10:31:08.000Z | examples/python/gpu/tensors/tensor_copy_02.py | kant/ocean-tensor-package | fb3fcff8bba7f4ef6cd8b8d02f0e1be1258da02d | [
"Apache-2.0"
] | null | null | null | examples/python/gpu/tensors/tensor_copy_02.py | kant/ocean-tensor-package | fb3fcff8bba7f4ef6cd8b8d02f0e1be1258da02d | [
"Apache-2.0"
] | 13 | 2018-08-17T17:33:16.000Z | 2021-11-30T10:31:09.000Z | import ocean
a = ocean.gpu[0](12345)
b = ocean.tensor([], ocean.float, ocean.gpu[0])
b.copy(a)
print(b)
b = ocean.double(b)
print(b)
b = ocean.cdouble(b)
print(b)
b.copy(54321)
print(b)
b.copy(ocean.gpu[0](1.2345))
print(b)
| 11 | 47 | 0.645022 | 46 | 231 | 3.23913 | 0.347826 | 0.201342 | 0.187919 | 0.161074 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09 | 0.134199 | 231 | 20 | 48 | 11.55 | 0.655 | 0 | 0 | 0.384615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.384615 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7ac312598521cfaa89198f4c3b86a054006966b2 | 6,929 | py | Python | pcap.py | ephracis/pylibpcap | bcab6cd2b27eeae0be1b3899cc1edb7b1f487136 | [
"BSD-3-Clause"
] | 3 | 2018-12-08T11:41:40.000Z | 2021-09-21T04:59:38.000Z | pcap.py | ephracis/pylibpcap | bcab6cd2b27eeae0be1b3899cc1edb7b1f487136 | [
"BSD-3-Clause"
] | 2 | 2016-04-28T18:21:10.000Z | 2021-04-05T09:15:44.000Z | pcap.py | ephracis/pylibpcap | bcab6cd2b27eeae0be1b3899cc1edb7b1f487136 | [
"BSD-3-Clause"
] | 1 | 2019-03-05T02:08:36.000Z | 2019-03-05T02:08:36.000Z | # This file was automatically generated by SWIG (http://www.swig.org).
# Version 2.0.10
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2,6,0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_pcap', [dirname(__file__)])
except ImportError:
import _pcap
return _pcap
if fp is not None:
try:
_mod = imp.load_module('_pcap', fp, pathname, description)
finally:
fp.close()
return _mod
_pcap = swig_import_helper()
del swig_import_helper
else:
import _pcap
del version_info
try:
_swig_property = property
except NameError:
pass # Python < 2.2 doesn't have 'property'.
def _swig_setattr_nondynamic(self,class_type,name,value,static=1):
if (name == "thisown"): return self.this.own(value)
if (name == "this"):
if type(value).__name__ == 'SwigPyObject':
self.__dict__[name] = value
return
method = class_type.__swig_setmethods__.get(name,None)
if method: return method(self,value)
if (not static):
self.__dict__[name] = value
else:
raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self,class_type,name,value):
return _swig_setattr_nondynamic(self,class_type,name,value,0)
def _swig_getattr(self,class_type,name):
if (name == "thisown"): return self.this.own()
method = class_type.__swig_getmethods__.get(name,None)
if method: return method(self)
raise AttributeError(name)
def _swig_repr(self):
try: strthis = "proxy of " + self.this.__repr__()
except: strthis = ""
return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)
try:
_object = object
_newclass = 1
except AttributeError:
class _object : pass
_newclass = 0
__doc__ = _pcap.__doc__
for dltname, dltvalue in _pcap.DLT.items():
globals()[dltname] = dltvalue
del dltname, dltvalue
class pcapObject(_object):
__swig_setmethods__ = {}
__setattr__ = lambda self, name, value: _swig_setattr(self, pcapObject, name, value)
__swig_getmethods__ = {}
__getattr__ = lambda self, name: _swig_getattr(self, pcapObject, name)
__repr__ = _swig_repr
def __init__(self):
import sys
if int(sys.version[0])>=2:
self.datalink.im_func.__doc__ = _pcap.pcapObject_datalink.__doc__
self.activate.im_func.__doc__ = _pcap.pcapObject_activate.__doc__
self.dispatch.im_func.__doc__ = _pcap.pcapObject_dispatch.__doc__
self.setnonblock.im_func.__doc__ = _pcap.pcapObject_setnonblock.__doc__
self.set_promisc.im_func.__doc__ = _pcap.pcapObject_set_promisc.__doc__
self.minor_version.im_func.__doc__ = _pcap.pcapObject_minor_version.__doc__
self.stats.im_func.__doc__ = _pcap.pcapObject_stats.__doc__
self.create.im_func.__doc__ = _pcap.pcapObject_create.__doc__
self.open_live.im_func.__doc__ = _pcap.pcapObject_open_live.__doc__
self.next.im_func.__doc__ = _pcap.pcapObject_next.__doc__
self.dump_open.im_func.__doc__ = _pcap.pcapObject_dump_open.__doc__
self.snapshot.im_func.__doc__ = _pcap.pcapObject_snapshot.__doc__
self.is_swapped.im_func.__doc__ = _pcap.pcapObject_is_swapped.__doc__
self.open_offline.im_func.__doc__ = _pcap.pcapObject_open_offline.__doc__
self.set_snaplen.im_func.__doc__ = _pcap.pcapObject_set_snaplen.__doc__
self.fileno.im_func.__doc__ = _pcap.pcapObject_fileno.__doc__
self.datalinks.im_func.__doc__ = _pcap.pcapObject_datalinks.__doc__
self.set_rfmon.im_func.__doc__ = _pcap.pcapObject_set_rfmon.__doc__
self.major_version.im_func.__doc__ = _pcap.pcapObject_major_version.__doc__
self.getnonblock.im_func.__doc__ = _pcap.pcapObject_getnonblock.__doc__
self.open_dead.im_func.__doc__ = _pcap.pcapObject_open_dead.__doc__
self.set_timeout.im_func.__doc__ = _pcap.pcapObject_set_timeout.__doc__
self.loop.im_func.__doc__ = _pcap.pcapObject_loop.__doc__
self.setfilter.im_func.__doc__ = _pcap.pcapObject_setfilter.__doc__
this = _pcap.new_pcapObject()
try: self.this.append(this)
except: self.this = this
__swig_destroy__ = _pcap.delete_pcapObject
__del__ = lambda self : None;
def create(self, *args): return _pcap.pcapObject_create(self, *args)
def set_snaplen(self, *args): return _pcap.pcapObject_set_snaplen(self, *args)
def set_promisc(self, *args): return _pcap.pcapObject_set_promisc(self, *args)
def set_rfmon(self, *args): return _pcap.pcapObject_set_rfmon(self, *args)
def set_timeout(self, *args): return _pcap.pcapObject_set_timeout(self, *args)
def activate(self): return _pcap.pcapObject_activate(self)
def open_live(self, *args): return _pcap.pcapObject_open_live(self, *args)
def open_dead(self, *args): return _pcap.pcapObject_open_dead(self, *args)
def open_offline(self, *args): return _pcap.pcapObject_open_offline(self, *args)
def dump_open(self, *args): return _pcap.pcapObject_dump_open(self, *args)
def setnonblock(self, *args): return _pcap.pcapObject_setnonblock(self, *args)
def getnonblock(self): return _pcap.pcapObject_getnonblock(self)
def setfilter(self, *args): return _pcap.pcapObject_setfilter(self, *args)
def loop(self, *args): return _pcap.pcapObject_loop(self, *args)
def dispatch(self, *args): return _pcap.pcapObject_dispatch(self, *args)
def next(self): return _pcap.pcapObject_next(self)
def datalink(self): return _pcap.pcapObject_datalink(self)
def datalinks(self): return _pcap.pcapObject_datalinks(self)
def snapshot(self): return _pcap.pcapObject_snapshot(self)
def is_swapped(self): return _pcap.pcapObject_is_swapped(self)
def major_version(self): return _pcap.pcapObject_major_version(self)
def minor_version(self): return _pcap.pcapObject_minor_version(self)
def stats(self): return _pcap.pcapObject_stats(self)
def fileno(self): return _pcap.pcapObject_fileno(self)
pcapObject_swigregister = _pcap.pcapObject_swigregister
pcapObject_swigregister(pcapObject)
def lookupdev():
return _pcap.lookupdev()
lookupdev = _pcap.lookupdev
def findalldevs(unpack=1):
return _pcap.findalldevs(unpack)
findalldevs = _pcap.findalldevs
def lookupnet(*args):
return _pcap.lookupnet(*args)
lookupnet = _pcap.lookupnet
def aton(*args):
return _pcap.aton(*args)
aton = _pcap.aton
def ntoa(*args):
return _pcap.ntoa(*args)
ntoa = _pcap.ntoa
# This file is compatible with both classic and new-style classes.
| 42.25 | 90 | 0.7141 | 894 | 6,929 | 4.958613 | 0.173378 | 0.154748 | 0.048725 | 0.070381 | 0.288292 | 0.153621 | 0.048725 | 0.035191 | 0 | 0 | 0 | 0.002844 | 0.18805 | 6,929 | 163 | 91 | 42.509202 | 0.785105 | 0.042575 | 0 | 0.072464 | 1 | 0 | 0.013889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.253623 | false | 0.014493 | 0.072464 | 0.217391 | 0.463768 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
7ac35c04d628c2286a5b5b5c085bff755c3f4ad5 | 107 | py | Python | artefact_nca/utils/utils.py | Ackey-code/3d-artefacts-nca | b13228d5dd30519ad885d2400061be2adf6cfc3c | [
"MIT"
] | 37 | 2021-05-26T03:41:07.000Z | 2022-02-03T21:24:26.000Z | artefact_nca/utils/utils.py | Ackey-code/3d-artefacts-nca | b13228d5dd30519ad885d2400061be2adf6cfc3c | [
"MIT"
] | 1 | 2021-12-01T21:43:33.000Z | 2021-12-01T21:43:33.000Z | artefact_nca/utils/utils.py | Ackey-code/3d-artefacts-nca | b13228d5dd30519ad885d2400061be2adf6cfc3c | [
"MIT"
] | 4 | 2021-06-07T17:29:13.000Z | 2021-12-18T16:30:50.000Z | import os
def makedirs(path):
if not os.path.exists(path):
os.makedirs(path)
return path
| 13.375 | 32 | 0.635514 | 16 | 107 | 4.25 | 0.5625 | 0.352941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261682 | 107 | 7 | 33 | 15.285714 | 0.860759 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7ad630a85cfa9d6e9ad12e494c49b6ff546ab0f7 | 113 | py | Python | dymouse/driver/old/scale.py | x4rMa/dymouse | 0304837f4af362ec33ec6da09c7eb6a9840dcca6 | [
"MIT"
] | 2 | 2020-11-02T17:52:01.000Z | 2021-02-25T14:34:24.000Z | dymouse/driver/old/scale.py | x4rMa/dymouse | 0304837f4af362ec33ec6da09c7eb6a9840dcca6 | [
"MIT"
] | null | null | null | dymouse/driver/old/scale.py | x4rMa/dymouse | 0304837f4af362ec33ec6da09c7eb6a9840dcca6 | [
"MIT"
] | null | null | null | from DataRecorder import DataRecorder
z = DataRecorder(plot=False)
z.make_record()
import pdb; pdb.set_trace()
| 16.142857 | 37 | 0.787611 | 16 | 113 | 5.4375 | 0.6875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115044 | 113 | 6 | 38 | 18.833333 | 0.87 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
7aeb3440f9c37924a5a0f17a54bef250befdcf4a | 380 | py | Python | src/manage.py | graipher/TeaRoom | 0beb8c2c889b8685d5b7a463206de41947c2d669 | [
"MIT"
] | null | null | null | src/manage.py | graipher/TeaRoom | 0beb8c2c889b8685d5b7a463206de41947c2d669 | [
"MIT"
] | 15 | 2015-05-20T12:55:13.000Z | 2022-03-11T23:26:40.000Z | src/manage.py | graipher/TeaRoom | 0beb8c2c889b8685d5b7a463206de41947c2d669 | [
"MIT"
] | 2 | 2016-11-17T11:07:41.000Z | 2017-07-07T11:18:36.000Z | #!/bin/sh
"""":
exec /usr/bin/env python -W ignore::DeprecationWarning $0 $@
"""
import os
import sys
import warnings
warnings.simplefilter("ignore", DeprecationWarning)
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "TeaRoom.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
| 20 | 71 | 0.752632 | 47 | 380 | 5.744681 | 0.659574 | 0.177778 | 0.133333 | 0.162963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003003 | 0.123684 | 380 | 18 | 72 | 21.111111 | 0.807808 | 0.189474 | 0 | 0 | 0 | 0 | 0.173333 | 0.073333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
7aed08ca73b10656f07d2319c9ccc9e4e6cf6e2f | 366 | py | Python | dashboard/utils.py | riparias/early-warning-webapp | 702691a6ecbabc7865e5f232e125c8dee28a7f2e | [
"MIT"
] | null | null | null | dashboard/utils.py | riparias/early-warning-webapp | 702691a6ecbabc7865e5f232e125c8dee28a7f2e | [
"MIT"
] | 124 | 2021-09-02T06:53:33.000Z | 2022-03-31T12:46:51.000Z | dashboard/utils.py | riparias/early-warning-webapp | 702691a6ecbabc7865e5f232e125c8dee28a7f2e | [
"MIT"
] | null | null | null | import subprocess
def readable_string(input_string: str) -> str:
"""Remove multiple whitespaces and \n to make a long string more readable"""
return " ".join(input_string.replace("\n", "").split())
def human_readable_git_version_number() -> str:
return subprocess.check_output(
["git", "describe", "--always"], encoding="UTF-8"
).strip()
| 28.153846 | 80 | 0.672131 | 46 | 366 | 5.173913 | 0.717391 | 0.092437 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003289 | 0.169399 | 366 | 12 | 81 | 30.5 | 0.779605 | 0.185792 | 0 | 0 | 0 | 0 | 0.093103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7af4117944ceb99cc978659fcc0837bce72c1297 | 219 | py | Python | backoffice/web/core/models.py | uktrade/trade-access-program | 8fb565e96de7d7bb0bde31255aef0f291063e93c | [
"MIT"
] | 1 | 2021-03-04T15:24:12.000Z | 2021-03-04T15:24:12.000Z | backoffice/web/core/models.py | uktrade/trade-access-program | 8fb565e96de7d7bb0bde31255aef0f291063e93c | [
"MIT"
] | 7 | 2020-08-24T13:27:02.000Z | 2021-06-09T18:42:31.000Z | backoffice/web/core/models.py | uktrade/trade-access-program | 8fb565e96de7d7bb0bde31255aef0f291063e93c | [
"MIT"
] | 1 | 2021-05-20T07:40:00.000Z | 2021-05-20T07:40:00.000Z | from django.db import models
class Image(models.Model):
file = models.ImageField(upload_to='images/')
uploaded_at = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.file.url
| 21.9 | 57 | 0.716895 | 30 | 219 | 4.966667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178082 | 219 | 9 | 58 | 24.333333 | 0.827778 | 0 | 0 | 0 | 0 | 0 | 0.031963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0.166667 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
7afb6ecd4cc96c4178d2c21f873246fe6ff0cebe | 157 | py | Python | companies/tests/test_home.py | vitorpvcampos/comp-emp | de9ceeda510e1c484316b52be409347fad59515d | [
"MIT"
] | null | null | null | companies/tests/test_home.py | vitorpvcampos/comp-emp | de9ceeda510e1c484316b52be409347fad59515d | [
"MIT"
] | 134 | 2020-11-23T12:16:08.000Z | 2022-03-20T13:42:11.000Z | companies/tests/test_home.py | vitorpvcampos/comp-emp | de9ceeda510e1c484316b52be409347fad59515d | [
"MIT"
] | null | null | null | from django.test import Client
def test_admin_home(client: Client):
resp = client.get('/admin/login/?next=/admin/')
assert resp.status_code == 200
| 22.428571 | 51 | 0.713376 | 23 | 157 | 4.73913 | 0.695652 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022556 | 0.152866 | 157 | 6 | 52 | 26.166667 | 0.796992 | 0 | 0 | 0 | 0 | 0 | 0.165605 | 0.165605 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
249821db5e9c11db98858f44ee541b1f3d480d14 | 315 | py | Python | 07-User-authentication/00-Pasword-hashing/bcrypt_basics.py | alehpineda/flask_bootcamp | 7310bf093be61f33567c6d8ffd710a29e35004cd | [
"MIT"
] | null | null | null | 07-User-authentication/00-Pasword-hashing/bcrypt_basics.py | alehpineda/flask_bootcamp | 7310bf093be61f33567c6d8ffd710a29e35004cd | [
"MIT"
] | 3 | 2021-02-08T20:38:44.000Z | 2021-06-02T00:46:15.000Z | 07-User-authentication/00-Pasword-hashing/bcrypt_basics.py | alehpineda/flask_bootcamp | 7310bf093be61f33567c6d8ffd710a29e35004cd | [
"MIT"
] | null | null | null | from flask_bcrypt import Bcrypt
bcrypt = Bcrypt()
password = 'supersecretpassword'
hashed = bcrypt.generate_password_hash(password=password)
print(hashed)
check = bcrypt.check_password_hash(hashed, 'wrongpassword')
print(check)
check = bcrypt.check_password_hash(hashed, 'supersecretpassword')
print(check)
| 17.5 | 65 | 0.8 | 36 | 315 | 6.805556 | 0.333333 | 0.146939 | 0.130612 | 0.195918 | 0.277551 | 0.277551 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101587 | 315 | 17 | 66 | 18.529412 | 0.865724 | 0 | 0 | 0.222222 | 1 | 0 | 0.161905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.444444 | 0.111111 | 0 | 0.111111 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
24a060f1df6df17b3e24a4be05becff92382bbc8 | 256 | py | Python | demo_built-in/demo_queue.py | Ethan16/python_misc | 29cf2fdbd7529a05bcf35768e0244e634fe2ae7a | [
"Apache-2.0"
] | 1 | 2019-05-04T09:26:29.000Z | 2019-05-04T09:26:29.000Z | demo_built-in/demo_queue.py | Ethan16/python_misc | 29cf2fdbd7529a05bcf35768e0244e634fe2ae7a | [
"Apache-2.0"
] | null | null | null | demo_built-in/demo_queue.py | Ethan16/python_misc | 29cf2fdbd7529a05bcf35768e0244e634fe2ae7a | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
@version: 1.0
@author: James
@license: Apache Licence
@contact: euler52201044@sina.com
@file: demo_queue.py
@time: 2019/4/7 12:26 PM
@description:
"""
from random import randint
from time import sleep
from queue import Queue
| 16 | 32 | 0.710938 | 38 | 256 | 4.763158 | 0.815789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09633 | 0.148438 | 256 | 15 | 33 | 17.066667 | 0.733945 | 0.664063 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
24a524313a1a72a486aecbd1a883179629ead6a3 | 215 | py | Python | src/sum.py | yxtay/data-structures-algorithms | a28f0b7a727192c121579ed51d44d00e09cc1b9a | [
"MIT"
] | 1 | 2020-06-23T16:08:51.000Z | 2020-06-23T16:08:51.000Z | src/sum.py | yxtay/data-structures-algorithms | a28f0b7a727192c121579ed51d44d00e09cc1b9a | [
"MIT"
] | null | null | null | src/sum.py | yxtay/data-structures-algorithms | a28f0b7a727192c121579ed51d44d00e09cc1b9a | [
"MIT"
] | null | null | null | def sum_iter(numbers):
total = 0
for n in numbers:
total = total + n
return total
def sum_rec(numbers):
if len(numbers) == 0:
return 0
return numbers[0] + sum_rec(numbers[1:])
| 16.538462 | 44 | 0.581395 | 32 | 215 | 3.8125 | 0.4375 | 0.098361 | 0.213115 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033784 | 0.311628 | 215 | 12 | 45 | 17.916667 | 0.790541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
24acf2855092bbd0ac5af08ee88d6c322bebbc76 | 584 | py | Python | library/flotilla/touch.py | sonntagsgesicht/flotilla-python | 3b2535d9ff41c99b3104e2b67f3cd46639e3be0b | [
"MIT"
] | 22 | 2016-01-26T14:08:25.000Z | 2022-01-17T01:43:26.000Z | library/flotilla/touch.py | sonntagsgesicht/flotilla-python | 3b2535d9ff41c99b3104e2b67f3cd46639e3be0b | [
"MIT"
] | 19 | 2016-01-09T19:53:30.000Z | 2022-02-10T17:19:46.000Z | library/flotilla/touch.py | sonntagsgesicht/flotilla-python | 3b2535d9ff41c99b3104e2b67f3cd46639e3be0b | [
"MIT"
] | 18 | 2015-12-16T18:13:36.000Z | 2021-11-14T15:26:44.000Z | from .module import Module
class Touch(Module):
name = 'touch'
@property
def one(self):
if len(self.data) > 0:
return int(self.data[0]) == 1
return False
@property
def two(self):
if len(self.data) > 1:
return int(self.data[1]) == 1
return False
@property
def three(self):
if len(self.data) > 2:
return int(self.data[2]) == 1
return False
@property
def four(self):
if len(self.data) > 3:
return int(self.data[3]) == 1
return False
| 19.466667 | 41 | 0.510274 | 77 | 584 | 3.87013 | 0.298701 | 0.214765 | 0.120805 | 0.174497 | 0.459732 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03252 | 0.368151 | 584 | 29 | 42 | 20.137931 | 0.775068 | 0 | 0 | 0.347826 | 0 | 0 | 0.008562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0 | 0.043478 | 0 | 0.652174 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
24b0da5ec1a158caa10fe4e900a7386b6e50688d | 485 | py | Python | 07/7.py | cjm00/project-euler | 10f186aafda2ed13bf93cf3e3ba6cff63c85fbd0 | [
"CC-BY-3.0"
] | 1 | 2015-08-16T20:30:40.000Z | 2015-08-16T20:30:40.000Z | 07/7.py | cjm00/project-euler | 10f186aafda2ed13bf93cf3e3ba6cff63c85fbd0 | [
"CC-BY-3.0"
] | 1 | 2016-08-11T13:06:12.000Z | 2016-08-11T13:06:12.000Z | 07/7.py | cjm00/project-euler | 10f186aafda2ed13bf93cf3e3ba6cff63c85fbd0 | [
"CC-BY-3.0"
] | null | null | null | #wow such code reuse
#this is a very slow sieve
known_prime_list = []
desired_prime = 10001
index = 2
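# Trial division against every prime found so far; stopping once
# prime * prime > input would be sufficient and much faster, but this
# version deliberately stays simple (hence "very slow" above).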
def IsPrime(input):
for prime in known_prime_list:
if input % prime == 0:
return False
return True
while len(known_prime_list) < desired_prime:
if not known_prime_list: #Empty lists are False
known_prime_list.append(index)
index += 1
continue
if IsPrime(index):
known_prime_list.append(index)
index += 1
known_prime_list.sort()
print(known_prime_list[-1])
| 16.724138 | 48 | 0.736082 | 78 | 485 | 4.346154 | 0.487179 | 0.235988 | 0.330383 | 0.123894 | 0.336283 | 0.182891 | 0.182891 | 0 | 0 | 0 | 0 | 0.025189 | 0.181443 | 485 | 28 | 49 | 17.321429 | 0.828715 | 0.134021 | 0 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.055556 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
24d02a0cf495cba1b4884a5173b78251ca064ae5 | 590 | py | Python | sample_api_app/main.py | ShankarChavan/FastAPI_ML_demo | 8f635231c10dbe2310aa6ae89316f931140f630a | [
"MIT"
] | null | null | null | sample_api_app/main.py | ShankarChavan/FastAPI_ML_demo | 8f635231c10dbe2310aa6ae89316f931140f630a | [
"MIT"
] | null | null | null | sample_api_app/main.py | ShankarChavan/FastAPI_ML_demo | 8f635231c10dbe2310aa6ae89316f931140f630a | [
"MIT"
] | null | null | null | from fastapi import FastAPI
from pydantic import BaseModel
app=FastAPI()
db=[]
class city(BaseModel):
name: str
time_zone: str
@app.get('/')
def index():
return {'healthcheck':'True'}
@app.get('/cities')
def get_cities():
return db
@app.get('/cities/{city_id}')
def get_city(city_id:int):
return db[city_id-1]
@app.post('/cities')
def create_city(city:city):
db.append(city.dict())
return db[-1]
@app.delete('/cities/{city_id}')
def delete_city(city_id:int):
    db.pop(city_id-1)
return {}
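# illustrative requests against a locally running server (assumes the
# default uvicorn port 8000):
#   curl -X POST localhost:8000/cities -H 'Content-Type: application/json' \
#        -d '{"name": "Lagos", "time_zone": "Africa/Lagos"}'
#   curl localhost:8000/cities/1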
#uvicorn main:app --reload | 17.352941 | 34 | 0.623729 | 85 | 590 | 4.211765 | 0.388235 | 0.083799 | 0.067039 | 0.083799 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006424 | 0.208475 | 590 | 34 | 35 | 17.352941 | 0.760171 | 0.042373 | 0 | 0 | 0 | 0 | 0.120301 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.208333 | false | 0 | 0.083333 | 0.125 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
24dc1f4e948216aa3249ac5e8455041bece65ace | 1,153 | py | Python | dataset/tfrecords/base/writer.py | AltumTek/deep-koalarization | b5a16751a40484ca4990e0b9c005fe31c3301812 | [
"MIT"
] | 23 | 2018-08-16T12:50:01.000Z | 2021-12-27T13:13:10.000Z | dataset/tfrecords/base/writer.py | Kriztoper/deep-koalarization | 7d45895272ac457cc2b4df836ff23598d889d49c | [
"MIT"
] | 22 | 2018-08-27T04:49:57.000Z | 2022-03-11T23:43:18.000Z | dataset/tfrecords/base/writer.py | Kriztoper/deep-koalarization | 7d45895272ac457cc2b4df836ff23598d889d49c | [
"MIT"
] | 7 | 2018-08-27T15:32:59.000Z | 2020-04-20T14:48:40.000Z | from os.path import join
import tensorflow as tf
compression = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE)
class RecordWriter(tf.python_io.TFRecordWriter):
"""
A commodity subclass of TFRecordWriter that adds the methods to
easily serialize different data types.
"""
def __init__(self, tfrecord_name, dest_folder=''):
self.path = join(dest_folder, tfrecord_name)
super().__init__(self.path, options=compression)
@staticmethod
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
@staticmethod
def _int64(single_int):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[single_int]))
@staticmethod
def _int64_list(list_of_int):
return tf.train.Feature(int64_list=tf.train.Int64List(value=list_of_int))
@staticmethod
def _float32(single_float):
return tf.train.Feature(float_list=tf.train.FloatList(value=[single_float]))
@staticmethod
def _float32_list(list_of_floats):
return tf.train.Feature(float_list=tf.train.FloatList(value=list_of_floats))
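# illustrative usage (a minimal sketch, not part of the original file):
#   with RecordWriter('train.tfrecord', dest_folder='/tmp') as writer:
#       example = tf.train.Example(features=tf.train.Features(
#           feature={'label': RecordWriter._int64(1)}))
#       writer.write(example.SerializeToString())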
| 31.162162 | 85 | 0.732003 | 150 | 1,153 | 5.36 | 0.373333 | 0.087065 | 0.080846 | 0.124378 | 0.256219 | 0.256219 | 0.256219 | 0.256219 | 0.256219 | 0.256219 | 0 | 0.016632 | 0.165655 | 1,153 | 36 | 86 | 32.027778 | 0.819127 | 0.088465 | 0 | 0.227273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0.227273 | 0.636364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
24dfd375a51b9e03b227d47163839bc164e5d9f8 | 104 | py | Python | wsgi.py | MagedMYoussef/corona-bel3raby | 4558f6ddc936f01e21f51800a73779ad855707b9 | [
"MIT"
] | null | null | null | wsgi.py | MagedMYoussef/corona-bel3raby | 4558f6ddc936f01e21f51800a73779ad855707b9 | [
"MIT"
] | null | null | null | wsgi.py | MagedMYoussef/corona-bel3raby | 4558f6ddc936f01e21f51800a73779ad855707b9 | [
"MIT"
] | 1 | 2020-04-10T20:36:24.000Z | 2020-04-10T20:36:24.000Z | from src.main.api import create_app
app = create_app("prod")
if __name__ == "__main__":
app.run()
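# in production this module is usually pointed at by a WSGI server rather
# than app.run(), e.g. (illustrative): gunicorn wsgi:app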
| 14.857143 | 35 | 0.682692 | 16 | 104 | 3.8125 | 0.6875 | 0.295082 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173077 | 104 | 6 | 36 | 17.333333 | 0.709302 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
24e0d81fe6021edda199097b214798a302e13107 | 744 | py | Python | qcloudsdkvpc/SubnetUnBindBmNatGatewayRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | qcloudsdkvpc/SubnetUnBindBmNatGatewayRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | qcloudsdkvpc/SubnetUnBindBmNatGatewayRequest.py | f3n9/qcloudcli | b965a4f0e6cdd79c1245c1d0cd2ca9c460a56f19 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from qcloudsdkcore.request import Request
class SubnetUnBindBmNatGatewayRequest(Request):
def __init__(self):
super(SubnetUnBindBmNatGatewayRequest, self).__init__(
'vpc', 'qcloudcliV1', 'SubnetUnBindBmNatGateway', 'vpc.api.qcloud.com')
def get_natId(self):
return self.get_params().get('natId')
def set_natId(self, natId):
self.add_param('natId', natId)
def get_subnetIds(self):
return self.get_params().get('subnetIds')
def set_subnetIds(self, subnetIds):
self.add_param('subnetIds', subnetIds)
def get_vpcId(self):
return self.get_params().get('vpcId')
def set_vpcId(self, vpcId):
self.add_param('vpcId', vpcId)
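# illustrative usage (a sketch; the IDs below are placeholders):
#   req = SubnetUnBindBmNatGatewayRequest()
#   req.set_natId('nat-xxxxxxxx')
#   req.set_subnetIds(['subnet-xxxxxxxx'])
#   req.set_vpcId('vpc-xxxxxxxx')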
| 26.571429 | 83 | 0.663978 | 85 | 744 | 5.576471 | 0.329412 | 0.037975 | 0.088608 | 0.107595 | 0.164557 | 0.164557 | 0 | 0 | 0 | 0 | 0 | 0.003373 | 0.202957 | 744 | 27 | 84 | 27.555556 | 0.795953 | 0.028226 | 0 | 0 | 0 | 0 | 0.130374 | 0.033287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.411765 | false | 0 | 0.058824 | 0.176471 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
24e80ea6fdf7d90f65f728e2d718804aa6303f31 | 2,490 | py | Python | py/legacypipe/runs.py | legacysurvey/pipeline | 76dc2a9fc94e7b94fcd41af77e7c0423b62d693a | [
"BSD-3-Clause"
] | null | null | null | py/legacypipe/runs.py | legacysurvey/pipeline | 76dc2a9fc94e7b94fcd41af77e7c0423b62d693a | [
"BSD-3-Clause"
] | 36 | 2015-06-26T20:39:44.000Z | 2015-07-03T03:36:54.000Z | py/legacypipe/runs.py | legacysurvey/pipeline | 76dc2a9fc94e7b94fcd41af77e7c0423b62d693a | [
"BSD-3-Clause"
] | null | null | null | from legacypipe.survey import LegacySurveyData
class DecamSurvey(LegacySurveyData):
def filter_ccd_kd_files(self, fns):
return [fn for fn in fns if 'decam' in fn]
def filter_ccds_files(self, fns):
return [fn for fn in fns if 'decam' in fn]
def filter_annotated_ccds_files(self, fns):
return [fn for fn in fns if 'decam' in fn]
def get_default_release(self):
return 9008
class NinetyPrimeMosaic(LegacySurveyData):
def filter_ccd_kd_files(self, fns):
return [fn for fn in fns if ('90prime' in fn) or ('mosaic' in fn)]
def filter_ccds_files(self, fns):
return [fn for fn in fns if ('90prime' in fn) or ('mosaic' in fn)]
def filter_annotated_ccds_files(self, fns):
return [fn for fn in fns if ('90prime' in fn) or ('mosaic' in fn)]
def get_default_release(self):
return 9009
class M33SurveyData(DecamSurvey):
def ccds_for_fitting(self, brick, ccds):
import numpy as np
from astrometry.libkd.spherematch import match_radec
I, _, _ = match_radec(ccds.ra, ccds.dec, np.array(23.462121), np.array(30.659925), 0.55, nearest=True)
#I = np.delete(I, np.where((ccds.filter[I] == 'g') * (ccds.expnum[I] != 661055))[0])
#I = np.delete(I, np.where((ccds.filter[I] == 'z') * (ccds.expnum[I] != 790242))[0])
return I
class OdinData(LegacySurveyData):
#def filter_ccd_kd_files(self, fns):
# return [fn for fn in fns if ('90prime' in fn) or ('mosaic' in fn)]
def filter_ccds_files(self, fns):
return [fn for fn in fns if ('odin' in fn)]
def filter_annotated_ccds_files(self, fns):
return [fn for fn in fns if ('odin' in fn)]
def get_default_release(self):
return 200
class HscData(LegacySurveyData):
#def filter_ccd_kd_files(self, fns):
# return [fn for fn in fns if ('90prime' in fn) or ('mosaic' in fn)]
def filter_ccds_files(self, fns):
return [fn for fn in fns if ('hsc' in fn)]
def filter_annotated_ccds_files(self, fns):
return [fn for fn in fns if ('hsc' in fn)]
def get_default_release(self):
return 200
runs = {
'decam': DecamSurvey,
'90prime-mosaic': NinetyPrimeMosaic,
'south': DecamSurvey,
'north': NinetyPrimeMosaic,
'm33': M33SurveyData,
'odin': OdinData,
'hsc': HscData,
None: LegacySurveyData,
}
def get_survey(name, **kwargs):
survey_class = runs[name]
survey = survey_class(**kwargs)
return survey
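# illustrative usage (a sketch; kwargs are forwarded to the survey class,
# and survey_dir is an assumed LegacySurveyData parameter):
#   survey = get_survey('decam', survey_dir='/path/to/survey')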
| 37.164179 | 110 | 0.648594 | 370 | 2,490 | 4.232432 | 0.197297 | 0.043423 | 0.091954 | 0.137931 | 0.637292 | 0.637292 | 0.637292 | 0.637292 | 0.60281 | 0.528736 | 0 | 0.03396 | 0.231325 | 2,490 | 66 | 111 | 37.727273 | 0.784222 | 0.151004 | 0 | 0.490566 | 0 | 0 | 0.050759 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.301887 | false | 0 | 0.056604 | 0.264151 | 0.754717 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
24fe3ca21a2fe9e00f892dde54ef4ec5b5835aed | 572 | py | Python | app/src/main/python/hello.py | hbszlf/Python | 009d66e50eb227b6d428f29fc09a32a53abcfd98 | [
"MIT"
] | null | null | null | app/src/main/python/hello.py | hbszlf/Python | 009d66e50eb227b6d428f29fc09a32a53abcfd98 | [
"MIT"
] | null | null | null | app/src/main/python/hello.py | hbszlf/Python | 009d66e50eb227b6d428f29fc09a32a53abcfd98 | [
"MIT"
] | null | null | null | from java import jclass
def greet(name):
print("--- hello,%s ---" % name)
def add(a, b):
return a + b
def sub(count, a=0, b=0, c=0):
return count - a - b - c
def get_list(a, b, c, d):
return [a, b, c, d]
def print_list(data):
print(type(data))
    # iterate over the Java ArrayList object
    for i in range(data.size()):
        print(data.get(i))
# calling a Java class from Python
def get_java_bean():
    JavaBean = jclass("com.mn.python.bean.JavaBean")  # use your own package name here
jb = JavaBean("python")
jb.setData("json")
jb.setData("xml")
jb.setData("xhtml")
return jb
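# illustrative usage of the plain-Python helpers (a sketch):
#   greet('world')           # --- hello,world ---
#   print(add(2, 3))         # 5
#   print(sub(10, 1, 2, 3))  # 4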
| 16.342857 | 62 | 0.583916 | 89 | 572 | 3.707865 | 0.460674 | 0.030303 | 0.027273 | 0.024242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006961 | 0.246504 | 572 | 34 | 63 | 16.823529 | 0.758701 | 0.068182 | 0 | 0 | 0 | 0 | 0.115312 | 0.05104 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0.05 | 0.15 | 0.55 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
701d54dd2464556f581913269546783a689d21de | 2,795 | py | Python | networks/utils.py | shirgur/UMIS | d3be2fe6adc05843aa3d5dde2733ca43b3e5b149 | [
"Apache-2.0"
] | 67 | 2019-08-19T06:14:41.000Z | 2022-01-18T02:04:18.000Z | networks/utils.py | shirgur/UMIS | d3be2fe6adc05843aa3d5dde2733ca43b3e5b149 | [
"Apache-2.0"
] | 8 | 2019-10-31T13:11:26.000Z | 2022-02-21T14:53:43.000Z | networks/utils.py | shirgur/UMIS | d3be2fe6adc05843aa3d5dde2733ca43b3e5b149 | [
"Apache-2.0"
] | 13 | 2019-10-06T14:05:24.000Z | 2020-04-30T08:46:15.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
class GradXYZ(nn.Module):
def __init__(self):
super(GradXYZ, self).__init__()
self.padding = 1
self.register_buffer('dX', torch.Tensor([[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[-1 / 2, 0, 1 / 2],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]
]
).unsqueeze(0).unsqueeze(0))
self.register_buffer('dY', torch.Tensor([[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, -1 / 2, 0],
[0, 0, 0],
[0, 1 / 2, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]
]
).unsqueeze(0).unsqueeze(0))
self.register_buffer('dZ', torch.Tensor([[[0, 0, 0],
[0, -1 / 2, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 1 / 2, 0],
[0, 0, 0]]
]
).unsqueeze(0).unsqueeze(0))
def forward(self, x):
dx = F.conv3d(x, self.dX, padding=self.padding).abs()
dy = F.conv3d(x, self.dY, padding=self.padding).abs()
dz = F.conv3d(x, self.dZ, padding=self.padding).abs()
return dx + dy + dz
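# illustrative usage (a sketch): conv3d expects a 5D (N, C, D, H, W) volume
#   grad = GradXYZ()
#   vol = torch.randn(1, 1, 16, 16, 16)
#   edges = grad(vol)  # sum of |central differences| along x, y and z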
def norm_ip(img, min, max):
out = torch.clamp(img, min=min, max=max)
out = (out - min) / (max - min + 1e-5)
return out
def norm_range(t, range=None):
if range is not None:
return norm_ip(t, range[0], range[1])
else:
return norm_ip(t, float(t.min()), float(t.max())) | 45.080645 | 76 | 0.243649 | 235 | 2,795 | 2.834043 | 0.195745 | 0.198198 | 0.261261 | 0.3003 | 0.324324 | 0.324324 | 0.307808 | 0.307808 | 0.277778 | 0.277778 | 0 | 0.100698 | 0.641145 | 2,795 | 62 | 77 | 45.080645 | 0.56331 | 0 | 0 | 0.407407 | 0 | 0 | 0.002146 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.055556 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
702e0552af3f2797b0caa011efffe0bc1a961fb3 | 1,186 | py | Python | test/lmp/model/_lstm_2002/test_forward.py | ProFatXuanAll/char-RNN | 531f101b3d1ba20bafd28ca060aafe6f583d1efb | [
"Beerware"
] | null | null | null | test/lmp/model/_lstm_2002/test_forward.py | ProFatXuanAll/char-RNN | 531f101b3d1ba20bafd28ca060aafe6f583d1efb | [
"Beerware"
] | null | null | null | test/lmp/model/_lstm_2002/test_forward.py | ProFatXuanAll/char-RNN | 531f101b3d1ba20bafd28ca060aafe6f583d1efb | [
"Beerware"
] | null | null | null | """Test forward pass and tensor graph.
Test target:
- :py:meth:`lmp.model._lstm_2002.LSTM2002.forward`.
"""
import torch
from lmp.model._lstm_2002 import LSTM2002
def test_forward_path(
lstm_2002: LSTM2002,
batch_cur_tkids: torch.Tensor,
batch_next_tkids: torch.Tensor,
) -> None:
"""Parameters used during forward pass must have gradients."""
# Make sure model has zero gradients at the begining.
lstm_2002 = lstm_2002.train()
lstm_2002.zero_grad()
loss = lstm_2002(batch_cur_tkids=batch_cur_tkids, batch_next_tkids=batch_next_tkids)
loss.backward()
assert loss.size() == torch.Size([])
assert loss.dtype == torch.float
assert hasattr(lstm_2002.emb.weight, 'grad')
assert hasattr(lstm_2002.h_0, 'grad')
assert hasattr(lstm_2002.c_0, 'grad')
assert hasattr(lstm_2002.proj_e2cg[1].weight, 'grad')
assert hasattr(lstm_2002.proj_e2cg[1].bias, 'grad')
assert hasattr(lstm_2002.proj_h2cg.weight, 'grad')
assert hasattr(lstm_2002.proj_c2ig, 'grad')
assert hasattr(lstm_2002.proj_c2fg, 'grad')
assert hasattr(lstm_2002.proj_c2og, 'grad')
assert hasattr(lstm_2002.proj_h2e[1].weight, 'grad')
assert hasattr(lstm_2002.proj_h2e[1].bias, 'grad')
| 31.210526 | 86 | 0.747049 | 182 | 1,186 | 4.620879 | 0.32967 | 0.171225 | 0.222354 | 0.274673 | 0.387634 | 0.387634 | 0.21522 | 0.173603 | 0 | 0 | 0 | 0.094321 | 0.123946 | 1,186 | 37 | 87 | 32.054054 | 0.715111 | 0.177909 | 0 | 0 | 0 | 0 | 0.045691 | 0 | 0 | 0 | 0 | 0 | 0.541667 | 1 | 0.041667 | false | 0 | 0.083333 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
7046caf4da8dad46dadb1c6a0c82033c5a73e184 | 882 | py | Python | accelbyte_py_sdk/api/eventlog/operations/event_registry/__init__.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | accelbyte_py_sdk/api/eventlog/operations/event_registry/__init__.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | 1 | 2021-10-13T03:46:58.000Z | 2021-10-13T03:46:58.000Z | accelbyte_py_sdk/api/eventlog/operations/event_registry/__init__.py | AccelByte/accelbyte-python-sdk | dcd311fad111c59da828278975340fb92e0f26f7 | [
"MIT"
] | null | null | null | # Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: justice_py_sdk_codegen/__main__.py
"""Auto-generated package that contains models used by the justice-event-log-service."""
__version__ = ""
__author__ = "AccelByte"
__email__ = "dev@accelbyte.net"
# pylint: disable=line-too-long
from .get_registered_event_id_f55558 import GetRegisteredEventIDHandler
from .get_registered_events_b_671cec import GetRegisteredEventsByEventTypeHandler
from .get_registered_events_handler import GetRegisteredEventsHandler
from .register_event_handler import RegisterEventHandler
from .unregister_event_id_handler import UnregisterEventIDHandler
from .update_event_registry_handler import UpdateEventRegistryHandler
| 38.347826 | 88 | 0.842404 | 105 | 882 | 6.714286 | 0.704762 | 0.073759 | 0.07234 | 0.065248 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015132 | 0.100907 | 882 | 22 | 89 | 40.090909 | 0.873897 | 0.414966 | 0 | 0 | 1 | 0 | 0.051587 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
704a4567ef4aa02871e79c6f3827a11331f0876b | 1,313 | py | Python | src/elastic/azext_elastic/generated/_client_factory.py | haroonf/azure-cli-extensions | 61c044d34c224372f186934fa7c9313f1cd3a525 | [
"MIT"
] | 207 | 2017-11-29T06:59:41.000Z | 2022-03-31T10:00:53.000Z | src/elastic/azext_elastic/generated/_client_factory.py | haroonf/azure-cli-extensions | 61c044d34c224372f186934fa7c9313f1cd3a525 | [
"MIT"
] | 4,061 | 2017-10-27T23:19:56.000Z | 2022-03-31T23:18:30.000Z | src/elastic/azext_elastic/generated/_client_factory.py | haroonf/azure-cli-extensions | 61c044d34c224372f186934fa7c9313f1cd3a525 | [
"MIT"
] | 802 | 2017-10-11T17:36:26.000Z | 2022-03-31T22:24:32.000Z | # --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
def cf_elastic_cl(cli_ctx, *_):
from azure.cli.core.commands.client_factory import get_mgmt_service_client
from azext_elastic.vendored_sdks.elastic import MicrosoftElastic
return get_mgmt_service_client(cli_ctx,
MicrosoftElastic)
def cf_monitor(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).monitors
def cf_monitored_resource(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).monitored_resources
def cf_deployment_info(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).deployment_info
def cf_tag_rule(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).tag_rules
def cf_vm_host(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).vm_host
def cf_vm_ingestion(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).vm_ingestion
def cf_vm_collection(cli_ctx, *_):
return cf_elastic_cl(cli_ctx).vm_collection
| 29.177778 | 78 | 0.670982 | 174 | 1,313 | 4.672414 | 0.413793 | 0.118081 | 0.108241 | 0.137761 | 0.277983 | 0.257073 | 0.257073 | 0.257073 | 0.114391 | 0 | 0 | 0 | 0.147753 | 1,313 | 44 | 79 | 29.840909 | 0.726542 | 0.334349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.421053 | false | 0 | 0.105263 | 0.368421 | 0.947368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
706a2528f0b32dcb1eea25e23dc9f8521d82c8f6 | 643 | py | Python | NakovBook/ConditionalStatements/Trip.py | LuGeorgiev/PythonSelfLearning | db8fcff2c2df8946d6acf2a2e5677eccf2bbe5dc | [
"MIT"
] | null | null | null | NakovBook/ConditionalStatements/Trip.py | LuGeorgiev/PythonSelfLearning | db8fcff2c2df8946d6acf2a2e5677eccf2bbe5dc | [
"MIT"
] | null | null | null | NakovBook/ConditionalStatements/Trip.py | LuGeorgiev/PythonSelfLearning | db8fcff2c2df8946d6acf2a2e5677eccf2bbe5dc | [
"MIT"
] | null | null | null | budget = float(input())
season = input()
if budget <= 100:
destination = 'Bulgaria'
money_spent = budget * 0.7
info = f'Hotel - {money_spent:.2f}'
if season == 'summer':
money_spent = budget * 0.3
info = f'Camp - {money_spent:.2f}'
elif budget <= 1000:
destination = 'Balkans'
money_spent = budget * 0.8
info = f'Hotel - {money_spent:.2f}'
if season == 'summer':
money_spent = budget * 0.4
info = f'Camp - {money_spent:.2f}'
else:
destination = 'Europe'
money_spent = budget * 0.9
info = f'Hotel - {money_spent:.2f}'
print('Somewhere in ' + destination)
print(info)
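# worked example: budget=450, season='summer'
#   450 <= 1000, so destination='Balkans'; summer => money_spent = 450 * 0.4 = 180.0
#   prints 'Somewhere in Balkans' and then 'Camp - 180.00'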
| 26.791667 | 42 | 0.59409 | 85 | 643 | 4.376471 | 0.352941 | 0.268817 | 0.215054 | 0.228495 | 0.456989 | 0.456989 | 0.284946 | 0.284946 | 0.284946 | 0.284946 | 0 | 0.046025 | 0.25661 | 643 | 23 | 43 | 27.956522 | 0.732218 | 0 | 0 | 0.318182 | 0 | 0 | 0.26283 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.090909 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
707f0d663a3caecf9f5e630b1f04356d33c4fc0e | 31 | py | Python | __init__.py | cclough715/Budget-Calculator | a71eeb71a04ed3003936f9de03f00cfb0289dff4 | [
"BSD-2-Clause"
] | null | null | null | __init__.py | cclough715/Budget-Calculator | a71eeb71a04ed3003936f9de03f00cfb0289dff4 | [
"BSD-2-Clause"
] | 7 | 2019-12-12T04:18:11.000Z | 2021-06-02T00:47:19.000Z | __init__.py | cclough715/Budget-Calculator | a71eeb71a04ed3003936f9de03f00cfb0289dff4 | [
"BSD-2-Clause"
] | null | null | null | __all__ = {'budget_calculator'} | 31 | 31 | 0.774194 | 3 | 31 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 31 | 1 | 31 | 31 | 0.655172 | 0 | 0 | 0 | 0 | 0 | 0.53125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
709c9e7cb9b011564273f2373b8748030f76a174 | 94 | py | Python | examples/blackpink_logo.py | jeremia50/TextProme | 2e4d1ecde2287b7153bb259759e01a1771857e7a | [
"MIT"
] | 2 | 2021-01-07T07:40:42.000Z | 2021-01-07T15:04:36.000Z | examples/blackpink_logo.py | jeremia50/TextProme | 2e4d1ecde2287b7153bb259759e01a1771857e7a | [
"MIT"
] | 1 | 2021-01-08T01:13:11.000Z | 2021-01-08T03:07:10.000Z | examples/blackpink_logo.py | jeremia50/TextProme | 2e4d1ecde2287b7153bb259759e01a1771857e7a | [
"MIT"
] | 1 | 2021-01-21T13:26:06.000Z | 2021-01-21T13:26:06.000Z | from textprome import TextProMe
textpro = TextProMe()
print(textpro.style_blackpink("Minato")) | 31.333333 | 40 | 0.819149 | 11 | 94 | 6.909091 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.074468 | 94 | 3 | 40 | 31.333333 | 0.873563 | 0 | 0 | 0 | 0 | 0 | 0.063158 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
709e9338763ff3f7257569f63f1bd38e20e150cb | 182 | py | Python | python全栈/day20/day20-8 静态方法.py | Ringo-li/python_exercise_100 | 2c6c42b84a88ffbbac30c67ffbd7bad3418eda14 | [
"MIT"
] | null | null | null | python全栈/day20/day20-8 静态方法.py | Ringo-li/python_exercise_100 | 2c6c42b84a88ffbbac30c67ffbd7bad3418eda14 | [
"MIT"
] | null | null | null | python全栈/day20/day20-8 静态方法.py | Ringo-li/python_exercise_100 | 2c6c42b84a88ffbbac30c67ffbd7bad3418eda14 | [
"MIT"
] | null | null | null | # 1. Define a class with a static method
class Dog(object):
@staticmethod
def info_print():
        print('This is a static method')
# 2. Create an instance
wangcai = Dog()
# 3. Call the static method via the instance and via the class
wangcai.info_print()
Dog.info_print() | 15.166667 | 25 | 0.664835 | 24 | 182 | 4.916667 | 0.666667 | 0.228814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02027 | 0.186813 | 182 | 12 | 26 | 15.166667 | 0.777027 | 0.192308 | 0 | 0 | 0 | 0 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0.571429 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
709fe118e3f4b9d0c9bc0aed0c4186cbf2f46192 | 26,357 | py | Python | dataPre_Postprocess/calculate_nearestPts.py | dddtqshmpmz/IHC | c5d8f34a901bc0196666250b86f08a5a1f47ced5 | [
"Apache-2.0"
] | 1 | 2020-09-28T07:16:15.000Z | 2020-09-28T07:16:15.000Z | dataPre_Postprocess/calculate_nearestPts.py | dddtqshmpmz/IHC | c5d8f34a901bc0196666250b86f08a5a1f47ced5 | [
"Apache-2.0"
] | 1 | 2021-03-16T09:45:10.000Z | 2021-03-16T09:45:10.000Z | dataPre_Postprocess/calculate_nearestPts.py | dddtqshmpmz/IHC | c5d8f34a901bc0196666250b86f08a5a1f47ced5 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import cv2
import csv
import os
import pandas as pd
import time
def calcuNearestPtsDis2(ptList1):
''' Find the nearest point of each point in ptList1 & return the mean min_distance
Parameters
----------
ptList1: numpy array
points' array, shape:(x,2)
Return
----------
mean_Dis: float
the mean value of the minimum distances
'''
if len(ptList1)<=1:
print('error!')
return 'error'
minDis_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i,0:2]
ptList2 = np.delete(ptList1,i,axis=0)
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList2)**2, axis=1).astype(np.float32) )
minDis = disMat.min()
minDis_list.append(minDis)
minDisArr = np.array(minDis_list)
mean_Dis = np.mean(minDisArr)
return mean_Dis
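# worked example (a sketch): for the points (0,0), (3,4) and (6,8) every
# point's nearest neighbour is 5.0 away, so
#   calcuNearestPtsDis2(np.array([[0, 0], [3, 4], [6, 8]]))  # -> 5.0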
def calcuNearestPtsDis(ptList1, ptList2):
''' Find the nearest point of each point in ptList1 from ptList2
& return the mean min_distance
Parameters
----------
ptList1: numpy array
points' array, shape:(x,2)
ptList2: numpy array
points' array, shape:(x,2)
Return
----------
mean_Dis: float
the mean value of the minimum distances
'''
if (not len(ptList2)) or (not len(ptList1)):
print('error!')
return 'error'
minDis_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i,0:2]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList2)**2, axis=1).astype(np.float32) )
minDis = disMat.min()
minDis_list.append(minDis)
minDisArr = np.array(minDis_list)
mean_Dis = np.mean(minDisArr)
return mean_Dis
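# e.g. calcuNearestPtsDis(np.array([[0, 0]]), np.array([[3, 4], [9, 9]]))  # -> 5.0
# (the single point in ptList1 is matched to its nearest point (3, 4) in ptList2)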
def calcuNearestPts(csvName1, csvName2):
ptList1_csv = pd.read_csv(csvName1,usecols=['x_cord', 'y_cord'])
ptList2_csv = pd.read_csv(csvName2,usecols=['x_cord', 'y_cord'])
ptList1 = ptList1_csv.values[:,:2]
ptList2 = ptList2_csv.values[:,:2]
minDisInd_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i,0:2]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList2)**2, axis=1))
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
ptList1_csv = pd.concat([ptList1_csv, pd.DataFrame( columns=['nearestInd'],data = minDisInd)], axis=1)
ptList1_csv.to_csv(csvName1,index=False)
return minDisInd
def drawDisPic(picInd):
picName = 'patients_dataset/image/'+ picInd +'.png'
img = cv2.imread(picName)
csvName1='patients_dataset/data_csv/'+picInd+'other_tumour_pts.csv'
csvName2='patients_dataset/data_csv/'+picInd+'other_lymph_pts.csv'
ptList1_csv = pd.read_csv(csvName1)
ptList2_csv = pd.read_csv(csvName2)
ptList1 = ptList1_csv.values
ptList2 = ptList2_csv.values
for i in range(len(ptList1)):
img = cv2.circle(img, tuple(ptList1[i,:2]), 3 , (0, 0, 255), -1 )
img = cv2.line(img, tuple(ptList1[i,:2]) , tuple(ptList2[ ptList1[i,2] ,:2]), (0,255,0), 1)
for i in range(len(ptList2)):
img = cv2.circle(img, tuple(ptList2[i,:2]), 3 , (255, 0, 0), -1 )
cv2.imwrite( picInd+'_dis.png',img)
def drawDistancePic(disName1, disName2, picID):
''' Draw & save the distance pics
Parameters
----------
disName1,disName2: str
such as 'positive_lymph', 'all_tumour'
picID: str
the patient's ID
'''
cellName_color = {'other_lymph': (255, 0, 0), 'positive_lymph': (255, 255, 0),
'other_tumour': (0, 0, 255), 'positive_tumour': (0, 255, 0)}
ptline_color = {'positive_lymph': (0,0,255), 'positive_tumour': (0,0,255),
'ptumour_plymph': (51, 97, 235), 'other_tumour': (0, 255, 0)}
if (disName1 == 'all_tumour' and disName2 == 'all_lymph') or (disName1 == 'all_tumour' and disName2 == 'positive_lymph'):
line_color = (0,255,255)
elif disName1 == 'positive_tumour' and disName2 == 'positive_lymph':
line_color = (51, 97, 235)
else:
line_color = ptline_color[disName1]
csv_dir = '/data/Datasets/MediImgExp/data_csv'
img_dir = '/data/Datasets/MediImgExp/image'
if disName1 == 'all_tumour' and disName2 == 'positive_lymph':
dis1_csv = pd.read_csv(csv_dir + '/' + picID + 'positive_tumour' + '_pts.csv', usecols=['x_cord', 'y_cord'])
dis2_csv = pd.read_csv(csv_dir + '/' + picID + 'other_tumour' + '_pts.csv', usecols=['x_cord', 'y_cord'])
dis3_csv = pd.read_csv(csv_dir + '/' + picID + 'positive_lymph' + '_pts.csv', usecols=['x_cord', 'y_cord'])
ptList1 = dis1_csv.values[:,:2]
ptList2 = dis2_csv.values[:,:2]
ptList3 = dis3_csv.values[:,:2]
# positive tumour: find the nearest lymph cell
minDisInd_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i,:]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList3)**2, axis=1))
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
dis1_csv = pd.concat([dis1_csv, pd.DataFrame(columns=['nearestInd'], data=minDisInd)], axis=1)
# other tumour: find the nearest lymph cell
minDisInd_list = []
for i in range(len(ptList2)):
currentPt = ptList2[i,:]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList3)**2, axis=1))
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
dis2_csv = pd.concat([dis2_csv, pd.DataFrame(columns=['nearestInd'], data=minDisInd)], axis=1)
img = cv2.imread(img_dir + '/' + picID + '.jpg')
ptList1 = dis1_csv.values
for i in range(len(ptList1)):
img = cv2.line(img, tuple(ptList1[i,:2]), tuple(ptList3[ptList1[i, 2],:2]), line_color, 1)
ptList2 = dis2_csv.values
for i in range(len(ptList2)):
img = cv2.line(img, tuple(ptList2[i,:2]), tuple(ptList3[ptList2[i, 2],:2]), line_color, 1)
for i in range(len(ptList1)):
img = cv2.circle(img, tuple(ptList1[i,:2]), 4, (0, 255, 0), -1)
for i in range(len(ptList2)):
img = cv2.circle(img, tuple(ptList2[i,:2]), 4, (0, 0, 255), -1)
for i in range(len(ptList3)):
img = cv2.circle(img, tuple(ptList3[i,:2]), 4, (255, 255, 0), -1)
cv2.imwrite(picID + disName1 + '_' + disName2 + '_dis.png', img)
elif disName1 == 'all_tumour' and disName2 == 'all_lymph':
dis1_csv = pd.read_csv(csv_dir + '/' + picID + 'positive_tumour' + '_pts.csv', usecols=['x_cord', 'y_cord'])
dis2_csv = pd.read_csv(csv_dir + '/' + picID + 'other_tumour' + '_pts.csv', usecols=['x_cord', 'y_cord'])
dis3_csv = pd.read_csv(csv_dir + '/' + picID + 'positive_lymph' + '_pts.csv', usecols=['x_cord', 'y_cord'])
dis4_csv = pd.read_csv(csv_dir + '/' + picID + 'other_lymph' + '_pts.csv', usecols=['x_cord', 'y_cord'])
ptList1 = dis1_csv.values[:,:2]
ptList2 = dis2_csv.values[:,:2]
ptList3 = dis3_csv.values[:,:2]
ptList4 = dis4_csv.values[:,:2]
ptList6 = np.concatenate((ptList3, ptList4), axis=0)
minDisInd_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i,:]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList6)**2, axis=1))
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
dis1_csv = pd.concat([dis1_csv, pd.DataFrame(columns=['nearestInd'], data=minDisInd)], axis=1)
minDisInd_list = []
for i in range(len(ptList2)):
currentPt = ptList2[i,:]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList6)**2, axis=1))
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
dis2_csv = pd.concat([dis2_csv, pd.DataFrame(columns=['nearestInd'], data=minDisInd)], axis=1)
img = cv2.imread(img_dir + '/' + picID + '.jpg')
ptList1 = dis1_csv.values
for i in range(len(ptList1)):
img = cv2.line(img, tuple(ptList1[i,:2]), tuple(ptList6[ptList1[i, 2],:2]), line_color, 1)
ptList2 = dis2_csv.values
for i in range(len(ptList2)):
img = cv2.line(img, tuple(ptList2[i,:2]), tuple(ptList6[ptList2[i, 2],:2]), line_color, 1)
for i in range(len(ptList1)):
img = cv2.circle(img, tuple(ptList1[i,:2]), 4, (0, 255, 0), -1)
for i in range(len(ptList2)):
img = cv2.circle(img, tuple(ptList2[i,:2]), 4, (0, 0, 255), -1)
for i in range(len(ptList3)):
img = cv2.circle(img, tuple(ptList3[i,:2]), 4, (255, 255, 0), -1)
for i in range(len(ptList4)):
img = cv2.circle(img, tuple(ptList4[i,:2]), 4, (255, 0, 0), -1)
cv2.imwrite(picID + disName1 + '_' + disName2 + '_dis.png', img)
elif disName1 != disName2:
dis1_csv = pd.read_csv(csv_dir + '/' + picID + disName1 + '_pts.csv', usecols=['x_cord', 'y_cord'])
dis2_csv = pd.read_csv(csv_dir + '/' + picID + disName2 + '_pts.csv', usecols=['x_cord', 'y_cord'])
ptList1 = dis1_csv.values[:,:2]
ptList2 = dis2_csv.values[:,:2]
minDisInd_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i,:]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList2)**2, axis=1))
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
dis1_csv = pd.concat([dis1_csv, pd.DataFrame( columns=['nearestInd'],data = minDisInd)], axis=1)
img = cv2.imread(img_dir + '/' + picID + '.jpg')
img[:,:, 0] = 255
img[:,:, 1] = 255
img[:,:, 2] = 255
ptList1 = dis1_csv.values
for i in range(len(ptList1)):
img = cv2.line(img, tuple(ptList1[i,:2]) , tuple(ptList2[ ptList1[i,2] ,:2]), line_color, 1)
for i in range(len(ptList1)):
img = cv2.circle(img, tuple(ptList1[i,:2]), 5, cellName_color[disName1], -1)
for i in range(len(ptList2)):
img = cv2.circle(img, tuple(ptList2[i,:2]), 5, cellName_color[disName2], -1)
cv2.imwrite(picID + disName1 + '_' + disName2 + '_dis.png', img)
elif disName1 == disName2:
dis1_csv = pd.read_csv(csv_dir + '/' + picID + disName1 + '_pts.csv', usecols=['x_cord', 'y_cord'])
ptList1 = dis1_csv.values[:,:2]
minDisInd_list = []
for i in range(len(ptList1)):
currentPt = ptList1[i, :2]
disMat = np.sqrt(np.sum(np.asarray(currentPt - ptList1)** 2, axis=1).astype(np.float32))
minDisInd = np.argmin(disMat)
disMat[minDisInd] = 1000.0
minDisInd = np.argmin(disMat)
minDisInd_list.append(minDisInd)
minDisInd = np.array(minDisInd_list).reshape(-1,1)
dis1_csv = pd.concat([dis1_csv, pd.DataFrame( columns=['nearestInd'],data = minDisInd)], axis=1)
img = cv2.imread(img_dir + '/' + picID + '.jpg')
img[:,:, 0] = 255
img[:,:, 1] = 255
img[:,:, 2] = 255
ptList1 = dis1_csv.values
for i in range(len(ptList1)):
img = cv2.line(img, tuple(ptList1[i,:2]), tuple(ptList1[ptList1[i, 2],:2]), line_color, 1)
for i in range(len(ptList1)):
img = cv2.circle(img, tuple(ptList1[i,:2]), 5, cellName_color[disName1], -1)
cv2.imwrite(picID + disName1 + '_dis.png', img)
def getAllPicsDisCSV():
'''
Get all distance data from the saved csv files (get from the above functions)
'''
base_dir = '/data/Datasets/MediImgExp'
f = open( base_dir + '/' + 'AllDisData.csv','w',encoding='utf-8',newline="")
csv_writer = csv.writer(f)
csv_writer.writerow([ 'Ind','PosiTumourRatio','PosiLymphRatio',
'DisTumourLymph','DisPosiTumour','DisPosiLymph',
'DisPosiTumourPosiLymph','DisTumourPosiLymph'])
process_dir = base_dir + '/process'
csv_dir = base_dir + '/data_csv'
pic_name = os.listdir(process_dir)
picIDList = []
for pic_name_ in pic_name:
picIDList.append( pic_name_.split('_')[0] )
for picID in picIDList:
list_data = []
list_data.append(picID)
# PosiTumourRatio
PosiTumourCsv = pd.read_csv( csv_dir+'/'+ picID +'positive_tumour_pts.csv')
OtherTumourCsv = pd.read_csv( csv_dir+'/'+ picID +'other_tumour_pts.csv')
Num_PosiTumour = PosiTumourCsv.shape[0]
Num_OtherTumour = OtherTumourCsv.shape[0]
if (Num_PosiTumour + Num_OtherTumour)!=0 :
PosiTumourRatio = Num_PosiTumour / (Num_PosiTumour + Num_OtherTumour)
else:
PosiTumourRatio = 'error'
list_data.append(PosiTumourRatio)
# PosiLymphRatio
PosiLymphCsv = pd.read_csv( csv_dir+'/'+ picID +'positive_lymph_pts.csv')
OtherLymphCsv = pd.read_csv( csv_dir+'/'+ picID +'other_lymph_pts.csv')
Num_PosiLymph = PosiLymphCsv.shape[0]
Num_OtherLymph = OtherLymphCsv.shape[0]
if (Num_PosiLymph + Num_OtherLymph)!=0 :
PosiLymphRatio = Num_PosiLymph / (Num_PosiLymph + Num_OtherLymph)
else:
PosiLymphRatio = 'error'
list_data.append(PosiLymphRatio)
# DisTumourLymph
ptList1_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_tumour_pts.csv',usecols=['x_cord', 'y_cord'])
ptList2_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_lymph_pts.csv',usecols=['x_cord', 'y_cord'])
ptList1 = ptList1_csv.values[:,:2]
ptList2 = ptList2_csv.values[:,:2]
ptList3_csv = pd.read_csv(csv_dir+'/'+ picID +'other_tumour_pts.csv',usecols=['x_cord', 'y_cord'])
ptList4_csv = pd.read_csv(csv_dir+'/'+ picID +'other_lymph_pts.csv',usecols=['x_cord', 'y_cord'])
ptList3 = ptList3_csv.values[:,:2]
ptList4 = ptList4_csv.values[:,:2]
ptList1 = np.concatenate((ptList1,ptList3), axis=0)
ptList2 = np.concatenate((ptList2,ptList4), axis=0)
DisTumourLymph = calcuNearestPtsDis(ptList1, ptList2)
list_data.append(DisTumourLymph)
# DisPosiTumour
ptList1_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_tumour_pts.csv',usecols=['x_cord', 'y_cord'])
ptList1 = ptList1_csv.values[:,:2]
DisPosiTumour = calcuNearestPtsDis2(ptList1)
list_data.append(DisPosiTumour)
# DisPosiLymph
ptList1_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_lymph_pts.csv',usecols=['x_cord', 'y_cord'])
ptList1 = ptList1_csv.values[:,:2]
DisPosiLymph = calcuNearestPtsDis2(ptList1)
list_data.append(DisPosiLymph)
# DisPosiTumourPosiLymph
ptList1_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_tumour_pts.csv',usecols=['x_cord', 'y_cord'])
ptList2_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_lymph_pts.csv',usecols=['x_cord', 'y_cord'])
ptList1 = ptList1_csv.values[:,:2]
ptList2 = ptList2_csv.values[:,:2]
DisPosiTumourPosiLymph = calcuNearestPtsDis(ptList1, ptList2)
list_data.append(DisPosiTumourPosiLymph)
# DisTumourPosiLymph
ptList1_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_tumour_pts.csv',usecols=['x_cord', 'y_cord'])
ptList2_csv = pd.read_csv(csv_dir+'/'+ picID +'positive_lymph_pts.csv',usecols=['x_cord', 'y_cord'])
ptList1 = ptList1_csv.values[:,:2]
ptList2 = ptList2_csv.values[:,:2]
ptList3_csv = pd.read_csv(csv_dir+'/'+ picID +'other_tumour_pts.csv',usecols=['x_cord', 'y_cord'])
ptList3 = ptList3_csv.values[:,:2]
ptList1 = np.concatenate((ptList1,ptList3), axis=0)
DisTumourPosiLymph = calcuNearestPtsDis(ptList1, ptList2)
list_data.append(DisTumourPosiLymph)
        csv_writer.writerow(list_data)
    f.close()
def adjustToMultiCSV():
'''
Divide the AllDisData.csv into 6+1=7 csv
'''
base_dir = '/data/Datasets/MediImgExp'
alldata = pd.read_csv( base_dir + '/' + 'AllDisData.csv' )
IndData = alldata['Ind'].values
patient_Ind = []
for IndName in IndData:
patient_Ind.append(IndName.split('-')[0])
patient_Ind = np.unique(patient_Ind)
patient_Ind = sorted( list(map(int,patient_Ind)) )
column_name = ['Ind','2D','3D','GAL9','LAG3','MHC','OX40','OX40L','PD1','PDL1','TIM3']
# stage 1 calculate the 6 csv (10 cols for each csv)
DisPosiTumour = pd.DataFrame(columns=column_name,index= patient_Ind)
DisPosiTumour['Ind'] = patient_Ind
patient_Id = patient_Ind
column_names = ['2D','3D','GAL9','LAG3','MHC','OX40','OX40L','PD1','PDL1','TIM3']
for patient in patient_Id:
for column in column_names:
combine_name = str(patient) + '-' + column
exist_flag = (alldata['Ind'].str[0:len(combine_name)]== combine_name).any()
if not exist_flag:
continue
valid_slice = alldata[ alldata['Ind'].str[0:len(combine_name)]== combine_name ]
arr = valid_slice['DisTumourPosiLymph'].values
if arr.__contains__('error'):
arr = np.setdiff1d(arr, ['error'])
if not arr.shape[0]:
continue
valid_slice_mean = np.mean( arr.astype(np.float32))
DisPosiTumour.loc[ patient ,column ] = valid_slice_mean
DisPosiTumour.to_csv( base_dir + '/' + 'DisTumourPosiLymph.csv',index=False )
# stage 2 add the outputs (4 cols)
all_data_name = base_dir + '/' + 'alldata2.csv'
all_data = pd.read_csv(all_data_name)
all_data.index = all_data['Ind']
valid_columns = ['RELAPSE','RFS','DEATH','OS']
valid_slice = all_data.loc[ patient_Ind, valid_columns ]
DisPosiTumour = pd.read_csv( base_dir + '/' + 'PosiTumourRatio.csv',index_col=0)
DisPosiTumour = pd.concat([DisPosiTumour,valid_slice],axis = 1)
DisPosiTumour.to_csv( base_dir + '/' + 'PosiTumourRatio.csv' )
# stage 3 calculate DisTumourLymph (use all markers' mean values)
DisTumourLymph = pd.DataFrame(columns=['mean_10markers'],index= patient_Ind)
patient_Id = patient_Ind
column_names = [ 'mean_10markers']
for patient in patient_Id:
for column in column_names:
combine_name = str(patient) + '-'
exist_flag = (alldata['Ind'].str[0:len(combine_name)]== combine_name).any()
if not exist_flag:
continue
valid_slice = alldata[ alldata['Ind'].str[0:len(combine_name)]== combine_name ]
arr = valid_slice['DisTumourLymph'].values
if arr.__contains__('error'):
arr = np.setdiff1d(arr, ['error'])
if not arr.shape[0]:
continue
valid_slice_mean = np.mean( arr.astype(np.float32))
DisTumourLymph.loc[ patient ,column ] = valid_slice_mean
DisTumourLymph.to_csv( base_dir + '/' + 'DisTumourLymph.csv' )
all_data_name = base_dir + '/' + 'alldata2.csv'
all_data = pd.read_csv(all_data_name)
all_data.index = all_data['Ind']
valid_columns = ['RELAPSE','RFS','DEATH','OS']
valid_slice = all_data.loc[ patient_Ind, valid_columns]
DisTumourLymph = pd.concat([DisTumourLymph,valid_slice],axis = 1)
DisTumourLymph.to_csv( base_dir + '/' + 'DisTumourLymph.csv')
def getAllFeatureCSV():
base_dir = '/data/Datasets/MediImgExp/csv'
alldata = pd.read_csv( base_dir + '/' + 'AllDisData.csv' )
oridata = pd.read_csv( base_dir + '/' + 'alldata2.csv',index_col=0 )
ori_columns = oridata.columns.values
ori_columns = ori_columns[4:-4] # original 40 feature names
csv_name = [ 'DisPosiLymph','DisPosiTumour', 'DisPosiTumourPosiLymph',
'DisTumourPosiLymph','PosiLymphRatio','PosiTumourRatio', 'DisTumourLymph']
meaningful_feature_OS = { 'DisPosiLymph_PD1':24,'DisPosiLymph_OX40L':48.14,'DisPosiLymph_OX40':98.33,'DisPosiLymph_3D':13.44,
'DisPosiTumour_TIM3':546.86, 'DisPosiTumour_GAL9':85.97,'DisPosiTumour_2D':24.22,
'DisPosiTumourPosiLymph_3D':20.88, 'DisPosiTumourPosiLymph_MHC':18.68,'DisPosiTumourPosiLymph_OX40L':40.02,
'DisTumourPosiLymph_OX40L':173.56, 'DisTumourPosiLymph_2D':223.71,'DisTumourPosiLymph_3D':21.19,'DisTumourPosiLymph_OX40':445.89,
'PosiLymphRatio_3D':0.97, 'PosiLymphRatio_OX40':0.44,'PosiLymphRatio_GAL9':0.23,'PosiLymphRatio_2D':0.37,
'PosiTumourRatio_MHC':0.93,'PosiTumourRatio_GAL9':0.41,'PosiTumourRatio_3D':1.0 }
IndData = alldata['Ind'].values
patient_Ind = []
for IndName in IndData:
patient_Ind.append(IndName.split('-')[0])
patient_Ind = np.unique(patient_Ind)
patient_Ind = sorted( list(map(int,patient_Ind)) )
# create a super_csv including all features 61+40=101 features / 4 outputs
super_csv = pd.DataFrame(index= patient_Ind)
super_csv_copy = pd.DataFrame(index= patient_Ind)
super_csv['Ind'] = patient_Ind
for column in ori_columns: # 40 features
super_csv[column] = oridata.loc[ patient_Ind,column]
for i in range(0,7):
csvName = os.path.join(base_dir,csv_name[i] + '.csv')
csvData = pd.read_csv(csvName,index_col=0)
if i == 6: # 1 feature
column_name = csvData.columns.values[:1]
super_csv[ csv_name[i]+'_'+column_name[0] ] = csvData.loc[ patient_Ind, column_name[0] ]
output_name = csvData.columns.values[-4:]
for k in range(0,4):
super_csv[ output_name[k] ] = csvData.loc[ patient_Ind,output_name[k] ]
break
column_name = csvData.columns.values[:10]
for j in range(0,10): # 6*10 = 60 features
super_csv[ csv_name[i]+'_'+column_name[j] ] = csvData.loc[ patient_Ind, column_name[j] ]
for key, values in meaningful_feature_OS.items():
super_csv_copy[key] = super_csv.loc[patient_Ind, key ]
super_csv_copy.loc[ super_csv_copy[key] <= values, key] = 0
super_csv_copy.loc[ super_csv_copy[key] > values, key ] = 1
for k in range(0,4):
super_csv_copy[ output_name[k] ] = csvData.loc[ patient_Ind,output_name[k] ]
super_csv_copy = super_csv_copy.dropna(axis=0,how='any')
#super_csv.to_csv(base_dir+'/'+ 'super1.csv',index =False)
#super_csv.to_csv(base_dir+'/'+ 'super2.csv',index =False)
#super_csv_copy.to_csv(base_dir+'/'+ 'super3.csv',index =True)
#super_csv_copy.to_csv(base_dir+'/'+ 'super4.csv',index =True)
def getAllFeatureCSV2():
base_dir = '/data/Datasets/MediImgExp/csv'
alldata = pd.read_csv( base_dir + '/' + 'AllDisData.csv' )
oridata = pd.read_csv( base_dir + '/' + 'alldata2.csv',index_col=0 )
ori_columns = oridata.columns.values
ori_columns = np.concatenate(( ori_columns[19:32],ori_columns[44:48] ) ,axis=0 ) # original 40 feature names
csv_name = [ 'DisPosiLymph','DisPosiTumour', 'DisPosiTumourPosiLymph',
'DisTumourPosiLymph','PosiLymphRatio','PosiTumourRatio', 'DisTumourLymph']
meaningful_feature_OS = { 'DisPosiLymph_OX40L':61.55,
'DisPosiTumour_GAL9':119.40,
'DisPosiTumourPosiLymph_3D':20.28, 'DisPosiTumourPosiLymph_OX40L':40.02,
'DisTumourPosiLymph_3D':21.19, 'DisTumourPosiLymph_MHC':135.56,
'PosiLymphRatio_3D':0.97, 'PosiLymphRatio_OX40':0.52,'PosiLymphRatio_GAL9':0.2 }
IndData = alldata['Ind'].values
patient_Ind = []
for IndName in IndData:
patient_Ind.append(IndName.split('-')[0])
patient_Ind = np.unique(patient_Ind)
patient_Ind = sorted( list(map(int,patient_Ind)) )
# create a super_csv including all features 61+xx features / 4 outputs
super_csv = pd.DataFrame(index= patient_Ind)
super_csv_copy = pd.DataFrame(index= patient_Ind)
super_csv['Ind'] = patient_Ind
for column in ori_columns: # 40 features
super_csv[column] = oridata.loc[ patient_Ind,column]
for i in range(0,7):
csvName = os.path.join(base_dir,csv_name[i] + '.csv')
csvData = pd.read_csv(csvName,index_col=0)
if i == 6: # 1 feature
column_name = csvData.columns.values[:1]
super_csv[ csv_name[i]+'_'+column_name[0] ] = csvData.loc[ patient_Ind, column_name[0] ]
output_name = csvData.columns.values[-4:]
for k in range(0,4):
super_csv[ output_name[k] ] = csvData.loc[ patient_Ind,output_name[k] ]
break
column_name = csvData.columns.values[:10]
for j in range(0,10): # 6*10 = 60 features
super_csv[ csv_name[i]+'_'+column_name[j] ] = csvData.loc[ patient_Ind, column_name[j] ]
for key, values in meaningful_feature_OS.items():
super_csv_copy[key] = super_csv.loc[patient_Ind, key ]
super_csv_copy.loc[ super_csv_copy[key] <= values, key] = 0
super_csv_copy.loc[ super_csv_copy[key] > values, key ] = 1
for k in range(0,4):
super_csv_copy[ output_name[k] ] = csvData.loc[ patient_Ind,output_name[k] ]
#super_csv_copy = super_csv_copy.dropna(axis=0,how='any')
#super_csv.to_csv(base_dir+'/rfs_csv'+'/'+ 'super1.csv',index =False)
#super_csv.to_csv(base_dir+'/'+ 'super2.csv',index =False)
super_csv_copy.to_csv(base_dir+'/rfs_csv'+'/'+ 'super3.csv',index =True)
#super_csv_copy.to_csv(base_dir+'/'+ 'super4.csv',index =True)
if __name__ == '__main__':
getAllPicsDisCSV()
adjustToMultiCSV()
getAllFeatureCSV2()
# draw the distance pics
drawDistancePic(disName1='all_tumour', disName2='positive_lymph', picID='0-GAL9-1')
| 46.485009 | 138 | 0.609933 | 3,365 | 26,357 | 4.576226 | 0.089153 | 0.027924 | 0.022794 | 0.020716 | 0.770245 | 0.740308 | 0.711085 | 0.688616 | 0.680174 | 0.669134 | 0 | 0.044614 | 0.243123 | 26,357 | 567 | 139 | 46.485009 | 0.727305 | 0.073036 | 0 | 0.652681 | 0 | 0 | 0.124319 | 0.03485 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020979 | false | 0 | 0.013986 | 0 | 0.04662 | 0.004662 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
70a6436bede7b65ea73d086ce97303034229e00b | 261 | py | Python | zhangzhen/20180402/text6.py | python20180319howmework/homework | c826d7aa4c52f8d22f739feb134d20f0b2c217cd | [
"Apache-2.0"
] | null | null | null | zhangzhen/20180402/text6.py | python20180319howmework/homework | c826d7aa4c52f8d22f739feb134d20f0b2c217cd | [
"Apache-2.0"
] | null | null | null | zhangzhen/20180402/text6.py | python20180319howmework/homework | c826d7aa4c52f8d22f739feb134d20f0b2c217cd | [
"Apache-2.0"
] | null | null | null | '''
6. 有这样一个字典d = {"chaoqian":87, “caoxu”:90, “caohuan”:98, “wuhan”:82, “zhijia”:89}
1)将以上字典按成绩排名
'''
d = {"chaoqian":87, "caoxu":90, "caohuan":98,"wuhan":82,"zhijia":89}
print(sorted(d.items(),key = lambda x :x[1],reverse =True))
| 7.909091 | 80 | 0.547893 | 36 | 261 | 3.972222 | 0.638889 | 0.13986 | 0.20979 | 0.237762 | 0.573427 | 0.573427 | 0.573427 | 0.573427 | 0.573427 | 0.573427 | 0 | 0.110048 | 0.199234 | 261 | 32 | 81 | 8.15625 | 0.574163 | 0.367816 | 0 | 0 | 0 | 0 | 0.238462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
5626dcdb8caf54becd357267cec05d7721358da0 | 44 | py | Python | version.py | AlanAyy/unify-bot | b28c02a0ca7921ef7bc80a5ac86ec1981230177c | [
"MIT"
] | null | null | null | version.py | AlanAyy/unify-bot | b28c02a0ca7921ef7bc80a5ac86ec1981230177c | [
"MIT"
] | null | null | null | version.py | AlanAyy/unify-bot | b28c02a0ca7921ef7bc80a5ac86ec1981230177c | [
"MIT"
] | null | null | null | version = 0.1
# View README for v0.1 details | 22 | 30 | 0.727273 | 9 | 44 | 3.555556 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.181818 | 44 | 2 | 30 | 22 | 0.777778 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
563b3c79a91f2f21a08bf694d976c8064abbbbcb | 237 | py | Python | cryptocurrency/celery.py | deepanshu-jain1999/cryptocurrencytracking | 1feb8f14e7615406b0658138d23314188f8f0e8b | [
"Apache-2.0"
] | null | null | null | cryptocurrency/celery.py | deepanshu-jain1999/cryptocurrencytracking | 1feb8f14e7615406b0658138d23314188f8f0e8b | [
"Apache-2.0"
] | null | null | null | cryptocurrency/celery.py | deepanshu-jain1999/cryptocurrencytracking | 1feb8f14e7615406b0658138d23314188f8f0e8b | [
"Apache-2.0"
] | null | null | null | import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'cryptocurrency.settings')
app = Celery('cryptocurrency')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
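# a worker for this app is typically started with the standard Celery CLI,
# e.g. (illustrative, assuming the usual Django project layout):
#   celery -A cryptocurrency worker --loglevel=info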
| 23.7 | 74 | 0.805907 | 29 | 237 | 6.413793 | 0.586207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07173 | 237 | 9 | 75 | 26.333333 | 0.845455 | 0 | 0 | 0 | 0 | 0 | 0.35865 | 0.189873 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
5654560040bcea100546e70954c0b6426dc1457f | 16,931 | py | Python | insurance-app.py | alliwene/gb-november-grp2-health-insurance | 9f2699f9a7a533b33ea431e9bb7cb95c25654599 | [
"MIT"
] | null | null | null | insurance-app.py | alliwene/gb-november-grp2-health-insurance | 9f2699f9a7a533b33ea431e9bb7cb95c25654599 | [
"MIT"
] | null | null | null | insurance-app.py | alliwene/gb-november-grp2-health-insurance | 9f2699f9a7a533b33ea431e9bb7cb95c25654599 | [
"MIT"
] | null | null | null | # import libraries
import base64
import os
import uuid
import re
import streamlit as st
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from PIL import Image
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from matplotlib.backends.backend_agg import RendererAgg
_lock = RendererAgg.lock
plt.style.use('seaborn-notebook')
sns.set(context='paper', font='monospace', font_scale=3)
def main():
page = st.sidebar.selectbox('Choose a page',['About App','Prediction and Evaluation'])
if page == 'About App':
st.title('Analysis and Prediction of Health Insurance Subscription in Nigeria')
image = Image.open('images/GB.png')
st.image(image)
st.markdown("""
        This app predicts whether an individual would take up a health insurance policy, leveraging
        a machine learning classification model. We also use the trained model to investigate the factors
        that most strongly influence an individual's decision to take up health insurance.
        Data obtained from the Individual Recode section of the 2018 Nigeria Demographic and
        Health Survey [DHS](https://dhsprogram.com/data/dataset/Nigeria_Standard-DHS_2018.cfm).
""")
st.markdown("## Meet the Data Scientists")
        # the GitHub icon is the same base64-encoded PNG for every profile, so it
        # is defined once here and reused below instead of being repeated inline
        github_icon = "<img height='20' src='data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAA7AAAAOwBeShxvQAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAAMdSURBVFiFxZfLb9RWFMZ/99oeG3uYyZuUlD7UBgkCFCEkqKoiUKUuaOgCNhXTXdUisULdsOMPQAJWgAQSbBqWrEAtWzY8Fm0kQkBKKzXNJEI8JqOJx7FnPL4sZnAChdhRQuZbXY/POd835557jq+ghQmlTCuoH0NwFMU2wGF1UUUwhmLEN41Lg0IEAAJgyvMGkPpN4ItVJn0XRonC4U22PS1a//zeGpLHInzT2CutoH6sDeQAO62g/rNEUWgDeROCgpjya3NAtk0SXNlGcoCsbCM5APpyHRoKHlZ9/pzzeRqEAGwwdXatt9jqWGhiefHElF9TaY3vVzwuFmf5z6+/9f3HlsHxD7vYnVu3+gJGnpS5OlMmyVgCP23s5If+fCoBqWrg9xcuV2bKdBsaX3XY6OL/eTak4OsOm7yucXlmllslN5WAxBoohw0uFEsAfJm3OfFRN9NByD9ejX5TRwAzQchmO8MHps7pf59zq+RyoVhib34deU1bmYAbz+bwGhEAQdTcgAFTZ8BccB20M/E6UE1bN4y48WyOQn/HkvETt+BOZT5eH+pdn2TO9725Bd/y/BKWKQVMt46arUm2OGZiwG2ORUaK13xXJMCPmilVtGZ3AoRo9goAv7UdKxLQrTeLaL4RMeHVEgM+qgY0VFNBj7F0AaYSsCNrxevzxVJckG+DG0bxiQHY7ljvtE0t4GBPc1Yd7svRUIofx6a5WJxlclE3/Nurcb5YojBe5HE1iH//LkXRJgrYnrXY12nzxwuXE5u6+czOcK/ikdUWXE0puP60ghsuZOdAl8NQiqJN1YqrjYhfJ57gSMnJT3rpMSTaom4YRIqDo5Px82bH5MznG7C15EabqhU7muTcYD8dhuTo2BTf/jUZV/qb+KbL4WxKcljGOLY1yalP+3jQ63O77CEXnUldCI705djf6bA1RdoXY1nj+H2g7V9EEkg3N98PKhLFeNvoBeMSwW9tE6AYeXU1uwvsXGP60Ypp7JGDQgRE4TAwupbkROHwkBC1+DQ/VCqTC+q/ICi0ruerfWFxETxAca1iGpeGhKgBvARKYRTDyS8igAAAAABJRU5ErkJggg=='>"
        col1,mid,col2 = st.beta_columns(3)
        with col1:
            st.image('images/ope.jpg',width=300)
            html = f"Opeyemi Idris <a href='https://github.com/hardcore05' alt='GitHub'>{github_icon}</a>"
            st.markdown(html, unsafe_allow_html=True)
        with col2:
            st.image('images/shakir.jpg',width=300)
            html = f"Shakiru Muraina <a href='https://github.com/Debare' alt='GitHub'>{github_icon}</a>"
            st.markdown(html, unsafe_allow_html=True)
        col1,mid,col2 = st.beta_columns(3)
        with col1:
            st.image('images/bolu.jpg',width=300)
            html = f"Boluwatife Adewale <a href='https://github.com/BBLinus' alt='GitHub'>{github_icon}</a>"
            st.markdown(html, unsafe_allow_html=True)
        with col2:
            st.image('images/uthman.jpg',width=300)
            html = f"Uthman Allison <a href='https://github.com/alliwene' alt='GitHub'>{github_icon}</a>"
            st.markdown(html, unsafe_allow_html=True)
    if page == 'Prediction and Evaluation':
        st.title('Predicting Health Insurance Subscription')
        st.sidebar.header('User Input Features')
        # st.sidebar.markdown("""
        # [Example CSV input file](https://raw.githubusercontent.com/alliwene/gb-november-grp2-health-insurance/main/data/data_sample.csv)
        # """)

        def download_button(object_to_download, download_filename, button_text):
            """
            Generates a link to download the given object_to_download.

            Params:
            ------
            object_to_download: The object to be downloaded.
            download_filename (str): filename and extension of the file, e.g. mydata.csv
            button_text (str): text to display on the download button (e.g. 'click here to download file')

            Returns:
            -------
            (str): the anchor tag to download object_to_download
            """
            try:
                # str -> bytes conversion is necessary here
                b64 = base64.b64encode(object_to_download.encode()).decode()
            except AttributeError:
                # object_to_download is already bytes
                b64 = base64.b64encode(object_to_download).decode()
            button_uuid = str(uuid.uuid4()).replace('-', '')
            button_id = re.sub(r'\d+', '', button_uuid)  # raw string avoids the invalid escape '\d'
            custom_css = f"""
            <style>
                #{button_id} {{
                    background-color: rgb(255, 255, 255);
                    color: rgb(38, 39, 48);
                    padding: 0.25em 0.38em;
                    position: relative;
                    text-decoration: none;
                    border-radius: 4px;
                    border-width: 1px;
                    border-style: solid;
                    border-color: rgb(230, 234, 241);
                    border-image: initial;
                }}
                #{button_id}:hover {{
                    border-color: rgb(246, 51, 102);
                    color: rgb(246, 51, 102);
                }}
                #{button_id}:active {{
                    box-shadow: none;
                    background-color: rgb(246, 51, 102);
                    color: white;
                }}
            </style> """
            dl_link = custom_css + f'<a download="{download_filename}" id="{button_id}" href="data:file/txt;base64,{b64}">{button_text}</a><br></br>'
            return dl_link
        def file_selector(folder_path='data'):
            filenames = os.listdir(folder_path)  # kept from the original, though only the fixed sample file is used
            selected_filename = 'data_sample.csv'
            return os.path.join(folder_path, selected_filename)

        filename = file_selector()

        # Load the selected file
        with open(filename, 'rb') as f:
            s = f.read()
        download_button_str = download_button(s, filename, 'Download sample input CSV file')
        st.sidebar.markdown(download_button_str, unsafe_allow_html=True)

        # Load the cleaned dataset
        insurance_clean = pd.read_csv('data/data_clean.csv')
        insurance = insurance_clean.drop(columns=['target'])

        # Collect the user input features into a dataframe
        uploaded_file = st.sidebar.file_uploader("Upload your input CSV file", type=["csv"])
        if uploaded_file is not None:
            input_df = pd.read_csv(uploaded_file)
            internet = st.sidebar.selectbox('Use of internet',
                                            ('Yes, last 12 months', 'Never', 'Yes, before last 12 months'))
            bank_account = st.sidebar.selectbox('Account in bank', ('Yes', 'No'))
            attainment = st.sidebar.selectbox("Husband/partner's educational attainment",
                                              ('Complete secondary', 'No education', 'Higher', 'Complete primary',
                                               'Incomplete secondary', "Don't know", 'Incomplete primary'))
            internet_freq = st.sidebar.selectbox('Internet use frequency',
                                                 ('At least once a week', 'Almost every day', 'Not at all',
                                                  'Less than once a week'))
            literacy = st.sidebar.selectbox('Literacy',
                                            ('Able to read whole sentence', 'Cannot read at all',
                                             'Able to read only parts of sentence', 'Blind/visually impaired',
                                             'No card with required language'))
            wealth_index = st.sidebar.slider('Wealth index',
                                             insurance['Wealth index factor score for urban/rural (5 decimals)'].min(),
                                             insurance['Wealth index factor score for urban/rural (5 decimals)'].max(),
                                             float(input_df['Wealth index factor score for urban/rural (5 decimals)'][0]))
            toilet = st.sidebar.selectbox('Type of toilet facility',
                                          ('Flush to piped sewer system', 'Flush to septic tank',
                                           'Flush to pit latrine', 'Pit latrine with slab',
                                           'Not a dejure resident', 'Pit latrine without slab/open pit',
                                           'No facility/bush/field', 'Ventilated Improved Pit latrine (VIP)',
                                           'Flush to somewhere else', 'Bucket toilet', 'Other',
                                           'Composting toilet', "Flush, don't know where",
                                           'Hanging toilet/latrine'))
            medical_help = st.sidebar.selectbox('Getting money needed for treatment', ('Not a big problem', 'Big problem'))
            residence = st.sidebar.selectbox('Type of place of residence', ('Urban', 'Rural'))
            medical_visit = st.sidebar.selectbox('Visited health facility last 12 months', ('Yes', 'No'))
            tv_watch = st.sidebar.selectbox('Frequency of watching television',
                                            ('At least once a week', 'Not at all', 'Less than once a week'))
            edu_year = st.sidebar.selectbox('Highest year of education',
                                            ('3.0', '4.0', '2.0', '6.0', '1.0',
                                             'No years completed at level V106', '5.0', '8.0', '7.0'))
            data = {'Use of internet': internet,
                    'Has an account in a bank or other financial institution': bank_account,
                    "Husband/partner's educational attainment": attainment,
                    'Frequency of using internet last month': internet_freq,
                    'Literacy': literacy,
                    'Wealth index factor score for urban/rural (5 decimals)': wealth_index,
                    'Type of toilet facility': toilet,
                    'Getting medical help for self: getting money needed for treatment': medical_help,
                    'Type of place of residence': residence,
                    'Visited health facility last 12 months': medical_visit,
                    'Frequency of watching television': tv_watch,
                    'Highest year of education': edu_year,
                    }
            # Replace the corresponding values in input_df using data
            for key, value in data.items():
                input_df[key] = value
            # Combine the user input features with the cleaned dataset;
            # this is needed for the encoding phase
            df = pd.concat([input_df, insurance], axis=0, ignore_index=True)
            @st.cache()
            def one_hot_encode(df):
                """One-hot encode the categorical features of df."""
                # get categorical features of df
                cat_feat = df.select_dtypes(exclude=np.number).columns
                for feat in cat_feat:
                    dummy = pd.get_dummies(df[feat], prefix=feat)
                    df = pd.concat([df, dummy], axis=1)
                    del df[feat]
                input_df = df[:1]  # select only the first row (the user input data)
                # remove duplicate columns
                input_df = input_df.loc[:, ~input_df.columns.duplicated()]
                return input_df
            input_df = one_hot_encode(df)

            st.subheader('User Input features')
            st.write(input_df)

            # Read in the saved classification model
            with open('model/insurance_rf.pkl', 'rb') as f:
                load_clf = pickle.load(f)

            # Apply the model to make predictions
            prediction = load_clf.predict(input_df)
            prediction_proba = load_clf.predict_proba(input_df)

            st.subheader('Prediction')
            output = np.array(['No', 'Yes'])
            st.write(output[prediction])

            st.subheader('Prediction Probability')
            st.write(prediction_proba)

            # Make the feature importance plot
            st.subheader('Feature Importance Plot')
            st.markdown('''
            The top $30$ factors that most likely influence an individual taking up
            a health insurance policy are plotted. Values of some of these factors can be
            moved around to investigate their effect on our prediction.
            ''')
            feat_imp = pd.DataFrame(sorted(zip(load_clf.feature_importances_, input_df.columns)),
                                    columns=['Value', 'Feature'])
            imp_data = feat_imp.sort_values(by="Value", ascending=False)
            with _lock:
                fig = plt.figure(figsize=(20, 15))
                sns.barplot(x="Value", y="Feature", data=imp_data.iloc[:30])
                plt.ylabel('Feature Importance Score')
                st.pyplot(fig)
        # Display a placeholder until a CSV file is uploaded
        if uploaded_file is not None:
            st.write(' ')
        else:
            st.subheader('User Input features')
            st.write('Awaiting CSV file to be uploaded...')


if __name__ == "__main__":
    main()
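
# Presumably launched with the standard Streamlit CLI (the script's filename
# is an assumption here):
#   streamlit run app.py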
| 64.132576 | 1,381 | 0.692871 | 1,540 | 16,931 | 7.531169 | 0.324675 | 0.013192 | 0.018624 | 0.008191 | 0.547508 | 0.510778 | 0.485601 | 0.473013 | 0.468702 | 0.461114 | 0 | 0.06531 | 0.222255 | 16,931 | 263 | 1,382 | 64.376426 | 0.815462 | 0.063552 | 0 | 0.092391 | 0 | 0.032609 | 0.612045 | 0.331296 | 0 | 1 | 0 | 0 | 0 | 1 | 0.021739 | false | 0 | 0.086957 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5672ebf88e6a636aad5b677603743bec5d2a56a3 | 361 | py | Python | hello.py | whiterabbitsource/pyhello | 7268780aab969d43aadf166ce199e454230bed73 | [
"MIT"
] | null | null | null | hello.py | whiterabbitsource/pyhello | 7268780aab969d43aadf166ce199e454230bed73 | [
"MIT"
] | null | null | null | hello.py | whiterabbitsource/pyhello | 7268780aab969d43aadf166ce199e454230bed73 | [
"MIT"
] | null | null | null | # Hello! World!
print("Hello, World!")
# Learning Strings
my_string = "This is a string"
## Make string uppercase
my_string_upper = my_string.upper()
print(my_string_upper)
# Determine data type of string
print(type(my_string))
# Slicing strings [Python is zero-based: indexing starts at 0, not 1]
print(my_string[0:4])
print(my_string[:1])
print(my_string[0:14])
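
# Two extra illustrative slices (an aside beyond the original exercise):
# negative indices count from the end, and a step of 2 takes every other character
print(my_string[-6:])
print(my_string[::2])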
| 25.785714 | 66 | 0.750693 | 62 | 361 | 4.209677 | 0.467742 | 0.245211 | 0.199234 | 0.10728 | 0.114943 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025316 | 0.124654 | 361 | 13 | 67 | 27.769231 | 0.800633 | 0.407202 | 0 | 0 | 0 | 0 | 0.140097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.75 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
567532ef07dca51337f49b0b35ebbb473c9ee7c1 | 6,456 | py | Python | crescent/policy/actions/kinesis.py | mpolatcan/zepyhrus | 2fd0b1b9b21613b5876a51fe8b5f9e3afbec1b67 | [
"Apache-2.0"
] | 1 | 2020-03-26T19:20:03.000Z | 2020-03-26T19:20:03.000Z | crescent/policy/actions/kinesis.py | mpolatcan/zepyhrus | 2fd0b1b9b21613b5876a51fe8b5f9e3afbec1b67 | [
"Apache-2.0"
] | null | null | null | crescent/policy/actions/kinesis.py | mpolatcan/zepyhrus | 2fd0b1b9b21613b5876a51fe8b5f9e3afbec1b67 | [
"Apache-2.0"
] | null | null | null | from crescent.resources.kinesis import StreamArn, StreamConsumerArn
from .action import Action, AccessLevelAllActions
from typing import Union
class KinesisAction(Action):
    __SERVICE_KINESIS = "kinesis"

    def __init__(self, action_name, **definable_resources):
        super(KinesisAction, self).__init__(self.__SERVICE_KINESIS, action_name, **definable_resources)

    def Stream(self, stream: Union[str, StreamArn]):
        return self._set_resource(self.Stream.__name__, stream)

    def Consumer(self, consumer: Union[str, StreamConsumerArn]):
        return self._set_resource(self.Consumer.__name__, consumer)

# -----------------------------------------------


class KinesisAccessLevelAllActions(AccessLevelAllActions):
    def __init__(self, access_level):
        super(KinesisAccessLevelAllActions, self).__init__(access_level)

    def Stream(self, stream: Union[str, StreamArn]):
        return self._set_all_actions_resources(self.Stream.__name__, stream)

    def Consumer(self, consumer: Union[str, StreamConsumerArn]):
        return self._set_all_actions_resources(self.Consumer.__name__, consumer)

# -----------------------------------------------


class Actions:
    class Tagging:
        @staticmethod
        def AddTagsToStream(): return KinesisAction(Actions.Tagging.AddTagsToStream.__name__,
                                                    required=[KinesisAction.Stream.__name__])

        @staticmethod
        def RemoveTagsFromStream(): return KinesisAction(Actions.Tagging.RemoveTagsFromStream.__name__,
                                                         required=[KinesisAction.Stream.__name__])

    class Write:
        @staticmethod
        def CreateStream(): return KinesisAction(Actions.Write.CreateStream.__name__,
                                                 required=[KinesisAction.Stream.__name__])

        @staticmethod
        def DecreaseStreamRetentionPeriod(): return KinesisAction(Actions.Write.DecreaseStreamRetentionPeriod.__name__,
                                                                  required=[KinesisAction.Stream.__name__])

        @staticmethod
        def DeleteStream(): return KinesisAction(Actions.Write.DeleteStream.__name__,
                                                 required=[KinesisAction.Stream.__name__])

        @staticmethod
        def DeregisterStreamConsumer(): return KinesisAction(Actions.Write.DeregisterStreamConsumer.__name__,
                                                             required=[KinesisAction.Consumer.__name__])

        @staticmethod
        def DisableEnhancedMonitoring(): return KinesisAction(Actions.Write.DisableEnhancedMonitoring.__name__)

        @staticmethod
        def EnableEnhancedMonitoring(): return KinesisAction(Actions.Write.EnableEnhancedMonitoring.__name__)

        @staticmethod
        def IncreaseStreamRetentionPeriod(): return KinesisAction(Actions.Write.IncreaseStreamRetentionPeriod.__name__,
                                                                  required=[KinesisAction.Stream.__name__])

        @staticmethod
        def MergeShards(): return KinesisAction(Actions.Write.MergeShards.__name__,
                                                required=[KinesisAction.Stream.__name__])

        @staticmethod
        def PutRecord(): return KinesisAction(Actions.Write.PutRecord.__name__,
                                              required=[KinesisAction.Stream.__name__])

        @staticmethod
        def PutRecords(): return KinesisAction(Actions.Write.PutRecords.__name__,
                                               required=[KinesisAction.Stream.__name__])

        @staticmethod
        def RegisterStreamConsumer(): return KinesisAction(Actions.Write.RegisterStreamConsumer.__name__,
                                                           required=[KinesisAction.Consumer.__name__])

        @staticmethod
        def SplitShard(): return KinesisAction(Actions.Write.SplitShard.__name__,
                                               required=[KinesisAction.Stream.__name__])

        @staticmethod
        def UpdateShardCount(): return KinesisAction(Actions.Write.UpdateShardCount.__name__)

    class Read:
        @staticmethod
        def DescribeLimits(): return KinesisAction(Actions.Read.DescribeLimits.__name__)

        @staticmethod
        def DescribeStream(): return KinesisAction(Actions.Read.DescribeStream.__name__,
                                                   required=[KinesisAction.Stream.__name__])

        @staticmethod
        def DescribeStreamConsumer(): return KinesisAction(Actions.Read.DescribeStreamConsumer.__name__,
                                                           required=[KinesisAction.Consumer.__name__])

        @staticmethod
        def DescribeStreamSummary(): return KinesisAction(Actions.Read.DescribeStreamSummary.__name__,
                                                          required=[KinesisAction.Stream.__name__])

        @staticmethod
        def GetRecords(): return KinesisAction(Actions.Read.GetRecords.__name__,
                                               required=[KinesisAction.Stream.__name__])

        @staticmethod
        def GetShardIterator(): return KinesisAction(Actions.Read.GetShardIterator.__name__,
                                                     required=[KinesisAction.Stream.__name__])

        @staticmethod
        def ListTagsForStream(): return KinesisAction(Actions.Read.ListTagsForStream.__name__,
                                                      required=[KinesisAction.Stream.__name__])

        @staticmethod
        def UpdateShardCount(): return KinesisAction(Actions.Write.UpdateShardCount.__name__)

    class List:
        @staticmethod
        def ListShards(): return KinesisAction(Actions.List.ListShards.__name__)

        @staticmethod
        def ListStreamConsumers(): return KinesisAction(Actions.List.ListStreamConsumers.__name__)

        @staticmethod
        def ListStreams(): return KinesisAction(Actions.List.ListStreams.__name__)

    @staticmethod
    def TaggingAll(): return KinesisAccessLevelAllActions(Actions.Tagging)

    @staticmethod
    def WriteAll(): return KinesisAccessLevelAllActions(Actions.Write)

    @staticmethod
    def ReadAll(): return KinesisAccessLevelAllActions(Actions.Read)

    @staticmethod
    def ListAll(): return KinesisAccessLevelAllActions(Actions.List)

    All = "kinesis:*"
| 43.328859 | 119 | 0.631041 | 464 | 6,456 | 8.280172 | 0.142241 | 0.117127 | 0.17595 | 0.121031 | 0.366216 | 0.338886 | 0.338886 | 0.131182 | 0.131182 | 0.131182 | 0 | 0 | 0.28005 | 6,456 | 148 | 120 | 43.621622 | 0.826592 | 0.014715 | 0 | 0.529412 | 0 | 0 | 0.002517 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.352941 | false | 0 | 0.029412 | 0.333333 | 0.509804 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
568a1aa337f57bc7c4cda92b824694bbeb5808eb | 824 | py | Python | ACME/geometry/laplacian.py | mauriziokovacic/ACME | 2615b66dd4addfd5c03d9d91a24c7da414294308 | [
"MIT"
] | 3 | 2019-10-23T23:10:55.000Z | 2021-09-01T07:30:14.000Z | ACME/geometry/laplacian.py | mauriziokovacic/ACME-Python | 2615b66dd4addfd5c03d9d91a24c7da414294308 | [
"MIT"
] | null | null | null | ACME/geometry/laplacian.py | mauriziokovacic/ACME-Python | 2615b66dd4addfd5c03d9d91a24c7da414294308 | [
"MIT"
] | 1 | 2020-07-11T11:35:43.000Z | 2020-07-11T11:35:43.000Z | import torch
from ..topology.laplacian import *
from .adjacency import *
def combinatorial_Laplacian(P, T):
    """
    Computes the combinatorial Laplacian matrix for a given mesh.

    Parameters
    ----------
    P : Tensor
        the input points set tensor
    T : LongTensor
        the topology tensor

    Returns
    -------
    Tensor
        the Laplacian matrix
    """
    return laplacian(Adjacency(T, P=P, type='std'))


def cotangent_Laplacian(P, T):
    """
    Computes the cotangent weights Laplacian matrix for a given triangle mesh.

    Parameters
    ----------
    P : Tensor
        the input points set tensor
    T : LongTensor
        the topology tensor

    Returns
    -------
    Tensor
        the Laplacian matrix
    """
    return laplacian(Adjacency(T, P=P, type='cot'))
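
# Hedged usage sketch (shapes are assumptions; the exact topology-tensor
# layout is defined elsewhere in the ACME package):
# P = torch.rand(100, 3)               # point set
# T = torch.randint(0, 100, (3, 50))   # triangle topology tensor
# L = combinatorial_Laplacian(P, T)
# C = cotangent_Laplacian(P, T)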
| 18.727273 | 78 | 0.59466 | 91 | 824 | 5.362637 | 0.340659 | 0.122951 | 0.045082 | 0.077869 | 0.737705 | 0.54918 | 0.54918 | 0.54918 | 0.54918 | 0.54918 | 0 | 0 | 0.300971 | 824 | 43 | 79 | 19.162791 | 0.847222 | 0.538835 | 0 | 0 | 0 | 0 | 0.022305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.285714 | false | 0 | 0.428571 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
569570cd854476e56a289049d62849a9b1e02716 | 263 | py | Python | light_test/light_test/doctype/audit_keith/test_audit_keith.py | kwatkinsLexul/light_test | 048937aac2d2b13af7d55b92fc6f7437f74f4c04 | [
"MIT"
] | null | null | null | light_test/light_test/doctype/audit_keith/test_audit_keith.py | kwatkinsLexul/light_test | 048937aac2d2b13af7d55b92fc6f7437f74f4c04 | [
"MIT"
] | null | null | null | light_test/light_test/doctype/audit_keith/test_audit_keith.py | kwatkinsLexul/light_test | 048937aac2d2b13af7d55b92fc6f7437f74f4c04 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2015, Keith and Contributors
# See license.txt
from __future__ import unicode_literals
import frappe
import unittest
# test_records = frappe.get_test_records('Audit Keith')
class TestAuditKeith(unittest.TestCase):
    pass
| 20.230769 | 55 | 0.768061 | 34 | 263 | 5.705882 | 0.794118 | 0.113402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02193 | 0.13308 | 263 | 12 | 56 | 21.916667 | 0.828947 | 0.509506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.6 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
56a8778cd78552434dd56f302aa163ca4702f8d3 | 211 | py | Python | yt/frontends/art/api.py | Xarthisius/yt | 321643c3abff64a6f132d98d0747f3558f7552a3 | [
"BSD-3-Clause-Clear"
] | 360 | 2017-04-24T05:06:04.000Z | 2022-03-31T10:47:07.000Z | yt/frontends/art/api.py | Xarthisius/yt | 321643c3abff64a6f132d98d0747f3558f7552a3 | [
"BSD-3-Clause-Clear"
] | 2,077 | 2017-04-20T20:36:07.000Z | 2022-03-31T16:39:43.000Z | yt/frontends/art/api.py | stonnes/yt | aad3cfa3b4ebab7838352ab467275a27c26ff363 | [
"BSD-3-Clause-Clear"
] | 257 | 2017-04-19T20:52:28.000Z | 2022-03-29T12:23:52.000Z | from . import tests
from .data_structures import (
    ARTDataset,
    ARTDomainFile,
    ARTDomainSubset,
    ARTIndex,
    DarkMatterARTDataset,
)
from .fields import ARTFieldInfo
from .io import IOHandlerART
| 19.181818 | 32 | 0.748815 | 20 | 211 | 7.85 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.199052 | 211 | 10 | 33 | 21.1 | 0.928994 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3b14a8a90a9b966384d4097207eed24c289af924 | 9,786 | py | Python | src/configs/project_configs.py | PawelRosikiewicz/SkinDiagnosticAI | 7cc7b7a9ccd4103095a7548e7b99de4988858356 | [
"MIT"
] | 1 | 2021-05-15T09:57:25.000Z | 2021-05-15T09:57:25.000Z | src/configs/project_configs.py | PawelRosikiewicz/SkinDiagnosticAI | 7cc7b7a9ccd4103095a7548e7b99de4988858356 | [
"MIT"
] | null | null | null | src/configs/project_configs.py | PawelRosikiewicz/SkinDiagnosticAI | 7cc7b7a9ccd4103095a7548e7b99de4988858356 | [
"MIT"
] | null | null | null | # ********************************************************************************** #
# #
# Project: SkinAnaliticAI #
# Author: Pawel Rosikiewicz #
# Contact: prosikiewicz_gmail.com #
# #
#. This notebook is a part of Skin AanaliticAI development kit, created #
#. for evaluation of public datasets used for skin cancer detection with #
#. large number of AI models and data preparation pipelines. #
# #
# License: MIT #
#. Copyright (C) 2021.01.30 Pawel Rosikiewicz #
# https://opensource.org/licenses/MIT #
# #
# ********************************************************************************** #
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# config, ...........................................................................................
PROJECT_NAME = "Skin_cancer_detection_and_classyfication"
# config, ...........................................................................................
# CLASS_DESCRIPTION
#. "key" : str, class name used in original dataset downloaded form databse
# "original_name" : str, same as the key, but you can introduce other values in case its necessarly
# "class_full_name" : str, class name used on images, saved data etc, (more descriptive then class names, or sometimes the same according to situation)
# "class_group" : str, group of classes, if the classes are hierarchical,
# "class_description" : str, used as notes, or for class description available for the user/client
# "links" : list, with link to more data, on each class,
CLASS_DESCRIPTION = {
'akiec':{
"original_name":'akiec',
"class_full_name": "squamous_cell_carcinoma", # prevoisly called "Actinic_keratoses" in my dataset, but ths name is easier to find in online resourses, noth names are correct,
"class_group": "Tumour_Benign",
"class_description": "Class that contains two subclasses:(A) Actinic_Keratoses or (B) Bowen’s disease. Actinic Keratoses (Solar Keratoses) and Intraepithelial Carcinoma (Bowen’s disease) are common non-invasive, variants of squamous cell carcinoma that can be treated locally without surgery. These lesions may progress to invasive squamous cell carcinoma – which is usually not pigmented. Both neoplasms commonly show surface scaling and commonly are devoid of pigment, Actinic keratoses are more common on the face and Bowen’s disease is more common on other body sites. Because both types are induced by UV-light the surrounding skin is usually typified by severe sun damaged except in cases of Bowen’s disease that are caused by human papilloma virus infection and not by UV. Pigmented variants exist for Bowen’s disease and for actinic keratoses",
"links":["https://dermoscopedia.org/Actinic_keratosis_/_Bowen%27s_disease_/_keratoacanthoma_/_squamous_cell_carcinoma"]
},
'bcc':{
"original_name":'bcc',
"class_full_name": "Basal_cell_carcinoma",
"class_group": "Tumour_Benign",
"class_description": "Basal cell carcinoma (BCC) is the most common type of skin cancer in the world that rarely metastasizes but grows destructively if untreated. It appears in different morphologic variants (flat, nodular, pigmented, cystic). There are multiple histopathologic subtypes of BCC including superficial, nodular, morpheaform/sclerosing/infiltrative, fibroepithelioma of Pinkus, microcytic adnexal and baso-squamous cell BCC. Each subtype can be clinically pigmented or non-pigmented. It is not uncommon for BCCs to display pigment on dermoscopy with up to 30% of clinically non-pigmented BCCs revealing pigment on dermoscopy. Based on the degree of pigmentation, some BCCs can mimic melanomas or other pigmented skin lesions. Depending on the subtype of BCC and the degree of pigmentation, the clinical differential diagnosis can be quite broad ranging from benign inflammatory lesions to melanoma. Fortunately, the dermoscopic criteria for BCC are visible irrespective of the size of the tumor and can be well distiguished using dermatoscopy",
"links":["https://dermoscopedia.org/Basal_cell_carcinoma"]
},
'bkl':{
"original_name":'bkl',
"class_full_name": "Benign_keratosis",
"class_group": "Tumour_Benign",
"class_description": "Benign keratosis is a generic group that includes three typesy of non-carcinogenig lesions: (A) seborrheic keratoses (senile wart), (B) solar lentigo - which can be regarded a flat variant of seborrheic keratosis, (C) and lichen-planus like keratoses (LPLK), which corresponds to a seborrheic keratosis or a solar lentigo with inflammation and regression. The three subgroups may look different dermatoscopically, but we grouped them together because they are similar biologically and often reported under the same generic term histopathologically. Briefly: Seborrheic keratoses (A) are benign epithelial lesions that can appear on any part of the body except for the mucous membranes, palms, and soles. The lesions are quite prevalent in people older than 30 years. Early seborrheic keratoses are light - to dark brown oval macules with sharply demarcated borders. As the lesions progress, they transform into plaques with a waxy or stuck-on appearance, often with follicular plugs scattered over their surfaces. The size of the lesions varies from a few millimeters to a few centimeters. Solar lentigines (B) are sharply circumscribed, uniformly pigmented macules that are located predominantly on the sun-exposed areas of the skin, such as the dorsum of the hands, the shoulders, and the scalp. Lentigines are a result of hyperplasia of keratinocytes and melanocytes, with increased accumulation of melanin in the keratinocytes. They are induced by ultraviolet light exposure. Unlike freckles, solar lentigines persist indefinitely. Nearly 90% of Caucasians over the age of 60 years have these lesions. LPLK (C), is one of the common benign neoplasms of the skin, and it is highly variable in its appearance, Some LPKL can show morphologic features mimicking melanoma and are often biopsied or excised for diagnostic reasons",
"links": ["https://dermoscopedia.org/Solar_lentigines_/_seborrheic_keratoses_/_lichen_planus-like_keratosis"]
},
'df': {
"original_name":'df',
"class_full_name": "Dermatofibroma",
"class_group": "Tumour_Benign",
"class_description": "Dermatofibromas (DFs) are prevalent cutaneous lesions that most frequently affect young to middle-aged adults, with a slight predominance in females. Clinically, dermatofibromas appear as firm, single or multiple papules/nodules with a relatively smooth surface and predilection for the lower extremities. Characteristically, upon lateral compression of the skin surrounding dermatofibromas, the tumors tend to pucker inward producing a dimple-like depression in the overlying skin; a feature known as the dimple or Fitzpatrick’s sign. Dermatofibroma is a benign skin lesion regarded as either a benign proliferation or an inflammatory reaction to minimal trauma. The most common dermatoscopic presentation is reticular lines at the periphery with a central white patch denoting fibrosis",
"links": ["https://dermoscopedia.org/Dermatofibromas"]
},
'nv': {
"original_name":'nv',
"class_full_name": "Melanocytic_nevus",
"class_group": "Tumour_Benign",
"class_description": "Melanocytic nevi are benign neoplasms of melanocytes and appear in a myriad of variants, which all were included in train data used for diagnosis. The variants may differ significantly from a dermatoscopic point of view. Unlike, melanoma they are usually symmetric with regard to the distribution of color and structure",
"links":["https://dermoscopedia.org/Benign_Melanocytic_lesions"]
},
"mel": {
"original_name":'mel',
"class_full_name": "Melanoma",
"class_group": "Tumour_Malignant",
"class_description": "Melanoma is a malignant neoplasm derived from melanocytes that may appear in different variants. If excised in an early stage it can be cured by simple surgical excision. Melanomas can be invasive or non-invasive (in situ). Melanomas are usually, albeit not always, chaotic, and some melanoma specific criteria depend on anatomic site, All variants of melanoma including melanoma in situ, except for non-pigmented, subungual, ocular or mucosal melanoma were included in train dataset used for diagnosis",
"linkss": ["https://dermoscopedia.org/Melanoma"]
},
'vasc':{
"original_name":'vasc',
"class_full_name": "Vascular_skin_lesions",
"class_group": "Vascular_skin_lesions",
"class_description": "Angiomas are dermatoscopically characterized by red or purple color and solid, well circumscribed structures known as red clods or lacunes.Data Used for training for diagnosis: Vascular skin lesions in the dataset range from cherry angiomas to angiokeratomas and pyogenic granulomas. Hemorrhage is also included in this category",
"links": ["https://dermoscopedia.org/Vascular_lesions"]
}
} | 109.955056 | 1,857 | 0.682812 | 1,200 | 9,786 | 5.495 | 0.399167 | 0.026691 | 0.015772 | 0.023658 | 0.028814 | 0.028814 | 0 | 0 | 0 | 0 | 0 | 0.002501 | 0.223789 | 9,786 | 89 | 1,858 | 109.955056 | 0.865456 | 0.238402 | 0 | 0.096154 | 0 | 0.134615 | 0.914833 | 0.019031 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
3b150c9ed8bd6f6ef6cb2c14cefd89505c76a9dd | 152 | py | Python | backend/config/pagination.py | hnthh/foodgram-project-react | 3383c6a116fded11b4a764b95e6ca4ead03444f3 | [
"MIT"
] | 1 | 2022-02-09T10:42:45.000Z | 2022-02-09T10:42:45.000Z | backend/config/pagination.py | hnthh/foodgram | 3383c6a116fded11b4a764b95e6ca4ead03444f3 | [
"MIT"
] | null | null | null | backend/config/pagination.py | hnthh/foodgram | 3383c6a116fded11b4a764b95e6ca4ead03444f3 | [
"MIT"
] | null | null | null | from rest_framework.pagination import PageNumberPagination
class LimitQueryParamPagination(PageNumberPagination):
    page_size_query_param = 'limit'
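
# Hedged usage sketch: attach the paginator in a DRF view (the viewset below
# is an illustrative assumption), after which clients can request
# e.g. GET /items/?page=2&limit=10
# class ItemViewSet(viewsets.ModelViewSet):
#     pagination_class = LimitQueryParamPagination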
| 25.333333 | 58 | 0.855263 | 14 | 152 | 9 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098684 | 152 | 5 | 59 | 30.4 | 0.919708 | 0 | 0 | 0 | 0 | 0 | 0.032895 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
3b42179ab00c78174874156f3d536008a8c28bea | 258 | py | Python | phyton/Exercicio026_Quantidade_de_Letras_A.py | felipebaloneker/Practice | 6c4f9b9f91c872350b566927fe9df10aed6930be | [
"MIT"
] | null | null | null | phyton/Exercicio026_Quantidade_de_Letras_A.py | felipebaloneker/Practice | 6c4f9b9f91c872350b566927fe9df10aed6930be | [
"MIT"
] | null | null | null | phyton/Exercicio026_Quantidade_de_Letras_A.py | felipebaloneker/Practice | 6c4f9b9f91c872350b566927fe9df10aed6930be | [
"MIT"
] | null | null | null | frase = str(input('Type a sentence: ').lower().strip())
print('The sentence has {} letter A.'.format(frase.count('a')))
print('It appears for the first time at position {}.'.format(frase.find('a')+1))
print('It appears for the last time at position {}'.format(frase.rfind('a')+1))
| 51.6 | 71 | 0.662791 | 42 | 258 | 4.071429 | 0.571429 | 0.192982 | 0.175439 | 0.222222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008621 | 0.100775 | 258 | 4 | 72 | 64.5 | 0.728448 | 0 | 0 | 0 | 0 | 0 | 0.44186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.75 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 3 |
3b48b93d3bf4369cb18e3ae1122381bb59ebc4f4 | 571 | py | Python | pynars/Narsese/_py/Interval.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | pynars/Narsese/_py/Interval.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | pynars/Narsese/_py/Interval.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | from typing import Type
from .Term import Term
class Interval(Term):
    is_interval: bool = True

    def __init__(self, interval, do_hashing=False, word_sorted=None, is_input=False) -> None:
        super().__init__("+" + str(interval), do_hashing=do_hashing, word_sorted=word_sorted, is_input=is_input)
        self.interval = int(interval)

    def __repr__(self) -> str:
        return f'<Interval: {str(self)}>'

    def __int__(self) -> int:
        return self.interval

    def __add__(self, o: Type['Interval']):
return Interval(int(self)+int(o)) | 31.722222 | 110 | 0.658494 | 77 | 571 | 4.493506 | 0.363636 | 0.104046 | 0.098266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210158 | 571 | 18 | 111 | 31.722222 | 0.767184 | 0 | 0 | 0 | 0 | 0 | 0.055944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.307692 | false | 0 | 0.153846 | 0.230769 | 0.846154 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
3b526aed7d086bae704ca9ac633d49d7f0033d5e | 162 | py | Python | py_tdlib/constructors/page_block_related_articles.py | Mr-TelegramBot/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 24 | 2018-10-05T13:04:30.000Z | 2020-05-12T08:45:34.000Z | py_tdlib/constructors/page_block_related_articles.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 3 | 2019-06-26T07:20:20.000Z | 2021-05-24T13:06:56.000Z | py_tdlib/constructors/page_block_related_articles.py | MrMahdi313/python-tdlib | 2e2d21a742ebcd439971a32357f2d0abd0ce61eb | [
"MIT"
] | 5 | 2018-10-05T14:29:28.000Z | 2020-08-11T15:04:10.000Z | from ..factory import Type
class pageBlockRelatedArticles(Type):
    header = None  # type: "RichText"
    articles = None  # type: "vector<pageBlockRelatedArticle>"
| 23.142857 | 59 | 0.746914 | 16 | 162 | 7.5625 | 0.75 | 0.132231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 162 | 6 | 60 | 27 | 0.876812 | 0.345679 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
3b6599b100e35ef2c8d864695349bf3c844cd6ac | 151 | py | Python | src/webshot/utils.py | hostilex00/webshot | 55e3d866af9136dc5e37eccb19ba2507346bd598 | [
"MIT"
] | null | null | null | src/webshot/utils.py | hostilex00/webshot | 55e3d866af9136dc5e37eccb19ba2507346bd598 | [
"MIT"
] | null | null | null | src/webshot/utils.py | hostilex00/webshot | 55e3d866af9136dc5e37eccb19ba2507346bd598 | [
"MIT"
] | null | null | null | from pathlib import Path
def mkdirs(directories: list):
    for directory in directories:
        Path(directory).mkdir(parents=True, exist_ok=True)
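

if __name__ == '__main__':
    # Minimal illustrative demo (the paths are assumptions): both folder trees
    # are created, including parents; re-running is a no-op thanks to exist_ok.
    mkdirs(['output/screenshots', 'output/logs'])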
| 21.571429 | 58 | 0.735099 | 20 | 151 | 5.5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178808 | 151 | 6 | 59 | 25.166667 | 0.887097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8e5c752e8092ec0210ade846c9f8fa29bd9e3eb9 | 174 | py | Python | setup.py | neureal/gym-zmq | 9df9adcc2cedecebca9556ace13ba0729add902f | [
"MIT"
] | null | null | null | setup.py | neureal/gym-zmq | 9df9adcc2cedecebca9556ace13ba0729add902f | [
"MIT"
] | null | null | null | setup.py | neureal/gym-zmq | 9df9adcc2cedecebca9556ace13ba0729add902f | [
"MIT"
] | null | null | null | from setuptools import setup
setup(name='gym_zmq',
      version='0.0.1',
      install_requires=['gym>=0.10.9',
                        'pyzmq>=17.1.2']  # libzmq5 v4.1.4
      )
| 21.75 | 57 | 0.54023 | 26 | 174 | 3.538462 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 0.281609 | 174 | 7 | 58 | 24.857143 | 0.616 | 0.08046 | 0 | 0 | 0 | 0 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8e601e1d6371060deac4d7262c1a8ed8ff0d7958 | 256 | py | Python | finance/admin.py | NSYT0607/DONGKEY | 83f926f22a10a28895c9ad71038c9a27d200e231 | [
"MIT"
] | 1 | 2018-04-10T11:47:16.000Z | 2018-04-10T11:47:16.000Z | finance/admin.py | NSYT0607/DONGKEY | 83f926f22a10a28895c9ad71038c9a27d200e231 | [
"MIT"
] | null | null | null | finance/admin.py | NSYT0607/DONGKEY | 83f926f22a10a28895c9ad71038c9a27d200e231 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import (
    Accounting,
    Classification,
    Income,
    Expenditure
)
admin.site.register(Accounting)
admin.site.register(Classification)
admin.site.register(Income)
admin.site.register(Expenditure)
| 18.285714 | 35 | 0.757813 | 28 | 256 | 6.928571 | 0.428571 | 0.185567 | 0.350515 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152344 | 256 | 13 | 36 | 19.692308 | 0.894009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.181818 | 0 | 0.181818 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
8e7570909604fc79b6b2cfa734263ec181f0833e | 241 | py | Python | posix_checkapi/TRACES/POT/ut_lind_fs_statfs.py | JustinCappos/checkapi | 2508c414869eda3479e1384b1bea65ec1e749d3b | [
"Apache-2.0"
] | null | null | null | posix_checkapi/TRACES/POT/ut_lind_fs_statfs.py | JustinCappos/checkapi | 2508c414869eda3479e1384b1bea65ec1e749d3b | [
"Apache-2.0"
] | null | null | null | posix_checkapi/TRACES/POT/ut_lind_fs_statfs.py | JustinCappos/checkapi | 2508c414869eda3479e1384b1bea65ec1e749d3b | [
"Apache-2.0"
] | null | null | null | import lind_test_server
from lind_fs_constants import *
lind_test_server._blank_fs_init()
# / should exist.
statfsdict = lind_test_server.statfs_syscall('/')
assert statfsdict['f_type'] == 0xBEEFC0DE
assert statfsdict['f_bsize'] == 4096
| 17.214286 | 49 | 0.784232 | 33 | 241 | 5.30303 | 0.606061 | 0.137143 | 0.24 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027397 | 0.091286 | 241 | 13 | 50 | 18.538462 | 0.77169 | 0.062241 | 0 | 0 | 0 | 0 | 0.063063 | 0 | 0 | 0 | 0.045045 | 0 | 0.333333 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8e9220eee6986672fd8ba4788d066506d5b28c4d | 2,828 | py | Python | src/api/datamanage/pro/datamodel/application/jobs/console.py | Chromico/bk-base | be822d9bbee544a958bed4831348185a75604791 | [
"MIT"
] | 84 | 2021-06-30T06:20:23.000Z | 2022-03-22T03:05:49.000Z | src/api/datamanage/pro/datamodel/application/jobs/console.py | Chromico/bk-base | be822d9bbee544a958bed4831348185a75604791 | [
"MIT"
] | 7 | 2021-06-30T06:21:16.000Z | 2022-03-29T07:36:13.000Z | src/api/datamanage/pro/datamodel/application/jobs/console.py | Chromico/bk-base | be822d9bbee544a958bed4831348185a75604791 | [
"MIT"
] | 40 | 2021-06-30T06:21:26.000Z | 2022-03-29T12:42:26.000Z | # -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making BK-BASE 蓝鲸基础平台 available.
Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.
BK-BASE 蓝鲸基础平台 is licensed under the MIT License.
License for BK-BASE 蓝鲸基础平台:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
from datamanage.pro.datamodel.application.jobs.orchestrator import achieved_job_system
class Console(object):
    job_system = None

    def __init__(self, data_scope, job_type='AssembleSql'):
        """
        Initialize the job console.

        :param data_scope: data scope
            {
                "node_type": (string) dim/fact/indicator,
                "node_instance":
            }
        :param job_type: job type; AssembleSql - build-SQL job
        """
        if job_type in achieved_job_system:
            self.job_system = achieved_job_system[job_type](data_scope=data_scope)
        if self.job_system is None:
            # Todo: job system for this job_type is not implemented yet
            raise NotImplementedError('job type "{}" is not supported yet'.format(job_type))

    def build(self, *args, **kwargs):
        """
        Build the job.

        :return: mixed; None - nothing to execute, non-None - execution result
        """
        plan = self.job_system.gen_plan(command='build')
        return plan.execute(*args, **kwargs)

    def destroy(self):
        """
        Destroy the job.

        :return: mixed; None - nothing to execute, non-None - execution result
        """
        plan = self.job_system.gen_plan(command='destroy')
        return plan.execute()

    def start(self):
        """
        Start the job.

        :return: mixed; None - nothing to execute, non-None - execution result
        """
        plan = self.job_system.gen_plan(command='start')
        return plan.execute()

    def stop(self):
        """
        Stop the job.

        :return: mixed; None - nothing to execute, non-None - execution result
        """
        plan = self.job_system.gen_plan(command='stop')
        return plan.execute()
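
# Hedged usage sketch (the data_scope payload mirrors the docstring above; the
# node_instance value is an illustrative assumption):
# console = Console(data_scope={'node_type': 'fact', 'node_instance': node},
#                   job_type='AssembleSql')
# result = console.build()  # generate and execute the build plan
# console.start()
# console.stop()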
| 35.35 | 111 | 0.653112 | 367 | 2,828 | 4.948229 | 0.46594 | 0.049559 | 0.042952 | 0.044053 | 0.132159 | 0.132159 | 0.132159 | 0.132159 | 0.132159 | 0.132159 | 0 | 0.003277 | 0.244696 | 2,828 | 79 | 112 | 35.797468 | 0.84691 | 0.606789 | 0 | 0.15 | 0 | 0 | 0.035242 | 0 | 0 | 0 | 0 | 0.012658 | 0 | 1 | 0.25 | false | 0 | 0.05 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
8e9f1c6c567e138d25c328267947ccd37f8edfc2 | 201 | py | Python | examples/create_frames.py | axju/blurring | 2f2e50b7f47d8556be2d74687f34ac6eabd8d235 | [
"MIT"
] | 15 | 2019-09-20T14:20:53.000Z | 2022-01-06T13:31:17.000Z | examples/create_frames.py | axju/blurring | 2f2e50b7f47d8556be2d74687f34ac6eabd8d235 | [
"MIT"
] | null | null | null | examples/create_frames.py | axju/blurring | 2f2e50b7f47d8556be2d74687f34ac6eabd8d235 | [
"MIT"
] | 2 | 2019-09-21T05:37:30.000Z | 2019-09-22T04:53:57.000Z | import os
from blurring.utils import create_frames
root = os.path.dirname(os.path.abspath(__file__))
src = os.path.join(root, 'video.mp4')
dest = os.path.join(root, 'frames')
create_frames(src, dest)
| 25.125 | 49 | 0.751244 | 33 | 201 | 4.393939 | 0.515152 | 0.165517 | 0.137931 | 0.193103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005525 | 0.099502 | 201 | 7 | 50 | 28.714286 | 0.79558 | 0 | 0 | 0 | 0 | 0 | 0.074627 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8eb2a20778f7cf51025f1dd897c1a4cb3894ebca | 327 | py | Python | app/routes/main.py | selutin99/flask-template | 76afd4544433bad6052a82c927696808c5821979 | [
"Apache-2.0"
] | 1 | 2022-01-04T11:21:57.000Z | 2022-01-04T11:21:57.000Z | app/routes/main.py | selutin99/flask-template | 76afd4544433bad6052a82c927696808c5821979 | [
"Apache-2.0"
] | null | null | null | app/routes/main.py | selutin99/flask-template | 76afd4544433bad6052a82c927696808c5821979 | [
"Apache-2.0"
] | null | null | null | from flask import jsonify, make_response
from flask import render_template, Blueprint
main = Blueprint('main', __name__, template_folder='templates')
@main.route('/')
def index():
    return render_template('main/index.html')


@main.route('/json')
def json():
    return make_response(jsonify(response='Hello world'), 200)
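
# Hedged usage sketch: the blueprint is presumably registered on the Flask app
# elsewhere (the app-factory layout is an assumption):
# app.register_blueprint(main)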
| 21.8 | 63 | 0.737003 | 42 | 327 | 5.52381 | 0.52381 | 0.077586 | 0.12931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010453 | 0.122324 | 327 | 14 | 64 | 23.357143 | 0.797909 | 0 | 0 | 0 | 0 | 0 | 0.137615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0.222222 | 0.666667 | 0.222222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
8ebad60b678a7665a77bbce9a9153f1b6af1fbe6 | 915 | py | Python | pyobs/events/log.py | pyobs/pyobs-core | e3401e63eb31587c2bc535f7346b7e4ef69d64ab | [
"MIT"
] | 4 | 2020-02-14T10:50:03.000Z | 2022-03-25T04:15:06.000Z | pyobs/events/log.py | pyobs/pyobs-core | e3401e63eb31587c2bc535f7346b7e4ef69d64ab | [
"MIT"
] | 60 | 2020-09-14T09:10:20.000Z | 2022-03-25T17:51:42.000Z | pyobs/events/log.py | pyobs/pyobs-core | e3401e63eb31587c2bc535f7346b7e4ef69d64ab | [
"MIT"
] | 2 | 2020-10-14T09:34:57.000Z | 2021-04-27T09:35:57.000Z | from .event import Event
class LogEvent(Event):
    """Event for log entries."""
    __module__ = 'pyobs.events'

    def __init__(self, time=None, level=None, filename=None, function=None, line=None, message=None):
        Event.__init__(self)
        self.data = {
            'time': time,
            'level': level,
            'filename': filename,
            'function': function,
            'line': line,
            'message': message
        }

    @property
    def time(self):
        return self.data['time']

    @property
    def level(self):
        return self.data['level']

    @property
    def filename(self):
        return self.data['filename']

    @property
    def function(self):
        return self.data['function']

    @property
    def line(self):
        return self.data['line']

    @property
    def message(self):
        return self.data['message']

__all__ = ['LogEvent']
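
# Hedged usage sketch (field values are illustrative, mirroring the
# logging-record-style fields above):
# event = LogEvent(time=1609459200.0, level='INFO', filename='camera.py',
#                  function='expose', line=42, message='exposure started')
# assert event.message == 'exposure started'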
| 20.333333 | 101 | 0.557377 | 96 | 915 | 5.145833 | 0.260417 | 0.11336 | 0.17004 | 0.218623 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.312568 | 915 | 44 | 102 | 20.795455 | 0.785374 | 0.024044 | 0 | 0.1875 | 0 | 0 | 0.10372 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.21875 | false | 0 | 0.03125 | 0.1875 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
8ee7284d8bcc58529f5312577823cdb140b81117 | 856 | py | Python | steps/step-01/main.py | sfeir-open-source/sfeir-school-python | 7ae95b74cc9867d1dbcc90559ca0d47edb0b0883 | [
"Apache-2.0"
] | 5 | 2020-04-29T13:26:28.000Z | 2022-03-17T13:02:35.000Z | steps/step-01/main.py | sfeir-open-source/sfeir-school-python | 7ae95b74cc9867d1dbcc90559ca0d47edb0b0883 | [
"Apache-2.0"
] | 12 | 2020-07-24T10:08:26.000Z | 2022-03-15T08:10:25.000Z | steps/step-01/main.py | sfeir-open-source/sfeir-school-python | 7ae95b74cc9867d1dbcc90559ca0d47edb0b0883 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
To-Do application
"""
def add(todos):
    """
    Add a task
    """
    pass


def delete(todos, index=None):
    """
    Delete one or all tasks
    """
    pass


def get_printable_todos(todos):
    """
    Get formatted tasks
    """
    pass


def toggle_done(todos, index):
    """
    Toggle a task
    """
    pass


def view(todos, index):
    """
    Print tasks
    """
    print('\nTo-Do list')
    print('=' * 40)


def main():
    """
    Main function
    """
    print('Add New tasks...')
    # TODO Add 3 tasks & print
    print('\nThe Second one is toggled')
    # TODO Toggle the second task & print
    print('\nThe last one is removed')
    # TODO Remove only the third task & print
    print('\nAll the todos are cleaned.')
    # TODO Remove all the tasks & print


if __name__ == '__main__':
    main()
| 14.508475 | 45 | 0.546729 | 108 | 856 | 4.231481 | 0.453704 | 0.061269 | 0.039387 | 0.052516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006757 | 0.308411 | 856 | 58 | 46 | 14.758621 | 0.765203 | 0.315421 | 0 | 0.222222 | 0 | 0 | 0.241736 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.333333 | false | 0.222222 | 0 | 0 | 0.333333 | 0.388889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
d9031eb28f7c4001d26f8afc819693851750038e | 265 | py | Python | test/test.py | gosia1138/MontyHall | 21d6a79bb857e1820c715d44a72af9eee248a215 | [
"MIT"
] | null | null | null | test/test.py | gosia1138/MontyHall | 21d6a79bb857e1820c715d44a72af9eee248a215 | [
"MIT"
] | null | null | null | test/test.py | gosia1138/MontyHall | 21d6a79bb857e1820c715d44a72af9eee248a215 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import PIL
import os
import unittest
from io import StringIO
from unittest.mock import patch
from .context import *
class TestBasic(unittest.TestCase):
    def test_first(self):
        pass


if __name__ == '__main__':
    unittest.main()
| 15.588235 | 35 | 0.724528 | 36 | 265 | 5.083333 | 0.694444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004651 | 0.188679 | 265 | 16 | 36 | 16.5625 | 0.846512 | 0.079245 | 0 | 0 | 0 | 0 | 0.032922 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0.090909 | 0.545455 | 0 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 3 |
d90ad2223de74fc247f1ab04f905c51abb4a6c4c | 333 | py | Python | app/core/models/profile.py | rdurica/example | 733420f955b679d34adfb6bffa35b17177e086f6 | [
"MIT"
] | null | null | null | app/core/models/profile.py | rdurica/example | 733420f955b679d34adfb6bffa35b17177e086f6 | [
"MIT"
] | 1 | 2022-03-15T22:42:58.000Z | 2022-03-15T23:05:30.000Z | app/core/models/profile.py | rdurica/example | 733420f955b679d34adfb6bffa35b17177e086f6 | [
"MIT"
] | null | null | null | from django.contrib.auth.models import User
from django.db import models
class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)

    def __str__(self) -> str:
        return f"{self.user.username}"

    def __repr__(self) -> str:
        return f"--id: {self.id} --name: {self.user.username}"
| 25.615385 | 63 | 0.678679 | 46 | 333 | 4.717391 | 0.521739 | 0.092166 | 0.119816 | 0.129032 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.183183 | 333 | 12 | 64 | 27.75 | 0.797794 | 0 | 0 | 0 | 0 | 0 | 0.192192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
d90d534a5142bd3ecc5d983e901a22b900394171 | 164 | py | Python | ex015.py | mateusguida/ExerciciosPython | 70f2df0a2a7bfd152205bcce228e2161c11f5888 | [
"MIT"
] | null | null | null | ex015.py | mateusguida/ExerciciosPython | 70f2df0a2a7bfd152205bcce228e2161c11f5888 | [
"MIT"
] | null | null | null | ex015.py | mateusguida/ExerciciosPython | 70f2df0a2a7bfd152205bcce228e2161c11f5888 | [
"MIT"
] | null | null | null | dias = int(input("How many days rented? "))
km = float(input("How many km driven: "))
preco = 60 * dias + 0.15 * km
print(f'The total to pay is R${preco:.2f}') | 27.333333 | 44 | 0.640244 | 29 | 164 | 3.62069 | 0.793103 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.044444 | 0.176829 | 164 | 6 | 45 | 27.333333 | 0.733333 | 0 | 0 | 0 | 0 | 0.472727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
d9102bf14ddafe216862a4391d0aa3a6ce4ed657 | 293 | py | Python | src/test/stack_growth.py | yuyichao/rr | 18f2ae57eee76e50c216066ad9163a90d0dfddb5 | [
"BSD-1-Clause"
] | 2 | 2020-10-29T02:10:54.000Z | 2021-06-20T00:00:26.000Z | src/test/stack_growth.py | yuyichao/rr | 18f2ae57eee76e50c216066ad9163a90d0dfddb5 | [
"BSD-1-Clause"
] | 4 | 2018-07-14T23:44:05.000Z | 2018-11-28T00:04:30.000Z | src/test/stack_growth.py | yuyichao/rr | 18f2ae57eee76e50c216066ad9163a90d0dfddb5 | [
"BSD-1-Clause"
] | 6 | 2018-06-07T02:28:36.000Z | 2019-09-02T07:36:30.000Z | from rrutil import *
send_gdb('break breakpoint')
expect_gdb('Breakpoint 1')
send_gdb('c')
expect_gdb('Breakpoint 1')
send_gdb('finish')
send_gdb('watch -l buf[100]')
expect_gdb('Hardware[()/a-z ]+watchpoint 2')
send_gdb('c')
expect_gdb('Old value = 0')
expect_gdb('New value = 100')
ok()
| 17.235294 | 44 | 0.703072 | 48 | 293 | 4.083333 | 0.520833 | 0.178571 | 0.193878 | 0.204082 | 0.367347 | 0.27551 | 0 | 0 | 0 | 0 | 0 | 0.038314 | 0.109215 | 293 | 16 | 45 | 18.3125 | 0.712644 | 0 | 0 | 0.333333 | 0 | 0 | 0.419795 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
d91c3821ef45ca7747d2fb1419586c778cda6068 | 4,691 | py | Python | 4/tests.py | remihuguet/aoc2020 | c313c5b425dda92d949fd9ca4f18ff66f452794f | [
"MIT"
] | null | null | null | 4/tests.py | remihuguet/aoc2020 | c313c5b425dda92d949fd9ca4f18ff66f452794f | [
"MIT"
] | null | null | null | 4/tests.py | remihuguet/aoc2020 | c313c5b425dda92d949fd9ca4f18ff66f452794f | [
"MIT"
] | null | null | null | import passport
filename = '4/test_input.txt'
def test_parse_batch_file_properly():
    passports = passport.read_batch(filename)
    assert 4 == len(passports)
    assert "ecl:gry pid:860033327 eyr:2020 hcl:#fffffd byr:1937 iyr:2017 cid:147 hgt:183cm" == passports[0]


def test_passport_is_valid_against_fields():
    passports = passport.read_batch(filename)
    assert passport.is_valid(passports[0])
    assert not passport.is_valid(passports[1])
    assert passport.is_valid(passports[2])
    assert not passport.is_valid(passports[3])


def test_count_valids():
    assert 2 == passport.count_valids(filename)


def test_validate_byr():
    assert passport.is_byr_valid('byr:2002')
    assert not passport.is_byr_valid('byr:2003')
    assert not passport.is_byr_valid('byr:2dsdsdsds')
    assert passport.is_byr_valid('byr:1920')
    assert not passport.is_byr_valid('byr:1919')
    assert not passport.is_byr_valid('byr:192')
    assert not passport.is_byr_valid('byr:20022')
    assert passport.is_byr_valid('byr:2002 ')
    assert passport.is_byr_valid('dskdlms byr:2002 hgt:fdiksdlkd')


def test_validate_hgt():
    assert passport.is_hgt_valid('dsdsd hgt:60in fkmdslkfml')
    assert passport.is_hgt_valid('hgt:59in')
    assert passport.is_hgt_valid('dsdsd hgt:76in fkmdslkfml')
    assert not passport.is_hgt_valid('dsdsd hgt:77in fkmdslkfml')
    assert not passport.is_hgt_valid('hgt:190in')
    assert not passport.is_hgt_valid('hgt:58in')
    assert passport.is_hgt_valid('dfsdsd hgt:150cm ')
    assert passport.is_hgt_valid('dfsdsd hgt:190cm ')
    assert passport.is_hgt_valid('dfsdsd hgt:193cm ')
    assert not passport.is_hgt_valid('dfsdsd hgt:194cm ')
    assert not passport.is_hgt_valid('hgt:149cm')
    assert not passport.is_hgt_valid('hgt:190')
    assert not passport.is_hgt_valid('hgt:1919')


def test_validate_iyr():
    assert passport.is_iyr_valid('iyr:2010')
    assert not passport.is_iyr_valid('iyr:2008')
    assert not passport.is_iyr_valid('iyr:2dsdsdsds')
    assert passport.is_iyr_valid('iyr:2020')
    assert passport.is_iyr_valid('iyr:2012')


def test_validate_eyr():
    assert passport.is_eyr_valid('eyr:2020')
    assert not passport.is_eyr_valid('eyr:2018')
    assert not passport.is_eyr_valid('eyr:2dsdsdsds')
    assert passport.is_eyr_valid('eyr:2030')
    assert not passport.is_eyr_valid('eyr:2032')


def test_validate_hcl():
    assert passport.is_hcl_valid('hcl:#12ac45')
    assert not passport.is_hcl_valid('hcl:#12ac45dd')
    assert not passport.is_hcl_valid('hcl:12ac45')
    assert not passport.is_hcl_valid('hcl:#12ac4')
    assert not passport.is_hcl_valid('hcl:#12ac4!')


def test_validate_ecl():
    assert passport.is_ecl_valid('ecl:amb')
    assert passport.is_ecl_valid('ecl:blu')
    assert passport.is_ecl_valid('ecl:brn')
    assert passport.is_ecl_valid('ecl:gry')
    assert passport.is_ecl_valid('ecl:grn')
    assert passport.is_ecl_valid('ecl:hzl')
    assert passport.is_ecl_valid('ecl:oth')
    assert not passport.is_ecl_valid('ecl:amc')
    assert not passport.is_ecl_valid('ecl:bla')
    assert not passport.is_ecl_valid('ecl:brnaaa')
    assert not passport.is_ecl_valid('ecl:1112323')


def test_validate_pid():
    assert passport.is_pid_valid('pid:012345678')
    assert passport.is_pid_valid('pid:458289043')
    assert not passport.is_pid_valid('pid:akdfmlkf')
    assert not passport.is_pid_valid('pid:45828904312323232')
    assert not passport.is_pid_valid('pid:4232')


def test_are_fields_valid():
    p = 'pid:087499704 hgt:74in ecl:grn iyr:2012 eyr:2030 byr:1980 hcl:#623a2f'
    assert passport.are_fields_valid(p)


def test_are_fields_invalid():
    p = 'eyr:1972 cid:100 hcl:#18171d ecl:amb hgt:170 pid:186cm iyr:2018 byr:1926'
    assert not passport.are_fields_valid(p)


def test_invalidate_passports():
    passports = [
        'eyr:1972 cid:100 hcl:#18171d ecl:amb hgt:170 pid:186cm iyr:2018 byr:1926',
        'iyr:2019 hcl:#602927 eyr:1967 hgt:170cm ecl:grn pid:012533040 byr:1946',
        'hcl:dab227 iyr:2012 ecl:brn hgt:182cm pid:021572410 eyr:2020 byr:1992 cid:277',
        'hgt:59cm ecl:zzz eyr:2038 hcl:74454a iyr:2023 pid:3556412378 byr:2007'
    ]
    assert 0 == passport.count_valid_passports(passports)


def test_validate_passports():
    passports = [
        'pid:087499704 hgt:74in ecl:grn iyr:2012 eyr:2030 byr:1980 hcl:#623a2f',
        'eyr:2029 ecl:blu cid:129 byr:1989 iyr:2014 pid:896056539 hcl:#a97842 hgt:165cm',
        'hcl:#888785 hgt:164cm byr:2001 iyr:2015 cid:88 pid:545766238 ecl:hzl eyr:2022',
        'iyr:2010 hgt:158cm hcl:#b6652a ecl:blu byr:1944 eyr:2021 pid:093154719'
    ]
    assert 4 == passport.count_valid_passports(passports)
| 36.364341 | 107 | 0.731614 | 728 | 4,691 | 4.498626 | 0.196429 | 0.177099 | 0.160916 | 0.174046 | 0.656183 | 0.603969 | 0.42229 | 0.137405 | 0.102595 | 0.102595 | 0 | 0.122869 | 0.149861 | 4,691 | 128 | 108 | 36.648438 | 0.698345 | 0 | 0 | 0.0625 | 0 | 0.114583 | 0.29951 | 0.004477 | 0 | 0 | 0 | 0 | 0.666667 | 1 | 0.145833 | false | 0.75 | 0.010417 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
d92389f85c2992190e44f393f493834251373728 | 224 | py | Python | arc/utils.py | jtguibas/arc | e9df473ce5051f2b9f3981ef219b6a02076bdb42 | [
"MIT"
] | null | null | null | arc/utils.py | jtguibas/arc | e9df473ce5051f2b9f3981ef219b6a02076bdb42 | [
"MIT"
] | null | null | null | arc/utils.py | jtguibas/arc | e9df473ce5051f2b9f3981ef219b6a02076bdb42 | [
"MIT"
] | null | null | null | import numpy as np
def softmax(x):
    stable_logits = x - np.amax(x, axis=1, keepdims=True)  # Shift for stability
    exp_logits = np.exp(stable_logits)
    return exp_logits / np.sum(exp_logits, axis=1, keepdims=True)
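
# Usage sketch (illustrative values, not part of the original file): softmax
# normalizes row-wise, so every row of the output sums to 1.
#     logits = np.array([[1.0, 2.0, 3.0]])
#     probs = softmax(logits)
#     assert np.isclose(probs.sum(), 1.0)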
| 32 | 82 | 0.709821 | 37 | 224 | 4.162162 | 0.540541 | 0.175325 | 0.168831 | 0.220779 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01087 | 0.178571 | 224 | 6 | 83 | 37.333333 | 0.826087 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
d93440e440d253ffe90c7fa95e0e58ebe127eff8 | 611 | py | Python | util/color.py | IVRL/AdversaryLossLandscape | 73456788b38bebeac40b833f2ac5d6cb2f1530ea | [
"MIT"
] | 3 | 2022-02-22T18:44:22.000Z | 2022-02-24T01:20:14.000Z | util/color.py | IVRL/AdversaryLossLandscape | 73456788b38bebeac40b833f2ac5d6cb2f1530ea | [
"MIT"
] | null | null | null | util/color.py | IVRL/AdversaryLossLandscape | 73456788b38bebeac40b833f2ac5d6cb2f1530ea | [
"MIT"
] | null | null | null | import random
global_color_map = {}
def get_color(color_idx):
    if color_idx in global_color_map:
        return global_color_map[color_idx]
    base_color = ['b', 'y', 'c', 'm', 'g', 'r']
    if color_idx < 6:
        global_color_map[color_idx] = base_color[color_idx]
        return base_color[color_idx]
    else:
        # fall back to a random '#rrggbb' hex color once the base palette runs out
        dex = ['0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f']
        ret_color = '#'
        for _ in range(6):
            token_idx = random.randint(0,15)
            ret_color += dex[token_idx]
        global_color_map[color_idx] = ret_color
        return ret_color | 30.55 | 79 | 0.564648 | 94 | 611 | 3.361702 | 0.425532 | 0.202532 | 0.221519 | 0.18038 | 0.265823 | 0.196203 | 0.196203 | 0 | 0 | 0 | 0 | 0.033113 | 0.258592 | 611 | 20 | 80 | 30.55 | 0.664459 | 0 | 0 | 0 | 0 | 0 | 0.037582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
d9453ad89c8316f6505aa2d0df85f0eba9d94487 | 363 | py | Python | instance/manage.py | ESA-VirES/eoxserver-magnetism | 89746756d80f3cfea05305ee0f373c7a2742cde1 | [
"MIT"
] | 1 | 2017-11-21T22:23:45.000Z | 2017-11-21T22:23:45.000Z | instance/manage.py | ESA-VirES/eoxserver-magnetism | 89746756d80f3cfea05305ee0f373c7a2742cde1 | [
"MIT"
] | null | null | null | instance/manage.py | ESA-VirES/eoxserver-magnetism | 89746756d80f3cfea05305ee0f373c7a2742cde1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "instance.settings")
    from django.core.management import execute_from_command_line
    # Initialize the EOxServer component system.
    import eoxserver.core
    eoxserver.core.initialize()
    execute_from_command_line(sys.argv)
| 24.2 | 72 | 0.752066 | 45 | 363 | 5.711111 | 0.622222 | 0.085603 | 0.140078 | 0.171206 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15978 | 363 | 14 | 73 | 25.928571 | 0.842623 | 0.173554 | 0 | 0 | 0 | 0 | 0.157718 | 0.073826 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
d94a47d514957df17bba72e0b18cc8681b80c71c | 3,103 | py | Python | user.py | 12moi/password-locker | 44e7e9cbf2008ee42dbdf80e28f5ba1e5c2be28c | [
"MIT"
] | null | null | null | user.py | 12moi/password-locker | 44e7e9cbf2008ee42dbdf80e28f5ba1e5c2be28c | [
"MIT"
] | null | null | null | user.py | 12moi/password-locker | 44e7e9cbf2008ee42dbdf80e28f5ba1e5c2be28c | [
"MIT"
] | null | null | null |
# (the unused "from collections import UserList" import was removed;
# see the bug fix in delete_user below)


class User:
    '''
    class that generates a new user instance
    '''
    # Empty user list array
    user_list = []

    def __init__(self, firstname, lastname, username, userpassword):
        self.username = username
        self.firstname = firstname
        self.lastname = lastname
        self.password = userpassword

    def save_user(self):
        '''
        save_user method saves a new user object to the user_list
        '''
        User.user_list.append(self)

    @classmethod
    def display_user(cls):
        return cls.user_list

    def delete_user(self):
        '''
        A method that deletes a saved account from the list
        '''
        # bug fix: the original removed from UserList.user_list,
        # an attribute that does not exist
        User.user_list.remove(self)

    @classmethod
    def verify_user(cls, username, password):
        '''
        A method that checks whether the user exists in the user_list
        '''
        # bug fix: the original compared the arguments with themselves
        # (username == username), which is always True
        for user in cls.user_list:
            if user.username == username and user.password == password:
                return user.username
        return ""
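
# Usage sketch (hypothetical values, not from the original file):
#     user = User('Jane', 'Doe', 'jdoe', 'pass123')
#     user.save_user()
#     assert User.verify_user('jdoe', 'pass123') == 'jdoe'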
class Credentials:
    '''
    Credentials class to help create new objects of credentials
    '''
    accounts = []          # saved account credentials ("acounts" typo fixed)
    credentials_list = []  # backing list for the *_credentials methods below
                           # (missing in the original, causing AttributeError)

    def __init__(self, accountname, accountusername, accountpassword):
        '''
        a method that defines the user credentials to be saved
        (the stray @classmethod decorator on __init__ was removed)
        '''
        self.accountname = accountname
        self.accountusername = accountusername
        self.accountpassword = accountpassword

    def save_account(self):
        '''
        this is a method that saves account information
        '''
        Credentials.accounts.append(self)

    def delete_account(self):
        '''
        Deletes saved account credentials
        '''
        Credentials.accounts.remove(self)

    @classmethod
    def display_accounts(cls):
        '''
        this method returns the accounts list
        (the original looped over the list only to return it unchanged)
        '''
        return cls.accounts

    @classmethod
    def find_by_username(cls, username):
        '''
        This method takes in a username and finds an account that matches it
        '''
        for account in cls.accounts:
            if account.accountusername == username:
                return account

    def save_credentials(self):
        '''
        save_credentials method saves a new credential object to credentials_list
        '''
        Credentials.credentials_list.append(self)

    def delete_credentials(self):
        '''
        A method that deletes a saved credential from the list
        '''
        Credentials.credentials_list.remove(self)

    @classmethod
    def find_credentials(cls, account):
        '''
        method that takes an account name and retrieves the credential for it
        (the original compared against a nonexistent .account attribute;
        .accountname is the attribute set in __init__)
        '''
        for credential in cls.credentials_list:
            if credential.accountname == account:
                return credential

    @classmethod
    def display_credentials(cls):
        '''
        A method that returns all the items in the credentials list
        '''
        return cls.credentials_list
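
# Usage sketch (hypothetical values, not from the original file):
#     cred = Credentials('twitter', 'jdoe', 'pass123')
#     cred.save_account()
#     assert Credentials.find_by_username('jdoe') is cred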
| 25.024194 | 81 | 0.594586 | 332 | 3,103 | 5.442771 | 0.222892 | 0.039845 | 0.036525 | 0.019923 | 0.10736 | 0.10736 | 0.10736 | 0.10736 | 0.10736 | 0.10736 | 0 | 0 | 0.33387 | 3,103 | 123 | 82 | 25.227642 | 0.874214 | 0.253303 | 0 | 0.113208 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.264151 | false | 0.113208 | 0.018868 | 0.018868 | 0.471698 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
d971c3a2420c667305fce359ce9cbfcdcbe1b5e6 | 150 | py | Python | test_codes/deneme6.py | Akerdogmus/ake_python_toolkit | f4228b611584a9311a5b08068b75c7486182a15f | [
"MIT"
] | null | null | null | test_codes/deneme6.py | Akerdogmus/ake_python_toolkit | f4228b611584a9311a5b08068b75c7486182a15f | [
"MIT"
] | null | null | null | test_codes/deneme6.py | Akerdogmus/ake_python_toolkit | f4228b611584a9311a5b08068b75c7486182a15f | [
"MIT"
] | null | null | null |
def solution(n):
    # sums the digits of n ("sum" renamed to avoid shadowing the builtin,
    # and the unused enumerate index was dropped)
    total = 0
    print(list(str(n)))
    for digit in str(n):
        total += int(digit)
    return total
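
# e.g. solution(1234) == 10; the call below prints solution(11) == 2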
print(solution(11)) | 16.666667 | 40 | 0.56 | 25 | 150 | 3.36 | 0.64 | 0.095238 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027273 | 0.266667 | 150 | 9 | 41 | 16.666667 | 0.736364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0.285714 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
d9885225dd9f0070ac668d692d08d6fc3c85815d | 295 | py | Python | sprinkler/dict_def.py | shanisma/plant-keeper | 3ca92ae2d55544a301e1398496a08a45cca6d15b | [
"CC0-1.0"
] | 1 | 2020-04-12T22:00:17.000Z | 2020-04-12T22:00:17.000Z | sprinkler/dict_def.py | shanisma/plant-keeper | 3ca92ae2d55544a301e1398496a08a45cca6d15b | [
"CC0-1.0"
] | null | null | null | sprinkler/dict_def.py | shanisma/plant-keeper | 3ca92ae2d55544a301e1398496a08a45cca6d15b | [
"CC0-1.0"
] | null | null | null | from typing import TypedDict
class SprinklerCtrlDict(TypedDict):
    wtl: str     # water tag link
    wvs: int     # water_valve_signal
    fwv: int     # force_water_valve
    fwvs: int    # force_water_valve_signal
    hmin: float  # soil_moisture_min_level
    hmax: float  # soil_moisture_max_level
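
# Usage sketch (hypothetical values, not from the original file): a TypedDict
# instance is a plain dict that static type-checkers can verify field-by-field.
#     ctrl: SprinklerCtrlDict = {
#         'wtl': 'tag-1', 'wvs': 0, 'fwv': 0, 'fwvs': 0,
#         'hmin': 20.0, 'hmax': 60.0,
#     }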
| 26.818182 | 42 | 0.732203 | 40 | 295 | 5.075 | 0.65 | 0.147783 | 0.157635 | 0.17734 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.216949 | 295 | 10 | 43 | 29.5 | 0.878788 | 0.420339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.125 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 3 |
7959489d41e9f1311f07cc1728e0d73086e3d127 | 602 | py | Python | src/PYnative/exercise/Random Data Generation/Q6.py | c-w-m/learning_python | 8f06aa41faf9195d978a7d21cbb329280b0d3200 | [
"CNRI-Python"
] | null | null | null | src/PYnative/exercise/Random Data Generation/Q6.py | c-w-m/learning_python | 8f06aa41faf9195d978a7d21cbb329280b0d3200 | [
"CNRI-Python"
] | null | null | null | src/PYnative/exercise/Random Data Generation/Q6.py | c-w-m/learning_python | 8f06aa41faf9195d978a7d21cbb329280b0d3200 | [
"CNRI-Python"
] | null | null | null | # Generate a random Password which meets the following conditions
# Password length must be 10 characters long.
# It must contain at least 2 upper case letter, 2 digits, and 2 special symbols.
# My Solution
import random
import string
source = string.ascii_letters + string.digits + string.punctuation
password = random.choices(string.ascii_uppercase, k=2)
password += random.choices(string.digits, k=2)
password += random.choices(string.punctuation, k=2)
for _ in range(4):
    password += random.choice(source)
random.SystemRandom().shuffle(password)
password = ''.join(password)
print(password)
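
# Sanity-check sketch (not part of the original exercise solution): the result
# is always 10 characters with at least 2 uppercase letters, 2 digits, and
# 2 special symbols, because shuffling does not change character counts.
#     assert len(password) == 10
#     assert sum(c.isupper() for c in password) >= 2
#     assert sum(c.isdigit() for c in password) >= 2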
| 30.1 | 80 | 0.769103 | 85 | 602 | 5.423529 | 0.564706 | 0.121475 | 0.136659 | 0.175705 | 0.125813 | 0.125813 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 0.13289 | 602 | 19 | 81 | 31.684211 | 0.8659 | 0.328904 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.636364 | 0.181818 | 0 | 0.181818 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 |
7960b1bb22655a09ba0d11b6cd22f4da89ea0a74 | 619 | py | Python | tests/test_CoreCli.py | pbianchi/climatic | dc0fa65640e9b8161d07b10c73245c33244124b9 | [
"MIT"
] | 12 | 2021-03-08T13:22:13.000Z | 2022-02-10T01:02:41.000Z | tests/test_CoreCli.py | pbianchi/climatic | dc0fa65640e9b8161d07b10c73245c33244124b9 | [
"MIT"
] | 1 | 2022-02-03T22:59:33.000Z | 2022-02-03T23:46:42.000Z | tests/test_CoreCli.py | pbianchi/climatic | dc0fa65640e9b8161d07b10c73245c33244124b9 | [
"MIT"
] | 2 | 2021-10-18T01:38:31.000Z | 2022-01-26T23:19:21.000Z | import pexpect
import pytest
import re
from expects import *
from unittest.mock import MagicMock
from unittest.mock import Mock
from climatic.CoreCli import CoreCli
def test_core_cli_constructor_destructor(core_cli):
    connection = Mock()
    cmd = core_cli(connection)
    del cmd
    connection.connect.assert_called_once()
    connection.disconnect.assert_called_once()


@pytest.fixture
def core_cli():
    class CoreCliExtension(CoreCli):
        def login(self):
            pass

        def logout(self):
            pass

        def _get_prompt_size(self):
            return 3

    return CoreCliExtension
| 21.344828 | 51 | 0.705977 | 75 | 619 | 5.64 | 0.493333 | 0.066194 | 0.07565 | 0.104019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002105 | 0.232633 | 619 | 28 | 52 | 22.107143 | 0.888421 | 0 | 0 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.217391 | false | 0.086957 | 0.304348 | 0.043478 | 0.652174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 3 |
79677c524299a769c716b175a2f8065776e0f73a | 390 | py | Python | Flask/NationalEducationRadio/NationalEducationRadio/models/form/CaptionForm.py | Jessieluu/WIRL_national_education_radio | edb8b63c25bc7bd5a9a7d074173f02913971f8a7 | [
"MIT"
] | null | null | null | Flask/NationalEducationRadio/NationalEducationRadio/models/form/CaptionForm.py | Jessieluu/WIRL_national_education_radio | edb8b63c25bc7bd5a9a7d074173f02913971f8a7 | [
"MIT"
] | null | null | null | Flask/NationalEducationRadio/NationalEducationRadio/models/form/CaptionForm.py | Jessieluu/WIRL_national_education_radio | edb8b63c25bc7bd5a9a7d074173f02913971f8a7 | [
"MIT"
] | null | null | null | from flask.ext.wtf import Form
from wtforms import StringField, HiddenField
from wtforms.validators import DataRequired
from wtforms.widgets import TextArea
class CaptionForm(Form):
    """
    Form used to submit the caption/keywords
    """
    caption_id = HiddenField(validators=[DataRequired(message="An ID is required")])
    caption_content = StringField(widget=TextArea(), validators=[DataRequired(message="Please provide the transcript")])
| 26 | 97 | 0.75641 | 42 | 390 | 6.97619 | 0.547619 | 0.112628 | 0.197952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141026 | 390 | 14 | 98 | 27.857143 | 0.874627 | 0.023077 | 0 | 0 | 0 | 0 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
797c03669cc0f2003183b6b24c272805b0656fb0 | 30 | py | Python | ChatConnector/Config/discordConfig.py | Micadurp/SHODAN | c54b3d1f58c9f54dd6ba3031ef4a1d30032be5f7 | [
"MIT"
] | null | null | null | ChatConnector/Config/discordConfig.py | Micadurp/SHODAN | c54b3d1f58c9f54dd6ba3031ef4a1d30032be5f7 | [
"MIT"
] | null | null | null | ChatConnector/Config/discordConfig.py | Micadurp/SHODAN | c54b3d1f58c9f54dd6ba3031ef4a1d30032be5f7 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
TOKEN = ''
| 10 | 18 | 0.6 | 4 | 30 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 0.133333 | 30 | 2 | 19 | 15 | 0.653846 | 0.566667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
798014170008bb2859089c82cd78b35c6631add4 | 627 | py | Python | python_helpers/opcode_7xxx.py | RandomBananazz/chip8mc | 0e184c392a523c82dbc945325aa2cb9e5487e5e7 | [
"MIT"
] | 3 | 2020-09-28T17:50:49.000Z | 2020-12-30T18:23:46.000Z | python_helpers/opcode_7xxx.py | RandomBananazz/chip8mc | 0e184c392a523c82dbc945325aa2cb9e5487e5e7 | [
"MIT"
] | null | null | null | python_helpers/opcode_7xxx.py | RandomBananazz/chip8mc | 0e184c392a523c82dbc945325aa2cb9e5487e5e7 | [
"MIT"
] | null | null | null | for x in range(16):
    with open(f'..\\data\\cpu\\functions\\opcode_switch\\opcode_7xxx\\opcode_7xxx_{x}.mcfunction', 'w') as f:
        f.write(f'scoreboard players operation Global V{hex(x)[2:].upper()} += Global PC_nibble_4\n')
        f.write(f'execute if score Global V{hex(x)[2:].upper()} matches 256.. run scoreboard players remove Global V{hex(x)[2:].upper()} 256\n')
"""
for x in range(16):
    with open('..\\data\\cpu\\functions\\opcode_switch\\opcode_7xxx.mcfunction', 'a') as f:
        f.write(f'execute if score Global PC_nibble_2 matches {x} run function cpu:opcode_switch/opcode_7xxx/opcode_7xxx_{x}\n')
"""
| 57 | 144 | 0.677831 | 106 | 627 | 3.877358 | 0.367925 | 0.121655 | 0.131387 | 0.160584 | 0.666667 | 0.635037 | 0.525547 | 0 | 0 | 0 | 0 | 0.0369 | 0.135566 | 627 | 10 | 145 | 62.7 | 0.721402 | 0 | 0 | 0 | 0 | 0.25 | 0.754617 | 0.377309 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
798e80c7ea92312be89cba8f05d60d3199c9d711 | 793 | py | Python | cluster/silhouette.py | BMI203-2022/project5 | c2361a1b9e3a7295068205fecee39c405de324bb | [
"MIT"
] | null | null | null | cluster/silhouette.py | BMI203-2022/project5 | c2361a1b9e3a7295068205fecee39c405de324bb | [
"MIT"
] | null | null | null | cluster/silhouette.py | BMI203-2022/project5 | c2361a1b9e3a7295068205fecee39c405de324bb | [
"MIT"
] | 20 | 2022-01-31T20:09:57.000Z | 2022-02-15T03:17:27.000Z | import numpy as np
from scipy.spatial.distance import cdist
class Silhouette:
    def __init__(self, metric: str = "euclidean"):
        """
        inputs:
            metric: str
                the name of the distance metric to use
        """
        self.metric = metric  # store the metric (the skeleton discarded it)

    def score(self, X: np.ndarray, y: np.ndarray) -> np.ndarray:
        """
        calculates the silhouette score for each of the observations
        inputs:
            X: np.ndarray
                A 2D matrix where the rows are observations and columns are features.
            y: np.ndarray
                a 1D array representing the cluster labels for each of the observations in `X`
        outputs:
            np.ndarray
                a 1D array with the silhouette scores for each of the observations in `X`
        """
        # Straightforward reference implementation filled in for the skeleton
        # (assumes at least two clusters): s(i) = (b - a) / max(a, b), with
        # a = mean intra-cluster distance, b = mean distance to nearest other cluster.
        dist = cdist(X, X, metric=self.metric)
        scores = np.zeros(X.shape[0])
        for i in range(X.shape[0]):
            same = y == y[i]
            same[i] = False  # exclude the point itself
            if not same.any():  # singleton cluster: define s(i) = 0
                continue
            a = dist[i, same].mean()
            b = min(dist[i, y == label].mean() for label in np.unique(y) if label != y[i])
            scores[i] = (b - a) / max(a, b)
        return scores
| 27.344828 | 94 | 0.576293 | 99 | 793 | 4.575758 | 0.494949 | 0.119205 | 0.059603 | 0.07947 | 0.247241 | 0.119205 | 0.119205 | 0 | 0 | 0 | 0 | 0.005929 | 0.361917 | 793 | 28 | 95 | 28.321429 | 0.889328 | 0.567465 | 0 | 0 | 0 | 0 | 0.042056 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.4 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
79a4ab842f79f3c8da6ebe9646fe6b242e9a039d | 3,760 | py | Python | common/migrations/0031_auto_20210805_1214.py | jordanm88/Django-CRM | 5faf22acb30aeb32f5830898fd5d8ecd1ac0bbd8 | [
"MIT"
] | 1,334 | 2017-06-04T07:47:14.000Z | 2022-03-30T17:12:37.000Z | common/migrations/0031_auto_20210805_1214.py | AhmedDoudou/Django-CRM-1 | 5faf22acb30aeb32f5830898fd5d8ecd1ac0bbd8 | [
"MIT"
] | 317 | 2017-06-04T07:48:13.000Z | 2022-03-29T19:24:26.000Z | common/migrations/0031_auto_20210805_1214.py | AhmedDoudou/Django-CRM-1 | 5faf22acb30aeb32f5830898fd5d8ecd1ac0bbd8 | [
"MIT"
] | 786 | 2017-06-06T09:18:48.000Z | 2022-03-29T01:29:29.000Z | # Generated by Django 3.2.5 on 2021-08-05 06:44
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('common', '0030_alter_user_role'),
    ]

    operations = [
        migrations.RenameField(
            model_name='user',
            old_name='type',
            new_name='user_type',
        ),
        migrations.AlterField(
            model_name='address',
            name='address_line',
            field=models.CharField(blank=True, max_length=255, null=True, verbose_name='Address'),
        ),
        migrations.AlterField(
            model_name='address',
            name='city',
            field=models.CharField(blank=True, max_length=255, null=True, verbose_name='City'),
        ),
        migrations.AlterField(
            model_name='address',
            name='postcode',
            field=models.CharField(blank=True, max_length=64, null=True, verbose_name='Post/Zip-code'),
        ),
        migrations.AlterField(
            model_name='address',
            name='state',
            field=models.CharField(blank=True, max_length=255, null=True, verbose_name='State'),
        ),
        migrations.AlterField(
            model_name='address',
            name='street',
            field=models.CharField(blank=True, max_length=55, null=True, verbose_name='Street'),
        ),
        migrations.AlterField(
            model_name='apisettings',
            name='website',
            field=models.URLField(max_length=255, null=True),
        ),
        migrations.AlterField(
            model_name='comment_files',
            name='comment_file',
            field=models.FileField(null=True, upload_to='comment_files', verbose_name='File'),
        ),
        migrations.AlterField(
            model_name='company',
            name='address',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='company',
            name='name',
            field=models.CharField(blank=True, max_length=100, null=True),
        ),
        migrations.AlterField(
            model_name='document',
            name='title',
            field=models.TextField(blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='dob',
            field=models.CharField(max_length=50, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='email',
            field=models.CharField(db_index=True, max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='family_name',
            field=models.CharField(max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='gender',
            field=models.CharField(max_length=10, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='given_name',
            field=models.CharField(max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='google_id',
            field=models.CharField(max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='google_url',
            field=models.TextField(null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='name',
            field=models.CharField(max_length=200, null=True),
        ),
        migrations.AlterField(
            model_name='google',
            name='verified_email',
            field=models.CharField(max_length=200, null=True),
        ),
    ]
| 32.982456 | 103 | 0.55133 | 365 | 3,760 | 5.520548 | 0.219178 | 0.08933 | 0.235732 | 0.273449 | 0.702233 | 0.665509 | 0.504218 | 0.447643 | 0.352854 | 0.352854 | 0 | 0.023669 | 0.325798 | 3,760 | 113 | 104 | 33.274336 | 0.771203 | 0.011968 | 0 | 0.598131 | 1 | 0 | 0.100189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.009346 | 0 | 0.037383 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
79a54dbb12e5a05fd4c9284d99241f5be375fd77 | 2,495 | py | Python | tests/operators/test_defuzz_som.py | amirrr/pyfuzzy | 97e88f7b014e9791fb0a3d07d0727867d27ea9d3 | [
"Apache-2.0"
] | 9 | 2019-04-11T07:03:04.000Z | 2021-05-12T13:01:53.000Z | tests/operators/test_defuzz_som.py | amirrr/pyfuzzy | 97e88f7b014e9791fb0a3d07d0727867d27ea9d3 | [
"Apache-2.0"
] | null | null | null | tests/operators/test_defuzz_som.py | amirrr/pyfuzzy | 97e88f7b014e9791fb0a3d07d0727867d27ea9d3 | [
"Apache-2.0"
] | 13 | 2019-04-07T19:19:03.000Z | 2019-08-20T11:53:23.000Z | import unittest
from pyfuzzy.operators import defuzz_som
class DefuzzLomTestCase(unittest.TestCase):

    # Test input type - Input argument should be a dictionary.
    def test_defuzz_som_1(self):
        test = [1, 2, 3]
        self.assertRaises(TypeError, lambda: defuzz_som.defuzz_som(test))

    # Test input type - Input argument should be a dictionary.
    def test_defuzz_som_2(self):
        test = [[1], [2], [3]]
        self.assertRaises(TypeError, lambda: defuzz_som.defuzz_som(test))

    # Test input type - Input argument should be a dictionary.
    def test_defuzz_som_3(self):
        test = 0.1
        self.assertRaises(TypeError, lambda: defuzz_som.defuzz_som(test))

    # Test input size - Dictionary should have at least one item.
    def test_defuzz_som_4(self):
        test = {}
        self.assertRaises(ValueError, lambda: defuzz_som.defuzz_som(test))

    # Test key type - Keys of the dictionary should be ints.
    def test_defuzz_som_5(self):
        test = {1.0: 0.1, 2.0: 0.2, 3.0: 0.3}
        self.assertRaises(TypeError, lambda: defuzz_som.defuzz_som(test))

    # Test value type - Values of the dictionary should be floats or ints.
    def test_defuzz_som_6(self):
        test = {1: '0.1', 2: '0.2', 3: '0.3'}
        self.assertRaises(TypeError, lambda: defuzz_som.defuzz_som(test))

    # Test value type - Values of the dictionary should be floats or ints.
    def test_defuzz_som_7(self):
        test = {1: [0.2], 2: [0.2], 3: [0.1]}
        self.assertRaises(TypeError, lambda: defuzz_som.defuzz_som(test))

    # Test value range - Values should be between 0 and 1.
    def test_defuzz_som_8(self):
        test = {1: 2, 2: 3.5, 3: -1}
        self.assertRaises(ValueError, lambda: defuzz_som.defuzz_som(test))

    # Test 1 - return the smallest item with the largest value
    def test_defuzz_som_9(self):
        test = {1: 0.5, 2: 0.3, 3: 0.85, 4: 0.35}
        self.assertEqual(defuzz_som.defuzz_som(test), 3)

    # Test 2 - return the smallest item with the largest value
    def test_defuzz_som_10(self):
        test = {0: 0, 1: 0.3, 2: 0.3, 3: 0.3, 4: 0.5, 5: 0.5, 6: 1, 7: 1, 8: 0}
        self.assertEqual(defuzz_som.defuzz_som(test), 6)

    # Test 3 - return the smallest item with the largest value
    def test_defuzz_som_11(self):
        test = {0: 0, 1: 0.8, 2: 0.2, 3: 0.8, 4: 0.8, 5: 0.5, 6: 0.5, 7: 0.2, 8: 0.2, 9: 0.2, 10: 0, 11: 0.8}
        self.assertEqual(defuzz_som.defuzz_som(test), 1)
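
# A sketch of the behavior these tests pin down, inferred from the assertions
# above (pyfuzzy's actual implementation may differ): smallest-of-maxima
# defuzzification returns the smallest int key whose membership is maximal,
# after validating types and the [0, 1] value range.
def _defuzz_som_sketch(fuzzy_set):
    if not isinstance(fuzzy_set, dict):
        raise TypeError("input must be a dictionary")
    if not fuzzy_set:
        raise ValueError("dictionary must contain at least one item")
    for key, value in fuzzy_set.items():
        if not isinstance(key, int):
            raise TypeError("keys must be int")
        if not isinstance(value, (int, float)):
            raise TypeError("values must be int or float")
        if not 0 <= value <= 1:
            raise ValueError("values must lie in [0, 1]")
    maximum = max(fuzzy_set.values())
    return min(key for key, value in fuzzy_set.items() if value == maximum)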
# Run all unittests
if __name__ == '__main__':
    unittest.main()
| 37.238806 | 109 | 0.639279 | 412 | 2,495 | 3.716019 | 0.15534 | 0.199869 | 0.093403 | 0.114958 | 0.746571 | 0.705421 | 0.689745 | 0.617244 | 0.617244 | 0.617244 | 0 | 0.072737 | 0.234068 | 2,495 | 66 | 110 | 37.80303 | 0.728414 | 0.2501 | 0 | 0.210526 | 0 | 0 | 0.00915 | 0 | 0 | 0 | 0 | 0 | 0.289474 | 1 | 0.289474 | false | 0 | 0.052632 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
79b70ecee41b06171313bee26027ac119e9c3b38 | 738 | py | Python | atom_deb-latest.py | pierreduchemin/install_scripts | 87dd42ab54aa36378fc05486b6c43d4fe02ecfd7 | [
"Apache-2.0"
] | 1 | 2017-03-21T13:10:23.000Z | 2017-03-21T13:10:23.000Z | atom_deb-latest.py | pierreduchemin/install_scripts | 87dd42ab54aa36378fc05486b6c43d4fe02ecfd7 | [
"Apache-2.0"
] | null | null | null | atom_deb-latest.py | pierreduchemin/install_scripts | 87dd42ab54aa36378fc05486b6c43d4fe02ecfd7 | [
"Apache-2.0"
] | 2 | 2018-10-11T11:57:22.000Z | 2021-10-07T13:45:18.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import urllib.request
import os, stat
import subprocess
print("Downloading atom...")
urllib.request.urlretrieve("https://atom.io/download/deb", "/tmp/atom-amd64.deb")
os.chmod("/tmp/atom-amd64.deb", 755)
subprocess.call(["sudo", "-S", "dpkg", "-i", "/tmp/atom-amd64.deb"])
os.remove("/tmp/atom-amd64.deb")
print("Installing atom plugins...")
subprocess.call(["apm", "install", "pretty-json"])
subprocess.call(["apm", "install", "platformio-ide-terminal"])
subprocess.call(["apm", "install", "pandoc-convert"])
subprocess.call(["apm", "install", "language-javascript-jsx"])
subprocess.call(["apm", "install", "atom-typescript"])
subprocess.call(["apm", "install", "intellij-idea-keymap"])
| 35.142857 | 81 | 0.688347 | 96 | 738 | 5.291667 | 0.510417 | 0.192913 | 0.200787 | 0.283465 | 0.066929 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018895 | 0.067751 | 738 | 20 | 82 | 36.9 | 0.719477 | 0.058266 | 0 | 0 | 0 | 0 | 0.471861 | 0.066378 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.133333 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
79cb802a0934fb13913b1168d3ece2407767d505 | 50 | py | Python | hmr/__init__.py | Mr-Milk/python-hmr | 1a71f3413ea2374afc27919031db02e09f0f6b75 | [
"MIT"
] | 8 | 2021-01-20T13:28:23.000Z | 2021-08-20T21:35:46.000Z | hmr/__init__.py | Mr-Milk/python-hmr | 1a71f3413ea2374afc27919031db02e09f0f6b75 | [
"MIT"
] | 5 | 2022-02-07T14:54:50.000Z | 2022-03-01T20:19:19.000Z | hmr/__init__.py | Mr-Milk/python-hmr | 1a71f3413ea2374afc27919031db02e09f0f6b75 | [
"MIT"
] | null | null | null | __all__ = ["Reloader"]
from .api import Reloader
| 12.5 | 25 | 0.72 | 6 | 50 | 5.333333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 50 | 3 | 26 | 16.666667 | 0.761905 | 0 | 0 | 0 | 0 | 0 | 0.16 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8dbb579a608c0e6cabd5b059b9d8539d9c061fa1 | 294 | py | Python | app/api/api.py | ridhanf/python-fastapi-challenge | 041a2156c222dbd84805d6b6ee1d9b88b8227db3 | [
"MIT"
] | null | null | null | app/api/api.py | ridhanf/python-fastapi-challenge | 041a2156c222dbd84805d6b6ee1d9b88b8227db3 | [
"MIT"
] | null | null | null | app/api/api.py | ridhanf/python-fastapi-challenge | 041a2156c222dbd84805d6b6ee1d9b88b8227db3 | [
"MIT"
] | null | null | null | from fastapi import APIRouter
from app.api.endpoints import user_controller, course_controller
api_router = APIRouter()
api_router.include_router(user_controller.router, prefix="/users", tags=["users"])
api_router.include_router(course_controller.router, prefix="/courses", tags=["courses"])
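
# Usage sketch (assumed application module, not part of this file): the
# aggregated router is typically mounted on the FastAPI instance like this:
#     from fastapi import FastAPI
#     app = FastAPI()
#     app.include_router(api_router, prefix="/api")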
| 36.75 | 88 | 0.809524 | 38 | 294 | 6.026316 | 0.421053 | 0.117904 | 0.139738 | 0.19214 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068027 | 294 | 7 | 89 | 42 | 0.835766 | 0 | 0 | 0 | 0 | 0 | 0.088435 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
8de949adc97a823bbea1b557333b0752027c0066 | 457 | py | Python | drone-wrapper/users/views.py | 2022-capstone-design-KPUCS/capstone-drone-backend | b56155a44eb7d0353806d534cb4c5220363a68b0 | [
"MIT"
] | null | null | null | drone-wrapper/users/views.py | 2022-capstone-design-KPUCS/capstone-drone-backend | b56155a44eb7d0353806d534cb4c5220363a68b0 | [
"MIT"
] | null | null | null | drone-wrapper/users/views.py | 2022-capstone-design-KPUCS/capstone-drone-backend | b56155a44eb7d0353806d534cb4c5220363a68b0 | [
"MIT"
] | 2 | 2022-02-11T09:02:41.000Z | 2022-02-20T12:50:59.000Z | from rest_framework.decorators import action
from rest_framework import viewsets
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response
from .models import User
from .permissions import IsUserOrReadOnly
from .serializers import UserSerializer
class UserViewSet(viewsets.ModelViewSet):
    queryset = User.objects.all()
    serializer_class = UserSerializer
    # bug fix: DRF expects the plural attribute name; the original
    # "permission_class" (singular) would be silently ignored
    permission_classes = (IsUserOrReadOnly,) | 32.642857 | 64 | 0.835886 | 50 | 457 | 7.52 | 0.48 | 0.085106 | 0.180851 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.118162 | 457 | 14 | 65 | 32.642857 | 0.933002 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.636364 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3
8df969118d302f79d52bd1fab665ec1b756561e7 | 731 | py | Python | setup.py | PropertyGuard/psycopg2_numpy_ext | c42acd97e3ac5c784b205555d8828a8c6efd6eab | [
"MIT"
] | 4 | 2015-08-20T16:48:56.000Z | 2019-12-17T16:14:26.000Z | setup.py | PropertyGuard/psycopg2_numpy_ext | c42acd97e3ac5c784b205555d8828a8c6efd6eab | [
"MIT"
] | null | null | null | setup.py | PropertyGuard/psycopg2_numpy_ext | c42acd97e3ac5c784b205555d8828a8c6efd6eab | [
"MIT"
] | 1 | 2019-12-17T16:15:24.000Z | 2019-12-17T16:15:24.000Z | import setuptools
setuptools.setup(
    name="psycopg2_numpy_ext",
    version="0.1.0",
    url="https://github.com/musically-ut/psycopg2-numpy-ext",
    author="Utkarsh Upadhyay",
    author_email="musically.ut@gmail.com",
    description="Adapters for Numpy's types for Psycopg2.",
    long_description=open('README.rst').read(),
    packages=setuptools.find_packages(),
    install_requires=['numpy', 'psycopg2'],
    classifiers=[
        'Development Status :: 2 - Pre-Alpha',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
    ],
)
| 27.074074 | 61 | 0.634747 | 79 | 731 | 5.797468 | 0.594937 | 0.207424 | 0.272926 | 0.113537 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024263 | 0.21067 | 731 | 26 | 62 | 28.115385 | 0.769497 | 0 | 0 | 0 | 0 | 0 | 0.52394 | 0.030096 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
5c169c237c698e84466c60958aec7d8ede0d8b54 | 30,090 | py | Python | dd_pseudolabels.py | Ophir-Gal/ssl-descent | 3bff63a3d00cd2d63c549551c1a086f689a94ed6 | [
"MIT"
] | null | null | null | dd_pseudolabels.py | Ophir-Gal/ssl-descent | 3bff63a3d00cd2d63c549551c1a086f689a94ed6 | [
"MIT"
] | null | null | null | dd_pseudolabels.py | Ophir-Gal/ssl-descent | 3bff63a3d00cd2d63c549551c1a086f689a94ed6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""DD-pseudolabels.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1eYKQIIS4i7RPTeAFX5SgIZmG3S2jtnr3
## Double Descent Risk Curve for Neural Networks
The goal of this experiment is to analyze error/loss curve for Resnet18 trained using pseudo-labeling method with CIFAR-10.
- models: resnet18
- dataset: CIFAR-10
- learning algorithm: standard supervised, semi-supervised(pseudo-labeling)
- output : model complexity (number of parameters, epochs) vs. test error/loss
---
- hypothesis: in both supervised and semi-supervised, we should get Double Descent phenomenon, which is defined by
- having a U-shaped curve before the interpolation threshold (under-parameterized)
- peaking at the threshold
- decreasing again in the over-parameterized regime
"""
# Commented out IPython magic to ensure Python compatibility.
from google.colab import drive
drive.mount('/gdrive')
# %matplotlib inline
import torch
from torch.utils.data import DataLoader, Subset, random_split
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms
from torchvision.models.resnet import ResNet, BasicBlock
from torchvision.datasets import CIFAR10, MNIST
from torchvision.transforms import ToTensor
import numpy as np
import pickle, time, json
import matplotlib.pyplot as plt
from PIL import Image
torch.manual_seed(0)
np.random.seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
"""Define some functions"""
def alpha_weight(epoch, T1, T2, af):
    """ calculate value of alpha used in loss based on the epoch
    params:
        - epoch: your current epoch
        - T1: threshold for training with only labeled data
        - T2: threshold for training with only unlabeled data
        - af: max alpha value
    """
    if epoch < T1:
        return 0.0
    elif epoch > T2:
        return af
    else:
        return ((epoch - T1) / (T2 - T1)) * af
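
# Worked example of the schedule above (values follow directly from the code):
# with T1=20, T2=150, af=3.1 the unlabeled loss gets zero weight before epoch
# 20, full weight 3.1 after epoch 150, and ramps linearly in between, e.g.
#     alpha_weight(85, 20, 150, 3.1) == 1.55   # the midpoint of the ramp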
def evaluate(model, data_loader, b):
    """ evaluate the error and loss of the trained network on held-out data
    returns:
        - (test_error, average test loss)  # note: error rate, not accuracy
    params:
        - model: the network to evaluate
        - data_loader: loader over the evaluation set
        - b: flooding level passed through to loss_fn
    """
    correct = 0
    total = 0
    running_loss = 0
    model.eval()
    with torch.no_grad():
        for data, labels in data_loader:
            data = data.to(device)
            labels = labels.to(device)
            output = model(data)
            predicted = torch.max(output, 1)[1]
            correct += (predicted == labels).sum()
            total += data.shape[0]
            loss = loss_fn(output, labels, b)
            running_loss += loss.item()
    test_error = 1 - correct/total
    return test_error, running_loss/len(data_loader)
def loss_fn(outputs, labels, b):
    criterion = nn.CrossEntropyLoss()
    return (criterion(outputs, labels) - b).abs() + b
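
# Note on the form above: this is the "flooding" regularizer (Ishida et al.,
# 2020). Once the cross-entropy drops below the flood level b, (loss - b) goes
# negative and the absolute value flips the gradient direction, so the training
# loss hovers around b instead of collapsing to zero.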
def train_semisuper(epochs, model, optimizer, train_loader, unlabeled_loader, test_loader, b):
    """train model on labeled and unlabeled data using pseudo-labels/self-training
    returns:
        - metrics: per-epoch [train_error, test_error, train_loss, test_loss]
    params:
        - epochs: total epochs for training
        - model: network to train
        - optimizer: optimizer over model.parameters()
        - train_loader: labeled training data
        - unlabeled_loader: unlabeled training data (labels used only to report pseudo-label accuracy)
        - test_loader: held-out evaluation data
        - b: flooding level
    """
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    metrics = []
    start_ts = time.time()
    T1 = 20   # no contribution of unlabeled data
    T2 = 150  # less contribution of unlabeled data
    af = 3.1  # Instead of using current epoch we use a "step" variable to calculate alpha_weight. This helps the model converge faster
    # step 1: pre-train teacher model for T1 epochs only on labeled data
    print("START SUPERVISED LEARNING")
    train_super(T1, model, optimizer, train_loader, test_loader, b)
    model.train()
    print("START SEMI-SUPERVISED LEARNING")
    # step 2: train model on both labeled and pseudo-labeled unlabeled data
    for epoch in range(epochs):
        running_loss = 0
        unlabeled_correct = 0
        unlabeled_total = 0
        labeled_correct = 0
        labeled_total = 0
        # generate pseudo labels and train model on unlabeled data
        for (x_unlabeled, y_unlabeled), (x_labeled, y_labeled) in zip(unlabeled_loader, train_loader):
            # generate the pseudo labels (changed by every weight update) for unlabeled images
            x_unlabeled, y_unlabeled = x_unlabeled.to(device), y_unlabeled.to(device)
            x_labeled, y_labeled = x_labeled.to(device), y_labeled.to(device)
            output_unlabeled = model(x_unlabeled)
            _, pseudo_labels = torch.max(output_unlabeled, 1)  # even if the absolute confidence is low, the max class becomes the label
            unlabeled_loss = loss_fn(output_unlabeled, pseudo_labels, b)
            unlabeled_total += x_unlabeled.shape[0]
            unlabeled_correct += (pseudo_labels == y_unlabeled).sum()
            output_labeled = model(x_labeled)
            _, predicted = torch.max(output_labeled, 1)
            labeled_loss = loss_fn(output_labeled, y_labeled, b)
            labeled_total += x_labeled.shape[0]
            labeled_correct += (predicted == y_labeled).sum()
            total_loss = alpha_weight(epoch, T1, T2, af)*unlabeled_loss + labeled_loss
            optimizer.zero_grad()
            total_loss.backward()
            optimizer.step()
            running_loss += total_loss.item()
        print('pseudo label accuracy={}'.format(unlabeled_correct/unlabeled_total))
        train_loss = running_loss/len(train_loader)
        correct = unlabeled_correct + labeled_correct
        total = unlabeled_total + labeled_total
        train_error = 1 - correct/total
        test_error, test_loss = evaluate(model, test_loader, b)
        metrics.append([train_error.item(), test_error.item(), train_loss, test_loss])
        print('Epoch: {} : Alpha Weights : {:.3f}, Train Error : {:.3f}, Train Loss : {:.3f} | Test Error : {:.3f}, Test Loss : {:.3f} '.format(epoch+1, alpha_weight(epoch, T1, T2, af), train_error, train_loss, test_error, test_loss))
        model.train()
    print("minutes elapsed: {:.3f}".format((time.time() - start_ts)/60))
    return metrics
def train_super(epochs, model, optimizer, train_loader, test_loader, b):
    """train model in a standard supervised way
    returns:
        - metrics: per-epoch [train_error, test_error, train_loss, test_loss]
    params:
        - epochs: total epochs for training
        - model: network to train
        - optimizer: optimizer over model.parameters()
        - train_loader: labeled training data
        - test_loader: held-out evaluation data
        - b: flooding level
    """
    device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    metrics = []
    start_ts = time.time()
    model.train()
    # for each epoch
    for epoch in range(epochs):
        running_loss = 0
        total = 0
        correct = 0
        # for each batch
        for batch_idx, (data, labels) in enumerate(train_loader):
            data, labels = data.to(device), labels.to(device)
            output = model(data)
            _, predicted = torch.max(output, 1)
            correct += (predicted == labels).sum()
            total += data.shape[0]
            loss = loss_fn(output, labels, b)
            running_loss += loss.item()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        train_loss = running_loss/len(train_loader)
        train_error = 1 - correct/total
        test_error, test_loss = evaluate(model, test_loader, b)
        metrics.append([train_error.item(), test_error.item(), train_loss, test_loss])
        print('Epoch: {} : Train Error : {:.3f}, Train Loss : {:.3f} | Test Error : {:.3f}, Test Loss : {:.3f} '.format(epoch+1, train_error, train_loss, test_error, test_loss))
        model.train()
    print("minutes elapsed: {:.3f}".format((time.time() - start_ts)/60))
    return metrics
"""Define your neural net model"""
class PreActBlock(nn.Module):
    expansion = 1

    def __init__(self, in_planes, planes, stride=1, **kwargs):
        super(PreActBlock, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        if stride != 1 or in_planes != self.expansion * planes:
            self.shortcut = nn.Sequential(nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False))

    def forward(self, x):
        out = F.relu(self.bn1(x))
        shortcut = self.shortcut(out) if hasattr(self, 'shortcut') else x
        out = self.conv1(out)
        out = self.conv2(F.relu(self.bn2(out)))
        out += shortcut
        return out
class PreActResNet(nn.Module):
    def __init__(self, block, num_blocks, num_classes=10, init_channels=64):
        super(PreActResNet, self).__init__()
        self.in_planes = init_channels
        c = init_channels
        self.conv1 = nn.Conv2d(3, c, kernel_size=3, stride=1, padding=1, bias=False)
        self.layer1 = self._make_layer(block, c, num_blocks[0], stride=1)
        self.layer2 = self._make_layer(block, 2*c, num_blocks[1], stride=2)
        self.layer3 = self._make_layer(block, 4*c, num_blocks[2], stride=2)
        self.layer4 = self._make_layer(block, 8*c, num_blocks[3], stride=2)
        self.dropout = nn.Dropout(0.5)
        self.linear = nn.Linear(8 * c * block.expansion, num_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        # eg: [2, 1, 1, ..., 1]. Only the first one downsamples.
        strides = [stride] + [1] * (num_blocks-1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes * block.expansion
        return nn.Sequential(*layers)

    def forward(self, x):
        out = self.conv1(x)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = F.avg_pool2d(out, 4)
        out = out.view(out.size(0), -1)
        out = self.dropout(out)
        out = self.linear(out)
        return out
class MNISTResNet(PreActResNet):
    # note: the backbone's first conv expects 3 input channels, so 1-channel
    # MNIST tensors would need to be expanded to 3 channels before use
    def __init__(self, n_classes, k):
        super(MNISTResNet, self).__init__(PreActBlock, [2, 2, 2, 2], num_classes=n_classes, init_channels=k)


class CIFARResNet(PreActResNet):
    def __init__(self, n_classes, k):
        super(CIFARResNet, self).__init__(PreActBlock, [2, 2, 2, 2], num_classes=n_classes, init_channels=k)
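
# Quick width-scaling sanity check (illustrative, commented out so the notebook
# flow is unchanged): the parameter count grows roughly quadratically in k,
# since each conv's weight tensor scales with in_channels * out_channels.
#     for k in (1, 8, 64):
#         net = CIFARResNet(10, k)
#         print(k, sum(p.numel() for p in net.parameters()))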
"""#Supervised Experiments
### Experiment 1 - Model-wise DD (Finished!)
"""
basic_setting = {
    'k': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 40, 48, 56, 64],
    'epochs': 300,
    'label_noise': 0.15,
    'n_batch': 128,
    'n_classes': 10,
    'lr': 1e-4,
    'b': 0.15,
    'n_labeled': (10000, 40000),
    'augmentation': True
}
super_results = {}
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/Supervised Experiments/super_model_64.json'
open_file = open(file_name, "w")
# define transformations for training and test set
if basic_setting['augmentation']:
    transform_cifar = transforms.Compose([transforms.ToTensor(), transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()])
else:
    transform_cifar = transforms.Compose([transforms.ToTensor()])
transform_test = transforms.Compose([transforms.ToTensor()])
# load either MNIST or CIFAR-10
train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_cifar)
test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
total_samples = len(train.targets)
# assign random labels to (label_noise)% of the training set (needed for semi-supervised learning)
rands = np.random.choice(total_samples, int(basic_setting['label_noise']*total_samples), replace=False)
for rand in rands:
    train.targets[rand] = torch.randint(high=10, size=(1, 1)).item()
n_labeled, n_unlabeled = basic_setting['n_labeled']
# split training data into labeled and unlabeled
train, val = random_split(train, [n_labeled, n_unlabeled])
print(len(train), len(val))
train_loader = DataLoader(train, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
test_loader = DataLoader(test, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
print(basic_setting)
for k in basic_setting['k']:
    model = CIFARResNet(basic_setting['n_classes'], k)  # define model with the given width
    model.to(device)
    total_params = sum(p.numel() for p in model.parameters())
    print("number of model parameters = {} when k={}".format(total_params, k))
    optimizer = torch.optim.Adam(model.parameters(), lr=basic_setting['lr'])
    # optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # standard supervised training
    error_metrics = train_super(basic_setting['epochs'], model, optimizer, train_loader, test_loader, basic_setting['b'])
    super_results[str(k)] = error_metrics
    # semi-supervised training using pseudo-labels
    # error_metrics = train_semisuper(basic_setting['epochs'], model, optimizer, train_loader, unlabeled_loader, test_loader, basic_setting['b'])
    # save results to a JSON file after each width
    with open(file_name, 'w') as f:
        json.dump(super_results, f)
"""### Experiment 2 - Epoch-wise DD vs. n_samples"""
basic_setting = {
    'k': 64,
    'epochs': 300,
    'label_noise': 0.2,
    'n_batch': 128,
    'n_classes': 10,
    'lr': 1e-4,
    'b': 0.15,
    'n_labeled': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8],  # missing trailing comma fixed
    'augmentation': True
}
super_results = {}
semisuper_results = {}
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/super_epoch_n_samples.json'
open_file = open(file_name, "ab")
# define transformations for training and test set
if basic_setting['augmentation']:
    transform_cifar = transforms.Compose([transforms.ToTensor(), transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()])
else:
    transform_cifar = transforms.Compose([transforms.ToTensor()])
transform_test = transforms.Compose([transforms.ToTensor()])
print(basic_setting)
# 'n_labeled' holds labeled-data ratios in this experiment, so the counts are
# derived per ratio inside the loop (the original tuple-unpack of the list
# would fail); this mirrors the Model-wise experiment further below
for ratio in basic_setting['n_labeled']:
    train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_cifar)
    test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
    total_samples = len(train)
    # assign random labels to (label_noise)% of the training set (needed for semi-supervised learning)
    rands = np.random.choice(total_samples, int(basic_setting['label_noise']*total_samples), replace=False)
    for rand in rands:
        train.targets[rand] = torch.randint(high=10, size=(1, 1)).item()
    # split training data into labeled and unlabeled
    n_labeled = int(total_samples*ratio)
    n_unlabeled = int(total_samples*(1-ratio))
    train, val = random_split(train, [n_labeled, n_unlabeled])
    print("number of labeled: {}, number of unlabeled: {}\n".format(len(train), len(val)))
    train_loader = DataLoader(train, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
    test_loader = DataLoader(test, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
    model = CIFARResNet(basic_setting['n_classes'], basic_setting['k'])  # define model with the given width
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=basic_setting['lr'])
    # optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # standard supervised training
    error_metrics = train_super(basic_setting['epochs'], model, optimizer, train_loader, test_loader, basic_setting['b'])
    super_results[str(ratio)] = error_metrics
    # save results to a JSON file after each ratio
    with open(file_name, 'w') as f:
        json.dump(super_results, f)
"""### Experiment 3 - Epoch-wise DD vs. flooding (Finished!)"""
basic_setting = {
    'k': 64,
    'epochs': 200,
    'label_noise': 0.2,
    'n_batch': 128,
    'n_classes': 10,
    'lr': 1e-4,
    'b': [0.1, 0.15, 0.2],
    'n_labeled': (50000, 0),
    'augmentation': True
}
super_results = {}
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/Supervised Experiments/super_epoch_flooding.json'
open_file = open(file_name, "ab")
# define transformations for training and test set
if basic_setting['augmentation']:
    transform_cifar = transforms.Compose([transforms.ToTensor(), transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()])
else:
    transform_cifar = transforms.Compose([transforms.ToTensor()])
transform_test = transforms.Compose([transforms.ToTensor()])
print(basic_setting)
train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_cifar)
test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
total_samples = len(train)
# assign random labels to (label_noise)% of the training set (needed for semi-supervised learning)
rands = np.random.choice(total_samples, int(basic_setting['label_noise']*total_samples), replace=False)
for rand in rands:
    train.targets[rand] = torch.randint(high=10, size=(1, 1)).item()
# split training data into labeled and unlabeled
n_labeled, n_unlabeled = basic_setting['n_labeled']
train, val = random_split(train, [n_labeled, n_unlabeled])
print("number of labeled: {}, number of unlabeled: {}\n".format(len(train), len(val)))
train_loader = DataLoader(train, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
test_loader = DataLoader(test, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
for flood in basic_setting['b']:
    model = CIFARResNet(basic_setting['n_classes'], basic_setting['k'])  # define model with the given width
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=basic_setting['lr'])
    # optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # standard supervised training
    error_metrics = train_super(basic_setting['epochs'], model, optimizer, train_loader, test_loader, flood)
    super_results[str(flood)] = error_metrics
    # save results to a JSON file after each flooding level
    with open(file_name, 'w') as f:
        json.dump(super_results, f)
"""### Experiment 4 - Epoch-wise DD vs. label noise (Finished!)"""
basic_setting = {
    'k': 64,
    'epochs': 200,
    'noise': [0.1, 0.15, 0.2],
    'n_batch': 128,
    'n_classes': 10,
    'lr': 1e-4,
    'b': 0.1,
    'n_labeled': (20000, 30000),
    'augmentation': True
}
super_results = {}
semisuper_results = {}
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/Supervised Experiments/super_epoch_label_noise.json'
open_file = open(file_name, "ab")
# define transformations for training and test set
if basic_setting['augmentation']:
    transform_cifar = transforms.Compose([transforms.ToTensor(), transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()])
else:
    transform_cifar = transforms.Compose([transforms.ToTensor()])
transform_test = transforms.Compose([transforms.ToTensor()])
print(basic_setting)
n_labeled, n_unlabeled = basic_setting['n_labeled']
for noise in basic_setting['noise']:
    # train = datasets.MNIST(root='./data', train=True, download=True, transform=transform_cifar)
    # test = datasets.MNIST(root='./data', train=False, download=True, transform=transform_test)
    train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_cifar)
    test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
    total_samples = len(train)
    # assign random labels to (noise)% of the training set (needed for semi-supervised learning)
    rands = np.random.choice(total_samples, int(noise*total_samples), replace=False)
    for rand in rands:
        train.targets[rand] = torch.randint(high=10, size=(1, 1)).item()
    # split training data into labeled and unlabeled
    train, val = random_split(train, [n_labeled, n_unlabeled])
    print("number of labeled: {}, number of unlabeled: {}\n".format(len(train), len(val)))
    train_loader = DataLoader(train, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
    test_loader = DataLoader(test, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
    model = CIFARResNet(basic_setting['n_classes'], basic_setting['k'])  # define model with the given width
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=basic_setting['lr'])
    # optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # standard supervised training
    error_metrics = train_super(basic_setting['epochs'], model, optimizer, train_loader, test_loader, basic_setting['b'])
    super_results[str(noise)] = error_metrics
    # save results to a JSON file after each noise level
    with open(file_name, 'w') as f:
        json.dump(super_results, f)
"""# Semi-Supervised Experiments
### Experiment 1 - Epoch-wise DD vs. labeled ratio
"""
basic_setting = {
    'k': 64,
    'epochs': 200,
    'n_batch': 128,
    'n_classes': 10,
    'lr': 1e-4,
    'b': 0.1,
    'n_labeled': [(20000, 30000), (10000, 40000)],
    'augmentation': True
}
# We observe all forms of double descent most strongly in settings with label noise in the train set (as is often the case when collecting train data in the real-world).
# lr = (n_unlabeled)**(-0.5) # SGD learning rate
print(basic_setting)
semisuper_results = {}
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/Semi-supervised Experiments/semisuper_epoch_ratio.json'
open_file = open(file_name, "ab")
# define transformations for training and test set
if basic_setting['augmentation']:
    transform_cifar = transforms.Compose([transforms.ToTensor(), transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()])
else:
    transform_cifar = transforms.Compose([transforms.ToTensor()])
transform_test = transforms.Compose([transforms.ToTensor()])
# load either MNIST or CIFAR-10
# train = datasets.MNIST(root='./data', train=True, download=True, transform=transform_mnist)
# test = datasets.MNIST(root='./data', train=False, download=True, transform=transform_mnist)
for n_labeled, n_unlabeled in basic_setting['n_labeled']:
    train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_cifar)
    test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
    train, val = random_split(train, [n_labeled, n_unlabeled])
    print("number of labeled: {}, number of unlabeled: {}\n".format(len(train), len(val)))
    train_loader = DataLoader(train, batch_size=int(len(train)/basic_setting['n_batch']), shuffle=True, num_workers=2)
    unlabeled_loader = DataLoader(val, batch_size=int(len(val)/basic_setting['n_batch']), shuffle=True, num_workers=2)
    test_loader = DataLoader(test, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
    model = CIFARResNet(basic_setting['n_classes'], basic_setting['k'])  # define model with the given width
    model.to(device)
    # total_params = sum(p.numel() for p in model.parameters())
    # print("number of model parameters = {} when k={}".format(total_params, basic_setting['k']))
    optimizer = torch.optim.Adam(model.parameters(), lr=basic_setting['lr'])
    # optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # semi-supervised training using pseudo-labels
    error_metrics = train_semisuper(basic_setting['epochs'], model, optimizer, train_loader, unlabeled_loader, test_loader, basic_setting['b'])
    # bug fix: the dict was re-initialized here on every iteration, discarding
    # earlier results before they could be written out
    semisuper_results[str(n_labeled)] = error_metrics
    # save results to a JSON file after each split
    with open(file_name, 'w') as f:
        json.dump(semisuper_results, f)
"""### Experiment 2 - Model-wise DD"""
basic_setting = {
    'k': 64,
    'epochs': 300,
    'n_batch': 128,
    'n_classes': 10,
    'lr': 1e-4,
    'b': 0.15,
    'n_labeled': [0.6, 0.4, 0.2],
    'augmentation': True
}
# We observe all forms of double descent most strongly in settings with label noise in the train set (as is often the case when collecting train data in the real-world).
# lr = (n_unlabeled)**(-0.5) # SGD learning rate
print(basic_setting)
semisuper_results = {}
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/semisuper_epoch.json'
open_file = open(file_name, "ab")
# define transformations for training and test set
if basic_setting['augmentation']:
    transform_cifar = transforms.Compose([transforms.ToTensor(), transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()])
else:
    transform_cifar = transforms.Compose([transforms.ToTensor()])
transform_test = transforms.Compose([transforms.ToTensor()])
# load either MNIST or CIFAR-10
# train = datasets.MNIST(root='./data', train=True, download=True, transform=transform_mnist)
# test = datasets.MNIST(root='./data', train=False, download=True, transform=transform_mnist)
for ratio in basic_setting['n_labeled']:
train = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_cifar)
test = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
# split training data into labeled and unlabeled
n_labeled = int(total_samples*ratio)
n_unlabeled = int(total_samples*(1-ratio))
train, val = random_split(train, [n_labeled, n_unlabeled])
print("number of labeled: {}, number of unlabeled: {}\n".format(len(train), len(val)))
train_loader = DataLoader(train, batch_size=int(len(train)/basic_setting['n_batch']), shuffle=True, num_workers=2)
unlabeled_loader = DataLoader(val, batch_size=int(len(val)/basic_setting['n_batch']), shuffle=True, num_workers=2)
test_loader = DataLoader(test, batch_size=basic_setting['n_batch'], shuffle=True, num_workers=2)
    model = CIFARResNet(basic_setting['n_classes'], basic_setting['k']) # width multiplier k sets the parameter count
model.to(device)
# total_params = sum(p.numel() for p in model.parameters())
# print("number of model parameters = {} when k={}".format(total_params, basic_setting['k']))
optimizer = torch.optim.Adam(model.parameters(), lr=basic_setting['lr'])
# optimizer = torch.optim.SGD(model.parameters(), lr=lr)
# semi-supervised training using pseudo-labels
error_metrics = train_semisuper(basic_setting['epochs'], model, optimizer, train_loader, unlabeled_loader, test_loader, basic_setting['b'])
    semisuper_results[str(ratio)] = error_metrics
    # save accumulated results to a JSON file (rewritten after each ratio)
    with open(file_name, 'w') as f:
        json.dump(semisuper_results, f)
"""# Plotting"""
# matplotlib.rcParams.update({'font.size': 25})
def plot_modelwise(results, fname):
titles = ['error', 'loss']
colors = ['blue', 'lime']
labels = ['train', 'test']
    k = []  # model widths, the fixed x-axis
    metrics = []
exp_type = fname.split('/')[-2]
exp_name = fname.split('/')[-1].split('.')[0]
    # extract model size and metrics into separate lists, sorted by width so
    # the line plot is monotone in k
    for model_size, metric in results.items():
        k.append(int(model_size))
        metrics.append(metric[-1])  # final-epoch [train_err, test_err, train_loss, test_loss]
    order = np.argsort(k)
    k = np.array(k)[order]
    metrics = np.array(metrics)[order]
# test/train error and loss
for i, title in enumerate(titles):
fig, axes = plt.subplots(figsize=(8, 6), dpi=300)
axes.grid()
axes.plot(k, metrics[:, 2*i], label=labels[0], color=colors[0]) # train
axes.plot(k, metrics[:, 2*i+1], label=labels[1], color=colors[1]) # test
axes.set_xlabel('resnet width=k')
axes.set_ylabel(title)
axes.set_title("model width vs. {}".format(title))
axes.legend(loc='upper right', prop={'size': 15})
fig.savefig('/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/{}/{}_{}.png'.format(exp_type, exp_name, title), dpi=300)
def plot_epochwise(results, fname):
titles = ['error', 'loss']
colors = ['red', 'lime', 'blue']
labels = ['train', 'test']
exp_type = fname.split('/')[-2]
exp_name = fname.split('/')[-1].split('.')[0]
    # all runs are assumed to share the same number of epochs
    epochs = np.arange(1, len(next(iter(results.values()))) + 1)  # fixed x-axis
    for i, title in enumerate(titles):
        fig, axes = plt.subplots(figsize=(8, 6))
        axes.grid()
        for j, (param, metrics) in enumerate(results.items()):
            metrics = np.array(metrics)
            color = colors[j % len(colors)]
            # column layout matches plot_modelwise:
            # [train_error, test_error, train_loss, test_loss]
            axes.plot(epochs, metrics[:, 2*i], label='{} {}'.format(param, labels[0]), color=color, linestyle='--')
            axes.plot(epochs, metrics[:, 2*i+1], label='{} {}'.format(param, labels[1]), color=color)
        axes.set_xlabel('Epochs')
        axes.set_ylabel(title)
        axes.set_title('Epochs vs. {}'.format(title))
        axes.legend(loc='upper right', prop={'size': 15})
        fig.savefig('/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/{}/{}_{}.png'.format(exp_type, exp_name, title), dpi=300)
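# The two helpers above assume each results value is a per-epoch list of
# [train_error, test_error, train_loss, test_loss] rows, keyed by the swept
# parameter (model width k, or the labeled ratio). A tiny synthetic example
# (hypothetical numbers, for shape only):
# demo = {'0.6': [[0.90, 0.92, 2.30, 2.31], [0.45, 0.55, 1.10, 1.40]],
#         '0.2': [[0.95, 0.96, 2.40, 2.42], [0.60, 0.70, 1.60, 1.80]]}
# plot_epochwise(demo, '/tmp/Demo Experiments/demo.json')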
file_name = '/gdrive/My Drive/CMSC 828W Research/Code (Won & Amartya)/Semi-supervised Experiments/semisuper_epoch_ratio.json'
with open(file_name) as json_file:
data = json.load(json_file)
print(data)
#plot_modelwise(data, file_name)
plot_epochwise(data, file_name) | 38.725869 | 234 | 0.673114 | 4,006 | 30,090 | 4.904144 | 0.111083 | 0.047032 | 0.01919 | 0.032068 | 0.700346 | 0.687061 | 0.667566 | 0.652703 | 0.63784 | 0.627201 | 0 | 0.022052 | 0.195248 | 30,090 | 777 | 235 | 38.725869 | 0.789263 | 0.143835 | 0 | 0.578022 | 1 | 0.015385 | 0.105477 | 0.014487 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.028571 | null | null | 0.046154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
30b786cd5fa20d1e7b694de361c426d6baeb53cd | 628 | py | Python | attic/operator/factorial/factorial.py | matteoshen/example-code | b54c22a1b8cee3fc53d1473cb38ca46eb179b4c3 | [
"MIT"
] | 5,651 | 2015-01-06T21:58:46.000Z | 2022-03-31T13:39:07.000Z | attic/operator/factorial/factorial.py | matteoshen/example-code | b54c22a1b8cee3fc53d1473cb38ca46eb179b4c3 | [
"MIT"
] | 42 | 2016-12-11T19:17:11.000Z | 2021-11-23T19:41:16.000Z | attic/operator/factorial/factorial.py | matteoshen/example-code | b54c22a1b8cee3fc53d1473cb38ca46eb179b4c3 | [
"MIT"
] | 2,394 | 2015-01-18T10:57:38.000Z | 2022-03-31T11:41:12.000Z | def factorial(n):
return 1 if n < 2 else n * factorial(n-1)
if __name__ == '__main__':
for i in range(1, 26):
print('%s! = %s' % (i, factorial(i)))
"""
output:
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
10! = 3628800
11! = 39916800
12! = 479001600
13! = 6227020800
14! = 87178291200
15! = 1307674368000
16! = 20922789888000
17! = 355687428096000
18! = 6402373705728000
19! = 121645100408832000
20! = 2432902008176640000
21! = 51090942171709440000
22! = 1124000727777607680000
23! = 25852016738884976640000
24! = 620448401733239439360000
25! = 15511210043330985984000000
"""
| 16.972973 | 45 | 0.665605 | 79 | 628 | 5.189873 | 0.78481 | 0.04878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.648221 | 0.194268 | 628 | 36 | 46 | 17.444444 | 0.162055 | 0 | 0 | 0 | 0 | 0 | 0.09697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.2 | 0.4 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
30c78f1e4b38cbf4328632d869f83cd49884978b | 310 | py | Python | claripy/vsa/__init__.py | embg/claripy | 1a5e0ca61d3f480e541226f103900e983f025e4a | [
"BSD-2-Clause"
] | 211 | 2015-08-06T23:25:01.000Z | 2022-03-26T19:34:49.000Z | claripy/vsa/__init__.py | embg/claripy | 1a5e0ca61d3f480e541226f103900e983f025e4a | [
"BSD-2-Clause"
] | 175 | 2015-09-03T11:09:18.000Z | 2022-03-09T20:24:33.000Z | claripy/vsa/__init__.py | embg/claripy | 1a5e0ca61d3f480e541226f103900e983f025e4a | [
"BSD-2-Clause"
] | 99 | 2015-08-07T10:30:08.000Z | 2022-03-26T10:32:09.000Z | from .valueset import RegionAnnotation, ValueSet
from .strided_interval import StridedInterval, CreateStridedInterval
from .discrete_strided_interval_set import DiscreteStridedIntervalSet
from .abstract_location import AbstractLocation
from .bool_result import BoolResult, TrueResult, FalseResult, MaybeResult
| 51.666667 | 73 | 0.887097 | 31 | 310 | 8.677419 | 0.645161 | 0.111524 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080645 | 310 | 5 | 74 | 62 | 0.94386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 3 |
30db2a3cdf832309b396aca6c28e5c404c8bc106 | 740 | py | Python | src/cdev/default/environment.py | cdev-framework/cdev-sdk | 06cd7b40936ab063d1d8fd1a7d9f6882750e8a96 | [
"BSD-3-Clause-Clear"
] | 2 | 2022-02-28T02:51:59.000Z | 2022-03-24T15:23:18.000Z | src/cdev/default/environment.py | cdev-framework/cdev-sdk | 06cd7b40936ab063d1d8fd1a7d9f6882750e8a96 | [
"BSD-3-Clause-Clear"
] | null | null | null | src/cdev/default/environment.py | cdev-framework/cdev-sdk | 06cd7b40936ab063d1d8fd1a7d9f6882750e8a96 | [
"BSD-3-Clause-Clear"
] | null | null | null | from core.constructs.workspace import (
Workspace,
initialize_workspace,
load_workspace,
)
from ..constructs.environment import Environment, environment_info
class local_environment(Environment):
"""
A logically isolated instance of a project.
"""
def __init__(self, info: environment_info) -> None:
self.name = info.name
self.workspace_info = info.workspace_info
self._loaded_workspace = load_workspace(self.workspace_info)
def get_name(self) -> str:
return self.name
def get_workspace(self) -> Workspace:
return self._loaded_workspace
def initialize_environment(self) -> None:
initialize_workspace(self._loaded_workspace, self.workspace_info)
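# Hedged usage sketch (the environment_info fields are inferred from the
# attribute access above, not confirmed against its definition):
# info = environment_info(name='dev', workspace_info=...)
# env = local_environment(info)
# env.initialize_environment()
# workspace = env.get_workspace()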
| 26.428571 | 73 | 0.712162 | 83 | 740 | 6.060241 | 0.313253 | 0.10338 | 0.101392 | 0.10338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.205405 | 740 | 27 | 74 | 27.407407 | 0.855442 | 0.058108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0.117647 | 0.117647 | 0.529412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |
30e545d65916ca66dbe988165aec9d0884329b7f | 249 | py | Python | awsscripts/sketches/ca.py | vbmacher/aws-scripts | d3ae9d862c9d388dc8326bba244f4805b2599b91 | [
"MIT"
] | null | null | null | awsscripts/sketches/ca.py | vbmacher/aws-scripts | d3ae9d862c9d388dc8326bba244f4805b2599b91 | [
"MIT"
] | 12 | 2021-09-10T09:23:15.000Z | 2022-01-05T09:09:07.000Z | awsscripts/sketches/ca.py | vbmacher/aws-scripts | d3ae9d862c9d388dc8326bba244f4805b2599b91 | [
"MIT"
] | null | null | null | from awsscripts.sketches.sketchitem import SketchItem
class CodeArtifactSketchItem(SketchItem):
def generate(self):
return {
'repository': 'TODO',
'domain': 'TODO',
'domain-owner': 'TODO'
}
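# Hedged usage sketch (assumes SketchItem's constructor needs no arguments):
# item = CodeArtifactSketchItem()
# item.generate()  # -> {'repository': 'TODO', 'domain': 'TODO', 'domain-owner': 'TODO'}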
| 20.75 | 53 | 0.582329 | 20 | 249 | 7.25 | 0.75 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.305221 | 249 | 11 | 54 | 22.636364 | 0.83815 | 0 | 0 | 0 | 1 | 0 | 0.160643 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0.125 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 |
30f4a3ae5ea95d1940574c95c6eca484a3fb1354 | 222 | py | Python | agent/serializers.py | Serafim-End/bearAI | 32d9896ddcac8a7509f2e32a75923e17f29a9af8 | [
"Unlicense"
] | 1 | 2017-06-05T10:37:43.000Z | 2017-06-05T10:37:43.000Z | agent/serializers.py | Serafim-End/bearAI | 32d9896ddcac8a7509f2e32a75923e17f29a9af8 | [
"Unlicense"
] | null | null | null | agent/serializers.py | Serafim-End/bearAI | 32d9896ddcac8a7509f2e32a75923e17f29a9af8 | [
"Unlicense"
] | null | null | null | from rest_framework import serializers
from agent.models import Agent
class AgentSerializer(serializers.ModelSerializer):
class Meta:
model = Agent
fields = ('developer', 'username', 'date_joined')
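# Hedged usage sketch (standard DRF pattern; an Agent instance would come from
# the app's database):
# serializer = AgentSerializer(Agent.objects.first())
# serializer.data  # -> {'developer': ..., 'username': ..., 'date_joined': ...}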
| 20.181818 | 57 | 0.720721 | 23 | 222 | 6.869565 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.198198 | 222 | 10 | 58 | 22.2 | 0.88764 | 0 | 0 | 0 | 0 | 0 | 0.126126 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
30f58c391509423a4bee8d65a9cde3b8ff49837e | 2,930 | py | Python | downloads-generation/data_pdb/make_pdb_query.py | openvax/mhc2flurry | 914dddfd708801a83615d0cc3d41dd3b19e45919 | [
"Apache-2.0"
] | 1 | 2021-11-09T11:34:25.000Z | 2021-11-09T11:34:25.000Z | downloads-generation/data_pdb/make_pdb_query.py | openvax/mhc2flurry | 914dddfd708801a83615d0cc3d41dd3b19e45919 | [
"Apache-2.0"
] | null | null | null | downloads-generation/data_pdb/make_pdb_query.py | openvax/mhc2flurry | 914dddfd708801a83615d0cc3d41dd3b19e45919 | [
"Apache-2.0"
] | 1 | 2021-11-11T16:02:54.000Z | 2021-11-11T16:02:54.000Z | # Just print a JSON PDB query to stdout
# Doing this in a Python script so we can have comments.
import json
sequences = []
# DRA1*01:01
sequences.append(
"MAISGVPVLGFFIIAVLMSAQESWAIKEEHVIIQAEFYLNPDQSGEFMFDFDGDEIFHVDMAKKETVWRLEEFGRF"
"ASFEAQGALANIAVDKANLEIMTKRSNYTPITNVPPEVTVLTNSPVELREPNVLICFIDKFTPPVVNVTWLRNGKP"
"VTTGVSETVFLPREDHLFRKFHYLPFLPSTEDVYDCRVEHWGLDEPLLKHWEFDAPSPLPETTENVVCALGLTVGL"
"VGIIIGTIFIIKGVRKSNAAERRGPL")
# DRB1*01:01
sequences.append(
"MVCLKLPGGSCMTALTVTLMVLSSPLALAGDTRPRFLWQLKFECHFFNGTERVRLLERCIYNQEESVRFDSDVGEY"
"RAVTELGRPDAEYWNSQKDLLEQRRAAVDTYCRHNYGVGESFTVQRRVEPKVTVYPSKTQPLQHHNLLVCSVSGFY"
"PGSIEVRWFRNGQEEKAGVVSTGLIQNGDWTFQTLVMLETVPRSGEVYTCQVEHPSVTSPLTVEWRARSESAQSKM"
"LSGVGGFVLGLLFLGAGLFIYFRNQKGHSGLQPTGFLS")
# DRB3*01:01
sequences.append(
"MVCLKLPGGSSLAALTVTLMVLSSRLAFAGDTRPRFLELRKSECHFFNGTERVRYLDRYFHNQEEFLRFDSDVGEY"
"RAVTELGRPVAESWNSQKDLLEQKRGRVDNYCRHNYGVGESFTVQRRVHPQVTVYPAKTQPLQHHNLLVCSVSGFY"
"PGSIEVRWFRNGQEEKAGVVSTGLIQNGDWTFQTLVMLETVPRSGEVYTCQVEHPSVTSALTVEWRARSESAQSKM"
"LSGVGGFVLGLLFLGAGLFIYFRNQKGHSGLQPTGFLS")
# DRB4*01:01
sequences.append(
"MVCLKLPGGSCMAALTVTLTVLSSPLALAGDTQPRFLEQAKCECHFLNGTERVWNLIRYI"
"YNQEEYARYNSDLGEYQAVTELGRPDAEYWNSQKDLLERRRAEVDTYCRYNYGVVESFTV"
"QRRVQPKVTVYPSKTQPLQHHNLLVCSVNGFYPGSIEVRWFRNSQEEKAGVVSTGLIQNG"
"DWTFQTLVMLETVPRSGEVYTCQVEHPSMMSPLTVQWSARSESAQSKMLSGVGGFVLGLL"
"FLGTGLFIYFRNQKGHSGLQPTGLLS")
# DRB5*01:01
sequences.append(
"MVCLKLPGGSYMAKLTVTLMVLSSPLALAGDTRPRFLQQDKYECHFFNGTERVRFLHRDIYNQEEDLRFDSDVGEY"
"RAVTELGRPDAEYWNSQKDFLEDRRAAVDTYCRHNYGVGESFTVQRRVEPKVTVYPARTQTLQHHNLLVCSVNGFY"
"PGSIEVRWFRNSQEEKAGVVSTGLIQNGDWTFQTLVMLETVPRSGEVYTCQVEHPSVTSPLTVEWRAQSESAQSKM"
"LSGVGGFVLGLLFLGAGLFIYFKNQKGHSGLHPTGLVS")
# HLA-DQB1*02:01
sequences.append(
"MSWKKALRIPGGLRAATVTLMLSMLSTPVAEGRDSPEDFVYQFKGMCYFTNGTERVRLVS"
"RSIYNREEIVRFDSDVGEFRAVTLLGLPAAEYWNSQKDILERKRAAVDRVCRHNYQLELR"
"TTLQRRVEPTVTISPSRTEALNHHNLLVCSVTDFYPAQIKVRWFRNDQEETAGVVSTPLI"
"RNGDWTFQILVMLEMTPQRGDVYTCHVEHPSLQSPITVEWRAQSESAQSKMLSGIGGFVL"
"GLIFLGLGLIIHHRSQKGLLH")
# HLA-DPB1*01:01
sequences.append(
"MMVLQVSAAPRTVALTALLMVLLTSVVQGRATPENYVYQGRQECYAFNGTQRFLERYIYN"
"REEYARFDSDVGEFRAVTELGRPAAEYWNSQKDILEEKRAVPDRVCRHNYELDEAVTLQR"
"RVQPKVNVSPSKKGPLQHHNLLVCHVTDFYPGSIQVRWFLNGQEETAGVVSTNLIRNGDW"
"TFQILVMLEMTPQQGDVYICQVEHTSLDSPVTVEWKAQSDSAQSKTLTGAGGFVLGLIIC"
"GVGIFMHRRSKKVQRGSA")
# All sequences should be pairwise distinct
assert len(sequences) == len(set(sequences))
def node_from_sequence(sequence):
return {
"type": "terminal",
"service": "sequence",
"parameters": {
"evalue_cutoff": 10,
"identity_cutoff": 0.5,
"target": "pdb_protein_sequence",
"value": sequence,
}
}
query = {
"query": {
"type": "group",
"logical_operator": "or",
"nodes": [node_from_sequence(sequence) for sequence in sequences],
},
"request_options": {
"return_all_hits": True
},
"return_type": "entry"
}
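# A hedged sketch of submitting the query over HTTP (the exact endpoint version
# is an assumption; RCSB's search service lives at search.rcsb.org):
# import requests
# resp = requests.post('https://search.rcsb.org/rcsbsearch/v2/query', json=query)
# pdb_ids = [hit['identifier'] for hit in resp.json().get('result_set', [])]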
print(json.dumps(query)) | 32.921348 | 80 | 0.83413 | 151 | 2,930 | 16.099338 | 0.642384 | 0.031674 | 0.048951 | 0.046894 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014695 | 0.094198 | 2,930 | 89 | 81 | 32.921348 | 0.901281 | 0.065529 | 0 | 0.140625 | 0 | 0 | 0.739003 | 0.666789 | 0 | 0 | 0 | 0 | 0.015625 | 1 | 0.015625 | false | 0 | 0.015625 | 0.015625 | 0.046875 | 0.015625 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
eb4af68fe6e2b37d9387e8cfbe2a09723fbfe4cc | 222 | py | Python | setup.py | jthop/MQTTPlugin | 74b3701f810051ee115899d6ed0b63767bbc2507 | [
"MIT"
] | null | null | null | setup.py | jthop/MQTTPlugin | 74b3701f810051ee115899d6ed0b63767bbc2507 | [
"MIT"
] | null | null | null | setup.py | jthop/MQTTPlugin | 74b3701f810051ee115899d6ed0b63767bbc2507 | [
"MIT"
] | null | null | null | from setuptools import setup
setup(name='MQTTPlugin',
version='0.1',
py_modules=['MQTTPlugin'],
install_requires=['setuptools', 'paho-mqtt'],
entry_points={'pynx584': ['mqtt_plugin=src.MQTTPlugin:MQTTBridge']},
)
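# Hedged note: installing this package (e.g. `pip install -e .`) registers the
# 'pynx584' entry point; whether the plugin module resolves as 'MQTTPlugin' or
# 'src.MQTTPlugin' depends on the repo layout, which is assumed here.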
| 24.666667 | 70 | 0.720721 | 26 | 222 | 6 | 0.807692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024876 | 0.094595 | 222 | 8 | 71 | 27.75 | 0.751244 | 0 | 0 | 0 | 0 | 0 | 0.387387 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.142857 | 0 | 0.142857 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
eb72acbdcd2a4abc2840da625f8f608321da3ff4 | 1,329 | py | Python | api/database/table.py | dhinakg/BitSTAR | f2693c5a0612e58e337511023f8f9e4f25543e33 | [
"Apache-2.0"
] | 6 | 2017-04-29T03:45:56.000Z | 2018-05-27T02:03:13.000Z | api/database/table.py | dhinakg/BitSTAR | f2693c5a0612e58e337511023f8f9e4f25543e33 | [
"Apache-2.0"
] | 18 | 2017-04-12T20:26:05.000Z | 2018-06-23T18:11:55.000Z | api/database/table.py | dhinakg/BitSTAR | f2693c5a0612e58e337511023f8f9e4f25543e33 | [
"Apache-2.0"
] | 16 | 2017-04-30T05:04:15.000Z | 2019-08-15T04:59:09.000Z | # Copyright 2017 Starbot Discord Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from api.database import DAL
from api.database.db import DB
from api.database.DAL import SQLite
class Table:
name = None
table_type = None
def __init__(self, name_in, type_in):
self.name = name_in
self.table_type = type_in
DAL.db_create_table(DB, self.name)
def insert(self, dataDict):
return DAL.db_insert(DB, self, dataDict)
def search(self, searchTerm, searchFor):
return SQLite.db_search(DB, self, searchTerm, searchFor)
def getContents(self, rows):
return DAL.db_get_contents_of_table(DB, self, rows)
def getLatestID(self):
return DAL.db_get_latest_id(DB, self)
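# Hedged usage sketch (assumes DB is already configured in api.database.db;
# TableTypes is defined just below):
# users = Table('users', TableTypes.pServer)
# users.insert({'username': 'starbot'})
# latest_id = users.getLatestID()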
class TableTypes:
pServer = 1
pGlobal = 2 | 30.906977 | 77 | 0.699774 | 193 | 1,329 | 4.709845 | 0.502591 | 0.066007 | 0.049505 | 0.035204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009728 | 0.226486 | 1,329 | 43 | 78 | 30.906977 | 0.874514 | 0.443943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.238095 | false | 0 | 0.142857 | 0.190476 | 0.857143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 3 |