hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d20d55a6b8812037e181c5378cc8a4a6ae82572d | 285 | py | Python | project/write_matrix.py | felixzheng02/pypkpd | 9e7f41aa7e33ba50cec3482f14e7be08a4bc23e7 | [
"MIT"
] | 9 | 2021-06-17T08:11:31.000Z | 2021-06-22T08:05:27.000Z | project/write_matrix.py | Caiya-Zhang/pypkpd | 9e7f41aa7e33ba50cec3482f14e7be08a4bc23e7 | [
"MIT"
] | null | null | null | project/write_matrix.py | Caiya-Zhang/pypkpd | 9e7f41aa7e33ba50cec3482f14e7be08a4bc23e7 | [
"MIT"
] | 2 | 2021-06-18T07:16:08.000Z | 2021-09-25T04:43:28.000Z | """
Function written to match MATLAB function
Author: Caiya Zhang, Yuchen Zheng
"""
import numpy as np
from project.fprintf import fprintf
def write_matrix(f, x: np.ndarray):
    for i in range(0, x.shape[0]):
        fprintf(f,"%6e",x[i, :])
        fprintf(f,"\n")
    return
| 17.8125 | 42 | 0.635088 | 44 | 285 | 4.090909 | 0.727273 | 0.088889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013636 | 0.22807 | 285 | 15 | 43 | 19 | 0.804545 | 0 | 0 | 0 | 0 | 0 | 0.026882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.285714 | null | null | 0.428571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
d22e53fa5b707e5ff25bad39c2e1688da76babc2 | 1,330 | py | Python | setup.py | VivumLab/py_cui | e3f185102d469b9c758f6d7ad53b6bcfc27e0f3d | [
"BSD-3-Clause"
] | 654 | 2020-02-22T00:02:14.000Z | 2022-03-29T23:10:31.000Z | setup.py | VivumLab/py_cui | e3f185102d469b9c758f6d7ad53b6bcfc27e0f3d | [
"BSD-3-Clause"
] | 133 | 2020-01-28T15:41:05.000Z | 2022-03-22T19:05:38.000Z | setup.py | VivumLab/py_cui | e3f185102d469b9c758f6d7ad53b6bcfc27e0f3d | [
"BSD-3-Clause"
] | 68 | 2020-02-22T01:43:09.000Z | 2022-02-22T18:01:43.000Z | import setuptools
from sys import platform
# Use README for long description
with open('README.md', 'r') as readme_fp:
    long_description = readme_fp.read()

with open('requirements.txt', 'r') as req_fp:
    required_libs = req_fp.readlines()
# py_cui setup
setuptools.setup(
    name='py_cui',
    description='A widget and grid based framework for building command line user interfaces in python.',
    long_description=long_description,
    long_description_content_type='text/markdown',
    version='0.1.4',
    author='Jakub Wlodek',
    author_email='jwlodek.dev@gmail.com',
    license='BSD (3-clause)',
    packages=setuptools.find_packages(exclude=['docs','tests', 'examples', 'venv']),
    install_requires=required_libs,
    url='https://github.com/jwlodek/py_cui',
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Intended Audience :: Developers',
        'Programming Language :: Python :: 3.2',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
    ],
    keywords='cui cli commandline user-interface ui',
    python_requires='>=3.6',
)
| 33.25 | 105 | 0.654887 | 161 | 1,330 | 5.291925 | 0.565217 | 0.156103 | 0.205399 | 0.213615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019886 | 0.206015 | 1,330 | 39 | 106 | 34.102564 | 0.786932 | 0.033083 | 0 | 0 | 0 | 0 | 0.468433 | 0.016368 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d234dc73aa6cfe8b677814a5964de2e921ddf21e | 1,327 | py | Python | oop.py | vuksamardzic/basic-py | 356766f895660376530a4734d751cea281d1920e | [
"MIT"
] | null | null | null | oop.py | vuksamardzic/basic-py | 356766f895660376530a4734d751cea281d1920e | [
"MIT"
] | null | null | null | oop.py | vuksamardzic/basic-py | 356766f895660376530a4734d751cea281d1920e | [
"MIT"
] | null | null | null | # Classes & Objects
class Person:
    # __ means private
    __name = ''
    __email = ''

    def __init__(self, name, email):
        self.__name = name
        self.__email = email

    def set_name(self, name):
        self.__name = name

    def get_name(self):
        return self.__name

    def set_email(self, email):
        self.__email = email

    def get_email(self):
        return self.__email

    def to_string(self):
        return '{} can be contacted at {}'.format(self.__name, self.__email)
person = Person('vuk samardžić', 'samardzic.vuk@gmail.com')
# person.set_name('vuk')
# person.set_email('samardzic.vuk@gmail.com')
# print(person.get_name(), person.get_email())
# print(person.to_string())
class Customer(Person):
    __balance = 0

    def __init__(self, name, email, balance):
        super().__init__(name, email)
        self.__balance = balance

    def set_balance(self, balance):
        self.__balance = balance

    def get_balance(self):
        return self.__balance

    def to_string(self):
        return '{} has the balance of {} and can be contacted at {}'.format(self.get_name(), self.__balance,
                                                                            self.get_email())
customer = Customer('John Doe', 'jdoe@gmail.com', 100)
print(customer.to_string())
| 23.696429 | 108 | 0.60211 | 161 | 1,327 | 4.590062 | 0.242236 | 0.075778 | 0.056834 | 0.040595 | 0.181326 | 0.070365 | 0 | 0 | 0 | 0 | 0 | 0.004158 | 0.275057 | 1,327 | 55 | 109 | 24.127273 | 0.764033 | 0.129616 | 0 | 0.258065 | 0 | 0 | 0.116725 | 0.020035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.322581 | false | 0 | 0 | 0.16129 | 0.645161 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
d23eddf080fd748ffcca83d21face5169d137711 | 1,166 | py | Python | apps/user/migrations/0002_auto_20200131_2116.py | phdevs1/CyberCaffe | bee989a6d8d59205ee2645e986b4b0f16d00bf05 | [
"Apache-2.0"
] | null | null | null | apps/user/migrations/0002_auto_20200131_2116.py | phdevs1/CyberCaffe | bee989a6d8d59205ee2645e986b4b0f16d00bf05 | [
"Apache-2.0"
] | 7 | 2021-03-19T08:39:34.000Z | 2022-03-12T00:15:38.000Z | apps/user/migrations/0002_auto_20200131_2116.py | pioh123/CyberCaffe | bee989a6d8d59205ee2645e986b4b0f16d00bf05 | [
"Apache-2.0"
] | null | null | null | # Generated by Django 2.2.9 on 2020-01-31 21:16
from django.db import migrations, models
class Migration(migrations.Migration):
    dependencies = [
        ('user', '0001_initial'),
    ]

    operations = [
        migrations.AlterModelOptions(
            name='user',
            options={'ordering': ['first_name'], 'verbose_name': 'Cliente', 'verbose_name_plural': 'Clientes'},
        ),
        migrations.AlterField(
            model_name='user',
            name='first_name',
            field=models.CharField(max_length=30, verbose_name='Nombres'),
        ),
        migrations.AlterField(
            model_name='user',
            name='last_name',
            field=models.CharField(blank=True, max_length=30, null=True, verbose_name='Apellidos'),
        ),
        migrations.AlterField(
            model_name='user',
            name='money',
            field=models.FloatField(blank=True, null=True, verbose_name='Saldo'),
        ),
        migrations.AlterField(
            model_name='user',
            name='phone',
            field=models.CharField(blank=True, max_length=30, null=True, verbose_name='Telefono'),
        ),
    ]
| 30.684211 | 111 | 0.575472 | 117 | 1,166 | 5.581197 | 0.42735 | 0.101072 | 0.153139 | 0.177642 | 0.407351 | 0.407351 | 0.180704 | 0.180704 | 0.180704 | 0.180704 | 0 | 0.030266 | 0.291595 | 1,166 | 37 | 112 | 31.513514 | 0.760291 | 0.038593 | 0 | 0.419355 | 1 | 0 | 0.141198 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.032258 | 0 | 0.129032 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d243b100883faf50e90a2ee1d67172b43fbf7f7c | 272 | py | Python | wafhelpers/rtems_trace.py | fakecoinbase/ntpsecslashntpsec | 6f72d3bfb0614b24219e69d990d3701393eb92ae | [
"CC-BY-4.0",
"BSD-2-Clause",
"NTP",
"MIT",
"BSD-3-Clause"
] | 201 | 2015-11-16T16:57:58.000Z | 2022-03-21T01:01:34.000Z | wafhelpers/rtems_trace.py | fakecoinbase/ntpsecslashntpsec | 6f72d3bfb0614b24219e69d990d3701393eb92ae | [
"CC-BY-4.0",
"BSD-2-Clause",
"NTP",
"MIT",
"BSD-3-Clause"
] | 4 | 2019-03-20T21:49:34.000Z | 2021-12-30T18:08:56.000Z | wafhelpers/rtems_trace.py | fakecoinbase/ntpsecslashntpsec | 6f72d3bfb0614b24219e69d990d3701393eb92ae | [
"CC-BY-4.0",
"BSD-2-Clause",
"NTP",
"MIT",
"BSD-3-Clause"
] | 40 | 2016-05-25T05:25:51.000Z | 2021-12-30T17:40:00.000Z | from waflib.TaskGen import feature, after_method
@feature("rtems_trace")
@after_method('apply_link')
def rtems_trace(self):
    if self.env.RTEMS_TEST_ENABLE:
        self.link_task.env.LINK_CC = self.env.BIN_RTEMS_TLD \
            + self.env.RTEMS_TEST_FLAGS + ['--']
| 27.2 | 61 | 0.705882 | 40 | 272 | 4.475 | 0.55 | 0.117318 | 0.134078 | 0.178771 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165441 | 272 | 9 | 62 | 30.222222 | 0.788546 | 0 | 0 | 0 | 0 | 0 | 0.084559 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d252e4772aedc4e852e116819c2a601f7290f948 | 1,276 | py | Python | HD/Dovapp/views.py | xiaojie96528/Dove | 514200432c6fa9158feea1fe5527d6f291c20ddc | [
"Unlicense"
] | null | null | null | HD/Dovapp/views.py | xiaojie96528/Dove | 514200432c6fa9158feea1fe5527d6f291c20ddc | [
"Unlicense"
] | null | null | null | HD/Dovapp/views.py | xiaojie96528/Dove | 514200432c6fa9158feea1fe5527d6f291c20ddc | [
"Unlicense"
] | null | null | null | from django.shortcuts import render
from Dovapp.models import *
from rest_framework import viewsets
from Dovapp.serializers import *
from django.contrib.auth import authenticate,login,logout
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse
import json
class ProjectViewSet(viewsets.ModelViewSet):
"""
get:
Return all projects.
post:
Create a new project.
"""
queryset = Project.objects.all()
serializer_class = ProjectSerializer
def login_acc(request):
    if request.method=="POST":
        username = request.POST.get("username")
        password = request.POST.get("password")
        try:
            user = authenticate(username=username,password=password)
        except Exception as e:
            user=None
            print(e)
        if user is not None:
            login(request,user)
            data = {"msg": "登录成功", "data": user.username,"code":"200"}
            return JsonResponse(data,safe=False)
        else:
            data = {"msg": "用户名或密码错误1","data": "xm"}
            return JsonResponse(data,safe=False)
def logout_acc(request):
    logout(request)
    data = {"msg": "退出成功", "data": "", "code": "200"}
    return JsonResponse(data,safe=False) | 32.717949 | 70 | 0.637147 | 143 | 1,276 | 5.65035 | 0.454545 | 0.049505 | 0.081683 | 0.096535 | 0.132426 | 0.094059 | 0.094059 | 0 | 0 | 0 | 0 | 0.00733 | 0.251567 | 1,276 | 39 | 71 | 32.717949 | 0.838743 | 0.048589 | 0 | 0.096774 | 0 | 0 | 0.063194 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.064516 | 0.258065 | 0 | 0.516129 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
d254b436bc6cddd56e9b89d311860bb0e5dc2b6a | 790 | py | Python | com.systemincloud.examples.tasks.pythontask/src/test/py/tasks/params/Parameters.py | systemincloud/sic-examples | b82d5d672f515b1deb5ddb35c5a93c003e03c030 | [
"Apache-2.0"
] | null | null | null | com.systemincloud.examples.tasks.pythontask/src/test/py/tasks/params/Parameters.py | systemincloud/sic-examples | b82d5d672f515b1deb5ddb35c5a93c003e03c030 | [
"Apache-2.0"
] | 15 | 2015-01-08T20:28:19.000Z | 2016-07-20T07:19:15.000Z | com.systemincloud.examples.tasks.pythontask/src/test/py/tasks/params/Parameters.py | systemincloud/sic-examples | b82d5d672f515b1deb5ddb35c5a93c003e03c030 | [
"Apache-2.0"
] | null | null | null | from sicpythontask.PythonTaskInfo import PythonTaskInfo
from sicpythontask.PythonTask import PythonTask
from sicpythontask.SicParameter import SicParameter
from sicpythontask.InputPort import InputPort
from sicpythontask.OutputPort import OutputPort
from sicpythontask.data.Int32 import Int32
@PythonTaskInfo
@SicParameter(name="M")
@SicParameter(name="N", default_value="5")
class Parameters(PythonTask):
    def __init_ports__(self):
        self.in_ = InputPort(name="in", data_type=Int32)
        self.out = OutputPort(name="out", data_type=Int32)

    def runner_start(self):
        self.n = int(self.get_parameter('N'))
        self.m = int(self.get_parameter('M'))

    def execute(self, grp):
        self.out.put_data(Int32(self.in_.get_data(Int32).values[0]*self.n + self.m))
| 32.916667 | 84 | 0.74557 | 102 | 790 | 5.627451 | 0.343137 | 0.1777 | 0.045296 | 0.066202 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020619 | 0.140506 | 790 | 23 | 85 | 34.347826 | 0.824742 | 0 | 0 | 0 | 0 | 0 | 0.012674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
d25532d0801b35285335d7d62daaa54571c4a971 | 2,697 | py | Python | wagtailnest/utils.py | ionata/wagtailnest | da903db0967e6f3b87db7213c9d94c0ec98048f5 | [
"BSD-3-Clause"
] | 1 | 2018-04-11T23:47:33.000Z | 2018-04-11T23:47:33.000Z | wagtailnest/utils.py | ionata/wagtailnest | da903db0967e6f3b87db7213c9d94c0ec98048f5 | [
"BSD-3-Clause"
] | 12 | 2017-07-18T01:52:06.000Z | 2021-09-08T00:15:37.000Z | wagtailnest/utils.py | ionata/wagtailnest | da903db0967e6f3b87db7213c9d94c0ec98048f5 | [
"BSD-3-Clause"
] | 1 | 2017-05-05T06:18:44.000Z | 2017-05-05T06:18:44.000Z | from functools import wraps
from dj_core.utils import as_absolute
from django.conf import settings
from django.urls import reverse
from rest_framework.settings import perform_import
from wagtail.core.blocks import RichTextBlock
from wagtail.core.models import Site, UserPagePermissionsProxy
from wagtail.images.formats import get_image_formats
def get_site():
    site = Site.objects.filter(pk=settings.SITE_ID).first()
    if site is None:
        site = Site.objects.filter(
            hostname=settings.DJCORE.URL.hostname).first()
    return site or Site.objects.first()


def _clean_rel_url(rel_url):
    return '/{}/'.format(rel_url.strip('/')).replace('//', '/')


def get_root_relative_url(url_path):
    """Remove the root page slug from the URL path"""
    return _clean_rel_url('/'.join(url_path.split('/')[2:]))


def get_url_path(root_relative_url):
    """Add the root page slug to the URL path"""
    return _clean_rel_url('/'.join(
        [get_site().root_page.url_path.strip('/')] +
        [s for s in root_relative_url.split('/') if s != '']))


def publishable_pages(user, qs=None):
    publishable = UserPagePermissionsProxy(user).publishable_pages()
    if qs is None:
        return publishable
    return qs.filter(id__in=publishable.values_list('pk', flat=True))


def can_publish(user, page):
    return page.pk in publishable_pages(user).values_list('pk', flat=True)


def generate_image_url(image, filter_spec):
    """From an Image, get a URL."""
    from wagtail.images.views.serve import generate_signature
    signature = generate_signature(image.id, filter_spec)
    name = 'wagtailimages_serve'
    url = reverse(name, args=(signature, image.id, filter_spec))
    return as_absolute(url)


def get_image_filter_spec(profile_name):
    profiles = {x.name: x for x in get_image_formats()}
    return getattr(profiles.get(profile_name, None), 'filter_spec', 'original')


def richtext_to_python(value):
    """Convert a RichText rendered string back into RichText."""
    if isinstance(value, str):
        return str(RichTextBlock().to_python(value))
    return value


def serialize_video_url(video_url):
    from wagtailnest.serializers import EmbedSerializer
    return EmbedSerializer.for_url(video_url)


def import_setting(name, default=None):
    value = perform_import(settings.WAGTAILNEST.get(name, None), name)
    return default if value is None else value


def nonraw_signal_handler(signal_handler):
    """Decorator that turns off signal handlers when loading fixture data."""
    @wraps(signal_handler)
    def wrapper(*args, **kwargs):
        if kwargs.get('raw'):
            return
        signal_handler(*args, **kwargs)
    return wrapper
| 31.360465 | 79 | 0.719318 | 371 | 2,697 | 5.032345 | 0.320755 | 0.022496 | 0.017675 | 0.022496 | 0.085699 | 0.057847 | 0.033208 | 0.033208 | 0 | 0 | 0 | 0.000445 | 0.167223 | 2,697 | 85 | 80 | 31.729412 | 0.83081 | 0.085651 | 0 | 0 | 0 | 0 | 0.02377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.236364 | false | 0 | 0.218182 | 0.036364 | 0.727273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d266cfc02c86def010526f635a28263a0737b8ff | 552 | py | Python | django_quicky/context_processors.py | sametmax/django-quicky | 2a87dbdcc6db400aff5a9119533bd3784fc4afb4 | [
"Zlib"
] | 149 | 2015-01-02T19:48:47.000Z | 2022-02-18T15:43:34.000Z | django_quicky/context_processors.py | keshapps/django-quicky | 2a87dbdcc6db400aff5a9119533bd3784fc4afb4 | [
"Zlib"
] | 3 | 2015-01-28T18:44:42.000Z | 2017-05-23T18:50:02.000Z | django_quicky/context_processors.py | keshapps/django-quicky | 2a87dbdcc6db400aff5a9119533bd3784fc4afb4 | [
"Zlib"
] | 11 | 2015-01-05T19:22:16.000Z | 2021-01-25T13:06:20.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# vim: ai ts=4 sts=4 et sw=4 nu
from django.views.debug import get_safe_settings
class SafeSettings(object):
"""
Map attributes to values in the safe settings dict
"""
def __init__(self):
self._settings = get_safe_settings()
def __getattr__(self, name):
try:
return self._settings[name.upper()]
except KeyError:
raise AttributeError
settings_obj = SafeSettings()
def settings(request):
return {'settings': settings_obj}
| 19.034483 | 58 | 0.63587 | 69 | 552 | 4.855072 | 0.681159 | 0.107463 | 0.089552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009732 | 0.255435 | 552 | 28 | 59 | 19.714286 | 0.805353 | 0.222826 | 0 | 0 | 0 | 0 | 0.019704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.083333 | 0.083333 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d27223323c43244be904aa993d9b2e5796b576a0 | 1,644 | py | Python | mpcereform/reform.py | michaelgfalk/mpce-database-reform | 097738b103a969489a25d2ae3ce5c704ec6e1dbd | [
"MIT"
] | 1 | 2019-07-02T01:36:47.000Z | 2019-07-02T01:36:47.000Z | mpcereform/reform.py | michaelgfalk/mpce-database-reform | 097738b103a969489a25d2ae3ce5c704ec6e1dbd | [
"MIT"
] | null | null | null | mpcereform/reform.py | michaelgfalk/mpce-database-reform | 097738b103a969489a25d2ae3ce5c704ec6e1dbd | [
"MIT"
] | 1 | 2019-08-14T06:12:59.000Z | 2019-08-14T06:12:59.000Z | """Command for reshaping existing 'manuscripts' database, and porting it to the new structure"""
import sys
import argparse
from mpcereform.core import LocalDB
def main():
"""Main entry point for the script"""
# Define argument parser
parser = argparse.ArgumentParser(description='Build the MPCE database from raw data.')
parser.add_argument('-u', '--user', type=str,
help='username for your MySQL/MariaDB server', default='root')
parser.add_argument('-p', '--password', type=str,
help='password for your MySQL/MariaDB server', default=None)
parser.add_argument('-hst', '--host', type=str,
help='hostname for your MySQL/MariaDB server (defaults to localhost)',
default='127.0.0.1')
args = parser.parse_args()
arg_dict = vars(args)
# Start connection, build schema if necessary
print('\nDATABASE CONNECTION')
print('======================\n')
db = LocalDB(**arg_dict) #pylint:disable=invalid-name;
# Run import methods
print('\nENTITY IMPORT')
print('======================\n')
db.import_works()
db.import_editions()
db.import_places()
print('\nEVENT IMPORT')
print('======================\n')
db.import_stn()
db.import_new_tables()
db.import_data_spreadsheets()
print('\nRESOLVING AGENT DATA')
print('======================\n')
db.resolve_agents()
print('\nADDING ADDITIONAL STRUCTURE TO DATABASE')
print('======================\n')
db.create_triggers()
db.summarise()
if __name__ == '__main__':
sys.exit(main())
| 31.018868 | 96 | 0.591849 | 186 | 1,644 | 5.102151 | 0.516129 | 0.05058 | 0.04215 | 0.060063 | 0.135933 | 0.067439 | 0 | 0 | 0 | 0 | 0 | 0.004662 | 0.217153 | 1,644 | 52 | 97 | 31.615385 | 0.732712 | 0.144161 | 0 | 0.138889 | 0 | 0 | 0.330223 | 0.086145 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0.055556 | 0.305556 | 0 | 0.333333 | 0.277778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
962a58eb1d233b2ad02abd3e67afb5e8981e9407 | 172 | py | Python | spectrify/__init__.py | grantatspothero/spectrify | b32910bf2d6b5c0d34578715d4ff2af0cea572d3 | [
"MIT"
] | null | null | null | spectrify/__init__.py | grantatspothero/spectrify | b32910bf2d6b5c0d34578715d4ff2af0cea572d3 | [
"MIT"
] | null | null | null | spectrify/__init__.py | grantatspothero/spectrify | b32910bf2d6b5c0d34578715d4ff2af0cea572d3 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""Top-level package for Spectrify."""
__author__ = """The Narrativ Company, Inc."""
__email__ = 'engineering@narrativ.com'
__version__ = '1.0.1'
| 21.5 | 45 | 0.656977 | 21 | 172 | 4.809524 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026846 | 0.133721 | 172 | 7 | 46 | 24.571429 | 0.651007 | 0.319767 | 0 | 0 | 0 | 0 | 0.495496 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
9641a6590cc0a7f2c2cf6df11e4a2c3ac8c4a6c3 | 1,666 | py | Python | openquake/hazardlib/scalerel/peer.py | gfzriesgos/shakyground-lfs | 2caf67cc32e6800286eded2df1efb05973ccf41b | [
"BSD-3-Clause"
] | 1 | 2019-08-01T00:28:24.000Z | 2019-08-01T00:28:24.000Z | openquake/hazardlib/scalerel/peer.py | gfzriesgos/shakyground-lfs | 2caf67cc32e6800286eded2df1efb05973ccf41b | [
"BSD-3-Clause"
] | 4 | 2018-08-31T14:14:35.000Z | 2021-10-11T12:53:13.000Z | openquake/hazardlib/scalerel/peer.py | gfzriesgos/shakyground-lfs | 2caf67cc32e6800286eded2df1efb05973ccf41b | [
"BSD-3-Clause"
] | 3 | 2018-08-31T14:11:00.000Z | 2019-07-17T10:06:02.000Z | # -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (C) 2012-2018 GEM Foundation
#
# OpenQuake is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# OpenQuake is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with OpenQuake. If not, see <http://www.gnu.org/licenses/>.
"""
Module :mod:`openquake.hazardlib.scalerel.peer` implements :class:`PeerMSR`.
"""
from openquake.hazardlib.scalerel.base import BaseMSRSigma
from openquake.baselib.slots import with_slots
@with_slots
class PeerMSR(BaseMSRSigma):
"""
Magnitude-Scaling Relationship defined for PEER PSHA test cases.
See "Verification of Probabilistic Seismic Hazard Analysis Computer
Programs", Patricia Thomas and Ivan Wong, PEER Report 2010/106, May 2010.
"""
_slots_ = []
def get_median_area(self, mag, rake):
"""
Calculates median area as ``10 ** (mag - 4)``. Rake is ignored.
"""
return 10 ** (mag - 4.0)
def get_std_dev_area(self, mag, rake):
"""
Standard deviation for PeerMSR. Mag and rake are ignored.
>>> peer = PeerMSR()
>>> 0.25 == peer.get_std_dev_area(4.0, 50)
True
"""
return 0.25
| 32.666667 | 77 | 0.690276 | 233 | 1,666 | 4.88412 | 0.575107 | 0.013181 | 0.031634 | 0.050088 | 0.087873 | 0.087873 | 0.059754 | 0 | 0 | 0 | 0 | 0.031322 | 0.214286 | 1,666 | 50 | 78 | 33.32 | 0.838044 | 0.720888 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.222222 | 0 | 0.888889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
96487499ba079f0859d79d5c4ffe39f18b10e29a | 431 | py | Python | sympy/polys/domains/characteristiczero.py | msgoff/sympy | 1e7daef7514902f5e89718fa957b7b36c6669a10 | [
"BSD-3-Clause"
] | null | null | null | sympy/polys/domains/characteristiczero.py | msgoff/sympy | 1e7daef7514902f5e89718fa957b7b36c6669a10 | [
"BSD-3-Clause"
] | null | null | null | sympy/polys/domains/characteristiczero.py | msgoff/sympy | 1e7daef7514902f5e89718fa957b7b36c6669a10 | [
"BSD-3-Clause"
] | null | null | null | """Implementation of :class:`CharacteristicZero` class. """
from __future__ import print_function, division
from sympy.polys.domains.domain import Domain
from sympy.utilities import public
@public
class CharacteristicZero(Domain):
"""Domain that has infinite number of elements. """
has_CharacteristicZero = True
def characteristic(self):
"""Return the characteristic of this domain. """
return 0
| 23.944444 | 59 | 0.733179 | 48 | 431 | 6.458333 | 0.604167 | 0.148387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002825 | 0.178654 | 431 | 17 | 60 | 25.352941 | 0.872881 | 0.327146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.375 | 0 | 0.875 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
9657189fa10a1d0b806ca88143156feeda494c67 | 1,211 | py | Python | src/ingest-pipeline/md/data_file_types/__init__.py | AustinHartman/ingest-pipeline | 788d9310792c9396a38650deda3dad11483b368c | [
"MIT"
] | 6 | 2020-02-18T19:09:59.000Z | 2021-10-07T20:38:46.000Z | src/ingest-pipeline/md/data_file_types/__init__.py | AustinHartman/ingest-pipeline | 788d9310792c9396a38650deda3dad11483b368c | [
"MIT"
] | 324 | 2020-02-06T22:08:50.000Z | 2022-03-24T20:44:33.000Z | src/ingest-pipeline/md/data_file_types/__init__.py | AustinHartman/ingest-pipeline | 788d9310792c9396a38650deda3dad11483b368c | [
"MIT"
] | 2 | 2020-07-20T14:43:49.000Z | 2021-10-29T18:24:36.000Z | from .ignore_metadata_file import IgnoreMetadataFile
from .yaml_metadata_file import YamlMetadataFile
from .json_metadata_file import JSONMetadataFile
from .false_json_metadata_file import FalseJSONMetadataFile
from .txt_tform_metadata_file import TxtTformMetadataFile
from .txt_wordlist_metadata_file import TxtWordListMetadataFile
from .mtx_tform_metadata_file import MtxTformMetadataFile
#from .czi_metadata_file import CZIMetadataFile
from .ome_tiff_metadata_file import OMETiffMetadataFile
from .scn_tiff_metadata_file import ScnTiffMetadataFile
from .imzml_metadata_file import ImzMLMetadataFile
from .fastq_metadata_file import FASTQMetadataFile
from .csv_metadata_file import CSVMetadataFile
from .tsv_metadata_file import TSVMetadataFile
from .metadatatsv_metadata_file import MetadataTSVMetadataFile
__all__ = ["IgnoreMetadataFile", "YamlMetadataFile", "JSONMetadataFile",
"TxtTformMetadataFile", "MtxTformMetadataFile",
#"CZIMetadataFile",
"OMETiffMetadataFile", "ScnTiffMetadataFile", "ImzMLMetadataFile",
"FASTQMetadataFile", "FalseJSONMetadataFile", "TxtWordListMetadataFile",
"CSVMetadataFile", "TSVMetadataFile", "MetadataTSVMetadataFile"]
# File: app/views/__test__/test_foo.py (alexjravila/JogoBaixoWebAPI, MIT license)
from http import HTTPStatus
def test_foo_get_all_should_be_ok(client):
expectedResult = []
response = client.get("/api/foo")
    assert response.status_code == HTTPStatus.OK

# File: code/List/sample.py (jumploop/30-seconds-of-python, CC0-1.0 license)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
功能实现:从列表中返回一个随机元素。
解读:
使用random.choice()从lst中获取一个随机元素
"""
from random import choice
def sample(lst):
return choice(lst)
# Examples
print(sample([3, 7, 9, 11]))
# output:
# 9
# File: elliptic_integral_approximation.py (BIDS-numpy/presentation-uofm-2020, BSD-3-Clause license)
import numpy as np
def schlawin(x):
a1 = 0.44325141463
a2 = 0.06260601220
a3 = 0.04757383546
a4 = 0.01736506451
b1 = 0.24998368310
b2 = 0.09200180037
b3 = 0.04069697526
b4 = 0.00526449639
return 1 + x*(a1 + x*(a2 + x*(a3 + x*a4))) + x*(b1 + x*(b2 + x*(b3 + x*b4)))*np.log(1/x)
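The coefficients above appear to follow the classic polynomial-plus-logarithm approximation of the complete elliptic integral of the second kind E(m), with `x` playing the role of 1 - m (this interpretation is an inference, not stated in the file). A quick sanity check at x = 1, where the log term vanishes exactly and the true value is E(0) = pi/2:

```python
import math
import numpy as np

def schlawin(x):
    # Same coefficients as the file above; x plays the role of 1 - m.
    a1, a2, a3, a4 = 0.44325141463, 0.06260601220, 0.04757383546, 0.01736506451
    b1, b2, b3, b4 = 0.24998368310, 0.09200180037, 0.04069697526, 0.00526449639
    return (1 + x*(a1 + x*(a2 + x*(a3 + x*a4)))
            + x*(b1 + x*(b2 + x*(b3 + x*b4)))*np.log(1/x))

# E(0) = pi/2; at x = 1 the logarithm term is exactly zero, so the
# polynomial part alone must sum to pi/2.
print(abs(schlawin(1.0) - math.pi/2) < 1e-9)  # True
```

At the other end, x -> 0 gives a value approaching 1, matching E(1) = 1.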
# File: src/rocommand/ro_annotation.py (A-Mazurek/ro-manager, MIT license)
# ro_annotation.py
"""
Research Object annotation read, write, decode functions
"""
__author__ = "Graham Klyne (GK@ACM.ORG)"
__copyright__ = "Copyright 2011-2013, University of Oxford"
__license__ = "MIT (http://opensource.org/licenses/MIT)"
import sys
import os
import os.path
import datetime
import logging
import re
import urlparse
log = logging.getLogger(__name__)
import rdflib
#from rdflib.namespace import RDF, RDFS
#from rdflib import URIRef, Namespace, BNode
#from rdflib import Literal
import ro_settings
import ro_manifest
from ro_namespaces import RDF, RDFS, RO, AO, ORE, DCTERMS, ROTERMS
from ro_uriutils import resolveUri, resolveFileAsUri
from ro_prefixes import prefix_dict
# Default list of annotation types
annotationTypes = (
[
{ "name": "type", "prefix": "dcterms", "localName": "type", "type": "string"
, "baseUri": DCTERMS.baseUri, "fullUri": DCTERMS.type
, "label": "Type"
, "description": "Word or brief phrase describing type of Research Object component"
}
, { "name": "keywords", "prefix": "dcterms", "localName": "subject", "type": "termlist"
, "baseUri": DCTERMS.baseUri, "fullUri": DCTERMS.subject
, "label": "Keywords"
, "description": "List of key words or phrases associated with a Research Object component"
}
, { "name": "description", "prefix": "dcterms", "localName": "description", "type": "text"
, "baseUri": DCTERMS.baseUri, "fullUri": DCTERMS.description
, "label": "Description"
, "description": "Extended description of Research Object component"
}
, { "name": "format", "prefix": "dcterms", "localName": "format", "type": "string"
, "baseUri": DCTERMS.baseUri, "fullUri": DCTERMS.format
, "label": "Data format"
, "description": "String indicating the data format of a Research Object component"
}
, { "name": "note", "prefix": "dcterms", "localName": "note", "type": "text"
, "baseUri": ROTERMS.baseUri, "fullUri": ROTERMS.note
, "label": "Note"
, "description": "String indicating some information about a Research Object component"
}
, { "name": "title", "prefix": "dcterms", "localName": "title", "type": "string"
, "baseUri": DCTERMS.baseUri, "fullUri": DCTERMS.title
, "label": "Title"
, "description": "Title of Research Object component"
}
, { "name": "created", "prefix": "dcterms", "localName": "created", "type": "datetime"
, "baseUri": DCTERMS.baseUri, "fullUri": DCTERMS.created
, "label": "Creation time"
, "description": "Date and time that Research Object component was created"
}
, { "name": "rdf:type", "prefix": "rdf", "localName": "type", "type": "resource"
, "baseUri": RDF.baseUri, "fullUri": RDF.type
, "label": "RDF type"
, "description": "RDF type of the annotated object"
}
, { "name": "rdfs:seeAlso", "prefix": "rdfs", "localName": "seeAlso", "type": "resource"
, "baseUri": RDFS.baseUri, "fullUri": RDFS.seeAlso
, "label": "See also"
, "description": "Related resource with further information"
}
])
# Default list of annotation prefixes
annotationPrefixes = prefix_dict.copy()
annotationPrefixes.update({'ex': "http://example.org/ro/annotation#"})
# Annotation support functions
def getResourceNameString(ro_config, rname, base=None):
"""
    Returns a string value corresponding to a URI indicated by the supplied parameter.
    Relative references are assumed to be paths relative to the supplied base URI or,
    if no base is supplied, relative to the current directory.
"""
rsplit = rname.split(":")
if len(rsplit) == 2:
# Try to interpret name as CURIE
for rpref in ro_config["annotationPrefixes"]:
if rsplit[0] == rpref:
rname = ro_config["annotationPrefixes"][rpref]+rsplit[1]
if urlparse.urlsplit(rname).scheme == "":
if base:
rname = resolveUri(rname, base)
else:
rname = resolveFileAsUri(rname)
return rname
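The CURIE branch above can be exercised in isolation. A minimal Python 3 sketch (the prefix map is illustrative; the real code reads `ro_config["annotationPrefixes"]` and then resolves any remaining relative reference against a base URI):

```python
def expand_name(name, prefixes):
    """Expand a CURIE such as 'dcterms:title'; leave other names unchanged."""
    parts = name.split(":")
    if len(parts) == 2 and parts[0] in prefixes:
        return prefixes[parts[0]] + parts[1]
    return name

prefixes = {"dcterms": "http://purl.org/dc/terms/"}
print(expand_name("dcterms:title", prefixes))         # http://purl.org/dc/terms/title
print(expand_name("http://example.org/x", prefixes))  # absolute URI passes through
```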
def getAnnotationByName(ro_config, aname, defaultType="string"):
"""
Given an attribute name from the command line, returns
attribute predicate and type URIs as a URIRef node and attribute value type
"""
predicate = aname
valtype = defaultType
for atype in ro_config["annotationTypes"]:
# Try to interpret attribute name as predefined name
if atype["name"] == aname:
predicate = atype["fullUri"]
valtype = atype["type"]
break
else:
predicate = getResourceNameString(ro_config, aname, base=ROTERMS.defaultBase+"#")
predicate = rdflib.URIRef(predicate)
return (predicate, valtype)
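Note the `for ... else` construct above: the `else` branch runs only when the loop completes without hitting `break`, i.e. when the name is not one of the predefined annotation types. A standalone sketch of the same control flow (the fallback URI prefix is illustrative, not the one used by the real code):

```python
def find_type(aname, annotation_types, default_type="string"):
    for atype in annotation_types:
        if atype["name"] == aname:
            predicate, valtype = atype["fullUri"], atype["type"]
            break
    else:
        # No break fired: the name was not predefined, so fall back.
        predicate, valtype = "http://example.org/terms#" + aname, default_type
    return predicate, valtype

types = [{"name": "title", "fullUri": "http://purl.org/dc/terms/title", "type": "string"}]
print(find_type("title", types))   # ('http://purl.org/dc/terms/title', 'string')
print(find_type("custom", types))  # ('http://example.org/terms#custom', 'string')
```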
def getAnnotationByUri(ro_config, auri, defaultType="string"):
"""
Given an attribute URI from the manifest graph, returns an
attribute name and type tuple for displaying an attribute
"""
# Look for predefined name
for atype in ro_config["annotationTypes"]:
if str(atype["fullUri"]) == str(auri):
return (atype["name"], atype["type"])
# Look for CURIE match
for (pref,puri) in ro_config["annotationPrefixes"].iteritems():
if auri.startswith(puri):
return (pref+":"+auri[len(puri):], defaultType)
# return full URI in angle brackets
return ("<"+str(auri)+">", defaultType)
def getAnnotationNameByUri(ro_config, uri):
"""
Given an attribute URI from the manifest graph, returns an
attribute name for displaying an attribute
"""
return getAnnotationByUri(ro_config, uri)[0]
def makeAnnotationFilename(rodir, afile):
#log.debug("makeAnnotationFilename: %s, %s"%(rodir, afile))
return os.path.join(os.path.abspath(rodir), ro_settings.MANIFEST_DIR+"/", afile)
def makeComponentFilename(rodir, rofile):
log.debug("makeComponentFilename: %s, %s"%(rodir, rofile))
return os.path.join(rodir, rofile)
def readAnnotationBody(rodir, annotationfile):
"""
Read annotation body from indicated file, return RDF Graph of annotation values.
"""
log.debug("readAnnotationBody: %s, %s"%(rodir, annotationfile))
annotationfilename = makeComponentFilename(rodir, annotationfile)
if not os.path.exists(annotationfilename): return None
annotationformat = "xml"
# Look at file extension to figure format
    if re.search(r"\.(ttl|n3)$", annotationfile): annotationformat="n3"
rdfGraph = rdflib.Graph()
rdfGraph.parse(annotationfilename, format=annotationformat)
return rdfGraph
def createAnnotationGraphBody(ro_config, ro_dir, rofile, anngraph):
"""
Create a new annotation body for a single resource in a research object, based
on a supplied graph value.
Existing annotations for the same resource are not touched; if an annotation is being
    added or replaced, it is the calling program's responsibility to update the manifest to
    reference the active annotations. A new name is allocated for the created annotation
    graph, which is returned as the result of this function.
ro_config is the research object manager configuration, supplied as a dictionary
ro_dir is the research object root directory
rofile is the name of the Research Object component to be annotated, possibly
relative to the RO root directory.
anngraph is an annotation graph that is to be saved.
Returns the name of the annotation body created relative to the RO
manifest and metadata directory.
"""
# Determine name for annotation body
log.debug("createAnnotationGraphBody: %s, %s"%(ro_dir, rofile))
annotation_filename = None
name_index = 0
name_suffix = os.path.basename(rofile)
if name_suffix in [".",""]:
name_suffix = os.path.basename(os.path.normpath(ro_dir))
today = datetime.date.today()
    while annotation_filename is None:
name_index += 1
name = ("Ann-%04d%02d%02d-%04d-%s.rdf"%
(today.year, today.month, today.day, name_index, name_suffix))
if not os.path.exists(makeAnnotationFilename(ro_dir, name)):
annotation_filename = name
# Create annotation body file
log.debug("createAnnotationGraphBody: %s"%(annotation_filename))
anngraph.serialize(destination=makeAnnotationFilename(ro_dir, annotation_filename),
format='xml', base=ro_manifest.getRoUri(ro_dir), xml_base="..")
return annotation_filename
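The name-allocation loop can be sketched on its own; the `exists` parameter (hypothetical, injected here so the logic is testable) stands in for `os.path.exists` on the metadata directory, and ".ro" stands in for `ro_settings.MANIFEST_DIR`:

```python
import datetime
import os

def allocate_annotation_name(ro_dir, suffix, exists=os.path.exists):
    """Return the first unused 'Ann-YYYYMMDD-NNNN-suffix.rdf' name."""
    today = datetime.date.today()
    index = 0
    while True:
        index += 1
        name = ("Ann-%04d%02d%02d-%04d-%s.rdf"
                % (today.year, today.month, today.day, index, suffix))
        if not exists(os.path.join(ro_dir, ".ro", name)):  # ".ro" is illustrative
            return name

# With an empty directory the very first candidate wins.
print(allocate_annotation_name("/tmp/ro", "data.ods", exists=lambda p: False))
```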
def createAnnotationBody(ro_config, ro_dir, rofile, attrdict, defaultType="string"):
"""
Create a new annotation body for a single resource in a research object.
Existing annotations for the same resource are not touched; if an annotation is being
    added or replaced, it is the calling program's responsibility to update the manifest to
reference the active annotations. A new name is allocated for the created annotation,
which is returned as the result of this function.
ro_config is the research object manager configuration, supplied as a dictionary
ro_dir is the research object root directory
rofile is the name of the Research Object component to be annotated, possibly
relative to the RO root directory.
    attrdict is a dictionary of attributes to be saved in the annotation body.
Dictionary keys are attribute names that can be resolved via getAnnotationByName.
Returns the name of the annotation body created relative to the RO
manifest and metadata directory.
"""
# Assemble data for annotation
anngraph = rdflib.Graph()
s = ro_manifest.getComponentUri(ro_dir, rofile)
for k in attrdict:
(p,t) = getAnnotationByName(ro_config, k, defaultType)
anngraph.add((s, p, makeAnnotationValue(ro_config, attrdict[k],t)))
# Write graph and return filename
return createAnnotationGraphBody(ro_config, ro_dir, rofile, anngraph)
def _addAnnotationBodyToRoGraph(ro_graph, ro_dir, rofile, annfile):
"""
Add a new annotation body to an RO graph
ro_graph graph to which annotation is added
ro_dir is the research object directory
rofile is the research object file being annotated
annfile is the base file name of the annotation body to be added
"""
# <ore:aggregates>
# <ro:AggregatedAnnotation>
# <ro:annotatesAggregatedResource rdf:resource="data/UserRequirements-astro.ods" />
# <ao:body rdf:resource=".ro/(annotation).rdf" />
# </ro:AggregatedAnnotation>
# </ore:aggregates>
ann = rdflib.BNode()
ro_graph.add((ann, RDF.type, RO.AggregatedAnnotation))
ro_graph.add((ann, RO.annotatesAggregatedResource, ro_manifest.getComponentUri(ro_dir, rofile)))
ro_graph.add((ann, AO.body, ro_manifest.getComponentUri(ro_dir, ro_settings.MANIFEST_DIR+"/"+annfile)))
ro_graph.add((ro_manifest.getComponentUri(ro_dir, "."), ORE.aggregates, ann))
return
def _removeAnnotationBodyFromRoGraph(ro_graph, annbody):
"""
Remove references to an annotation body from an RO graph
ro_graph graph from which annotation is removed
annbody is the the annotation body node to be removed
"""
ro_graph.remove((annbody, None, None ))
ro_graph.remove((None, ORE.aggregates, annbody))
return
def _addSimpleAnnotation(ro_config, ro_dir, rofile, attrname, attrvalue):
"""
Add a simple annotation to a file in a research object.
ro_config is the research object manager configuration, supplied as a dictionary
ro_dir is the research object root directory
rofile names the file or resource to be annotated, possibly relative to the RO.
attrname names the attribute in a form recognized by getAnnotationByName
attrvalue is a value to be associated with the attribute
"""
annfile = createAnnotationBody(ro_config, ro_dir, rofile, { attrname: attrvalue} )
ro_graph = ro_manifest.readManifestGraph(ro_dir)
_addAnnotationBodyToRoGraph(ro_graph, ro_dir, rofile, annfile)
ro_manifest.writeManifestGraph(ro_dir, ro_graph)
return
def _removeSimpleAnnotation(ro_config, ro_dir, rofile, attrname, attrvalue):
"""
    Remove a simple annotation, or multiple matching annotations, from a research object.
ro_config is the research object manager configuration, supplied as a dictionary
ro_dir is the research object root directory
rofile names the annotated file or resource, possibly relative to the RO.
attrname names the attribute in a form recognized by getAnnotationByName
    attrvalue is the attribute value to be deleted, or None to delete all values
"""
log.debug("removeSimpleAnnotation: ro_dir %s, rofile %s, attrname %s, attrvalue %s"%
(ro_dir, rofile, attrname, attrvalue))
# Enumerate annotations
# For each:
# if annotation is only one in graph then:
# remove aggregated annotation
# else:
    #     create new annotation graph with annotation removed
# update aggregated annotation
ro_graph = ro_manifest.readManifestGraph(ro_dir)
subject = ro_manifest.getComponentUri(ro_dir, rofile)
(predicate,valtype) = getAnnotationByName(ro_config, attrname)
val = attrvalue and makeAnnotationValue(ro_config, attrvalue, valtype)
#@@TODO refactor common code with getRoAnnotations, etc.
add_annotations = []
remove_annotations = []
for ann_node in ro_graph.subjects(predicate=RO.annotatesAggregatedResource, object=subject):
ann_uri = ro_graph.value(subject=ann_node, predicate=AO.body)
ann_graph = readAnnotationBody(ro_dir, ro_manifest.getComponentUriRel(ro_dir, ann_uri))
if (subject, predicate, val) in ann_graph:
ann_graph.remove((subject, predicate, val))
if (subject, None, None) in ann_graph:
# Triples remain in annotation body: write new body and update RO graph
ann_name = createAnnotationGraphBody(ro_config, ro_dir, rofile, ann_graph)
remove_annotations.append(ann_node)
add_annotations.append(ann_name)
else:
# Remove annotation from RO graph
remove_annotations.append(ann_node)
# Update RO graph if needed
if add_annotations or remove_annotations:
for a in remove_annotations:
_removeAnnotationBodyFromRoGraph(ro_graph, a)
for a in add_annotations:
_addAnnotationBodyToRoGraph(ro_graph, ro_dir, rofile, a)
ro_manifest.writeManifestGraph(ro_dir, ro_graph)
return
def _replaceSimpleAnnotation(ro_config, ro_dir, rofile, attrname, attrvalue):
"""
Replace a simple annotation in a research object.
ro_config is the research object manager configuration, supplied as a dictionary
ro_dir is the research object root directory
rofile names the file or resource to be annotated, possibly relative to the RO.
attrname names the attribute in a form recognized by getAnnotationByName
attrvalue is a new value to be associated with the attribute
"""
ro_graph = ro_manifest.readManifestGraph(ro_dir)
subject = ro_manifest.getComponentUri(ro_dir, rofile)
(predicate,valtype) = getAnnotationByName(ro_config, attrname)
log.debug("Replace annotation: subject %s, predicate %s, value %s"%(repr(subject), repr(predicate), repr(attrvalue)))
ro_graph.remove((subject, predicate, None))
ro_graph.add((subject, predicate, makeAnnotationValue(ro_config, attrvalue, valtype)))
ro_manifest.writeManifestGraph(ro_dir, ro_graph)
return
def _getAnnotationValues(ro_config, ro_dir, rofile, attrname):
"""
Returns iterator over annotation values for given subject and attribute
"""
log.debug("getAnnotationValues: ro_dir %s, rofile %s, attrname %s"%(ro_dir, rofile, attrname))
ro_graph = ro_manifest.readManifestGraph(ro_dir)
subject = ro_manifest.getComponentUri(ro_dir, rofile)
(predicate,valtype) = getAnnotationByName(ro_config, attrname)
#@@TODO refactor common code with getRoAnnotations, etc.
for ann_node in ro_graph.subjects(predicate=RO.annotatesAggregatedResource, object=subject):
ann_uri = ro_graph.value(subject=ann_node, predicate=AO.body)
ann_graph = readAnnotationBody(ro_dir, ro_manifest.getComponentUriRel(ro_dir, ann_uri))
for v in ann_graph.objects(subject=subject, predicate=predicate):
#log.debug("Triple: %s %s %s"%(subject,p,v))
yield v
return
def _getRoAnnotations(ro_dir):
"""
Returns iterator over annotations applied to the RO as an entity.
Each value returned by the iterator is a (subject,predicate,object) triple.
"""
ro_graph = ro_manifest.readManifestGraph(ro_dir)
subject = ro_manifest.getRoUri(ro_dir)
log.debug("getRoAnnotations %s"%str(subject))
for ann_node in ro_graph.subjects(predicate=RO.annotatesAggregatedResource, object=subject):
ann_uri = ro_graph.value(subject=ann_node, predicate=AO.body)
ann_graph = readAnnotationBody(ro_dir, ro_manifest.getComponentUriRel(ro_dir, ann_uri))
if ann_graph:
for (p, v) in ann_graph.predicate_objects(subject=subject):
#log.debug("Triple: %s %s %s"%(subject,p,v))
yield (subject, p, v)
return
def _getFileAnnotations(ro_dir, rofile):
"""
Returns iterator over annotations applied to a specified component in the RO
Each value returned by the iterator is a (subject,predicate,object) triple.
"""
log.debug("getFileAnnotations: ro_dir %s, rofile %s"%(ro_dir, rofile))
ro_graph = ro_manifest.readManifestGraph(ro_dir)
subject = ro_manifest.getComponentUri(ro_dir, rofile)
log.debug("getFileAnnotations: %s"%str(subject))
#@@TODO refactor common code with getRoAnnotations, etc.
for ann_node in ro_graph.subjects(predicate=RO.annotatesAggregatedResource, object=subject):
ann_uri = ro_graph.value(subject=ann_node, predicate=AO.body)
ann_graph = readAnnotationBody(ro_dir, ro_manifest.getComponentUriRel(ro_dir, ann_uri))
if ann_graph:
for (p, v) in ann_graph.predicate_objects(subject=subject):
#log.debug("Triple: %s %s %s"%(subject,p,v))
yield (subject, p, v)
return
def getAllAnnotations(ro_dir):
"""
Returns iterator over all annotations associated with the RO
Each value returned by the iterator is a (subject,predicate,object) triple.
"""
log.debug("getAllAnnotations %s"%str(ro_dir))
ro_graph = ro_manifest.readManifestGraph(ro_dir)
#@@TODO refactor common code with getRoAnnotations, etc.
for (ann_node, subject) in ro_graph.subject_objects(predicate=RO.annotatesAggregatedResource):
ann_uri = ro_graph.value(subject=ann_node, predicate=AO.body)
log.debug("- ann_uri %s"%(str(ann_uri)))
ann_graph = readAnnotationBody(ro_dir, ro_manifest.getComponentUriRel(ro_dir, ann_uri))
        if ann_graph is None:
log.debug("No annotation graph: ann_uri: "+str(ann_uri))
else:
for (p, v) in ann_graph.predicate_objects(subject=subject):
#log.debug("Triple: %s %s %s"%(subject,p,v))
yield (subject, p, v)
return
def makeAnnotationValue(ro_config, aval, atype):
"""
    atype is one of "string", "resource", ...
Returns a graph node for the supplied type and value
"""
#@@TODO: construct appropriately typed RDF literals
if atype == "resource":
return rdflib.URIRef(getResourceNameString(ro_config, aval))
if atype == "string":
return rdflib.Literal(aval)
if atype == "text":
return rdflib.Literal(aval)
if atype == "datetime":
return rdflib.Literal(aval)
return rdflib.Literal(aval)
def formatAnnotationValue(aval, atype):
"""
    atype is one of "string", "resource", ...
"""
#@@TODO: deal with appropriately typed RDF literals
if atype == "resource" or isinstance(aval,rdflib.URIRef):
return '<' + str(aval) + '>'
if atype == "string":
return '"' + unicode(aval).encode('utf-8').replace('"', '\\"') + '"'
if atype == "text":
# multiline
return '"""' + unicode(aval).encode('utf-8') + '"""'
if atype == "datetime":
return '"' + str(aval) + '"'
return str(aval)
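For reference, the same dispatch in Python 3 (the original targets Python 2, where `unicode(...)` and `.encode('utf-8')` are needed); a hedged sketch, not a drop-in replacement:

```python
def format_value(aval, atype):
    """Render an annotation value in a rough Turtle style, keyed by type."""
    if atype == "resource":
        return "<" + str(aval) + ">"
    if atype == "string":
        return '"' + str(aval).replace('"', '\\"') + '"'
    if atype == "text":
        return '"""' + str(aval) + '"""'  # long literal for multiline text
    if atype == "datetime":
        return '"' + str(aval) + '"'
    return str(aval)

print(format_value("http://example.org/doc", "resource"))  # <http://example.org/doc>
```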
def showAnnotations(ro_config, ro_dir, annotations, outstr):
sname_prev = None
for (asubj,apred,aval) in annotations:
# log.debug("Annotations: asubj %s, apred %s, aval %s"%
# (repr(asubj), repr(apred), repr(aval)))
if apred != ORE.aggregates:
(aname, atype) = getAnnotationByUri(ro_config, apred)
sname = ro_manifest.getComponentUriRel(ro_dir, str(asubj))
log.debug("Annotations: sname %s, aname %s"%(sname, aname))
if sname == "":
sname = ro_manifest.getRoUri(ro_dir)
if sname != sname_prev:
                outstr.write("\n<"+str(sname)+">\n")
sname_prev = sname
outstr.write(" %s %s\n"%(aname, formatAnnotationValue(aval, atype)))
return
# End.
# File: arxiv/integration/meta.py (ibnesayeed/arxiv-base, MIT license)
"""Meta tools for integrations."""
import inspect
from typing import Any
class MetaIntegration(type):
"""
Metaclass for context-bound integrations.
The purpose of this metaclass is to simplify writing integrations that
need to attach to the application or request context.
A typical pattern when using an integration's method is to first check for
an instance of the integration on the current context, and then call the
method on that instance. In practice, this means either performing the
context check every time the method is used (yuck) or implementing
module-level functions that wrap the integration class instance methods,
and check the context when called (also yuck).
Here's an example of what life was like back then:
.. code-block:: python
class FileManagementService(object):
'''Encapsulates a connection with the file management service.'''
def __init__(self, *args, **kwargs) -> None:
...
def get_upload_status(self, upload_id: int) -> Upload:
...
def init_app(app: object = None) -> None:
'''Set default configuration params for an application instance.'''
config = get_application_config(app)
config.setdefault('FILE_MANAGER_ENDPOINT', 'https://arxiv.org/')
config.setdefault('FILE_MANAGER_VERIFY', True)
def get_session(app: object = None) -> FileManagementService:
'''Get a new session with the file management endpoint.'''
config = get_application_config(app)
endpoint = config.get('FILE_MANAGER_ENDPOINT', 'https://arxiv.org/')
verify_cert = config.get('FILE_MANAGER_VERIFY', True)
return FileManagementService(endpoint, verify_cert=verify_cert)
def current_session() -> FileManagementService:
'''Get/create :class:`.FileManagementService` for this context.'''
g = get_application_global()
if not g:
return get_session()
elif 'filemanager' not in g:
g.filemanager = get_session() # type: ignore
return g.filemanager # type: ignore
@wraps(FileManagementService.get_upload_status)
def get_upload_status(upload_id: int) -> Upload:
'''See :meth:`FileManagementService.get_upload_status`.'''
return current_session().get_upload_status(upload_id)
and then when you want to use it:
.. code-block:: python
from my.cool.services import filemanager
filemanager.get_upload_status(59192)
A better way
------------
Classes that use this metaclass are expected to implement a classmethod
called ``current_session()`` that returns an instance of the class, similar
to the example above. That method is responsible for getting the correct
instance for the context.
The net effect is that instance methods will be exposed as if they were
class methods; behind the scenes, this metaclass will call
``current_session()`` and obtain the bound method from the returned
instance.
So you can do something like:
.. code-block:: python
class FileManagementService(metaclass=MetaIntegration):
'''Encapsulates a connection with the file management service.'''
def __init__(self, *args, **kwargs) -> None:
...
def get_upload_status(self, upload_id: int) -> Upload:
...
@classmethod
def init_app(cls, app: object = None) -> None:
'''Set default configuration params for an application instance.'''
config = get_application_config(app)
config.setdefault('FILE_MANAGER_ENDPOINT', 'https://arxiv.org/')
config.setdefault('FILE_MANAGER_VERIFY', True)
@classmethod
def get_session(cls, app: object = None) -> 'FileManagementService':
'''Get a new session with the file management endpoint.'''
config = get_application_config(app)
endpoint = config.get('FILE_MANAGER_ENDPOINT', 'https://arxiv.org/')
verify_cert = config.get('FILE_MANAGER_VERIFY', True)
return cls(endpoint, verify_cert=verify_cert)
@classmethod
def current_session(cls) -> 'FileManagementService':
'''Get/create :class:`.FileManagementService` for this context.'''
g = get_application_global()
if not g:
return cls.get_session()
elif 'filemanager' not in g:
g.filemanager = cls.get_session() # type: ignore
return g.filemanager # type: ignore
and then use it like:
.. code-block:: python
from my.cool.services.filemanager import FileManager
FileManager.get_upload_status(59192)
An even better way
------------------
The example above was only a marginal improvement. What would be better
is not having to write any of those classmethods at all.
For an example of what that looks like, see
:class:`.api.service.HTTPIntegration`.
"""
def __getattribute__(self, key: str) -> Any:
"""Get the attribute from the instance bound to the current context."""
obj = super(MetaIntegration, self).__getattribute__(key)
if inspect.isfunction(obj):
return getattr(self.current_session(), key)
return obj
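As a quick illustration of the dispatch in `__getattribute__`, here is a self-contained toy: the `Service` class and its endpoint are hypothetical, and the `_instance` slot stands in for the Flask application context that real integrations resolve `current_session()` against.

```python
import inspect

class MetaIntegration(type):
    """Redirect plain-function attributes to the context-bound instance."""
    def __getattribute__(cls, key):
        obj = super().__getattribute__(key)
        if inspect.isfunction(obj):
            # Plain functions are unbound instance methods: fetch the
            # bound method from the current session instead.
            return getattr(cls.current_session(), key)
        return obj

class Service(metaclass=MetaIntegration):
    _instance = None  # stand-in for Flask's per-request global

    def __init__(self, endpoint):
        self.endpoint = endpoint

    @classmethod
    def current_session(cls):
        # classmethods resolve to bound methods, not plain functions,
        # so the metaclass passes them through untouched.
        if cls._instance is None:
            cls._instance = cls("https://example.org/")
        return cls._instance

    def status(self):
        return "ok via " + self.endpoint

print(Service.status())  # instance method called through the class
# ok via https://example.org/
```

Calling `Service.status()` looks like a classmethod call, but the metaclass transparently routes it to the bound method of the context-local instance.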
# File: tests/repositories/test_integration_class.py (racelandshop/integration, MIT license)
import pytest
from custom_components.racelandshop.helpers.classes.exceptions import RacelandshopException
@pytest.mark.asyncio
async def test_async_post_installation(repository_integration, racelandshop):
await repository_integration.async_post_installation()
repository_integration.data.config_flow = True
repository_integration.data.first_install = True
racelandshop.hass.data["custom_components"] = {}
await repository_integration.async_post_installation()
@pytest.mark.asyncio
async def test_async_post_registration(repository_integration):
await repository_integration.async_post_registration()
@pytest.mark.asyncio
async def test_reload_custom_components(repository_integration, racelandshop):
racelandshop.hass.data["custom_components"] = {}
await repository_integration.reload_custom_components()
@pytest.mark.asyncio
async def test_validate_repository(repository_integration):
with pytest.raises(RacelandshopException):
await repository_integration.validate_repository()
@pytest.mark.asyncio
async def test_update_repository(repository_integration):
await repository_integration.update_repository()
# File: custom-scorer/retrieve_and_rank_scorer/scorer_exception.py (al7e/answer-retrieval, Apache-2.0 license)
# Copyright 2016 IBM All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Exceptions to be used by various services.

ScorerConfigurationException: Raised if a Scorer is improperly configured
ScorerRuntimeException: Raised if a Scorer has a runtime error
"""


class ScorerConfigurationException(Exception):

    def __init__(self, message):
        """ Wrapper exception for configuration issues. Should be raised if:
            1) Inputs to the constructor of a scorer are bad,
            2) An invariant of the scorer is violated,
            etc.
        """
        super(ScorerConfigurationException, self).__init__(message)
# endclass ScorerConfigurationException


class ScorerRuntimeException(Exception):

    def __init__(self, message):
        """ Wrapper exception for general runtime issues in a scorer. Should be raised if:
            1) Input to an API method (likely .score) is invalid,
            2) Unforeseen problems prevent properly scoring a query, document,
               or query/document pair
        """
        super(ScorerRuntimeException, self).__init__(message)
# endclass ScorerRuntimeException


class ScorerTimeoutException(ScorerRuntimeException):

    def __init__(self, message, *args, **kwargs):
        """ Should be raised if a scorer times out. """
        super(ScorerTimeoutException, self).__init__(message)
        self._args = args
        self._kwargs = kwargs
# endclass ScorerTimeoutException
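A minimal usage sketch of the configuration exception. The `make_scorer` factory and its `weight` check are hypothetical, and the exception is redefined in stripped-down form so the snippet is self-contained:

```python
class ScorerConfigurationException(Exception):
    """Stripped-down copy of the exception defined above."""
    pass


def make_scorer(weight):
    # Hypothetical factory: validate constructor input, as the docstring advises.
    if not 0.0 <= weight <= 1.0:
        raise ScorerConfigurationException("weight must be in [0, 1], got %r" % weight)
    return {"weight": weight}


try:
    make_scorer(2.5)
except ScorerConfigurationException as exc:
    print("configuration error: %s" % exc)  # configuration error: weight must be in [0, 1], got 2.5
```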
| 39.196078 | 102 | 0.705853 | 236 | 1,999 | 5.868644 | 0.512712 | 0.043321 | 0.019495 | 0.032491 | 0.077978 | 0.05343 | 0.05343 | 0 | 0 | 0 | 0 | 0.007792 | 0.229615 | 1,999 | 50 | 103 | 39.98 | 0.891558 | 0.637819 | 0 | 0.181818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
96c22e564b0b34f67d76208eb69779ca5fad5e51 | 150 | py | Python | curso-python/exercicios/exercicio-010.py | thiagoAlexandre3/cursos-auto-didata | 2afb4f4bfe4a5622f2b5a9a5adbdc7e495ddf772 | [
"MIT"
] | null | null | null | curso-python/exercicios/exercicio-010.py | thiagoAlexandre3/cursos-auto-didata | 2afb4f4bfe4a5622f2b5a9a5adbdc7e495ddf772 | [
"MIT"
] | null | null | null | curso-python/exercicios/exercicio-010.py | thiagoAlexandre3/cursos-auto-didata | 2afb4f4bfe4a5622f2b5a9a5adbdc7e495ddf772 | [
"MIT"
] | null | null | null | reais = float(input('How many reais do you have? R$ '))
# 1 US$ costs R$ 5.81, so the number of dollars you can buy is reais / rate
dolar = reais / 5.81
print('You have R${:.2f} and can buy US${:.2f}.'.format(reais, dolar))
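The same conversion as a reusable function, assuming the rate is quoted as reais per dollar (so the dollar amount is `reais / rate`):

```python
def to_dollars(reais, rate=5.81):
    """Convert an amount in reais to dollars; `rate` is the price of 1 US$ in R$."""
    return reais / rate


print('{:.2f}'.format(to_dollars(58.10)))  # 10.00
```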
| 37.5 | 74 | 0.633333 | 25 | 150 | 3.8 | 0.68 | 0.147368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03876 | 0.14 | 150 | 3 | 75 | 50 | 0.697674 | 0 | 0 | 0 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
73690bddedd826c29466aebe00b0e97e5a07f812 | 2,977 | py | Python | script/python/devopstest.py | ibmcuijunluke/software-automation | 9bc9ad78f1095983b6b46671482091489088cb15 | [
"Apache-2.0"
] | 2 | 2017-04-14T03:40:55.000Z | 2018-04-25T11:12:51.000Z | script/python/devopstest.py | ibmcuijunluke/software-automation | 9bc9ad78f1095983b6b46671482091489088cb15 | [
"Apache-2.0"
] | null | null | null | script/python/devopstest.py | ibmcuijunluke/software-automation | 9bc9ad78f1095983b6b46671482091489088cb15 | [
"Apache-2.0"
] | 1 | 2018-04-25T11:12:51.000Z | 2018-04-25T11:12:51.000Z | #!/usr/bin/python
# -*- coding: UTF-8 -*-
'''
Created on 20170407
@author: cui.jun@thoughtworks.cn
'''
import sys, os, time
# from tw_devops import zipfileutils

print "start deploy tomcat war and static zip package now ......."

tomcatwebapphome = "/opt/apache-tomcat-8.0.41/webapps"
webappswar = "/opt/apache-tomcat-8.0.41/webapps/jw-interfaceapi.war"
sourceunzip = "/opt/apache-tomcat-8.0.41/webapps/jw-interfaceapi"
targetjavawar = "/opt/javawars"
tomcathome = "/opt/apache-tomcat-8.0.41"
zipfiledirfile = '/opt/javawars/html5demo.zip'
warfiledirfile = '/opt/javawars/jw-interfaceapi.war'
ngnixTargetDir = "/opt/ngnixdir"

current_date = time.strftime('%Y-%m-%d')
current_datetime = time.strftime('%Y%m%d%H%M%S')
# step 1
# check that the nginx zip file and tomcat war package exist in our env
if os.path.isfile(zipfiledirfile):
    message = 'OK, the "%s" file exists. (%s)' % (zipfiledirfile, current_datetime)
else:
    message = 'Sorry, I cannot find the "%s" file, please create it.' % zipfiledirfile
    raise Exception(message)
print "the message is", message

if os.path.isfile(warfiledirfile):
    message = 'OK, the "%s" file exists. (%s)' % (warfiledirfile, current_datetime)
else:
    message = 'Sorry, I cannot find the "%s" file, please create it.' % warfiledirfile
    raise Exception(message)
print "the message is", message

if os.path.isdir(ngnixTargetDir):
    message = 'OK, the "%s" nginx dir exists. (%s)' % (ngnixTargetDir, current_datetime)
else:
    message = 'Sorry, I cannot find the nginx dir "%s", please create it.' % ngnixTargetDir
    raise Exception(message)
print "the message is", message
# step 2: unzip the zip file into the nginx dir and restart nginx
# (python 2.7's zipfile library has an issue here, so shell commands are used instead)
os.system("cp -f %s/html5demo.zip %s/" % (targetjavawar, ngnixTargetDir))
os.system("unzip -o -d %s %s" % (ngnixTargetDir, zipfiledirfile))
# os.system("/usr/local/nginx/sbin/nginx -t")
# os.system("/usr/local/nginx/sbin/nginx -s reload")
# step 3: restart tomcat with the new war
print "start application server......begin............."
kill_cmd = "kill `ps -ef | grep tomcat | awk '{print $2,$8}' | grep 'java$' | awk '{print $1}'`"
os.system(kill_cmd)
time.sleep(3)
os.system("rm -rf %s" % webappswar)
os.system("rm -rf %s" % sourceunzip)
os.system("cp -rf %s %s/" % (targetjavawar, tomcatwebapphome))
os.system("%s/bin/startup.sh &" % tomcathome)
print "start application server......end............."
| 32.714286 | 96 | 0.674169 | 445 | 2,977 | 4.48764 | 0.330337 | 0.044066 | 0.036054 | 0.048072 | 0.426139 | 0.384577 | 0.363545 | 0.284927 | 0.250876 | 0.210816 | 0 | 0.019201 | 0.142761 | 2,977 | 90 | 97 | 33.077778 | 0.763323 | 0.276789 | 0 | 0.23913 | 0 | 0.021739 | 0.482809 | 0.134625 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.065217 | 0.065217 | null | null | 0.152174 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
736954c7128a50d71fe3df09fedd5e97a151c0f1 | 220 | py | Python | djangoBlog/apps/config/urls.py | blackmonkey121/blog | 938f104d3360c5f7562a2fd5a7d2f2e77c4695c0 | [
"BSD-3-Clause"
] | 4 | 2019-07-20T02:04:11.000Z | 2020-05-02T06:15:22.000Z | djangoBlog/apps/config/urls.py | blackmonkey121/blog | 938f104d3360c5f7562a2fd5a7d2f2e77c4695c0 | [
"BSD-3-Clause"
] | 8 | 2020-05-03T09:01:14.000Z | 2022-01-13T02:13:14.000Z | djangoBlog/apps/config/urls.py | blackmonkey121/blog | 938f104d3360c5f7562a2fd5a7d2f2e77c4695c0 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
__author__ = "Monkey"

from django.urls import path

from ..config.views import FavoriteView

urlpatterns = [
    path('favorite/', FavoriteView.as_view(), name='favorite'),
] | 22 | 63 | 0.695455 | 27 | 220 | 5.481481 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.136364 | 220 | 10 | 64 | 22 | 0.768421 | 0.190909 | 0 | 0 | 0 | 0 | 0.129944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
7372f311335c86c86acbc9ebc4dc0e6b1e969875 | 240 | py | Python | graphite_feeder/handler/event/appliance/sound/player/fade_out/volume.py | majamassarini/automate-graphite-feeder | 0f17f99bbdaab86e10e0b7d424d055ff44fc4ca0 | [
"MIT"
] | null | null | null | graphite_feeder/handler/event/appliance/sound/player/fade_out/volume.py | majamassarini/automate-graphite-feeder | 0f17f99bbdaab86e10e0b7d424d055ff44fc4ca0 | [
"MIT"
] | null | null | null | graphite_feeder/handler/event/appliance/sound/player/fade_out/volume.py | majamassarini/automate-graphite-feeder | 0f17f99bbdaab86e10e0b7d424d055ff44fc4ca0 | [
"MIT"
] | null | null | null | import home
from graphite_feeder.handler.event.appliance.sound.player.volume import (
    Handler as Parent,
)


class Handler(Parent):
    KLASS = home.appliance.sound.player.event.fade_out.volume.Event
    TITLE = "Fade out volume (%)"
| 20 | 73 | 0.7375 | 32 | 240 | 5.46875 | 0.5625 | 0.16 | 0.228571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154167 | 240 | 11 | 74 | 21.818182 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0.079167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
7378d4aa72c88b68bc97ef203cf2c74074ba032f | 747 | py | Python | packages/pyright-internal/src/tests/samples/paramInference1.py | Jasha10/pyright | 0ce0cfa10fe7faa41071a2cc417bb449cf8276fe | [
"MIT"
] | 3,934 | 2019-03-22T09:26:41.000Z | 2019-05-06T21:03:08.000Z | packages/pyright-internal/src/tests/samples/paramInference1.py | Jasha10/pyright | 0ce0cfa10fe7faa41071a2cc417bb449cf8276fe | [
"MIT"
] | 107 | 2019-03-24T04:09:37.000Z | 2019-05-06T17:00:04.000Z | packages/pyright-internal/src/tests/samples/paramInference1.py | Jasha10/pyright | 0ce0cfa10fe7faa41071a2cc417bb449cf8276fe | [
"MIT"
] | 119 | 2019-03-23T10:48:04.000Z | 2019-05-06T08:57:56.000Z | # This sample tests the logic that infers parameter types based on
# default argument values or annotated base class methods.


class Parent:
    def func1(self, a: int, b: str) -> float:
        ...


class Child(Parent):
    def func1(self, a, b):
        reveal_type(self, expected_text="Self@Child")
        reveal_type(a, expected_text="int")
        reveal_type(b, expected_text="str")
        return a


def func2(a, b=0, c=None):
    reveal_type(a, expected_text="Unknown")
    reveal_type(b, expected_text="int")
    reveal_type(c, expected_text="Unknown | None")


def func3(a=(1, 2), b=[1, 2], c={1: 2}):
    reveal_type(a, expected_text="Unknown")
    reveal_type(b, expected_text="Unknown")
    reveal_type(c, expected_text="Unknown")
| 26.678571 | 66 | 0.659973 | 113 | 747 | 4.20354 | 0.371681 | 0.189474 | 0.2 | 0.12 | 0.553684 | 0.349474 | 0.223158 | 0.223158 | 0.223158 | 0.223158 | 0 | 0.018425 | 0.200803 | 747 | 27 | 67 | 27.666667 | 0.777219 | 0.161981 | 0 | 0.117647 | 0 | 0 | 0.097913 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.235294 | false | 0 | 0 | 0 | 0.411765 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
7381e501735df140d94d4a45a5c60c1de5725216 | 790 | py | Python | config/hooks/tk-multi-publish2/python/tk_multi_publish2/__init__.py | JoanAzpeitia/lp_sg | e0ee79555e419dd2ae3a5f31e5515b3f40b22a62 | [
"MIT"
] | null | null | null | config/hooks/tk-multi-publish2/python/tk_multi_publish2/__init__.py | JoanAzpeitia/lp_sg | e0ee79555e419dd2ae3a5f31e5515b3f40b22a62 | [
"MIT"
] | null | null | null | config/hooks/tk-multi-publish2/python/tk_multi_publish2/__init__.py | JoanAzpeitia/lp_sg | e0ee79555e419dd2ae3a5f31e5515b3f40b22a62 | [
"MIT"
] | 1 | 2020-02-15T10:42:56.000Z | 2020-02-15T10:42:56.000Z | # Copyright (c) 2017 Shotgun Software Inc.
#
# CONFIDENTIAL AND PROPRIETARY
#
# This work is provided "AS IS" and subject to the Shotgun Pipeline Toolkit
# Source Code License included in this distribution package. See LICENSE.
# By accessing, using, copying or modifying this work you indicate your
# agreement to the Shotgun Pipeline Toolkit Source Code License. All rights
# not expressly granted therein are reserved by Shotgun Software Inc.
from sgtk.platform.qt import QtCore, QtGui
import util
def show_dialog(app):
    """
    Show the main dialog ui

    :param app: The parent App
    """
    # defer imports so that the app works gracefully in batch modes
    from .dialog import AppDialog

    # start ui
    app.engine.show_dialog("Shotgun Publish", app, AppDialog)
| 27.241379 | 76 | 0.737975 | 112 | 790 | 5.1875 | 0.651786 | 0.051635 | 0.061962 | 0.068847 | 0.151463 | 0.151463 | 0.151463 | 0.151463 | 0 | 0 | 0 | 0.006369 | 0.205063 | 790 | 28 | 77 | 28.214286 | 0.91879 | 0.703797 | 0 | 0 | 0 | 0 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.6 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
73a3c87ffd5f26a101fd0f8dc6cc0997de70b24c | 488 | py | Python | atcoder/arc098/c.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 506 | 2018-08-22T10:30:38.000Z | 2022-03-31T10:01:49.000Z | atcoder/arc098/c.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 13 | 2019-08-07T18:31:18.000Z | 2020-12-15T21:54:41.000Z | atcoder/arc098/c.py | Ashindustry007/competitive-programming | 2eabd3975c029d235abb7854569593d334acae2f | [
"WTFPL"
] | 234 | 2018-08-06T17:11:41.000Z | 2022-03-26T10:56:42.000Z | #!/usr/bin/env python3
# https://arc098.contest.atcoder.jp/tasks/arc098_a
n = int(input())
s = input()
a = [0] * n
if s[0] == 'W': a[0] = 1
for i in range(1, n):
    c = s[i]
    if c == 'E':
        a[i] = a[i - 1]
    else:
        a[i] = a[i - 1] + 1
b = [0] * n
if s[n - 1] == 'E': b[n - 1] = 1
for i in range(n - 2, -1, -1):
    c = s[i]
    if c == 'W':
        b[i] = b[i + 1]
    else:
        b[i] = b[i + 1] + 1
m = n
for i in range(n):
    m = min(m, a[i] + b[i])
print(m - 1)
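The same prefix/suffix counting packaged as a function, which is easier to test without stdin. It computes, for each candidate position, the count of 'W' strictly to its left plus 'E' strictly to its right — equivalent to the `a[i] + b[i] - 1` minimum above:

```python
def min_turns(s):
    """Minimum number of people who must turn around when one leader is chosen."""
    n = len(s)
    west_left = 0
    east_right = s.count('E')
    best = n
    for i in range(n):
        if s[i] == 'E':
            east_right -= 1  # position i no longer counts as "to the right"
        best = min(best, west_left + east_right)
        if s[i] == 'W':
            west_left += 1   # position i now counts as "to the left"
    return best


print(min_turns("WEEWW"))  # 1
```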
| 19.52 | 50 | 0.403689 | 106 | 488 | 1.849057 | 0.292453 | 0.05102 | 0.091837 | 0.168367 | 0.352041 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080997 | 0.342213 | 488 | 24 | 51 | 20.333333 | 0.529595 | 0.143443 | 0 | 0.181818 | 0 | 0 | 0.009615 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.045455 | 0 | 0 | 1 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
73b72fbb308f8fd4a86c4a19d3cc88d7dc9cbdb7 | 405 | py | Python | MyPythonDemos2018/SimpleDemos/a05_classmethod.py | zcatt/MyDemos2018 | a332fdf94170663ba7a530cedc28418159a1c29a | [
"MIT"
] | null | null | null | MyPythonDemos2018/SimpleDemos/a05_classmethod.py | zcatt/MyDemos2018 | a332fdf94170663ba7a530cedc28418159a1c29a | [
"MIT"
] | null | null | null | MyPythonDemos2018/SimpleDemos/a05_classmethod.py | zcatt/MyDemos2018 | a332fdf94170663ba7a530cedc28418159a1c29a | [
"MIT"
] | null | null | null | #!/usr/bin/python3
# -*- coding: utf-8 -*-
"""
class C:
@classmethod
def f(cls, arg1, arg2, ...): ...
类方法传入的第一个参数不是instance的self,而是类cls.
"""
class C:
@classmethod
def createC(cls, value):
obj= cls(value)
return obj
def __init__(self, value):
self.v= value
def output(self):
print("value= {0:d}".format(self.v))
c = C.createC(3)
c.output()
| 16.2 | 44 | 0.555556 | 52 | 405 | 4.269231 | 0.576923 | 0.054054 | 0.153153 | 0.18018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020067 | 0.261728 | 405 | 24 | 45 | 16.875 | 0.719064 | 0.096296 | 0 | 0 | 0 | 0 | 0.047431 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
73bdf2c124efa5c0ddad9b7c311bbec5c90bc4d5 | 4,304 | py | Python | src/meteringpoints_shared/models.py | Energy-Track-and-Trace/ett-meteringpoints | 75f5a10a55b1ee450e53986db1bf70a23ea82a3f | [
"Apache-2.0"
] | null | null | null | src/meteringpoints_shared/models.py | Energy-Track-and-Trace/ett-meteringpoints | 75f5a10a55b1ee450e53986db1bf70a23ea82a3f | [
"Apache-2.0"
] | null | null | null | src/meteringpoints_shared/models.py | Energy-Track-and-Trace/ett-meteringpoints | 75f5a10a55b1ee450e53986db1bf70a23ea82a3f | [
"Apache-2.0"
] | null | null | null | import sqlalchemy as sa
from enum import Enum
from typing import List, Optional
from dataclasses import dataclass, field

from sqlalchemy.orm import relationship

from origin.serialize import Serializable
from origin.models.tech import TechnologyType
from origin.models.common import ResultOrdering
from origin.models.meteringpoints import MeteringPointType

from .db import db


# -- Common models -----------------------------------------------------------


@dataclass
class MeteringPointFilters(Serializable):
    """
    Filters for querying MeteringPoints.
    """
    gsrn: Optional[List[str]] = field(default=None)
    type: Optional[MeteringPointType] = field(default=None)
    sector: Optional[List[str]] = field(default=None)


class MeteringPointOrderingKeys(Enum):
    """
    Keys to order MeteringPoints by when querying.
    """
    gsrn = 'gsrn'
    type = 'type'
    sector = 'sector'


MeteringPointOrdering = ResultOrdering[MeteringPointOrderingKeys]


# -- Database models ---------------------------------------------------------


class DbMeteringPoint(db.ModelBase):
    """
    SQL representation of a MeteringPoint.
    """
    __tablename__ = 'meteringpoint'
    __table_args__ = (
        sa.PrimaryKeyConstraint('gsrn'),
        sa.UniqueConstraint('gsrn'),
    )

    gsrn = sa.Column(sa.String(), index=True, nullable=False)
    sector = sa.Column(sa.String(), index=True)
    type = sa.Column(sa.Enum(MeteringPointType), index=True)

    # -- Relationships -------------------------------------------------------

    address = relationship(
        'DbMeteringPointAddress',
        primaryjoin='foreign(DbMeteringPoint.gsrn) == DbMeteringPointAddress.gsrn',  # noqa: E501
        uselist=False,
        viewonly=True,
        lazy='joined',
    )

    # TODO Rewrite this?
    technology = relationship(
        'DbTechnology',
        primaryjoin='foreign(DbMeteringPoint.gsrn) == DbMeteringPointTechnology.gsrn',  # noqa: E501
        secondary='meteringpoint_technology',
        secondaryjoin=(
            'and_('
            'foreign(DbMeteringPointTechnology.tech_code) == DbTechnology.tech_code,'  # noqa: E501
            'foreign(DbMeteringPointTechnology.fuel_code) == DbTechnology.fuel_code'  # noqa: E501
            ')'
        ),
        uselist=False,
        viewonly=True,
        lazy='joined',
    )


class DbMeteringPointAddress(db.ModelBase):
    """
    SQL representation of a (physical) address for a MeteringPoint.
    """
    __tablename__ = 'meteringpoint_address'
    __table_args__ = (
        sa.PrimaryKeyConstraint('gsrn'),
        sa.UniqueConstraint('gsrn'),
    )

    gsrn = sa.Column(sa.String(), index=True, nullable=False)
    street_code = sa.Column(sa.String())
    street_name = sa.Column(sa.String())
    building_number = sa.Column(sa.String())
    floor_id = sa.Column(sa.String())
    room_id = sa.Column(sa.String())
    post_code = sa.Column(sa.String())
    city_name = sa.Column(sa.String())
    city_sub_division_name = sa.Column(sa.String())
    municipality_code = sa.Column(sa.String())
    location_description = sa.Column(sa.String())


class DbMeteringPointTechnology(db.ModelBase):
    """
    SQL representation of technology codes for a MeteringPoint.
    """
    __tablename__ = 'meteringpoint_technology'
    __table_args__ = (
        sa.PrimaryKeyConstraint('gsrn'),
        sa.UniqueConstraint('gsrn'),
    )

    gsrn = sa.Column(sa.String(), index=True, nullable=False)
    tech_code = sa.Column(sa.String())
    fuel_code = sa.Column(sa.String())


class DbMeteringPointDelegate(db.ModelBase):
    """
    SQL representation of delegated access to a MeteringPoint for a subject.
    """
    __tablename__ = 'meteringpoint_delegate'
    __table_args__ = (
        sa.PrimaryKeyConstraint('gsrn', 'subject'),
    )

    gsrn = sa.Column(sa.String(), index=True, nullable=False)
    subject = sa.Column(sa.String())


class DbTechnology(db.ModelBase):
    """
    SQL representation of a Technology.
    """
    __tablename__ = 'technology'
    __table_args__ = (
        sa.PrimaryKeyConstraint('tech_code', 'fuel_code'),
        sa.UniqueConstraint('tech_code', 'fuel_code'),
    )

    fuel_code = sa.Column(sa.String())
    tech_code = sa.Column(sa.String())

    # TODO Use String instead of Enum (forward compatibility)
    type = sa.Column(sa.Enum(TechnologyType))
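The filters dataclass above defaults every field to `None` so absent filters can simply be skipped when querying. A self-contained sketch of that pattern, using a plain stdlib dataclass without the `Serializable` base, against in-memory rows instead of SQL:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Filters:
    gsrn: Optional[List[str]] = field(default=None)
    sector: Optional[List[str]] = field(default=None)


def apply_filters(rows, f):
    """Keep rows matching every filter that is set; None means 'no filter'."""
    out = []
    for row in rows:
        if f.gsrn is not None and row["gsrn"] not in f.gsrn:
            continue
        if f.sector is not None and row["sector"] not in f.sector:
            continue
        out.append(row)
    return out


rows = [{"gsrn": "571313", "sector": "DK1"}, {"gsrn": "571314", "sector": "DK2"}]
print(apply_filters(rows, Filters(sector=["DK1"])))  # [{'gsrn': '571313', 'sector': 'DK1'}]
```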
| 28.693333 | 100 | 0.643123 | 423 | 4,304 | 6.368794 | 0.252955 | 0.06533 | 0.081663 | 0.118782 | 0.423163 | 0.259465 | 0.157016 | 0.157016 | 0.125835 | 0.110245 | 0 | 0.003497 | 0.202602 | 4,304 | 149 | 101 | 28.885906 | 0.781469 | 0.147305 | 0 | 0.271739 | 0 | 0 | 0.144826 | 0.101519 | 0 | 0 | 0 | 0.020134 | 0 | 1 | 0 | false | 0 | 0.108696 | 0 | 0.619565 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
73e84c9135bab7ac7be004011cc3b757ccc4b119 | 4,168 | py | Python | reference-architecture/day2ops/scripts/ocp36-sat6.py | rafael-sales/openshift-ansible-contrib | 627c8b1856699cf25c7adca074f26ff899ddd4fd | [
"Apache-2.0"
] | null | null | null | reference-architecture/day2ops/scripts/ocp36-sat6.py | rafael-sales/openshift-ansible-contrib | 627c8b1856699cf25c7adca074f26ff899ddd4fd | [
"Apache-2.0"
] | null | null | null | reference-architecture/day2ops/scripts/ocp36-sat6.py | rafael-sales/openshift-ansible-contrib | 627c8b1856699cf25c7adca074f26ff899ddd4fd | [
"Apache-2.0"
] | 1 | 2019-09-22T19:10:41.000Z | 2019-09-22T19:10:41.000Z | #!/usr/bin/env python
# vim: sw=4 ts=4 et
import os, argparse, socket, getpass, subprocess
class ocpSat6(object):
    __name__ = 'ocpSat6'
    openshift3Images = []

    def __init__(self, load=True):
        if load:
            self._parseCli()
            self._loadImageList()
            self._addData()
            self._syncData()

    def _loadImageList(self):
        # Query the Red Hat registry for "openshift3" images and keep only the ones
        # needed for a disconnected installation (the sed step strips stray quotes).
        cmd = ('curl -s https://registry.access.redhat.com/v1/search?q="openshift3" '
               '| python -mjson.tool | grep ".name.:" | cut -d: -f2 '
               "| sed -e 's/ \"//g' -e 's/,\"//g' "
               "| grep -E '(ose-haproxy-router|registry-console|ose-deployer|ose-pod|ose-docker-registry)'")
        result = subprocess.check_output(cmd, shell=True)
        lines = result.splitlines()
        for line in lines:
            nl = line.replace('"', '')
            self.openshift3Images.append(nl)

    def _parseCli(self):
        parser = argparse.ArgumentParser(description='Add all OCP images for disconnected installation to Satellite 6', add_help=True)
        parser.add_argument('--orgid', action='store', default='1', help='Satellite organization ID to create the new product for OCP images in')
        parser.add_argument('--productname', action='store', default='ocp36', help='Satellite product name to use to create OCP images')
        parser.add_argument('--username', action='store', default='admin', help='Satellite 6 username for hammer CLI')
        parser.add_argument('--password', action='store', help='Satellite 6 password for hammer CLI')
        parser.add_argument('--no_confirm', action='store_true', help='Do not ask for confirmation')
        self.args = parser.parse_args()
        if not self.args.password:
            self.args.password = getpass.getpass(prompt='Please enter the password to use for the admin account in hammer CLI: ')

    def _syncData(self):
        if not self.args.no_confirm:
            print "Sync repo data? (This may take a while)"
            go = raw_input("Continue? y/n:\n")
            if 'y' not in go:
                exit(0)
        cmd = "hammer --username %s --password %s product synchronize --name %s --organization-id %s" % (self.args.username, self.args.password, self.args.productname, self.args.orgid)
        os.system(cmd)

    def _addData(self):
        if not self.args.no_confirm:
            print "Adding OCP images to org ID: %s with the product name: %s" % (self.args.orgid, self.args.productname)
            go = raw_input("Continue? y/n:\n")
            if 'y' not in go:
                exit(0)
        print "Adding product with name: %s" % self.args.productname
        cmd = "hammer --username %s --password %s product create --name %s --organization-id %s" % (self.args.username, self.args.password, self.args.productname, self.args.orgid)
        os.system(cmd)
        print "Adding openshift3 images"
        for image in self.openshift3Images:
            cmd = 'hammer --username %s --password %s repository create --name %s --organization-id %s --content-type docker --url "https://registry.access.redhat.com" --docker-upstream-name %s --product %s' % (self.args.username, self.args.password, image, self.args.orgid, image, self.args.productname)
            os.system(cmd)
        print "The following vars should exist in your OpenShift install playbook"
        cmd = "hammer --username %s --password %s organization list" % (self.args.username, self.args.password)
        result = subprocess.check_output(cmd, shell=True)
        lines = result.splitlines()
        for line in lines:
            if self.args.orgid in line:
                orgLabel = line.split("|")[2].lower()
        hostname = socket.getfqdn()
        oreg_url = "%s-%s-openshift3_ose-${component}:${version}" % (orgLabel, self.args.productname)
        print "oreg_url: %s" % oreg_url.replace(" ", "")
        print 'openshift_disable_check: "docker_image_availability"'
        print 'openshift_docker_insecure_registries: "%s:5000"' % hostname
        print 'openshift_docker_additional_registries: "%s:5000"' % hostname
        print "openshift_examples_modify_imagestreams: True"


if __name__ == '__main__':
    ocpSat6()
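The org label above comes from splitting hammer's pipe-separated table output. A self-contained sketch of that parsing — the sample listing and its column layout below are made up for illustration, not real hammer output:

```python
def org_label(listing, orgid):
    """Return the lowercase label column for the row whose ID matches orgid,
    mirroring the line.split("|")[2].lower() lookup above."""
    for line in listing.splitlines():
        cells = [c.strip() for c in line.split("|")]
        if len(cells) > 2 and cells[0] == orgid:
            return cells[2].lower()
    return None


sample = ("1 | Default Organization | Default_Organization |\n"
          "2 | ACME | ACME_Label |")
print(org_label(sample, "2"))  # acme_label
```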
| 48.465116 | 304 | 0.636756 | 529 | 4,168 | 4.911153 | 0.323251 | 0.073903 | 0.036952 | 0.027714 | 0.347575 | 0.316782 | 0.220554 | 0.181678 | 0.157814 | 0.157814 | 0 | 0.009337 | 0.229127 | 4,168 | 85 | 305 | 49.035294 | 0.799253 | 0.009117 | 0 | 0.265625 | 0 | 0.0625 | 0.368459 | 0.056202 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.125 | 0.015625 | null | null | 0.15625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
fb417c8637a28bbf828561b30c84658f308e31f8 | 892 | py | Python | quaternary_FOM_stackedtern_demo.py | johnmgregoire/PythonCompositionPlots | e105c575463b7d4512d9aac18c7330d1a0dc2c14 | [
"BSD-3-Clause"
] | 4 | 2018-03-05T09:34:49.000Z | 2022-02-01T15:33:54.000Z | quaternary_FOM_stackedtern_demo.py | johnmgregoire/PythonCompositionPlots | e105c575463b7d4512d9aac18c7330d1a0dc2c14 | [
"BSD-3-Clause"
] | null | null | null | quaternary_FOM_stackedtern_demo.py | johnmgregoire/PythonCompositionPlots | e105c575463b7d4512d9aac18c7330d1a0dc2c14 | [
"BSD-3-Clause"
] | 2 | 2016-01-24T19:09:21.000Z | 2019-10-11T12:43:07.000Z | import matplotlib.cm as cm
import numpy
import pylab
import operator, copy, os
from myternaryutility import TernaryPlot
from myquaternaryutility import QuaternaryPlot
from quaternary_FOM_stackedtern import *
axl, stpl = make10ternaxes()

gridi = 30
comps_10full = [(a * 1. / gridi, b * 1. / gridi, c * 1. / gridi, (gridi - a - b - c) * 1. / gridi)
                for a in numpy.arange(0, 1 + gridi)
                for b in numpy.arange(0, 1 + gridi - a)
                for c in numpy.arange(0, 1 + gridi - a - b)]
comps_10full = list(set(comps_10full))
print len(comps_10full)

# plotpoints_cmyk
comps_10full = numpy.array(comps_10full)

pylab.figure()
stpquat = QuaternaryPlot(111)
cols = stpquat.rgb_comp(comps_10full)
stpquat.scatter(comps_10full, c=cols, s=20, edgecolors='none')
scatter_10axes(comps_10full, cols, stpl, s=20, edgecolors='none')
stpquat.label()

pylab.savefig('stackedtern_quat.png')
pylab.figure(axl[0].figure.number)
pylab.savefig('stackedtern.png')
pylab.show()
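The triple comprehension above enumerates every 4-component composition on a grid of step 1/gridi. A dependency-free sketch of the same enumeration; the number of points is the stars-and-bars count C(gridi + 3, 3):

```python
def simplex_grid(n):
    """All 4-component compositions with a + b + c + d == n on the integer grid,
    returned as fractions summing to 1, like the comprehension above."""
    comps = []
    for a in range(n + 1):
        for b in range(n + 1 - a):
            for c in range(n + 1 - a - b):
                d = n - a - b - c
                comps.append((a / float(n), b / float(n), c / float(n), d / float(n)))
    return comps


print(len(simplex_grid(4)))  # 35, i.e. C(4 + 3, 3)
```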
| 27.875 | 180 | 0.784753 | 143 | 892 | 4.79021 | 0.398601 | 0.144526 | 0.056934 | 0.061314 | 0.090511 | 0.090511 | 0.061314 | 0 | 0 | 0 | 0 | 0.05122 | 0.080717 | 892 | 31 | 181 | 28.774194 | 0.784146 | 0.016816 | 0 | 0 | 0 | 0 | 0.049087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.304348 | null | null | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
fb53099647d1f5af1a16d37c6eb0d5b68a9badc9 | 191 | py | Python | app/data/mysql_cfg.py | cyy0523xc/fastapi-start | 3cc6e0165cc32895b48f4864e12a14b08f42a1e2 | [
"Apache-2.0"
] | 9 | 2021-08-16T03:44:54.000Z | 2022-03-31T08:54:10.000Z | app/data/mysql_cfg.py | cyy0523xc/fastapi-start | 3cc6e0165cc32895b48f4864e12a14b08f42a1e2 | [
"Apache-2.0"
] | 2 | 2021-08-18T14:01:09.000Z | 2021-11-05T06:46:11.000Z | app/data/mysql_cfg.py | cyy0523xc/fastapi-start | 3cc6e0165cc32895b48f4864e12a14b08f42a1e2 | [
"Apache-2.0"
] | 3 | 2021-06-08T02:29:09.000Z | 2022-02-25T01:34:05.000Z |
# MySQL database configuration
# The configuration usually differs between environments; adjust it to your actual setup.
MYSQL_CFG = {
    'HOST': '172.17.0.1',
    'PORT': 3306,
    'USERNAME': 'username',
    'PASSWORD': 'password',
    'DATABASE': 'database_name'
}
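One common use of such a dict is building a connection string. A hypothetical helper — the `mysql+pymysql` URL scheme here is an assumption for illustration, not part of this config:

```python
def mysql_dsn(cfg):
    """Build a SQLAlchemy-style DSN from a config dict shaped like MYSQL_CFG."""
    return 'mysql+pymysql://{USERNAME}:{PASSWORD}@{HOST}:{PORT}/{DATABASE}'.format(**cfg)


cfg = {
    'HOST': '172.17.0.1',
    'PORT': 3306,
    'USERNAME': 'username',
    'PASSWORD': 'password',
    'DATABASE': 'database_name',
}
print(mysql_dsn(cfg))  # mysql+pymysql://username:password@172.17.0.1:3306/database_name
```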
| 17.363636 | 31 | 0.60733 | 19 | 191 | 6 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072368 | 0.204188 | 191 | 10 | 32 | 19.1 | 0.677632 | 0.193717 | 0 | 0 | 0 | 0 | 0.473333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.142857 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
fb57d9f989a9d8ee0bc2a1d294116b7862ea2de2 | 7,214 | py | Python | model.py | titee15017/CarND-Behavioral-Cloning-P3 | 3eee75bcda92519a1f94547ac1a9256e5e3c6a7f | [
"MIT"
] | null | null | null | model.py | titee15017/CarND-Behavioral-Cloning-P3 | 3eee75bcda92519a1f94547ac1a9256e5e3c6a7f | [
"MIT"
] | null | null | null | model.py | titee15017/CarND-Behavioral-Cloning-P3 | 3eee75bcda92519a1f94547ac1a9256e5e3c6a7f | [
"MIT"
] | null | null | null | import csv
import cv2
import numpy as np
images = []
measurements = []
correction = 0.2
# Steering corrections for the center, left, and right camera images.
corrections = [0, correction, -correction]


def load_dataset(base_dir, keep_angle):
    """Append images and steering angles from base_dir/driving_log.csv.

    keep_angle decides, per row, whether the steering angle is used.
    Every kept frame is also added as a horizontally flipped copy with
    a negated angle to balance left and right turns.
    """
    with open(base_dir + '/driving_log.csv') as csvfile:
        lines = list(csv.reader(csvfile))
    for line in lines[1:]:  # skip the CSV header row
        angle = float(line[3])
        if not keep_angle(angle):
            continue
        for j in range(3):  # center, left, right camera
            filename = line[j].split('/')[-1]
            image = cv2.imread(base_dir + '/IMG/' + filename)
            measurement = angle + corrections[j]
            images.append(image)
            measurements.append(measurement)
            images.append(np.fliplr(image))
            measurements.append(-measurement)


def drop_most_zeros(angle):
    # Drop ~90% of zero-angle rows so the model is not trained mostly on zeros.
    return angle != 0.0 or np.random.rand() > 0.90


print('start processing data')
print('1st data, normal lap')
load_dataset('data', drop_most_zeros)
print('2nd data, prevent off road')
load_dataset('data_off_road_1', drop_most_zeros)
print('3rd data, prevent off road')
load_dataset('data_off_road_2', drop_most_zeros)
print('4th data, recovering')
# Keep only angles outside [-1, 1] so the recovery laps do not dilute
# the center-lane driving data.
load_dataset('data_recovering', lambda angle: abs(angle) > 1.0)
print('done processing data')
print('start training')
X_train = np.array(images)
y_train = np.array(measurements)
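# Sanity check: the augmentation appends images and angles in lockstep,
# so the two arrays must be the same length; fail fast here rather than
# inside Keras with a less obvious error.
assert len(X_train) == len(y_train)
print('total training samples (including flips):', len(X_train))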
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda, Cropping2D, Dropout
from keras.layers.convolutional import Convolution2D
from keras.layers.pooling import MaxPooling2D
model = Sequential()
# Normalize
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160,320,3)))
# Crop 70 rows from the top and 26 from the bottom, input shape (160, 320, 3), output shape (64, 320, 3)
model.add(Cropping2D(cropping=((70,26),(0,0))))
# Convolution layer 1, input shape (64, 320, 3), filter shape (3, 3, 8), output shape (64, 320, 8)
model.add(Convolution2D(8, 3, strides=1, activation="relu", padding="same"))
# MaxPooling, input shape (64, 320, 8), output shape (32, 160, 8)
model.add(MaxPooling2D(pool_size=(2, 2)))
# Convolution layer 2, input shape (32, 160, 8), filter shape (3, 3, 16), output shape (32, 160, 16)
model.add(Convolution2D(16, 3, strides=1, activation="relu", padding="same"))
# MaxPooling, input shape (32, 160, 16), output shape (16, 80, 16)
model.add(MaxPooling2D(pool_size=(2, 2)))
# Convolution layer 3, input shape (16, 80, 16), filter shape (3, 3, 32), output shape (16, 80, 32)
model.add(Convolution2D(32, 3, strides=1, activation="relu", padding="same"))
# Convolution layer 4, input shape (16, 80, 32), filter shape (3, 3, 32), output shape (16, 80, 32)
model.add(Convolution2D(32, 3, strides=1, activation="relu", padding="same"))
# MaxPooling, input shape (16, 80, 32), output shape (8, 40, 32)
model.add(MaxPooling2D(pool_size=(2, 2)))
# Convolution layer 5, input shape (8, 40, 32), filter shape (3, 3, 64), output shape (8, 40, 64)
model.add(Convolution2D(64, 3, strides=1, activation="relu", padding="same"))
# Convolution layer 6, input shape (8, 40, 64), filter shape (3, 3, 64), output shape (8, 40, 64)
model.add(Convolution2D(64, 3, strides=1, activation="relu", padding="same"))
# MaxPooling, input shape (8, 40, 64), output shape (4, 20, 64)
model.add(MaxPooling2D(pool_size=(2, 2)))
# Convolution layer 7, input shape (4, 20, 64), filter shape (3, 3, 64), output shape (4, 20, 64)
model.add(Convolution2D(64, 3, strides=1, activation="relu", padding="same"))
# Convolution layer 8, input shape (4, 20, 64), filter shape (3, 3, 64), output shape (4, 20, 64)
model.add(Convolution2D(64, 3, strides=1, activation="relu", padding="same"))
# MaxPooling, input shape (4, 20, 64), output shape (2, 10, 64)
model.add(MaxPooling2D(pool_size=(2, 2)))
# Flatten, input shape (2, 10, 64), output shape (1280)
model.add(Flatten())
# Fully connected layer 9, input shape (1280), output shape (1280)
model.add(Dense(1280))
# Dropout rate 50%
model.add(Dropout(0.5))
# Fully connected layer 10, input shape (1280), output shape (640)
model.add(Dense(640))
# Dropout rate 50%
model.add(Dropout(0.5))
# Fully connected layer 11, input shape (640), output shape (1)
model.add(Dense(1))
# Compute loss by mean squared error, Optimize by adam
model.compile(loss='mse', optimizer='adam')
# Split training/validation 80/20, shuffle the data, train for 10 epochs
model.fit(X_train, y_train, validation_split=0.2, shuffle=True, epochs=10)
print('done training')
print('save model')
model.save('model.h5')
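# Illustrative follow-up (not executed here): the saved network can be
# reloaded for inference on a single camera frame. `frame` is a hypothetical
# BGR array of shape (160, 320, 3); in the Udacity project, drive.py
# normally performs this step.
#   from keras.models import load_model
#   model = load_model('model.h5')
#   steering_angle = float(model.predict(frame[None, ...], batch_size=1))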
| 37.968421 | 102 | 0.64153 | 1,046 | 7,214 | 4.365201 | 0.159656 | 0.036794 | 0.028909 | 0.031537 | 0.726675 | 0.70346 | 0.697547 | 0.697547 | 0.686159 | 0.62396 | 0 | 0.073322 | 0.215414 | 7,214 | 189 | 103 | 38.169312 | 0.733392 | 0.254505 | 0 | 0.650407 | 0 | 0 | 0.081713 | 0.01739 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.056911 | 0 | 0.056911 | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fb6bd21508d4eb474f730a7550657d5737bd35b1 | 105 | py | Python | inputCleaner/inputCleaner.py | ChristopherMichael-Stokes/AdventOfCode2021 | 871883c3e8b98c4f229429c2aab5beee5ec70f85 | [
"BSD-2-Clause"
] | null | null | null | inputCleaner/inputCleaner.py | ChristopherMichael-Stokes/AdventOfCode2021 | 871883c3e8b98c4f229429c2aab5beee5ec70f85 | [
"BSD-2-Clause"
] | null | null | null | inputCleaner/inputCleaner.py | ChristopherMichael-Stokes/AdventOfCode2021 | 871883c3e8b98c4f229429c2aab5beee5ec70f85 | [
"BSD-2-Clause"
] | null | null | null | import os
with open('./input.txt', 'r') as f:
data = f.readlines()
# repr() shows newlines and other escapes, which helps inspect raw input
print(repr(''.join(data)))
| 15 | 35 | 0.6 | 16 | 105 | 3.6875 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161905 | 105 | 6 | 36 | 17.5 | 0.670455 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fb6f0b8f0f841779ddf08babd88330bb9c34aeee | 1,275 | py | Python | exercises/reverse_number.py | R0bertWell/interview_questions | f8a65a842dfe03ac28c865bb8370422ff2071137 | [
"MIT"
] | null | null | null | exercises/reverse_number.py | R0bertWell/interview_questions | f8a65a842dfe03ac28c865bb8370422ff2071137 | [
"MIT"
] | null | null | null | exercises/reverse_number.py | R0bertWell/interview_questions | f8a65a842dfe03ac28c865bb8370422ff2071137 | [
"MIT"
] | null | null | null | def reverse_num(x: int) -> int:
    """Reverse the digits of x, keeping the sign.

    Returns 0 when the reversed value falls outside the signed 32-bit
    integer range (the usual LeetCode convention).
    """
    sign = -1 if x < 0 else 1
    # Reversing the digit string drops trailing zeros automatically
    # (e.g. 65090 -> '09056' -> 9056).
    reversed_x = sign * int(str(abs(x))[::-1])
    if reversed_x > 2 ** 31 - 1 or reversed_x < -2 ** 31:
        return 0
    return reversed_x
if __name__ == "__main__":
print(reverse_num(65090)) | 30.357143 | 55 | 0.359216 | 148 | 1,275 | 2.878378 | 0.182432 | 0.279343 | 0.093897 | 0.103286 | 0.631455 | 0.575117 | 0.575117 | 0.575117 | 0.49061 | 0.49061 | 0 | 0.065359 | 0.52 | 1,275 | 42 | 56 | 30.357143 | 0.630719 | 0 | 0 | 0.575 | 0 | 0 | 0.010188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0 | 0 | 0.1 | 0.025 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
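    # Other illustrative cases:
    #   reverse_num(-120)       -> -21  (sign kept, trailing zero dropped)
    #   reverse_num(1534236469) -> 0    (reversed value exceeds 2**31 - 1)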
fba0ad79b9f393dc2c78c71f0640de95743488c4 | 2,673 | py | Python | backend/app/literature/routers/reference_manual_term_tag_router.py | alliance-genome/agr_literature_service_demo | 48cd3a3797f96ef94e6d40d2c94e379bfc48914f | [
"MIT"
] | null | null | null | backend/app/literature/routers/reference_manual_term_tag_router.py | alliance-genome/agr_literature_service_demo | 48cd3a3797f96ef94e6d40d2c94e379bfc48914f | [
"MIT"
] | null | null | null | backend/app/literature/routers/reference_manual_term_tag_router.py | alliance-genome/agr_literature_service_demo | 48cd3a3797f96ef94e6d40d2c94e379bfc48914f | [
"MIT"
] | null | null | null | from sqlalchemy.orm import Session
from fastapi import APIRouter
from fastapi import Depends
from fastapi import status
from fastapi import Response
from fastapi import Security
from fastapi_okta import OktaUser
from literature import database
from literature.user import set_global_user_id
from literature.schemas import ReferenceManualTermTagSchemaShow
from literature.schemas import ReferenceManualTermTagSchemaPost
from literature.schemas import ReferenceManualTermTagSchemaPatch
from literature.schemas import ResponseMessageSchema
from literature.crud import reference_manual_term_tag_crud
from literature.routers.authentication import auth
router = APIRouter(
prefix="/reference_manual_term_tag",
tags=['Reference Manual Term Tag']
)
get_db = database.get_db
@router.post('/',
status_code=status.HTTP_201_CREATED,
response_model=str)
def create(request: ReferenceManualTermTagSchemaPost,
user: OktaUser = Security(auth.get_user),
db: Session = Depends(get_db)):
set_global_user_id(db, user.id)
return reference_manual_term_tag_crud.create(db, request)
@router.delete('/{reference_manual_term_tag_id}',
status_code=status.HTTP_204_NO_CONTENT)
def destroy(reference_manual_term_tag_id: int,
user: OktaUser = Security(auth.get_user),
db: Session = Depends(get_db)):
set_global_user_id(db, user.id)
reference_manual_term_tag_crud.destroy(db, reference_manual_term_tag_id)
return Response(status_code=status.HTTP_204_NO_CONTENT)
@router.patch('/{reference_manual_term_tag_id}',
status_code=status.HTTP_202_ACCEPTED,
response_model=ResponseMessageSchema)
async def patch(reference_manual_term_tag_id: int,
request: ReferenceManualTermTagSchemaPatch,
user: OktaUser = Security(auth.get_user),
db: Session = Depends(get_db)):
set_global_user_id(db, user.id)
    patch_data = request.dict(exclude_unset=True)
    return reference_manual_term_tag_crud.patch(db, reference_manual_term_tag_id, patch_data)
@router.get('/{reference_manual_term_tag_id}',
response_model=ReferenceManualTermTagSchemaShow,
status_code=200)
def show(reference_manual_term_tag_id: int,
db: Session = Depends(get_db)):
return reference_manual_term_tag_crud.show(db, reference_manual_term_tag_id)
@router.get('/{reference_manual_term_tag_id}/versions',
status_code=200)
def show_versions(reference_manual_term_tag_id: int,
db: Session = Depends(get_db)):
return reference_manual_term_tag_crud.show_changesets(db, reference_manual_term_tag_id)
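# Typical wiring (illustrative; the project's real app setup lives elsewhere):
#   from fastapi import FastAPI
#   app = FastAPI()
#   app.include_router(router)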
| 33.4125 | 91 | 0.75982 | 339 | 2,673 | 5.640118 | 0.19469 | 0.156904 | 0.198745 | 0.230126 | 0.478556 | 0.45136 | 0.330021 | 0.27249 | 0.27249 | 0.226464 | 0 | 0.00809 | 0.167602 | 2,673 | 79 | 92 | 33.835443 | 0.851236 | 0 | 0 | 0.263158 | 0 | 0 | 0.069211 | 0.059484 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070175 | false | 0 | 0.263158 | 0.035088 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fba237db49b2b835cdcad396506066991e50a95b | 2,087 | py | Python | backend/app/apps/authentication/views.py | Hesbon5600/hotel-recommender | f0048bdc25e212b218ded8413977242429ff4b90 | [
"MIT"
] | null | null | null | backend/app/apps/authentication/views.py | Hesbon5600/hotel-recommender | f0048bdc25e212b218ded8413977242429ff4b90 | [
"MIT"
] | 1 | 2021-04-08T20:07:39.000Z | 2021-04-08T20:07:39.000Z | backend/app/apps/authentication/views.py | Hesbon5600/hotel-recommender | f0048bdc25e212b218ded8413977242429ff4b90 | [
"MIT"
] | null | null | null | from django.views.generic import CreateView, FormView
from django.conf import settings
from django.shortcuts import render, redirect, reverse
from django.urls import reverse_lazy
from django.contrib.auth import authenticate, login, logout
from django.contrib import messages
from django.core.mail import send_mail
from .models import User
from .forms import UserSignupForm, UserLoginForm
class UserRegistrationCreateView(FormView):
"""
Create user api view
"""
model = User
form_class = UserSignupForm
template_name = 'authentication/signup.html'
def post(self, request):
"""
Overide the default post()
"""
form = self.form_class(request.POST)
if not form.is_valid():
return render(request, self.template_name, {"form": form})
data = {
"first_name": form.cleaned_data['first_name'],
"last_name": form.cleaned_data['last_name'],
"password": form.cleaned_data['password'],
"email": form.cleaned_data['email'],
}
User.objects.create_user(**data)
return redirect(reverse('authentication:login'))
class UserLoginCreateView(FormView):
"""
Create user api view
"""
model = User
form_class = UserLoginForm
template_name = 'authentication/login.html'
success_url = reverse_lazy("hotels:list-hotels")
def post(self, request):
"""
Overide the default post()
# """
form = self.form_class(request.POST)
email = request.POST['username']
password = request.POST['password']
user = authenticate(request, email=email, password=password)
if user and user.is_active:
login(request, user)
return super(UserLoginCreateView, self).form_valid(form)
        messages.error(
            request, 'This email and password combination is invalid', extra_tags='red')
return render(request, self.template_name, {"form": form})
def logout_view(request):
logout(request)
return redirect('authentication:login')
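# Illustrative URL wiring (paths and names assumed; the project's actual
# urls.py may differ):
#   from django.urls import path
#   urlpatterns = [
#       path('signup/', UserRegistrationCreateView.as_view(), name='signup'),
#       path('login/', UserLoginCreateView.as_view(), name='login'),
#       path('logout/', logout_view, name='logout'),
#   ]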
| 31.621212 | 88 | 0.659799 | 233 | 2,087 | 5.798283 | 0.321888 | 0.051813 | 0.044412 | 0.031088 | 0.226499 | 0.226499 | 0.226499 | 0.226499 | 0.162842 | 0.099186 | 0 | 0 | 0.235745 | 2,087 | 65 | 89 | 32.107692 | 0.847022 | 0.046478 | 0 | 0.181818 | 0 | 0 | 0.128594 | 0.02666 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0.090909 | 0.204545 | 0 | 0.590909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
fba5400c53eb75375a556ae72c212f65831ee73d | 911 | py | Python | com/banma/test/test5.py | liuyangyang2015/PythonDemo | a72c009a31ff833dd12405bb97e688ae91ceda6c | [
"MIT"
] | null | null | null | com/banma/test/test5.py | liuyangyang2015/PythonDemo | a72c009a31ff833dd12405bb97e688ae91ceda6c | [
"MIT"
] | null | null | null | com/banma/test/test5.py | liuyangyang2015/PythonDemo | a72c009a31ff833dd12405bb97e688ae91ceda6c | [
"MIT"
] | null | null | null | # class Chain(object):
#
# def __init__(self, path=''):
# self._path = path
#
# def __getattr__(self, path):
# return Chain('%s/%s' % (self._path, path))
#
# def __str__(self):
# return self._path
#
# __repr__ = __str__
#
# print(Chain().status.user.timeline.list)
#
# from enum import Enum, unique
#
# @unique
# class Weekday(Enum):
# Sun = 0  # Sun's value is set to 0
# Mon = 1
# Tue = 2
# Wed = 8
# Thu = 4
# Fri = 5
# Sat = 6
#
# print(Weekday(8))
# import os
#
# print('Process (%s) start...' % os.getpid())
# # Only works on Unix/Linux/Mac:
# pid = os.fork()
# if pid == 0:
# print('I am child process (%s) and my parent is %s.' % (os.getpid(), os.getppid()))
# else:
# print('I (%s) just created a child process (%s).' % (os.getpid(), pid))
from datetime import datetime
now = datetime.now()  # get the current datetime
print(now)
print(type(now)) | 20.704545 | 89 | 0.566411 | 123 | 911 | 4.00813 | 0.536585 | 0.081136 | 0.048682 | 0.060852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014599 | 0.248079 | 911 | 44 | 90 | 20.704545 | 0.705109 | 0.824369 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
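# Formatting example: strftime renders the datetime as text and strptime
# parses it back, so the pair round-trips to whole-second precision.
stamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(stamp)
parsed = datetime.strptime(stamp, '%Y-%m-%d %H:%M:%S')
print(parsed)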
fbaf7f34961e663b79853166a225057d6daf59f0 | 208 | py | Python | client/__init__.py | eresende-nuodb/nuodb-client | 5252b4c98a2d8084d5cb677a5d72f298dd0071fc | [
"BSD-3-Clause"
] | null | null | null | client/__init__.py | eresende-nuodb/nuodb-client | 5252b4c98a2d8084d5cb677a5d72f298dd0071fc | [
"BSD-3-Clause"
] | null | null | null | client/__init__.py | eresende-nuodb/nuodb-client | 5252b4c98a2d8084d5cb677a5d72f298dd0071fc | [
"BSD-3-Clause"
] | null | null | null | __author__ = "NuoDB, Inc."
__copyright__ = "(C) Copyright NuoDB, Inc. 2019"
__license__ = "MIT"
__version__ = "1.0"
__maintainer__ = "NuoDB Drivers"
__email__ = "drivers@nuodb.com"
__status__ = "Production"
| 26 | 49 | 0.721154 | 23 | 208 | 5.304348 | 0.73913 | 0.131148 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033333 | 0.134615 | 208 | 7 | 50 | 29.714286 | 0.644444 | 0 | 0 | 0 | 0 | 0 | 0.423077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
fbb7b04900c9d443524f381064d8ef36a2d29641 | 845 | py | Python | myuw/views/api/directory.py | timtim17/myuw | d59702a8095daf049d7e57cbb1f7f2a5bebc69af | [
"Apache-2.0"
] | null | null | null | myuw/views/api/directory.py | timtim17/myuw | d59702a8095daf049d7e57cbb1f7f2a5bebc69af | [
"Apache-2.0"
] | null | null | null | myuw/views/api/directory.py | timtim17/myuw | d59702a8095daf049d7e57cbb1f7f2a5bebc69af | [
"Apache-2.0"
] | null | null | null | # Copyright 2022 UW-IT, University of Washington
# SPDX-License-Identifier: Apache-2.0
import traceback
import logging
from myuw.dao.pws import get_person_of_current_user
from myuw.views.error import handle_exception
from myuw.logger.timer import Timer
from myuw.logger.logresp import log_api_call
from myuw.views.api import ProtectedAPI
logger = logging.getLogger(__name__)
class MyDirectoryInfo(ProtectedAPI):
def get(self, request, *args, **kwargs):
"""
GET returns 200 with PWS entry for the current user
"""
timer = Timer()
try:
resp = get_person_of_current_user(request).json_data()
log_api_call(timer, request, "Get MyDirectoryInfo")
return self.json_response(resp)
except Exception:
return handle_exception(logger, timer, traceback)
| 31.296296 | 66 | 0.711243 | 109 | 845 | 5.330275 | 0.522936 | 0.068847 | 0.037866 | 0.061962 | 0.075732 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.211834 | 845 | 26 | 67 | 32.5 | 0.858859 | 0.159763 | 0 | 0 | 0 | 0 | 0.027737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.411765 | 0 | 0.647059 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
fbbdd9d2a51684c774767be1b8e4fff0c5b342a9 | 1,332 | py | Python | solvebio/resource/__init__.py | PolinaBevad/solvebio-python | f6c736baa01b5a868a385cb0baf8f9dc2007cec3 | [
"MIT"
] | null | null | null | solvebio/resource/__init__.py | PolinaBevad/solvebio-python | f6c736baa01b5a868a385cb0baf8f9dc2007cec3 | [
"MIT"
] | null | null | null | solvebio/resource/__init__.py | PolinaBevad/solvebio-python | f6c736baa01b5a868a385cb0baf8f9dc2007cec3 | [
"MIT"
] | null | null | null | from __future__ import absolute_import
from .apiresource import ListObject
from .user import User
from .dataset import Dataset
from .datasetfield import DatasetField
from .datasetimport import DatasetImport
from .datasetexport import DatasetExport
from .datasetcommit import DatasetCommit
from .datasetmigration import DatasetMigration
from .datasettemplate import DatasetTemplate
from .vault_sync_task import VaultSyncTask
from .object_copy_task import ObjectCopyTask
from .manifest import Manifest
from .object import Object
from .vault import Vault
from .task import Task
from .beacon import Beacon
from .beaconset import BeaconSet
from .application import Application
from .group import Group
from .savedquery import SavedQuery
types = {
'Application': Application,
'Beacon': Beacon,
'BeaconSet': BeaconSet,
'Dataset': Dataset,
'DatasetImport': DatasetImport,
'DatasetExport': DatasetExport,
'DatasetCommit': DatasetCommit,
'DatasetMigration': DatasetMigration,
'DatasetTemplate': DatasetTemplate,
'DatasetField': DatasetField,
'Group': Group,
'Manifest': Manifest,
'Object': Object,
'ObjectCopyTask': ObjectCopyTask,
'ECSTask': Task,
'VaultSyncTask': VaultSyncTask,
'User': User,
'Vault': Vault,
'list': ListObject,
'SavedQuery': SavedQuery,
}
| 28.340426 | 46 | 0.764264 | 130 | 1,332 | 7.761538 | 0.230769 | 0.029732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158408 | 1,332 | 46 | 47 | 28.956522 | 0.900089 | 0 | 0 | 0 | 0 | 0 | 0.143393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.511628 | 0 | 0.511628 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
fbbeebc4cf13457f9cc2193dd2ff394cca86f7f7 | 1,679 | py | Python | stubs.min/System/ComponentModel/__init___parts/CurrentChangedEventManager.py | ricardyn/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | 1 | 2021-02-02T13:39:16.000Z | 2021-02-02T13:39:16.000Z | stubs.min/System/ComponentModel/__init___parts/CurrentChangedEventManager.py | hdm-dt-fb/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | null | null | null | stubs.min/System/ComponentModel/__init___parts/CurrentChangedEventManager.py | hdm-dt-fb/ironpython-stubs | 4d2b405eda3ceed186e8adca55dd97c332c6f49d | [
"MIT"
] | null | null | null | class CurrentChangedEventManager(WeakEventManager):
""" Provides a System.Windows.WeakEventManager implementation so that you can use the "weak event listener" pattern to attach listeners for the System.ComponentModel.ICollectionView.CurrentChanged event. """
@staticmethod
def AddHandler(source,handler):
""" AddHandler(source: ICollectionView,handler: EventHandler[EventArgs]) """
pass
@staticmethod
def AddListener(source,listener):
"""
AddListener(source: ICollectionView,listener: IWeakEventListener)
Adds the specified listener to the
System.ComponentModel.ICollectionView.CurrentChanged event of the specified
source.
source: The object with the event.
listener: The object to add as a listener.
"""
pass
@staticmethod
def RemoveHandler(source,handler):
""" RemoveHandler(source: ICollectionView,handler: EventHandler[EventArgs]) """
pass
@staticmethod
def RemoveListener(source,listener):
"""
RemoveListener(source: ICollectionView,listener: IWeakEventListener)
Removes the specified listener from the
System.ComponentModel.ICollectionView.CurrentChanged event of the specified
source.
source: The object with the event.
listener: The listener to remove.
"""
pass
ReadLock=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Establishes a read-lock on the underlying data table,and returns an System.IDisposable.
"""
WriteLock=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Establishes a write-lock on the underlying data table,and returns an System.IDisposable.
"""
| 34.979167 | 209 | 0.733175 | 182 | 1,679 | 6.763736 | 0.362637 | 0.048741 | 0.056052 | 0.092608 | 0.541836 | 0.541836 | 0.495532 | 0.495532 | 0.385053 | 0.385053 | 0 | 0 | 0.181656 | 1,679 | 47 | 210 | 35.723404 | 0.895924 | 0.538416 | 0 | 0.533333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0.266667 | 0 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
fbc319e8d1b7e3923c4c6a288f7bd866f0ee7778 | 10,915 | py | Python | release/stubs.min/System/Drawing/__init___parts/Color.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 182 | 2017-06-27T02:26:15.000Z | 2022-03-30T18:53:43.000Z | release/stubs.min/System/Drawing/__init___parts/Color.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 28 | 2017-06-27T13:38:23.000Z | 2022-03-15T11:19:44.000Z | release/stubs.min/System/Drawing/__init___parts/Color.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 67 | 2017-06-28T09:43:59.000Z | 2022-03-20T21:17:10.000Z | class Color(object):
""" Represents an ARGB (alpha,red,green,blue) color. """
def Equals(self,obj):
"""
Equals(self: Color,obj: object) -> bool
Tests whether the specified object is a System.Drawing.Color structure and is equivalent to this
System.Drawing.Color structure.
obj: The object to test.
Returns: true if obj is a System.Drawing.Color structure equivalent to this System.Drawing.Color
structure; otherwise,false.
"""
pass
@staticmethod
def FromArgb(*__args):
"""
FromArgb(alpha: int,baseColor: Color) -> Color
Creates a System.Drawing.Color structure from the specified System.Drawing.Color structure,but
with the new specified alpha value. Although this method allows a 32-bit value to be passed for
the alpha value,the value is limited to 8 bits.
alpha: The alpha value for the new System.Drawing.Color. Valid values are 0 through 255.
baseColor: The System.Drawing.Color from which to create the new System.Drawing.Color.
Returns: The System.Drawing.Color that this method creates.
FromArgb(red: int,green: int,blue: int) -> Color
Creates a System.Drawing.Color structure from the specified 8-bit color values (red,green,and
blue). The alpha value is implicitly 255 (fully opaque). Although this method allows a 32-bit
value to be passed for each color component,the value of each component is limited to 8 bits.
red: The red component value for the new System.Drawing.Color. Valid values are 0 through 255.
green: The green component value for the new System.Drawing.Color. Valid values are 0 through 255.
blue: The blue component value for the new System.Drawing.Color. Valid values are 0 through 255.
Returns: The System.Drawing.Color that this method creates.
FromArgb(argb: int) -> Color
Creates a System.Drawing.Color structure from a 32-bit ARGB value.
argb: A value specifying the 32-bit ARGB value.
Returns: The System.Drawing.Color structure that this method creates.
FromArgb(alpha: int,red: int,green: int,blue: int) -> Color
Creates a System.Drawing.Color structure from the four ARGB component (alpha,red,green,and
blue) values. Although this method allows a 32-bit value to be passed for each component,the
value of each component is limited to 8 bits.
alpha: The alpha component. Valid values are 0 through 255.
red: The red component. Valid values are 0 through 255.
green: The green component. Valid values are 0 through 255.
blue: The blue component. Valid values are 0 through 255.
Returns: The System.Drawing.Color that this method creates.
"""
pass
@staticmethod
def FromKnownColor(color):
"""
FromKnownColor(color: KnownColor) -> Color
Creates a System.Drawing.Color structure from the specified predefined color.
color: An element of the System.Drawing.KnownColor enumeration.
Returns: The System.Drawing.Color that this method creates.
"""
pass
@staticmethod
def FromName(name):
"""
FromName(name: str) -> Color
Creates a System.Drawing.Color structure from the specified name of a predefined color.
name: A string that is the name of a predefined color. Valid names are the same as the names of the
elements of the System.Drawing.KnownColor enumeration.
Returns: The System.Drawing.Color that this method creates.
"""
pass
def GetBrightness(self):
"""
GetBrightness(self: Color) -> Single
Gets the hue-saturation-brightness (HSB) brightness value for this System.Drawing.Color
structure.
Returns: The brightness of this System.Drawing.Color. The brightness ranges from 0.0 through 1.0,where
0.0 represents black and 1.0 represents white.
"""
pass
def GetHashCode(self):
"""
GetHashCode(self: Color) -> int
Returns a hash code for this System.Drawing.Color structure.
Returns: An integer value that specifies the hash code for this System.Drawing.Color.
"""
pass
def GetHue(self):
"""
GetHue(self: Color) -> Single
Gets the hue-saturation-brightness (HSB) hue value,in degrees,for this System.Drawing.Color
structure.
Returns: The hue,in degrees,of this System.Drawing.Color. The hue is measured in degrees,ranging from
0.0 through 360.0,in HSB color space.
"""
pass
def GetSaturation(self):
"""
GetSaturation(self: Color) -> Single
Gets the hue-saturation-brightness (HSB) saturation value for this System.Drawing.Color
structure.
Returns: The saturation of this System.Drawing.Color. The saturation ranges from 0.0 through 1.0,where
0.0 is grayscale and 1.0 is the most saturated.
"""
pass
def ToArgb(self):
"""
ToArgb(self: Color) -> int
Gets the 32-bit ARGB value of this System.Drawing.Color structure.
Returns: The 32-bit ARGB value of this System.Drawing.Color.
"""
pass
def ToKnownColor(self):
"""
ToKnownColor(self: Color) -> KnownColor
Gets the System.Drawing.KnownColor value of this System.Drawing.Color structure.
Returns: An element of the System.Drawing.KnownColor enumeration,if the System.Drawing.Color is created
from a predefined color by using either the System.Drawing.Color.FromName(System.String) method
or the System.Drawing.Color.FromKnownColor(System.Drawing.KnownColor) method; otherwise,0.
"""
pass
def ToString(self):
"""
ToString(self: Color) -> str
Converts this System.Drawing.Color structure to a human-readable string.
Returns: A string that is the name of this System.Drawing.Color,if the System.Drawing.Color is created
from a predefined color by using either the System.Drawing.Color.FromName(System.String) method
or the System.Drawing.Color.FromKnownColor(System.Drawing.KnownColor) method; otherwise,a
string that consists of the ARGB component names and their values.
"""
pass
def __eq__(self,*args):
""" x.__eq__(y) <==> x==y """
pass
def __ne__(self,*args):
pass
A=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the alpha component value of this System.Drawing.Color structure.
Get: A(self: Color) -> Byte
"""
B=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the blue component value of this System.Drawing.Color structure.
Get: B(self: Color) -> Byte
"""
G=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the green component value of this System.Drawing.Color structure.
Get: G(self: Color) -> Byte
"""
IsEmpty=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Specifies whether this System.Drawing.Color structure is uninitialized.
Get: IsEmpty(self: Color) -> bool
"""
IsKnownColor=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets a value indicating whether this System.Drawing.Color structure is a predefined color. Predefined colors are represented by the elements of the System.Drawing.KnownColor enumeration.
Get: IsKnownColor(self: Color) -> bool
"""
IsNamedColor=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets a value indicating whether this System.Drawing.Color structure is a named color or a member of the System.Drawing.KnownColor enumeration.
Get: IsNamedColor(self: Color) -> bool
"""
IsSystemColor=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets a value indicating whether this System.Drawing.Color structure is a system color. A system color is a color that is used in a Windows display element. System colors are represented by elements of the System.Drawing.KnownColor enumeration.
Get: IsSystemColor(self: Color) -> bool
"""
Name=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the name of this System.Drawing.Color.
Get: Name(self: Color) -> str
"""
R=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the red component value of this System.Drawing.Color structure.
Get: R(self: Color) -> Byte
"""
AliceBlue=None
AntiqueWhite=None
Aqua=None
Aquamarine=None
Azure=None
Beige=None
Bisque=None
Black=None
BlanchedAlmond=None
Blue=None
BlueViolet=None
Brown=None
BurlyWood=None
CadetBlue=None
Chartreuse=None
Chocolate=None
Coral=None
CornflowerBlue=None
Cornsilk=None
Crimson=None
Cyan=None
DarkBlue=None
DarkCyan=None
DarkGoldenrod=None
DarkGray=None
DarkGreen=None
DarkKhaki=None
DarkMagenta=None
DarkOliveGreen=None
DarkOrange=None
DarkOrchid=None
DarkRed=None
DarkSalmon=None
DarkSeaGreen=None
DarkSlateBlue=None
DarkSlateGray=None
DarkTurquoise=None
DarkViolet=None
DeepPink=None
DeepSkyBlue=None
DimGray=None
DodgerBlue=None
Empty=None
Firebrick=None
FloralWhite=None
ForestGreen=None
Fuchsia=None
Gainsboro=None
GhostWhite=None
Gold=None
Goldenrod=None
Gray=None
Green=None
GreenYellow=None
Honeydew=None
HotPink=None
IndianRed=None
Indigo=None
Ivory=None
Khaki=None
Lavender=None
LavenderBlush=None
LawnGreen=None
LemonChiffon=None
LightBlue=None
LightCoral=None
LightCyan=None
LightGoldenrodYellow=None
LightGray=None
LightGreen=None
LightPink=None
LightSalmon=None
LightSeaGreen=None
LightSkyBlue=None
LightSlateGray=None
LightSteelBlue=None
LightYellow=None
Lime=None
LimeGreen=None
Linen=None
Magenta=None
Maroon=None
MediumAquamarine=None
MediumBlue=None
MediumOrchid=None
MediumPurple=None
MediumSeaGreen=None
MediumSlateBlue=None
MediumSpringGreen=None
MediumTurquoise=None
MediumVioletRed=None
MidnightBlue=None
MintCream=None
MistyRose=None
Moccasin=None
NavajoWhite=None
Navy=None
OldLace=None
Olive=None
OliveDrab=None
Orange=None
OrangeRed=None
Orchid=None
PaleGoldenrod=None
PaleGreen=None
PaleTurquoise=None
PaleVioletRed=None
PapayaWhip=None
PeachPuff=None
Peru=None
Pink=None
Plum=None
PowderBlue=None
Purple=None
Red=None
RosyBrown=None
RoyalBlue=None
SaddleBrown=None
Salmon=None
SandyBrown=None
SeaGreen=None
SeaShell=None
Sienna=None
Silver=None
SkyBlue=None
SlateBlue=None
SlateGray=None
Snow=None
SpringGreen=None
SteelBlue=None
Tan=None
Teal=None
Thistle=None
Tomato=None
Transparent=None
Turquoise=None
Violet=None
Wheat=None
White=None
WhiteSmoke=None
Yellow=None
YellowGreen=None
# === modules/s3/s3cfg.py (repo: flavour/lacity, license: MIT) ===
# -*- coding: utf-8 -*-
""" Deployment Settings
@requires: U{B{I{gluon}} <http://web2py.com>}
@author: Dominic König <dominic[at]aidiq.com>
@copyright: 2009-2011 (c) Sahana Software Foundation
@license: MIT
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
"""
__all__ = ["S3Config"]
from gluon import HTTP, current
from gluon.storage import Storage
from gluon.contrib.simplejson.ordered_dict import OrderedDict
class S3Config(Storage):
"""
Deployment Settings Helper Class
"""
def __init__(self):
self.auth = Storage()
self.base = Storage()
self.database = Storage()
self.frontpage = Storage()
self.frontpage.rss = []
self.fin = Storage()
self.gis = Storage()
self.mail = Storage()
self.twitter = Storage()
self.L10n = Storage()
self.options = Storage()
self.osm = Storage()
self.security = Storage()
self.aaa = Storage()
self.ui = Storage()
self.req = Storage()
self.inv = Storage()
self.hrm = Storage()
self.save_search = Storage()
T = current.T
# These are copied from modules/s3/s3aaa.py
self.aaa.acl = Storage(CREATE = 0x0001,
READ = 0x0002,
UPDATE = 0x0004,
DELETE = 0x0008,
ALL = 0x000F # CREATE | READ | UPDATE | DELETE
)
self.CURRENCIES = {
"USD" :T("United States Dollars"),
"EUR" :T("Euros"),
"GBP" :T("Great British Pounds")
}
# -----------------------------------------------------------------------------
# Auth settings
def get_auth_hmac_key(self):
return self.auth.get("hmac_key", "akeytochange")
def get_auth_openid(self):
return self.auth.get("openid", False)
def get_auth_registration_requires_verification(self):
return self.auth.get("registration_requires_verification", False)
def get_auth_registration_requires_approval(self):
return self.auth.get("registration_requires_approval", False)
def get_auth_registration_requests_mobile_phone(self):
return self.auth.get("registration_requests_mobile_phone", False)
def get_auth_registration_mobile_phone_mandatory(self):
" Make the selection of Mobile Phone Mandatory during registration "
return self.auth.get("registration_mobile_phone_mandatory", False)
def get_auth_registration_requests_organisation(self):
" Have the registration form request the Organisation "
return self.auth.get("registration_requests_organisation", False)
def get_auth_registration_organisation_mandatory(self):
" Make the selection of Organisation Mandatory during registration "
return self.auth.get("registration_organisation_mandatory", False)
def get_auth_registration_organisation_hidden(self):
" Hide the Organisation field in the registration form unless an email is entered which isn't whitelisted "
return self.auth.get("registration_organisation_hidden", False)
def get_auth_registration_volunteer(self):
" Redirect the newly-registered user to their volunteer details page "
return self.auth.get("registration_volunteer", False)
def get_auth_always_notify_approver(self):
return self.auth.get("always_notify_approver", False)
# @ToDo: Deprecate
def get_aaa_default_uacl(self):
return self.aaa.get("default_uacl", self.aaa.acl.READ)
def get_aaa_default_oacl(self):
return self.aaa.get("default_oacl", self.aaa.acl.READ |
self.aaa.acl.UPDATE)
def get_security_archive_not_delete(self):
return self.security.get("archive_not_delete", True)
def get_security_audit_read(self):
return self.security.get("audit_read", False)
def get_security_audit_write(self):
return self.security.get("audit_write", False)
def get_security_policy(self):
" Default is Simple Security Policy "
return self.security.get("policy", 1)
def get_security_map(self):
return self.security.get("map", False)
def get_security_self_registration(self):
return self.security.get("self_registration", True)
# -----------------------------------------------------------------------------
# Base settings
def get_system_name(self):
return self.base.get("system_name", current.T("Sahana Eden Humanitarian Management Platform"))
def get_system_name_short(self):
return self.base.get("system_name_short", self.get_system_name())
def get_paper_size(self):
return self.base.get("paper_size", "A4")
def get_base_debug(self):
return self.base.get("debug", False)
def get_base_migrate(self):
return self.base.get("migrate", True)
def get_base_prepopulate(self):
return self.base.get("prepopulate", 1)
def get_base_public_url(self):
return self.base.get("public_url", "http://127.0.0.1:8000")
def get_base_cdn(self):
return self.base.get("cdn", False)
# -----------------------------------------------------------------------------
# Database settings
def get_database_type(self):
return self.database.get("db_type", "sqlite")
def get_database_string(self):
db_type = self.database.get("db_type", "sqlite")
pool_size = self.database.get("pool_size", 0)
if (db_type == "sqlite"):
db_string = "sqlite://storage.db"
elif (db_type == "mysql"):
db_string = "mysql://%s:%s@%s:%s/%s" % \
(self.database.get("username", "sahana"),
self.database.get("password", "password"),
self.database.get("host", "localhost"),
self.database.get("port", None) or "3306",
self.database.get("database", "sahana"))
elif (db_type == "postgres"):
db_string = "postgres://%s:%s@%s:%s/%s" % \
(self.database.get("username", "sahana"),
self.database.get("password", "password"),
self.database.get("host", "localhost"),
self.database.get("port", None) or "5432",
self.database.get("database", "sahana"))
else:
raise HTTP(501, body="Database type '%s' not recognised - please correct file models/000_config.py." % db_type)
if pool_size:
return (db_string, pool_size)
else:
return db_string
# -----------------------------------------------------------------------------
# Finance settings
# @ToDo: Make these customisable per User/Facility
def get_fin_currencies(self):
return self.fin.get("currencies", self.CURRENCIES)
def get_fin_currency_default(self):
return self.fin.get("currency_default", "USD")
def get_fin_currency_readable(self):
return self.fin.get("currency_readable", True)
def get_fin_currency_writable(self):
return self.fin.get("currency_writable", True)
# -----------------------------------------------------------------------------
# GIS (Map) Settings
# No defaults are needed for gis_config deployment settings -- initial
# defaults come either from the table itself or are added to the site
# config when it is created. This does not include defaults for the
# hierarchy labels as that is defined separately in 000_config.
def get_gis_default_config_values(self):
return self.gis.get("default_config_values", Storage())
def get_gis_default_location_hierarchy(self):
location_hierarchy = self.gis.get("location_hierarchy", None)
if not location_hierarchy:
location_hierarchy = OrderedDict([
("L0", current.T("Country")),
("L1", current.T("Province")),
("L2", current.T("District")),
("L3", current.T("Town")),
("L4", current.T("Village")),
#("L5", current.T("Neighbourhood")),
])
return location_hierarchy
# These fields in gis_config are references to other tables. Rather than
# hard code an id, default via the name.
def get_gis_default_symbology(self):
return self.gis.get("default_symbology", "US")
def get_gis_default_projection(self):
return self.gis.get("default_projection", "Spherical Mercator")
def get_gis_default_marker(self):
return self.gis.get("default_marker", "marker_red")
def get_gis_max_allowed_hierarchy_level(self):
# If the site's default hierarchy is deeper than the specified maximum,
# adjust the limit so the entire default hierarchy will be shown in a
# config update form. (At this point, we cannot also limit this to the
# depth available in the gis_config table as the database is not
# available. See max_allowed_level_num in s3gis GIS.)
limit = current.response.s3.gis.adjusted_max_allowed_hierarchy_level
if not limit:
limit = max(self.gis.get("max_allowed_hierarchy_level", "L4"),
self.get_gis_default_location_hierarchy().keys()[-1])
current.response.s3.gis.adjusted_max_allowed_hierarchy_level = limit
return limit
def get_gis_building_name(self):
" Display Building Name when selecting Locations "
return self.gis.get("building_name", True)
def get_gis_latlon_selector(self):
" Display a Lat/Lon boxes when selecting Locations "
return self.gis.get("latlon_selector", True)
def get_gis_map_selector(self):
" Display a Map-based tool to select Locations "
return self.gis.get("map_selector", True)
def get_gis_menu(self):
"""
Should we display a menu of GIS configurations?
- set to False to not show the menu (default)
- set to the label to use for the menu to enable it
e.g. T("Events") or T("Regions")
"""
return self.gis.get("menu", False)
def get_gis_display_l0(self):
return self.gis.get("display_L0", False)
def get_gis_display_l1(self):
return self.gis.get("display_L1", True)
def get_gis_duplicate_features(self):
return self.gis.get("duplicate_features", False)
def get_gis_edit_lx(self):
" Edit Hierarchy Locations "
return self.gis.get("edit_Lx", True)
def get_gis_edit_group(self):
" Edit Location Groups "
return self.gis.get("edit_GR", False)
def get_gis_marker_max_height(self):
return self.gis.get("marker_max_height", 35)
def get_gis_marker_max_width(self):
return self.gis.get("marker_max_width", 30)
def get_gis_mouse_position(self):
return self.gis.get("mouse_position", "normal")
def get_gis_print_service(self):
return self.gis.get("print_service", "")
def get_gis_geoserver_url(self):
return self.gis.get("geoserver_url", "")
def get_gis_geoserver_username(self):
return self.gis.get("geoserver_username", "admin")
def get_gis_geoserver_password(self):
return self.gis.get("geoserver_password", "password")
def get_gis_spatialdb(self):
return self.gis.get("spatialdb", False)
# OpenStreetMap settings
def get_osm_oauth_consumer_key(self):
return self.osm.get("oauth_consumer_key", "")
def get_osm_oauth_consumer_secret(self):
return self.osm.get("oauth_consumer_secret", "")
# -----------------------------------------------------------------------------
# L10N Settings
def get_L10n_default_country_code(self):
return self.L10n.get("default_country_code", 1)
def get_L10n_default_language(self):
return self.L10n.get("default_language", "en")
def get_L10n_display_toolbar(self):
return self.L10n.get("display_toolbar", True)
def get_L10n_languages(self):
return self.L10n.get("languages", { "en":current.T("English") })
def get_L10n_religions(self):
T = current.T
return self.L10n.get("religions", { "none":T("None"),
"other":T("Other") })
def get_L10n_date_format(self):
T = current.T
return self.L10n.get("date_format", T("%Y-%m-%d"))
def get_L10n_time_format(self):
T = current.T
return self.L10n.get("time_format", T("%H:%M:%S"))
def get_L10n_datetime_format(self):
T = current.T
return self.L10n.get("datetime_format", T("%Y-%m-%d %H:%M:%S"))
def get_L10n_utc_offset(self):
return self.L10n.get("utc_offset", "UTC +0000")
def get_L10n_mandatory_lastname(self):
return self.L10n.get("mandatory_lastname", False)
# -----------------------------------------------------------------------------
# Messaging
# -----------------------------------------------------------------------------
# Mail settings
def get_mail_server(self):
return self.mail.get("server", "127.0.0.1:25")
def get_mail_server_login(self):
return self.mail.get("login", False)
def get_mail_server_tls(self):
"""
Does the Mail Server use TLS?
- default Debian is False
- GMail is True
"""
return self.mail.get("tls", False)
def get_mail_sender(self):
return self.mail.get("sender", "sahana@your.org")
def get_mail_approver(self):
return self.mail.get("approver", "useradmin@your.org")
def get_mail_limit(self):
""" A daily limit to the number of messages which can be sent """
return self.mail.get("limit", None)
# Twitter settings
def get_twitter_oauth_consumer_key(self):
return self.twitter.get("oauth_consumer_key", "")
def get_twitter_oauth_consumer_secret(self):
return self.twitter.get("oauth_consumer_secret", "")
# -----------------------------------------------------------------------------
# PDF settings
def get_pdf_logo(self):
return self.ui.get("pdf_logo", None)
# Optical Character Recognition (OCR)
def get_pdf_excluded_fields(self, resourcename):
excluded_fields_dict = {
"hms_hospital" : [
"hrm_human_resource",
],
"pr_group" : [
"pr_group_membership",
],
}
excluded_fields =\
excluded_fields_dict.get(resourcename, [])
return excluded_fields
# -----------------------------------------------------------------------------
# Options
def get_terms_of_service(self):
return self.options.get("terms_of_service", False)
def get_options_support_requests(self):
return self.options.get("support_requests", False)
# -----------------------------------------------------------------------------
# UI/Workflow Settings
def get_ui_navigate_away_confirm(self):
return self.ui.get("navigate_away_confirm", True)
def get_ui_confirm(self):
"""
For Delete actions
Workaround for this Bug in Selenium with FF4:
http://code.google.com/p/selenium/issues/detail?id=1604
"""
return self.ui.get("confirm", True)
def get_ui_autocomplete(self):
""" Currently Unused """
return self.ui.get("autocomplete", False)
def get_ui_update_label(self):
return self.ui.get("update_label", current.T("Open"))
def get_ui_cluster(self):
""" UN-style deployment? """
return self.ui.get("cluster", False)
def get_ui_camp(self):
""" 'Camp' instead of 'Shelter'? """
return self.ui.get("camp", False)
def get_ui_label_mobile_phone(self):
"""
Label for the Mobile Phone field
e.g. 'Cell Phone'
"""
T = current.T
label = self.ui.get("label_mobile_phone", T("Mobile Phone"))
# May need this form for Web Setup
#return T(label)
return label
def get_ui_label_postcode(self):
"""
Label for the Postcode field
e.g. 'ZIP Code'
"""
T = current.T
label = self.ui.get("label_postcode", T("Postcode"))
# May need this form for Web Setup
#return T(label)
return label
# -----------------------------------------------------------------------------
# Modules
# -----------------------------------------------------------------------------
# Request Settings
def get_req_type_inv_label(self):
return self.req.get("type_inv_label", current.T("Inventory Items"))
def get_req_type_hrm_label(self):
return self.req.get("type_hrm_label", current.T("People"))
def get_req_status_writable(self):
""" Whether Request Status should be manually editable """
return self.req.get("status_writable", True)
def get_req_quantities_writable(self):
""" Whether Item Quantities should be manually editable """
return self.req.get("quantities_writable", False)
def get_req_skill_quantities_writable(self):
""" Whether People Quantities should be manually editable """
return self.req.get("skill_quantities_writable", False)
def get_req_multiple_req_items(self):
return self.req.get("multiple_req_items", True)
def get_req_show_quantity_transit(self):
return self.req.get("show_quantity_transit", True)
def get_req_use_commit(self):
return self.req.get("use_commit", True)
def get_req_req_crud_strings(self, type = None):
return self.req.get("req_crud_strings") and \
self.req.req_crud_strings.get(type, None)
def get_req_webeoc_is_master(self):
return self.req.get("webeoc_is_master", True)
# -----------------------------------------------------------------------------
# Inventory Management Setting
def get_inv_collapse_tabs(self):
return self.inv.get("collapse_tabs", True)
def get_inv_shipment_name(self):
"""
Get the name of Shipments
- currently supported options are:
* shipment
* order
"""
return self.inv.get("shipment_name", "shipment")
# -----------------------------------------------------------------------------
# Human Resource Management
def get_hrm_email_required(self):
return self.hrm.get("email_required", True)
# Save Search and Subscription
def get_save_search_widget(self):
return self.save_search.get("widget", True)
# -----------------------------------------------------------------------------
# Active modules list
def has_module(self, module_name):
if not self.modules:
# Provide a minimal list of core modules
_modules = [
"admin", # Admin
"gis", # GIS
"pr", # Person Registry
"org" # Organization Registry
]
else:
_modules = self.modules
return module_name in _modules
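The getter pattern used throughout S3Config (`self.<section>.get(key, default)`) can be sketched without web2py; the `Storage` class below is a hypothetical minimal stand-in for `gluon.storage.Storage`, not the real implementation:

```python
# Minimal stand-in for gluon.storage.Storage: a dict that also allows
# attribute-style access (storage.key) and returns None for missing keys.
class Storage(dict):
    def __getattr__(self, key):
        return self.get(key)

    def __setattr__(self, key, value):
        self[key] = value


settings = Storage()
settings.database = Storage()

# Mirrors the pattern used by the getters above: fall back to a default
# when the deployment has not overridden the key.
print(settings.database.get("db_type", "sqlite"))   # sqlite (default)

settings.database.db_type = "postgres"
print(settings.database.get("db_type", "sqlite"))   # postgres (override)
```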
# === ejercicios/basico/suma_entre_dos_numeros.py (repos: carlosviveros/Soluciones, leugimkm/Soluciones; license: MIT) ===
"""AyudaEnPython: https://www.facebook.com/groups/ayudapython
Write a program that asks the user for two integers and prints the sum of
all the integers between the two numbers (both endpoints included).
Example:
Enter the start number: 4
Enter the end number: 8
The result is: 30
"""

inicio = int(input("Enter the start number: "))
fin = int(input("Enter the end number: "))
if inicio > fin:
    inicio, fin = fin, inicio
print(f"The result is: {sum(range(inicio, fin + 1))}")
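A closed-form alternative (a sketch, not part of the original exercise): the same sum can be computed with the arithmetic-series formula instead of iterating over `range`:

```python
def suma_entre(inicio: int, fin: int) -> int:
    # Normalize the order so the formula works whichever number is larger.
    if inicio > fin:
        inicio, fin = fin, inicio
    # Arithmetic series: n terms times the average of the first and last term.
    n = fin - inicio + 1
    return n * (inicio + fin) // 2


print(suma_entre(4, 8))  # 30, matching the example in the docstring
```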
# === dataloader/create_data_loader.py (repo: WiktorSa/Music-Generation-with-LSTM-and-.wav-files, license: MIT) ===
import numpy as np
from torch.utils.data import DataLoader
from dataloader.music_dataset import MusicDataset
def get_data_loader(x: np.ndarray, y: np.ndarray, batch_size: int) -> DataLoader:
"""
Generate a DataLoader from a given data
:param x: input sequences
:param y: output sequences
:param batch_size: batch size
:return: DataLoader
"""
dataset = MusicDataset(x, y)
dataloader = DataLoader(dataset, batch_size, shuffle=True, drop_last=True)
return dataloader
# === paw2018/nflgames/migrations/0003_auto_20180830_1438.py (repo: mdcollins80/PAW-api, license: MIT) ===
# Generated by Django 2.0.8 on 2018-08-30 18:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('nflgames', '0002_auto_20180830_1436'),
]
operations = [
migrations.AlterField(
model_name='game',
name='away_team_score',
field=models.SmallIntegerField(blank=True, null=True),
),
migrations.AlterField(
model_name='game',
name='home_team_score',
field=models.SmallIntegerField(blank=True, null=True),
),
]
# === operators/get_pillow.py (repo: BlenderAddonsArchive/material-combiner-addon, license: MIT) ===
import os
import sys
from subprocess import call
import bpy
from .. import globs
class InstallPIL(bpy.types.Operator):
bl_idname = 'smc.get_pillow'
bl_label = 'Install PIL'
bl_description = 'Click to install Pillow. This could take a while and might require you to start Blender as admin'
def execute(self, context):
python_executable = bpy.app.binary_path_python if bpy.app.version < (3, 0, 0) else sys.executable
try:
import pip
try:
from PIL import Image, ImageChops
except ImportError:
call([python_executable, '-m', 'pip', 'install', 'Pillow', '--user', '--upgrade'], shell=True)
except ImportError:
call([python_executable, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'get_pip.py'),
'--user'], shell=True)
call([python_executable, '-m', 'pip', 'install', 'Pillow', '--user', '--upgrade'], shell=True)
globs.smc_pi = True
self.report({'INFO'}, 'Installation complete')
return {'FINISHED'}
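The import probe inside `execute` can be factored into a standalone helper; the sketch below (a hypothetical helper, independent of Blender and of this add-on) mirrors the same try/except pattern:

```python
def pillow_available() -> bool:
    # Same probe as the operator: importing PIL succeeds only if Pillow
    # is installed in the current Python environment.
    try:
        from PIL import Image, ImageChops  # noqa: F401
        return True
    except ImportError:
        return False


print(pillow_available())  # True or False depending on the environment
```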
# === src/django_prefetch_utils/descriptors/annotation.py (repo: roverdotcom/django-prefetch-utils, license: BSD-3-Clause) ===
from django.utils.functional import cached_property
from .base import GenericPrefetchRelatedDescriptor
from .base import GenericSinglePrefetchRelatedDescriptorMixin
class AnnotationDescriptor(GenericSinglePrefetchRelatedDescriptorMixin, GenericPrefetchRelatedDescriptor):
"""
This descriptor behaves like an annotated value would appear
on a model. It lets you turn an annotation into a prefetch at
the cost of an additional query::
>>> class Author(models.Model):
... book_count = AnnotationDescriptor(Count('books'))
...
authors.models.Author
>>> author = Author.objects.get(name="Jane")
>>> author.book_count
11
>>> author = Author.objects.prefetch_related('book_count').get(name="Jane")
>>> author.book_count # no queries done
11
It works by storing a ``values_list`` tuple containing the annotated value
on :attr:`cache_name` on the object.
"""
def __init__(self, annotation):
self.annotation = annotation
def get_prefetch_model_class(self):
"""
Returns the model class of the objects that are prefetched
by this descriptor.
:returns: subclass of :class:`django.db.models.model`
"""
return self.model
@cached_property
def cache_name(self):
"""
Returns the name of the attribute where we will cache the annotated
value. We are overriding ``cache_name`` from
:class:`GenericPrefetchRelatedDescriptor` so that we can just return
the annotated value from :attr:`__get__`.
:rtype: str
"""
return "_prefetched_{}".format(self.name)
def __get__(self, obj, type=None):
if obj is None:
return self
# Perform the query if we haven't already fetched the annotated value
if not self.is_cached(obj):
annotation_value = super().__get__(obj, type)
setattr(obj, self.cache_name, annotation_value)
return getattr(obj, self.cache_name)[1]
def filter_queryset_for_instances(self, queryset, instances):
"""
Returns *queryset* filtered to the objects which are related to
*instances*.
:param list instances: instances of the class on which this
descriptor is found
:param QuerySet queryset: the queryset to filter for *instances*
:rtype: :class:`django.db.models.QuerySet`
"""
queryset = (
queryset.filter(pk__in=[obj.pk for obj in instances])
.annotate(**{self.name: self.annotation})
.values_list("pk", self.name)
)
return queryset
def get_join_value_for_instance(self, instance):
return instance.pk
def get_join_value_for_related_obj(self, annotation_value):
return annotation_value[0]
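The caching behaviour that `cache_name` and `__get__` implement (compute once, store under a private attribute, reuse on later access) can be sketched without Django; `CachedAttr` below is a hypothetical minimal descriptor, not part of this package:

```python
class CachedAttr:
    """Compute a value on first access and cache it on the instance."""

    def __init__(self, func, name):
        self.func = func
        self.cache_name = "_prefetched_{}".format(name)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Only run the (expensive) computation if nothing is cached yet.
        if not hasattr(obj, self.cache_name):
            setattr(obj, self.cache_name, self.func(obj))
        return getattr(obj, self.cache_name)


class Author:
    queries = 0

    def _count_books(self):
        Author.queries += 1  # stands in for a database query
        return 11

    book_count = CachedAttr(_count_books, "book_count")


author = Author()
print(author.book_count)  # 11 (computed)
print(author.book_count)  # 11 (cached; no second "query")
```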
# === Chapter_3/Chapter_3_1_4_1.py (repo: flytian/python_machinelearning, license: Apache-2.0) ===
# coding:utf-8
# 从sklearn.datasets中导入20类新闻文本抓取器。
from sklearn.datasets import fetch_20newsgroups
# 导入numpy,并且重命名为np。
import numpy as np
# 使用新闻抓取器从互联网上下载所有数据,并且存储在变量news中。
news = fetch_20newsgroups(subset='all')
# 从sklearn.cross_validation导入train_test_split用来分割数据。
from sklearn.cross_validation import train_test_split
# 对前3000条新闻文本进行数据分割,25%文本用于未来测试。
X_train, X_test, y_train, y_test = train_test_split(news.data[:3000], news.target[:3000], test_size=0.25,
random_state=33)
# 导入支持向量机(分类)模型。
from sklearn.svm import SVC
# 导入TfidfVectorizer文本抽取器。
from sklearn.feature_extraction.text import TfidfVectorizer
# 导入Pipeline。
from sklearn.pipeline import Pipeline
# 使用Pipeline 简化系统搭建流程,将文本抽取与分类器模型串联起来。
clf = Pipeline([('vect', TfidfVectorizer(stop_words='english', analyzer='word')), ('svc', SVC())])
# 这里需要试验的2个超参数的的个数分别是4、3, svc__gamma的参数共有10^-2, 10^-1... 。这样我们一共有12种的超参数组合,12个不同参数下的模型。
parameters = {'svc__gamma': np.logspace(-2, 1, 4), 'svc__C': np.logspace(-1, 1, 3)}
# 从sklearn.grid_search中导入网格搜索模块GridSearchCV。
from sklearn.grid_search import GridSearchCV
# 将12组参数组合以及初始化的Pipline包括3折交叉验证的要求全部告知GridSearchCV。请大家务必注意refit=True这样一个设定 。
gs = GridSearchCV(clf, parameters, verbose=2, refit=True, cv=3)
# 执行单线程网格搜索。
% time_ = gs.fit(X_train, y_train)
gs.best_params_, gs.best_score_
# 输出最佳模型在测试集上的准确性。
print gs.score(X_test, y_test)
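The comment about 12 hyperparameter combinations can be checked without sklearn at all; a quick sketch using only NumPy and itertools (variable names here are mine):

```python
import itertools
import numpy as np

# The two search axes used above: 4 gamma values and 3 C values.
gammas = np.logspace(-2, 1, 4)   # 0.01, 0.1, 1.0, 10.0
cs = np.logspace(-1, 1, 3)       # 0.1, 1.0, 10.0

grid = list(itertools.product(gammas, cs))
print(len(grid))  # 4 * 3 = 12 candidate models
```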
| 32.348837 | 105 | 0.761323 | 168 | 1,391 | 6.095238 | 0.571429 | 0.064453 | 0.027344 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041017 | 0.123652 | 1,391 | 42 | 106 | 33.119048 | 0.799016 | 0.357297 | 0 | 0 | 0 | 0 | 0.042141 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.4375 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
83a99a6012b085a99fc3b809de934d3e8644c29e | 821 | py | Python | tests/test_micloud.py | ekorsanov01/micloud | 65b78cc55510baa8d5d7cf4556ab5fbdf3464095 | [
"MIT"
] | 3 | 2021-01-08T15:16:33.000Z | 2021-03-04T18:06:17.000Z | tests/test_micloud.py | starkillerOG/micloud | 6d9c886a9c2a58f5a7711a410fd53be5386db694 | [
"MIT"
] | null | null | null | tests/test_micloud.py | starkillerOG/micloud | 6d9c886a9c2a58f5a7711a410fd53be5386db694 | [
"MIT"
] | null | null | null | import unittest
import logging
import os, json
from micloud import MiCloud
from tests.configuration import setup_testing_environment
setup_testing_environment()
class TestMiCloud(unittest.TestCase):
"""def test_login_success(self):
mc = MiCloud(os.getenv("USERNAME"), os.getenv("PASSWORD"))
f = open("tests.txt", "w")
f.write("Now the file has more content!")
f.close()
self.assertTrue(mc.login())"""
def test_get_devices(self):
mc = MiCloud(os.getenv("USERNAME"), os.getenv("PASSWORD"))
self.assertTrue(mc.login())
self.assertIsNotNone(mc.get_token())
res = mc.get_devices(save=True)
self.assertIsNotNone(res)
self.assertTrue(type(res)==list)
if __name__ == '__main__':
unittest.main()
| 23.457143 | 66 | 0.640682 | 98 | 821 | 5.183673 | 0.5 | 0.062992 | 0.090551 | 0.059055 | 0.177165 | 0.177165 | 0.177165 | 0.177165 | 0.177165 | 0 | 0 | 0 | 0.228989 | 821 | 34 | 67 | 24.147059 | 0.802528 | 0.237515 | 0 | 0 | 0 | 0 | 0.041379 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.0625 | false | 0.0625 | 0.3125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 2 |
83aa7a6b82a233ee798bc220c22027904429169d | 1,281 | py | Python | admin_sso/openid/models.py | allink/django-admin-sso | 04935aa31fa701ad1dd62c0aa1b9625235b555ad | [
"BSD-3-Clause"
] | 17 | 2015-03-30T09:37:33.000Z | 2022-03-11T08:36:26.000Z | admin_sso/openid/models.py | allink/django-admin-sso | 04935aa31fa701ad1dd62c0aa1b9625235b555ad | [
"BSD-3-Clause"
] | 6 | 2015-03-06T10:55:04.000Z | 2021-03-24T10:10:44.000Z | admin_sso/openid/models.py | allink/django-admin-sso | 04935aa31fa701ad1dd62c0aa1b9625235b555ad | [
"BSD-3-Clause"
] | 4 | 2015-06-25T07:32:59.000Z | 2019-07-24T08:35:09.000Z | from django.db import models
from django.utils.timezone import now
from django.utils.translation import ugettext_lazy as _
from admin_sso import settings
class OpenIDUser(models.Model):
claimed_id = models.TextField(max_length=2047)
email = models.EmailField()
fullname = models.CharField(max_length=255)
user = models.ForeignKey(settings.AUTH_USER_MODEL)
last_login = models.DateTimeField(_('last login'), default=now)
class Meta:
verbose_name = _('OpenIDUser')
verbose_name_plural = _('OpenIDUsers')
app_label = 'admin_sso'
def __unicode__(self):
return self.claimed_id
def update_last_login(self):
self.last_login = now()
self.save()
class Nonce(models.Model):
server_url = models.CharField(max_length=2047)
timestamp = models.IntegerField()
salt = models.CharField(max_length=40)
class Meta:
app_label = 'admin_sso'
class Association(models.Model):
server_url = models.CharField(max_length=2047)
handle = models.CharField(max_length=255)
secret = models.CharField(max_length=255)
issued = models.IntegerField()
lifetime = models.IntegerField()
assoc_type = models.CharField(max_length=64)
class Meta:
app_label = 'admin_sso'
| 27.255319 | 67 | 0.70726 | 158 | 1,281 | 5.493671 | 0.405063 | 0.082949 | 0.145161 | 0.193548 | 0.261521 | 0.168203 | 0.110599 | 0.110599 | 0.110599 | 0 | 0 | 0.024272 | 0.195941 | 1,281 | 46 | 68 | 27.847826 | 0.818447 | 0 | 0 | 0.235294 | 0 | 0 | 0.045277 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0.029412 | 0.794118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
83b21bfd3e88588a1f804872b03450864e71b9ce | 253 | py | Python | PythonLearning/Design and Analysis of Algorithms/week2/Q2_4.py | JimouChen/python-application | b7b16506a17e2c304d1c5fabd6385e96be211c56 | [
"Apache-2.0"
] | 1 | 2020-08-09T12:47:27.000Z | 2020-08-09T12:47:27.000Z | PythonLearning/Design and Analysis of Algorithms/week2/Q2_4.py | JimouChen/Python_Application | b7b16506a17e2c304d1c5fabd6385e96be211c56 | [
"Apache-2.0"
] | null | null | null | PythonLearning/Design and Analysis of Algorithms/week2/Q2_4.py | JimouChen/Python_Application | b7b16506a17e2c304d1c5fabd6385e96be211c56 | [
"Apache-2.0"
] | null | null | null | """
# @Time : 2020/9/17
# @Author : Jimou Chen
"""
def T1(n):
    # Recurrence (a): T(1) = 4, T(n) = 3*T(n-1) for n > 1.
    if n == 1:
        return 4
    elif n > 1:
        return 3 * T1(n - 1)


def T2(n):
    # Recurrence (b): T(1) = 1, T(n) = 2*T(n//3) + n for n > 1.
    if n == 1:
        return 1
    elif n > 1:
        return 2 * T2(n // 3) + n


print(T1(5))
print(T2(5))
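The first recurrence above has the closed form T(n) = 4 * 3**(n-1), which can be checked numerically; a quick sketch using local copies of the two recurrences (the names t_a and t_b are mine):

```python
def t_a(n):
    # Recurrence (a): T(1) = 4, T(n) = 3*T(n-1) for n > 1.
    return 4 if n == 1 else 3 * t_a(n - 1)

def t_b(n):
    # Recurrence (b): T(1) = 1, T(n) = 2*T(n//3) + n for n > 1.
    return 1 if n == 1 else 2 * t_b(n // 3) + n

# Closed form for (a): T(n) = 4 * 3**(n - 1).
for n in range(1, 10):
    assert t_a(n) == 4 * 3 ** (n - 1)

print(t_a(5), t_b(5))  # 324 7
```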
| 11 | 32 | 0.387352 | 43 | 253 | 2.27907 | 0.44186 | 0.102041 | 0.326531 | 0.142857 | 0.306122 | 0.306122 | 0.306122 | 0 | 0 | 0 | 0 | 0.126761 | 0.438735 | 253 | 22 | 33 | 11.5 | 0.56338 | 0.189723 | 0 | 0.545455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.545455 | 0.090909 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
83b3ae52b6957745430684a86578513982143650 | 2,601 | py | Python | test/test_bilayer_class_auto.py | aravindhk/Vides | 65d9ea9764ddf5f6ef40e869bd31387d0e3e378f | [
"BSD-4-Clause"
] | 2 | 2021-11-03T17:24:24.000Z | 2021-12-02T06:06:50.000Z | test/test_bilayer_class_auto.py | aravindhk/Vides | 65d9ea9764ddf5f6ef40e869bd31387d0e3e378f | [
"BSD-4-Clause"
] | null | null | null | test/test_bilayer_class_auto.py | aravindhk/Vides | 65d9ea9764ddf5f6ef40e869bd31387d0e3e378f | [
"BSD-4-Clause"
] | null | null | null | #!/usr/bin/python
from NanoTCAD_ViDES import *
###########################
#FIRST TEST
###########################
BG=bilayer_graphene(10);
BG.dE=1e-3;
BG.eta=1e-5;
BG.kmax=12.5958;
BG.kmin=5.39722;
BG.dk=(BG.kmax-BG.kmin)/32.0;
#-0.2
BG.Eupper = 0.633527
BG.Elower = -0.433527
BG.Phi=-0.2*ones(size(BG.Phi));
BG.charge_T();
plot(BG.charge*0.144*sqrt(3)*1e-9);
hold
a=loadtxt("carica.-0.2");
print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)));
plot(a,'o')
show()
if (max(abs(a-BG.charge*0.144*sqrt(3)*1e-9))<1e-5):
string="PASSED"
else:
string="NOT PASSED"
###########################
#SECOND TEST
###########################
BG=bilayer_graphene(10);
BG.dE=1e-3;
BG.eta=1e-5;
BG.kmax=12.5958;
BG.kmin=5.39722;
BG.dk=(BG.kmax-BG.kmin)/32.0;
#0.5
BG.Eupper = 0.258527
BG.Elower = -0.933527
BG.Phi=0.5*ones(size(BG.Phi));
BG.charge_T();
plot(BG.charge*0.144*sqrt(3)*1e-9);
hold
a=loadtxt("carica.0.5");
#print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)))
plot(a,'o')
show()
if (max(abs(a-BG.charge*0.144*sqrt(3)*1e-9))<1e-5):
string2="PASSED"
else:
string2="NOT PASSED"
print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)))
###########################
# THIRD TEST
###########################
BG=bilayer_graphene(10);
BG.dE=1e-3;
BG.eta=1e-8;
BG.kmax=12.5958;
BG.kmin=5.39722;
BG.dk=(BG.kmax-BG.kmin)/32.0;
#lin
BG.Eupper = 0.433527
BG.Elower = -0.530727
BG.Phi=BG.y*1e-2;
BG.charge_T();
plot(BG.charge*0.144*sqrt(3)*1e-9);
hold
a=loadtxt("carica.lin");
plot(a,'o')
show()
print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)))
if (max(abs(a-BG.charge*0.144*sqrt(3)*1e-9))<5e-5):
string3="PASSED"
else:
string3="NOT PASSED"
print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)))
#show()
###########################
# FOURTH TEST
###########################
BG=bilayer_graphene(10);
BG.dE=1e-3;
BG.eta=1e-5;
BG.kmax=12.5958;
BG.kmin=5.39722;
BG.dk=(BG.kmax-BG.kmin)/32.0;
#plusminus
BG.Eupper = 0.533527
BG.Elower = -0.533527
i=linspace(0,size(BG.Phi)-1,size(BG.Phi))
i_even=nonzero((i%2)==0);
i_odd=nonzero((i%2)==1);
BG.Phi[i_even]=-0.1;
BG.Phi[i_odd]=0.1;
BG.Ei=-BG.Phi;
BG.charge_T();
plot(BG.charge*0.144*sqrt(3)*1e-9);
hold
a=loadtxt("carica.plusminus");
plot(a,'o')
show()
print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)))
if (max(abs(a-BG.charge*0.144*sqrt(3)*1e-9))<5e-5):
string4="PASSED"
else:
string4="NOT PASSED"
print(max(abs(a-BG.charge*0.144*sqrt(3)*1e-9)))
fp=open("bilayer_test","w");
string5="TEST on bilayer \n Test 1: %s \n Test 2: %s \n Test 3: %s \n Test 4: %s \n" %(string,string2,string3,string4);
fp.write(string5);
fp.close()
#show()
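Each PASSED/NOT PASSED check above uses the same criterion: the maximum absolute deviation from a reference curve must stay below a tolerance. A minimal standalone version of that criterion (function name is mine):

```python
import numpy as np

def passed(reference, computed, tol=1e-5):
    """Pass criterion used in the bilayer tests: max |ref - computed| < tol."""
    diff = np.abs(np.asarray(reference) - np.asarray(computed))
    return float(np.max(diff)) < tol

a = np.array([1.0, 2.0, 3.0])
assert passed(a, a + 4e-6)        # within tolerance
assert not passed(a, a + 1e-3)    # outside tolerance
```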
| 17.815068 | 119 | 0.590927 | 512 | 2,601 | 2.974609 | 0.167969 | 0.099803 | 0.088641 | 0.118188 | 0.644123 | 0.644123 | 0.644123 | 0.644123 | 0.644123 | 0.644123 | 0 | 0.123624 | 0.091888 | 2,601 | 145 | 120 | 17.937931 | 0.521169 | 0.052672 | 0 | 0.608696 | 0 | 0.01087 | 0.09034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.086957 | 0.01087 | 0 | 0.01087 | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
83baccd90e553fca59c4116ef6b652786e64cf56 | 1,814 | py | Python | utilities/generator.py | RogerGee/php-git2 | ec8092016e379c79ba0ab3304e684c9a074b218c | [
"MIT"
] | 2 | 2019-05-13T12:31:15.000Z | 2021-07-12T13:50:15.000Z | utilities/generator.py | RogerGee/php-git2 | ec8092016e379c79ba0ab3304e684c9a074b218c | [
"MIT"
] | null | null | null | utilities/generator.py | RogerGee/php-git2 | ec8092016e379c79ba0ab3304e684c9a074b218c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#
# generator.py
#
# Use this script to generate templates for a set of function bindings. Pipe the
# function names, one per line, to this program. It first generates the template
# declarations and then generates the function entry macro.
import re
import sys
def genFunc(func):
    print(("static constexpr auto ZIF_{} = zif_php_git2_function<\n"
           " php_git2::func_wrapper<\n"
           "\n"
           " >::func<{}>,\n"
           " php_git2::local_pack<\n"
           "\n"
           " >,\n"
           " -1,\n"
           " php_git2::sequence<>,\n"
           " php_git2::sequence<>,\n"
           " php_git2::sequence<>\n"
           " >;\n").format(func.upper(),func))

def genFreeFunc(func,section):
    print(("static constexpr auto ZIF_{} = zif_php_git2_function_free<\n"
           " php_git2::local_pack<\n"
           " php_git2::php_resource_cleanup<php_git2::php_{}>\n"
           " >\n"
           " >;\n").format(func.upper(),section))

def genDefine(func,trailing=True):
    print(" PHP_GIT2_FE({},ZIF_{},NULL){}".format(func,func.upper()," \\" if trailing else ""))

def genDoc(func):
    print("{}()".format(func))

names = []
while True:
    line = sys.stdin.readline()
    if len(line) == 0:
        break
    func = re.search(r"[^\s]+",line)
    if func is None:
        print("error: bad input: '{}'".format(line))
        continue
    name = func.group(0)
    names.append(name)
    match = re.search(r"(.*)_free$",name)
    if match:
        genFreeFunc(name,match.group(1))
    else:
        genFunc(name)

print("#define GIT___FE \\")
for i in range(len(names)):
    genDefine(names[i], i + 1 < len(names))
print("")
print("-"*40)
print("[git_]")
print("-"*40)
for name in names:
| 27.074627 | 97 | 0.551819 | 230 | 1,814 | 4.217391 | 0.395652 | 0.079381 | 0.057732 | 0.049485 | 0.214433 | 0.180412 | 0.143299 | 0.143299 | 0.143299 | 0 | 0 | 0.015129 | 0.271224 | 1,814 | 66 | 98 | 27.484848 | 0.718608 | 0.137266 | 0 | 0.122449 | 1 | 0 | 0.330552 | 0.173941 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.040816 | null | null | 0.204082 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83c6d674a547b24f4e6713ae599c43f3801f2901 | 519 | py | Python | ebl/dictionary/web/bootstrap.py | ElectronicBabylonianLiterature/dictionary | 5977a57314cf57f94f75cd12520f178b1d6a6555 | [
"MIT"
] | null | null | null | ebl/dictionary/web/bootstrap.py | ElectronicBabylonianLiterature/dictionary | 5977a57314cf57f94f75cd12520f178b1d6a6555 | [
"MIT"
] | null | null | null | ebl/dictionary/web/bootstrap.py | ElectronicBabylonianLiterature/dictionary | 5977a57314cf57f94f75cd12520f178b1d6a6555 | [
"MIT"
] | null | null | null | import falcon
from ebl.context import Context
from ebl.dictionary.application.dictionary import Dictionary
from ebl.dictionary.web.word_search import WordSearch
from ebl.dictionary.web.words import WordsResource
def create_dictionary_routes(api: falcon.App, context: Context):
dictionary = Dictionary(context.word_repository, context.changelog)
words = WordsResource(dictionary)
word_search = WordSearch(dictionary)
api.add_route("/words", word_search)
api.add_route("/words/{object_id}", words)
| 34.6 | 71 | 0.795761 | 65 | 519 | 6.215385 | 0.369231 | 0.069307 | 0.126238 | 0.09901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115607 | 519 | 14 | 72 | 37.071429 | 0.880174 | 0 | 0 | 0 | 0 | 0 | 0.046243 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.454545 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
83d35e8b67f7fe3524402d845a60b2ed832f68d2 | 500 | py | Python | hsir/__init__.py | WenjieZ/wuhan-pneumonia | 3d26955daa2deedec57cdd3effb3118531bbea7f | [
"BSD-3-Clause"
] | 6 | 2020-01-26T07:33:41.000Z | 2020-02-25T22:15:43.000Z | hsir/__init__.py | WenjieZ/wuhan-pneumonia | 3d26955daa2deedec57cdd3effb3118531bbea7f | [
"BSD-3-Clause"
] | 2 | 2020-02-17T16:12:50.000Z | 2020-02-29T21:31:17.000Z | hsir/__init__.py | WenjieZ/wuhan-pneumonia | 3d26955daa2deedec57cdd3effb3118531bbea7f | [
"BSD-3-Clause"
] | 1 | 2020-03-07T00:13:05.000Z | 2020-03-07T00:13:05.000Z | from .utils import *
from .law import *
from .norm import *
from .empirical import *
from .sir import *
from .sirq import *
from .sirt import *
from .sirqt import *
__all__ = ['Id', 'JumpProcess',
'Law', 'Bin', 'Poi', 'Gau',
'variation1', 'variation2', 'elastic_net',
'Region', 'Epidemic', 'Sample', 'Confirmed', 'Resisted',
'SIR', 'InferSIR',
'SIRQ', 'InferSIRQ',
'SIRt', 'InferSIRt',
'SIRQt', 'InferSIRQt',
]
| 25 | 67 | 0.538 | 48 | 500 | 5.5 | 0.604167 | 0.265152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00565 | 0.292 | 500 | 19 | 68 | 26.315789 | 0.740113 | 0 | 0 | 0 | 0 | 0 | 0.29 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.470588 | 0 | 0.470588 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
83d3986613ac445839d15243517f52ee08b5825b | 239 | py | Python | dagger/dag_creator/airflow/utils/utils.py | jorgetagle/dagger | dafcfb9df904e512f050aefdacf6581c571bac23 | [
"MIT"
] | 5 | 2020-09-09T11:44:49.000Z | 2021-12-31T14:07:00.000Z | dagger/dag_creator/airflow/utils/utils.py | jorgetagle/dagger | dafcfb9df904e512f050aefdacf6581c571bac23 | [
"MIT"
] | null | null | null | dagger/dag_creator/airflow/utils/utils.py | jorgetagle/dagger | dafcfb9df904e512f050aefdacf6581c571bac23 | [
"MIT"
] | 3 | 2021-08-31T10:14:42.000Z | 2022-02-28T17:03:39.000Z | from pathlib import Path
def get_sql_queries(path):
sql_queries = {}
for query_file in Path(path).glob("*.sql"):
with open(query_file, "r") as f:
sql_queries[query_file.stem] = f.read()
return sql_queries
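`get_sql_queries` maps each `*.sql` file under a directory to its contents, keyed by the file's stem. A self-contained usage sketch against a temporary directory (the file names are mine):

```python
import tempfile
from pathlib import Path

def get_sql_queries(path):
    # Standalone copy of the helper above, for a runnable demo.
    sql_queries = {}
    for query_file in Path(path).glob("*.sql"):
        with open(query_file, "r") as f:
            sql_queries[query_file.stem] = f.read()
    return sql_queries

with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "load_users.sql").write_text("SELECT * FROM users;")
    Path(tmp, "notes.txt").write_text("ignored: not a .sql file")
    queries = get_sql_queries(tmp)

assert queries == {"load_users": "SELECT * FROM users;"}
```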
| 23.9 | 51 | 0.640167 | 36 | 239 | 4.027778 | 0.583333 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.238494 | 239 | 9 | 52 | 26.555556 | 0.796703 | 0 | 0 | 0 | 0 | 0 | 0.025105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83d4347102f8d7fc0cdbbc3324f10dea48926a95 | 7,983 | py | Python | pyaz/network/watcher/connection_monitor/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/network/watcher/connection_monitor/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | null | null | null | pyaz/network/watcher/connection_monitor/__init__.py | py-az-cli/py-az-cli | 9a7dc44e360c096a5a2f15595353e9dad88a9792 | [
"MIT"
] | 1 | 2022-02-03T09:12:01.000Z | 2022-02-03T09:12:01.000Z | from .... pyaz_utils import _call_az
from . import endpoint, output, test_configuration, test_group
def create(name, dest_address=None, dest_port=None, dest_resource=None, do_not_start=None, endpoint_dest_address=None, endpoint_dest_coverage_level=None, endpoint_dest_name=None, endpoint_dest_resource_id=None, endpoint_dest_type=None, endpoint_source_address=None, endpoint_source_coverage_level=None, endpoint_source_name=None, endpoint_source_resource_id=None, endpoint_source_type=None, frequency=None, http_method=None, http_path=None, http_port=None, http_valid_status_codes=None, https_prefer=None, icmp_disable_trace_route=None, location=None, monitoring_interval=None, notes=None, output_type=None, preferred_ip_version=None, protocol=None, resource_group=None, source_port=None, source_resource=None, tags=None, tcp_disable_trace_route=None, tcp_port=None, tcp_port_behavior=None, test_config_name=None, test_group_disable=None, test_group_name=None, threshold_failed_percent=None, threshold_round_trip_time=None, workspace_ids=None):
'''
Create a connection monitor.
Required Parameters:
- name -- Connection monitor name.
Optional Parameters:
- dest_address -- The IP address or URI at which to receive traffic.
- dest_port -- Port number on which to receive traffic.
- dest_resource -- Name of ID of the resource to receive traffic. Currently only Virtual Machines are supported.
- do_not_start -- Create the connection monitor but do not start it immediately.
- endpoint_dest_address -- Address of the destination of connection monitor endpoint (IP or domain name)
- endpoint_dest_coverage_level -- Test coverage for the endpoint
- endpoint_dest_name -- The name of the destination of connection monitor endpoint. If you are creating a V2 Connection Monitor, it's required
- endpoint_dest_resource_id -- Resource ID of the destination of connection monitor endpoint
- endpoint_dest_type -- The endpoint type
- endpoint_source_address -- Address of the source of connection monitor endpoint (IP or domain name)
- endpoint_source_coverage_level -- Test coverage for the endpoint
- endpoint_source_name -- The name of the source of connection monitor endpoint. If you are creating a V2 Connection Monitor, it's required
- endpoint_source_resource_id -- Resource ID of the source of connection monitor endpoint. If endpoint is intended to used as source, this option is required.
- endpoint_source_type -- The endpoint type
- frequency -- The frequency of test evaluation, in seconds
- http_method -- The HTTP method to use
- http_path -- The path component of the URI. For instance, "/dir1/dir2"
- http_port -- The port to connect to
- http_valid_status_codes -- Space-separated list of HTTP status codes to consider successful. For instance, "2xx 301-304 418"
- https_prefer -- Value indicating whether HTTPS is preferred over HTTP in cases where the choice is not explicit
- icmp_disable_trace_route -- Value indicating whether path evaluation with trace route should be disabled. false is default.
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- monitoring_interval -- Monitoring interval in seconds.
- notes -- Optional notes to be associated with the connection monitor
- output_type -- Connection monitor output destination type. Currently, only "Workspace" is supported
- preferred_ip_version -- The preferred IP version to use in test evaluation. The connection monitor may choose to use a different version depending on other parameters
- protocol -- The protocol to use in test evaluation
- resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>`
- source_port -- Port number from which to originate traffic.
- source_resource -- Name or ID of the resource from which to originate traffic. Currently only Virtual Machines are supported.
- tags -- space-separated tags: key[=value] [key[=value] ...]. Use '' to clear existing tags.
- tcp_disable_trace_route -- Value indicating whether path evaluation with trace route should be disabled. false is default.
- tcp_port -- The port to connect to
- tcp_port_behavior -- Destination port behavior
- test_config_name -- The name of the connection monitor test configuration. If you are creating a V2 Connection Monitor, it's required
- test_group_disable -- Value indicating whether test group is disabled. false is default.
- test_group_name -- The name of the connection monitor test group
- threshold_failed_percent -- The maximum percentage of failed checks permitted for a test to evaluate as successful
- threshold_round_trip_time -- The maximum round-trip time in milliseconds permitted for a test to evaluate as successful
- workspace_ids -- Space-separated list of ids of log analytics workspace
'''
return _call_az("az network watcher connection-monitor create", locals())
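Every wrapper in this module ends with `_call_az("az ...", locals())`, suggesting the helper turns Python keyword arguments into CLI flags. The real `_call_az` is not shown here; the following is only a hypothetical sketch of that mapping (snake_case to `--kebab-case`, `None` values dropped), with a made-up name `build_az_args`:

```python
def build_az_args(command, **kwargs):
    """Hypothetical sketch: serialize keyword arguments into az CLI flags.
    snake_case names become --kebab-case options; None values are omitted."""
    parts = command.split()
    for key, value in sorted(kwargs.items()):
        if value is None:
            continue
        parts += ["--" + key.replace("_", "-"), str(value)]
    return parts

args = build_az_args(
    "az network watcher connection-monitor show",
    location="westus2", name="cm1", resource_group=None,
)
assert args == ["az", "network", "watcher", "connection-monitor", "show",
                "--location", "westus2", "--name", "cm1"]
```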
def delete(location, name, resource_group=None):
'''
Delete a connection monitor for the given region.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- name -- Connection monitor name.
Optional Parameters:
- resource_group -- ==SUPPRESS==
'''
return _call_az("az network watcher connection-monitor delete", locals())
def show(location, name, resource_group=None):
'''
Shows a connection monitor by name.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- name -- Connection monitor name.
Optional Parameters:
- resource_group -- ==SUPPRESS==
'''
return _call_az("az network watcher connection-monitor show", locals())
def stop(location, name, resource_group=None):
'''
Stop the specified connection monitor.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- name -- Connection monitor name.
Optional Parameters:
- resource_group -- ==SUPPRESS==
'''
return _call_az("az network watcher connection-monitor stop", locals())
def start(location, name, resource_group=None):
'''
Start the specified connection monitor.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- name -- Connection monitor name.
Optional Parameters:
- resource_group -- ==SUPPRESS==
'''
return _call_az("az network watcher connection-monitor start", locals())
def query(location, name, resource_group=None):
'''
Query a snapshot of the most recent connection state of a connection monitor.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
- name -- Connection monitor name.
Optional Parameters:
- resource_group -- ==SUPPRESS==
'''
return _call_az("az network watcher connection-monitor query", locals())
def list(location, resource_group=None):
'''
List connection monitors for the given region.
Required Parameters:
- location -- Location. Values from: `az account list-locations`. You can configure the default location using `az configure --defaults location=<location>`.
Optional Parameters:
- resource_group -- ==SUPPRESS==
'''
return _call_az("az network watcher connection-monitor list", locals())
| 57.431655 | 944 | 0.749217 | 1,060 | 7,983 | 5.492453 | 0.166038 | 0.099279 | 0.020611 | 0.024734 | 0.551872 | 0.488664 | 0.46685 | 0.435933 | 0.37719 | 0.360357 | 0 | 0.002271 | 0.172742 | 7,983 | 138 | 945 | 57.847826 | 0.879316 | 0.711011 | 0 | 0 | 0 | 0 | 0.156904 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4375 | false | 0 | 0.125 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
83df9264ba2ce4b89fcbdd158f066fbadf85c82d | 8,703 | py | Python | airmozilla/manage/tests/test_event_hit_stats.py | anurag90x/airmozilla | c758fb86695e16891e45dc24a2cf1406c54f2686 | [
"BSD-3-Clause"
] | null | null | null | airmozilla/manage/tests/test_event_hit_stats.py | anurag90x/airmozilla | c758fb86695e16891e45dc24a2cf1406c54f2686 | [
"BSD-3-Clause"
] | null | null | null | airmozilla/manage/tests/test_event_hit_stats.py | anurag90x/airmozilla | c758fb86695e16891e45dc24a2cf1406c54f2686 | [
"BSD-3-Clause"
] | null | null | null | import datetime
from cStringIO import StringIO
from nose.tools import eq_, ok_
import mock
from django.utils import timezone
from django.test import TestCase
from airmozilla.manage import event_hit_stats
from airmozilla.main.models import Event, EventHitStats, Template
SAMPLE_STATISTICS_XML = (
'<?xml version="1.0"?>'
'<Response><Message/><MessageCode/><Success><StatsInfo><StatsTable>'
'<cols><col>Class</col><col>Vendor</col><col>Model</col>'
'<col>Platform</col><col>OS</col><col>Browser</col><col>Browser Ver</col>'
'<col>Hits</col></cols><rows><row><col>Desktop</col><col></col><col></col>'
'<col></col><col>Apple</col><col>Firefox</col><col>21.0</col><col>5</col>'
'</row><row><col>Desktop</col><col></col><col></col><col></col>'
'<col>Apple</col><col>Firefox</col><col>20.0</col><col>2</col></row>'
'</rows></StatsTable><Others>0</Others><TotalHits>%s</TotalHits>'
'</StatsInfo></Success></Response>'
)
def non_signal_save(obj, **kwargs):
obj.__class__.objects.filter(pk=obj.pk).update(**kwargs)
class EventHitStatsTestCase(TestCase):
fixtures = ['airmozilla/manage/tests/main_testdata.json']
@mock.patch('urllib2.urlopen')
def test_update(self, p_urlopen):
calls = []
def mocked_urlopen(request):
calls.append(1)
assert 'abc123' in request.data
return StringIO((SAMPLE_STATISTICS_XML % (10,)).strip())
p_urlopen.side_effect = mocked_urlopen
assert not EventHitStats.objects.count()
assert Event.objects.all()
assert Event.objects.archived().all()
eq_(event_hit_stats.update(), 0)
assert not EventHitStats.objects.count()
vidly_template = Template.objects.create(name='Vid.ly Template')
event, = Event.objects.archived().all()
event.template = vidly_template
event.template_environment = {'tag': 'abc123'}
event.save()
eq_(event_hit_stats.update(), 1)
assert EventHitStats.objects.count()
stat, = EventHitStats.objects.all()
eq_(stat.event, event)
eq_(stat.total_hits, 10)
eq_(stat.shortcode, 'abc123')
eq_(len(calls), 1)
# do it again and nothing should happen
eq_(event_hit_stats.update(), 0)
eq_(len(calls), 1)
# let's pretend the event is half an hour old
now = timezone.now()
half_hour_ago = now - datetime.timedelta(minutes=30)
# event.update(modified=event.modified - half_hour_ago)
# non_signal_save(event, modified=half_hour_ago)
eq_(event_hit_stats.update(), 0)
eq_(len(calls), 1)
# ...because the EventHitStats was modified too recently
non_signal_save(stat, modified=half_hour_ago)
non_signal_save(event, modified=half_hour_ago)
eq_(event_hit_stats.update(), 0)
eq_(len(calls), 1)
# it needs to be at least one hour old
hour_ago = now - datetime.timedelta(minutes=60, seconds=1)
non_signal_save(stat, modified=hour_ago)
non_signal_save(event, modified=hour_ago)
eq_(event_hit_stats.update(), 1)
eq_(len(calls), 2)
# a second time, nothing should happen
eq_(event_hit_stats.update(), 0)
eq_(len(calls), 2)
# let's pretend it's even older
day_ago = now - datetime.timedelta(hours=24, seconds=1)
non_signal_save(stat, modified=day_ago)
non_signal_save(event, modified=day_ago)
eq_(event_hit_stats.update(), 1)
eq_(len(calls), 3)
# second time, nothing should happen
eq_(event_hit_stats.update(), 0)
eq_(len(calls), 3)
# even older still
week_ago = now - datetime.timedelta(days=7, seconds=1)
non_signal_save(stat, modified=week_ago)
non_signal_save(event, modified=week_ago)
eq_(event_hit_stats.update(), 1)
eq_(len(calls), 4)
# second time, nothing should happen
eq_(event_hit_stats.update(), 0)
eq_(len(calls), 4)
@mock.patch('airmozilla.manage.event_hit_stats.logging')
@mock.patch('urllib2.urlopen')
def test_first_update_with_errors(self, p_urlopen, mock_logging):
def mocked_urlopen(request):
raise IOError('foo')
p_urlopen.side_effect = mocked_urlopen
vidly_template = Template.objects.create(name='Vid.ly Template')
event, = Event.objects.archived().all()
event.template = vidly_template
event.template_environment = {'tag': ''}
event.save()
eq_(event_hit_stats.update(), 0)
mock_logging.warn.assert_called_with(
'Event %r does not have a Vid.ly tag',
event.title
)
event.template_environment = {'tag': 'abc123'}
event.save()
self.assertRaises(
IOError,
event_hit_stats.update
)
eq_(event_hit_stats.update(swallow_errors=True), 0)
mock_logging.error.assert_called_with(
'Unable to download statistics for %r (tag: %s)',
event.title,
'abc123'
)
@mock.patch('urllib2.urlopen')
def test_update_new_tag(self, p_urlopen):
def mocked_urlopen(request):
assert 'xyz987' in request.data
return StringIO((SAMPLE_STATISTICS_XML % (10,)).strip())
p_urlopen.side_effect = mocked_urlopen
vidly_template = Template.objects.create(name='Vid.ly Template')
event, = Event.objects.archived().all()
event.template = vidly_template
event.template_environment = {'tag': 'abc123'}
event.save()
stat = EventHitStats.objects.create(
event=event,
shortcode='abc123',
total_hits=5,
)
event.template_environment = {'tag': 'xyz987'}
event.save()
# set them back two days
now = timezone.now()
days_ago = now - datetime.timedelta(hours=24 * 2, seconds=1)
non_signal_save(stat, modified=days_ago)
non_signal_save(
event, modified=days_ago + datetime.timedelta(seconds=1)
)
eq_(event_hit_stats.update(), 1)
stat = EventHitStats.objects.get(pk=stat.pk)
eq_(stat.total_hits, 10)
eq_(stat.shortcode, 'xyz987')
@mock.patch('urllib2.urlopen')
def test_update_removed_tag(self, p_urlopen):
def mocked_urlopen(request):
assert 'xyz987' in request.data
return StringIO((SAMPLE_STATISTICS_XML % (10,)).strip())
p_urlopen.side_effect = mocked_urlopen
vidly_template = Template.objects.create(name='Vid.ly Template')
event, = Event.objects.archived().all()
event.template = vidly_template
event.template_environment = {'tag': 'abc123'}
event.save()
stat = EventHitStats.objects.create(
event=event,
shortcode='abc123',
total_hits=5,
)
event.template_environment = {'foo': 'bar'}
event.save()
# set them back two days
now = timezone.now()
days_ago = now - datetime.timedelta(hours=24 * 2, seconds=1)
non_signal_save(stat, modified=days_ago)
non_signal_save(
event, modified=days_ago + datetime.timedelta(seconds=1)
)
eq_(event_hit_stats.update(), 0)
ok_(not EventHitStats.objects.all().count())
@mock.patch('airmozilla.manage.event_hit_stats.logging')
@mock.patch('urllib2.urlopen')
def test_update_with_errors(self, p_urlopen, mock_logging):
def mocked_urlopen(request):
raise IOError('boo!')
p_urlopen.side_effect = mocked_urlopen
vidly_template = Template.objects.create(name='Vid.ly Template')
event, = Event.objects.archived().all()
event.template = vidly_template
event.template_environment = {'tag': 'abc123'}
event.save()
stat = EventHitStats.objects.create(
event=event,
shortcode='abc123',
total_hits=5,
)
# set them back two days
now = timezone.now()
days_ago = now - datetime.timedelta(hours=24 * 2, seconds=1)
non_signal_save(stat, modified=days_ago)
non_signal_save(
event, modified=days_ago + datetime.timedelta(seconds=1)
)
self.assertRaises(
IOError,
event_hit_stats.update
)
eq_(event_hit_stats.update(swallow_errors=True), 0)
mock_logging.error.assert_called_with(
'Unable to download statistics for %r (tag: %s)',
event.title,
'abc123'
)
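The tests above share one mocking pattern: patching `urllib2.urlopen` with a `side_effect` function that both inspects the outgoing request and returns a file-like response. A minimal sketch of the same idea in isolation, using the modern `unittest.mock` and a plain string as a stand-in request (hypothetical names throughout, since `urllib2` itself is Python 2 only):

```python
from io import StringIO
from unittest import mock

def fetch_stats(url, urlopen):
    # stand-in for the code under test: delegates the network call to `urlopen`
    return urlopen(url).read()

def mocked_urlopen(request):
    # the side_effect can assert on what the caller actually sent...
    assert 'xyz987' in request
    # ...and then return a file-like object, just as the real urlopen would
    return StringIO('<Success/>')

patched = mock.Mock(side_effect=mocked_urlopen)
assert fetch_stats('http://example.test/stats?tag=xyz987', patched) == '<Success/>'
patched.assert_called_once()
```

Because the mock delegates to a real function, a wrong tag in the URL fails the inner assertion immediately, which is exactly how `test_update_new_tag` verifies the request body.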
| 32.35316 | 79 | 0.619327 | 1,065 | 8,703 | 4.850704 | 0.171831 | 0.031359 | 0.052846 | 0.066202 | 0.748935 | 0.731514 | 0.696283 | 0.631049 | 0.61866 | 0.61866 | 0 | 0.019917 | 0.255774 | 8,703 | 268 | 80 | 32.473881 | 0.777675 | 0.057107 | 0 | 0.628866 | 0 | 0.030928 | 0.136125 | 0.082652 | 0 | 0 | 0 | 0 | 0.06701 | 1 | 0.056701 | false | 0 | 0.041237 | 0 | 0.123711 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83e450fcc4a3f50dd35481308fb48db2f167bb8f | 31,158 | py | Python | envs/real/robot.py | cww97/visual-language-grasping | f96404c9997ef55ede07293ce319ca19a39ae5ec | [
"BSD-2-Clause"
] | 3 | 2020-05-08T11:14:21.000Z | 2021-07-09T15:30:01.000Z | envs/real/robot.py | cww97/visual-language-grasping | f96404c9997ef55ede07293ce319ca19a39ae5ec | [
"BSD-2-Clause"
] | 4 | 2020-03-10T13:24:43.000Z | 2021-07-13T06:09:03.000Z | envs/real/robot.py | cww97/visual-language-grasping | f96404c9997ef55ede07293ce319ca19a39ae5ec | [
"BSD-2-Clause"
] | 1 | 2021-04-23T01:38:46.000Z | 2021-04-23T01:38:46.000Z |
import socket
import struct
import time
import numpy as np
import utils
from .camera import Camera
from ..robot import Robot as BaseRobot
class RealRobot(BaseRobot):
def __init__(self, tcp_host_ip, tcp_port, rtc_host_ip, rtc_port, workspace_limits):
BaseRobot.__init__(self, workspace_limits)
# Connect to robot client
self.tcp_host_ip = tcp_host_ip
self.tcp_port = tcp_port
# self.tcp_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Connect as real-time client to parse state data
self.rtc_host_ip = rtc_host_ip
self.rtc_port = rtc_port
# Default home joint configuration
# self.home_joint_config = [-np.pi, -np.pi/2, np.pi/2, -np.pi/2, -np.pi/2, 0]
self.home_joint_config = [-(180.0 / 360.0) * 2 * np.pi, -(84.2 / 360.0) * 2 * np.pi, (112.8 / 360.0) * 2 * np.pi, -(119.7 / 360.0) * 2 * np.pi, -(90.0 / 360.0) * 2 * np.pi, 0.0]
# Default joint speed configuration
self.joint_acc = 8 # Safe: 1.4
self.joint_vel = 3 # Safe: 1.05
# Joint tolerance for blocking calls
self.joint_tolerance = 0.01
# Default tool speed configuration
self.tool_acc = 1.2 # Safe: 0.5
self.tool_vel = 0.25 # Safe: 0.2
# Tool pose tolerance for blocking calls
self.tool_pose_tolerance = [0.002, 0.002, 0.002, 0.01, 0.01, 0.01]
# Move robot to home pose
self.close_gripper()
self.go_home()
# Fetch RGB-D data from RealSense camera
self.camera = Camera()
self.cam_intrinsics = self.camera.intrinsics
# Load camera pose (from running calibrate.py), intrinsics and depth scale
self.cam_pose = np.loadtxt('real/camera_pose.txt', delimiter=' ')
self.cam_depth_scale = np.loadtxt('real/camera_depth_scale.txt', delimiter=' ')
def get_camera_data(self):
# Get color and depth image from ROS service
color_img, depth_img = self.camera.get_data()
# color_img = self.camera.color_data.copy()
# depth_img = self.camera.depth_data.copy()
return color_img, depth_img
@staticmethod
def parse_tcp_state_data(state_data, subpackage):
# Read package header
data_bytes = bytearray()
data_bytes.extend(state_data)
data_length = struct.unpack("!i", data_bytes[0:4])[0]
robot_message_type = data_bytes[4]
assert (robot_message_type == 16)
byte_idx = 5
# Parse sub-packages
subpackage_types = {'joint_data': 1, 'cartesian_info': 4, 'force_mode_data': 7, 'tool_data': 2}
while byte_idx < data_length:
# package_length = int.from_bytes(data_bytes[byte_idx:(byte_idx+4)], byteorder='big', signed=False)
package_length = struct.unpack("!i", data_bytes[byte_idx:(byte_idx + 4)])[0]
byte_idx += 4
package_idx = data_bytes[byte_idx]
if package_idx == subpackage_types[subpackage]:
byte_idx += 1
break
byte_idx += package_length - 4
def parse_joint_data(data_bytes, byte_idx):
actual_joint_positions = [0, 0, 0, 0, 0, 0]
target_joint_positions = [0, 0, 0, 0, 0, 0]
for joint_idx in range(6):
actual_joint_positions[joint_idx] = struct.unpack('!d', data_bytes[(byte_idx + 0):(byte_idx + 8)])[0]
target_joint_positions[joint_idx] = struct.unpack('!d', data_bytes[(byte_idx + 8):(byte_idx + 16)])[0]
byte_idx += 41
return actual_joint_positions
def parse_cartesian_info(data_bytes, byte_idx):
actual_tool_pose = [0, 0, 0, 0, 0, 0]
for pose_value_idx in range(6):
actual_tool_pose[pose_value_idx] = struct.unpack('!d', data_bytes[(byte_idx + 0):(byte_idx + 8)])[0]
byte_idx += 8
return actual_tool_pose
def parse_tool_data(data_bytes, byte_idx):
byte_idx += 2
tool_analog_input2 = struct.unpack('!d', data_bytes[(byte_idx + 0):(byte_idx + 8)])[0]
return tool_analog_input2
parse_functions = {'joint_data': parse_joint_data, 'cartesian_info': parse_cartesian_info, 'tool_data': parse_tool_data}
return parse_functions[subpackage](data_bytes, byte_idx)
def parse_rtc_state_data(self, state_data):
# Read package header
data_bytes = bytearray()
data_bytes.extend(state_data)
data_length = struct.unpack("!i", data_bytes[0:4])[0]
assert (data_length == 812)
byte_idx = 4 + 8 + 8 * 48 + 24 + 120
TCP_forces = [0, 0, 0, 0, 0, 0]
for joint_idx in range(6):
TCP_forces[joint_idx] = struct.unpack('!d', data_bytes[(byte_idx + 0):(byte_idx + 8)])[0]
byte_idx += 8
return TCP_forces
def get_instruction(self):
raise NotImplementedError()
def close_gripper(self, _async=False):
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "set_digital_out(8,True)\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
if _async:
gripper_fully_closed = True
else:
time.sleep(1.5)
gripper_fully_closed = self.check_grasp()
return gripper_fully_closed
def open_gripper(self, _async=False):
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "set_digital_out(8,False)\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
if not _async:
time.sleep(1.5)
def get_state(self):
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
state_data = self.tcp_socket.recv(2048)
self.tcp_socket.close()
return state_data
def move_to(self, tool_position, tool_orientation):
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "movel(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0)\n" % (tool_position[0], tool_position[1], tool_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.tool_acc, self.tool_vel)
self.tcp_socket.send(str.encode(tcp_command))
# Block until robot reaches target tool position
tcp_state_data = self.tcp_socket.recv(2048)
actual_tool_pose = self.parse_tcp_state_data(tcp_state_data, 'cartesian_info')
while not all([np.abs(actual_tool_pose[j] - tool_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
# [min(np.abs(actual_tool_pose[j] - tool_orientation[j-3]), np.abs(np.abs(actual_tool_pose[j] - tool_orientation[j-3]) - np.pi*2)) < self.tool_pose_tolerance[j] for j in range(3,6)]
# print([np.abs(actual_tool_pose[j] - tool_position[j]) for j in range(3)] + [min(np.abs(actual_tool_pose[j] - tool_orientation[j-3]), np.abs(np.abs(actual_tool_pose[j] - tool_orientation[j-3]) - np.pi*2)) for j in range(3,6)])
tcp_state_data = self.tcp_socket.recv(2048)
# prev_actual_tool_pose = np.asarray(actual_tool_pose).copy()
actual_tool_pose = self.parse_tcp_state_data(tcp_state_data, 'cartesian_info')
time.sleep(0.01)
self.tcp_socket.close()
def guarded_move_to(self, tool_position, tool_orientation):
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.rtc_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
self.rtc_socket.connect((self.rtc_host_ip, self.rtc_port))
# Read actual tool position
tcp_state_data = self.tcp_socket.recv(2048)
actual_tool_pose = self.parse_tcp_state_data(tcp_state_data, 'cartesian_info')
execute_success = True
# Increment every cm, check force
self.tool_acc = 0.1 # 1.2 # 0.5
while not all([np.abs(actual_tool_pose[j] - tool_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
# [min(np.abs(actual_tool_pose[j] - tool_orientation[j-3]), np.abs(np.abs(actual_tool_pose[j] - tool_orientation[j-3]) - np.pi*2)) < self.tool_pose_tolerance[j] for j in range(3,6)]
# Compute motion trajectory in 1cm increments
increment = np.asarray([(tool_position[j] - actual_tool_pose[j]) for j in range(3)])
if np.linalg.norm(increment) < 0.01:
increment_position = tool_position
else:
increment = 0.01 * increment / np.linalg.norm(increment)
increment_position = np.asarray(actual_tool_pose[0:3]) + increment
# Move to next increment position (blocking call)
tcp_command = "movel(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0)\n" % (increment_position[0], increment_position[1], increment_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.tool_acc, self.tool_vel)
self.tcp_socket.send(str.encode(tcp_command))
time_start = time.time()
tcp_state_data = self.tcp_socket.recv(2048)
actual_tool_pose = self.parse_tcp_state_data(tcp_state_data, 'cartesian_info')
while not all([np.abs(actual_tool_pose[j] - increment_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
# print([np.abs(actual_tool_pose[j] - increment_position[j]) for j in range(3)])
tcp_state_data = self.tcp_socket.recv(2048)
actual_tool_pose = self.parse_tcp_state_data(tcp_state_data, 'cartesian_info')
time_snapshot = time.time()
if time_snapshot - time_start > 1:
break
time.sleep(0.01)
# Reading TCP forces from real-time client connection
rtc_state_data = self.rtc_socket.recv(6496)
TCP_forces = self.parse_rtc_state_data(rtc_state_data)
# If TCP forces in x/y exceed 20 Newtons, stop moving
# print(TCP_forces[0:3])
if np.linalg.norm(np.asarray(TCP_forces[0:2])) > 20 or (time_snapshot - time_start) > 1:
print('Warning: contact detected! Movement halted. TCP forces: [%f, %f, %f]' % (TCP_forces[0], TCP_forces[1], TCP_forces[2]))
execute_success = False
break
time.sleep(0.01)
self.tool_acc = 1.2 # 1.2 # 0.5
self.tcp_socket.close()
self.rtc_socket.close()
return execute_success
def move_joints(self, joint_configuration):
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "movej([%f" % joint_configuration[0]
for joint_idx in range(1, 6):
tcp_command = tcp_command + (",%f" % joint_configuration[joint_idx])
tcp_command = tcp_command + "],a=%f,v=%f)\n" % (self.joint_acc, self.joint_vel)
self.tcp_socket.send(str.encode(tcp_command))
# Block until robot reaches home state
state_data = self.tcp_socket.recv(2048)
actual_joint_positions = self.parse_tcp_state_data(state_data, 'joint_data')
while not all([np.abs(actual_joint_positions[j] - joint_configuration[j]) < self.joint_tolerance for j in range(6)]):
state_data = self.tcp_socket.recv(2048)
actual_joint_positions = self.parse_tcp_state_data(state_data, 'joint_data')
time.sleep(0.01)
self.tcp_socket.close()
def go_home(self):
self.move_joints(self.home_joint_config)
# Note: must be preceded by close_gripper()
def check_grasp(self):
state_data = self.get_state()
tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
return tool_analog_input2 > 0.26
# Primitives ----------------------------------------------------------
def grasp(self, position, heightmap_rotation_angle, workspace_limits):
print('Executing: grasp at (%f, %f, %f)' % (position[0], position[1], position[2]))
# Compute tool orientation from heightmap rotation angle
grasp_orientation = [1.0, 0.0]
if heightmap_rotation_angle > np.pi:
heightmap_rotation_angle = heightmap_rotation_angle - 2 * np.pi
tool_rotation_angle = heightmap_rotation_angle / 2
tool_orientation = np.asarray([grasp_orientation[0] * np.cos(tool_rotation_angle) - grasp_orientation[1] * np.sin(tool_rotation_angle), grasp_orientation[0] * np.sin(tool_rotation_angle) + grasp_orientation[1] * np.cos(tool_rotation_angle), 0.0]) * np.pi
tool_orientation_angle = np.linalg.norm(tool_orientation)
tool_orientation_axis = tool_orientation / tool_orientation_angle
tool_orientation_rotm = utils.angle2rotm(tool_orientation_angle, tool_orientation_axis, point=None)[:3, :3]
# Compute tilted tool orientation during dropping into bin
tilt_rotm = utils.euler2rotm(np.asarray([-np.pi / 4, 0, 0]))
tilted_tool_orientation_rotm = np.dot(tilt_rotm, tool_orientation_rotm)
tilted_tool_orientation_axis_angle = utils.rotm2angle(tilted_tool_orientation_rotm)
tilted_tool_orientation = tilted_tool_orientation_axis_angle[0] * np.asarray(tilted_tool_orientation_axis_angle[1:4])
# Attempt grasp
position = np.asarray(position).copy()
position[2] = max(position[2] - 0.05, workspace_limits[2][0])
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "def process():\n"
tcp_command += " set_digital_out(8,False)\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.09)\n" % (position[0], position[1], position[2] + 0.1, tool_orientation[0], tool_orientation[1], 0.0, self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (position[0], position[1], position[2], tool_orientation[0], tool_orientation[1], 0.0, self.joint_acc * 0.1, self.joint_vel * 0.1)
tcp_command += " set_digital_out(8,True)\n"
tcp_command += "end\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
# Block until robot reaches target tool position and gripper fingers have stopped moving
state_data = self.get_state()
tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
timeout_t0 = time.time()
while True:
state_data = self.get_state()
new_tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
timeout_t1 = time.time()
if (tool_analog_input2 < 3.7 and (abs(new_tool_analog_input2 - tool_analog_input2) < 0.01) and all([np.abs(actual_tool_pose[j] - position[j]) < self.tool_pose_tolerance[j] for j in range(3)])) or (timeout_t1 - timeout_t0) > 5:
break
tool_analog_input2 = new_tool_analog_input2
# Check if gripper is open (grasp might be successful)
gripper_open = tool_analog_input2 > 0.26
# # Check if grasp is successful
# grasp_success = tool_analog_input2 > 0.26
home_position = [0.49, 0.11, 0.03]
bin_position = [0.5, -0.45, 0.1]
# If gripper is open, drop object in bin and check if grasp is successful
grasp_success = False
if gripper_open:
# Pre-compute blend radius
blend_radius = min(abs(bin_position[1] - position[1]) / 2 - 0.01, 0.2)
# Attempt placing
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "def process():\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=%f)\n" % (position[0], position[1], bin_position[2], tool_orientation[0], tool_orientation[1], 0.0, self.joint_acc, self.joint_vel, blend_radius)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=%f)\n" % (bin_position[0], bin_position[1], bin_position[2], tilted_tool_orientation[0], tilted_tool_orientation[1], tilted_tool_orientation[2], self.joint_acc, self.joint_vel, blend_radius)
tcp_command += " set_digital_out(8,False)\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.0)\n" % (home_position[0], home_position[1], home_position[2], tool_orientation[0], tool_orientation[1], 0.0, self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += "end\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
# print(tcp_command) # Debug
# Measure gripper width until robot reaches near bin location
state_data = self.get_state()
measurements = []
while True:
state_data = self.get_state()
tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
measurements.append(tool_analog_input2)
if abs(actual_tool_pose[1] - bin_position[1]) < 0.2 or all([np.abs(actual_tool_pose[j] - home_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
break
# If gripper width did not change before reaching bin location, then object is in grip and grasp is successful
if len(measurements) >= 2:
if abs(measurements[0] - measurements[1]) < 0.1:
grasp_success = True
else:
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "def process():\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.09)\n" % (position[0], position[1], position[2] + 0.1, tool_orientation[0], tool_orientation[1], 0.0, self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.0)\n" % (home_position[0], home_position[1], home_position[2], tool_orientation[0], tool_orientation[1], 0.0, self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += "end\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
# Block until robot reaches home location
state_data = self.get_state()
tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
while True:
state_data = self.get_state()
new_tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
if (abs(new_tool_analog_input2 - tool_analog_input2) < 0.01) and all([np.abs(actual_tool_pose[j] - home_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
break
tool_analog_input2 = new_tool_analog_input2
return grasp_success
def push(self, position, heightmap_rotation_angle, workspace_limits):
print('Executing: push at (%f, %f, %f)' % (position[0], position[1], position[2]))
# Compute tool orientation from heightmap rotation angle
push_orientation = [1.0, 0.0]
tool_rotation_angle = heightmap_rotation_angle / 2
tool_orientation = np.asarray([push_orientation[0] * np.cos(tool_rotation_angle) - push_orientation[1] * np.sin(tool_rotation_angle), push_orientation[0] * np.sin(tool_rotation_angle) + push_orientation[1] * np.cos(tool_rotation_angle), 0.0]) * np.pi
tool_orientation_angle = np.linalg.norm(tool_orientation)
tool_orientation_axis = tool_orientation / tool_orientation_angle
tool_orientation_rotm = utils.angle2rotm(tool_orientation_angle, tool_orientation_axis, point=None)[:3, :3]
# Compute push direction and endpoint (push to right of rotated heightmap)
push_direction = np.asarray([push_orientation[0] * np.cos(heightmap_rotation_angle) - push_orientation[1] * np.sin(heightmap_rotation_angle), push_orientation[0] * np.sin(heightmap_rotation_angle) + push_orientation[1] * np.cos(heightmap_rotation_angle), 0.0])
target_x = min(max(position[0] + push_direction[0] * 0.1, workspace_limits[0][0]), workspace_limits[0][1])
target_y = min(max(position[1] + push_direction[1] * 0.1, workspace_limits[1][0]), workspace_limits[1][1])
push_endpoint = np.asarray([target_x, target_y, position[2]])
push_direction.shape = (3, 1)
# Compute tilted tool orientation during push
tilt_axis = np.dot(utils.euler2rotm(np.asarray([0, 0, np.pi / 2]))[:3, :3], push_direction)
tilt_rotm = utils.angle2rotm(-np.pi / 8, tilt_axis, point=None)[:3, :3]
tilted_tool_orientation_rotm = np.dot(tilt_rotm, tool_orientation_rotm)
tilted_tool_orientation_axis_angle = utils.rotm2angle(tilted_tool_orientation_rotm)
tilted_tool_orientation = tilted_tool_orientation_axis_angle[0] * np.asarray(tilted_tool_orientation_axis_angle[1:4])
# Push only within workspace limits
position = np.asarray(position).copy()
position[0] = min(max(position[0], workspace_limits[0][0]), workspace_limits[0][1])
position[1] = min(max(position[1], workspace_limits[1][0]), workspace_limits[1][1])
position[2] = max(position[2] + 0.005, workspace_limits[2][0] + 0.005) # Add buffer to surface
home_position = [0.49, 0.11, 0.03]
# Attempt push
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "def process():\n"
tcp_command += " set_digital_out(8,True)\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.09)\n" % (position[0], position[1], position[2] + 0.1, tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (position[0], position[1], position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.1, self.joint_vel * 0.1)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (push_endpoint[0], push_endpoint[1], push_endpoint[2], tilted_tool_orientation[0], tilted_tool_orientation[1], tilted_tool_orientation[2], self.joint_acc * 0.1, self.joint_vel * 0.1)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.03)\n" % (position[0], position[1], position[2] + 0.1, tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (home_position[0], home_position[1], home_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += "end\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
# Block until robot reaches target tool position and gripper fingers have stopped moving
state_data = self.get_state()
while True:
state_data = self.get_state()
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
if all([np.abs(actual_tool_pose[j] - home_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
break
push_success = True
time.sleep(0.5)
return push_success
def restart_real(self):
# Compute tool orientation from heightmap rotation angle
grasp_orientation = [1.0, 0.0]
tool_rotation_angle = -np.pi / 4
tool_orientation = np.asarray([grasp_orientation[0] * np.cos(tool_rotation_angle) - grasp_orientation[1] * np.sin(tool_rotation_angle), grasp_orientation[0] * np.sin(tool_rotation_angle) + grasp_orientation[1] * np.cos(tool_rotation_angle), 0.0]) * np.pi
tool_orientation_angle = np.linalg.norm(tool_orientation)
tool_orientation_axis = tool_orientation / tool_orientation_angle
tool_orientation_rotm = utils.angle2rotm(tool_orientation_angle, tool_orientation_axis, point=None)[:3, :3]
tilt_rotm = utils.euler2rotm(np.asarray([-np.pi / 4, 0, 0]))
tilted_tool_orientation_rotm = np.dot(tilt_rotm, tool_orientation_rotm)
tilted_tool_orientation_axis_angle = utils.rotm2angle(tilted_tool_orientation_rotm)
tilted_tool_orientation = tilted_tool_orientation_axis_angle[0] * np.asarray(tilted_tool_orientation_axis_angle[1:4])
# Move to box grabbing position
box_grab_position = [0.5, -0.35, -0.12]
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "def process():\n"
tcp_command += " set_digital_out(8,False)\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.09)\n" % (box_grab_position[0], box_grab_position[1], box_grab_position[2] + 0.1, tilted_tool_orientation[0], tilted_tool_orientation[1], tilted_tool_orientation[2], self.joint_acc, self.joint_vel)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (box_grab_position[0], box_grab_position[1], box_grab_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc, self.joint_vel)
tcp_command += " set_digital_out(8,True)\n"
tcp_command += "end\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
# Block until robot reaches box grabbing position and gripper fingers have stopped moving
state_data = self.get_state()
tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
while True:
state_data = self.get_state()
new_tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
if tool_analog_input2 < 3.7 and (abs(new_tool_analog_input2 - tool_analog_input2) < 0.01) and all([np.abs(actual_tool_pose[j] - box_grab_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
break
tool_analog_input2 = new_tool_analog_input2
# Move to box release position
box_release_position = [0.5, 0.08, -0.12]
home_position = [0.49, 0.11, 0.03]
self.tcp_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.tcp_socket.connect((self.tcp_host_ip, self.tcp_port))
tcp_command = "def process():\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (box_release_position[0], box_release_position[1], box_release_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.1, self.joint_vel * 0.1)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (box_release_position[0], box_release_position[1], box_release_position[2] + 0.3, tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.02, self.joint_vel * 0.02)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.29)\n" % (box_grab_position[0] - 0.05, box_grab_position[1] + 0.1, box_grab_position[2] + 0.3, tilted_tool_orientation[0], tilted_tool_orientation[1], tilted_tool_orientation[2], self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (box_grab_position[0] - 0.05, box_grab_position[1] + 0.1, box_grab_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.5, self.joint_vel * 0.5)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (box_grab_position[0], box_grab_position[1], box_grab_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.1, self.joint_vel * 0.1)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (box_grab_position[0] + 0.05, box_grab_position[1], box_grab_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc * 0.1, self.joint_vel * 0.1)
tcp_command += " set_digital_out(8,False)\n"
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.09)\n" % (box_grab_position[0], box_grab_position[1], box_grab_position[2] + 0.1, tilted_tool_orientation[0], tilted_tool_orientation[1], tilted_tool_orientation[2], self.joint_acc, self.joint_vel)
tcp_command += " movej(p[%f,%f,%f,%f,%f,%f],a=%f,v=%f,t=0,r=0.00)\n" % (home_position[0], home_position[1], home_position[2], tool_orientation[0], tool_orientation[1], tool_orientation[2], self.joint_acc, self.joint_vel)
tcp_command += "end\n"
self.tcp_socket.send(str.encode(tcp_command))
self.tcp_socket.close()
# Block until robot reaches home position
state_data = self.get_state()
tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
while True:
state_data = self.get_state()
new_tool_analog_input2 = self.parse_tcp_state_data(state_data, 'tool_data')
actual_tool_pose = self.parse_tcp_state_data(state_data, 'cartesian_info')
if tool_analog_input2 > 3.0 and (abs(new_tool_analog_input2 - tool_analog_input2) < 0.01) and all([np.abs(actual_tool_pose[j] - home_position[j]) < self.tool_pose_tolerance[j] for j in range(3)]):
break
tool_analog_input2 = new_tool_analog_input2
# def place(self, position, orientation, workspace_limits):
# print('Executing: place at (%f, %f, %f)' % (position[0], position[1], position[2]))
# # Attempt placing
# position[2] = max(position[2], workspace_limits[2][0])
# self.move_to([position[0], position[1], position[2] + 0.2], orientation)
# self.move_to([position[0], position[1], position[2] + 0.05], orientation)
# self.tool_acc = 1 # 0.05
# self.tool_vel = 0.02 # 0.02
# self.move_to([position[0], position[1], position[2]], orientation)
# self.open_gripper()
# self.tool_acc = 1 # 0.5
# self.tool_vel = 0.2 # 0.2
# self.move_to([position[0], position[1], position[2] + 0.2], orientation)
# self.close_gripper()
# self.go_home()
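The byte-level logic in `parse_tcp_state_data` can be exercised without a robot by feeding it a synthetic packet. The sketch below is a trimmed re-implementation of the 'tool_data' path only; the packet layout is reconstructed from the parsing code above, not from the UR wire specification, so treat the builder as an illustration rather than a faithful protocol encoder:

```python
import struct

def make_tool_data_packet(analog_input2):
    """Build a synthetic robot-state packet (message type 16) carrying a
    single tool-data subpackage (subpackage type 2)."""
    # subpackage: 4-byte length, 1-byte type, 2 skipped bytes, 8-byte double
    body = struct.pack('!iB', 15, 2) + b'\x00\x00' + struct.pack('!d', analog_input2)
    # header: 4-byte total length, 1-byte robot message type (16)
    return struct.pack('!iB', 5 + len(body), 16) + body

def parse_tool_data(state_data):
    """Trimmed mirror of RealRobot.parse_tcp_state_data(..., 'tool_data')."""
    data = bytearray(state_data)
    data_length = struct.unpack('!i', data[0:4])[0]
    assert data[4] == 16              # robot message type
    idx = 5
    while idx < data_length:
        pkg_len = struct.unpack('!i', data[idx:idx + 4])[0]
        idx += 4
        if data[idx] == 2:            # tool-data subpackage found
            idx += 1
            break
        idx += pkg_len - 4            # skip this subpackage entirely
    idx += 2                          # skip two leading bytes of the payload
    return struct.unpack('!d', data[idx:idx + 8])[0]
```

Round-tripping a value through the builder and the parser (`parse_tool_data(make_tool_data_packet(0.3))` returns `0.3`) confirms the offsets agree with the class method's bookkeeping.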
| 59.12334 | 292 | 0.65277 | 4,746 | 31,158 | 4.016646 | 0.067004 | 0.013429 | 0.015737 | 0.015108 | 0.754236 | 0.719561 | 0.694382 | 0.666789 | 0.652153 | 0.631485 | 0 | 0.038233 | 0.210091 | 31,158 | 526 | 293 | 59.235741 | 0.736307 | 0.129565 | 0 | 0.521978 | 0 | 0.065934 | 0.079692 | 0.053461 | 0 | 0 | 0 | 0 | 0.005495 | 1 | 0.052198 | false | 0 | 0.019231 | 0 | 0.107143 | 0.008242 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
83e7df11f7a30a1a6ea7f7518c0377a94ddead08 | 1,806 | py | Python | Qualification/no_parking/main.py | galavasteg/YXProgChallenge2019 | 4dcdbdcc92f1efec89bdc4c7009024cf66800750 | [
"MIT"
] | null | null | null | Qualification/no_parking/main.py | galavasteg/YXProgChallenge2019 | 4dcdbdcc92f1efec89bdc4c7009024cf66800750 | [
"MIT"
] | null | null | null | Qualification/no_parking/main.py | galavasteg/YXProgChallenge2019 | 4dcdbdcc92f1efec89bdc4c7009024cf66800750 | [
"MIT"
] | null | null | null |
"""
In one city, cars were forbidden to stop except to
pick up a passenger. And a passenger will not agree
to wait more than 3 minutes. In this city a
pedestrian orders a taxi to point X and specifies
an interval of 180 seconds. The driver must arrive
exactly within that interval. Arriving earlier is
useless, because waiting for the passenger is not
allowed. Arriving too late means the angry
passenger cancels the order.

Because of these restrictions, only Z drivers are
left in the city, each of whom at the start of the
task is located at some vertex of the road-traffic
graph. The dispatch system must assign the best of
the drivers who can arrive within the specified
interval. The best driver is the one who arrives
as close as possible to the start of the interval.
If there are several such drivers, any of them.

For each driver, determine whether he can arrive
within the specified interval and, if so, the
earliest moment within that interval at which he
can arrive.

Formal description

Given:
1. A directed graph G with N vertices and K edges,
vertices numbered from 0 to N-1, 0 ≤ N ≤ 10^4,
0 ≤ K ≤ 10^4. Each edge carries a travel time
in it - an integer W, 10 ≤ W ≤ 10^4.
2. The position of the order on the graph, ID_target
3. Z driver positions on the graph, ID_sourcez,
1 ≤ Z ≤ 10^4
4. A time t0, 0 ≤ t0 ≤ 600 - an integer

For each driver, find a t_min such that:
1. there exists a route from the driver's ID_sourcez
to ID_target on which the driver arrives at t_min
2. t_min ∈ [t0; t0+180]
3. and it is the earliest possible t_min: t_min ≤ t_i
for any t_i satisfying items 1 and 2;
4. Or show that no such t_min exists
"""
from pathlib import Path
scriptDir = Path.cwd()
def main():
pass
if __name__ == '__main__':
main()
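`main()` above is left as a stub. A minimal sketch of one possible approach (an assumption, not the author's solution): run Dijkstra once from `ID_target` over the reversed graph to get every driver's earliest arrival time, then test it against the [t0; t0+180] window. The sketch does not model deliberately lengthening a route to delay arrival, so a driver whose shortest time undershoots t0 is reported at t0 only under that simplifying assumption.

```python
import heapq

def earliest_arrivals(n, edges, target):
    """Dijkstra from `target` over the reversed graph: dist[v] is the
    shortest driving time from vertex v to the target."""
    radj = [[] for _ in range(n)]
    for u, v, w in edges:              # directed edge u -> v, travel time w
        radj[v].append((u, w))
    dist = [float('inf')] * n
    dist[target] = 0
    pq = [(0, target)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue                   # stale queue entry
        for u, w in radj[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    return dist

def t_min_for_driver(dist_to_target, t0, window=180):
    """Earliest arrival inside [t0, t0 + window], or None.

    Simplifying assumption: a driver faster than t0 can pad his route to
    arrive exactly at t0; detour feasibility is not actually checked."""
    if dist_to_target > t0 + window:
        return None                    # even the fastest route misses the window
    return max(dist_to_target, t0)
```

A single reverse-graph Dijkstra costs O((N + K) log N) and serves all Z drivers at once, instead of Z forward searches.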
| 30.610169 | 52 | 0.769103 | 304 | 1,806 | 4.542763 | 0.588816 | 0.017379 | 0.011586 | 0.037654 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02977 | 0.181617 | 1,806 | 58 | 53 | 31.137931 | 0.896482 | 0.93134 | 0 | 0 | 0 | 0 | 0.069565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.166667 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
83ee2bc08be92b2e2f6dc79b6f57d5f107b6c5a7 | 361 | py | Python | src/combustion/lightning/metrics/__init__.py | TidalPaladin/combustion | 69b9a2b9baf90b81ed9098b4f0391f5c15efaee7 | [
"Apache-2.0"
] | 3 | 2020-07-09T22:18:19.000Z | 2021-11-08T03:47:19.000Z | src/combustion/lightning/metrics/__init__.py | TidalPaladin/combustion | 69b9a2b9baf90b81ed9098b4f0391f5c15efaee7 | [
"Apache-2.0"
] | 15 | 2020-06-12T21:48:59.000Z | 2022-02-05T18:41:50.000Z | src/combustion/lightning/metrics/__init__.py | TidalPaladin/combustion | 69b9a2b9baf90b81ed9098b4f0391f5c15efaee7 | [
"Apache-2.0"
] | 1 | 2021-02-15T20:06:16.000Z | 2021-02-15T20:06:16.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from .auroc import BoxAUROC
from .average_precision import BoxAveragePrecision
from .confidence import BootstrapMixin
from .entropy import Entropy
from .uncertainty import ECE, UCE, ErrorAtUncertainty
__all__ = ["BoxAveragePrecision", "BoxAUROC", "BootstrapMixin", "Entropy", "ECE", "UCE", "ErrorAtUncertainty"]
| 30.083333 | 110 | 0.764543 | 38 | 361 | 7.131579 | 0.578947 | 0.04428 | 0.177122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003125 | 0.113573 | 361 | 11 | 111 | 32.818182 | 0.84375 | 0.116343 | 0 | 0 | 0 | 0 | 0.227129 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
83f06d01eec1969a06fa163324b2ddf8e8cd172e | 507 | py | Python | fam_assignment/video_collector/models.py | deepanshnagaria/Youtube-Video-Collector | 1ee5dac77ab0386c0d4e62f049d1f37af94bba7c | [
"MIT"
] | 1 | 2020-10-13T11:01:45.000Z | 2020-10-13T11:01:45.000Z | fam_assignment/video_collector/models.py | deepanshnagaria/Youtube-Video-Collector | 1ee5dac77ab0386c0d4e62f049d1f37af94bba7c | [
"MIT"
] | null | null | null | fam_assignment/video_collector/models.py | deepanshnagaria/Youtube-Video-Collector | 1ee5dac77ab0386c0d4e62f049d1f37af94bba7c | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib.postgres.fields import ArrayField
class Video(models.Model):
title = models.CharField(max_length=100)
description = models.CharField(max_length=1000)
published_at = models.DateTimeField()
urls = ArrayField(
models.URLField(max_length=100, blank=True),
        blank=True,
        null=True,
)
channelTitle = models.CharField(max_length=100)
def __str__(self):
return self.title | 28.166667 | 53 | 0.706114 | 61 | 507 | 5.721311 | 0.540984 | 0.103152 | 0.154728 | 0.206304 | 0.154728 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0.205128 | 507 | 18 | 54 | 28.166667 | 0.833747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.2 | 0.066667 | 0.733333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
8600657de42e110981abec23d5d5d28989baaafc | 183 | py | Python | spam.py | lucasnogueiragomes/Bot-Spam-Whatsapp | b5bffefab4094cc2a7099e4899de49c4866fe289 | [
"MIT"
] | 1 | 2021-11-21T03:57:15.000Z | 2021-11-21T03:57:15.000Z | spam.py | lucasnogueiragomes/Bot-Spam-Whatsapp | b5bffefab4094cc2a7099e4899de49c4866fe289 | [
"MIT"
] | null | null | null | spam.py | lucasnogueiragomes/Bot-Spam-Whatsapp | b5bffefab4094cc2a7099e4899de49c4866fe289 | [
"MIT"
] | null | null | null | import pyautogui, time
# Give the user 10 seconds to focus the target chat window.
time.sleep(10)
# Type each line of the script as its own message; the context manager
# closes the file when done (the original never closed it).
with open('roteiro.txt') as texto:
    for frase in texto:
        pyautogui.typewrite(frase)
        pyautogui.press('enter')
| 26.142857 | 36 | 0.579235 | 20 | 183 | 5.3 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016 | 0.31694 | 183 | 7 | 36 | 26.142857 | 0.832 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
8601472200cfb4131cb1477e3f96cef8536f43a9 | 2,301 | py | Python | cloudrail/knowledge/context/aws/efs/efs_mount_target.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | cloudrail/knowledge/context/aws/efs/efs_mount_target.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | cloudrail/knowledge/context/aws/efs/efs_mount_target.py | my-devops-info/cloudrail-knowledge | b7c1bbd6fe1faeb79c105a01c0debbe24d031a0e | [
"MIT"
] | null | null | null | from typing import List, Optional
from dataclasses import dataclass
from cloudrail.knowledge.context.aws.networking_config.network_configuration import NetworkConfiguration
from cloudrail.knowledge.context.aws.networking_config.network_entity import NetworkEntity
from cloudrail.knowledge.context.aws.service_name import AwsServiceName
@dataclass
class MountTargetSecurityGroups:
security_groups_ids: List[str]
mount_target_id: str
class EfsMountTarget(NetworkEntity):
"""
Attributes:
efs_id: The ID of the EFS the mount target belongs to.
mount_target_id: The ID of the mount target.
eni_id: The ID of the elastic network interface the target is using.
subnet_id: The ID of the subnet the EFS Mount Target is on.
security_groups_ids: The security groups protecting the target.
"""
def __init__(self,
account: str,
region: str,
efs_id: str,
mount_target_id: str,
eni_id: str,
subnet_id: str,
security_groups_ids: Optional[List[str]]):
super().__init__(mount_target_id, account, region, AwsServiceName.AWS_EFS_MOUNT_TARGET)
self.mount_target_id: str = mount_target_id
self.efs_id: str = efs_id
self.eni_id: str = eni_id
self.subnet_id: str = subnet_id
self.security_groups_ids: Optional[List[str]] = security_groups_ids
def get_keys(self) -> List[str]:
return [self.mount_target_id]
def get_name(self) -> str:
return f'mount target with id: {self.mount_target_id}'
def get_arn(self) -> str:
pass
def get_type(self, is_plural: bool = False) -> str:
if not is_plural:
return 'EFS mount target'
else:
return 'EFS mount targets'
def get_all_network_configurations(self) -> Optional[List[NetworkConfiguration]]:
return [NetworkConfiguration(False, self.security_groups_ids, [self.subnet_id])]
def get_cloud_resource_url(self) -> str:
return '{0}efs/home?region={1}#/file-systems/{2}?tabId=mounts' \
.format(self.AWS_CONSOLE_URL, self.region, self.efs_id)
@property
def is_tagable(self) -> bool:
return False
| 35.953125 | 104 | 0.664059 | 295 | 2,301 | 4.932203 | 0.281356 | 0.105842 | 0.071478 | 0.024742 | 0.246048 | 0.151203 | 0.075601 | 0.075601 | 0 | 0 | 0 | 0.00175 | 0.255106 | 2,301 | 63 | 105 | 36.52381 | 0.847141 | 0.140808 | 0 | 0 | 0 | 0 | 0.067814 | 0.039124 | 0 | 0 | 0 | 0 | 0 | 1 | 0.186047 | false | 0.023256 | 0.116279 | 0.116279 | 0.55814 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 2 |
f7a321a5c6b8c025d43176f17113ca2fbebab2a7 | 37,316 | py | Python | google/cloud/datastore/_generated/datastore_pb2.py | Ofekmeister/google-cloud-python | 07dd51bc447beca67b8da1c66f1dfb944ef70418 | [
"Apache-2.0"
] | 1 | 2018-12-09T01:32:50.000Z | 2018-12-09T01:32:50.000Z | google/cloud/datastore/_generated/datastore_pb2.py | Ofekmeister/google-cloud-python | 07dd51bc447beca67b8da1c66f1dfb944ef70418 | [
"Apache-2.0"
] | null | null | null | google/cloud/datastore/_generated/datastore_pb2.py | Ofekmeister/google-cloud-python | 07dd51bc447beca67b8da1c66f1dfb944ef70418 | [
"Apache-2.0"
] | 1 | 2020-09-04T00:05:30.000Z | 2020-09-04T00:05:30.000Z | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/datastore/v1/datastore.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2
from google.cloud.datastore._generated import entity_pb2 as google_dot_datastore_dot_v1_dot_entity__pb2
from google.cloud.datastore._generated import query_pb2 as google_dot_datastore_dot_v1_dot_query__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='google/datastore/v1/datastore.proto',
package='google.datastore.v1',
syntax='proto3',
  serialized_pb=_b('\n#google/datastore/v1/datastore.proto\x12\x13google.datastore.v1\x1a\x1cgoogle/api/annotations.proto\x1a google/datastore/v1/entity.proto\x1a\x1fgoogle/datastore/v1/query.proto\"\x83\x01\n\rLookupRequest\x12\x12\n\nproject_id\x18\x08 \x01(\t\x12\x36\n\x0cread_options\x18\x01 \x01(\x0b\x32 .google.datastore.v1.ReadOptions\x12&\n\x04keys\x18\x03 \x03(\x0b\x32\x18.google.datastore.v1.Key\"\xa2\x01\n\x0eLookupResponse\x12\x30\n\x05\x66ound\x18\x01 \x03(\x0b\x32!.google.datastore.v1.EntityResult\x12\x32\n\x07missing\x18\x02 \x03(\x0b\x32!.google.datastore.v1.EntityResult\x12*\n\x08\x64\x65\x66\x65rred\x18\x03 \x03(\x0b\x32\x18.google.datastore.v1.Key\"\x84\x02\n\x0fRunQueryRequest\x12\x12\n\nproject_id\x18\x08 \x01(\t\x12\x36\n\x0cpartition_id\x18\x02 \x01(\x0b\x32 .google.datastore.v1.PartitionId\x12\x36\n\x0cread_options\x18\x01 \x01(\x0b\x32 .google.datastore.v1.ReadOptions\x12+\n\x05query\x18\x03 \x01(\x0b\x32\x1a.google.datastore.v1.QueryH\x00\x12\x32\n\tgql_query\x18\x07 \x01(\x0b\x32\x1d.google.datastore.v1.GqlQueryH\x00\x42\x0c\n\nquery_type\"s\n\x10RunQueryResponse\x12\x34\n\x05\x62\x61tch\x18\x01 \x01(\x0b\x32%.google.datastore.v1.QueryResultBatch\x12)\n\x05query\x18\x02 \x01(\x0b\x32\x1a.google.datastore.v1.Query\"-\n\x17\x42\x65ginTransactionRequest\x12\x12\n\nproject_id\x18\x08 \x01(\t\"/\n\x18\x42\x65ginTransactionResponse\x12\x13\n\x0btransaction\x18\x01 \x01(\x0c\":\n\x0fRollbackRequest\x12\x12\n\nproject_id\x18\x08 \x01(\t\x12\x13\n\x0btransaction\x18\x01 \x01(\x0c\"\x12\n\x10RollbackResponse\"\x83\x02\n\rCommitRequest\x12\x12\n\nproject_id\x18\x08 \x01(\t\x12\x35\n\x04mode\x18\x05 \x01(\x0e\x32\'.google.datastore.v1.CommitRequest.Mode\x12\x15\n\x0btransaction\x18\x01 \x01(\x0cH\x00\x12\x30\n\tmutations\x18\x06 '
  '\x03(\x0b\x32\x1d.google.datastore.v1.Mutation\"F\n\x04Mode\x12\x14\n\x10MODE_UNSPECIFIED\x10\x00\x12\x11\n\rTRANSACTIONAL\x10\x01\x12\x15\n\x11NON_TRANSACTIONAL\x10\x02\x42\x16\n\x14transaction_selector\"f\n\x0e\x43ommitResponse\x12=\n\x10mutation_results\x18\x03 \x03(\x0b\x32#.google.datastore.v1.MutationResult\x12\x15\n\rindex_updates\x18\x04 \x01(\x05\"P\n\x12\x41llocateIdsRequest\x12\x12\n\nproject_id\x18\x08 \x01(\t\x12&\n\x04keys\x18\x01 \x03(\x0b\x32\x18.google.datastore.v1.Key\"=\n\x13\x41llocateIdsResponse\x12&\n\x04keys\x18\x01 \x03(\x0b\x32\x18.google.datastore.v1.Key\"\x87\x02\n\x08Mutation\x12-\n\x06insert\x18\x04 \x01(\x0b\x32\x1b.google.datastore.v1.EntityH\x00\x12-\n\x06update\x18\x05 \x01(\x0b\x32\x1b.google.datastore.v1.EntityH\x00\x12-\n\x06upsert\x18\x06 \x01(\x0b\x32\x1b.google.datastore.v1.EntityH\x00\x12*\n\x06\x64\x65lete\x18\x07 \x01(\x0b\x32\x18.google.datastore.v1.KeyH\x00\x12\x16\n\x0c\x62\x61se_version\x18\x08 \x01(\x03H\x01\x42\x0b\n\toperationB\x1d\n\x1b\x63onflict_detection_strategy\"c\n\x0eMutationResult\x12%\n\x03key\x18\x03 \x01(\x0b\x32\x18.google.datastore.v1.Key\x12\x0f\n\x07version\x18\x04 \x01(\x03\x12\x19\n\x11\x63onflict_detected\x18\x05 \x01(\x08\"\xd5\x01\n\x0bReadOptions\x12L\n\x10read_consistency\x18\x01 \x01(\x0e\x32\x30.google.datastore.v1.ReadOptions.ReadConsistencyH\x00\x12\x15\n\x0btransaction\x18\x02 \x01(\x0cH\x00\"M\n\x0fReadConsistency\x12 \n\x1cREAD_CONSISTENCY_UNSPECIFIED\x10\x00\x12\n\n\x06STRONG\x10\x01\x12\x0c\n\x08\x45VENTUAL\x10\x02\x42\x12\n\x10\x63onsistency_type2\xdb\x06\n\tDatastore\x12~\n\x06Lookup\x12\".google.datastore.v1.LookupRequest\x1a#.google.datastore.v1.LookupResponse\"+\x82\xd3\xe4\x93\x02%\" '
  '/v1/projects/{project_id}:lookup:\x01*\x12\x86\x01\n\x08RunQuery\x12$.google.datastore.v1.RunQueryRequest\x1a%.google.datastore.v1.RunQueryResponse\"-\x82\xd3\xe4\x93\x02\'\"\"/v1/projects/{project_id}:runQuery:\x01*\x12\xa6\x01\n\x10\x42\x65ginTransaction\x12,.google.datastore.v1.BeginTransactionRequest\x1a-.google.datastore.v1.BeginTransactionResponse\"5\x82\xd3\xe4\x93\x02/\"*/v1/projects/{project_id}:beginTransaction:\x01*\x12~\n\x06\x43ommit\x12\".google.datastore.v1.CommitRequest\x1a#.google.datastore.v1.CommitResponse\"+\x82\xd3\xe4\x93\x02%\" /v1/projects/{project_id}:commit:\x01*\x12\x86\x01\n\x08Rollback\x12$.google.datastore.v1.RollbackRequest\x1a%.google.datastore.v1.RollbackResponse\"-\x82\xd3\xe4\x93\x02\'\"\"/v1/projects/{project_id}:rollback:\x01*\x12\x92\x01\n\x0b\x41llocateIds\x12\'.google.datastore.v1.AllocateIdsRequest\x1a(.google.datastore.v1.AllocateIdsResponse\"0\x82\xd3\xe4\x93\x02*\"%/v1/projects/{project_id}:allocateIds:\x01*B+\n\x17\x63om.google.datastore.v1B\x0e\x44\x61tastoreProtoP\x01\x62\x06proto3')
,
dependencies=[google_dot_api_dot_annotations__pb2.DESCRIPTOR,google_dot_datastore_dot_v1_dot_entity__pb2.DESCRIPTOR,google_dot_datastore_dot_v1_dot_query__pb2.DESCRIPTOR,])
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_COMMITREQUEST_MODE = _descriptor.EnumDescriptor(
name='Mode',
full_name='google.datastore.v1.CommitRequest.Mode',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='MODE_UNSPECIFIED', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='TRANSACTIONAL', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='NON_TRANSACTIONAL', index=2, number=2,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=1178,
serialized_end=1248,
)
_sym_db.RegisterEnumDescriptor(_COMMITREQUEST_MODE)
_READOPTIONS_READCONSISTENCY = _descriptor.EnumDescriptor(
name='ReadConsistency',
full_name='google.datastore.v1.ReadOptions.ReadConsistency',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='READ_CONSISTENCY_UNSPECIFIED', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='STRONG', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='EVENTUAL', index=2, number=2,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=2007,
serialized_end=2084,
)
_sym_db.RegisterEnumDescriptor(_READOPTIONS_READCONSISTENCY)
_LOOKUPREQUEST = _descriptor.Descriptor(
name='LookupRequest',
full_name='google.datastore.v1.LookupRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='project_id', full_name='google.datastore.v1.LookupRequest.project_id', index=0,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='read_options', full_name='google.datastore.v1.LookupRequest.read_options', index=1,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='keys', full_name='google.datastore.v1.LookupRequest.keys', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=158,
serialized_end=289,
)
_LOOKUPRESPONSE = _descriptor.Descriptor(
name='LookupResponse',
full_name='google.datastore.v1.LookupResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='found', full_name='google.datastore.v1.LookupResponse.found', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='missing', full_name='google.datastore.v1.LookupResponse.missing', index=1,
number=2, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='deferred', full_name='google.datastore.v1.LookupResponse.deferred', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=292,
serialized_end=454,
)
_RUNQUERYREQUEST = _descriptor.Descriptor(
name='RunQueryRequest',
full_name='google.datastore.v1.RunQueryRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='project_id', full_name='google.datastore.v1.RunQueryRequest.project_id', index=0,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='partition_id', full_name='google.datastore.v1.RunQueryRequest.partition_id', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='read_options', full_name='google.datastore.v1.RunQueryRequest.read_options', index=2,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='query', full_name='google.datastore.v1.RunQueryRequest.query', index=3,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='gql_query', full_name='google.datastore.v1.RunQueryRequest.gql_query', index=4,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='query_type', full_name='google.datastore.v1.RunQueryRequest.query_type',
index=0, containing_type=None, fields=[]),
],
serialized_start=457,
serialized_end=717,
)
_RUNQUERYRESPONSE = _descriptor.Descriptor(
name='RunQueryResponse',
full_name='google.datastore.v1.RunQueryResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='batch', full_name='google.datastore.v1.RunQueryResponse.batch', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='query', full_name='google.datastore.v1.RunQueryResponse.query', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=719,
serialized_end=834,
)
_BEGINTRANSACTIONREQUEST = _descriptor.Descriptor(
name='BeginTransactionRequest',
full_name='google.datastore.v1.BeginTransactionRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='project_id', full_name='google.datastore.v1.BeginTransactionRequest.project_id', index=0,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=836,
serialized_end=881,
)
_BEGINTRANSACTIONRESPONSE = _descriptor.Descriptor(
name='BeginTransactionResponse',
full_name='google.datastore.v1.BeginTransactionResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='transaction', full_name='google.datastore.v1.BeginTransactionResponse.transaction', index=0,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=883,
serialized_end=930,
)
_ROLLBACKREQUEST = _descriptor.Descriptor(
name='RollbackRequest',
full_name='google.datastore.v1.RollbackRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='project_id', full_name='google.datastore.v1.RollbackRequest.project_id', index=0,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='transaction', full_name='google.datastore.v1.RollbackRequest.transaction', index=1,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=932,
serialized_end=990,
)
_ROLLBACKRESPONSE = _descriptor.Descriptor(
name='RollbackResponse',
full_name='google.datastore.v1.RollbackResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=992,
serialized_end=1010,
)
_COMMITREQUEST = _descriptor.Descriptor(
name='CommitRequest',
full_name='google.datastore.v1.CommitRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='project_id', full_name='google.datastore.v1.CommitRequest.project_id', index=0,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='mode', full_name='google.datastore.v1.CommitRequest.mode', index=1,
number=5, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='transaction', full_name='google.datastore.v1.CommitRequest.transaction', index=2,
number=1, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='mutations', full_name='google.datastore.v1.CommitRequest.mutations', index=3,
number=6, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_COMMITREQUEST_MODE,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='transaction_selector', full_name='google.datastore.v1.CommitRequest.transaction_selector',
index=0, containing_type=None, fields=[]),
],
serialized_start=1013,
serialized_end=1272,
)
_COMMITRESPONSE = _descriptor.Descriptor(
name='CommitResponse',
full_name='google.datastore.v1.CommitResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='mutation_results', full_name='google.datastore.v1.CommitResponse.mutation_results', index=0,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='index_updates', full_name='google.datastore.v1.CommitResponse.index_updates', index=1,
number=4, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1274,
serialized_end=1376,
)
_ALLOCATEIDSREQUEST = _descriptor.Descriptor(
name='AllocateIdsRequest',
full_name='google.datastore.v1.AllocateIdsRequest',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='project_id', full_name='google.datastore.v1.AllocateIdsRequest.project_id', index=0,
number=8, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='keys', full_name='google.datastore.v1.AllocateIdsRequest.keys', index=1,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1378,
serialized_end=1458,
)
_ALLOCATEIDSRESPONSE = _descriptor.Descriptor(
name='AllocateIdsResponse',
full_name='google.datastore.v1.AllocateIdsResponse',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='keys', full_name='google.datastore.v1.AllocateIdsResponse.keys', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1460,
serialized_end=1521,
)
_MUTATION = _descriptor.Descriptor(
name='Mutation',
full_name='google.datastore.v1.Mutation',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='insert', full_name='google.datastore.v1.Mutation.insert', index=0,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='update', full_name='google.datastore.v1.Mutation.update', index=1,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='upsert', full_name='google.datastore.v1.Mutation.upsert', index=2,
number=6, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='delete', full_name='google.datastore.v1.Mutation.delete', index=3,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='base_version', full_name='google.datastore.v1.Mutation.base_version', index=4,
number=8, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='operation', full_name='google.datastore.v1.Mutation.operation',
index=0, containing_type=None, fields=[]),
_descriptor.OneofDescriptor(
name='conflict_detection_strategy', full_name='google.datastore.v1.Mutation.conflict_detection_strategy',
index=1, containing_type=None, fields=[]),
],
serialized_start=1524,
serialized_end=1787,
)
_MUTATIONRESULT = _descriptor.Descriptor(
name='MutationResult',
full_name='google.datastore.v1.MutationResult',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='key', full_name='google.datastore.v1.MutationResult.key', index=0,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='version', full_name='google.datastore.v1.MutationResult.version', index=1,
number=4, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='conflict_detected', full_name='google.datastore.v1.MutationResult.conflict_detected', index=2,
number=5, type=8, cpp_type=7, label=1,
has_default_value=False, default_value=False,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=1789,
serialized_end=1888,
)
_READOPTIONS = _descriptor.Descriptor(
name='ReadOptions',
full_name='google.datastore.v1.ReadOptions',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='read_consistency', full_name='google.datastore.v1.ReadOptions.read_consistency', index=0,
number=1, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='transaction', full_name='google.datastore.v1.ReadOptions.transaction', index=1,
number=2, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=_b(""),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
_READOPTIONS_READCONSISTENCY,
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
_descriptor.OneofDescriptor(
name='consistency_type', full_name='google.datastore.v1.ReadOptions.consistency_type',
index=0, containing_type=None, fields=[]),
],
serialized_start=1891,
serialized_end=2104,
)
_LOOKUPREQUEST.fields_by_name['read_options'].message_type = _READOPTIONS
_LOOKUPREQUEST.fields_by_name['keys'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._KEY
_LOOKUPRESPONSE.fields_by_name['found'].message_type = google_dot_datastore_dot_v1_dot_query__pb2._ENTITYRESULT
_LOOKUPRESPONSE.fields_by_name['missing'].message_type = google_dot_datastore_dot_v1_dot_query__pb2._ENTITYRESULT
_LOOKUPRESPONSE.fields_by_name['deferred'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._KEY
_RUNQUERYREQUEST.fields_by_name['partition_id'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._PARTITIONID
_RUNQUERYREQUEST.fields_by_name['read_options'].message_type = _READOPTIONS
_RUNQUERYREQUEST.fields_by_name['query'].message_type = google_dot_datastore_dot_v1_dot_query__pb2._QUERY
_RUNQUERYREQUEST.fields_by_name['gql_query'].message_type = google_dot_datastore_dot_v1_dot_query__pb2._GQLQUERY
_RUNQUERYREQUEST.oneofs_by_name['query_type'].fields.append(
_RUNQUERYREQUEST.fields_by_name['query'])
_RUNQUERYREQUEST.fields_by_name['query'].containing_oneof = _RUNQUERYREQUEST.oneofs_by_name['query_type']
_RUNQUERYREQUEST.oneofs_by_name['query_type'].fields.append(
_RUNQUERYREQUEST.fields_by_name['gql_query'])
_RUNQUERYREQUEST.fields_by_name['gql_query'].containing_oneof = _RUNQUERYREQUEST.oneofs_by_name['query_type']
_RUNQUERYRESPONSE.fields_by_name['batch'].message_type = google_dot_datastore_dot_v1_dot_query__pb2._QUERYRESULTBATCH
_RUNQUERYRESPONSE.fields_by_name['query'].message_type = google_dot_datastore_dot_v1_dot_query__pb2._QUERY
_COMMITREQUEST.fields_by_name['mode'].enum_type = _COMMITREQUEST_MODE
_COMMITREQUEST.fields_by_name['mutations'].message_type = _MUTATION
_COMMITREQUEST_MODE.containing_type = _COMMITREQUEST
_COMMITREQUEST.oneofs_by_name['transaction_selector'].fields.append(
_COMMITREQUEST.fields_by_name['transaction'])
_COMMITREQUEST.fields_by_name['transaction'].containing_oneof = _COMMITREQUEST.oneofs_by_name['transaction_selector']
_COMMITRESPONSE.fields_by_name['mutation_results'].message_type = _MUTATIONRESULT
_ALLOCATEIDSREQUEST.fields_by_name['keys'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._KEY
_ALLOCATEIDSRESPONSE.fields_by_name['keys'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._KEY
_MUTATION.fields_by_name['insert'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._ENTITY
_MUTATION.fields_by_name['update'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._ENTITY
_MUTATION.fields_by_name['upsert'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._ENTITY
_MUTATION.fields_by_name['delete'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._KEY
_MUTATION.oneofs_by_name['operation'].fields.append(
_MUTATION.fields_by_name['insert'])
_MUTATION.fields_by_name['insert'].containing_oneof = _MUTATION.oneofs_by_name['operation']
_MUTATION.oneofs_by_name['operation'].fields.append(
_MUTATION.fields_by_name['update'])
_MUTATION.fields_by_name['update'].containing_oneof = _MUTATION.oneofs_by_name['operation']
_MUTATION.oneofs_by_name['operation'].fields.append(
_MUTATION.fields_by_name['upsert'])
_MUTATION.fields_by_name['upsert'].containing_oneof = _MUTATION.oneofs_by_name['operation']
_MUTATION.oneofs_by_name['operation'].fields.append(
_MUTATION.fields_by_name['delete'])
_MUTATION.fields_by_name['delete'].containing_oneof = _MUTATION.oneofs_by_name['operation']
_MUTATION.oneofs_by_name['conflict_detection_strategy'].fields.append(
_MUTATION.fields_by_name['base_version'])
_MUTATION.fields_by_name['base_version'].containing_oneof = _MUTATION.oneofs_by_name['conflict_detection_strategy']
_MUTATIONRESULT.fields_by_name['key'].message_type = google_dot_datastore_dot_v1_dot_entity__pb2._KEY
_READOPTIONS.fields_by_name['read_consistency'].enum_type = _READOPTIONS_READCONSISTENCY
_READOPTIONS_READCONSISTENCY.containing_type = _READOPTIONS
_READOPTIONS.oneofs_by_name['consistency_type'].fields.append(
_READOPTIONS.fields_by_name['read_consistency'])
_READOPTIONS.fields_by_name['read_consistency'].containing_oneof = _READOPTIONS.oneofs_by_name['consistency_type']
_READOPTIONS.oneofs_by_name['consistency_type'].fields.append(
_READOPTIONS.fields_by_name['transaction'])
_READOPTIONS.fields_by_name['transaction'].containing_oneof = _READOPTIONS.oneofs_by_name['consistency_type']
DESCRIPTOR.message_types_by_name['LookupRequest'] = _LOOKUPREQUEST
DESCRIPTOR.message_types_by_name['LookupResponse'] = _LOOKUPRESPONSE
DESCRIPTOR.message_types_by_name['RunQueryRequest'] = _RUNQUERYREQUEST
DESCRIPTOR.message_types_by_name['RunQueryResponse'] = _RUNQUERYRESPONSE
DESCRIPTOR.message_types_by_name['BeginTransactionRequest'] = _BEGINTRANSACTIONREQUEST
DESCRIPTOR.message_types_by_name['BeginTransactionResponse'] = _BEGINTRANSACTIONRESPONSE
DESCRIPTOR.message_types_by_name['RollbackRequest'] = _ROLLBACKREQUEST
DESCRIPTOR.message_types_by_name['RollbackResponse'] = _ROLLBACKRESPONSE
DESCRIPTOR.message_types_by_name['CommitRequest'] = _COMMITREQUEST
DESCRIPTOR.message_types_by_name['CommitResponse'] = _COMMITRESPONSE
DESCRIPTOR.message_types_by_name['AllocateIdsRequest'] = _ALLOCATEIDSREQUEST
DESCRIPTOR.message_types_by_name['AllocateIdsResponse'] = _ALLOCATEIDSRESPONSE
DESCRIPTOR.message_types_by_name['Mutation'] = _MUTATION
DESCRIPTOR.message_types_by_name['MutationResult'] = _MUTATIONRESULT
DESCRIPTOR.message_types_by_name['ReadOptions'] = _READOPTIONS
LookupRequest = _reflection.GeneratedProtocolMessageType('LookupRequest', (_message.Message,), dict(
DESCRIPTOR = _LOOKUPREQUEST,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.LookupRequest)
))
_sym_db.RegisterMessage(LookupRequest)
LookupResponse = _reflection.GeneratedProtocolMessageType('LookupResponse', (_message.Message,), dict(
DESCRIPTOR = _LOOKUPRESPONSE,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.LookupResponse)
))
_sym_db.RegisterMessage(LookupResponse)
RunQueryRequest = _reflection.GeneratedProtocolMessageType('RunQueryRequest', (_message.Message,), dict(
DESCRIPTOR = _RUNQUERYREQUEST,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.RunQueryRequest)
))
_sym_db.RegisterMessage(RunQueryRequest)
RunQueryResponse = _reflection.GeneratedProtocolMessageType('RunQueryResponse', (_message.Message,), dict(
DESCRIPTOR = _RUNQUERYRESPONSE,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.RunQueryResponse)
))
_sym_db.RegisterMessage(RunQueryResponse)
BeginTransactionRequest = _reflection.GeneratedProtocolMessageType('BeginTransactionRequest', (_message.Message,), dict(
DESCRIPTOR = _BEGINTRANSACTIONREQUEST,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.BeginTransactionRequest)
))
_sym_db.RegisterMessage(BeginTransactionRequest)
BeginTransactionResponse = _reflection.GeneratedProtocolMessageType('BeginTransactionResponse', (_message.Message,), dict(
DESCRIPTOR = _BEGINTRANSACTIONRESPONSE,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.BeginTransactionResponse)
))
_sym_db.RegisterMessage(BeginTransactionResponse)
RollbackRequest = _reflection.GeneratedProtocolMessageType('RollbackRequest', (_message.Message,), dict(
DESCRIPTOR = _ROLLBACKREQUEST,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.RollbackRequest)
))
_sym_db.RegisterMessage(RollbackRequest)
RollbackResponse = _reflection.GeneratedProtocolMessageType('RollbackResponse', (_message.Message,), dict(
DESCRIPTOR = _ROLLBACKRESPONSE,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.RollbackResponse)
))
_sym_db.RegisterMessage(RollbackResponse)
CommitRequest = _reflection.GeneratedProtocolMessageType('CommitRequest', (_message.Message,), dict(
DESCRIPTOR = _COMMITREQUEST,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.CommitRequest)
))
_sym_db.RegisterMessage(CommitRequest)
CommitResponse = _reflection.GeneratedProtocolMessageType('CommitResponse', (_message.Message,), dict(
DESCRIPTOR = _COMMITRESPONSE,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.CommitResponse)
))
_sym_db.RegisterMessage(CommitResponse)
AllocateIdsRequest = _reflection.GeneratedProtocolMessageType('AllocateIdsRequest', (_message.Message,), dict(
DESCRIPTOR = _ALLOCATEIDSREQUEST,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.AllocateIdsRequest)
))
_sym_db.RegisterMessage(AllocateIdsRequest)
AllocateIdsResponse = _reflection.GeneratedProtocolMessageType('AllocateIdsResponse', (_message.Message,), dict(
DESCRIPTOR = _ALLOCATEIDSRESPONSE,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.AllocateIdsResponse)
))
_sym_db.RegisterMessage(AllocateIdsResponse)
Mutation = _reflection.GeneratedProtocolMessageType('Mutation', (_message.Message,), dict(
DESCRIPTOR = _MUTATION,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.Mutation)
))
_sym_db.RegisterMessage(Mutation)
MutationResult = _reflection.GeneratedProtocolMessageType('MutationResult', (_message.Message,), dict(
DESCRIPTOR = _MUTATIONRESULT,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.MutationResult)
))
_sym_db.RegisterMessage(MutationResult)
ReadOptions = _reflection.GeneratedProtocolMessageType('ReadOptions', (_message.Message,), dict(
DESCRIPTOR = _READOPTIONS,
__module__ = 'google.datastore.v1.datastore_pb2'
# @@protoc_insertion_point(class_scope:google.datastore.v1.ReadOptions)
))
_sym_db.RegisterMessage(ReadOptions)
DESCRIPTOR.has_options = True
DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('\n\027com.google.datastore.v1B\016DatastoreProtoP\001'))
# @@protoc_insertion_point(module_scope)
| 41.834081 | 4,517 | 0.763613 | 4,609 | 37,316 | 5.875678 | 0.074203 | 0.040176 | 0.079724 | 0.045752 | 0.690263 | 0.647096 | 0.569477 | 0.548244 | 0.520439 | 0.49496 | 0 | 0.042624 | 0.109122 | 37,316 | 891 | 4,518 | 41.881033 | 0.771989 | 0.034596 | 0 | 0.640049 | 1 | 0.009828 | 0.237356 | 0.186835 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.011057 | 0 | 0.011057 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7ae9e430fad51920006bf7a558e4c86f472388f | 227 | py | Python | ex032.py | GabrielMarquesss/Exercicios-Python | 90e97ac19ae36220c42892ca773648e578b53f10 | [
"MIT"
] | null | null | null | ex032.py | GabrielMarquesss/Exercicios-Python | 90e97ac19ae36220c42892ca773648e578b53f10 | [
"MIT"
] | null | null | null | ex032.py | GabrielMarquesss/Exercicios-Python | 90e97ac19ae36220c42892ca773648e578b53f10 | [
"MIT"
] | null | null | null | # Program that says whether a year is a leap year or not:
ano = int(input('Enter a year: '))
if ano % 4 == 0 and (ano % 100 != 0 or ano % 400 == 0):
    print('{} is a leap year.'.format(ano))
else:
    print('{} is not a leap year.'.format(ano))
| 25.222222 | 51 | 0.603524 | 39 | 227 | 3.512821 | 0.564103 | 0.131387 | 0.248175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.051136 | 0.22467 | 227 | 8 | 52 | 28.375 | 0.727273 | 0.185022 | 0 | 0 | 0 | 0 | 0.271739 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7b0f10a8f2dbd65f38b1bf5607f8b02637cf3bc | 293 | py | Python | helpers/fixproj.py | j-caparotta/FML | 6dfe3da19ba83805ef7aa181d837223824c0985a | [
"Unlicense"
] | 43 | 2015-01-22T16:53:06.000Z | 2020-03-24T18:53:46.000Z | helpers/fixproj.py | j-caparotta/FML | 6dfe3da19ba83805ef7aa181d837223824c0985a | [
"Unlicense"
] | 6 | 2016-02-20T14:34:48.000Z | 2021-03-02T14:21:59.000Z | helpers/fixproj.py | j-caparotta/FML | 6dfe3da19ba83805ef7aa181d837223824c0985a | [
"Unlicense"
] | 33 | 2015-08-14T14:38:53.000Z | 2020-04-18T17:09:24.000Z | import sys, os
with open(sys.argv[1], 'r') as fd:
content = '\n'.join(line.strip() for line in fd if line.strip())
if len(sys.argv) == 3:
content = content.replace('Win32', sys.argv[2]).replace('x64', sys.argv[2])
with open(sys.argv[1], 'w') as fd:
fd.write(content)
| 36.625 | 81 | 0.59727 | 51 | 293 | 3.431373 | 0.509804 | 0.2 | 0.125714 | 0.171429 | 0.182857 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038298 | 0.197952 | 293 | 7 | 82 | 41.857143 | 0.706383 | 0 | 0 | 0 | 0 | 0 | 0.041958 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7b31606eec2c54f37f556bcc1db3c46a0b97dbd | 626 | py | Python | login_register/views.py | imranparuk/whether-the-weather | 347f3fe97975a6980a37cc60f32e65fdb04a89c6 | [
"MIT"
] | null | null | null | login_register/views.py | imranparuk/whether-the-weather | 347f3fe97975a6980a37cc60f32e65fdb04a89c6 | [
"MIT"
] | 1 | 2020-06-05T18:25:45.000Z | 2020-06-05T18:25:45.000Z | login_register/views.py | imranparuk/whether-the-weather | 347f3fe97975a6980a37cc60f32e65fdb04a89c6 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.http import HttpResponse
from django.template import loader
from django.contrib.auth import get_user_model
User = get_user_model()
#def login_reg_view(request):
# #template = loader.get_template('login_register/templates/test.html')
# return render(request, 'login_register/templates/test.html')
from django.contrib.auth.forms import UserCreationForm
from django.urls import reverse_lazy
from django.views import generic
class register(generic.CreateView):
form_class = UserCreationForm
success_url = reverse_lazy('login')
template_name = 'signup.html'
| 27.217391 | 74 | 0.795527 | 83 | 626 | 5.831325 | 0.457831 | 0.144628 | 0.070248 | 0.086777 | 0.123967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123003 | 626 | 22 | 75 | 28.454545 | 0.881603 | 0.261981 | 0 | 0 | 0 | 0 | 0.035011 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.583333 | 0 | 0.916667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f7b62b8cc50d7f9008c5396d0e2fb86e8e20f0d3 | 734 | py | Python | airone/wsgi.py | userlocalhost/airone-1 | 8aabeabb65fd2117876380f1f69a04f0cf39889d | [
"MIT"
] | null | null | null | airone/wsgi.py | userlocalhost/airone-1 | 8aabeabb65fd2117876380f1f69a04f0cf39889d | [
"MIT"
] | null | null | null | airone/wsgi.py | userlocalhost/airone-1 | 8aabeabb65fd2117876380f1f69a04f0cf39889d | [
"MIT"
] | null | null | null | """
WSGI config for airone project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/howto/deployment/wsgi/
"""
import os
import importlib
from django.conf import settings
from configurations.wsgi import get_wsgi_application
from airone.lib.log import Logger
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "airone.settings")
os.environ.setdefault("DJANGO_CONFIGURATION", "Dev")
for extension in settings.AIRONE["EXTENSIONS"]:
try:
importlib.import_module("%s.settings" % extension)
except ImportError:
Logger.warning("Failed to load settings %s" % extension)
application = get_wsgi_application()
| 28.230769 | 78 | 0.76703 | 97 | 734 | 5.721649 | 0.587629 | 0.036036 | 0.064865 | 0.09009 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003135 | 0.13079 | 734 | 25 | 79 | 29.36 | 0.866771 | 0.288828 | 0 | 0 | 0 | 0 | 0.208171 | 0.042802 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.538462 | 0 | 0.538462 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 2 |
f7bb33d8ab751c76b0cd7f5a763e6ccd73116808 | 615 | py | Python | Sources/Python/utility.py | ParliamoDiPC/fasmga | 55a340917b6ba4f7557249f0b32a8e2791c488de | [
"MIT"
] | null | null | null | Sources/Python/utility.py | ParliamoDiPC/fasmga | 55a340917b6ba4f7557249f0b32a8e2791c488de | [
"MIT"
] | null | null | null | Sources/Python/utility.py | ParliamoDiPC/fasmga | 55a340917b6ba4f7557249f0b32a8e2791c488de | [
"MIT"
] | null | null | null | import random, string, json
global ratelimit
ratelimit = {}
Authorizated = json.load(open("Sources/Json/Authorizated.json", "r"))
def newUrlID(): return "".join(random.choice(string.ascii_lowercase) for i in range(8))
def ratelimitCheck(dbToken):
    # Tokens of authorized users are pinned to -1 and are never rate limited.
    for auth in Authorizated:
        ratelimit[auth] = -1
    owner = dbToken["Owner"]
    if owner not in ratelimit:
        ratelimit[owner] = 1
        return True
    if ratelimit[owner] == -1:
        return True
    if ratelimit[owner] < 20:
        ratelimit[owner] += 1
        return True
    return False
f7cb542b8b2219b7c693d7dd0736895d479ce62c | 566 | py | Python | tests/test_wallet.py | mschneider/mango-explorer | ed50880ef80b31b679c9c89fa9bf0579391d71c9 | [
"MIT"
] | 1 | 2021-09-09T20:49:46.000Z | 2021-09-09T20:49:46.000Z | tests/test_wallet.py | mschneider/mango-explorer | ed50880ef80b31b679c9c89fa9bf0579391d71c9 | [
"MIT"
] | null | null | null | tests/test_wallet.py | mschneider/mango-explorer | ed50880ef80b31b679c9c89fa9bf0579391d71c9 | [
"MIT"
] | 2 | 2021-09-09T20:49:50.000Z | 2021-11-05T21:41:41.000Z | from .context import mango
def test_constructor():
secret_key = [0] * 32
actual = mango.Wallet(secret_key)
assert actual is not None
assert actual.logger is not None
assert actual.secret_key == secret_key
assert actual.account is not None
def test_constructor_with_longer_secret_key():
secret_key = [0] * 64
actual = mango.Wallet(secret_key)
assert actual is not None
assert actual.logger is not None
assert actual.secret_key != secret_key
assert len(actual.secret_key) == 32
assert actual.account is not None
| 26.952381 | 46 | 0.717314 | 84 | 566 | 4.654762 | 0.27381 | 0.230179 | 0.138107 | 0.153453 | 0.690537 | 0.690537 | 0.56266 | 0.56266 | 0.56266 | 0.56266 | 0 | 0.018018 | 0.215548 | 566 | 20 | 47 | 28.3 | 0.862613 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5625 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7d4a9aa8b746937f3ba159dc212e1cb8a816c3a | 3,640 | py | Python | ibm-iot-quickstart.py | oscarordaz27/TheIoTLearningInitiative | 54678a38d5b58d4f41c839d133ed3c4dc1cd6025 | [
"Apache-2.0"
] | null | null | null | ibm-iot-quickstart.py | oscarordaz27/TheIoTLearningInitiative | 54678a38d5b58d4f41c839d133ed3c4dc1cd6025 | [
"Apache-2.0"
] | null | null | null | ibm-iot-quickstart.py | oscarordaz27/TheIoTLearningInitiative | 54678a38d5b58d4f41c839d133ed3c4dc1cd6025 | [
"Apache-2.0"
] | null | null | null | ##*****************************************************************************
## Copyright (c) 2014 IBM Corporation and other Contributors.
##
## All rights reserved. This program and the accompanying materials
## are made available under the terms of the Eclipse Public License v1.0
## which accompanies this distribution, and is available at
## http://www.eclipse.org/legal/epl-v10.html
##
## Contributors:
## IBM - Initial Contribution
##*****************************************************************************
## IoT Foundation QuickStart Driver
## A sample IBM Internet of Things Foundation Service client for Intel Internet of Things Gateway Solutions
import time
import client as mqtt
import json
import uuid
#Class for retrieving CPU % utilisation
class CPUutil(object):
def __init__(self):
self.prev_idle = 0
self.prev_total = 0
self.new_idle = 0
self.new_total = 0
def get(self):
self.read()
delta_idle = self.new_idle - self.prev_idle
delta_total = self.new_total - self.prev_total
cpuut = 0.0
if (self.prev_total != 0) and (delta_total != 0):
cpuut = ((delta_total - delta_idle) * 100.0 / delta_total)
return cpuut
def read(self):
self.prev_idle = self.new_idle
self.prev_total = self.new_total
        self.new_idle = 0
        self.new_total = 0
with open('/proc/stat') as f:
line = f.readline()
parts = line.split()
if len(parts) >= 5:
self.new_idle = int(parts[4])
for part in parts[1:]:
self.new_total += int(part)
#Initialise class to retrieve CPU Usage
cpuutil = CPUutil()
macAddress = hex(uuid.getnode())[2:-1]
macAddress = format(long(macAddress, 16),'012x')
# remind the user of the MAC address further down in the code (after 'connecting to Quickstart')
#Set the variables for connecting to the Quickstart service
organization = "quickstart"
deviceType = "iotsample-gateway"
broker = ""
topic = "iot-2/evt/status/fmt/json"
username = ""
password = ""
error_to_catch = getattr(__builtins__,'FileNotFoundError', IOError)
try:
file_object = open("./device.cfg")
for line in file_object:
readType, readValue = line.split("=")
if readType == "org":
organization = readValue.strip()
elif readType == "type":
deviceType = readValue.strip()
elif readType == "id":
macAddress = readValue.strip()
elif readType == "auth-method":
username = "use-token-auth"
elif readType == "auth-token":
password = readValue.strip()
else:
print("please check the format of your config file") #will want to repeat this error further down if their connection fails?
file_object.close()
print("Configuration file found - connecting to the registered service")
except error_to_catch:
print("No config file found, connecting to the Quickstart service")
print("MAC address: " + macAddress)
#Creating the client connection
#Set clientID and broker
clientID = "d:" + organization + ":" + deviceType + ":" + macAddress
broker = organization + ".messaging.internetofthings.ibmcloud.com"
mqttc = mqtt.Client(clientID)
#Set authentication values, if connecting to registered service
if username != "":
mqttc.username_pw_set(username, password=password)
mqttc.connect(host=broker, port=1883, keepalive=60)
#Publishing to IBM Internet of Things Foundation
mqttc.loop_start()
while mqttc.loop() == 0:
cpuutilvalue = cpuutil.get()
    print(cpuutilvalue)
msg = json.JSONEncoder().encode({"d":{"cpuutil":cpuutilvalue}})
mqttc.publish(topic, payload=msg, qos=0, retain=False)
print "message published"
time.sleep(5)
pass
| 28 | 130 | 0.665385 | 464 | 3,640 | 5.131466 | 0.44181 | 0.029399 | 0.0231 | 0.032759 | 0.121378 | 0.040319 | 0.021 | 0.021 | 0 | 0 | 0 | 0.014165 | 0.18544 | 3,640 | 129 | 131 | 28.217054 | 0.78887 | 0.29478 | 0 | 0 | 0 | 0 | 0.15263 | 0.025702 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.052632 | 0.052632 | null | null | 0.078947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f7d53a2094471d114a4a21c06099237c0ae7a9d6 | 2,872 | py | Python | test/programytest/aiml_tests/arrow_tests/test_arrow_aiml.py | minhdc/documented-programy | fe947d68c0749201fbe93ee5644d304235d0c626 | [
"MIT"
] | null | null | null | test/programytest/aiml_tests/arrow_tests/test_arrow_aiml.py | minhdc/documented-programy | fe947d68c0749201fbe93ee5644d304235d0c626 | [
"MIT"
] | null | null | null | test/programytest/aiml_tests/arrow_tests/test_arrow_aiml.py | minhdc/documented-programy | fe947d68c0749201fbe93ee5644d304235d0c626 | [
"MIT"
] | null | null | null | import unittest
import os
from programy.context import ClientContext
from programytest.aiml_tests.client import TestClient
class ArrowTestClient(TestClient):
def __init__(self):
TestClient.__init__(self)
def load_configuration(self, arguments):
super(ArrowTestClient, self).load_configuration(arguments)
self.configuration.client_configuration.configurations[0].configurations[0].files.aiml_files._files = [os.path.dirname(__file__)]
class ArrowAIMLTests(unittest.TestCase):
def setUp(self):
client = ArrowTestClient()
self._client_context = client.create_client_context("testid")
def test_arrow_first_word(self):
response = self._client_context.bot.ask_question(self._client_context, "SAY HEY")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS SAY')
def test_arrow_first_no_word(self):
response = self._client_context.bot.ask_question(self._client_context, "HEY")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS')
def test_arrow_first_multi_word(self):
response = self._client_context.bot.ask_question(self._client_context, "WE SAY HEY")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS WE SAY')
def test_arrow_last_word(self):
response = self._client_context.bot.ask_question(self._client_context, "HELLO YOU")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS YOU')
def test_arrow_no_word(self):
response = self._client_context.bot.ask_question(self._client_context, "HELLO")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS')
def test_arrow_no_multi_word(self):
response = self._client_context.bot.ask_question(self._client_context, "HELLO YOU THERE")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS YOU THERE')
def test_arrow_middle_word(self):
response = self._client_context.bot.ask_question(self._client_context, "WELL HI THERE")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS HI')
def test_arrow_middle_no_word(self):
response = self._client_context.bot.ask_question(self._client_context, "WELL THERE")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS')
def test_arrow_middle_multi_word(self):
response = self._client_context.bot.ask_question(self._client_context, "WELL I WAS THERE")
self.assertIsNotNone(response)
self.assertEqual(response, 'ARROW IS I WAS')
def test_arrow_specific_case1(self):
response = self._client_context.bot.ask_question(self._client_context, "aaa bbb ccc ddd")
self.assertIsNotNone(response)
self.assertEqual(response, 'passed')
| 38.810811 | 137 | 0.725975 | 347 | 2,872 | 5.700288 | 0.184438 | 0.111223 | 0.180485 | 0.111223 | 0.672396 | 0.672396 | 0.647118 | 0.647118 | 0.647118 | 0.517189 | 0 | 0.00127 | 0.177577 | 2,872 | 73 | 138 | 39.342466 | 0.836156 | 0 | 0 | 0.240741 | 0 | 0 | 0.07695 | 0 | 0 | 0 | 0 | 0 | 0.37037 | 1 | 0.240741 | false | 0.018519 | 0.074074 | 0 | 0.351852 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7de0c5d7c143dbbaa1cce7fc39c805f3b84b54c | 31,938 | py | Python | tkge/models/model.py | tkg-framework/TKG-framework | 98586b7199bda0e96d74b2ea02c62226901822cc | [
"MIT",
"Unlicense"
] | null | null | null | tkge/models/model.py | tkg-framework/TKG-framework | 98586b7199bda0e96d74b2ea02c62226901822cc | [
"MIT",
"Unlicense"
] | null | null | null | tkge/models/model.py | tkg-framework/TKG-framework | 98586b7199bda0e96d74b2ea02c62226901822cc | [
"MIT",
"Unlicense"
] | null | null | null | import torch
from torch import nn
from torch.nn import functional as F
import numpy as np
from enum import Enum
import os
from collections import defaultdict
from typing import Mapping, Dict
import random
from tkge.common.registry import Registrable
from tkge.common.config import Config
from tkge.common.error import ConfigurationError
from tkge.data.dataset import DatasetProcessor
from tkge.models.layers import LSTMModel
from tkge.models.utils import *
class BaseModel(nn.Module, Registrable):
def __init__(self, config: Config, dataset: DatasetProcessor):
nn.Module.__init__(self)
Registrable.__init__(self, config=config)
self.dataset = dataset
@staticmethod
def create(config: Config, dataset: DatasetProcessor):
"""Factory method for sampler creation"""
model_type = config.get("model.name")
if model_type in BaseModel.list_available():
            # kwargs = config.get("model.args")  # TODO: needs to be changed to the key format
return BaseModel.by_name(model_type)(config, dataset)
else:
raise ConfigurationError(
f"{model_type} specified in configuration file is not supported"
f"implement your model class with `BaseModel.register(name)"
)
def load_config(self):
        # TODO(gengyuan): load the parameters if they are given, otherwise fall back to defaults; ideally read the config file directly and setattr each value. Do we need an assert here?
raise NotImplementedError
def prepare_embedding(self):
raise NotImplementedError
def get_embedding(self, **kwargs):
raise NotImplementedError
def forward(self, samples, **kwargs):
raise NotImplementedError
def predict(self, queries: torch.Tensor):
"""
        Should be a wrapper around the forward method, or a computation flow identical to the one in forward.
        Intended particularly for prediction tasks whose inputs are incomplete queries.
        New modules or learnable parameters constructed in this namespace should be avoided, since they are not involved in the training procedure.
"""
raise NotImplementedError
def fit(self, samples: torch.Tensor):
# TODO(gengyuan): wrapping all the models
"""
        Should be a wrapper around the forward method, or a computation flow identical to the one in forward.
        This method is intended to handle the arbitrarily-shaped samples produced by negative sampling, either as a matrix or flattened,
        especially when the training procedure and the prediction procedure differ.
Samples should be processed in this method and then passed to forward.
Input samples are the direct output of the negative sampling.
"""
raise NotImplementedError
@BaseModel.register(name='de_simple')
class DeSimplEModel(BaseModel):
def __init__(self, config: Config, dataset: DatasetProcessor):
super().__init__(config, dataset)
self.prepare_embedding()
self.time_nl = torch.sin # TODO add to configuration file
def prepare_embedding(self):
num_ent = self.dataset.num_entities()
num_rel = self.dataset.num_relations()
emb_dim = self.config.get("model.embedding.emb_dim")
se_prop = self.config.get("model.embedding.se_prop")
s_emb_dim = int(se_prop * emb_dim)
t_emb_dim = emb_dim - s_emb_dim
# torch.manual_seed(0)
# torch.cuda.manual_seed_all(0)
# np.random.seed(0)
# random.seed(0)
# torch.backends.cudnn.deterministic = True
# os.environ['PYTHONHASHSEED'] = str(0)
self.embedding: Dict[str, nn.Module] = defaultdict(dict)
self.embedding.update({'ent_embs_h': nn.Embedding(num_ent, s_emb_dim)})
self.embedding.update({'ent_embs_t': nn.Embedding(num_ent, s_emb_dim)})
self.embedding.update({'rel_embs_f': nn.Embedding(num_rel, s_emb_dim + t_emb_dim)})
self.embedding.update({'rel_embs_i': nn.Embedding(num_rel, s_emb_dim + t_emb_dim)})
# frequency embeddings for the entities
self.embedding.update({'m_freq_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'m_freq_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'d_freq_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'d_freq_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'y_freq_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'y_freq_t': nn.Embedding(num_ent, t_emb_dim)})
# phi embeddings for the entities
self.embedding.update({'m_phi_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'m_phi_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'d_phi_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'d_phi_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'y_phi_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'y_phi_t': nn.Embedding(num_ent, t_emb_dim)})
        # amplitude embeddings for the entities
self.embedding.update({'m_amps_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'m_amps_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'d_amps_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'d_amps_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'y_amps_h': nn.Embedding(num_ent, t_emb_dim)})
self.embedding.update({'y_amps_t': nn.Embedding(num_ent, t_emb_dim)})
self.embedding = nn.ModuleDict(self.embedding)
for k, v in self.embedding.items():
nn.init.xavier_uniform_(v.weight)
# nn.init.xavier_uniform_(self.ent_embs_h.weight)
# nn.init.xavier_uniform_(self.ent_embs_t.weight)
# nn.init.xavier_uniform_(self.rel_embs_f.weight)
# nn.init.xavier_uniform_(self.rel_embs_i.weight)
#
# nn.init.xavier_uniform_(self.m_freq_h.weight)
# nn.init.xavier_uniform_(self.d_freq_h.weight)
# nn.init.xavier_uniform_(self.y_freq_h.weight)
# nn.init.xavier_uniform_(self.m_freq_t.weight)
# nn.init.xavier_uniform_(self.d_freq_t.weight)
# nn.init.xavier_uniform_(self.y_freq_t.weight)
#
# nn.init.xavier_uniform_(self.m_phi_h.weight)
# nn.init.xavier_uniform_(self.d_phi_h.weight)
# nn.init.xavier_uniform_(self.y_phi_h.weight)
# nn.init.xavier_uniform_(self.m_phi_t.weight)
# nn.init.xavier_uniform_(self.d_phi_t.weight)
# nn.init.xavier_uniform_(self.y_phi_t.weight)
#
# nn.init.xavier_uniform_(self.m_amps_h.weight)
# nn.init.xavier_uniform_(self.d_amps_h.weight)
# nn.init.xavier_uniform_(self.y_amps_h.weight)
# nn.init.xavier_uniform_(self.m_amps_t.weight)
# nn.init.xavier_uniform_(self.d_amps_t.weight)
# nn.init.xavier_uniform_(self.y_amps_t.weight)
# nn.init.xavier_uniform_(self.embedding['ent_embs_h'].weight)
# nn.init.xavier_uniform_(self.embedding['ent_embs_t'].weight)
# nn.init.xavier_uniform_(self.embedding['rel_embs_f'].weight)
# nn.init.xavier_uniform_(self.embedding['rel_embs_i'].weight)
#
# nn.init.xavier_uniform_(self.embedding['m_freq_h'].weight)
# nn.init.xavier_uniform_(self.embedding['d_freq_h'].weight)
# nn.init.xavier_uniform_(self.embedding['y_freq_h'].weight)
# nn.init.xavier_uniform_(self.embedding['m_freq_t'].weight)
# nn.init.xavier_uniform_(self.embedding['d_freq_t'].weight)
# nn.init.xavier_uniform_(self.embedding['y_freq_t'].weight)
#
# nn.init.xavier_uniform_(self.embedding['m_phi_h'].weight)
# nn.init.xavier_uniform_(self.embedding['d_phi_h'].weight)
# nn.init.xavier_uniform_(self.embedding['y_phi_h'].weight)
# nn.init.xavier_uniform_(self.embedding['m_phi_t'].weight)
# nn.init.xavier_uniform_(self.embedding['d_phi_t'].weight)
# nn.init.xavier_uniform_(self.embedding['y_phi_t'].weight)
#
# nn.init.xavier_uniform_(self.embedding['m_amps_h'].weight)
# nn.init.xavier_uniform_(self.embedding['d_amps_h'].weight)
# nn.init.xavier_uniform_(self.embedding['y_amps_h'].weight)
# nn.init.xavier_uniform_(self.embedding['m_amps_t'].weight)
# nn.init.xavier_uniform_(self.embedding['d_amps_t'].weight)
# nn.init.xavier_uniform_(self.embedding['y_amps_t'].weight)
# for name, params in self.named_parameters():
# print(name)
# print(params)
# print(params.size())
#
# assert False
def get_time_embedding(self, ent, year, month, day, ent_pos):
# TODO: enum
if ent_pos == "head":
time_emb = self.embedding['y_amps_h'](ent) * self.time_nl(
self.embedding['y_freq_h'](ent) * year + self.embedding['y_phi_h'](ent))
time_emb += self.embedding['m_amps_h'](ent) * self.time_nl(
self.embedding['m_freq_h'](ent) * month + self.embedding['m_phi_h'](ent))
time_emb += self.embedding['d_amps_h'](ent) * self.time_nl(
self.embedding['d_freq_h'](ent) * day + self.embedding['d_phi_h'](ent))
else:
time_emb = self.embedding['y_amps_t'](ent) * self.time_nl(
self.embedding['y_freq_t'](ent) * year + self.embedding['y_phi_t'](ent))
time_emb += self.embedding['m_amps_t'](ent) * self.time_nl(
self.embedding['m_freq_t'](ent) * month + self.embedding['m_phi_t'](ent))
time_emb += self.embedding['d_amps_t'](ent) * self.time_nl(
self.embedding['d_freq_t'](ent) * day + self.embedding['d_phi_t'](ent))
return time_emb
def get_embedding(self, head, rel, tail, year, month, day):
year = year.view(-1, 1)
month = month.view(-1, 1)
day = day.view(-1, 1)
h_emb1 = self.embedding['ent_embs_h'](head)
r_emb1 = self.embedding['rel_embs_f'](rel)
t_emb1 = self.embedding['ent_embs_t'](tail)
h_emb2 = self.embedding['ent_embs_h'](tail)
r_emb2 = self.embedding['rel_embs_i'](rel)
t_emb2 = self.embedding['ent_embs_t'](head)
h_emb1 = torch.cat((h_emb1, self.get_time_embedding(head, year, month, day, 'head')), 1)
t_emb1 = torch.cat((t_emb1, self.get_time_embedding(tail, year, month, day, 'tail')), 1)
h_emb2 = torch.cat((h_emb2, self.get_time_embedding(tail, year, month, day, 'head')), 1)
t_emb2 = torch.cat((t_emb2, self.get_time_embedding(head, year, month, day, 'tail')), 1)
return h_emb1, r_emb1, t_emb1, h_emb2, r_emb2, t_emb2
def forward(self, samples, **kwargs):
head = samples[:, 0].long()
rel = samples[:, 1].long()
tail = samples[:, 2].long()
year = samples[:, 3]
month = samples[:, 4]
day = samples[:, 5]
h_emb1, r_emb1, t_emb1, h_emb2, r_emb2, t_emb2 = self.get_embedding(head, rel, tail, year, month, day)
p = self.config.get('model.dropout')
scores = ((h_emb1 * r_emb1) * t_emb1 + (h_emb2 * r_emb2) * t_emb2) / 2.0
scores = F.dropout(scores, p=p, training=self.training) # TODO training
scores = torch.sum(scores, dim=1)
return scores, None
def fit(self, samples: torch.Tensor):
bs = samples.size(0)
dim = samples.size(1) // (1 + self.config.get("negative_sampling.num_samples"))
samples = samples.view(-1, dim)
scores, factor = self.forward(samples)
scores = scores.view(bs, -1)
return scores, factor
def predict(self, queries: torch.Tensor):
        assert torch.isnan(queries).sum(1).bool().all(), "Either head or tail should be absent."
bs = queries.size(0)
candidates = all_candidates_of_ent_queries(queries, self.dataset.num_entities())
scores, _ = self.forward(candidates)
scores = scores.view(bs, -1)
return scores
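The diachronic time feature built in `get_time_embedding` is, per embedding dimension, amplitude * sin(frequency * t + phase), summed over the year, month and day channels. A minimal numpy sketch with invented parameter values:

```python
import numpy as np

def time_feature(amp, freq, phi, t, nl=np.sin):
    # One channel of the diachronic feature: a learnable amplitude,
    # frequency and phase per embedding dimension.
    return amp * nl(freq * t + phi)

# Invented 2-dimensional "entity" parameters and timestamps.
amp = np.array([0.5, 1.0])
freq = np.array([2.0, 0.1])
phi = np.array([0.0, np.pi / 2])
emb = (time_feature(amp, freq, phi, t=3.0)      # "year" channel
       + time_feature(amp, freq, phi, t=7.0)    # "month" channel
       + time_feature(amp, freq, phi, t=21.0))  # "day" channel
assert emb.shape == (2,)  # one temporal feature per embedding dimension
```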
@BaseModel.register(name="tcomplex")
class TComplExModel(BaseModel):
def __init__(self, config: Config, dataset: DatasetProcessor):
super().__init__(config, dataset)
self.rank = self.config.get("model.rank")
self.no_time_emb = self.config.get("model.no_time_emb")
self.init_size = self.config.get("model.init_size")
self.num_ent = self.dataset.num_entities()
self.num_rel = self.dataset.num_relations()
self.num_ts = self.dataset.num_timestamps()
self.prepare_embedding()
def prepare_embedding(self):
self.embeddings = nn.ModuleList([
nn.Embedding(s, 2 * self.rank, sparse=True)
for s in [self.num_ent, self.num_rel, self.num_ts]
])
for emb in self.embeddings:
emb.weight.data *= self.init_size
def forward(self, x):
"""
x is spot
"""
lhs = self.embeddings[0](x[:, 0].long())
rel = self.embeddings[1](x[:, 1].long())
rhs = self.embeddings[0](x[:, 2].long())
time = self.embeddings[2](x[:, 3].long())
lhs = lhs[:, :self.rank], lhs[:, self.rank:]
rel = rel[:, :self.rank], rel[:, self.rank:]
rhs = rhs[:, :self.rank], rhs[:, self.rank:]
time = time[:, :self.rank], time[:, self.rank:]
right = self.embeddings[0].weight # all ent tensor
right = right[:, :self.rank], right[:, self.rank:]
rt = rel[0] * time[0], rel[1] * time[0], rel[0] * time[1], rel[1] * time[1]
full_rel = rt[0] - rt[3], rt[1] + rt[2]
# 1st item: scores
# 2nd item: reg item factors
# 3rd item: time
scores = (lhs[0] * full_rel[0] - lhs[1] * full_rel[1]) @ right[0].t() + \
(lhs[1] * full_rel[0] + lhs[0] * full_rel[1]) @ right[1].t()
factors = {
"n3": (torch.sqrt(lhs[0] ** 2 + lhs[1] ** 2),
torch.sqrt(full_rel[0] ** 2 + full_rel[1] ** 2),
torch.sqrt(rhs[0] ** 2 + rhs[1] ** 2)),
"lambda3": (self.embeddings[2].weight[:-1] if self.no_time_emb else self.embeddings[2].weight)
}
return scores, factors
def predict(self, x):
        assert torch.isnan(x).sum(1).bool().all(), "Either head or tail should be absent."
missing_head_ind = torch.isnan(x)[:, 0].byte().unsqueeze(1)
reversed_x = x.clone()
reversed_x[:, 1] += 1
reversed_x[:, (0, 2)] = reversed_x[:, (2, 0)]
x = torch.where(missing_head_ind,
reversed_x,
x)
lhs = self.embeddings[0](x[:, 0].long())
rel = self.embeddings[1](x[:, 1].long())
time = self.embeddings[2](x[:, 3].long())
lhs = lhs[:, :self.rank], lhs[:, self.rank:]
rel = rel[:, :self.rank], rel[:, self.rank:]
time = time[:, :self.rank], time[:, self.rank:]
right = self.embeddings[0].weight
right = right[:, :self.rank], right[:, self.rank:]
scores = (lhs[0] * rel[0] * time[0] - lhs[1] * rel[1] * time[0] -
lhs[1] * rel[0] * time[1] - lhs[0] * rel[1] * time[1]) @ right[0].t() + \
(lhs[1] * rel[0] * time[0] + lhs[0] * rel[1] * time[0] +
lhs[0] * rel[0] * time[1] - lhs[1] * rel[1] * time[1]) @ right[1].t()
return scores
def forward_over_time(self, x):
lhs = self.embeddings[0](x[:, 0])
rel = self.embeddings[1](x[:, 1])
rhs = self.embeddings[0](x[:, 2])
time = self.embeddings[2].weight
lhs = lhs[:, :self.rank], lhs[:, self.rank:]
rel = rel[:, :self.rank], rel[:, self.rank:]
rhs = rhs[:, :self.rank], rhs[:, self.rank:]
time = time[:, :self.rank], time[:, self.rank:]
return (
(lhs[0] * rel[0] * rhs[0] - lhs[1] * rel[1] * rhs[0] -
lhs[1] * rel[0] * rhs[1] + lhs[0] * rel[1] * rhs[1]) @ time[0].t() +
(lhs[1] * rel[0] * rhs[0] - lhs[0] * rel[1] * rhs[0] +
lhs[0] * rel[0] * rhs[1] - lhs[1] * rel[1] * rhs[1]) @ time[1].t()
)
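The `rt`/`full_rel` lines in `TComplExModel.forward` compute the complex product rel * time, stored as (real, imaginary) halves. A numpy cross-check against built-in complex arithmetic (random values, fixed seed):

```python
import numpy as np

# full_rel = rel * time as a complex product, in (real, imag) halves.
rng = np.random.default_rng(0)
r_re, r_im, t_re, t_im = rng.standard_normal((4, 5))

rt = (r_re * t_re, r_im * t_re, r_re * t_im, r_im * t_im)
full_rel = (rt[0] - rt[3], rt[1] + rt[2])          # (real, imag)

# Reference: (r_re + i*r_im) * (t_re + i*t_im)
ref = (r_re + 1j * r_im) * (t_re + 1j * t_im)
assert np.allclose(full_rel[0], ref.real)
assert np.allclose(full_rel[1], ref.imag)
```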
@BaseModel.register(name="hyte")
class HyTEModel(BaseModel):
def __init__(self, config: Config, dataset: DatasetProcessor):
super().__init__(config, dataset)
@BaseModel.register(name="atise")
class ATiSEModel(BaseModel):
def __init__(self, config: Config, dataset: DatasetProcessor):
super().__init__(config, dataset)
# TODO(gengyuan) load params before initialize
self.cmin = self.config.get("model.cmin")
self.cmax = self.config.get("model.cmax")
self.emb_dim = self.config.get("model.embedding_dim")
self.prepare_embedding()
def prepare_embedding(self):
num_ent = self.dataset.num_entities()
num_rel = self.dataset.num_relations()
        self.embedding: Dict[str, nn.Module] = {}
self.embedding.update({'emb_E': nn.Embedding(num_ent, self.emb_dim, padding_idx=0)})
self.embedding.update({'emb_E_var': nn.Embedding(num_ent, self.emb_dim, padding_idx=0)})
self.embedding.update({'emb_R': nn.Embedding(num_rel, self.emb_dim, padding_idx=0)})
self.embedding.update({'emb_R_var': nn.Embedding(num_rel, self.emb_dim, padding_idx=0)})
self.embedding.update({'emb_TE': nn.Embedding(num_ent, self.emb_dim, padding_idx=0)})
self.embedding.update({'alpha_E': nn.Embedding(num_ent, 1, padding_idx=0)})
self.embedding.update({'beta_E': nn.Embedding(num_ent, self.emb_dim, padding_idx=0)})
self.embedding.update({'omega_E': nn.Embedding(num_ent, self.emb_dim, padding_idx=0)})
self.embedding.update({'emb_TR': nn.Embedding(num_rel, self.emb_dim, padding_idx=0)})
self.embedding.update({'alpha_R': nn.Embedding(num_rel, 1, padding_idx=0)})
self.embedding.update({'beta_R': nn.Embedding(num_rel, self.emb_dim, padding_idx=0)})
self.embedding.update({'omega_R': nn.Embedding(num_rel, self.emb_dim, padding_idx=0)})
self.embedding = nn.ModuleDict(self.embedding)
r = 6 / np.sqrt(self.emb_dim)
self.embedding['emb_E'].weight.data.uniform_(-r, r)
self.embedding['emb_E_var'].weight.data.uniform_(self.cmin, self.cmax)
self.embedding['emb_R'].weight.data.uniform_(-r, r)
self.embedding['emb_R_var'].weight.data.uniform_(self.cmin, self.cmax)
self.embedding['emb_TE'].weight.data.uniform_(-r, r)
self.embedding['alpha_E'].weight.data.uniform_(0, 0)
self.embedding['beta_E'].weight.data.uniform_(0, 0)
self.embedding['omega_E'].weight.data.uniform_(-r, r)
self.embedding['emb_TR'].weight.data.uniform_(-r, r)
self.embedding['alpha_R'].weight.data.uniform_(0, 0)
self.embedding['beta_R'].weight.data.uniform_(0, 0)
self.embedding['omega_R'].weight.data.uniform_(-r, r)
self.embedding['emb_E'].weight.data.renorm_(p=2, dim=0, maxnorm=1)
self.embedding['emb_E_var'].weight.data.uniform_(self.cmin, self.cmax)
self.embedding['emb_R'].weight.data.renorm_(p=2, dim=0, maxnorm=1)
self.embedding['emb_R_var'].weight.data.uniform_(self.cmin, self.cmax)
self.embedding['emb_TE'].weight.data.renorm_(p=2, dim=0, maxnorm=1)
self.embedding['emb_TR'].weight.data.renorm_(p=2, dim=0, maxnorm=1)
def forward(self, sample: torch.Tensor):
bs = sample.size(0)
# TODO(gengyuan)
dim = sample.size(1) // (1 + self.config.get("negative_sampling.num_samples"))
sample = sample.view(-1, dim)
# TODO(gengyuan) type conversion when feeding the data instead of running the models
h_i, t_i, r_i, d_i = sample[:, 0].long(), sample[:, 2].long(), sample[:, 1].long(), sample[:, 3]
pi = 3.14159265358979323846
h_mean = self.embedding['emb_E'](h_i).view(-1, self.emb_dim) + \
d_i.view(-1, 1) * self.embedding['alpha_E'](h_i).view(-1, 1) * self.embedding['emb_TE'](h_i).view(-1,
self.emb_dim) \
+ self.embedding['beta_E'](h_i).view(-1, self.emb_dim) * torch.sin(
2 * pi * self.embedding['omega_E'](h_i).view(-1, self.emb_dim) * d_i.view(-1, 1))
t_mean = self.embedding['emb_E'](t_i).view(-1, self.emb_dim) + \
d_i.view(-1, 1) * self.embedding['alpha_E'](t_i).view(-1, 1) * self.embedding['emb_TE'](t_i).view(-1,
self.emb_dim) \
+ self.embedding['beta_E'](t_i).view(-1, self.emb_dim) * torch.sin(
2 * pi * self.embedding['omega_E'](t_i).view(-1, self.emb_dim) * d_i.view(-1, 1))
r_mean = self.embedding['emb_R'](r_i).view(-1, self.emb_dim) + \
d_i.view(-1, 1) * self.embedding['alpha_R'](r_i).view(-1, 1) * self.embedding['emb_TR'](r_i).view(-1,
self.emb_dim) \
+ self.embedding['beta_R'](r_i).view(-1, self.emb_dim) * torch.sin(
2 * pi * self.embedding['omega_R'](r_i).view(-1, self.emb_dim) * d_i.view(-1, 1))
h_var = self.embedding['emb_E_var'](h_i).view(-1, self.emb_dim)
t_var = self.embedding['emb_E_var'](t_i).view(-1, self.emb_dim)
r_var = self.embedding['emb_R_var'](r_i).view(-1, self.emb_dim)
out1 = torch.sum((h_var + t_var) / r_var, 1) + torch.sum(((r_mean - h_mean + t_mean) ** 2) / r_var,
1) - self.emb_dim
out2 = torch.sum(r_var / (h_var + t_var), 1) + torch.sum(((h_mean - t_mean - r_mean) ** 2) / (h_var + t_var),
1) - self.emb_dim
scores = (out1 + out2) / 4
scores = scores.view(bs, -1)
factors = {
"renorm": (self.embedding['emb_E'].weight,
self.embedding['emb_R'].weight,
self.embedding['emb_TE'].weight,
self.embedding['emb_TR'].weight),
"clamp": (self.embedding['emb_E_var'].weight,
self.embedding['emb_R_var'].weight)
}
return scores, factors
    # TODO(gengyuan):
    # workaround
def predict(self, sample: torch.Tensor):
bs = sample.size(0)
# TODO(gengyuan)
dim = sample.size(1) // (self.dataset.num_entities())
sample = sample.view(-1, dim)
# TODO(gengyuan) type conversion when feeding the data instead of running the models
h_i, t_i, r_i, d_i = sample[:, 0].long(), sample[:, 2].long(), sample[:, 1].long(), sample[:, 3]
pi = 3.14159265358979323846
h_mean = self.embedding['emb_E'](h_i).view(-1, self.emb_dim) + \
d_i.view(-1, 1) * self.embedding['alpha_E'](h_i).view(-1, 1) * self.embedding['emb_TE'](h_i).view(-1,
self.emb_dim) \
+ self.embedding['beta_E'](h_i).view(-1, self.emb_dim) * torch.sin(
2 * pi * self.embedding['omega_E'](h_i).view(-1, self.emb_dim) * d_i.view(-1, 1))
t_mean = self.embedding['emb_E'](t_i).view(-1, self.emb_dim) + \
d_i.view(-1, 1) * self.embedding['alpha_E'](t_i).view(-1, 1) * self.embedding['emb_TE'](t_i).view(-1,
self.emb_dim) \
+ self.embedding['beta_E'](t_i).view(-1, self.emb_dim) * torch.sin(
2 * pi * self.embedding['omega_E'](t_i).view(-1, self.emb_dim) * d_i.view(-1, 1))
r_mean = self.embedding['emb_R'](r_i).view(-1, self.emb_dim) + \
d_i.view(-1, 1) * self.embedding['alpha_R'](r_i).view(-1, 1) * self.embedding['emb_TR'](r_i).view(-1,
self.emb_dim) \
+ self.embedding['beta_R'](r_i).view(-1, self.emb_dim) * torch.sin(
2 * pi * self.embedding['omega_R'](r_i).view(-1, self.emb_dim) * d_i.view(-1, 1))
h_var = self.embedding['emb_E_var'](h_i).view(-1, self.emb_dim)
t_var = self.embedding['emb_E_var'](t_i).view(-1, self.emb_dim)
r_var = self.embedding['emb_R_var'](r_i).view(-1, self.emb_dim)
out1 = torch.sum((h_var + t_var) / r_var, 1) + torch.sum(((r_mean - h_mean + t_mean) ** 2) / r_var,
1) - self.emb_dim
out2 = torch.sum(r_var / (h_var + t_var), 1) + torch.sum(((h_mean - t_mean - r_mean) ** 2) / (h_var + t_var),
1) - self.emb_dim
scores = (out1 + out2) / 4
scores = scores.view(bs, -1)
factors = {
"renorm": (self.embedding['emb_E'].weight,
self.embedding['emb_R'].weight,
self.embedding['emb_TE'].weight,
self.embedding['emb_TR'].weight),
"clamp": (self.embedding['emb_E_var'].weight,
self.embedding['emb_R_var'].weight)
}
return scores, factors
# reference: https://github.com/bsantraigi/TA_TransE/blob/master/model.py
# reference: https://github.com/jimmywangheng/knowledge_representation_pytorch
@BaseModel.register(name="ta_transe")
class TATransEModel(BaseModel):
def __init__(self, config: Config, dataset: DatasetProcessor):
super().__init__(config, dataset)
# model params from files
self.emb_dim = self.config.get("model.emb_dim")
self.l1_flag = self.config.get("model.l1_flag")
self.p = self.config.get("model.p")
self.dropout = torch.nn.Dropout(p=self.p)
self.lstm = LSTMModel(self.emb_dim, n_layer=1)
self.prepare_embedding()
def prepare_embedding(self):
num_ent = self.dataset.num_entities()
num_rel = self.dataset.num_relations()
        num_tem = 32  # size of the temporal token vocabulary
        self.embedding: Dict[str, torch.nn.Embedding] = {}
self.embedding['ent'] = torch.nn.Embedding(num_ent, self.emb_dim)
self.embedding['rel'] = torch.nn.Embedding(num_rel, self.emb_dim)
self.embedding['tem'] = torch.nn.Embedding(num_tem, self.emb_dim)
self.embedding = nn.ModuleDict(self.embedding)
for _, emb in self.embedding.items():
torch.nn.init.xavier_uniform_(emb.weight)
            emb.weight.data.renorm_(p=2, dim=1, maxnorm=1)
def get_rseq(self, rel: torch.LongTensor, tem: torch.LongTensor):
r_e = self.embedding['rel'](rel)
r_e = r_e.unsqueeze(0).transpose(0, 1)
bs = tem.size(0)
tem_len = tem.size(1)
tem = tem.contiguous()
tem = tem.view(bs * tem_len)
token_e = self.embedding['tem'](tem)
token_e = token_e.view(bs, tem_len, self.emb_dim)
seq_e = torch.cat((r_e, token_e), 1)
hidden_tem = self.lstm(seq_e)
hidden_tem = hidden_tem[0, :, :]
rseq_e = hidden_tem
return rseq_e
def forward(self, samples: torch.Tensor):
h, r, t, tem = samples[:, 0].long(), samples[:, 1].long(), samples[:, 2].long(), samples[:, 3:].long()
h_e = self.embedding['ent'](h)
t_e = self.embedding['ent'](t)
rseq_e = self.get_rseq(r, tem)
h_e = self.dropout(h_e)
t_e = self.dropout(t_e)
rseq_e = self.dropout(rseq_e)
if self.l1_flag:
scores = torch.sum(torch.abs(h_e + rseq_e - t_e), 1)
else:
scores = torch.sum((h_e + rseq_e - t_e) ** 2, 1)
factors = {
"norm": (h_e,
t_e,
rseq_e)
}
return scores, factors
def fit(self, samples: torch.Tensor):
bs = samples.size(0)
dim = samples.size(1) // (1 + self.config.get("negative_sampling.num_samples"))
samples = samples.view(-1, dim)
scores, factor = self.forward(samples)
scores = scores.view(bs, -1)
return scores, factor
def predict(self, queries: torch.Tensor):
        assert torch.isnan(queries).sum(1).bool().all(), "Either head or tail should be absent."
bs = queries.size(0)
candidates = all_candidates_of_ent_queries(queries, self.dataset.num_entities())
scores, _ = self.forward(candidates)
scores = scores.view(bs, -1)
return scores
# reference: https://github.com/bsantraigi/TA_TransE/blob/master/model.py
@BaseModel.register(name="ta_distmult")
class TADistmultModel(BaseModel):
def __init__(self, config: Config, dataset: DatasetProcessor):
super().__init__(config, dataset)
# model params from files
self.emb_dim = self.config.get("model.emb_dim")
self.l1_flag = self.config.get("model.l1_flag")
self.p = self.config.get("model.p")
self.dropout = torch.nn.Dropout(p=self.p)
self.lstm = LSTMModel(self.emb_dim, n_layer=1)
self.criterion = nn.Softplus()
self.prepare_embedding()
def prepare_embedding(self):
num_ent = self.dataset.num_entities()
num_rel = self.dataset.num_relations()
        num_tem = 32  # size of the temporal token vocabulary
        self.embedding: Dict[str, torch.nn.Embedding] = {}
self.embedding['ent'] = torch.nn.Embedding(num_ent, self.emb_dim)
self.embedding['rel'] = torch.nn.Embedding(num_rel, self.emb_dim)
self.embedding['tem'] = torch.nn.Embedding(num_tem, self.emb_dim)
self.embedding = nn.ModuleDict(self.embedding)
for _, emb in self.embedding.items():
torch.nn.init.xavier_uniform_(emb.weight)
            emb.weight.data.renorm_(p=2, dim=1, maxnorm=1)
def forward(self, samples: torch.Tensor):
h, r, t, tem = samples[:, 0].long(), samples[:, 1].long(), samples[:, 2].long(), samples[:, 3:].long()
h_e = self.embedding['ent'](h)
t_e = self.embedding['ent'](t)
rseq_e = self.get_rseq(r, tem)
h_e = self.dropout(h_e)
t_e = self.dropout(t_e)
rseq_e = self.dropout(rseq_e)
scores = torch.sum(h_e * t_e * rseq_e, 1, False)
factors = {
"norm": (self.embedding['ent'].weight,
self.embedding['rel'].weight,
self.embedding['tem'].weight)
}
return scores, factors
def get_rseq(self, rel, tem):
r_e = self.embedding['rel'](rel)
r_e = r_e.unsqueeze(0).transpose(0, 1)
bs = tem.size(0)
tem_len = tem.size(1)
tem = tem.contiguous()
tem = tem.view(bs * tem_len)
token_e = self.embedding['tem'](tem)
token_e = token_e.view(bs, tem_len, self.emb_dim)
seq_e = torch.cat((r_e, token_e), 1)
hidden_tem = self.lstm(seq_e)
hidden_tem = hidden_tem[0, :, :]
rseq_e = hidden_tem
return rseq_e
def fit(self, samples: torch.Tensor):
bs = samples.size(0)
dim = samples.size(1) // (1 + self.config.get("negative_sampling.num_samples"))
samples = samples.view(-1, dim)
scores, factor = self.forward(samples)
scores = scores.view(bs, -1)
return scores, factor
def predict(self, queries: torch.Tensor):
        assert torch.isnan(queries).sum(1).bool().all(), "Either head or tail should be absent."
bs = queries.size(0)
candidates = all_candidates_of_ent_queries(queries, self.dataset.num_entities())
scores, _ = self.forward(candidates)
scores = scores.view(bs, -1)
return scores
# app/ml/objects/feature/enum.py
from enum import Enum

class Feature(str, Enum):
MINIMUM = "MINIMUM"
MAXIMUM = "MAXIMUM"
VARIANCE = "VARIANCE"
ABS_ENERGY = "ABS_ENERGY"
MEAN = "MEAN"
MEDIAN = "MEDIAN"
SKEWNESS = "SKEWNESS"
KURTOSIS = "KURTOSIS" | 22.181818 | 29 | 0.627049 | 26 | 244 | 5.807692 | 0.576923 | 0.119205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.258197 | 244 | 11 | 30 | 22.181818 | 0.834254 | 0 | 0 | 0 | 0 | 0 | 0.236735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
# tools/leetcode.071.Simplify Path/leetcode.071.Simplify Path.submission9.py
class Solution:
# @param path, a string
# @return a string
def simplifyPath(self, path):
root = self.node('')
root.previous = root
p = [i for i in path.split('/') if i != '']
temp = root
for i in p:
if i == '..':
temp = temp.previous
temp.next = None
elif i != '.':
temp.next = self.node(i)
temp.next.previous = temp
temp = temp.next
temp = root.next
res = ''
while temp != None:
res += '/'+temp.val
temp = temp.next
if not res:
return '/'
return res
class node:
def __init__(self,x):
self.val = x
self.previous = None
self.next = None
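The linked-list traversal above is equivalent to the more common stack idiom; `simplify_path_stack` below is an alternative standalone sketch, not part of the original submission:

```python
def simplify_path_stack(path: str) -> str:
    # Same logic with a list used as a stack instead of a linked list:
    # '..' pops a directory, '' and '.' are ignored, anything else is pushed.
    stack = []
    for part in path.split('/'):
        if part == '..':
            if stack:
                stack.pop()
        elif part not in ('', '.'):
            stack.append(part)
    return '/' + '/'.join(stack)

print(simplify_path_stack('/a/./b/../../c/'))  # -> /c
```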
# hw06_train.py
"""hw06_training.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1bjAeUIVjt9W8mASk8vw1y2SbuBYuQdlG
"""
!pip install reports
import PIL.Image as Image, requests, urllib, random
import argparse, json, PIL.Image, reports, os, pickle
from requests.exceptions import ConnectionError, ReadTimeout, TooManyRedirects, MissingSchema, InvalidURL
import numpy, torch, cv2, skimage
import skimage.io as io
from torch import nn
import torch.nn.functional as F
from pycocotools.coco import COCO
import glob
from torch.utils.data import DataLoader,Dataset
import torchvision.transforms as tvt
import matplotlib.pyplot as plt
from torchsummary import summary
import pandas as pd
# Mount google drive to run on Colab
#from google.colab import drive
#drive.mount('/content/drive')
#%cd "/content/drive/My Drive/Colab Notebooks/DeepLearning/hw06/"
#!pwd
#!ls
root_path = "/content/drive/My Drive/Colab Notebooks/DeepLearning/hw06/"
coco_json_path = "annotations/instances_train2017.json"
class_list = ["person", "dog", "hot dog"]
coco = COCO(coco_json_path)
class build_annotations:
    """
    Builds `image_annotations.p`, indexed by the image filename without the
    '.jpg' extension (i.e. the zero-padded string form of the image ID).
    Each entry holds:
      'imageID':     the integer image ID assigned within COCO
      'num_objects': the number of objects in the image (at most max_instances)
      'bbox':        dict keyed by the string index '0'..'4' of each instance,
                     in order of decreasing area, with the bounding-box array
                     as value
      'labels':      dict with the same keys, whose values are the integer
                     COCO category IDs of the instances
    """
def __init__(self, root_path, class_list, max_instances = 5):
self.root_path = root_path
self.image_dir = root_path + '*.jpg'
self.cat_IDs = coco.getCatIds(catNms=class_list)
self.max_instances = max_instances
def __call__(self):
all_annotations = {}
g = glob.glob(self.image_dir)
for i, filename in enumerate(g):
filename = filename.split('/')[-1]
img_ID = int(filename.split('.')[0])
ann_Ids = coco.getAnnIds(imgIds=img_ID, catIds = self.cat_IDs, iscrowd = False)
            num_objects = min(len(ann_Ids), self.max_instances)  # cap at max_instances objects per image
anns = coco.loadAnns(ann_Ids)
indices = sort_by_area(anns, self.max_instances)
bbox = {}
label = {}
i = 0
for n in indices:
instance = anns[n]
bbox[str(i)] = instance['bbox']
label[str(i)] = instance['category_id']
i+=1
annotation= {"imageID":img_ID, "num_objects":i, 'bbox': bbox, 'labels':label}
all_annotations[filename.split('.')[0]] = annotation
ann_path = self.root_path + "image_annotations.p"
pickle.dump( all_annotations, open(ann_path, "wb" ) )
print('Annotations saved in:', ann_path)
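A hypothetical entry of the resulting `image_annotations.p` dict (all values invented for illustration; real COCO IDs, boxes and category IDs will differ):

```python
# Hypothetical annotation entry, keyed by the zero-padded filename stem.
example = {
    '000000000123': {
        'imageID': 123,
        'num_objects': 2,
        'bbox': {'0': [10.0, 20.0, 50.0, 80.0],   # [x, y, w, h], largest area first
                 '1': [5.0, 5.0, 12.0, 9.0]},
        'labels': {'0': 1, '1': 18},              # integer COCO category IDs
    }
}
assert example['000000000123']['num_objects'] == len(example['000000000123']['bbox'])
```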
def sort_by_area(anns, num):
areas = numpy.zeros(len(anns))
for i, instance in enumerate(anns):
areas[i] = instance['area']
indices = numpy.argsort(areas)[-num:]
return indices[::-1]
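`sort_by_area` keeps the indices of the `num` largest annotations, largest first, via the argsort-and-reverse idiom; a standalone check with made-up areas:

```python
import numpy as np

# Keep the 2 largest of three annotations, largest area first.
anns = [{'area': 10.0}, {'area': 50.0}, {'area': 30.0}]
areas = np.array([a['area'] for a in anns])
indices = np.argsort(areas)[-2:][::-1]
print(indices)  # -> [1 2]
```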
class your_dataset_class(Dataset):
def __init__(self, path, class_list, coco):
self.class_list = class_list
self.folder = path
self.coco = coco
self.catIds = coco.getCatIds(catNms = class_list)
self.imgIds = coco.getImgIds(catIds = self.catIds)
self.categories = coco.loadCats(self.catIds)
#create label dictionary
labeldict = {}
for idx, in_class in enumerate(self.class_list):
for c in self.categories:
if c["name"] == in_class:
labeldict[c['id']] = idx
self.coco_labeldict = labeldict
#if first time running, index the image dataset to make annotation .p file
annotation_path = path + 'image_annotations.p'
if os.path.exists(annotation_path) ==False:
print("Indexing dataset to compile annotations...")
dataset_annotations = build_annotations(path, class_list)
dataset_annotations()
self.data_anns = pickle.load(open(annotation_path, "rb" ))
def __len__(self):
        g = glob.glob(self.folder + '*.jpg')
return (len(g))
def get_imagelabel(self, img_path, sc, max_objects = 5): #img_path = file location, sc = scale [0]: width, [1]: height
saved_filename = os.path.basename(img_path)
filename = saved_filename.split('.jpg')[0]
        image_id = int(filename)
bbox_tensor = torch.zeros(max_objects, 4, dtype=torch.uint8)
label_tensor = torch.zeros(max_objects+1, dtype=torch.uint8) + len(self.class_list)
target_obj = self.data_anns[filename]
num_objects = target_obj['num_objects']
for n in range(num_objects):
[x,y,w,h] = target_obj['bbox'][str(n)]
bbox = [sc[1]*y, x*sc[0], sc[1]*(h), sc[0]*(w)]
bbox_tensor[n,:] = torch.tensor(numpy.array(bbox))
cat_label = target_obj['labels'][str(n)]
data_label = self.coco_labeldict[cat_label]
label_tensor[n] = torch.tensor(data_label)
return bbox_tensor, label_tensor
def __getitem__(self, item):
        g = glob.glob(self.folder + '*.jpg')
im = PIL.Image.open(g[item])
im, scale_fac = rescale_factor(im, 128) #overwrite old image with new resized image of size 256
W, H = im.size
transformer = tvt.Compose([tvt.ToTensor(), tvt.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
im_array = torch.randint(0, 256, (3, H, W)).type(torch.uint8)
for i in range(H):
for j in range(W):
im_array[:, j, i] = torch.tensor(im.getpixel((i, j)))
im_scaled = im_array / im_array.max() # scaled from 0-1
im_tf = transformer(numpy.transpose(im_scaled.numpy()))
num_classes = len(self.class_list)
bbox, label = self.get_imagelabel(g[item], scale_fac)
sample = {'im_ID': g[item],
'scale':scale_fac,
'image': im_tf,
'bbox' : bbox,
'label': label}
return sample
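`get_imagelabel` reorders a COCO `[x, y, w, h]` box into `[y, x, h, w]` while scaling each coordinate by the resize factors `sc = [width_factor, height_factor]`. The arithmetic in isolation (`scale_bbox` is an illustrative helper, not part of the file):

```python
def scale_bbox(bbox, sc):
    # COCO [x, y, w, h] -> resized-image [y, x, h, w]:
    # y and h scale with the height factor, x and w with the width factor.
    x, y, w, h = bbox
    return [sc[1] * y, sc[0] * x, sc[1] * h, sc[0] * w]

print(scale_bbox([100, 40, 60, 20], [0.5, 0.25]))  # -> [10.0, 50.0, 5.0, 30.0]
```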
def rescale_factor(im_original, std_size):
raw_width, raw_height = im_original.size
im = im_original.resize((std_size, std_size), Image.BOX)
w_factor = std_size/raw_width
h_factor = std_size/raw_height
return (im, [w_factor, h_factor])
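The scale factors returned here map original pixel coordinates into the resized square image; with invented dimensions (no PIL needed, the factors are pure arithmetic):

```python
# E.g. a 640x480 image resized to the 128x128 standard size:
raw_width, raw_height, std_size = 640, 480, 128
w_factor = std_size / raw_width    # 0.2
h_factor = std_size / raw_height   # ~0.267
print(w_factor, h_factor)
```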
train_path = os.path.join(root_path, "Train/")
val_path = os.path.join(root_path, "Val/")
batch_size = 64
train_dataset = your_dataset_class(train_path, class_list, coco)
train_data_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                                batch_size=batch_size,
                                                shuffle=True,
                                                num_workers=2,
                                                drop_last=True)
#val_dataset = your_dataset_class(val_path, class_list)
#val_data_loader = torch.utils.data.DataLoader(dataset=val_dataset,
#                                              batch_size=batch_size,
#                                              shuffle=True,
#                                              num_workers=4,
#                                              drop_last=True)
class SkipBlock(nn.Module):
    def __init__(self, in_ch, out_ch, downsample=False):
        super().__init__()
        self.in_ch = in_ch
        self.out_ch = out_ch
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1)
        # conv2 consumes conv1's output, so its input channels are out_ch
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bnorm1 = nn.BatchNorm2d(out_ch)
        self.bnorm2 = nn.BatchNorm2d(out_ch)
        self.downsample_tf = downsample
        self.downsampler = nn.Conv2d(in_ch, out_ch, 1, stride=2)

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bnorm1(out)
        out = F.relu(out)
        if self.downsample_tf:
            # halve the spatial resolution of both branches before adding
            identity = self.downsampler(identity)
            out = self.downsampler(out)
            out += identity
        else:
            out = self.conv2(out)
            out = self.bnorm2(out)
            out = F.relu(out)
            out += identity
        return out
class MechEnet(nn.Module):
    def __init__(self, num_classes, depth):
        super().__init__()
        self.depth = depth // 8
        self.conv_initial = nn.Conv2d(3, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        # all residual layers are 64 channels deep
        self.skipblock64_1 = nn.ModuleList()
        for i in range(self.depth):
            self.skipblock64_1.append(SkipBlock(64, 64, downsample=False))
        self.skip_downsample = SkipBlock(64, 64, downsample=True)
        self.skipblock64_2 = nn.ModuleList()
        for i in range(self.depth):
            self.skipblock64_2.append(SkipBlock(64, 64, downsample=False))
        self.fc_seqn = nn.Sequential(
            nn.Linear(64 * 4 * 4, 3000),
            nn.ReLU(inplace=True),
            nn.Linear(3000, 3000),
            nn.ReLU(inplace=True),
            # 8x8 yolo cells * 5 anchor boxes * (1 objectness + 4 bbox + 3 classes)
            nn.Linear(3000, 8 * 8 * (5 * (5 + 3)))
        )

    def forward(self, x):
        x = self.pool(F.relu(self.conv_initial(x)))   # 128 -> 64 for a 128x128 input
        x1 = self.skip_downsample(x)                  # 64 -> 32
        for skips in self.skipblock64_1[self.depth // 4:]:
            x1 = skips(x1)
        x1 = self.skip_downsample(x1)                 # 32 -> 16
        for skips in self.skipblock64_1[:self.depth // 4]:
            x1 = skips(x1)
        x1 = self.skip_downsample(x1)                 # 16 -> 8
        for skips in self.skipblock64_2[self.depth // 4:]:
            x1 = skips(x1)
        x1 = self.skip_downsample(x1)                 # 8 -> 4
        for skips in self.skipblock64_2[:self.depth // 4]:
            x1 = skips(x1)
        x1 = x1.view(x1.size(0), -1)
        x1 = self.fc_seqn(x1)
        return x1
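The fully connected head's input size of 64*4*4 follows from the feature-map geometry: the 128-pixel input is halved once by the max-pool and four more times by the stride-2 `skip_downsample` blocks. A quick sanity check of that arithmetic (assuming the 128x128 input used below):

```python
def feature_map_side(im_size, num_halvings):
    # spatial side length after repeated stride-2 reductions
    side = im_size
    for _ in range(num_halvings):
        side //= 2
    return side

# one max-pool + four stride-2 SkipBlocks -> five halvings
side = feature_map_side(128, 5)          # 4
fc_in = 64 * side * side                 # 1024, matching nn.Linear(64*4*4, 3000)
fc_out = 8 * 8 * (5 * (5 + 3))           # 2560-element flattened yolo tensor
```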
class IoULoss(torch.nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(IoULoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        # tensor shape: [b, yolo_cell, anchor, yolo_vector];
        # the flattened network output is [b, num_cells * num_anchors * 8]
        b_size = inputs.shape[0]
        pred_unscrm = inputs.view(b_size, 8 ** 2, 5, -1)
        targ_unscrm = targets.view(b_size, 8 ** 2, 5, -1)
        pred_bbox = pred_unscrm[:, :, :, 1:5]
        targ_bbox = targ_unscrm[:, :, :, 1:5]
        # NOTE: this is an elementwise product/sum surrogate for IoU, not the
        # exact geometric intersection-over-union of the two boxes
        intersection = targ_bbox * pred_bbox
        union = targ_bbox + pred_bbox
        J_idx = torch.div(intersection, union)
        J_dist = 1.0 - J_idx
        return torch.sum(J_dist)
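The loss above uses an elementwise product/sum ratio of the box parameters rather than geometric IoU. For comparison, an exact scalar IoU for two axis-aligned `[x1, y1, x2, y2]` boxes looks like this (a reference sketch, not a drop-in replacement for the loss):

```python
def box_iou(a, b):
    """Exact intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

box_iou([0, 0, 2, 2], [0, 0, 2, 2])   # 1.0 (identical boxes)
box_iou([0, 0, 1, 1], [2, 2, 3, 3])   # 0.0 (disjoint boxes)
```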
## significant code is adapted from Prof. Kak's multi-instance detector
def run_code_for_training(net, lrate, mom, epochs, im_size, max_objects, yolo_interval=16):
    print('Beginning training for', epochs, 'epochs...')
    criterion = torch.nn.MSELoss()
    #criterion = IoULoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=lrate, momentum=mom)
    loss_tracker = []
    num_cells_image_height = im_size // yolo_interval
    num_cells_image_width = im_size // yolo_interval
    num_yolo_cells = num_cells_image_height * num_cells_image_width
    print_iteration = 3
    num_anchor_boxes = 5
    # [batch, 8*8 cells, 5 anchors, 1 objectness + 4 bbox + 3 classes]
    yolo_tensor = torch.zeros(batch_size, num_yolo_cells, num_anchor_boxes, 1 * 5 + 3)

    class AnchorBox:
        def __init__(self, AR, topleft, abox_h, abox_w, abox_idx):
            self.AR = AR
            self.topleft = topleft
            self.abox_h = abox_h
            self.abox_w = abox_w
            self.abox_idx = abox_idx
    device = torch.device("cuda:0")
    # the anchor-box grids are fixed, so build them once instead of per batch
    anchor_boxes_1_1 = [[AnchorBox(1/1, (i*yolo_interval, j*yolo_interval), yolo_interval, yolo_interval, 0)
                         for i in range(num_cells_image_height)]
                        for j in range(num_cells_image_width)]
    anchor_boxes_1_3 = [[AnchorBox(1/3, (i*yolo_interval, j*yolo_interval), yolo_interval, 3*yolo_interval, 1)
                         for i in range(num_cells_image_height)]
                        for j in range(num_cells_image_width)]
    anchor_boxes_3_1 = [[AnchorBox(3/1, (i*yolo_interval, j*yolo_interval), 3*yolo_interval, yolo_interval, 2)
                         for i in range(num_cells_image_height)]
                        for j in range(num_cells_image_width)]
    anchor_boxes_1_5 = [[AnchorBox(1/5, (i*yolo_interval, j*yolo_interval), yolo_interval, 5*yolo_interval, 3)
                         for i in range(num_cells_image_height)]
                        for j in range(num_cells_image_width)]
    anchor_boxes_5_1 = [[AnchorBox(5/1, (i*yolo_interval, j*yolo_interval), 5*yolo_interval, yolo_interval, 4)
                         for i in range(num_cells_image_height)]
                        for j in range(num_cells_image_width)]
    for epoch in range(epochs):
        print('\nEpoch %d training...' % (epoch + 1))
        running_loss = 0.0
        for i, data in enumerate(train_data_loader):
            sample_batch = data['im_ID']
            im_tensor = data["image"]
            target_reg = data["bbox"].type(torch.FloatTensor)
            target_clf = data["label"].type(torch.LongTensor)
            optimizer.zero_grad()
            im_tensor = im_tensor.to(device)
            target_reg = target_reg.to(device)
            target_clf = target_clf.to(device)
            # reset the target tensor each batch so stale entries do not leak
            yolo_tensor = torch.zeros(batch_size, num_yolo_cells, num_anchor_boxes,
                                      1 * 5 + 3).to(device)
            obj_centers = {ibx: {idx: None for idx in range(max_objects)}
                           for ibx in range(im_tensor.shape[0])}
            # Build the yolo target tensor from the bounding-box and label tensors
            for b in range(im_tensor.shape[0]):           # batch index
                for idx in range(max_objects):            # each object in the target tensor
                    height_center_bb = (target_reg[b][idx][1].item() + target_reg[b][idx][3].item()) // 2
                    width_center_bb = (target_reg[b][idx][0].item() + target_reg[b][idx][2].item()) // 2
                    obj_bb_height = target_reg[b][idx][3].item() - target_reg[b][idx][1].item()
                    obj_bb_width = target_reg[b][idx][2].item() - target_reg[b][idx][0].item()
                    obj_label = target_clf[b][idx].item()
                    if obj_label == 13:
                        obj_label = 4
                    eps = 1e-8
                    AR = float(obj_bb_height + eps) / float(obj_bb_width + eps)
                    cell_row_idx = int(height_center_bb // yolo_interval)   # i coordinate
                    cell_col_idx = int(width_center_bb // yolo_interval)    # j coordinate
                    if AR <= 0.2:
                        anchbox = anchor_boxes_1_5[cell_row_idx][cell_col_idx]
                    elif AR <= 0.5:
                        anchbox = anchor_boxes_1_3[cell_row_idx][cell_col_idx]
                    elif AR <= 1.5:
                        anchbox = anchor_boxes_1_1[cell_row_idx][cell_col_idx]
                    elif AR <= 4:
                        anchbox = anchor_boxes_3_1[cell_row_idx][cell_col_idx]
                    else:
                        anchbox = anchor_boxes_5_1[cell_row_idx][cell_col_idx]
                    bh = float(obj_bb_height) / float(yolo_interval)
                    bw = float(obj_bb_width) / float(yolo_interval)
                    obj_center_x = float(target_reg[b][idx][2].item() + target_reg[b][idx][0].item()) / 2.0
                    obj_center_y = float(target_reg[b][idx][3].item() + target_reg[b][idx][1].item()) / 2.0
                    yolocell_center_i = cell_row_idx * yolo_interval + float(yolo_interval) / 2.0
                    yolocell_center_j = cell_col_idx * yolo_interval + float(yolo_interval) / 2.0
                    del_x = float(obj_center_x - yolocell_center_j) / yolo_interval
                    del_y = float(obj_center_y - yolocell_center_i) / yolo_interval
                    # [objectness, del_x, del_y, bh, bw, class0, class1, class2]
                    yolo_vector = [0, del_x, del_y, bh, bw, 0, 0, 0]
                    if obj_label < 3:
                        # the class one-hot occupies indices 5..7; the original
                        # wrote to index 4 + obj_label, clobbering bw for label 0
                        yolo_vector[5 + obj_label] = 1
                        yolo_vector[0] = 1
                    yolo_cell_index = cell_row_idx * num_cells_image_width + cell_col_idx
                    yolo_tensor[b, yolo_cell_index, anchbox.abox_idx] = torch.FloatTensor(yolo_vector)
            yolo_tensor_flattened = yolo_tensor.view(im_tensor.shape[0], -1)
            ## Forward pass
            pred_yolo = net(im_tensor)
            #pred_yolo = filter_yolo_tensor(pred_yolo, im_tensor.shape[0], num_yolo_cells, num_anchor_boxes)
            loss = criterion(pred_yolo, yolo_tensor_flattened)
            # the target carries no graph, so retain_graph is not needed here
            loss.backward()
            optimizer.step()
            sample_yolo_tensor = pred_yolo.view(im_tensor.shape[0], 8 ** 2, 5, -1)
            running_loss += loss.item()
            if (i + 1) % print_iteration == 0:
                average_loss = running_loss / float(print_iteration)
                print("[epoch: %d, batch: %5d] Avg batch loss: %.4f" % (epoch + 1, i + 1, average_loss))
                loss_tracker = numpy.append(loss_tracker, average_loss)
                running_loss = 0.0
    return loss_tracker, sample_yolo_tensor, sample_batch
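The target-encoding loop above reduces to a small amount of arithmetic per object: pick a cell from the box center, then express the center offset and box size in `yolo_interval` units. A standalone pure-Python sketch of that encoding (the coordinates here are hypothetical):

```python
def encode_object(cx, cy, bb_w, bb_h, yolo_interval=16, num_cells_w=8):
    # cell that owns the object's center
    cell_row = int(cy // yolo_interval)
    cell_col = int(cx // yolo_interval)
    # offset of the object's center from the cell's center, in cell units
    cell_cx = cell_col * yolo_interval + yolo_interval / 2.0
    cell_cy = cell_row * yolo_interval + yolo_interval / 2.0
    del_x = (cx - cell_cx) / yolo_interval
    del_y = (cy - cell_cy) / yolo_interval
    # box size in cell units, plus the flat cell index used for the yolo tensor
    bh = bb_h / yolo_interval
    bw = bb_w / yolo_interval
    cell_index = cell_row * num_cells_w + cell_col
    return cell_index, del_x, del_y, bh, bw

encode_object(40.0, 24.0, 32.0, 16.0)   # (10, 0.0, 0.0, 1.0, 2.0)
```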
def filter_yolo_tensor(yolo_tensor, batch_size, num_yolo_cells, aboxes):
    # zero out any yolo vector whose objectness score (index 0) is below 0.5
    zero_vec = torch.zeros(8)
    for b in range(batch_size):
        for num in range(num_yolo_cells):
            for an in range(aboxes):
                if yolo_tensor[b, num, an, 0] < 0.5:
                    yolo_tensor[b, num, an, :] = zero_vec
    return yolo_tensor
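The same objectness threshold can be exercised on plain nested lists, which makes the intent of `filter_yolo_tensor` easy to verify without torch (an illustrative sketch, not the training code path):

```python
def filter_vectors(vectors, threshold=0.5):
    # zero out any 8-element yolo vector whose objectness (index 0) is low
    return [v if v[0] >= threshold else [0.0] * len(v) for v in vectors]

vecs = [[0.9, 0.1, 0.2, 1.0, 1.0, 0, 1, 0],
        [0.2, 0.5, 0.5, 1.0, 1.0, 1, 0, 0]]
kept = filter_vectors(vecs)   # second vector is zeroed
```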
model = MechEnet(len(class_list), depth=64)
lrate = 5e-3
mom = 0.5
epochs = 1
yolo_int = 16
im_size = 128
max_objects = 5
savepath = "MechEnet.pth"
# resume from a previous checkpoint only if one exists
if os.path.exists(savepath):
    model.load_state_dict(torch.load(savepath))
if torch.cuda.is_available():
    device = torch.device("cuda:0")
    model.cuda()
summary(model, (3, im_size, im_size))
training_loss, yolo_sample, batches = run_code_for_training(model, lrate, mom, epochs,
                                                            im_size, max_objects,
                                                            yolo_interval=yolo_int)
#savepath = "/content/drive/My Drive/Colab Notebooks/DeepLearning/hw06/MechEnet.pth"
#torch.save(model.state_dict(), savepath)
#pd.DataFrame(training_loss).to_csv("/content/drive/My Drive/Colab Notebooks/DeepLearning/hw06/loss.csv")
fig, ax = plt.subplots()
ax.plot(training_loss)
ax.set_title('Training loss')
ax.set_ylabel('Loss')
ax.set_xlabel('Iterations')
## Visualize prediction on the training set
annotation_path = root_path + 'Train/' + 'image_annotations.p'
data_anns = pickle.load(open(annotation_path, "rb"))

def show_image(image_anns):
    # use the passed-in annotation dict rather than the global rand_img
    img = coco.loadImgs(image_anns['imageID'])[0]
    I = io.imread(img['coco_url'])
    if len(I.shape) == 2:
        I = skimage.color.gray2rgb(I)
    catIds = coco.getCatIds(catNms=class_list)
    annIds = coco.getAnnIds(imgIds=image_anns['imageID'], catIds=catIds, iscrowd=False)
    anns = coco.loadAnns(annIds)
    image = numpy.uint8(I)
    for i in range(image_anns['num_objects']):
        [x, y, w, h] = image_anns['bbox'][str(i)]
        label = image_anns['labels'][str(i)]
        image = cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (36, 255, 12), 2)
        class_label = coco_labels_inverse[label]
        image = cv2.putText(image, 'True ' + class_list[class_label], (int(x), int(y - 10)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (36, 255, 12), 2)
    return image

bdx = 37   # numpy.random.randint(0, 64)
img_loc = batches[bdx].split('/')[-1].split('.')[0]
rand_img = data_anns[img_loc]
image = show_image(rand_img)
# find the dataset index of this image before looking up its scale factors
# (the original read `scale` before sdx was assigned)
g = glob.glob(root_path + 'Train/*.jpg')
for i in range(len(g)):
    if img_loc in g[i]:
        sdx = i
scale = train_dataset.__getitem__(sdx)['scale']
im_considered = yolo_sample[bdx, :, :, :]
im_pred_anch = torch.zeros(64, 8)
cell_pred = []
num_cell_width = 8
yolo_interval = 16
for i in range(im_considered.shape[0]):
    # keep only the anchor box with the highest objectness in each cell
    AR = torch.argmax(im_considered[i, :, 0])
    im_pred_anch[i, :] = im_considered[i, AR, :]
    if im_pred_anch[i, 0] > 0.75:
        if AR == 0:
            w, h = 1, 1
        elif AR == 1:
            w, h = 1, 3
        elif AR == 2:
            w, h = 3, 1
        elif AR == 3:
            w, h = 1, 5
        else:
            w, h = 5, 1
        row_idx = i // num_cell_width
        col_idx = i % num_cell_width
        yolo_box = im_pred_anch[i, 1:5].cpu().detach().numpy()
        x1 = ((row_idx + 0.5) * yolo_interval) / scale[0]
        x2 = x1 + (w * yolo_interval) / scale[0]
        y1 = ((col_idx + 0.5) * yolo_interval) / scale[1]
        y2 = y1 + (h * yolo_interval) / scale[1]
        label = int(torch.argmax(im_pred_anch[i, 5:]))
        pred_label = 'Predicted ' + class_list[label]
        cell_pred = numpy.append(cell_pred, [pred_label, x1, y1, x2, y2])
        image = cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (255, 0, 0), 2)
        image = cv2.putText(image, pred_label, (int(x1), int(y1 - 10)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 0, 0), 2)
fig, ax = plt.subplots(1, 1, dpi=150)
ax.imshow(image)
ax.set_axis_off()
plt.axis('tight')
plt.show()
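Decoding a flat cell index back to grid coordinates, as done with `row_idx`/`col_idx` above, is just integer division and modulo on the 8-wide grid. A hypothetical standalone check (16-pixel cells on the resized image):

```python
def decode_cell(i, num_cell_width=8, yolo_interval=16):
    row_idx, col_idx = divmod(i, num_cell_width)
    # top-left corner of the cell in resized-image pixels
    return row_idx, col_idx, (col_idx * yolo_interval, row_idx * yolo_interval)

decode_cell(37)   # (4, 5, (80, 64))
```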
# --- webpie/Version.py (repo: webpie/webpie, license: BSD-3-Clause) ---
Version = "5.6.5"
if __name__ == "__main__":
    print(Version)
# --- sdk/communication/azure-communication-networktraversal/.../_communication_network_traversal_client_enums.py (repo: vincenttran-msft/azure-sdk-for-python, license: MIT) ---
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from enum import Enum
from six import with_metaclass
from azure.core import CaseInsensitiveEnumMeta
class RouteType(with_metaclass(CaseInsensitiveEnumMeta, str, Enum)):
    """The routing methodology to where the ICE server will be located from the client. "any" will
    have higher reliability while "nearest" will have lower latency. It is recommended to default
    to the "any" routing method unless there are specific scenarios in which minimizing latency is
    critical.
    """

    ANY = "any"
    NEAREST = "nearest"
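`CaseInsensitiveEnumMeta` comes from `azure.core`; with the standard library alone, a similar case-insensitive lookup can be sketched via the `_missing_` hook (an illustrative stand-in, not the Azure implementation):

```python
from enum import Enum

class RouteTypeDemo(str, Enum):
    ANY = "any"
    NEAREST = "nearest"

    @classmethod
    def _missing_(cls, value):
        # fall back to a case-insensitive match on the member values
        if isinstance(value, str):
            for member in cls:
                if member.value == value.lower():
                    return member
        return None

RouteTypeDemo("ANY")   # RouteTypeDemo.ANY
```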
# --- ex009a.py (repo: emerfelippini/Curso_em_video-Aulas_Python, license: MIT) ---
a = int(input('Enter a number to see its multiplication table: '))
n1 = a*1
n2 = a*2
n3 = a*3
n4 = a*4
n5 = a*5
n6 = a*6
n7 = a*7
n8 = a*8
n9 = a*9
n10 = a*10
print('A sua tabuada é')
print('{} x 1 = {}'.format(a, n1))
print('{} x 2 = {}'.format(a, n2))
print('{} x 3 = {}'.format(a, n3))
print('{} x 4 = {}'.format(a, n4))
print('{} x 5 = {}'.format(a, n5))
print('{} x 6 = {}'.format(a, n6))
print('{} x 7 = {}'.format(a, n7))
print('{} x 8 = {}'.format(a, n8))
print('{} x 9 = {}'.format(a, n9))
print('{} x 10 = {}'.format(a, n10))
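The ten near-identical lines above can be collapsed into a single loop; a compact version that builds the same table (returning strings here so the output is easy to check):

```python
def multiplication_table(a, upto=10):
    # same '{} x {} = {}' formatting as the explicit prints above
    return ['{} x {} = {}'.format(a, i, a * i) for i in range(1, upto + 1)]

for line in multiplication_table(7):
    print(line)
```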
# --- qiubai/qiubai/items.py (repo: zouyuwuse/Spiders, license: Apache-2.0) ---
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class QiubaiItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    _id = scrapy.Field()
    avatar = scrapy.Field()
    profile_link = scrapy.Field()
    name = scrapy.Field()
    gender = scrapy.Field()
    age = scrapy.Field()
    content = scrapy.Field()
    content_link = scrapy.Field()
    up = scrapy.Field()
    comment_num = scrapy.Field()
# --- python/interpret-core/interpret/visual/test/test_interactive.py (repo: prateekiiest/interpret, license: MIT) ---
# Copyright (c) 2019 Microsoft Corporation
# Distributed under the MIT software license
from ..interactive import set_visualize_provider, get_visualize_provider
from ...provider import PreserveProvider
def test_provider_properties():
    provider = PreserveProvider()
    old_provider = get_visualize_provider()
    set_visualize_provider(provider)
    assert get_visualize_provider() == provider
    set_visualize_provider(old_provider)
    assert get_visualize_provider() == old_provider
# --- d3rlpy/envs/__init__.py (repo: ningyixue/AIPI530_Final_Project, license: MIT) ---
from .batch import AsyncBatchEnv, BatchEnv, SyncBatchEnv
from .wrappers import Atari, ChannelFirst, Monitor
__all__ = [
    "BatchEnv",
    "SyncBatchEnv",
    "AsyncBatchEnv",
    "ChannelFirst",
    "Atari",
    "Monitor",
]
# --- tests/assertions.py (repo: Smosker/vcrpy, license: MIT) ---
import json
def assert_cassette_empty(cass):
    assert len(cass) == 0
    assert cass.play_count == 0


def assert_cassette_has_one_response(cass):
    assert len(cass) == 1
    assert cass.play_count == 1


def assert_is_json(a_string):
    try:
        json.loads(a_string.decode('utf-8'))
    except Exception:
        assert False
    assert True
# --- platecurie/__init__.py (repo: craigmillernz/PlateCurie, license: MIT) ---
# Copyright 2019 Pascal Audet
#
# This file is part of PlateCurie.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""
PlateFlex is software for estimating the effective elastic thickness of the lithosphere
from the inversion of flexural isostatic response functions calculated from a wavelet
analysis of gravity and topography data.
Licence
-------
Copyright 2019 Pascal Audet
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Installation
------------
Dependencies
++++++++++++
The current version was developed using **Python3.7** \
Also, the following package is required:
- `plateflex <https://github.com/paudetseis/PlateFlex>`_
See below for full installation details.
Conda environment
+++++++++++++++++
We recommend creating a custom ``conda`` environment
where ``platecurie`` can be installed along with its dependencies.
.. sourcecode:: bash
conda create -n curie python=3.7 fortran-compiler numpy pymc3 matplotlib seaborn -c conda-forge
Activate the newly created environment:
.. sourcecode:: bash
conda activate curie
Install the required ``plateflex`` software
(see `here <https://paudetseis.github.io/PlateFlex/getting_started.html#installing-from-source>`_)
Installing from source
++++++++++++++++++++++
- Clone the repository:
.. sourcecode:: bash
git clone https://github.com/paudetseis/PlateCurie.git
cd PlateCurie
- Install using ``pip``
.. sourcecode:: bash
pip install .
"""
# -*- coding: utf-8 -*-
from . import estimate
from . import plotting
from .classes import MagGrid, ZtGrid, SigZtGrid, Project
from plateflex.cpwt import conf_cpwt as cf_wt
def set_conf_cpwt(k0=5.336):
    cf_wt.k0 = k0


set_conf_cpwt()
def get_conf_cpwt():
    """
    Print the global variable that controls the spatio-spectral resolution of the
    wavelet transform

    .. rubric:: Example

    >>> import platecurie
    >>> platecurie.get_conf_cpwt()
    Wavelet parameter used in platecurie.cpwt:
    ------------------------------------------
    [Internal wavenumber] k0 (float):     5.336

    """
    print('\n'.join((
        'Wavelet parameter used in platecurie.cpwt:',
        '------------------------------------------',
        '[Internal wavenumber] k0 (float): {0:.3f}'.format(cf_wt.k0))))
# --- lightning_plus/api_basebone/app/client_urls.py (repo: twocucao/lightning-plus, license: MIT) ---
from lightning_plus.api_basebone.drf.routers import SimpleRouter
from .upload import views as upload_views
router = SimpleRouter(custom_base_name="basebone-app")
router.register("upload", upload_views.UploadViewSet)
urlpatterns = router.urls
# --- tests/test_blog.py (repo: Geerocktricks/Blogs4Keeps, license: MIT) ---
from app.models import Blog, User
from app import db
import unittest


class BlogModelTest(unittest.TestCase):
    # the methods below take self and call unittest assertions, so they are
    # wrapped in a TestCase subclass (the class line was missing here); the
    # date/time values are stored as strings because the bare 1-12-2019 and
    # 16:04 literals were not valid Python

    def setUp(self):
        self.user_Gerald = User(username='Gerald', password='potato',
                                email='geerockface4@gmail.com')
        self.new_blog = Blog(id=12345, title='Full stack development',
                             content='Is it safe to go the full-stack way or better the Android path',
                             category="technology", date='1-12-2019', time='16:04',
                             user=self.user_Gerald)

    def tearDown(self):
        Blog.query.delete()
        User.query.delete()

    def test_check_instance_variables(self):
        self.assertEquals(self.new_blog.id, 12345)
        self.assertEquals(self.new_blog.title, 'Full stack development')
        self.assertEquals(self.new_blog.content, "Is it safe to go the full-stack way or better the Android path")
        self.assertEquals(self.new_blog.category, 'technology')
        self.assertEquals(self.new_blog.date, '1-12-2019')
        self.assertEquals(self.new_blog.time, '16:04')
        self.assertEquals(self.new_blog.user, self.user_Gerald)

    def test_save_blog(self):
        self.new_blog.save_blog()
        self.assertTrue(len(Blog.query.all()) > 0)

    def test_get_blog_by_id(self):
        self.new_blog.save_blog()
        got_blogs = Blog.get_blogs(12345)
        self.assertTrue(len(got_blogs) == 1)
# --- 693.binary-number-with-alternating-bits.py (repo: Lonitch/hackerRank, license: MIT) ---
#
# @lc app=leetcode id=693 lang=python3
#
# [693] Binary Number with Alternating Bits
#
# https://leetcode.com/problems/binary-number-with-alternating-bits/description/
#
# algorithms
# Easy (58.47%)
# Likes: 359
# Dislikes: 74
# Total Accepted: 52.4K
# Total Submissions: 89.1K
# Testcase Example: '5'
#
# Given a positive integer, check whether it has alternating bits: namely, if
# two adjacent bits will always have different values.
#
# Example 1:
#
# Input: 5
# Output: True
# Explanation:
# The binary representation of 5 is: 101
#
#
#
# Example 2:
#
# Input: 7
# Output: False
# Explanation:
# The binary representation of 7 is: 111.
#
#
#
# Example 3:
#
# Input: 11
# Output: False
# Explanation:
# The binary representation of 11 is: 1011.
#
#
#
# Example 4:
#
# Input: 10
# Output: True
# Explanation:
# The binary representation of 10 is: 1010.
#
#
#
# @lc code=start
class Solution:
    def hasAlternatingBits(self, n: int) -> bool:
        b = list(bin(n)[2:])
        if len(b) < 2:
            return True
        else:
            return b[0] != b[1] and len(set(b[::2])) == 1 and len(set(b[1::2])) == 1
# @lc code=end
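A common alternative (not the submitted solution) avoids the string round-trip with a bit trick: if the bits of `n` alternate, then `n ^ (n >> 1)` is a block of consecutive ones, and `x & (x + 1) == 0` tests exactly for that shape.

```python
def has_alternating_bits(n: int) -> bool:
    # e.g. 101 ^ 010 = 111: the XOR is all ones iff adjacent bits always differ.
    x = n ^ (n >> 1)
    # x has the form 0b11...1 exactly when x & (x + 1) == 0.
    return x & (x + 1) == 0

# has_alternating_bits(5)  -> True   (101)
# has_alternating_bits(7)  -> False  (111)
# has_alternating_bits(10) -> True   (1010)
```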
| 17.358209 | 80 | 0.625107 | 162 | 1,163 | 4.487654 | 0.561728 | 0.077029 | 0.110041 | 0.18707 | 0.371389 | 0.255846 | 0.255846 | 0 | 0 | 0 | 0 | 0.069507 | 0.233018 | 1,163 | 66 | 81 | 17.621212 | 0.745516 | 0.693035 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f731caeaa20126dfbac6f2f856ed43cf1d1b4fd9 | 3,511 | py | Python | utest/api/test_exposed_api.py | rdagum/robotframework | b7069d505374e9f09a140ed5a9727d2a40716446 | [
"ECL-2.0",
"Apache-2.0"
] | 7,073 | 2015-01-01T17:19:16.000Z | 2022-03-31T22:01:29.000Z | utest/api/test_exposed_api.py | imust6226/robotframework | 08c56fef2ebc64d682c7f99acd77c480d8d0e028 | [
"ECL-2.0",
"Apache-2.0"
] | 2,412 | 2015-01-02T09:29:05.000Z | 2022-03-31T13:10:46.000Z | utest/api/test_exposed_api.py | rticau/robotframework | 33ee46dfacd5173c0a38d89c1a60abf6a747c8c0 | [
"ECL-2.0",
"Apache-2.0"
] | 2,298 | 2015-01-03T02:47:15.000Z | 2022-03-31T02:00:16.000Z | import unittest
from os.path import join
from robot import api, model, parsing, reporting, result, running
from robot.api import parsing as api_parsing
from robot.utils.asserts import assert_equal, assert_true
class TestExposedApi(unittest.TestCase):

    def test_execution_result(self):
        assert_equal(api.ExecutionResult, result.ExecutionResult)

    def test_test_suite(self):
        assert_equal(api.TestSuite, running.TestSuite)

    def test_result_writer(self):
        assert_equal(api.ResultWriter, reporting.ResultWriter)

    def test_visitors(self):
        assert_equal(api.SuiteVisitor, model.SuiteVisitor)
        assert_equal(api.ResultVisitor, result.ResultVisitor)

    def test_deprecated_parsing(self):
        assert_equal(api.get_model, parsing.get_model)
        assert_equal(api.get_resource_model, parsing.get_resource_model)
        assert_equal(api.get_tokens, parsing.get_tokens)
        assert_equal(api.get_resource_tokens, parsing.get_resource_tokens)
        assert_equal(api.Token, parsing.Token)

    def test_parsing_getters(self):
        assert_equal(api_parsing.get_model, parsing.get_model)
        assert_equal(api_parsing.get_resource_model, parsing.get_resource_model)
        assert_equal(api_parsing.get_tokens, parsing.get_tokens)
        assert_equal(api_parsing.get_resource_tokens, parsing.get_resource_tokens)

    def test_parsing_token(self):
        assert_equal(api_parsing.Token, parsing.Token)

    def test_parsing_model_statements(self):
        for cls in parsing.model.Statement._statement_handlers.values():
            assert_equal(getattr(api_parsing, cls.__name__), cls)
        assert_true(not hasattr(api_parsing, 'Statement'))

    def test_parsing_model_blocks(self):
        for name in ('File', 'SettingSection', 'VariableSection', 'TestCaseSection',
                     'KeywordSection', 'CommentSection', 'TestCase', 'Keyword', 'For',
                     'If'):
            assert_equal(getattr(api_parsing, name), getattr(parsing.model, name))
        assert_true(not hasattr(api_parsing, 'Block'))

    def test_parsing_visitors(self):
        assert_equal(api_parsing.ModelVisitor, parsing.ModelVisitor)
        assert_equal(api_parsing.ModelTransformer, parsing.ModelTransformer)


class TestModelObjects(unittest.TestCase):
    """These model objects are part of the public API.

    They are only seldom needed directly and thus not exposed via the robot.api
    package. Tests just validate they are not removed accidentally.
    """

    def test_running_objects(self):
        assert_true(running.TestSuite)
        assert_true(running.TestCase)
        assert_true(running.Keyword)

    def test_result_objects(self):
        assert_true(result.TestSuite)
        assert_true(result.TestCase)
        assert_true(result.Keyword)


class TestTestSuiteBuilder(unittest.TestCase):
    # This list has paths like `/path/file.py/../file.robot` on purpose.
    # They don't work unless normalized.
    sources = [join(__file__, '../../../atest/testdata/misc', name)
               for name in ('pass_and_fail.robot', 'normal.robot')]

    def test_create_with_datasources_as_list(self):
        suite = api.TestSuiteBuilder().build(*self.sources)
        assert_equal(suite.name, 'Pass And Fail & Normal')

    def test_create_with_datasource_as_string(self):
        suite = api.TestSuiteBuilder().build(self.sources[0])
        assert_equal(suite.name, 'Pass And Fail')


if __name__ == '__main__':
    unittest.main()
| 37.351064 | 86 | 0.720308 | 432 | 3,511 | 5.571759 | 0.268519 | 0.10054 | 0.098878 | 0.059826 | 0.331533 | 0.262983 | 0.203573 | 0.107187 | 0.044038 | 0.044038 | 0 | 0.00035 | 0.185417 | 3,511 | 93 | 87 | 37.752688 | 0.841259 | 0.082882 | 0 | 0 | 0 | 0 | 0.06625 | 0.00875 | 0 | 0 | 0 | 0 | 0.491803 | 1 | 0.229508 | false | 0.04918 | 0.081967 | 0 | 0.377049 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f732e886bb6651ccc0f8d71e8ddd18c7c6dbccb9 | 266 | py | Python | Workshop/Part3/part3_sol.py | ibenemerito88/openBF_workshop | a63a6fbd1ef8528890fb1072730124e054875008 | [
"Zlib",
"Apache-2.0"
] | null | null | null | Workshop/Part3/part3_sol.py | ibenemerito88/openBF_workshop | a63a6fbd1ef8528890fb1072730124e054875008 | [
"Zlib",
"Apache-2.0"
] | null | null | null | Workshop/Part3/part3_sol.py | ibenemerito88/openBF_workshop | a63a6fbd1ef8528890fb1072730124e054875008 | [
"Zlib",
"Apache-2.0"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
import reslast
plt.close("all")
# Symmetric network
q,a,p,u,c,n,s = reslast.resu("network")
# Non-symmetric network
qn,an,pn,un,cn,nn,sn = reslast.resu("networknonsym")
plt.show() | 14.777778 | 52 | 0.733083 | 45 | 266 | 4.333333 | 0.733333 | 0.164103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 266 | 18 | 53 | 14.777778 | 0.844156 | 0.146617 | 0 | 0 | 0 | 0 | 0.102222 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f73f94c3158ed613dbc0d80b35b507f9509e5682 | 212 | py | Python | submissions/contains-duplicate/solution.py | Wattyyy/LeetCode | 13a9be056d0a0c38c2f8c8222b11dc02cb25a935 | [
"MIT"
] | null | null | null | submissions/contains-duplicate/solution.py | Wattyyy/LeetCode | 13a9be056d0a0c38c2f8c8222b11dc02cb25a935 | [
"MIT"
] | 1 | 2022-03-04T20:24:32.000Z | 2022-03-04T20:31:58.000Z | submissions/contains-duplicate/solution.py | Wattyyy/LeetCode | 13a9be056d0a0c38c2f8c8222b11dc02cb25a935 | [
"MIT"
] | null | null | null | # https://leetcode.com/problems/contains-duplicate
class Solution:
def containsDuplicate(self, nums):
hs = set()
for num in nums:
hs.add(num)
return len(hs) != len(nums)
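An equivalent, more concise variant (an alternative sketch, not the original submission) builds the set in one expression:

```python
class SolutionSetBased:
    def containsDuplicate(self, nums):
        # Duplicates collapse in a set, so the set is shorter iff one exists.
        return len(set(nums)) != len(nums)
```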
| 21.2 | 50 | 0.589623 | 26 | 212 | 4.807692 | 0.769231 | 0.096 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.287736 | 212 | 9 | 51 | 23.555556 | 0.827815 | 0.226415 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f7430bcfeacf7847fa8004d6e4bccb004f8b8467 | 164 | py | Python | manage.py | guoxianru/goodsmovie | d9f3eab8578719a97b9f4328bc9083f1caeedb3d | [
"Apache-2.0"
] | 4 | 2019-03-19T06:41:58.000Z | 2020-10-18T07:24:08.000Z | manage.py | guoxianru/goodsmovie | d9f3eab8578719a97b9f4328bc9083f1caeedb3d | [
"Apache-2.0"
] | 7 | 2020-12-16T02:18:59.000Z | 2020-12-16T02:19:01.000Z | manage.py | guoxianru/goodsmovie | d9f3eab8578719a97b9f4328bc9083f1caeedb3d | [
"Apache-2.0"
] | 2 | 2019-07-12T06:48:11.000Z | 2019-12-04T03:14:40.000Z | # coding:utf8
# author:GXR
from app import app
from flask_script import Manager
manage = Manager(app)
if __name__ == '__main__':
    # app.run()
    manage.run()
| 14.909091 | 32 | 0.689024 | 23 | 164 | 4.521739 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007576 | 0.195122 | 164 | 10 | 33 | 16.4 | 0.780303 | 0.195122 | 0 | 0 | 0 | 0 | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 |
f74b0037b3b040cd8b224b0b2fbdf1df12cde31e | 343 | py | Python | minidump/streams/CommentStreamA.py | lucasg/minidump | 18474e3221038abe866256e4e0eb255e33615110 | [
"MIT"
] | 1 | 2021-06-13T10:00:44.000Z | 2021-06-13T10:00:44.000Z | minidump/streams/CommentStreamA.py | lucasg/minidump | 18474e3221038abe866256e4e0eb255e33615110 | [
"MIT"
] | null | null | null | minidump/streams/CommentStreamA.py | lucasg/minidump | 18474e3221038abe866256e4e0eb255e33615110 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
#
# Author:
# Tamas Jos (@skelsec)
#
class CommentStreamA:
    def __init__(self):
        self.data = None

    @staticmethod
    def parse(dir, buff):
        csa = CommentStreamA()
        buff.seek(dir.Location.Rva)
        csa.data = buff.read(dir.Location.DataSize).decode()
        return csa

    def __str__(self):
        return 'CommentA: %s' % self.data | 19.055556 | 54 | 0.685131 | 46 | 343 | 4.934783 | 0.652174 | 0.070485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003497 | 0.166181 | 343 | 18 | 55 | 19.055556 | 0.79021 | 0.148688 | 0 | 0 | 0 | 0 | 0.041667 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0 | 0.090909 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
f74b6140204c5d56fa5f872e5e75c62e6ed3b64a | 701 | py | Python | database/models.py | christopherthompson81/fastapi_demo | be6f2486ce810c9574fafb506a153560d1cbcd5e | [
"MIT"
] | 5 | 2019-07-15T21:19:47.000Z | 2021-08-07T16:37:25.000Z | database/models.py | christopherthompson81/fastapi_demo | be6f2486ce810c9574fafb506a153560d1cbcd5e | [
"MIT"
] | null | null | null | database/models.py | christopherthompson81/fastapi_demo | be6f2486ce810c9574fafb506a153560d1cbcd5e | [
"MIT"
] | 1 | 2019-08-29T02:51:38.000Z | 2019-08-29T02:51:38.000Z | '''
FastAPI Demo
SQLAlchemy ORM Models
'''
# Standard Imports
# PyPi Imports
from sqlalchemy import (
    Boolean,
    Column,
    Integer,
    String
)
# Local Imports
from database.setup import Base

###############################################################################


class User(Base):
    '''ORM Models - users'''
    __tablename__ = "users"

    user_id = Column(Integer, primary_key=True, index=True)
    username = Column(String, unique=True)
    salted_password_hash = Column(String)
    first_name = Column(String)
    last_name = Column(String)
    email = Column(String, unique=True)
    active_boolean = Column(Boolean, nullable=False, default=True)
    admin_boolean = Column(Boolean, nullable=False, default=False)
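The column defaults above can be exercised with a short, self-contained sketch. The in-memory SQLite engine and the locally declared `Base` are stand-ins for illustration (the real project imports `Base` from `database.setup`):

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

# Stand-in for database.setup.Base so the sketch is self-contained.
Base = declarative_base()


class User(Base):
    '''ORM Models - users'''
    __tablename__ = "users"

    user_id = Column(Integer, primary_key=True, index=True)
    username = Column(String, unique=True)
    salted_password_hash = Column(String)
    first_name = Column(String)
    last_name = Column(String)
    email = Column(String, unique=True)
    active_boolean = Column(Boolean, nullable=False, default=True)
    admin_boolean = Column(Boolean, nullable=False, default=False)


# In-memory SQLite keeps the demo free of external state.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(username="alice", salted_password_hash="x", email="a@example.com"))
    session.commit()
    stored = session.query(User).filter_by(username="alice").one()
    # Defaults were applied on INSERT: active_boolean=True, admin_boolean=False.
    name, active, admin = stored.username, stored.active_boolean, stored.admin_boolean
```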
| 21.90625 | 79 | 0.661912 | 79 | 701 | 5.721519 | 0.518987 | 0.132743 | 0.079646 | 0.097345 | 0.176991 | 0.176991 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128388 | 701 | 31 | 80 | 22.612903 | 0.739771 | 0.141227 | 0 | 0 | 0 | 0 | 0.009843 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.058824 | 0.117647 | 0 | 0.705882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 2 |
f74cc5f2a65a80b8c188d0b2d860a923080ef38b | 1,126 | py | Python | tendytrader/numeric/get_finance.py | Volhacks-III-Supreme-Team/tendytrader | 9b91dc82f43bcc0a06d8ca6d66e4d465c78575e9 | [
"MIT"
] | null | null | null | tendytrader/numeric/get_finance.py | Volhacks-III-Supreme-Team/tendytrader | 9b91dc82f43bcc0a06d8ca6d66e4d465c78575e9 | [
"MIT"
] | 2 | 2018-09-30T04:44:34.000Z | 2018-10-03T17:51:08.000Z | tendytrader/numeric/get_finance.py | Volhacks-III-Supreme-Team/tendytrader | 9b91dc82f43bcc0a06d8ca6d66e4d465c78575e9 | [
"MIT"
] | null | null | null | import datetime
import resource
from pandas_datareader import data as pdr
import fix_yahoo_finance as yf
from tickers import test_tickers
# NOTE: resource.RLIMIT_NPROC is a resource-limit *constant* (a small int
# identifying the limit), not the process limit itself; the default below
# preserves the original code's behavior.
def get_all_stock_data(start, end, threads=int(resource.RLIMIT_NPROC * 0.25)):
    assert isinstance(start, datetime.datetime), "Error: start time must be datetime object"
    assert isinstance(end, datetime.datetime), "Error: end time must be datetime object"
    yf.pdr_override()
    data = []
    for t in test_tickers:
        data.append((t, pdr.get_data_yahoo(t, start=start, end=end, threads=threads)))
    return data


def get_stock_data(tick, start, end, threads=int(resource.RLIMIT_NPROC * 0.25)):
    assert isinstance(start, datetime.datetime), "Error: start time must be datetime object"
    assert isinstance(end, datetime.datetime), "Error: end time must be datetime object"
    yf.pdr_override()
    data = []
    if type(tick) is str:
        data.append((tick, pdr.get_data_yahoo(tick, start=start, end=end, threads=threads)))
    else:
        for t in tick:
            data.append((t, pdr.get_data_yahoo(t, start=start, end=end, threads=threads)))
    return data
| 40.214286 | 92 | 0.715808 | 167 | 1,126 | 4.706587 | 0.275449 | 0.050891 | 0.10687 | 0.091603 | 0.704835 | 0.704835 | 0.666667 | 0.666667 | 0.666667 | 0.666667 | 0 | 0.006466 | 0.175844 | 1,126 | 27 | 93 | 41.703704 | 0.840517 | 0 | 0 | 0.5 | 0 | 0 | 0.142096 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.083333 | false | 0 | 0.208333 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
f74d889061c5de72d2336ef0d7d37a36e9e9ca47 | 1,676 | py | Python | sentIA/utils/figure/__init__.py | thomas-brth/sentinel | 747bd0b9a4a9356be69aae6d6ebbfa500e845218 | [
"MIT"
] | null | null | null | sentIA/utils/figure/__init__.py | thomas-brth/sentinel | 747bd0b9a4a9356be69aae6d6ebbfa500e845218 | [
"MIT"
] | null | null | null | sentIA/utils/figure/__init__.py | thomas-brth/sentinel | 747bd0b9a4a9356be69aae6d6ebbfa500e845218 | [
"MIT"
] | null | null | null | # Plotting tools and utility functions
# Nested GridSpec : https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_nested.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-nested-py
# GridSpec : https://matplotlib.org/stable/gallery/subplots_axes_and_figures/gridspec_multicolumn.html#sphx-glr-gallery-subplots-axes-and-figures-gridspec-multicolumn-py
# colorbar : https://matplotlib.org/stable/gallery/subplots_axes_and_figures/colorbar_placement.html#sphx-glr-gallery-subplots-axes-and-figures-colorbar-placement-py
#############
## Imports ##
#############
## General imports ##
from matplotlib import pyplot as plt
from matplotlib import colors
import numpy as np
import os
###############
## Constants ##
###############
#############
## Classes ##
#############
class MidpointNormalize(colors.Normalize):
    """
    Useful object enabling to normalize a colorbar with a chosen midpoint.
    """
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        super(MidpointNormalize, self).__init__(vmin, vmax, clip)
        self.midpoint = midpoint

    def __call__(self, value, clip=None):
        x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
        return np.ma.masked_array(np.interp(value, x, y))
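For reference, a small sketch of the mapping `__call__` performs, with made-up bounds (independent of any matplotlib figure): inputs are interpolated piecewise-linearly so `vmin` maps to 0, the midpoint to 0.5, and `vmax` to 1.

```python
import numpy as np

# Same piecewise-linear mapping as MidpointNormalize.__call__,
# shown with hypothetical bounds: vmin=-2, midpoint=0, vmax=6.
vmin, midpoint, vmax = -2.0, 0.0, 6.0
values = np.array([-2.0, -1.0, 0.0, 3.0, 6.0])
normed = np.interp(values, [vmin, midpoint, vmax], [0, 0.5, 1])
# normed -> [0.0, 0.25, 0.5, 0.75, 1.0]
```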
class FigBase:
    """
    Base class for custom figures.
    """
    CREDITS = "Credit : EU, contains modified Copernicus Sentinel data, processed with custom script."

    def __init__(self, title: str, dim: tuple):
        self.title = title
        self.fig = plt.figure(figsize=dim)

    def _format(self):
        pass

    def show(self):
        pass
###############
## Functions ##
###############
def main():
    pass


if __name__ == '__main__':
    main()
else:
    print(f"Module {__name__} imported.", flush=True) | 27.032258 | 169 | 0.694511 | 213 | 1,676 | 5.267606 | 0.464789 | 0.080214 | 0.101604 | 0.117647 | 0.35205 | 0.35205 | 0.35205 | 0.291444 | 0.255793 | 0.122995 | 0 | 0.002705 | 0.117542 | 1,676 | 62 | 170 | 27.032258 | 0.755916 | 0.390811 | 0 | 0.115385 | 0 | 0 | 0.139884 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.230769 | false | 0.115385 | 0.192308 | 0 | 0.576923 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
f74dcd7dd1b925a0a8000af109ee319c3b0e23c1 | 270 | py | Python | stratascratch/Pandas-Solution/Ranking Most Active Guests.py | CAG9/SQL-Interview-Questions | 79d0855a7cab28abd8b6462273aacaf38b5d8448 | [
"MIT"
] | null | null | null | stratascratch/Pandas-Solution/Ranking Most Active Guests.py | CAG9/SQL-Interview-Questions | 79d0855a7cab28abd8b6462273aacaf38b5d8448 | [
"MIT"
] | null | null | null | stratascratch/Pandas-Solution/Ranking Most Active Guests.py | CAG9/SQL-Interview-Questions | 79d0855a7cab28abd8b6462273aacaf38b5d8448 | [
"MIT"
] | null | null | null | # Import your libraries
import pandas as pd
# Start writing code
Grouped = airbnb_contacts.groupby('id_guest').sum().reset_index().sort_values(by=['n_messages'], ascending=False)
Grouped['ranking'] = Grouped['n_messages'].rank(method='dense', ascending=False)
Grouped
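A self-contained illustration of the dense-ranking step, on made-up data standing in for the real `airbnb_contacts` table:

```python
import pandas as pd

# Hypothetical stand-in for airbnb_contacts.
airbnb = pd.DataFrame({
    'id_guest': ['a', 'a', 'b', 'c', 'c', 'c'],
    'n_messages': [2, 3, 1, 4, 4, 4],
})
grouped = (airbnb.groupby('id_guest').sum().reset_index()
           .sort_values(by=['n_messages'], ascending=False))
# 'dense' ranking: tied totals share a rank and no ranks are skipped.
grouped['ranking'] = grouped['n_messages'].rank(method='dense', ascending=False)
# grouped: c (12 messages, rank 1.0), a (5, rank 2.0), b (1, rank 3.0)
```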
| 33.75 | 114 | 0.762963 | 37 | 270 | 5.405405 | 0.783784 | 0.09 | 0.21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085185 | 270 | 7 | 115 | 38.571429 | 0.809717 | 0.148148 | 0 | 0 | 0 | 0 | 0.176211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |